Home Blog Page 3

5 Keys to Europe’s AI Second


Europe is standing at a crossroads for its AI future. Huge plans are in movement to embrace the facility of Synthetic Intelligence, with the EU AI Continent Motion Plan and the upcoming Cloud and AI Improvement Act. However to really make AI work for everybody, the roadmap should turn into clear deliverable steps.

This weblog seems to be at what it takes to construct a really AI-ready continent via 5 essential steps Europe should take to steer in AI, fostering innovation, guaranteeing safety, and empowering its workforce. Our insights are drawn from our management in AI infrastructure, networking, safety, and collaboration options and our dedication to worldwide AI rules:

  1. Construct Sturdy AI Foundations: safe, scalable, power environment friendly.
  2. Make AI Safe: cybersecurity, danger administration, world requirements.
  3. Defend Innovation: secure AI improvement, world requirements, AI fashions weaknesses analysis.
  4. Preserve Information Flowing: versatile information guidelines for AI coaching, diminished localization mandates.
  5. Empower Individuals with AI Expertise: large-scale coaching, partnerships, modernized public procurement.

1. Construct Sturdy AI Foundations

AI thrives on strong digital, bodily, and power infrastructure. Europe’s need to triple its information facilities capability within the subsequent 5 to 7 years is an important step, alongside advancing AI Factories and AI Gigafactories. This infrastructure have to be safe, dependable, and power environment friendly.

Our analysis reveals solely 7% of European organizations think about themselves AI-ready, citing infrastructure limitations, safety considerations, and expertise gaps as main boundaries. But, maximizing AI’s worth throughout the economic system calls for trusted infrastructure and strong safety.

Equally, solely 8% of EU companies really feel their infrastructure is prepared for AI, in comparison with 15% globally, and 66% acknowledge restricted scalability. That’s a giant hole. To deal with this, the EU should encourage strategic investments in new and modernized information facilities, prioritizing safety, resilience, and power effectivity.

AI wants connectivity. Incentivizing broadband investments, implementing the Gigabit Infrastructure Act swiftly, and streamlining regulatory necessities will expedite information heart building and high-speed broadband deployment throughout Europe. The EU must also mandate  substitute or mitigation of out of date community belongings to reinforce safety and efficiency.

Additional, Europe urgently must strengthen its infrastructure via versatile funding mechanisms that help each sovereign and world public cloud providers. To get entry to the world’s finest expertise, a mixture of world and native options will give Europe the pliability it wants. European success on AI hinges on its potential to entry the most effective applied sciences.

Lastly, we have to speed up power acquisition whereas enhancing the effectivity of expertise infrastructures supporting AI. This consists of selling grid modernization, guaranteeing safety in underlying management networks (together with by addressing the fragmented nationwide NIS2 implementation), and inspiring energy-efficient ICT tools via schemes just like the Sustainability Ranking Scheme for Information Facilities, with fiscal incentives.

2. Make AI Safe, From High to Backside

Latest developments have confirmed that even probably the most refined AI fashions may be tricked or misused. Attackers are more and more focusing on AI infrastructure and the distinctive vulnerabilities inside AI deployment environments, from provide chain compromises to immediate injection and information poisoning.

We have to shield the tech that runs AI, and the AI programs themselves, from how they’re constructed to how they’re used. As AI turns into extra pervasive, it additionally broadens the risk floor which may be exploited.

The EU ought to combine robust cybersecurity insurance policies into AI funding initiatives, requiring strong danger administration. Upholding internationally acknowledged requirements just like the NIST AI Threat Administration Framework, OWASP High 10, and MITRE ATLAS is vital to strengthening AI safety.

AI can be our strongest protection. AI-enabled cybersecurity options can analyze huge quantities of knowledge, detect anomalies, and automate responses at machine pace. The EU ought to promote these cybersecurity instruments, guaranteeing they’re not thought of high-risk beneath the AI Act, thereby fostering innovation on this vital space.

Leveraging generative AI (GenAI) to reinforce cybersecurity and broaden the cyber workforce is one other important step as these instruments democratize entry to superior safety, simplifying complicated duties and serving to handle the scarcity of cybersecurity professionals.

Lastly, selling regulatory alignment and mutual recognition of safety certifications and conformity evaluation will streamline compliance and speed up the deployment of significant AI-powered safety options.

3. Defend Innovation to Spark New Concepts

New AI concepts received’t flourish with out belief and security. The fast improvement and deployment of open-source fashions, whereas accelerating AI adoption, additionally introduce potential dangers, together with information leakage, information poisoning, and insecure outputs. The instance of the DeepSeek R1 mannequin, which confirmed 100% success in jailbreaking makes an attempt in Cisco’s analysis, underscores that even high-performing fashions can have important safety flaws.

It’s essential to collaborate with the personal sector on pre- and post-deployment testing of AI fashions, encouraging automated crimson teaming and steady validation. Europe ought to again expertise options that mechanically take a look at AI for weaknesses, and cooperate with worldwide counterparts to help the uptake of worldwide acknowledged requirements and practices in AI cybersecurity and danger administration. Selling devices just like the NIST Adversarial Machine Studying (AML) taxonomy will assist determine and mitigate AI-associated dangers.

Tasking ENISA to judge weaknesses in publicly accessible AI fashions, publish findings, and promote market-based options will strengthen the AI provide chain. These efforts require robust cooperation between authorities businesses and personal sector, as expertise develops quickly and knowledge sharing will turn into as necessary as technical necessities.

4. Preserve Information Flowing and Requirements Clear

AI learns from information. To unleash Europe’s AI potential, particularly in areas like healthcare, information guidelines must be versatile sufficient for AI coaching, whereas defending privateness. Guaranteeing that European information guidelines are tailored to the AI revolution (e.g. AI coaching) will likely be key.

Information localization mandates usually decelerate innovation, price extra, and will even create new safety dangers. With that in thoughts, the EU ought to strengthen its worldwide information move coverage by pursuing multilateral agreements for information transfers. These agreements ought to set up mutual recognition of nations with strong information safety frameworks, in distinction with present guidelines which have failed to offer long run authorized certainty, hindering enterprise and innovation.

And on the subject of how AI works, we should always give attention to internationally acknowledged requirements. This implies everybody speaks the identical tech language, making it simpler for companies to innovate and for AI to work seamlessly throughout borders. This strategy simplifies compliance for companies and strengthens Europe’s place as a worldwide chief.

5. Empower Individuals with AI Expertise

AI is essentially reshaping industries and labour markets, so we’d like to ensure everybody has the abilities to thrive. Our analysis reveals solely 9% of EU companies really feel they’ve the appropriate expertise for AI, a stark distinction to 21% within the US.

We have to put money into coaching packages that educate AI expertise. Partnerships between governments and corporations are very important. The EU should additionally encourage and enshrine skills-based hiring practices in each the personal and public sectors. Cisco is doing its half, aiming to coach 1.5 million Europeans in digital expertise by 2030, and coaching 5,000 instructors in AI and information science.

It’s not nearly AI professionals – 92% of ICT jobs will likely be considerably or reasonably reworked by GenAI.  Which is why Cisco, along with eight main world firms and advisers established the AI-enabled ICT Workforce Consortium, analyzing how job profiles will likely be impacted by AI, sharing insights and detailed coaching choices to assist people reskill and upskill.

Lastly, selling digital transformation within the public sector is not going to solely enhance service supply and safety but in addition foster AI adoption and assist break down information silos. Crucially, the EU ought to modernize public procurement processes to align with rising technological options, and chorus from introducing broad European choice clauses as these usually restrict selection, hinder innovation, and infrequently ship the anticipated long-term native development.

Can Europe turn into an AI Continent?

Europe has the chance to construct an AI future that’s sensible, but in addition safe, truthful, and inclusive. It means considering large, investing correctly, and fostering collaboration between policymakers, business, and academia. By specializing in robust foundations, strong safety, innovation, open information flows, and a talented workforce, Europe can actually turn into an AI Continent.


Cisco is the worldwide chief in networking, safety, and collaboration options. We’re dedicated to supporting Europe’s AI journey. As a signatory of the EU AI Pact and a proponent of worldwide AI rules, we perceive that maximizing AI’s worth throughout the economic system calls for trusted infrastructure and strong safety. Learn our detailed suggestions for the AI Continent Motion Plan and Cloud and AI Improvement Act right here.

Share:

IBM: Price of U.S. information breach reaches all-time excessive and shadow AI is not serving to



From a monetary standpoint, whereas the worldwide common price of an information breach fell to $4.44 million, the common U.S. price of a breach elevated, reaching a file $10.22 million. Bigger regulatory fines and better detection and escalation prices within the U.S. contributed to this surge, IBM said. 

From an business perspective, healthcare breaches stay the costliest for the 14th consecutive yr, costing a mean of $7.42 million.

“Attackers proceed to worth and goal the business’s affected person private identification data (PII), which can be utilized for identification theft, insurance coverage fraud and different monetary crimes,” IBM said. “Healthcare breaches took the longest to determine and include at 279 days. That’s greater than 5 weeks longer than the worldwide common.”

Different fascinating findings from the examine embody:

  • The impact of storage location: “30% of all breaches concerned information distributed throughout a number of environments, down from 40% final yr. In the meantime, breaches involving information saved on premises elevated sharply to twenty-eight% from 20% final yr. Nevertheless, prices for every class differed. Information breaches involving a number of environments price a mean $5.05 million, whereas information breached on premises price a mean $4.01 million,” IBM said.
  • Phishing dominates amongst preliminary assault vectors: “Phishing changed stolen credentials this yr as the most typical preliminary vector (16%) attackers used to achieve entry to programs. At a mean $4.8 million per breach, it was additionally one of many costliest. In the meantime, provide chain compromise surged to change into the second most prevalent assault vector (15%), and second costliest ($4.91 million) after malicious insider threats ($4.91 million).”
  • The price of shadow AI: 20% of respondents stated they suffered a breach on account of safety incidents involving shadow AI. “For organizations with excessive ranges of shadow AI, these breaches added $670,000 to the common breach price ticket in contrast to people who had low ranges of shadow AI or none. These incidents additionally resulted in additional private identifiable data (65%) and mental property (40%) information being compromised. And that information was most frequently saved throughout a number of environments, revealing only one unmonitored AI system can result in widespread publicity. The swift rise of shadow AI has displaced safety abilities shortages as one of many high three expensive breach components tracked by this report,” IBM said.
  • Time to determine and include a breach decreased: “The imply time organizations took to determine and include a breach fell to 241 days, reaching a nine-year low and persevering with a downward development that began after a 287-day peak in 2021,” IBM said. “As famous in final yr’s report, safety groups proceed to enhance their imply time to determine (MTTI) and imply time to include (MTTC) with the assistance of AI-driven and automation-driven defenses.”

When it comes to suggestions, IBM emphasised identification and entry administration (IAM):

“Fortifying identification safety with the assistance of AI and automation can enhance IAM with out overburdening chronically understaffed safety groups. And as AI brokers start to play a bigger position in organizational operations, the identical rigor have to be utilized to defending agent identities as to defending human identities. Identical to human customers, AI brokers more and more depend on credentials to entry programs and carry out duties. So, it’s important to implement robust operational controls, or providers that may allow you to achieve this, and keep visibility into all non-human identification (NHI) exercise. Organizations should have the ability to distinguish between NHIs utilizing managed (vaulted) credentials and people utilizing unmanaged credentials.”

Why I Don’t Belief My Children’ Apps – The Hidden Cell Privateness Dangers Mother and father Ought to Know


We’re in an period the place dad and mom like me have grown up with smartphones. My dad and mom, as a lot as I beloved them, have been what we’d seek advice from as ‘technologically challenged’.  I typically had to assist them navigate the digital world, instructing them the best way to spot phishing emails or PayPal scams. 

Now, as a dad or mum myself, I discover the roles reversed.  It’s my job to guard my children not simply from the hazards of the bodily world, however from the cellular app privateness dangers for youths. Most children don’t take into consideration issues like information assortment, on-line monitoring, predatory promoting or extreme permissions. They only need to play video games, chat with associates and have enjoyable. That’s why it’s as much as us as dad and mom and guardians to remain vigilant about kids’s cellular app safety and privateness.

Why I Don’t Belief My Children’ Apps – The Hidden Cell Privateness Dangers Mother and father Ought to Know

What Knowledge Are Apps Actually Accumulating?

Each the Apple App Retailer and Google Play Retailer require builders to reveal the sorts of knowledge their apps acquire. These Privateness Vitamin Labels and Knowledge Security Labels are supposed to present transparency about information assortment practices and can provide dad and mom higher perception about cellular app privateness for youngsters.

Play Retailer Knowledge Security Label

For apps designed for youths, the foundations are imagined to be stricter. In response to the Google Play Households Coverage and Apple App Evaluate tips, apps that focus on kids aren’t supposed to gather device-specific or user-specific information.  These necessities are strengthened by digital privateness legal guidelines similar to Kids’s On-line Privateness Safety Act (COPPA) in america and the Normal Knowledge Safety Regulation (GDPR) in Europe.

However in actuality, it’s not all the time that straightforward. Take Roblox, some of the well-liked children apps on the planet. Roblox collects a spread of knowledge, together with voice recordings, private info and location information. Roblox claims to serve a basic viewers, not simply children. This technicality lets the app sidestep among the stricter guidelines round COPPA compliance, despite the fact that kids make up a big a part of its consumer base. 

The Florida Lawyer Normal just lately subpoenaed Roblox as a part of an investigation into how the platform protects kids from on-line exploitation. The subpoena geared toward strengthening digital safeguards for minors consists of paperwork associated to Roblox’ information assortment and processing practices.

Knowledge Assortment Declarations for Roblox on Android (prime) and iOS (backside)

Can We Belief App Retailer Privateness Labels?

Each the Google Play Retailer and the Apple App Retailer present perception into what information an app collects, the way it’s used, and who it’s shared with. However right here’s the catch: these labels typically don’t inform the entire story about children app information assortment.

Most builders solely disclose the info their very own app collects, not the info collected by third-party parts. These third-party libraries, additionally referred to as Software program Improvement Kits (SDKs) or dependencies, are sometimes used for analytics, in-app promoting or further options and save time for builders.  Sadly, many builders don’t totally perceive how these SDKs deal with information, which creates critical privateness dangers in children apps. 

One of many largest cellular information breaches in historical past provides an ideal instance of this. In early 2025, Gravy Analytics  suffered a large breach, exposing tens of thousands and thousands of consumer information on the darkish internet. Hundreds of apps have been affected, together with among the hottest apps accessible similar to Tinder and Sweet Crush.  Lots of the app builders had no thought they have been even related to Gravy Analytics — they have been merely utilizing a third-party advert library to monetize their apps.  However behind the scenes, this promoting SDK collected cellular app telemetry and private information. 

In the present day, among the similar advert libraries are nonetheless current in children’ apps. In actual fact, some of the well-liked advert libraries just lately eliminated COPPA compliance from its Android library, however dozens of childrens’ apps nonetheless use it immediately. That’s why dad and mom needs to be cautious when reviewing app retailer declarations and be cautious of hidden third-party information sharing.

Shedding Gentle on Cell Privateness Dangers

To gauge the accuracy of the Knowledge Security labels,  I made a decision to run a real-world take a look at by inspecting app site visitors.  What I discovered was troubling to say the least.

I downloaded a children app that explicitly claimed it didn’t acquire ANY information and didn’t share information with third events as proven under.

However after I analyzed  the community site visitors, I discovered one thing regarding: each 30 seconds, the app despatched an encrypted message containing 7,448 characters to the developer’s server.  

That’s a number of info for an app that supposedly collects nothing. What’s being transmitted?  Why is it encrypted?  We will’t say for certain, however we all know that information is being decrypted on the developer finish and used for one thing.

This sort of hidden cellular information assortment highlights why dad and mom have to be vigilant about cellular app privateness for youngsters. As we’ve seen, not all apps adhere to their information assortment statements and privateness insurance policies. 

What About Roblox?

To match, I ran the identical evaluation on the hit sport Roblox. The Roblox app itemizing states it collects private info, approximate location and in-app buy information. Once I created an account as a minor, I noticed some variations. The community site visitors was largely restricted to session-related information app telemetry, foreign money standing and stock updates. Primarily based on this habits, we see Roblox has taken some measures to try to safeguard children’ app privateness.

In July, Roblox launched new security measures for teenagers, together with AI-powered age estimation know-how and monitored Trusted Connections conversations to higher defend younger customers on the platform.

How Mother and father Can Defend Their Children from Cell Privateness Dangers 

1. Allow Parental Controls

Parental controls allow you to filter content material, block inappropriate apps and prohibit downloads.

2. Examine App Permissions

Each perform on a telephone, from the digicam to location providers, requires permission. Apps ought to solely request permissions to entry functionalities which can be completely crucial for the app to work correctly.  For instance, a easy Sudoku puzzle sport shouldn’t ask for exact location or entry to file audio.

Listed below are permissions I all the time block for youths’ apps: 

  • Digicam entry
  • Microphone entry
  • Contacts entry
  • Exact location*

*Apps within the ‘Children’ part of the Play Retailer are prohibited from gathering this information

3. Be Cautious of “Free” Apps

The app shops are inundated with children sport apps that look  free, however drive income by  adverts, microtransactions or subscriptions. ‘in-app purchases’.  Some apps even try and lock the consumer right into a month-to-month/yearly subscription. Take into account paying upfront for paid apps to keep away from these misleading in-app buy practices.

4. Look ahead to Misleading Advertisements

Many apps are full of misleading adverts disguised as video games. These adverts typically trick children into enjoying, then redirect them to the app retailer. Typically the ‘shut’ button is hidden or too small to identify. For instance, this advert for the Township sport popped up on the display screen with none warning, making it virtually really feel prefer it was part of the unique sport.  The choice to exit the advert was crammed within the nook, virtually not possible to see.  When you lastly see it and faucet it, the app retailer pops with the choice to put in the sport!

5. Take into account Subscription Companies Like Apple Arcade

In my residence, I’ve inspired my younger children to make use of Apple Arcade as a result of it provides  ad-free video games with no in-app purchases.  On Android, the Google Play Children tab provides some safer choices in addition to paid apps, although dad and mom ought to nonetheless vet every app rigorously.

6. Keep away from Social Options in Children Apps

Social apps for youngsters are dangerous attributable to predatory habits and on-line security dangers. Apps like Roblox or older platforms like Membership Penguin embody chat options that may expose children to strangers.  In my home, apps with social limits are strictly off limits till my kids are sufficiently old to grasp the potential risks.

The Backside Line: Defending Children’ Privateness Is As much as Us

Our children are among the most weak customers within the digital world. They rely upon us to guard their privateness, security, and on-line safety. Which means going past trusting app retailer labels, questioning what apps are actually doing behind the scene and refusing to just accept shady information practices as regular.

We should proceed to push for stronger cellular app privateness protections for youngsters and demand stricter enforcement of the insurance policies that exist already. In relation to children apps and information assortment, you possibly can by no means be too cautious.

On the office entrance, whether or not you’re a developer, safety skilled, privateness specialist or enterprise mobility supervisor, NowSecure options detect hidden information leaks, dangerous SDKs, extreme permissions, privateness points and compliance gaps, together with violations of COPPA, HIPAA and GDPR rules. 

In a world the place even “protected” children apps can secretly acquire information, NowSecure provides you the instruments to assess the apps you construct and vet the third-party apps you employ totally and act with confidence. Contact NowSecure to discover how we may also help defend your cellular software’s privateness posture.



set up NATIVE iOS mapbox navigation SDK in EXPO


Is there any means to make use of the native iOS mapbox navigation sdk in an expo module WITHOUT utilizing a 3rd celebration expo or react-native package deal?

I am following this expo information on Wrap third-party native libraries , attempting to make use of the mapbox ios navigation sdk, which appears to be solely out there through SPM, not Cocoapods (right here mapbox says “CocoaPods help is at present in improvement and can be added in future variations.”).

I’ve arrange the ~/.netrc file with my non-public mapbox key.
I’ve additionally created a config plugin that efficiently provides all vital values to the Data.plist recordsdata as instructed within the Mission Configuration half.

That is my config plugin withMapboxToken.js:

const { withInfoPlist } = require('@expo/config-plugins');

const withMapboxToken = (config) => {
  return withInfoPlist(config, (config) => {
    // Add Mapbox entry token
    config.modResults.MBXAccessToken = course of.env.MAPBOX_PUBLIC_TOKEN;
    
    // Add location permissions
    config.modResults.NSLocationWhenInUseUsageDescription = 
      "Reveals your location on the map and helps enhance the map.";
    
    config.modResults.NSLocationAlwaysAndWhenInUseUsageDescription = 
      "Reveals your location on the map and helps enhance the map.";

    // Add background modes for audio and site updates
    if (!config.modResults.UIBackgroundModes) {
      config.modResults.UIBackgroundModes = [];
    }
    
    if (!config.modResults.UIBackgroundModes.consists of('audio')) {
      config.modResults.UIBackgroundModes.push('audio');
    }
    
    if (!config.modResults.UIBackgroundModes.consists of('location')) {
      config.modResults.UIBackgroundModes.push('location');
    }

    console.log('✅ Mapbox token and permissions configured');
    return config;
  });
};

module.exports = withMapboxToken; 

There appears to be no downside regarding the linking/bridging between native iOS and XCode, as I efficiently managed to jot down a Easy “Hey World” view in Swift, which reveals up within the expo improvement construct.
Now, when attempting to put in the SDK, this appears to be the half the place I fail.
I first tried all methods I may assume (handbook set up in XCode, or utilizing a Bundle.swift file) of to first MANUALLY set up the SDK and have a improvement construct working on my iPhone with none issues.

Remember the fact that I am utilizing expo managed workflow, so the ios/ and android/ folders are mechanically generated and should not be manually modified, I even have them in my .gitignore. I simply needed to strive manually putting in the SDK first, earlier than attempting to automate this for instance through a config plugin or a script that I run after prebuild and earlier than creating a brand new improvement construct.

When attempting to run npx expo run:ios --device , I get this error within the output:

  1 | import ExpoModulesCore
> 2 | import MapboxDirections
    |        ^ no such module 'MapboxDirections'
  3 | import MapboxNavigationCore
  4 | import MapboxNavigationUIKit
  5 | import UIKit

› Compiling expo-linking Pods/ExpoLinking » ExpoLinking-dummy.m

› 1 error(s), and 1 warning(s)

CommandError: Didn't construct iOS undertaking. "xcodebuild" exited with error code 65.

So it appears to me the Swift package deal(s) haven’t been accurately put in.
And that is what the “Bundle Dependencies” a part of my XCode Mission Navigator appears to be like like:
8

SFL to Ship Fast Deployment AISSat-4 for Norway’s Increasing Maritime Surveillance


SFL to Ship Fast Deployment AISSat-4 for Norway’s Increasing Maritime Surveillance

by Clarence Oxford

Los Angeles CA (SPX) Jul 30, 2025






SFL Missions Inc. has secured a contract from the Norwegian House Company (NOSA) to develop AISSat-4, a ship-tracking nanosatellite scheduled for launch inside a 12 months. The mission goals to bolster Norway’s maritime monitoring community by including capability as present satellites strategy the tip of their service lives.



Constructed on SFL’s confirmed SPARTAN 6U platform, AISSat-4 will carry a single Computerized Identification System (AIS) receiver developed by Kongsberg Seatex of Trondheim. The SPARTAN bus has a robust monitor file, with 18 industrial satellites launched so far.



SFL’s vertically built-in construction permits fast growth, testing, and deployment. “We’ve in depth expertise in implementing AIS missions, and due to this fact we have now the experience and design heritage wanted to implement the AISSat-4 mission on a brief schedule,” mentioned Dr. Robert E. Zee, SFL Missions Director and CEO.



The AISSat-4 satellite tv for pc will seize as much as 1.5 million distinctive AIS indicators each day, even in busy maritime corridors. It should reinforce Norway’s Blue Justice Ocean Surveillance Program, which permits nations to share space-based AIS knowledge to fight unlawful fishing and maritime crime globally.



Norway operates one of many world’s most superior space-based marine surveillance methods by NOSA and the Norwegian Coastal Administration. “The societal advantages we achieve from accumulating AIS data from satellites is critical. It’s due to this fact essential that we guarantee the upkeep of this functionality,” mentioned Coastal Administration Director Einar Vik Arset.



The Norwegian house AIS fleet started with AISSat-1 in 2010, which collected knowledge for 12 years. It was adopted by AISSat-2, NorSat-1 and -2 in 2017, NorSat-3 in 2021, NorSat-TD in 2023, and NorSat-4 in early 2025. These satellites built-in more and more superior applied sciences together with radar detectors, optical cameras, laser communications, and fifth-generation AIS receivers.



The upcoming AISSat-4 satellite tv for pc will focus solely on AIS knowledge assortment, persevering with the legacy of space-based maritime situational consciousness initiated over 15 years in the past in partnership with SFL.


Associated Hyperlinks

SFL Missions

Naval Warfare within the twenty first Century