
CNTXT AI Launches Munsit: The Most Accurate Arabic Speech Recognition System Ever Built


In a defining moment for Arabic-language artificial intelligence, CNTXT AI has unveiled Munsit, a next-generation Arabic speech recognition model that is not only the most accurate ever created for Arabic, but one that decisively outperforms global giants like OpenAI, Meta, Microsoft, and ElevenLabs on standard benchmarks. Developed in the UAE and tailored for Arabic from the ground up, Munsit represents a powerful step forward in what CNTXT calls “sovereign AI”: technology built in the region, for the region, yet with global competitiveness.

The scientific foundations of this achievement are laid out in the team’s newly published paper, Advancing Arabic Speech Recognition Through Large-Scale Weakly Supervised Learning, which introduces a scalable, data-efficient training method that addresses the long-standing scarcity of labeled Arabic speech data. That method, weakly supervised learning, has enabled the team to build a system that sets a new bar for transcription quality across both Modern Standard Arabic (MSA) and more than 25 regional dialects.

Overcoming the Data Drought in Arabic ASR

Arabic, despite being one of the most widely spoken languages globally and an official language of the United Nations, has long been considered a low-resource language in the field of speech recognition. This stems from both its morphological complexity and a shortage of large, diverse, labeled speech datasets. Unlike English, which benefits from countless hours of manually transcribed audio data, Arabic’s dialectal richness and fragmented digital presence have posed significant challenges for building robust automatic speech recognition (ASR) systems.

Rather than waiting for the slow and expensive process of manual transcription to catch up, CNTXT AI pursued a radically more scalable path: weak supervision. Their approach began with a massive corpus of over 30,000 hours of unlabeled Arabic audio collected from diverse sources. Through a custom-built data processing pipeline, this raw audio was cleaned, segmented, and automatically labeled to yield a high-quality 15,000-hour training dataset, one of the largest and most representative Arabic speech corpora ever assembled.

This process did not rely on human annotation. Instead, CNTXT developed a multi-stage system for generating, evaluating, and filtering hypotheses from multiple ASR models. The competing transcriptions were cross-compared using Levenshtein distance to select the most consistent hypotheses, then passed through a language model to evaluate their grammatical plausibility. Segments that failed to meet defined quality thresholds were discarded, ensuring that even without human verification, the training data remained reliable. The team refined this pipeline over several iterations, each time improving label accuracy by retraining the ASR system itself and feeding it back into the labeling process.
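To make the filtering step concrete, here is a minimal sketch of consensus-based weak labeling in Python. It is illustrative only, not CNTXT’s actual pipeline: the word-level tokenization, the 0.2 disagreement threshold, and the deferred language-model check are assumptions.

def levenshtein(a, b):
    # Word-level edit distance via dynamic programming.
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        cur = [i]
        for j, wb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (wa != wb)))  # substitution
        prev = cur
    return prev[-1]

def consensus_label(hypotheses, max_disagreement=0.2):
    # Pick the hypothesis most consistent with the others, or reject the segment.
    if len(hypotheses) < 2:
        return None
    tokenized = [h.split() for h in hypotheses]
    avg_dist = []
    for i, hi in enumerate(tokenized):
        dists = [levenshtein(hi, hj) / max(len(hi), len(hj), 1)
                 for j, hj in enumerate(tokenized) if j != i]
        avg_dist.append(sum(dists) / len(dists))
    best = min(range(len(avg_dist)), key=avg_dist.__getitem__)
    if avg_dist[best] > max_disagreement:
        return None  # discard: the models disagree too much to trust any label
    # Surviving labels would then be scored for grammatical plausibility by a
    # language model before entering the training set.
    return hypotheses[best]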

Powering Munsit: The Conformer Architecture

At the heart of Munsit is the Conformer model, a hybrid neural network architecture that combines the local sensitivity of convolutional layers with the global sequence modeling capabilities of transformers. This design makes the Conformer particularly adept at handling the nuances of spoken language, where both long-range dependencies (such as sentence structure) and fine-grained phonetic details are crucial.

CNTXT AI implemented a large variant of the Conformer, training it from scratch using 80-channel mel-spectrograms as input. The model consists of 18 layers and comprises roughly 121 million parameters. Training was conducted on a high-performance cluster using eight NVIDIA A100 GPUs with bfloat16 precision, allowing for efficient handling of large batch sizes and high-dimensional feature spaces. To handle tokenization of Arabic’s morphologically rich structure, the team used a SentencePiece tokenizer trained specifically on their custom corpus, resulting in a vocabulary of 1,024 subword units.
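As a rough illustration of these two ingredients, the sketch below trains a 1,024-token SentencePiece vocabulary and instantiates an 18-layer Conformer encoder over 80-channel features using off-the-shelf libraries. The file path, number of attention heads, feed-forward width, and convolution kernel size are placeholders, not values from the paper.

import sentencepiece as spm
import torchaudio

# Train a subword tokenizer with a 1,024-token vocabulary on the transcript text.
# "arabic_transcripts.txt" is a placeholder path, not a released corpus.
spm.SentencePieceTrainer.train(
    input="arabic_transcripts.txt",
    model_prefix="munsit_sp",
    vocab_size=1024,
)

# An 18-layer Conformer encoder over 80-channel mel-spectrogram frames.
encoder = torchaudio.models.Conformer(
    input_dim=80,
    num_heads=8,
    ffn_dim=2048,
    num_layers=18,
    depthwise_conv_kernel_size=31,
)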

Unlike conventional supervised ASR training, which typically requires each audio clip to be paired with a carefully transcribed label, CNTXT’s method operated entirely on weak labels. These labels, although noisier than human-verified ones, were optimized through a feedback loop that prioritized consensus, grammatical coherence, and lexical plausibility. The model was trained using the Connectionist Temporal Classification (CTC) loss function, which is well suited to unaligned sequence modeling and essential for speech recognition tasks where the timing of spoken words is variable and unpredictable.
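In a framework such as PyTorch, wiring weak labels into CTC training looks roughly like the sketch below; the tensor shapes, batch size, and blank index are illustrative assumptions rather than details from the paper.

import torch
import torch.nn as nn

ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)

# log_probs: (time, batch, classes) log-softmax outputs from the acoustic model,
# covering 1,024 subword units plus the CTC blank symbol.
log_probs = torch.randn(500, 8, 1025, requires_grad=True).log_softmax(dim=-1)
targets = torch.randint(1, 1025, (8, 120), dtype=torch.long)     # weak-label token IDs
input_lengths = torch.full((8,), 500, dtype=torch.long)          # frames per utterance
target_lengths = torch.randint(50, 121, (8,), dtype=torch.long)  # tokens per utterance

# CTC sums over all possible alignments, so no frame-level timing is required.
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
loss.backward()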

Dominating the Benchmarks

The results speak for themselves. Munsit was tested against leading open-source and commercial ASR models on six benchmark Arabic datasets: SADA, Common Voice 18.0, MASC (clean and noisy), MGB-2, and Casablanca. These datasets collectively span dozens of dialects and accents across the Arab world, from Saudi Arabia to Morocco.

Across all benchmarks, Munsit-1 achieved an average Word Error Rate (WER) of 26.68 and a Character Error Rate (CER) of 10.05. By comparison, the best-performing version of OpenAI’s Whisper recorded an average WER of 36.86 and a CER of 17.21. Meta’s SeamlessM4T, another state-of-the-art multilingual model, came in even higher. Munsit outperformed every other system on both clean and noisy data, and demonstrated particularly strong robustness in noisy conditions, a critical factor for real-world applications like call centers and public services.

The gap was equally stark against proprietary systems. Munsit outperformed Microsoft Azure’s Arabic ASR models, ElevenLabs Scribe, and even OpenAI’s GPT-4o transcribe feature. These results are not marginal gains: they represent an average relative improvement of 23.19% in WER and 24.78% in CER over the strongest open baseline, establishing Munsit as the clear leader in Arabic speech recognition.
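For readers unfamiliar with the metrics, WER and CER are edit-distance-based error rates, and a relative improvement compares two such rates. The snippet below is a small sketch using the open-source jiwer package, with made-up strings rather than benchmark data.

from jiwer import wer, cer

# Illustrative strings, not benchmark data.
reference = "مرحبا بكم في الإمارات"
hypothesis = "مرحبا بكم في الامارات"

print(wer(reference, hypothesis))  # fraction of word-level edits needed
print(cer(reference, hypothesis))  # fraction of character-level edits needed

# Relative improvement between two error rates (lower is better), mirroring how
# the percentages above compare Munsit against a baseline system.
def relative_improvement(baseline_err, system_err):
    return 100.0 * (baseline_err - system_err) / baseline_err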

A Platform for the Future of Arabic Voice AI

While Munsit-1 is already transforming the possibilities for transcription, subtitling, and customer support in Arabic-speaking markets, CNTXT AI sees this launch as just the beginning. The company envisions a full suite of Arabic-language voice technologies, including text-to-speech, voice assistants, and real-time translation systems, all grounded in sovereign infrastructure and regionally relevant AI.

“Munsit is more than just a breakthrough in speech recognition,” said Mohammad Abu Sheikh, CEO of CNTXT AI. “It’s a declaration that Arabic belongs at the forefront of global AI. We’ve proven that world-class AI doesn’t have to be imported; it can be built here, in Arabic, for Arabic.”

With the rise of region-specific models like Munsit, the AI industry is entering a new era, one in which linguistic and cultural relevance are not sacrificed in the pursuit of technical excellence. In fact, with Munsit, CNTXT AI has shown that they are one and the same.

Firebase gRPC-Core Compilation Error on Xcode 16.3 with Flutter iOS App


I’m working on a Flutter project that uses Firebase for iOS. My current setup is as follows:

macOS Version: macOS Sequoia (15.x)
Xcode Version: Xcode 16.3
Flutter Version: 3.x
Firebase SDK Version: 10.25.0 (as resolved by firebase_core)
gRPC-Core Version: 1.62.5 (as resolved by CocoaPods)
When I try to build my Flutter app for the iOS simulator, the build fails with the following error:

Parse Issue (Xcode): A template argument list is expected after a name prefixed by the template keyword /Users/nlay/Documents/ios/Pods/gRPC-Core/src/core/lib/promise/detail/basic_seq.h:102:37

What I’ve Tried So Far:
Cleaning and Reinstalling Pods:

cd ios
rm -rf Pods Podfile.lock build Runner.xcworkspace
pod install --repo-update
cd ..
flutter clean
flutter run

This didn’t resolve the issue.

Modifying the Podfile:

I added the following configurations in the post_install block:

config.build_settings['CLANG_CXX_LANGUAGE_STANDARD'] = 'c++17'
config.build_settings['CLANG_CXX_LIBRARY'] = 'libc++'
config.build_settings['OTHER_CPLUSPLUSFLAGS'] = '-std=c++17'
config.build_settings['EXCLUDED_ARCHS[sdk=iphonesimulator*]'] = 'arm64'
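For context, these settings sit inside the standard Flutter post_install hook in ios/Podfile; the full block in my file looks roughly like this (the surrounding loop is the stock Flutter Podfile template):

post_install do |installer|
  installer.pods_project.targets.each do |target|
    flutter_additional_ios_build_settings(target)
    target.build_configurations.each do |config|
      config.build_settings['CLANG_CXX_LANGUAGE_STANDARD'] = 'c++17'
      config.build_settings['CLANG_CXX_LIBRARY'] = 'libc++'
      config.build_settings['OTHER_CPLUSPLUSFLAGS'] = '-std=c++17'
      config.build_settings['EXCLUDED_ARCHS[sdk=iphonesimulator*]'] = 'arm64'
    end
  end
end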

Despite these changes, the error persists.
Downgrading Firebase SDK:
I attempted to downgrade Firebase to older versions (e.g., firebase_core: 2.15.1), but the issue remains.

Checking Xcode Compatibility:
I understand that Firebase and gRPC-Core are known to have issues with Xcode 15+ and 16+. However, I can’t downgrade to Xcode 14.3 because it’s not supported on macOS Sequoia (15.x).

My Current Constraints:
I’m unable to downgrade to Xcode 14.3 due to macOS Sequoia’s requirements.
I need to use Firebase in my project, so removing it is not an option.
I’ve already tried all known workarounds, including cleaning the CocoaPods cache, modifying the Podfile, and downgrading dependencies.

Questions:
Is there a specific combination of Firebase and gRPC-Core versions that works with Xcode 16.3?
Are there any patches or workarounds for this gRPC-Core compilation issue?
Should I consider using a CI/CD service or a different macOS version to build my app? If so, what would you recommend?
Is there an ETA for Firebase/gRPC to release a version compatible with Xcode 16+?
Any help or guidance would be greatly appreciated. Thanks!

Light-powered robot climbs tracks like tiny cable car


Cable cars are certainly useful for transporting cargo up steep mountain slopes, but what if you want to do the same kind of thing on a much smaller scale? Well, you could try using a tiny new light-powered robot, which is capable of carrying items up thin mid-air tracks.

Developed by Assoc. Prof. Jie Yin and colleagues at North Carolina State University, the “robot” is actually just a looped ribbon of light-sensitive liquid crystal elastomer. That ribbon has numerous twists in it, making it look a bit like a spiraled rotini noodle that’s been formed into a ring.

When the robot is suspended on a horizontal or diagonally sloping track – such as a wire or thread – it is positioned so that the track runs through two or three consecutive twists in the ribbon. The rest of the bot hangs beneath the track. The cargo item in turn hangs from the bottom of the looped robot.

Upon being exposed to infrared light emitted from an overhead source, the section of elastomer located closest to that source (the top section, through which the track runs) responds by contracting. As it contracts it also rolls, forming an auger-like screw-drive mechanism.

That mechanism not only pulls the robot along the track, it also continuously moves light-exposed elastomer away from the light source while simultaneously drawing light-deprived elastomer up into the light. In this fashion, the bot can indefinitely make its way along the track, as long as the light source persists.

An illustration of the robot (right) alongside its alpine inspiration

Fangji Qi, NC State University

In lab tests conducted so far, different versions of the robot have been able to move along both straight and curved tracks ranging in thickness from the width of a human hair to the width of a drinking straw. The bots could also make their way over track obstacles such as knots, climb slopes as steep as 80 degrees, and carry cargo over 12 times their own weight.

“We’re now looking at specific applications for this technology, as well as adapting the soft robots to respond to inputs other than infrared light,” says Yin. “For example, developing a soft ring robot that operates in sunlight or in response to other external power sources.”

A paper on the research was recently published in the journal Advanced Science. You can see the robot in action in the video below.

Aerial tram-like autonomous soft ring robot

Source: North Carolina State University



Keytake Review: How I Created a Quality Video in Minutes



What if you could create a full video before your coffee even cools down? Sounds impossible, right?

But with today’s technology, it’s not just possible. It’s happening. After all, video content is expected to account for a significant portion of all internet traffic (around 82% by 2025). Content creators, educators, and businesses are scrambling to keep up with the demand.

That’s where tools like Keytake come in. If you’ve ever had a great idea but no time, experience, or budget to turn it into a polished video, it’s the perfect solution.

In this Keytake review, I’ll discuss the pros and cons, what it is, who it’s best for, and its key features. Then, I’ll show you how I used Keytake to generate this video in minutes:

I’ll finish the article by comparing Keytake with my top three alternatives (Pictory, Synthesys, and Deepbrain AI). By the end, you’ll know whether Keytake is right for you!

Verdict

Keytake is a fast and easy tool for producing videos in minutes, offering AI voiceovers, multiple input options, and ready-made templates that make consistent production a breeze. However, its limited customization and lack of personal touch can result in generic videos, and more complex projects may require additional editing.

Pros and Cons

  • Automates video production in minutes with no technical skills.
  • Creates videos from ideas, URLs, docs, and more.
  • AI voiceovers in 100+ languages and 350+ avatars.
  • Ready-made templates for training, demos, and more.
  • Great for consistent video production.
  • Customization options can be limited, leading to generic videos.
  • Complex projects may need extra editing for a polished finish.
  • Automated videos can lack emotional depth or a personal touch.

What Is Keytake?

 

Keytake is an AI video editor that turns your ideas, documents, or web content into professional videos in just minutes. It automates video creation by generating the clips, voiceover, script, and visuals from a single prompt.

The core problem Keytake addresses is pretty simple: traditional video production is extremely time-consuming. Keytake eliminates the need for scripting, editing, and voiceovers, which means anyone can create high-quality videos with minimal effort!

How Keytake Works

To use Keytake, all you have to do is upload a document. This can be a Google Doc, Word file, or URL. Keytake’s AI will analyze your content, break it into logical segments, and generate a video with synchronized visuals.

Keytake’s AI automates video creation, handling timing, transitions, and matching visuals to your text. You can choose from various templates and AI voice options or record your own voice to use as a voiceover. You can even adjust the pronunciation.

The AI is pretty good, but it works best when you give it well-structured content. Headings, bullet points, and clearly defined sections help Keytake understand how to organize the video segments.

Target Audience

Keytake is for people who regularly create a lot of content. Small to medium businesses can use it for training videos and product demonstrations, and educators love it for creating lecture content and explainer videos without dealing with the technical side of things. Meanwhile, content creators and marketers are another big user group (especially if you’re producing video content for multiple platforms).

The bottom line is that Keytake will cut your video production time significantly. If you’re spending too much time wrestling with video production, Keytake might be exactly what you need.

Who Is Keytake Best For?

Keytake is best for the following types of people:

  • Content creators can use Keytake to produce engaging videos quickly and efficiently.
  • Educators can use Keytake to create lecture content, explainer videos, or training materials.
  • Business professionals and teams can use Keytake to create training videos, product demonstrations, or presentations without dealing with traditional video editing.
  • Marketers can use Keytake to produce video content for multiple platforms to reach wider audiences.
  • Global teams and organizations can use Keytake to create multilingual videos and scale video production.

Keytake Key Features

Here are Keytake’s key features:

  • AI Video Creation: Instantly turn your ideas, documents, or URLs into professional videos in minutes.
  • Multiple Input Formats: Create AI videos from text prompts, Google Docs, Word files, and URLs.
  • 350+ Realistic AI Avatars: Offers a large library of lifelike AI avatars to narrate and present your content.
  • Multilingual Support: Generates videos with AI voiceovers and subtitles in over 100 languages.
  • Professional Templates: Includes templates for onboarding, training, product demos, and more.
  • Engaging Visuals and Animations: Integrates AI-generated images, animations, and visuals for better storytelling.
  • Deep Research Capabilities: Conducts thorough research for video content with relevant, up-to-date information.
  • All-in-One Editing Tools: Combines script generation, voiceovers, sound effects, subtitles, and transitions in a single platform for streamlined video creation.

How to Use Keytake

Here’s how I used Keytake to generate a video with artificial intelligence:

  1. Sign Up for Keytake
  2. Start with a Template
  3. Select Your Template
  4. Edit Your Video
  5. Turn an Idea into a Video
  6. Add a Text Prompt
  7. Add Your Video Details
  8. Create Your Video
  9. Export Your Video

Step 1: Sign Up for Keytake

Signing up for Keytake.

I started by going to keytake.ai and selecting “Sign Up” at the top right.

Step 2: Start with a Template

Starting with a template on Keytake.

After creating an account, I was taken to my projects. I appreciated the clean and simple interface.

In the center were three ways I could start making a video with AI:

  1. Document to video
  2. Prompt to video
  3. Start with a template

I was curious about the templates, so I selected “Start with a template.”

Step 3: Select Your Template

Selecting a template on Keytake.

Keytake offers a variety of templates for training, marketing, strategy, sales, product, and human resources. These templates come in several different styles as well, including calm, professional, bold, modern, simple, and more.

I selected the “Remote Worker Onboarding” template.

Step 4: Edit Your Video

The Keytake video editor.

After selecting my template, I was taken to a video editor.

Adding scenes to a video template on Keytake.

On the left, I could add scenes from my template by selecting “Add Scene.”

Adding an avatar, a voiceover, and music to the timeline when making a video with Keytake.

At the bottom was a timeline where I could add an avatar, an AI voiceover, and music. I could click and drag the edges to choose how long or short I wanted each clip to be.

Tools when editing a video on Keytake.

On the far right were all of my tools:

  • Voiceover (choose an AI voice and add text to apply a voiceover to your video)
  • Media (upload media or add media provided by Keytake)
  • Text (add titles, body text, and captions to your video)
  • Elements (add shapes and graphics to your video)
  • AI Gen (generate images, sound effects, and scripts with artificial intelligence)
  • Subtitles (create automatic or manual subtitles)
  • Avatar (select an AI avatar)

At the top right, I could save or export the video.

Step 5: Turn an Idea into a Video

Generating a video from a text prompt with Keytake.

Back in my Projects, I wanted to try generating a video with a prompt. I selected “Idea to video.”

Step 6: Add a Text Prompt

Generating a video with a text prompt on Keytake.

A window opened where I described what I wanted my video to be about: “Explain how missing sleep affects your brain.” I hit “Next” to generate the video.

You can also select the “Deep research” button, and Keytake will perform in-depth research on your topic.

Step 7: Add Your Video Details

Telling Keytake more about my video.

From there, Keytake asked me to add more details about my video:

  • The use case
  • Target audience
  • Tone
  • Number of scenes

Once everything was selected, I hit “Next.”

Step 8: Create Your Video

Selecting a template and creating a video with Keytake.

Next, I selected a template and hit “Create Video.”

Step 9: Export Your Video

Keytake instantly set to work producing my AI video.

The platform did an incredible job of generating scenes and creating a script with an AI voiceover that aligned with my topic. Even though some of the media didn’t match my topic, it still made for a great starting point.

Giving my Keytake video a name, choosing the format and resolution, and exporting the video.

I hit “Export,” where I could select the title, format, and resolution for my video.

A few minutes later, my video was exported! I could download or share it.

Here’s how my video made with Keytake came out:

Overall, Keytake made it incredibly easy to turn a simple idea into a video with AI. I was impressed by how smoothly I could go from selecting a template to exporting a video with a voiceover, visuals, and custom scenes.

Top 3 Keytake Alternatives

Here are the best Keytake alternatives.

Pictory

The first Keytake alternative I’d recommend is Pictory. Pictory uses AI to turn long-form content like webinars, blogs, and podcasts into engaging, shareable video clips. It’s an excellent tool for marketers and content creators looking to maximize their digital reach.

Both platforms are designed for fast, no-code video creation and use AI to automate the process. However, Pictory stands out with its extensive media library, text-to-video and image-to-video features, and strong customization options like custom branding and real-time team collaboration.

Meanwhile, Keytake turns ideas and documents into professional videos in minutes with its library of over 350 realistic AI avatars and its deep research capabilities. Keytake is particularly good for creating multilingual videos and quickly producing training, onboarding, or marketing presentations without technical skills.

For AI video editing with versatile customization options, choose Pictory. For fast video creation from documents or prompts with multilingual and research features, choose Keytake!

Read my Pictory review or visit Pictory!

Synthesys

The next Keytake alternative I’d recommend is Synthesys. Synthesys is an AI video generator for creating professional videos with realistic avatars and natural-sounding voiceovers in over 140 languages and 600 voices.

Both platforms offer automated, no-code video creation. They’re excellent for businesses, educators, and marketers looking to produce explainer videos, training materials, and product demos at scale.

However, Synthesys stands out with its customization options and advanced editing capabilities. You can tweak every element of your video: avatar selection, voice emotion, and pitch. You can even add visual elements like text, images, and transitions.

Meanwhile, Keytake focuses on speed and simplicity. It automates the entire process from document or idea to finished video. It’s a streamlined experience with professional templates and multilingual support, but with less granular control over each video element.

If you’re a creator who wants maximum flexibility, avatar and voice customization, and the ability to fine-tune every aspect of your video, choose Synthesys. If you prioritize automated video generation from documents or URLs with minimal manual input, choose Keytake!

Read my Synthesys review or visit Synthesys!

Deepbrain AI

The final Keytake alternative I’d recommend is Deepbrain AI. Deepbrain AI specializes in creating realistic AI avatars and offers a wide range of customization options.

Both platforms support multilingual video creation. Keytake offers AI voiceovers in over 100 languages, while Deepbrain AI supports more than 80 languages with accurate lip-syncing.

Deepbrain AI stands out with its extensive avatar library and deeper customization options, including the ability to add backgrounds, music, and branding elements directly within the AI video editor. Meanwhile, Keytake focuses on quick, automated video creation from documents and URLs with professional templates.

For customizable, realistic AI avatars with versatile editing, Deepbrain AI is the better choice. For fast, automated video generation from text prompts with broad language support and templates, choose Keytake!

Keytake Review: The Right Tool for You?

As someone with experience using more complicated video editors like Premiere Pro, I found Keytake a breath of fresh air. I went from a video idea to a fully narrated video in minutes.

It’s great for creating training content and quick explainers without getting lost in video production. If you value speed and simplicity over ultra-fine customization, Keytake is a smart choice.

However, if you’re working on more emotionally driven content or something that requires a cinematic feel, you might feel a bit boxed in by its templates and automation. But for what it does, Keytake delivers.

Otherwise, you might want to try these Keytake alternatives:

  • Pictory is best for marketers and repurposers of long-form content (e.g., webinars, blogs, and podcasts) who want strong branding control and real-time collaboration.
  • Synthesys is best for businesses and creators who want customizable avatars, voice tone, and detailed video elements for polished presentations.
  • Deepbrain AI is best for professionals who need realistic avatars, accurate lip-syncing, and strong visual branding in their videos.

Thanks for reading my Keytake review! I hope you found it helpful.

Keytake offers a free plan that includes 3 video exports per month (up to 3 minutes each), 1 custom avatar, standard video processing, AI dubbing, screen recording with an avatar, and the ability to share and download videos. Try it for yourself and see how you like it!

Level Up Your Security Stack with Splunk on Cisco


Are you a Cisco Security customer looking for deeper, more centralized insights into your security posture? The Enhancing Cisco Security Solutions with Data Analytics (ECSS) Learning Path in Cisco U. is designed to help you do just that. This comprehensive Learning Path focuses on the powerful integration of Cisco Security products with Splunk, offering a streamlined approach to data analysis and a unified view of your security landscape.

Too many dashboards, not enough insight

Cisco products offer fantastic options for meeting all your security data needs, but without Splunk, you may need to use multiple dashboards or manually correlate data from various sources.

This can lead to:

  • Inefficiencies: Spending valuable time switching between dashboards and manually correlating data.
  • Delayed response: Difficulty identifying and responding to threats in real time, particularly AI threats.
  • Increased complexity: The sheer volume of data and the lack of a unified view can easily overwhelm anyone.

The solution: Cisco and Splunk for unified security intelligence

There’s a way to get all these insights streamlined into one dashboard.

Splunk provides a unified platform for data aggregation, analysis, and visualization, offering a single pane of glass into your entire security ecosystem, including any operational technology (OT) networks that may previously have been siloed.

Here’s how you’ll benefit:

  • Centralized data collection: Aggregate data from all your Cisco security products into a single Splunk instance.
  • Customizable dashboards: Create tailored dashboards that provide the exact insights you need.
  • Real-time monitoring: Monitor your security environment in real time and respond to threats faster.
  • Enhanced operational efficiency: Reduce complexity and streamline your security workflows.
  • Improved decision-making: Make informed decisions based on comprehensive and accurate data.

What does ECSS cover?

The ECSS Learning Path covers introductory to intermediate-level knowledge of Splunk and how to use it for detecting, investigating, and responding to security threats across your entire network, whether on-site or used by a contractor globally.

There are six tracks:

  • Track 1: The Fundamentals of Observability
  • Track 2: Cisco, including Cisco XDR, Splunk SIEM, and Splunk SOAR
  • Track 3: Cisco Cloud Security App
  • Track 4: Cisco Security Legacy Apps and Technology Add-Ons
  • Track 5: Cisco Security with Splunk Use Cases
  • Track 6: Cisco Apps Troubleshooting and Tuning

A free pre-release of Tracks 1 and 3 is now available on Cisco U. The rest, including all labs, will be released later in the summer of 2025.

Get started on ECSS today

The ECSS Learning Path provides a clear path to enhanced security management by integrating Cisco Security products with Splunk. By centralizing data, streamlining workflows, and providing real-time insights, you can improve your security posture and protect your organization from evolving threats.

Start learning ECSS today and experience the benefits of a truly unified security solution. The power to gain full visibility and control of your security environment is at your fingertips.

How will this Learning Path help you? Let us know in the comments!

More Cisco U. Splunk learning

Set Up Cisco Duo with Splunk for Multifactor Authentication Logging

Spotlight 2024: Secure Syslog from the ASA to Splunk via SC4S

Network Access Control Monitoring Using Cisco Identity Services Engine and Splunk

 

Sign up for Cisco U. | Join the Cisco Learning Network today for free.

Follow Cisco Learning & Certifications

X | Threads | Facebook | LinkedIn | Instagram | YouTube

Use #CiscoU and #CiscoCert to join the conversation.
