
Room by room, heart by heart: Helping foster youth thrive in college


May is National Foster Care Awareness Month — a time dedicated to recognizing the resilience of youth in foster care and the essential role we all play in supporting their journeys. For many of these young people, stepping onto a college campus is not just the start of a new chapter — it’s the start of a new life. That’s why Move In Day Mafia exists: to ensure foster youth aren’t just seen during their college transition but truly supported.


New Rooms, New Beginnings

For many, college is a time of firsts — the first taste of independence, the first real shot at shaping a future, and the first steps into a world of possibility. It’s a season of discovery, excitement, and the thrill of the unknown. For many first-generation college students, those emotions run even deeper. But for young people emerging from the foster care system, the experience is often marked by an entirely different reality: survival.

Imagine stepping onto a college campus carrying every belonging you own in a single backpack. No parents to help set up your dorm. No family to send you care packages. No blueprint for how to navigate this brand-new world. Just hope — and the sheer will to succeed against the odds.

“Only 3–4% of youth who age out of foster care ever earn a college degree — Move In Day Mafia is determined to change that.”

The hurdles facing foster youth are staggering. According to the National Foster Youth Institute, only about 3–4% of youth who age out of foster care ever earn a college degree. Many never even get the chance to enroll. The reasons are as heartbreaking as they are complex: unstable housing, lack of financial resources, emotional trauma, and an absence of reliable adult support. Even after overcoming these obstacles to reach a university, many foster youth find themselves isolated, ill-prepared, and overwhelmed.

That’s where Move In Day Mafia comes in.

Cisco employee Jenina John-Guobadia with her husband and the MIDM crew.

Move In Day Mafia exists with a powerful, clear mission: to ensure that students from the foster care system are not forgotten as they step into college life. Their work begins with the basics — turning bare dorm rooms into safe, welcoming homes. A simple comfort like a real mattress, a desk stocked with supplies, or a closet full of essentials can mean the difference between feeling like an outsider and believing you belong.

For some of these students, a dorm room is the first stable place they’ve ever called their own. It’s their sanctuary, their launchpad, and their first real taste of what it means to dream without limits. And yet, without support, even something as basic as a furnished room can seem out of reach.

More than a Makeover

Through its involvement with Move In Day Mafia, Cisco helps bridge that gap. Beyond providing financial support, Cisco has mobilized its employees and resources to directly uplift these students — helping to furnish dorm rooms, supply technology needs, and ensure that no student walks into college empty-handed.

An inspiring example of this commitment is Cisco’s ongoing support for the “Adopt a Scholar” program. Through this initiative, Cisconians come together to purchase care package items for students preparing to begin their college journeys. These care packages are filled with essentials like bedding, toiletries, school supplies and even personal notes of encouragement. It’s a collective effort that brings the Cisco community together in support of new beginnings, sending a powerful message to every student: you are seen, you are valued, and you are supported.

Cisco volunteers with Move In Day Mafia founder, TeeJ Mercer.

Together, Move In Day Mafia and Cisco are making sure that these young people — who have already faced more adversity than many do in a lifetime — have a foundation to build on. They’re sending a message that someone believes in their potential, that they aren’t alone, and that their dreams are valid.

Every pillow placed on a bed, every lamp set up on a desk, every laptop connected to Wi-Fi represents a new beginning. A fresh start. A way forward. Because every child deserves the chance to not just survive college — but to thrive.


HPE Aruba unveils raft of new switches for data center, campus modernization



And in large-scale enterprise environments embracing collapsed-core designs, the switch acts as a high-performance aggregation layer. It consolidates services, simplifies network architecture, and enforces security policies natively, reducing complexity and operational cost, Gray said. In addition, the switch offers the agility and security required at colocation facilities and edge sites. Its built-in Layer 4 stateful security and automation-ready platform enable rapid deployment while maintaining strong control and visibility over distributed infrastructure, Gray said.

The CX 10040 significantly expands the capacity it can provide and the roles it can serve for enterprise customers, according to one industry analyst.

“From the enterprise side, this expands on the feature set and capabilities of the original 10000, giving customers the ability to run more services directly in the network,” said Alan Weckel, co-founder and analyst with The 650 Group. “It helps drive a lower TCO and provide a more secure network.”

Aimed as a VMware alternative

Gray noted that HPE Aruba is combining its recently announced Morpheus VM Essentials plug-in package, which offers a hypervisor-based package aimed at hybrid cloud virtualization environments, with the CX 10040 to deliver a significant alternative to Broadcom’s VMware package.

“If customers want to get out of the business of having to buy VM cloud or Cloud Foundation stuff and all of that, they can replace the distributed firewall, microsegmentation and many of the capabilities found in the old VMware NSX [networking software] and the CX 10k, and Morpheus can simply replace that functionality [such as VM orchestration, automation and policy management],” Gray said.

The 650 Group’s Weckel weighed in on the idea of the CX 10040 as a VMware alternative:

30 AI Terms Every Tester Should Know


Artificial Intelligence
Artificial intelligence refers to non-human programs that can solve sophisticated tasks requiring human intelligence. For example, an AI system that intelligently identifies images or classifies text. Unlike narrow AI that excels at specific tasks, artificial general intelligence would possess the ability to understand, learn, and apply knowledge across different domains, similar to human intelligence.

AI System
An AI system is a complete framework that includes the AI model, datasets, algorithms, and computational resources working together to perform specific functions. AI systems can range from simple rule-based programs to complex generative AI systems capable of creating original content.

Narrow AI
Narrow AI (also called weak AI) refers to artificial intelligence that is focused on performing a specific task, such as image recognition or speech recognition. Most current AI applications use narrow AI, which excels at its programmed function but lacks the broad capabilities of human intelligence.

Expert Point of View: “AI is really just a study of intelligent agents. These agents are autonomous, perceive and act on their own within an environment, and generally use sensors and effectors to do so. They analyze themselves with respect to error and success and then adapt, possibly in real time, depending on the application.” This supports the idea of AI systems being complete frameworks capable of learning and adapting.

– Tariq King, No B.S. Guide to AI in Automation Testing

Machine Learning

Machine Learning

Formally, machine learning is a subfield of artificial intelligence.

However, in recent years, some organizations have begun using the terms artificial intelligence and machine learning interchangeably. Machine learning allows computer systems to learn from and make predictions based on data without being explicitly programmed. Different types of machine learning include supervised learning, unsupervised learning, and reinforcement learning.

Machine Learning Model
A machine learning model is a representation of what a machine learning system has learned from the training data. These learned models form the basis for AI to analyze new data and make predictions.

Machine Learning Algorithm
A machine learning algorithm is a specific set of instructions that allows a computer to learn from data. These algorithms form the backbone of machine learning systems and determine how the model learns from input data to generate outputs.

Machine Learning Techniques
Machine learning techniques include various approaches to training AI models, including decision trees, random forests, support vector machines, and deep learning, which uses artificial neural network architectures inspired by the human brain.

Machine Learning Systems
Machine learning systems are end-to-end platforms that handle data preprocessing, model training, evaluation, and deployment in a streamlined workflow to solve specific computational problems.

Expert Point of View: “Machine learning is taking a bunch of data, looking at the patterns in there, and then making predictions based on that. It’s one of the core pieces of artificial intelligence, alongside computer vision and natural language processing.” This highlights the role of machine learning models in analyzing data and making predictions.

– Trevor Chandler, QA: Masters of AI Neural Networks

Generative AI

Generative AI
Generative AI is a type of AI model that can create new content such as images, text, or music. These AI tools leverage neural networks to produce original outputs based on patterns learned from training data. Generative AI tools like chatbots have transformed how we interact with AI technologies.

Large Language Model
A large language model is a type of AI model trained on vast amounts of text data, enabling it to understand and generate human language with remarkable accuracy. These models power many conversational AI applications and can perform various natural language processing tasks.

Hallucination
Hallucination occurs when an AI model generates outputs that are factually incorrect or have no basis in its training data. This phenomenon is particularly common in generative AI systems and poses challenges for responsible AI development.

Expert Point of View: “One of the challenges with generative AI is ensuring the outputs are accurate. While these models are powerful, they can sometimes produce results that are incorrect or misleading, which is why understanding their limitations is critical.” This directly addresses the issue of hallucination in generative AI systems.

– Guljeet Nagpaul, Revolutionizing Test Automation: AI-Powered Innovations

Neural Network

Neural Network
A neural network is a computational model inspired by the structure of the human brain. It consists of interconnected nodes (neurons) that process and transmit information. Neural networks form the foundation of many advanced machine learning techniques, particularly deep learning.

Artificial Neural Network
An artificial neural network is a specific implementation of neural networks in computer science that processes information through layers of interconnected nodes to recognize patterns in the data used to train the model.

Deep Learning
Deep learning is a subset of AI that uses multi-layered neural networks to analyze large amounts of data. These complex networks can automatically extract features from data, enabling breakthroughs in computer vision and speech recognition.
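To make the idea of layers of interconnected nodes concrete, here is a minimal Python sketch (not from the original glossary) of the forward pass through a two-layer network. The layer sizes and random weights are arbitrary illustration values.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

# A toy two-layer network: 4 input features -> 8 hidden units -> 1 output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # layer 1 weights and biases (parameters)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # layer 2 weights and biases

def forward(x):
    hidden = relu(x @ W1 + b1)   # each hidden "neuron" combines all inputs
    return hidden @ W2 + b2      # the output combines all hidden activations

sample = rng.normal(size=(1, 4))  # one input with four features
print(forward(sample))
```

Deep learning models follow the same pattern, only with many more layers, and with weights learned from data rather than drawn at random.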

Expert Point of View: “Natural language processing refers to code that gives technology the ability to understand the meaning of text, complete with the writer’s intent and their sentiments. NLP is the technology behind text summarization, your digital assistant, voice-operated GPS, and, in this case, a customer service chatbot.” This directly supports the idea of NLP enabling computers to interpret and generate human language.

– Emily O’Connor, from the AG24 Session on Testing an AI Chatbot Powered by Natural Language Processing

Types of Learning

Supervised Learning
Supervised learning is a type of machine learning where the model learns from labeled training data to make predictions. The AI system is trained using input-output pairs, with the algorithm adjusting until it achieves the desired accuracy.

Unsupervised Learning
Unsupervised learning involves training an AI on unlabeled data, allowing the model to discover patterns and relationships independently. This form of artificial intelligence is particularly useful when working with datasets where the structure is not immediately apparent.

Reinforcement Learning
Reinforcement learning is a machine learning technique where an AI agent learns by interacting with its environment and receiving feedback in the form of rewards or penalties. This approach has been crucial in developing AI that can master complex games and robotics.
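As a small illustration of supervised learning, the sketch below trains a decision tree on synthetic labeled data with scikit-learn; the dataset and hyperparameters are invented for the example.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Labeled training data: each input row is paired with a known output label.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = DecisionTreeClassifier(max_depth=3)  # a simple supervised learner
model.fit(X_train, y_train)                  # learn from input-output pairs
print("held-out accuracy:", model.score(X_test, y_test))
```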

Expert Point of View: “Training a neural network is like teaching it to differentiate between cats and dogs. You feed it data, reward it for correct answers, and adjust weights for wrong ones. Over time, it learns to recognize patterns in the data, much like how humans learn through experience.” This highlights the process of training artificial neural networks to recognize patterns.

– Noemi Ferrera 

Natural Language Processing

Natural Language Processing
Natural language processing (NLP) is a field within artificial intelligence focused on enabling computers to understand, interpret, and generate human language. NLP powers everything from translation services to conversational AI that can engage in human-like dialogue.

Transformer
A transformer is a type of AI model that learns to understand and generate human-like text by analyzing patterns in large amounts of text data. Transformers have revolutionized natural language processing tasks and form the backbone of many large language models.
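For readers who want to see a transformer-based language model in action, one possible sketch uses the Hugging Face transformers library; it assumes the library is installed and downloads the small gpt2 model on first use.

```python
from transformers import pipeline

# Load a small pre-trained transformer language model (downloaded on first run).
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt based on patterns learned from its training text.
result = generator("Software testers use AI to", max_new_tokens=20)
print(result[0]["generated_text"])
```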


Key AI Terms and Concepts

Model
An AI model is a program trained on data to recognize patterns or make decisions without further human intervention. It uses algorithms to process inputs and generate outputs.

Algorithm
An algorithm is a set of instructions or steps that allows a program to perform a computation or solve a problem. Machine learning algorithms are sets of instructions that enable a computer system to learn from data.

Model Parameter
Parameters are internal to the model, and their values are estimated or learned from data. For example, weights are the parameters of neural networks.

Model Hyperparameter
A model hyperparameter is a configuration that is external to the model and whose value cannot be estimated from data. For example, the learning rate for training a neural network is a hyperparameter.
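A short scikit-learn sketch can make the distinction concrete: here C is a hyperparameter chosen up front, while the coefficients are parameters learned from (synthetic) data. The values are illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Hyperparameters are set before training and are not learned from the data.
model = LogisticRegression(C=0.5, max_iter=1000)

# Parameters (here, the coefficients and intercept) are learned during fit().
model.fit(X, y)
print("learned parameters:", model.coef_, model.intercept_)
```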

Model Artifact
A model artifact is the byproduct created by training the model. These artifacts can be plugged into the ML pipeline to serve predictions.

Model Inputs
An input is a data point from a dataset that you pass to the model. For example:

  • In image classification, an image can be an input
  • In reinforcement learning, an input could be a state

Model Outputs
A model output is the prediction or decision made by a machine learning model based on input data. The quality of outputs depends on both the algorithm and the data used to train the AI model.

Dataset
A dataset is a collection of data used for training, validating, and testing AI models. The quality and volume of data in a dataset significantly affect the performance of machine learning models.

Ground Truth
Ground truth data is the actual, verified data used for training, validating, and testing AI/ML models. It is extremely important for supervised machine learning.

Data Annotation
Annotation is the process of labeling or tagging data, which is then used to train and fine-tune AI models. This data can take various forms, such as the text, images, or audio used in computer vision systems.

Features
A feature is an attribute associated with an input or sample. An input can be composed of multiple features. In feature engineering, two kinds of features are commonly used: numerical and categorical.
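As a quick illustration, the sketch below builds a tiny table with two numerical features and one categorical feature, then one-hot encodes the categorical one with pandas; the column names and values are made up for the example.

```python
import pandas as pd

# One sample per row; "age" and "income" are numerical features,
# "browser" is a categorical feature.
samples = pd.DataFrame({
    "age": [34, 52, 29],
    "income": [58000, 72000, 41000],
    "browser": ["chrome", "safari", "chrome"],
})

# A common feature-engineering step: encode the categorical feature numerically.
encoded = pd.get_dummies(samples, columns=["browser"])
print(encoded)
```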

Compute
Compute refers to the computational resources (processing power) required to train and run AI models. Advanced AI applications often require significant compute resources, especially for training complex neural networks.

Training and Evaluation

Model Training
Model training in machine learning is “teaching” a model to learn patterns and make predictions by feeding it data and adjusting its parameters to optimize performance. It is the key step in machine learning that results in a model ready to be validated, tested, and deployed. AI training often requires significant computational resources, especially for complex models.
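The toy training loop below (a sketch, not production code) shows the core idea: feed the model data and repeatedly adjust its parameters, here the weight w and bias b of a one-variable linear model, to reduce the error.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0] + 0.5 + rng.normal(scale=0.1, size=100)  # synthetic data

w, b = 0.0, 0.0       # model parameters, adjusted during training
learning_rate = 0.1   # hyperparameter

for epoch in range(200):
    pred = w * X[:, 0] + b
    error = pred - y
    # Gradient descent: nudge parameters to reduce the mean squared error.
    w -= learning_rate * 2 * np.mean(error * X[:, 0])
    b -= learning_rate * 2 * np.mean(error)

print(f"learned w={w:.2f}, b={b:.2f}")  # should approach 3.0 and 0.5
```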

Fine-Tuning
Fine-tuning is the process of taking a pre-trained AI model and training it further on a specific, often smaller, dataset to adapt it to particular tasks or requirements. This technique is commonly used when developing AI for specialized applications.

Inference
A model inference pipeline is a program that takes input data and then uses a trained model to make predictions or inferences from that data. Inference is the process of deploying and using a trained model in a production environment to generate outputs on new, unseen data.

ML Pipeline
A machine learning pipeline is a series of interconnected data processing and modeling steps designed to automate, standardize, and streamline the process of building, training, evaluating, and deploying machine learning models. ML pipelines aim to automate and standardize the machine learning process, making it more efficient and reproducible.
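Here is a minimal scikit-learn example of the idea: preprocessing and a model chained into a single pipeline that is trained once and then used for inference on new data. The dataset and steps are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Preprocessing and modeling chained into one reproducible pipeline.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("classify", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X_train, y_train)                       # training step
print("predictions:", pipeline.predict(X_test[:5]))  # inference on unseen data
```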

Model Registry
A model registry is a repository of trained machine learning models, along with their versions, metadata, and lineage. It dramatically simplifies the task of tracking models as they move through the ML lifecycle, from training to production deployment.

Batch Size
The batch size is a hyperparameter that defines the number of samples to work through before updating the internal model parameters.
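The sketch below shows what a batch size of 32 means in practice: the training data is walked through in chunks of 32 samples, and in a real loop the model parameters would be updated after each chunk. The array contents are arbitrary.

```python
import numpy as np

X = np.arange(1000).reshape(250, 4)  # 250 samples, 4 features each
batch_size = 32                      # hyperparameter: samples per parameter update

for start in range(0, len(X), batch_size):
    batch = X[start:start + batch_size]
    # In a real training loop, the model parameters would be updated
    # once per batch, based on the loss computed over these samples.
    print(f"processing samples {start}..{start + len(batch) - 1}")
```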

Batch vs. Real-time Processing
Batch processing is done offline. It analyzes large historical datasets and allows the machine learning model to make predictions on that data. Real-time processing, also known as online or stream processing, thrives in fast-paced environments where data is continuously generated and immediate insights are crucial.

Feedback Loop
A feedback loop is the process of leveraging the output of an AI system and the corresponding end-user actions in order to retrain and improve models over time.


Model Evaluation and Ethics

Model Evaluation
Model evaluation is the process of assessing model performance for specific use cases. It may also be referred to as the observability of a model’s performance.

Model Observability
ML observability is the ability to monitor and understand a model’s performance across all stages of the model development cycle.

Accuracy
Accuracy refers to the proportion of correct predictions a model makes, calculated by dividing the number of correct predictions by the total number of predictions.

Precision
Precision shows how often an ML model is correct when predicting the target class.

Recall, or True Positive Rate (TPR)
Recall is a metric that measures how often a machine learning model correctly identifies positive instances (true positives) out of all the actual positive samples in the dataset.

F1-Score
The F1 score can be interpreted as the harmonic mean of precision and recall, where an F1 score reaches its best value at 1 and its worst score at 0.
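These four metrics are easy to compute directly from a confusion matrix. The helper below is a plain-Python sketch using invented counts; it simply mirrors the definitions above.

```python
def classification_metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)          # correct / total
    precision = tp / (tp + fp)                          # correct among predicted positives
    recall = tp / (tp + fn)                             # correct among actual positives (TPR)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Example confusion-matrix counts: 80 true positives, 10 false positives,
# 20 false negatives, 90 true negatives.
print(classification_metrics(tp=80, fp=10, fn=20, tn=90))
```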

Data Drift
Data drift is a change in the model inputs that the model isn’t trained to handle. Detecting and addressing data drift is vital to maintaining ML model reliability in dynamic settings.
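One simple way to watch for data drift (an illustration, not the only approach) is to compare the distribution of a feature at training time against recent production values, for example with a two-sample Kolmogorov-Smirnov test from SciPy; the data here is synthetic.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # what the model saw during training
live_feature = rng.normal(loc=0.4, scale=1.0, size=5000)      # recent production inputs

# The Kolmogorov-Smirnov test compares the two distributions;
# a small p-value suggests the feature's distribution has drifted.
statistic, p_value = ks_2samp(training_feature, live_feature)
print(f"KS statistic={statistic:.3f}, p-value={p_value:.3g}")
```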

Concept Drift
Concept drift is a change in the relationship between the inputs and the target variable. It means that whatever your model is predicting is changing.

Bias
Bias is a systematic error that occurs when some aspects of a dataset are given more weight and/or representation than others. There are many kinds of bias, such as historical bias and selection bias. Addressing bias is a critical component of responsible AI efforts.

AI Ethics
AI ethics encompasses the moral principles and values that guide the development and use of artificial intelligence. This includes considerations around fairness, transparency, privacy, and the social impact of AI technologies across the AI landscape.

Computer Vision

Computer Vision
Computer vision is a field of AI that trains computers to interpret and understand visual information from the world. Image recognition systems are a common application of computer vision technology.

Understanding these key terms will improve your comprehension of AI concepts and provide a solid foundation for navigating the rapidly evolving field of artificial intelligence. As AI terminology continues to grow, staying informed about different AI applications and technologies becomes increasingly important for professionals across all industries.

AI-Driven Automation for Faster Case Resolution with Cisco’s High-Performance Data Center Stretch Database


Introduction

As AI adoption accelerates across industries, businesses face an undeniable truth — AI is only as powerful as the data that fuels it. To truly harness AI’s potential, organizations must effectively manage, store, and process high-scale data while ensuring cost efficiency, resilience, performance and operational agility.

At Cisco Support Case Management – IT, we faced this challenge head-on. Our team delivers a centralized IT platform that manages the entire lifecycle of Cisco product and service cases. Our mission is to provide customers with the fastest and most effective case resolution, leveraging best-in-class technologies and AI-driven automation. We achieve this while maintaining a platform that is highly scalable, highly available, and cost-efficient. To deliver the best customer experience, we must efficiently store and process massive volumes of growing data. This data fuels and trains our AI models, which power critical automation features to deliver faster and more accurate resolutions. Our biggest challenge was striking the right balance between building a highly scalable, reliable database cluster and ensuring cost and operational efficiency.

Traditional approaches to high availability typically rely on separate clusters per data center, leading to significant costs, not only for the initial setup but also to maintain and manage the data replication process and high availability. However, AI workloads demand real-time data access, rapid processing, and uninterrupted availability, something legacy architectures struggle to deliver.

So, how do you architect a multi-data center infrastructure that can persist and process massive data to support AI and data-intensive workloads, all while keeping operational costs low? That’s exactly the challenge our team set out to solve.

In this blog, we’ll explore how we built an intelligent, scalable, and AI-ready data infrastructure that enables real-time decision-making, optimizes resource utilization, reduces costs and redefines operational efficiency.

Rethinking AI-ready case management at scale

In today’s AI-driven world, customer support is no longer just about resolving cases; it’s about continuously learning and automating to make resolution faster and better while efficiently managing cost and operational agility.

The same rich dataset that powers case management must also fuel AI models and automation workflows, reducing case resolution time from hours or days to mere minutes, which in turn increases customer satisfaction.

This created a fundamental challenge: decoupling the primary database that serves the mainstream case management transactional system from an AI-ready, search-friendly database, a necessity for scaling automation without overburdening the core platform. While the idea made good sense, it introduced two major concerns: cost and scalability. As AI workloads grow, so does the volume of data. Managing this ever-expanding dataset while ensuring high performance, resilience, and minimal manual intervention during outages required an entirely new approach.

Rather than following the traditional model of deploying separate database clusters per data center for high availability, we took a bold step toward building a single stretched database cluster spanning multiple data centers. This architecture not only optimized resource utilization and reduced both initial and maintenance costs but also ensured seamless data availability.

By rethinking traditional index database infrastructure models, we redefined AI-powered automation for Cisco case management, paving the way for faster, smarter, and more cost-effective support solutions.

How we solved it – The technology foundation

Building a modern multi-data center index database cluster required a solid technological foundation, capable of handling high-scale data processing, ultra-low latency for faster data replication, and a careful design approach that builds in fault tolerance to support high availability without compromising performance or cost-efficiency.

Network Requirements

A key challenge in stretching an index database cluster across multiple data centers is network performance. Traditional high availability architectures rely on separate clusters per data center, often struggling with data replication, latency, and synchronization bottlenecks. To begin, we conducted a detailed network analysis across our Cisco data centers, focusing on:

  • Latency and bandwidth requirements – Our AI-powered automation workloads demand real-time data access. We analyzed latency and bandwidth between two separate data centers to determine whether a stretched cluster was viable.
  • Capacity planning – We assessed our anticipated data growth, AI query patterns, and indexing rates to ensure that the infrastructure could scale efficiently.
  • Resiliency and failover readiness – The network needed to handle automated failovers, ensuring uninterrupted data availability, even during outages.

How Cisco’s high-performance data center paved the way

Cisco’s high-performance data center networking laid a strong foundation for building the multi-data center stretched single database cluster. The latency and bandwidth provided by Cisco data centers exceeded our expectations, allowing us to confidently move on to the next step of designing a stretched cluster. Our implementation leveraged:

  • Cisco Application Centric Infrastructure (ACI) – Provided a policy-driven, software-defined network, ensuring optimized routing, low-latency communication, and workload-aware traffic management between data centers.
  • Cisco Application Policy Infrastructure Controller (APIC) and Nexus 9000 Switches – Enabled high-throughput, resilient, and dynamically scalable interconnectivity, crucial for rapid data synchronization across data centers.

Cisco’s data center and networking technology made this possible. It empowered Cisco IT to take this idea forward and enabled us to build this successful cluster, which saves significant costs and provides high operational efficiency.

Our implementation – The multi-data center stretch cluster leveraging Cisco data center and network strength

With the right network infrastructure in place, we set out to build a highly available, scalable, and AI-optimized database cluster spanning multiple data centers.

Cisco multi-data center stretch index database cluster

 

Key design decisions

  • Single logical, multi-data center cluster for real-time AI-driven automation – Instead of maintaining separate clusters per data center, which doubles costs, increases maintenance effort, and demands significant manual intervention, we built a stretched cluster across multiple data centers. This design leverages Cisco’s exceptionally powerful data center network, enabling seamless data synchronization and supporting real-time AI-driven automation with improved efficiency and scalability.
  • Intelligent data placement and synchronization – We strategically place data nodes across multiple data centers, using custom data allocation policies to ensure each data center maintains its own complete copy of the data, enhancing high availability and fault tolerance (a configuration sketch follows this list). Additionally, locally attached storage disks on virtual machines enable faster data synchronization, leveraging Cisco’s robust data center capabilities to achieve minimal latency. This approach optimizes both performance and cost-efficiency while ensuring data resilience for AI models and critical workloads, and it speeds up AI-driven queries by reducing data retrieval latencies for automation workflows.
  • Automated failover and high availability – With a single cluster stretched across multiple data centers, failover occurs automatically thanks to the cluster’s inherent fault tolerance. In the event of virtual machine, node, or data center outages, traffic is seamlessly rerouted to available nodes or data centers with minimal to no human intervention. This is made possible by the robust network capabilities of Cisco’s data centers, which enable data synchronization in less than 5 milliseconds for minimal disruption and maximum uptime.
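The post does not name the underlying index database, so the sketch below is only illustrative: assuming an Elasticsearch-style cluster, shard allocation awareness is one way to express a "one full copy of the data per data center" placement policy. The node attribute name, the data center values dc1 and dc2, and the endpoint URL are hypothetical.

```python
import requests

# Hypothetical setup: each node's configuration would carry an attribute such as
#   node.attr.datacenter: dc1   (or dc2)
# identifying which physical data center it runs in.

settings = {
    "persistent": {
        # Spread shard copies across the "datacenter" attribute so that each
        # data center holds its own copy of every shard.
        "cluster.routing.allocation.awareness.attributes": "datacenter",
        # Forced awareness keeps replicas from piling onto the surviving
        # data center if the other one goes offline.
        "cluster.routing.allocation.awareness.force.datacenter.values": "dc1,dc2",
    }
}

# "localhost:9200" is a placeholder for the cluster's coordinating endpoint.
response = requests.put("http://localhost:9200/_cluster/settings", json=settings)
print(response.json())
```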

Results

By implementing these AI-focused optimizations, we ensured that the case management system could power automation at scale, reduce resolution time, and maintain resilience and efficiency. The results were realized quickly.

  • Faster case resolution: Reduced resolution time from hours or days to just minutes by enabling real-time AI-powered automation.
  • Cost savings: Eliminated redundant clusters, cutting infrastructure costs while improving resource utilization.
    • Infrastructure cost reduction: 50% savings per quarter by consolidating to one single stretched cluster and completely eliminating a separate backup cluster.
    • License cost reduction: 50% savings per quarter, since licensing is required for only one cluster.
  • Seamless AI model training and automation workflows: Provided scalable, high-performance indexing for continuous AI learning and automation improvements.
  • High resilience and minimal downtime: Automated failovers ensured 99.99% availability, even during maintenance or network disruptions.
  • Future-ready scalability: Designed to handle growing AI workloads, ensuring that as data scales, the infrastructure remains efficient and cost-effective.

By rethinking traditional high availability strategies and leveraging Cisco’s cutting-edge data center technology, we created a next-gen case management platform — one that’s smarter, faster, and AI-driven.

 



SATMAR nanosatellite to debut in June, advancing digital maritime connectivity



by Hugo Ritmico

Madrid, Spain (SPX) May 28, 2025






The maritime industry’s digital evolution will gain momentum on June 21 with the scheduled launch of SATMAR, a 6U nanosatellite engineered entirely by Alen Space. The satellite will lift off aboard a SpaceX Falcon 9 rocket during the Transporter-14 rideshare mission from Vandenberg Space Force Base in California.

Developed in collaboration with Egatel and backed by Spain’s Ports 4.0 innovation program, SATMAR is designed to validate the VHF Data Exchange System (VDES), a next-generation maritime communication standard set to supersede the current Automatic Identification System (AIS). Ports 4.0 is led by Puertos del Estado and the Spanish Port Authorities to drive disruptive innovation within port infrastructure.

SATMAR will serve as an orbital platform to test real-world applications of VDES over Spanish territorial waters. The satellite will assess bidirectional data transfer in the VHF band, aiming to reduce communication saturation and enhance the efficiency, safety, and environmental sustainability of global maritime operations.

Use case testing will be carried out in partnership with the Port Authority of the Bay of Algeciras and companies such as Oritia and Boreas. SATMAR will support functionalities including VDES signal transmission, long-range coastal connectivity, encrypted messaging, maritime safety alerts, and accurate vessel arrival predictions.

“This is a pioneering mission with global implications,” stated Guillermo Lamelas, CEO of Alen Space. “The VDES standard is poised to redefine maritime communications, bringing forth digital transformation, new businesses, and critical safety enhancements for vessels and ports.”

Beyond its primary VDES mission, SATMAR also hosts a secondary payload for spectrum analysis, positioning the satellite as an orbiting Software Defined Radio (SDR) test lab. It will experiment with high-speed S-band communications and monitor RF interference across the VHF, L, and S bands. The payloads were designed with in-orbit reconfigurability to adjust testing as the mission evolves.

The satellite builds on the foundation of the SHIPMATE project, previously developed by Alen Space and Egatel together with Gradiant, reinforcing a sustained commitment to advancing space-based maritime digitalization.

