
A New Frontier for Network Engineers


When you first hear about MCP (Model Context Protocol), it sounds like something built for hardcore AI researchers. But here's the reality: network engineers and automation engineers are going to be some of its biggest users.

In case you're wondering why: MCP is how you make Large Language Models (LLMs) understand your network, your topology, your standards, your world.

Without it? You're just getting generic ChatGPT answers.

With it? You're creating Agentic AI that can configure, troubleshoot, and design networks with you.

I've been talking to you (you! ...yes, you!) about network automation and adopting automation in your network engineering for years now. All in all, it's time to add another brick in *your* wall (of tech tools). In this AI Break, we'll explore an example that demonstrates the value of using MCP to master automation in today's AI world.

Okay, so what’s MCP?

At its heart, Model Context Protocol is about injecting structured knowledge into an LLM at runtime, automatically and programmatically.

Instead of manually pasting network diagrams or config templates into a chat window, MCP lets your tools tell the model:

  • What devices are on the network
  • What standards you use
  • What technologies you prefer (OSPF over EIGRP, EVPN over VXLAN, whatever)
  • What change control processes exist

All that context flows into the model, making its responses smarter, more aligned, and more useful for your environment.

Let's start with a basic, real-world example

Let's say you're building an LLM-based Network Assistant that helps generate configs. You don't want it suggesting RIP when your entire network runs OSPF and BGP.

With MCP, before you even ask the model for a config, you provide the AI with the following context:

Look familiar? Yup, it's JSON.

{
  "network_standards": {
    "routing_protocols": ["OSPF", "BGP"],
    "preferred_encapsulation": "VXLAN",
    "security_policies": {
      "ssh_required": true,
      "telnet_disabled": true
    }
  },
  "topology": {
    "core_devices": ["core-sw1", "core-sw2"],
    "edge_devices": ["edge-fw1", "edge-fw2"],
    "site_layout": "hub and spoke"
  }
}

Your assistant automatically sends this context to the LLM using MCP, and then asks, "Generate a config to onboard a new site."

The model now answers in a way that fits your environment, not some random textbook response.
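
Conceptually, the flow is tiny. Here's a rough sketch (not from the article) of the idea, assuming the JSON above is saved to a file named network_standards.json (a name used purely for illustration); whatever client your assistant actually uses, the structured context travels with the request:

import json

# Load the standards/topology context shown above.
with open("network_standards.json") as f:
    context = json.load(f)

# The request your assistant would hand to the LLM (via MCP or any other client):
request = {
    "context": context,
    "prompt": "Generate a config to onboard a new site.",
}
print(json.dumps(request, indent=2))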

So, what skills do you need to use MCP?

Honestly, a lot of you already have most of what's needed:

  • API basics. You'll be sending structured context (usually JSON) over API calls, just like RESTCONF, NETCONF, Catalyst Center, or Meraki APIs.
  • Understanding your network metadata. You need to know what matters: routing, VLANs, security, device types, and how to represent that as structured data.
  • Python scripting. You'll probably use Python to collect this data dynamically (via Nornir, Netmiko, or native APIs, as sketched right after this list) and then package it into MCP calls.
  • LLM basics. You need to understand how prompts and context windows work, and how better context equals smarter outputs.
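
For instance, here's a rough sketch of that collection step. The device details and credentials are hypothetical, and Netmiko appears only because it's named above; Nornir or a controller API would work the same way:

import json
from netmiko import ConnectHandler

# Illustrative inventory; in practice this would come from your source of truth.
DEVICES = [
    {"device_type": "cisco_ios", "host": "core-sw1", "username": "admin", "password": "secret"},
    {"device_type": "cisco_ios", "host": "core-sw2", "username": "admin", "password": "secret"},
]

def collect_device_facts() -> dict:
    """Grab a quick health snapshot from each core device."""
    facts = {}
    for dev in DEVICES:
        with ConnectHandler(**dev) as conn:
            facts[dev["host"]] = conn.send_command("show version | include uptime")
    return facts

# Package live facts alongside your standards as one structured context blob.
context = {
    "network_standards": {"routing_protocols": ["OSPF", "BGP"]},
    "device_facts": collect_device_facts(),
}
print(json.dumps(context, indent=2))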

The bottom line

MCP isn't some "maybe later" thing for networkers.

It's becoming the bridge between your real-world network knowledge and AI's ability to help you faster, better, and more accurately.

Engineers who know how to feed real context into LLMs will dominate network design, troubleshooting, security auditing, and even full-stack automation.

Start now

  • Map your network standards.
  • Package them as JSON.
  • Play with sending that context into small AI workflows.

The best AI agents are built by engineers who know their network, and who know how to teach it to their AI. Next, let's get hands-on with MCP!

Try it

For fully working code and instructions to get started, check out my project on GitHub.

The project creates a real Model Context Protocol (MCP) server designed for network engineers.

This MCP app does the following:

  • Serves your network standards (routing protocols, security policies, and so on)
  • Responds with device health
  • Connects to Claude Desktop, making your AI assistant aware of your real network environment

And it's as simple as:

  1. Import the MCP Python SDK
    from mcp.server.fastmcp import FastMCP
  2. Initialize the FastMCP server with a unique name
    mcp = FastMCP("network-assistant")
  3. Define tools.
    Tools are a powerful primitive in the Model Context Protocol (MCP). They let your server expose real actions, so the model can query systems, run logic, or kick off workflows. In our use case, we need to define 'network-standards' and 'device status' functions:
    @mcp.tool()
    async def get_network_standards() -> dict[str, Any]:
        """Returns standard routing protocols, encapsulation, and security policies."""
        return NETWORK_STANDARDS
  4. Run the server, and you are set! (A consolidated sketch of these steps follows the list.)
    if __name__ == "__main__":
        mcp.run(transport="stdio")
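
If you stitch those four steps together, a complete server can be as small as the sketch below. The NETWORK_STANDARDS values and the second tool are placeholders added here for illustration; check the GitHub project for the real, fully working version:

from typing import Any

from mcp.server.fastmcp import FastMCP

# Give the server a unique name so clients like Claude Desktop can identify it.
mcp = FastMCP("network-assistant")

# Sample standards; in practice, pull these from your source of truth.
NETWORK_STANDARDS: dict[str, Any] = {
    "routing_protocols": ["OSPF", "BGP"],
    "preferred_encapsulation": "VXLAN",
    "security_policies": {"ssh_required": True, "telnet_disabled": True},
}

@mcp.tool()
async def get_network_standards() -> dict[str, Any]:
    """Returns standard routing protocols, encapsulation, and security policies."""
    return NETWORK_STANDARDS

@mcp.tool()
async def get_device_status(hostname: str) -> dict[str, Any]:
    """Returns an illustrative, static health summary for a device."""
    return {"hostname": hostname, "reachable": True, "cpu_utilization_percent": 12}

if __name__ == "__main__":
    # stdio transport lets a local client (for example, Claude Desktop) talk to the server.
    mcp.run(transport="stdio")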
    

And if we take a look, this is what the LLM knows about your network before you contextualized it:

And this is after connecting the LLM to our network:

Where network automation and AI truly collide

You're not scripting for the sake of scripting. And you don't just use AI for the sake of buzzwords. When you can combine live network state with LLM intelligence, you're building systems that think, adapt, and assist with you, not just for you.

Start simple. Build one flow.
Make your AI agent truly know your network. Because the future belongs to engineers who don't just automate; they contextualize.

Welcome to the new frontier of Agentic AI!

Get started with AI

Learning Paths, courses, free tutorials, and more. Unlock the future of technology with artificial intelligence training in Cisco U. Explore AI learning and start building your skills today.

Sign up for Cisco U. | Join the Cisco Learning Network today for free.

Follow Cisco Learning & Certifications

X | Threads | Facebook | LinkedIn | Instagram | YouTube

Use #CiscoU and #CiscoCert to join the conversation.

Adaptability: The Must-Have Skill for Network Engineers in the AI Era

MCP for DevOps, NetOps, and SecOps: Real-World Use Cases and Future Insights

 




Evolving from Bots to Brainpower: The Ascendancy of Agentic AI



What truly separates us from machines? Free will, creativity and intelligence? But think about it. Our brains aren't singular, monolithic processors. The magic is not in a single "thinking part," but rather in countless specialized agents (neurons) that synchronize perfectly. Some neurons catalog facts, others process logic or govern emotion, still more retrieve memories, orchestrate movement, or interpret visual signals. Individually, they perform simple tasks, yet together, they produce the complexity we call human intelligence.

Now, imagine replicating this orchestration digitally. Traditional AI was always narrow: specialized, isolated bots designed to automate mundane tasks. But the new frontier is Agentic AI: systems built from specialized, autonomous agents that interact, reason and cooperate, mirroring the interplay within our brains. Large language models (LLMs) form the linguistic neurons, extracting meaning and context. Specialized task agents execute distinct functions like retrieving data, analyzing trends and even predicting outcomes. Emotion-like agents gauge user sentiment, while decision-making agents synthesize inputs and execute actions.

The result is digital intelligence and agency. But do we need machines to mimic human intelligence and autonomy?

Every domain has a choke point; Agentic AI unblocks them all

Ask the hospital chief who's trying to fill a growing roster of vacant roles. The World Health Organization predicts a global shortfall of 10 million healthcare workers by 2030. Doctors and nurses pull 16-hour shifts like it's the norm. Claims processors grind through endless policy reviews, while lab technicians wade through a forest of paperwork before they can even test a single sample. In a well-orchestrated Agentic AI world, these professionals get some relief. Claim-processing bots can read policies, assess coverage and even detect anomalies in minutes, tasks that would normally take hours of mind-numbing, error-prone work. Lab automation agents could receive patient data directly from electronic health records, run preliminary tests and auto-generate reports, freeing up technicians for the more delicate tasks that truly need human skill.

The same dynamic plays out across industries. Take banking, where anti-money laundering (AML) and know-your-customer (KYC) processes remain the biggest administrative headaches. Corporate KYC demands endless verification steps, complex cross-checks, and reams of paperwork. An agentic system can orchestrate real-time data retrieval, conduct nuanced risk assessment and streamline compliance so that employees can focus on actual client relationships rather than wrestling with forms.

Insurance claims, telecom contract reviews, logistics scheduling: the list is endless. Every domain has repetitive tasks that bog down talented people.

Yes, agentic AI is the flashlight in a dark basement: shining a bright light on hidden inefficiencies, letting specialized agents tackle the grunt work in parallel, and giving teams the bandwidth to focus on strategy, innovation and building deeper connections with customers.

But the true power of agentic AI lies in its ability to solve not only for efficiency or one department but to scale seamlessly across multiple functions, even multiple geographies. That is an improvement of 100x scale.

  • Scalability: Agentic AI is modular at its core, allowing you to start small (like a single FAQ chatbot) and then seamlessly expand. Need real-time order tracking or predictive analytics later? Add an agent without disrupting the rest. Each agent handles a specific slice of work, cutting development overhead and letting you deploy new capabilities without ripping apart your existing setup.
  • Anti-fragility: In a multi-agent system, one glitch won't topple everything. If a diagnostic agent in healthcare goes offline, other agents (like patient records or scheduling) keep working. Failures stay contained within their respective agents, ensuring continuous service. That means your entire platform won't crash because one piece needs a fix or an upgrade.
  • Adaptability: When regulations or consumer expectations shift, you can modify or replace individual agents (like a compliance bot) without forcing a system-wide overhaul. This piecemeal approach is akin to upgrading an app on your phone rather than reinstalling the entire operating system. The result? A future-proof framework that evolves alongside your business, eliminating massive downtimes or risky reboots.

You can't predict the next AI craze, but you can be ready for it

Generative AI was the breakout star a couple of years ago; agentic AI is grabbing the spotlight now. Tomorrow, something else will emerge, because innovation never rests. How, then, do we future-proof our architecture so each wave of new technology doesn't trigger an IT apocalypse? According to a recent Forrester study, 70% of leaders who invested over 100 million dollars in digital initiatives credit one strategy for success: a platform approach.

Instead of ripping out and replacing old infrastructure every time a new AI paradigm hits, a platform integrates these emerging capabilities as specialized building blocks. When agentic AI arrives, you don't toss your entire stack; you simply plug in the latest agent modules. This approach means fewer project overruns, quicker deployments, and more consistent outcomes.

Even better, a robust platform offers end-to-end visibility into each agent's actions, so you can optimize costs and keep a tighter grip on compute usage. Low-code/no-code interfaces also lower the entry barrier for business users to create and deploy agents, while prebuilt tool and agent libraries accelerate cross-functional workflows, whether in HR, marketing, or any other department. Platforms that support PolyAI architectures and a variety of orchestration frameworks allow you to swap different models, manage prompts and layer new capabilities without rewriting everything from scratch. Being cloud-agnostic, they also eliminate vendor lock-in, letting you tap the best AI services from any provider. In essence, a platform-based approach is your key to orchestrating multi-agent reasoning at scale, without drowning in technical debt or losing agility.

So, what are the core components of this platform approach?

  1. Data: Plugged into a common layer
    Whether you're implementing LLMs or agentic frameworks, your platform's data layer remains the cornerstone. If it's unified, each new AI agent can tap into a curated knowledge base without messy retrofitting.
  2. Models: Swappable brains
    A flexible platform lets you pick specialized models for each use case (financial risk analysis, customer service, healthcare diagnoses) and then update or replace them without nuking everything else.
  3. Agents: Modular workflows
    Agents thrive as independent yet orchestrated mini-services. If you need a new marketing agent or a compliance agent, you spin it up alongside existing ones, leaving the rest of the system stable.
  4. Governance: Guardrails at scale
    When your governance structure is baked into the platform, covering bias checks, audit trails, and regulatory compliance, you remain proactive, not reactive, no matter which AI "new kid on the block" you adopt next.

A platform approach is your strategic hedge against technology's ceaseless evolution, ensuring that no matter which AI trend takes center stage, you're ready to integrate, iterate, and innovate.

Start small and orchestrate your way up

Agentic AI isn't entirely new; Tesla's self-driving cars employ multiple autonomous modules. The difference is that new orchestration frameworks make such multi-agent intelligence widely accessible. No longer confined to specialized hardware or industries, Agentic AI can now be applied to everything from finance to healthcare, fueling renewed mainstream interest and momentum. Design for platform-based readiness: start with a single agent addressing a concrete pain point and expand iteratively. Treat data as a strategic asset, select your models methodically, and bake in clear governance. That way, each new AI wave integrates seamlessly into your existing infrastructure, boosting agility without constant overhauls.

BayLISA and Cisco DevNet – Tech meetups in person and online


What happens when the sysadmin community meets a modern developer movement? BayLISA and DevNet, that's what 😎!

What is BayLISA?

BayLISA stands for Bay Area Large Installation System Administrators and is a user group of system and network administrators from the San Francisco Bay Area. The user group was founded in the very early 1990s after the fourth Large Installation System Administration (LISA) conference. BayLISA meets monthly to discuss topics of interest to administrators and managers of sites supporting more than 100 users and/or computers. The idea was to provide a forum for sysadmin professionals in the San Francisco Bay Area to get together and exchange ideas, hear speakers address topics of interest and, most importantly, socialize. The meetings are free and open to the public. BayLISA has supported and educated systems, network, storage, virtualization, and other technology professionals in the Bay Area for over 30 years. You can find more details about the user group at https://baylisa.org and https://www.meetup.com/baylisa.

BayLISA meets Cisco DevNet

Alright, where does Cisco DevNet come into the picture? The story of the collaboration between BayLISA and Cisco DevNet begins in the second half of 2023. As people were starting to return to in-person events, I had the privilege of being a speaker at the DeveloperWeek CloudX 2023 conference in San Mateo, California. There I met Ron Pagani, one of the BayLISA organizers. After about a year's hiatus from running the monthly meetups, Ron was looking at getting back to in-person events. At Cisco DevNet we were looking at growing our member community at the time, so Ron and I thought we could collaborate in a way that's beneficial for everyone. This collaboration would mean, first of all, finding space on the Cisco campus in San Jose where we could run the monthly meetup. As a follow-up to that, we were also able to suggest speakers and become members of the BayLISA board.

We had a successful 2024, running several meetups at the Cisco campus in San Jose, but for this year we decided to change location. For the foreseeable future we'll run the meetup at 3350 Scott Blvd Building 54, Santa Clara, CA 95054, in the Ding Ding Studio. You should double-check the location of the meetup at https://www.meetup.com/baylisa just in case, and also please RSVP at this link so that we know you plan on joining us and can better organize the logistics for the event. Besides topics of interest and fantastic people you can interact and chat with, there's also pizza and refreshments for everyone. We also try to help people who are looking for jobs, or who have positions open on their teams or in their company, by offering a space to make those important connections and share information.

Live streaming

While people living in the Bay Area are more than welcome to join our monthly meetup, we know not everyone can attend in person. We also want to expose the knowledge shared in our meetup to a larger online audience. So we've decided to live stream the BayLISA monthly meetups on the Cisco DevNet social media platforms, including YouTube, LinkedIn, X/Twitter, Twitch and Facebook.

The meetup takes place on the third Thursday of the month, so make sure to save that in your calendars. We'll also have the live streaming sessions scheduled in advance so that you can see the topic of each session, as well as the speakers and timing. Since the sessions are recorded, you can also view the recordings online. For YouTube, I've created a playlist at this link: https://www.youtube.com/playlist?list=PL2k86RlAekM_r-_3uDdU4P2o4kLRF6VnB. You can already find there the recording of the session for the month of April, where Madhupriya Ravishankar and Tianhong Zhang from Oracle presented Converged Database: JSON Meets Relational for the Modern Era.

Whether you're a Bay Area native or tuning in from afar, follow DevNet on your favorite platform to stay plugged into systems and network administration.

Look for us on social for the latest on NetOps, DevOps, SecOps, Automation, and AI:

DevNet on YouTube
DevNet on LinkedIn
DevNet on X (Twitter)
DevNet on BlueSky
DevNet on Twitch
DevNet on Instagram
DevNet on Facebook
DevNet on TikTok




Cisco joins AI infrastructure alliance



"The addition of Cisco reinforces AIP's commitment to an open-architecture platform and fostering a broad ecosystem that supports a diverse range of partners on a non-exclusive basis, all working together to build a new kind of AI infrastructure," the group said in a statement.

Separately, Cisco announced AI initiatives centered in the Middle East region. Last week, Cisco CEO Chuck Robbins visited Saudi Arabia, UAE, Qatar, and Bahrain. This week, Jeetu Patel, executive vice president and chief product officer, is in Saudi Arabia, where he is participating in President Trump's state visit to the region, according to Cisco. Related new projects include:

  • An initiative with HUMAIN, Saudi Arabia's new AI enterprise, to help build an open, scalable, resilient and cost-efficient AI infrastructure: "This landmark collaboration will set a new standard for how AI infrastructure is designed, secured and delivered – combining Cisco's global expertise with the Kingdom's bold AI ambitions. The multi-year initiative aims to position the country as a global leader in digital innovation," Cisco stated.
  • A collaboration with the UAE-based G42 to co-develop a secure AI portfolio and AI-native services: Cisco and G42 will work together to evaluate the potential to co-develop and jointly deploy AI-powered cybersecurity offerings, as well as a reference architecture that integrates Cisco's networking, security, and infrastructure solutions specifically designed for high-performance computing. This collaboration aims to help customers build and secure AI-ready data centers and develop AI workloads effectively, according to the companies.
  • Interest in Qatar's digital transformation: Qatar's Ministry of Interior and Cisco signed a letter of intent to collaborate on Qatar's digital transformation, AI, infrastructure development and cybersecurity.

Despite the hype, Interact Analysis expects humanoid adoption to remain slow




Agility Robotics, a leading developer of humanoid robots, has deployed its Digit robot with GXO. | Source: Agility Robotics

The nascent humanoid robots market offers a big opportunity, but uptake will be low in the short and medium term, according to a new report from Interact Analysis. Despite the hype about humanoid robots and significant investment activity, the market intelligence specialist predicts market growth will be relatively slow, reaching over 40,000 units by 2032 with a total market revenue of about $2 billion.

In its new Humanoid Robots report, Interact Analysis assesses the potential and likely future development of the global humanoid robot market by scenario. It concludes there is a large addressable market, an estimated $2 trillion, but four key barriers hamper widespread adoption of the technologies.

While its projections for the sector remain conservative, the company has generated a set of three scenarios, optimistic, baseline, and pessimistic, that demonstrate the various trajectories the humanoid robots market could take through to 2032, with each showing steep growth from 2029 onwards.

"The humanoid robot market is currently experiencing substantial hype, fueled by a large addressable market and significant investment activity," said Rueben Scriven, Interact Analysis research manager. "However, despite the potential, our outlook remains cautious due to several key barriers that hinder widespread adoption, including high prices and the gap in the dexterity needed to match human productivity levels, both of which are likely to persist into the next decade. Nonetheless, we maintain that there is significant potential in the mid- to long term."

Four barriers hindering humanoid adoption

Interact Analysis expects humanoid adoption to be slow in the coming years, but to grow by the end of the 2020s. | Source: Interact Analysis

Interact Analysis said the first factor hindering humanoid robot adoption is regulatory and safety concerns. Humanoid robots, unlike the mobile robots that have become increasingly popular in warehousing and distribution, can lose their balance and topple over at even a small malfunction. Whether the robot can fall safely, without harming humans around it, is a top concern for those interested in deploying humanoids.

Regulatory committees are actively working to develop standards for humanoids, but it's a work in progress. Until then, end users have to weigh the benefits of adopting humanoids against the concerns of adopting a brand-new technology.

Next, the research firm said dexterity limitations are a top factor limiting adoption. Physical or embodied AI is a rapidly developing area of robotics, but, again, it's still new. While some humanoid companies show incredible videos of humanoid robots performing graceful movements, these videos are often meticulously planned out and prepared. This leaves many potential end users wondering whether the robots are up to the task.

Cost is the third factor limiting adoption. Humanoids require many custom components that aren't yet being produced at scale. This drives up the cost of the entire robot, making them far more expensive than mobile robots or robotic arms.

The final factor is the question of whether humanoids are the optimal form factor for many AI-enabled robotic tasks and applications. The attention around humanoids has blown up in recent years, but, to a much quieter degree, so has the attention around wheeled mobile manipulators.

Many traditional mobile robot and robotic arm companies, as well as plenty of new startups, are working to launch wheeled mobile manipulators, which they say provide the same flexibility as a humanoid with tested and proven products.




Expect gradual standardization

Although the majority of components used to create humanoid robots have been developed in-house, Interact Analysis predicts there will be a gradual standardization of form factors as the market matures. The need for small, lightweight, and highly integrated components with very high torque density has necessitated in-house production, but components will slowly shift to being off-the-shelf.

Given the relative immaturity of the market at present, Interact Analysis has observed significant diversity in design trends, with small humanoid robots often equipped with planetary drives and wider variation among larger, adult-height robots.

There are also regional differences, with many Chinese vendors favoring high-speed motors with harmonic reducers for most joints, and more robust, cost-efficient high-torque motors with planetary gearboxes for key areas such as hip joints.