
Using Generative AI for Technical Writing at Cisco ThousandEyes


 

As a technical content writer for Cisco ThousandEyes, I have to admit, my greatest fear is being replaced by a generative AI algorithm. In the spirit of “keeping my friends close and my enemies closer,” I decided to befriend my potential adversary. In this article, I’ll share how using CIRCUIT, Cisco’s internal AI assistant, has augmented my ability as a technical writer to keep up with the rapid pace of change in our industry.

Fast context switching

I work on multiple products for ThousandEyes and, thanks to my previous lives as a NOC (Network Operations Center) Engineer and Backend Software Developer, my technical expertise means I’m often called upon when the team needs extra coverage. While I have a broad and deep understanding of many technologies, the focus required to write about them effectively is on par with the focus I once needed as a developer to build them.

In engineering, “rubber ducking” means talking through a problem at someone or something in order to organize one’s thoughts (not necessarily for a response). I use CIRCUIT as a rubber duck in context-switching situations. With generative AI chat, I can maintain a conversation where I present ideas and ask it to summarize the key points I need to refresh on. The conversational style moves me quickly into the headspace necessary for making difficult context switches. Rather than needing half a day to get back in the flow, I can pivot to a new task within 30 minutes.

Shorter SME interviews

Something I enjoy about being a technical writer is getting to learn about new things. As much as I love learning, however, given our current velocity (and volume) of innovation, I often feel like I’m drinking from a firehose. CIRCUIT provides me with a quiet space where I can safely ask dumb questions about unfamiliar technologies. This maximizes the value I can bring to SME (Subject Matter Expert) interviews.

When an SME has to walk me through the basics, interviews can take as much as an hour and may even span multiple sessions. Even when I prepare on my own, I spend a lot of time gathering the information I need and discerning what’s actually useful. With CIRCUIT’s help, the information is presented to me in a summarized form that’s easy to digest. I can quickly assess it for accuracy, ask clarifying questions, and respond with corrections to improve my results. It even shows me which sources it used, so I can go deeper on my own. This significantly reduces the time I need to ramp up on a new technology. As a result of using CIRCUIT to ramp up on unfamiliar technologies, my SME interviews are highly concentrated and efficient, typically lasting a mere 15-20 minutes.

Reduced repetitive tasks

The Documentation Team at ThousandEyes uses a “docs-as-code” approach. For us, this means our documentation lives in GitHub and we format it using Markdown. Markdown tables are tedious and error-prone. When I have to create a new Markdown table, I use CIRCUIT to create it with the values I provide. Then, all I have to do is copy and paste. It’s also helpful when I need to summarize content, such as providing a one-sentence description or writing a short introductory paragraph for a technical article. CIRCUIT can provide me with the summaries I need instantly, sourced from our own documentation.
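To show the kind of fiddly formatting the assistant spares me, here is a minimal sketch of generating a Markdown table from supplied values. The helper and the sample values are my own illustration, not CIRCUIT output or ThousandEyes tooling.

```python
def to_markdown_table(headers, rows):
    """Render a header row, a separator row, and data rows as a Markdown table."""
    render = lambda cells: "| " + " | ".join(str(c) for c in cells) + " |"
    lines = [render(headers), render("---" for _ in headers)]
    lines += [render(row) for row in rows]
    return "\n".join(lines)

# Hypothetical values a writer might hand over instead of typing pipes by hand.
print(to_markdown_table(["Setting", "Default"], [["timeout", 30], ["retries", 3]]))
```

Hand-aligning those pipe characters for every row is exactly the repetitive, error-prone work worth delegating.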

Instant contextual information

The hardest thing to document isn’t the technical stuff. It’s the stuff that connects the technical stuff to what the user actually needs and wants. This is the part of the writing process that depends on the writer’s experience and intuition. Technical writing isn’t just making a bunch of screenshots and describing them. It’s figuring out what the user wants to do and then figuring out the best way to present the information to minimize the user’s cognitive load.

According to feedback gathered by one of our product managers, customers rated ThousandEyes’ documentation higher than our competitors’. Caring about our users is a big priority for my team. For example, I understand how challenging making sense of documentation can be in the midst of a high-severity outage, especially when you’ve been woken up in the middle of the night. When I’m figuring out how to document something, I do a lot of background research, including perusing educational materials on the ThousandEyes blog and user-contributed content on sites like Reddit and Stack Overflow. Asking CIRCUIT why someone might use something makes part of the writing process much faster (although it does take finesse to phrase my prompts effectively).

The list of use cases and descriptions I get from the AI not only allows me to empathize more deeply with our users, but also helps me craft my content to ensure relevance and ease of use.

Embracing GenAI

Generative AI has transformed my approach to technical writing at Cisco ThousandEyes, turning what could have been a daunting adversary into a powerful ally. By streamlining context switching, accelerating learning for SME interviews, automating repetitive tasks, and providing instant contextual insights, CIRCUIT has not only boosted my efficiency but continues to raise the quality of our user-focused documentation. Rather than replacing the human touch, CIRCUIT amplifies my ability to empathize with users and deliver clear, relevant content that meets their needs, even in high-pressure scenarios. As the pace of innovation continues to accelerate, integrating AI into technical writing is not just a practical choice but a strategic one that empowers writers to stay ahead in a rapidly evolving industry.

 



Parasoft brings agentic AI to service virtualization in latest release


Parasoft is helping customers address the unique requirements of testing AI with several new capabilities across its various testing solutions.

The company added an agentic AI assistant to Virtualize, its virtual testing simulation solution, allowing customers to create virtual services using natural language prompts.

For example, a user could write the prompt: “Create a virtual service for a payment processing API. There should be a POST and a GET operation. The operations should require an account ID along with other data related to payment.”

The platform will then draw from the provided API service definitions, sample requests/responses, and written descriptions of a service to generate a virtual service with dynamic behavior, parameterized responses, and the correct default values.
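To make “dynamic behavior and parameterized responses” concrete, here is a toy in-process sketch of the payment service the example prompt describes. It is my own illustration of what a virtual service does, not Parasoft-generated code; the class name, fields, and status codes are all assumptions.

```python
import uuid

class VirtualPaymentService:
    """Toy stand-in for a virtualized payment API: POST creates a payment
    record, GET retrieves it by ID."""

    def __init__(self):
        self._payments = {}

    def post(self, body):
        # Parameterized response: echo request fields back, and reject
        # requests missing the required account ID.
        if "account_id" not in body:
            return 400, {"error": "account_id is required"}
        payment_id = str(uuid.uuid4())
        record = {"payment_id": payment_id, "status": "PENDING", **body}
        self._payments[payment_id] = record
        return 201, record

    def get(self, payment_id):
        record = self._payments.get(payment_id)
        return (200, record) if record else (404, {"error": "payment not found"})
```

A test client would exercise it like a real dependency: `status, rec = svc.post({"account_id": "acct-42", "amount": 25})` returns `201` and a record whose ID a later `svc.get(rec["payment_id"])` resolves.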

According to Parasoft, this reduces the need for deep domain knowledge or development experience and accelerates time to delivery.

“This is more than an incremental improvement: it’s a game-changer. Whether you’re simulating third-party APIs, mimicking unavailable or incomplete dependencies, or supporting API-first development, Parasoft Virtualize now makes virtual service creation dramatically faster and more accessible, with no specialized skills or coding required,” the company wrote in a blog post.

Additionally, a new capability was added to Virtualize and SOAtest, the company’s API and microservices testing solution, for testing applications that rely on MCP.

Users can test MCP servers, simulate unavailable MCP tools, and build test environments through MCP interfaces.

One of the other new features is designed to help organizations deal with the unpredictable outputs of LLMs. According to Parasoft, traditional methods of data validation are ineffective in dealing with varied LLM outputs.

With the latest update, customers can describe expected behaviors in natural language so they can validate business logic more quickly.

Other recent updates include support for GraphQL selection sets in Virtualize, easier access to the Provisioning Assets project in the Virtualize Server view, and support for PostgreSQL in Parasoft Continuous Testing Platform (CTP).

“Parasoft’s latest integration of agentic AI capabilities and ongoing AI testing advancements are a significant leap forward in helping customers navigate the complexities of modern software development,” said Igor Kirilenko, chief product officer at Parasoft. “Our legacy of embracing AI continues to drive their journey of adopting API-first development while setting new benchmarks for quality in AI-infused applications.”

Technology is coming so fast data centers are obsolete by the time they launch



Tariffs aside, Enderle feels that AI technology, and ancillary technology around it like battery backup, is still in the early stages of development, and there will be significant changes coming in the next few years.

GPUs from AMD and Nvidia are the primary processors for AI, and they are derived from video game accelerators. They were never meant for use in AI processing, but they are being fine-tuned for the task. It’s better to wait for a more mature product than something that’s still in a relatively early state.

But Alan Howard, senior analyst for data center infrastructure at Omdia, disagrees and says not to wait. One reason is that the pace at which people are building data centers is all about seizing market opportunity. “You have to have a certain amount of capacity to make sure that you can execute on strategies meant to capture more market share.”

The same sentiment exists on the colocation side, where there is a considerable shortage of capacity as demand outstrips supply. “To say, well, let’s wait and see if maybe we’ll be able to build a better, more efficient data center by not building anything for a couple of years. That’s just straight up not going to happen,” said Howard.

“By waiting, you’re going to miss market opportunities. And these companies are all in it to make money. And so, the almighty dollar rules,” he added.

Howard acknowledges that by the time you design and build the data center, it’s obsolete. The question is, does that mean it can’t do anything? “I mean, if you start today on a data center that’s going to be filled with [Nvidia] Blackwells, and let’s say you deploy in two years when they’ve already retired Blackwell, and they’re making something completely new. Is that data center full of Blackwells useless? No, you’re just not going to get as much out of it as you would with whatever new generation they’ve got. But if you wait to build that, then you’ll never catch up,” he said.

Unlock New Possibilities with Cisco Modeling Labs 2.9


Cisco Modeling Labs (CML) has long been the go-to platform for network engineers, students, and developers to design, simulate, and test network topologies in a virtual environment. With the release of Cisco Modeling Labs version 2.9, we’re excited to introduce new features that enhance its capabilities, offering flexibility, scalability, and ease of use.

Containers: A game-changer for network simulation

One of the most compelling new features in CML 2.9 is the ability to integrate Docker containers. Previously, CML was limited to virtual machines (VMs), such as IOS-XE and Catalyst 9000. Now you can add lightweight, optimized node types that consume fewer resources, so you can build larger and more diverse labs.

CML 2.9 ships with 10 pre-built container images, including:

  • Browsers: Chrome and Firefox for in-lab web access.
  • Routing: Free Range Routing (FRR), a lightweight, open-source routing suite supporting OSPF and other protocols.
  • Network Services: Dnsmasq, Syslog, Netflow, Nginx, Radius, and TACACS+ for essential network functions.
  • Utilities: Net-tools (packed with 20+ network tools like TShark and Nmap) and a ThousandEyes agent for monitoring.

This opens up a whole new world of possibilities, allowing you to simulate complex scenarios with specialized tools and services directly within your CML topology. Because containers are lightweight and resource-efficient, you can run more services without a heavy impact on system performance. Plus, you have the flexibility to create and integrate your own custom container images from Docker Hub or other sources.

Containers in CML integrate seamlessly with VM nodes, allowing you to connect them within the same network topology and enable full bidirectional communication. Because containers use significantly less CPU, memory, and disk than VMs, they start faster and let you run more nodes on the same hardware. For large CML labs, use containers and clustering for optimal performance. Containers are lightweight, making it possible to scale labs efficiently by running services or routing functions that don’t require a full VM.

How can I share labs with other users?

For teams and educational environments, CML 2.9 introduces a more fine-grained permission system. As a lab owner or system admin, you can now share labs with other users, giving collaborators access to work on shared projects or learning activities. This feature lets you move beyond the basic read/write access available in earlier versions. The new permission levels include:

  • View: Allows users to see the lab but prevents any state changes or edits
  • Exec: Grants permission to view and interact with the lab; for instance, you can start and stop nodes
  • Edit: Enables users to modify the lab, such as moving, adding, or deleting nodes and changing configurations
  • Admin: Gives you full access to the lab and also allows you to share the lab with other users

This enhanced control streamlines collaboration, ensuring users have exactly the right level of access for their tasks.
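The four levels form a strict hierarchy, where each level includes everything below it. That structure can be sketched as an ordered enum; this is an illustrative model under that assumption, not CML’s actual implementation.

```python
from enum import IntEnum

class LabPermission(IntEnum):
    """Ordered lab permission levels; names mirror CML 2.9's levels."""
    VIEW = 1   # see the lab only
    EXEC = 2   # also start/stop nodes
    EDIT = 3   # also modify topology and configs
    ADMIN = 4  # full access, including re-sharing the lab

def allows(granted: LabPermission, required: LabPermission) -> bool:
    # Higher levels subsume everything below them.
    return granted >= required
```

For instance, a collaborator granted Edit can start and stop nodes (`allows(LabPermission.EDIT, LabPermission.EXEC)` is true), while a View-only user cannot.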

→ See the product documentation for details on setup and use.

How does node disk image cloning work?

Tired of repeatedly configuring custom nodes for your labs? Node disk image cloning in CML 2.9 solves this problem. If you’ve customized a node’s configuration or made specific edits, you can now clone that node’s disk image and save it as a new image type. This means faster lab setup for frequently used devices and configurations, saving you valuable time.

Node disk image cloning is great for saving time in lab setup when you’ve modified a node, such as Ubuntu, to add extra tools and want to create a new node type with those same tools installed.

How do I manage labs using the external labs repository and Git integration?

CML 2.9 introduces Git integration, allowing you to tie your CML instance directly to an external Git repository. This feature changes how you access and manage sample labs. Instead of manually downloading and importing individual lab files, you can now provide a repository URL, and CML will sync the content, making it available under the Sample Labs menu.

Cisco provides a collection of sample labs on the CML Community GitHub on the Cisco DevNet site, including certification-aligned labs (such as CCNA), which can be imported with a single click.

This feature also allows you to add your own Git repositories (such as GitLab and Bitbucket), empowering you to manage your own lab content seamlessly.

CML supports any Git-based repo, but authentication for private repos is not supported. We’ve added some CCNA labs, and we’re working to integrate more advanced certification content, such as CCNP, into the sample labs repositories. Offline users will get a pre-synced snapshot at installation.

What are other new CML 2.9 enhancements?

Beyond the major new features, CML 2.9 includes these enhancements:

  • Increased scalability: The limit of concurrently running nodes has risen from 320 to 520.
  • Web framework replacement (FastAPI): The product now uses FastAPI as its new API web framework, resulting in improved supportability, faster API performance, enhanced documentation, and improved validation.
  • API support for bulk operations: Simplify your automation efforts with new API capabilities that allow for bulk operations, such as fence-selecting and deleting groups of nodes with a single API call.
  • Enable all node definitions by default: This quality-of-life improvement allows you to import all labs by default, regardless of whether a particular node and image definition are available on your system.
  • Custom font for terminal windows: You can now configure custom fonts for your console terminal windows to match your preferred CLI experience.
  • IP address visibility: You can now view the assigned IP addresses for interfaces connected to an external NAT connector.
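The bulk-operations point is easy to picture: one request naming many nodes replaces a loop of per-node calls. The sketch below builds such a request body; the operation name, payload schema, and the endpoint shown in the comment are hypothetical illustrations, not CML’s actual API.

```python
import json

def bulk_delete_payload(node_ids):
    """Build one request body naming every node to delete, replacing a
    loop of per-node DELETE calls. Schema is illustrative only."""
    return json.dumps({"operation": "delete", "node_ids": list(node_ids)})

# A client would then issue a single call, e.g. (hypothetical endpoint):
# requests.post(f"{base_url}/labs/{lab_id}/nodes/bulk",
#               data=bulk_delete_payload(["n1", "n2", "n3"]),
#               headers=auth_headers)
```

One round trip instead of N is what makes fence-select-and-delete fast from automation scripts as well as from the UI.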

Explore the power of CML 2.9

CML 2.9 underscores our commitment to delivering a state-of-the-art network simulation platform. As we expand its capabilities and explore further container orchestration, advanced lab automation, and new API developments, we encourage our community to contribute to the growing library of sample labs on our DevNet GitHub repository. And we’re working to make adding new node types even easier in the future.

Ready to explore the power of CML 2.9? Download it now to try out these new features today.

Leave a comment below and let us know what you think!

 




Nile CEO Pankaj Patel on why it’s time to rethink networking with NaaS



We built Nile to be the first zero-trust network requiring no network operations, offering a pure pay-as-you-use model, per user or square foot. Think of it like Uber: you say where you want to go, and the service gets you there without you managing the vehicle.

Q: What’s the security model behind Nile’s NaaS approach?

Patel: We wanted to shift the dynamic of the network from security worry to security force multiplier with the very first zero-trust network. According to Gartner and others, networking is the surface area where 60%-70% of cyberattacks originate. We architected and designed our service to seal that surface area off completely; no lateral movement is possible. That’s a key reason why financial institutions, healthcare, manufacturing, and retail customers are embracing us. These are environments where downtime is unacceptable, and our service delivers four-nines uptime, backed financially.

We’re also targeting mid-sized enterprises, organizations with 100 users to about 5,000 to 10,000 users, because they’re the most vulnerable. They don’t have the security budgets of a Fortune 100 company, but their needs are just as critical.

Q: How are you integrating AI into Nile’s offering? And what makes it different from other vendors?

Patel: Other vendors bolt AI onto legacy environments; they give you dashboards or chatbot answers, but don’t fix anything. We started with a data-centric approach. We put very deep instrumentation across all the network elements. We collect tons of data, although, by the way, it’s all metadata; we don’t collect any private data, and we’re learning from all the collected data. We recently announced our Networking Experience Intelligence (NXI) platform, which is truly the culmination of our efforts in user experience. It considers all the events that can adversely affect the network and, more importantly, automatically resolves the issues.

Q: How are large enterprises adopting this model? Especially those with legacy infrastructure?

Patel: The very large enterprises, such as very large financial institutions like JP Morgan Chase or Citi, aren’t going to change overnight. They still have their own data centers, and they manage some workloads through AWS. But these kinds of large enterprises are embracing NaaS at the edge: branch offices, retail locations, and remote sites. These are places where traditional IT support just doesn’t scale, and uptime is business-critical. We’re seeing strong adoption there because we offer guaranteed performance and simplified operations. They won’t completely overhaul their core infrastructure, but they’re interested in NaaS for branch and remote locations.

Q: You mentioned Nile offers cost savings as well. Can you quantify that?

Patel: We typically deliver a 40% to 60% reduction in total cost of ownership. That includes hardware, software, and lifecycle management; we remove all the operational overhead. We’re able to provide the true first financially backed performance guarantee at scale, and we have eliminated alerts completely, which may be music to a lot of people’s ears. There are no alerts in this environment because we fix the issues automatically. It’s a truly uninterrupted user experience.