
Cisco Live 2025: Collaboration Reimagined for the Agentic AI Era


How Cisco is shaping the workplace of tomorrow, and what it means for partners

At Cisco Live 2025, collaboration wasn't just a track; it was a signal, a sign that Cisco is doubling down on how people and digital agents will work together in the AI era. This new reality will be secured, automated, and powered by the network.

For partners, this moment presents fresh opportunities: to lead conversations about intelligent workplaces, differentiated meeting experiences, and the convergence of security and collaboration, innovations only Cisco can deliver.

Here's what you need to know from San Diego, and how it helps you differentiate and grow.

The New Intelligent Workplace Is Agentic and AI-Driven

Cisco's collaboration announcements focused on blending physical and digital experiences, with AI as the connective tissue. Highlights include:

  • Room Vision PTZ: A cinematic, AI-powered multi-camera experience that dynamically frames speakers and adjusts to room activity, designed to reduce fatigue and elevate hybrid presence.
  • AI-Powered Device Management: Devices that self-configure and integrate seamlessly across spaces, powered by NVIDIA chipsets, RoomOS, and AI-native workflows.
  • Smart Sensors as Workplace Insight Engines: Cameras, mics, switches, and access points now act as environmental and occupancy sensors, helping IT and facilities teams optimize energy, safety, and real estate decisions.

Why this matters: These innovations create new service layers for you to monetize, including workplace assessments, intelligent room design, and AI-first collaboration deployments, all delivering measurable business impact.

Agentic Ops + Collaboration = Simplicity for IT and End Users

Cisco introduced AgenticOps, a new operational paradigm where people and AI agents work together in multiplayer environments. Combined with Webex and unified management platforms, IT teams can now:

  • Automate troubleshooting across Webex devices, Meraki, Catalyst, ThousandEyes, and Splunk.
  • Generate real-time dashboards using natural language and AI Canvas.
  • Collaborate cross-functionally (network ops, security, and facilities) in a single interface.

Why this matters: These capabilities position you to offer AI-powered managed services that span user experience, infrastructure, and security, establishing your firm as the glue between collaboration and operations.

Security Built In: Universal Zero Trust for People, Things, and Agents

Cisco's Universal ZTNA now extends zero trust policies not just to people and devices, but also to AI agents. Add in integrated Duo, ISE, ThousandEyes, and Talos telemetry, and Cisco is delivering policy-aware collaboration experiences that are secure by design.

Why this matters: You can now bring secure-by-default collaboration solutions to regulated industries and public sector clients, built into the Cisco platform, not bolted on.

Microsoft Teams + Cisco Devices: A Strategic Differentiator

Cisco devices now natively support Microsoft Teams Rooms, making it easier than ever for customers to standardize on Cisco hardware while keeping their UC strategy flexible.

Key benefits include:

  • Seamless integration: Customers keep their Teams workflows while gaining Cisco's device intelligence, security, and enhanced experiences.
  • Unified management: Policy and observability extend across platforms, simplifying operations for IT teams.

Why this matters: This opens the door to non-Webex accounts, allowing you to position Cisco infrastructure in Microsoft-first environments and expand your reach.

Bottom Line: A Platform Play Partners Can Build On

Cisco's collaboration announcements aren't just product releases; they're proof of a broader strategy. The network is foundational to collaboration. AI is native, not bolted on. And security is fused into every layer, from agent interactions to room devices.

For partners, this means:

  • New recurring revenue streams through smart devices, services, and lifecycle management.
  • Strategic conversations with line-of-business and IT leaders.
  • A unique ability to deliver outcomes across workplace, workforce, and workloads.

 

 


We'd love to hear what you think. Ask a question, comment below, and stay connected with #CiscoPartners on social!

Cisco Partners Facebook  |  @CiscoPartners X/Twitter  |  Cisco Partners LinkedIn




ScyllaDB X Cloud's autoscaling capabilities meet the needs of unpredictable workloads in real time


The team behind the open-source distributed NoSQL database ScyllaDB has announced a new iteration of its managed offering, this time focusing on adapting workloads based on demand.

ScyllaDB X Cloud can scale up or down within minutes to match actual usage, eliminating the need to overprovision for worst-case scenarios or deal with latency while waiting for autoscaling to kick in. For example, the company says it takes only a few minutes to scale from 100K to 2M OPS.

According to the company, applications like retail or food delivery services often have peaks aligned with customer work hours and then a low baseline during off-hours. "In this case, the peak loads are 3x the base and require 2-3x the resources. With ScyllaDB X Cloud, they can provision for the baseline and quickly scale in/out as needed to serve the peaks. They get the steady low latency they need without having to overprovision, paying for peak capacity 24/7 when it's really only needed for 4 hours a day," Tzach Livyatan, VP of product for ScyllaDB, wrote in a blog post.

ScyllaDB X Cloud also reaps the benefits of tablets, which were introduced last year in ScyllaDB Enterprise. Tablets distribute data by splitting tables into smaller logical pieces, or tablets, that are dynamically balanced across the cluster.

"ScyllaDB X Cloud lets you take full advantage of tablets' elasticity. Scaling can be triggered automatically based on storage capacity (more on this below) or based on your knowledge of expected usage patterns. Moreover, as capacity expands and contracts, we'll automatically optimize both node count and utilization," Livyatan said.

Tablets also raise the maximum storage utilization that ScyllaDB can safely run at from 70% to 90%. This is because tablets can move data to new nodes faster, allowing the database to defer scaling until the last minute.

Support for mixed instance sizes also helps achieve the 90% storage utilization. For example, a company can start with small instances and then replace them with larger instances later if needed, rather than having to add the same instance size again.

"Previously, we recommended adding nodes at 70% capacity. This was because node additions were unpredictable and slow, sometimes taking hours or days, and you risked running out of space. We'd send a soft alert at 50% and automatically add nodes at 70%. However, those big nodes often sat underutilized. With ScyllaDB X Cloud's tablets architecture, we can safely target 90% utilization. That's particularly helpful for teams with storage-bound workloads," Livyatan said.

Other new features in ScyllaDB X Cloud include file-based streaming, dictionary-based compression, and a new "Flex Credit" pricing option, which combines the cost benefits of an annual commitment with the flexibility of on-demand pricing, ScyllaDB says.

Nanoneedle patch offers painless alternative to traditional cancer biopsies


A patch containing tens of millions of microscopic nanoneedles could soon replace traditional biopsies, scientists have found. The patch offers a painless and less invasive alternative for millions of patients worldwide who undergo biopsies each year to detect and monitor diseases like cancer and Alzheimer's. The research is published in Nature Nanotechnology.

Biopsies are among the most common diagnostic procedures worldwide, performed millions of times every year to detect diseases. However, they are invasive, can cause pain and complications, and can deter patients from seeking diagnosis or follow-up tests. Traditional biopsies also remove small pieces of tissue, limiting how often and how comprehensively doctors can analyze diseased organs like the brain.

Now, scientists at King's College London have developed a nanoneedle patch that painlessly collects molecular information from tissues without removing or damaging them. This could allow health care teams to monitor disease in real time and perform multiple, repeatable tests on the same area, something impossible with standard biopsies.

Because the nanoneedles are 1,000 times thinner than a human hair and don't remove tissue, they cause no pain or damage, making the procedure less painful for patients compared to standard biopsies. For many, this could mean earlier diagnosis and more regular monitoring, transforming how diseases are tracked and treated.

Dr. Ciro Chiappini, who led the study, said, "We've been working on nanoneedles for twelve years, but this is our most exciting development yet. It opens a world of possibilities for people with brain cancer, Alzheimer's, and for advancing personalized medicine. It will allow scientists, and eventually clinicians, to study disease in real time like never before."

In the study, the team applied the patch to brain cancer tissue taken from human biopsies and mouse models. The nanoneedles extracted molecular "fingerprints" (including lipids, proteins, and mRNAs) from cells, without removing or harming the tissue.

The tissue imprint is then analyzed using mass spectrometry and artificial intelligence, giving health care teams detailed insights into whether a tumor is present, how it's responding to treatment, and how the disease is progressing at the cellular level.

Dr. Chiappini said, "This approach provides multidimensional molecular information from different types of cells within the same tissue. Traditional biopsies simply can't do that. And because the process doesn't destroy the tissue, we can sample the same tissue multiple times, which was previously impossible."

This technology could be used during brain surgery to help surgeons make faster, more precise decisions. For example, by applying the patch to a suspicious area, results could be obtained within 20 minutes and guide real-time decisions about removing cancerous tissue.

Made using the same manufacturing techniques as computer chips, the nanoneedles could be integrated into common medical devices such as bandages, endoscopes and contact lenses.

Dr. Chiappini added, "This could be the beginning of the end for painful biopsies. Our technology opens up new ways to diagnose and monitor disease safely and painlessly, helping doctors and patients make better, faster decisions."

The breakthrough was made possible by close collaboration across nanoengineering, artificial intelligence, and other disciplines, each bringing essential tools and perspectives that, together, unlocked a new approach to non-invasive diagnostics.

More information: Nanoneedles enable spatiotemporal lipidomics of living tissues, Nature Nanotechnology (2025).

On GitHub: github.com/zaritskylab/Nanoneedle-Lipidomics

Text Recognition with ML Kit for Android: Getting Started


ML Kit is a mobile SDK from Google that uses machine learning to solve problems such as text recognition, text translation, object detection, face/pose detection, and much more!

The APIs can run on-device, enabling you to process real-time use cases without sending data to servers.

ML Kit provides two groups of APIs:

  • Vision APIs: These include barcode scanning, face detection, text recognition, object detection, and pose detection.
  • Natural Language APIs: You use them whenever you need to identify languages, translate text, and generate smart replies in text conversations.

This tutorial will focus on Text Recognition. With this API, you can extract text from images, documents, and camera input in real time.

In this tutorial, you'll learn:

  • What a text recognizer is and how it groups text elements.
  • The ML Kit Text Recognition features.
  • How to recognize and extract text from an image.

Getting Started

Throughout this tutorial, you'll work with Xtractor. This app lets you take a picture and extract the X usernames in it. You might use this app at a conference whenever a speaker shows their contact details and you'd like to look them up later.

Use the Download Materials button at the top or bottom of this tutorial to download the starter project.

Once downloaded, open the starter project in Android Studio Meerkat or newer. Build and run, and you'll see the following screen:

Clicking the plus button lets you choose a picture from your gallery. However, there won't be any text recognition yet.

Selected picture

Before adding text recognition functionality, you need to understand some concepts.

Using a Text Recognizer

A text recognizer can detect and interpret text from various sources, such as images, videos, or scanned documents. This process is known as OCR, which stands for Optical Character Recognition.

Some text recognition use cases might be:

  • Scanning receipts or books into digital text.
  • Translating signs from static images or the camera.
  • Automated license plate recognition.
  • Digitizing handwritten forms.

Here's a breakdown of what a text recognizer typically does:

  • Detection: Finds where the text is located within an image, video, or document.
  • Recognition: Converts the detected characters or handwriting into machine-readable text.
  • Output: Returns the recognized text.

The ML Kit Text Recognizer segments text into blocks, lines, elements, and symbols.

Here's a brief explanation of each one:

  • Block: Shown in red, a set of text lines, e.g. a paragraph or column.
  • Line: Shown in blue, a set of words.
  • Element: Shown in green, a set of alphanumeric characters, i.e. a word.
  • Symbol: A single alphanumeric character.

ML Kit Text Recognition Features

The API has the following features:

  • Recognizes text in various languages, including Chinese, Devanagari, Japanese, Korean, and Latin. These were included in the latest (V2) version. Check the supported languages here.
  • Can differentiate between a character, a word, a set of words, and a paragraph.
  • Identifies the language of the recognized text.
  • Returns bounding boxes, corner points, rotation information, and confidence scores for all detected blocks, lines, elements, and symbols.
  • Recognizes text in real time.

Bundled vs. Unbundled

All ML Kit features make use of Google-trained machine learning models by default.

For text recognition in particular, the models can be installed either:

  • Unbundled: Models are downloaded and managed via Google Play Services.
  • Bundled: Models are statically linked to your app at build time.

Using bundled models means that when the user installs the app, all the models are installed with it and are usable immediately. Whenever the user uninstalls the app, all the models are deleted. To update the models, the developer first has to update them, publish the app, and the user has to update the app.

On the other hand, if you use unbundled models, they're stored in Google Play Services, and the app has to download them before first use. When the user uninstalls the app, the models won't necessarily be deleted; they're only deleted once every app that depends on them has been uninstalled. Whenever a new version of the models is released, it becomes available to the app.

Depending on your use case, you might choose one option or the other.

It's suggested to use the unbundled option if you want a smaller app size and automatic model updates from Google Play Services.

However, you should use the bundled option if you want your users to have full feature functionality right after installing the app.
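
If you go with the unbundled option, the dependency comes from Google Play Services rather than the standalone ML Kit artifact. Here's a minimal sketch of what that might look like in build.gradle; the artifact coordinates follow ML Kit's documented naming, but the version shown is only illustrative, so verify it against the current documentation:

// Unbundled: the Latin text-recognition model is downloaded and managed by
// Google Play Services, keeping the APK smaller. Version is illustrative.
implementation("com.google.android.gms:play-services-mlkit-text-recognition:19.0.0")

With the unbundled option, you can also ask Play Services to fetch the model at install time by declaring the com.google.mlkit.vision.DEPENDENCIES meta-data entry in AndroidManifest.xml; otherwise, the model is downloaded the first time the recognizer is used.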

Adding Text Recognition Capabilities

To use the ML Kit Text Recognizer, open the app's build.gradle file of the starter project and add the following dependencies:

implementation("com.google.mlkit:text-recognition:16.0.1")
implementation("org.jetbrains.kotlinx:kotlinx-coroutines-play-services:1.10.2")

Here, you're using the bundled text-recognition version.

Now, sync your project.

Note: To get the latest version of text-recognition, please check here.
To get the latest version of kotlinx-coroutines-play-services, check here. And, to support other languages, use the corresponding dependency. You can check them here.

Now, replace the code of recognizeUsernames with the following:

val image = InputImage.fromBitmap(bitmap, 0)
val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)
val result = recognizer.process(image).await()

return emptyList()

You first create an InputImage from a bitmap. Then, you get an instance of a TextRecognizer using the default options, which provide Latin language support. Finally, you process the image with the recognizer.
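
If you'd rather not pull in kotlinx-coroutines-play-services, the same call can be driven with the Task API's listeners instead of await(). Here's a minimal sketch under that assumption; the log tag and messages are just illustrative, and in the sample app you'd propagate the result to your UI state instead (you'll also need to import android.util.Log):

recognizer.process(image)
  .addOnSuccessListener { visionText ->
    // Recognition succeeded: visionText.text holds the full recognized text.
    Log.d("Xtractor", "Recognized: ${visionText.text}")
  }
  .addOnFailureListener { exception ->
    // Recognition failed: log the error or surface it to the user.
    Log.e("Xtractor", "Text recognition failed", exception)
  }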

You'll have to import the following:

import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions
import com.kodeco.xtractor.ui.theme.XtractorTheme
import kotlinx.coroutines.tasks.await
Note: To support other languages, pass the corresponding options. You can check them here.
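
For example, here's a minimal sketch of what switching to the Chinese recognizer could look like, assuming you've added the corresponding text-recognition-chinese dependency (the options class below follows ML Kit's documented naming; double-check it against the current docs):

import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.chinese.ChineseTextRecognizerOptions

// Build a recognizer configured for Chinese script instead of the default Latin one.
val chineseRecognizer =
    TextRecognition.getClient(ChineseTextRecognizerOptions.Builder().build())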

You can obtain blocks, lines, and elements like this:

// 1
val text = result.text

for (block in result.textBlocks) {
  // 2
  val blockText = block.text
  val blockCornerPoints = block.cornerPoints
  val blockFrame = block.boundingBox

  for (line in block.lines) {
    // 3
    val lineText = line.text
    val lineCornerPoints = line.cornerPoints
    val lineFrame = line.boundingBox

    for (element in line.elements) {
      // 4
      val elementText = element.text
      val elementCornerPoints = element.cornerPoints
      val elementFrame = element.boundingBox
    }
  }
}

Here's a brief explanation of the code above:

  1. First, you get the full recognized text.
  2. Then, for each block, you get the text, the corner points, and the frame.
  3. For each line in a block, you get the text, the corner points, and the frame.
  4. Finally, for each element in a line, you get the text, the corner points, and the frame.

However, you only need the elements that represent X usernames, so replace the emptyList() with the following code:

return result.textBlocks
  .flatMap { it.lines }
  .flatMap { it.elements }
  .filter { element -> element.text.isXUsername() }
  .mapNotNull { element ->
    element.boundingBox?.let { boundingBox ->
      UsernameBox(element.text, boundingBox)
    }
  }

You flattened the text blocks into lines, then each line into its elements, and filtered for the elements that are X usernames. Finally, you mapped them to UsernameBox, a class that holds the username and its bounding box.

The bounding box is used to draw rectangles over the username.
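
The starter project already ships its own isXUsername() helper, so you don't need to write one. If you're curious what such a check might look like, here's a hypothetical sketch based on X's username rules (an @ followed by 1 to 15 letters, digits, or underscores); it's not the project's actual implementation:

// Hypothetical helper, not the starter project's implementation:
// matches "@" followed by 1-15 word characters (letters, digits, underscore).
private val X_USERNAME_REGEX = Regex("^@\\w{1,15}$")

fun String.isXUsername(): Boolean = X_USERNAME_REGEX.matches(trim())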

Now, run the app again, choose a picture from your gallery, and you'll see the X usernames recognized:

Username recognition

Congratulations! You've just learned how to use Text Recognition.

AI-ready infrastructure | New era of data center design


The AI transformation imperative

Artificial intelligence (AI) is no longer a futuristic concept; it has become a central driver of innovation, operational efficiency, and competitive advantage across industries. But AI adoption isn't without its challenges. The Cisco 2024 AI Readiness Index highlights a growing urgency: 85% of organizations believe they have less than 18 months to implement an AI plan or risk falling behind. Yet only 13% feel fully prepared to capitalize on AI opportunities.

This critical gap between urgency and readiness underscores the importance of having the right infrastructure in place. As AI workloads grow in complexity and scale, traditional data centers and networks are under immense pressure to keep up. Organizations need fast, reliable, and scalable networks to unlock the true potential of AI.

The growing demands of AI workloads

AI workloads are highly dynamic and data intensive. They require seamless communication between GPUs, CPUs, and storage systems, often generating massive volumes of "east-west" (GPU-to-GPU and storage) traffic within data centers. Traditional networks, designed for less demanding applications, struggle to deliver the throughput, low latency, and reliability that AI workloads demand.

Without modernized networking infrastructure, organizations risk underutilizing their expensive AI investments and falling short of their business goals.

Key challenges that AI networks must address include:

  • Throughput: Ensuring high-speed data transfer while handling intensive AI computations
  • Latency: Minimizing delays in real-time processing
  • Scalability: Supporting exponential growth in AI workloads over time
  • Efficiency: Reducing energy consumption while optimizing costs

Cisco's dual approach to AI networking

At Cisco, we recognize that effective AI networking requires both tailored infrastructure for demanding AI workloads and AI-driven tools to simplify operations. This dual strategy not only addresses current challenges but also prepares organizations for future needs.

Key benefits of Cisco AI networking solutions include:

  • Accelerated AI deployment: Faster time-to-market for AI initiatives
  • Optimized resource utilization: Efficient use of hardware and energy resources
  • Simplified operations: AI-driven tools reduce complexity as networks scale
  • Future-ready scalability: Infrastructure designed to meet evolving AI demands

"In an era where AI workloads and real-time data processing define the competitive edge, high-performance data center switching is no longer a luxury; it's a necessity. Powered by Cisco Nexus 9000 Series switching, we're able to move massive volumes of data with the unparalleled speed and low latency that's foundational to unlocking the full potential of Groq's innovative fast inferencing solutions, ensuring organizations stay ahead in a data-driven world."

— Cameron Ferdinands, Head of Network Operations, Groq

A closer look at Cisco AI networking innovations

AI workloads demand unprecedented throughput, low latency, and adaptability from your networking infrastructure. At Cisco, we deliver an end-to-end data center fabric for enterprises and neoclouds designed to meet these challenges, combining cutting-edge hardware, silicon, power-efficient optics, intelligent traffic management, and automation to unlock the full potential of your AI initiatives.

We deliver maximum performance for AI networking with an infrastructure that provides:

  • Blazing-fast systems: Purpose-built 400G and 800G Cisco Nexus 9000 Series Switches managed by Nexus Dashboard accelerate data-intensive AI applications
  • Sustainable scaling: Power-efficient Cisco Silicon One and Cisco Optics support massive scale, minimizing energy consumption and optimizing resource utilization
  • East-west optimization: Infrastructure tuned for distributed AI architectures maximizes communication efficiency and simplifies operations
  • Future-proof design: UEC-ready platforms align with emerging Ultra Ethernet Consortium standards, ensuring your network is prepared for future AI demands

Meet Cisco Intelligent Packet Flow

Cisco Intelligent Packet Flow ensures your AI traffic moves seamlessly, even under the most demanding workloads, reducing job completion time (JCT). It achieves this through:

  • Optimal path utilization and reduced tail latency:
    Cisco Intelligent Packet Flow delivers fine-grained load balancing with Dynamic Load Balancing (DLB), per-packet load balancing with multi-path packet spray, flow pinning with deterministic routing, and Weighted Cost Multi-Path (WCMP) for weighted routing integrated with DLB, as well as policy-based load balancing.
  • Congestion-aware traffic management:
    Cisco provides real-time visibility into traffic behavior with end-to-end telemetry and hardware-accelerated features like microburst detection, congestion signaling, tail timestamping, and In-band Network Telemetry (INT).
  • Autonomous recovery for seamless performance:
    Cisco Intelligent Packet Flow ensures fault-aware traffic recovery by reducing AI network hotspots and providing fast convergence, rerouting traffic in case of sudden failures to avoid head-of-line blocking.

Gain unprecedented control with AIOps for Infrastructure

Optimizing AI workload performance and maximizing infrastructure ROI hinge on full lifecycle automation of AI fabrics, along with robust visibility and streamlined management. Cisco delivers industry-leading monitoring capabilities with AI Job Monitoring to empower your teams through:

  • Comprehensive visibility: Gain end-to-end insights across the full stack, from AI jobs to the underlying infrastructure
  • Topology-aware correlation: Correlate performance data across AI jobs, network, and GPUs with intuitive visualizations
  • Real-time insights and proactive detection: Access critical real-time metrics, enabling proactive anomaly detection to address issues before they impact critical AI workloads
  • Intelligent recommendations: Benefit from intelligent recommendations based on learned patterns and best practices

By unifying operations and automation platforms with unparalleled visibility through Cisco Nexus Dashboard, we enable faster insights and streamlined operations across network, infrastructure, and AI development teams, ultimately maximizing your infrastructure ROI.

At Cisco Live US 2025, Cisco and NVIDIA showcased the integration of Cisco G200 switches with NVIDIA NICs, featuring NVIDIA Spectrum-X Ethernet powered by Cisco Silicon One, supporting NX-OS, Nexus Hyperfabric AI, and SONiC. This collaboration promises customers enhanced performance, scalability, and flexibility for AI and modern networking workloads.

Ready to transform your AI networking infrastructure?

Cisco AI networking solutions are built to do more than keep up; they're designed to empower your business. By combining intelligent traffic management with scalable, adaptive infrastructure, we help you deliver the performance, efficiency, and reliability that modern AI workloads demand.

In today's fast-paced AI landscape, agility and alignment with business goals are essential. With Cisco, your network is a strategic asset, ready to support innovation and growth, today and in the future. Let us help you build a foundation for AI-driven success.

 

Visit our resource center or contact your Cisco representative to learn more about:
