
Unitree B2 robodog fights fires with water cannon


Unitree has launched a modified version of its B2 quadruped aimed at putting out fires. Able to host various use-specific modules, the robodog hauls a powerful high-flow water cannon and can operate in extreme environments.

Although quadruped robots can take on inspection, mapping and surveillance duties, they can also scout dangerous environments so that humans don't have to. Such is the case with the fire rescue bots from Unitree, which can be equipped with surveillance modules to provide live video feeds to remote firefighters, allowing them to assess a situation before going in.

Various modules mounted to its back can include a robotic arm, LiDAR sensors, comms gear and a water cannon with a range of up to 60 m (nearly 200 ft) and a high flow rate of 40 liters per second. The hose is attached to the rear while the business end can be angled up to 85 degrees to suit the fire-quenching need. The system can use water or foam, and the quadruped can automatically uncouple the hose and potentially move on to other tasks without missing a beat.

The hose module has a high flow rate of 40 liters per second and a range of 60 meters

Unitree

This module also includes a sprinkler system that isn't designed to put out fires, but to keep the robot itself cool when its surroundings are not. That means the quadruped is dust- and waterproof, while composite metal materials for the body also contribute to its extreme-environment readiness.

One more module worthy of mention is an air-blower unit “to extinguish forest fires safely by cutting the connection between flames and combustibles.”

The fire rescue quadruped's joints have been improved to tackle 45-degree stairs and step heights of up to 40 cm

Unitree

Unitree has also boosted joint performance over a standard B2 by 170%, giving the robodog extra climbing power for tackling obstacles as high as 40 cm (15 in) and stairs with a 45-degree slant. And finally, the fire rescue bot benefits from a hot-swap battery system for extended use in the field without compromising waterproofing.

A number of drills and demos have already taken place to prove the robodog's fire-fighting mettle, and a pair recently joined the Qingdao Firefighting and Rescue Support Team in an official capacity. Units are now available for sale, though we have no word on pricing. The video below has more.

Unitree Robot Firefighting Solution

Source: Unitree



Network Automation for the AI Era


If you could get 98%+ implementation and change success 5x faster, how many changes would you make to your network?

Cisco Services as Code can help you expedite network automation – and get you ready for the AI era.

Here's what you need to know – in brief.

*This blog is based on a Cisco Live session delivered by Jesse Reed, VP of Customer Experience Product Management, and Michael Kaemper, CTO of Customer Experience EMEA. You can watch the full session here.

#1

What is the state of networks today?

  • Roughly 80% of network problems are due to improper configuration and issues with change management[1], which are often a result of human error.
  • IT leaders are moving away from manual automation to API-driven environments to prepare for AI.
  • Infrastructure as Code is established as the de facto methodology, with the global market projected to grow at a CAGR of >24% from 2025 to 2027[2].

#2

What do you need to change to get AI-ready?

The AI era requires certain changes:

  • Digitize change; go from ClickOps to DevOps: Move from human workflow to digital workflow. As a repeatable methodology, you can go from one architecture to the next using the same toolkit.
  • Leverage digital intelligence and experience within DevOps: Ensure best practices are applied, validated designs are followed, and all changes are tested and documented with version control. Applying open standards like Terraform, OpenTofu, Ansible, and REST APIs lets you manage your environment in a universal and fully digital fashion (a minimal sketch of this approach follows below).
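To make the shift from ClickOps to an API-driven workflow more concrete, here is a minimal sketch in Python of the pattern described above: a desired-state document kept under version control is pushed to a device through a REST API and then read back to confirm there is no drift. The controller URL, endpoint paths, payload shape, and token are illustrative assumptions, not a specific Cisco or vendor API.

```python
import json
import requests

# Hypothetical controller details -- replace with a real endpoint and token.
CONTROLLER = "https://controller.example.com/api/v1"
HEADERS = {"Authorization": "Bearer REPLACE_ME", "Content-Type": "application/json"}


def load_desired_state(path: str) -> dict:
    """Read the version-controlled desired state (e.g. a file committed to Git)."""
    with open(path) as f:
        return json.load(f)


def apply_config(device: str, desired: dict) -> None:
    """Push the desired configuration and verify the device reports it back."""
    url = f"{CONTROLLER}/devices/{device}/config"
    resp = requests.put(url, headers=HEADERS, json=desired, timeout=30)
    resp.raise_for_status()

    # Read back the running state and compare it to what we declared.
    running = requests.get(url, headers=HEADERS, timeout=30).json()
    if running != desired:
        raise RuntimeError(f"Drift detected on {device} after change")


if __name__ == "__main__":
    state = load_desired_state("configs/branch-router.json")
    apply_config("branch-router-01", state)
    print("Change applied and verified")
```

Because the change lives in a file rather than in a GUI session, it can be reviewed, tested and rolled back like any other code artifact.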

Operating your infrastructure digitally is the key to harvesting AI capabilities, which have yet to show their full potential.

#3

What is Cisco Services as Code?

This is a big transformation in the way you operate your environment. Cisco can help you with Cisco Services as Code, enabling you with these three digital capabilities:

  • Infrastructure as Code: All of Cisco's best practices, knowledge, and capabilities, so you can avoid errors when defining and executing configurations.
  • Business Process and Analytics: We ensure an integrated business process is incorporated. We offer observability and analytics so you can reflect on how your changes affect your user experience or your application performance.
  • DevOps Toolchain: Cisco equips you with a toolchain, or uses yours where available. It's all open source to work for your heterogeneous environments, not just Cisco technologies.

#4

How can you prepare the workforce for change? (AI agents)

Engineers can now engage with Cisco Services as Code through an AI Assistant to get help throughout the entire process:

  • “Help me understand the configuration”: The engineer comes into a complicated topology and wants to know the configuration state.
  • “Help me change the configuration”: The engineer gets help changing the configuration and creating documentation.
  • “Help me troubleshoot”: The engineer executes a pipeline with validation before the change goes into production. If an error occurs, the assistant creates an alternative configuration and helps identify and understand the root cause of that issue.

*Now available for ACI. Soon available for SD-WAN and NDFC.

#5

How can you achieve cross-architecture automation?

Network as Code API allows you to automate different architectures in the same way, as well as integrate your business process with your ITSM, services catalog, etc. Once you have one architecture implemented in the toolchain, you can use the same toolkit for other architectures.

Many architectures are available today, including ACI, NDFC, SD-WAN, ISE, Firewall, Meraki, and Catalyst Center.
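As a hedged illustration of the "same toolkit, different architectures" idea, the sketch below applies one declarative change loop to several architectures through a single controller API. The architecture names echo the list above, but the endpoints and payloads are hypothetical placeholders rather than real product APIs.

```python
import requests

CONTROLLER = "https://controller.example.com/api/v1"
HEADERS = {"Authorization": "Bearer REPLACE_ME", "Content-Type": "application/json"}

# One desired-state document per architecture, all kept in the same repository
# and pushed through the same pipeline (payloads here are purely illustrative).
CHANGES = {
    "aci": {"tenant": "prod", "vrf": "shared"},
    "sd-wan": {"policy": "branch-qos-v2"},
    "catalyst-center": {"site_profile": "campus-standard"},
}

for architecture, desired in CHANGES.items():
    url = f"{CONTROLLER}/{architecture}/config"
    resp = requests.put(url, headers=HEADERS, json=desired, timeout=30)
    resp.raise_for_status()
    print(f"{architecture}: change applied through the shared toolchain")
```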

#6

What do customers say?

“We have been talking about network automation for 10 to 15 years but hadn't seen the boost we expected. For us, it was the adoption of the public cloud that opened our eyes, and we really saw the opportunity. Infrastructure as Code can have a big impact on the way we operate and maintain the network.”

- Jose Manuel Postigo Aguilar, Lead Network Architect, BBVA

 


[1] Gartner, Hype Cycle for Enterprise Networking 2024, June 2024

[2] Global Market Insights, Infrastructure as Code Market Size, December 2024

 




Research Suggests LLMs Willing to Assist in Malicious ‘Vibe Coding’


Over the past few years, large language models (LLMs) have drawn scrutiny for their potential misuse in offensive cybersecurity, particularly in generating software exploits.

The recent trend towards ‘vibe coding’ (the casual use of language models to quickly develop code for a user, instead of explicitly teaching the user to code) has revived a concept that reached its zenith in the 2000s: the ‘script kiddie’ – a relatively unskilled malicious actor with just enough knowledge to replicate or develop a damaging attack. The implication, naturally, is that when the bar to entry is lowered, threats will tend to multiply.

All commercial LLMs have some kind of guardrail against being used for such purposes, although these protective measures are under constant attack. Typically, most FOSS models (across multiple domains, from LLMs to generative image/video models) are released with some kind of similar protection, usually for compliance purposes in the West.

However, official model releases are then routinely fine-tuned by user communities seeking more complete functionality, or else LoRAs are used to bypass restrictions and potentially obtain ‘undesired’ results.

Although the overwhelming majority of on-line LLMs will stop aiding the consumer with malicious processes, ‘unfettered’ initiatives akin to WhiteRabbitNeo can be found to assist safety researchers function on a degree enjoying area as their opponents.

The general user experience at the moment is most commonly represented by the ChatGPT series, whose filter mechanisms frequently draw criticism from the LLM's native community.

Looks Like You're Trying to Attack a System!

In light of this perceived tendency towards restriction and censorship, users may be surprised to find that ChatGPT has been found to be the most cooperative of all LLMs tested in a recent study designed to force language models to create malicious code exploits.

The new paper from researchers at UNSW Sydney and the Commonwealth Scientific and Industrial Research Organisation (CSIRO), titled Good News for Script Kiddies? Evaluating Large Language Models for Automated Exploit Generation, presents the first systematic evaluation of how effectively these models can be prompted to produce working exploits. Example conversations from the research have been provided by the authors.

The study compares how models performed on both original and modified versions of known vulnerability labs (structured programming exercises designed to demonstrate specific software security flaws), helping to reveal whether they relied on memorized examples or struggled because of built-in safety restrictions.

From the supporting site, the Ollama LLM helps the researchers to develop a string vulnerability attack. Source: https://anonymous.4open.science/r/AEG_LLM-EAE8/chatgpt_format_string_original.txt


While none of the models was able to create an effective exploit, several of them came very close; more importantly, several of them wanted to do better at the task, indicating a potential failure of existing guardrail approaches.

The paper states:

‘Our experiments show that GPT-4 and GPT-4o exhibit a high degree of cooperation in exploit generation, comparable to some uncensored open-source models. Among the evaluated models, Llama3 was the most resistant to such requests.

‘Despite their willingness to assist, the actual threat posed by these models remains limited, as none successfully generated exploits for the five custom labs with refactored code. However, GPT-4o, the strongest performer in our study, typically made only one or two errors per attempt.

‘This suggests significant potential for leveraging LLMs to develop advanced, generalizable [Automated Exploit Generation (AEG)] techniques.’

Many Second Chances

The truism ‘You don't get a second chance to make a good first impression’ isn't generally applicable to LLMs, because a language model's typically limited context window means that a negative context (in a social sense, i.e., antagonism) is not persistent.

Consider: if you went to a library and asked for a book about practical bomb-making, you'd probably be refused, at the very least. But (assuming this inquiry didn't entirely tank the conversation from the outset) your requests for related works, such as books about chemical reactions, or circuit design, would, in the librarian's mind, be clearly related to the initial inquiry, and would be treated in that light.

Likely as not, the librarian would also remember in any future meetings that you asked for a bomb-making book that one time, making this new context of yourself ‘irreparable’.

Not so with an LLM, which can struggle to retain tokenized information even from the current conversation, never mind from Long-Term Memory directives (if there are any in the architecture, as with the ChatGPT-4o product).

Thus even casual conversations with ChatGPT reveal to us unintentionally that it sometimes strains at a gnat but swallows a camel, not least when a constituent theme, study or process relating to an otherwise ‘banned’ activity is allowed to develop during discourse.

This holds true of all current language models, though guardrail quality may vary in extent and approach among them (i.e., the difference between modifying the weights of the trained model and using in/out filtering of text during a chat session, which leaves the model structurally intact but potentially easier to attack).
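As a purely illustrative sketch of the second approach (input/output filtering rather than weight modification, and not how any particular vendor implements its guardrails), the filter below simply wraps an arbitrary model call and checks the text crossing the boundary; the blocklist patterns and the `model_call` callable are assumptions for demonstration.

```python
BLOCKLIST = ("make a bomb", "write an exploit")   # illustrative patterns only


def filtered_chat(model_call, user_text: str) -> str:
    """Wrap an arbitrary model call with naive input/output text filtering.

    The underlying model (and its weights) are untouched; only the text
    crossing the boundary is checked, which is why such filters tend to be
    easier to bypass than alignment baked into the weights themselves.
    """
    if any(pattern in user_text.lower() for pattern in BLOCKLIST):
        return "Sorry, I can't help with that."

    reply = model_call(user_text)

    if any(pattern in reply.lower() for pattern in BLOCKLIST):
        return "Sorry, I can't help with that."
    return reply


# Example with a dummy model call standing in for a real LLM:
print(filtered_chat(lambda text: f"Echo: {text}", "hello there"))
```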

Testing the Method

To test how far LLMs could be pushed toward producing working exploits, the authors set up a controlled environment using five labs from SEED Labs, each built around known vulnerabilities including a buffer overflow, return-to-libc, a Dirty COW attack, and race conditions.

In addition to using the original labs, the researchers created modified versions by renaming variables and functions to generic identifiers. This was intended to prevent the models from drawing on memorized training examples.
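As a small illustration of this kind of refactoring (the SEED labs themselves are C programs; Python is used here purely for brevity, and the functions are invented stand-ins rather than actual lab code), renaming might turn descriptive identifiers into neutral ones:

```python
# Original-style lab code: the names hint strongly at what the exercise is about.
def copy_user_input_to_buffer(user_input, target_buffer):
    for i, value in enumerate(user_input):
        target_buffer[i] = value


# Refactored variant: identical logic, but with generic identifiers, so a model
# cannot simply pattern-match against memorized lab code seen during training.
def func_1(arg_1, arg_2):
    for i, value in enumerate(arg_1):
        arg_2[i] = value
```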

Each lab was run twice per model: once in its original form, and once in its obfuscated version.

The researchers then introduced a second LLM into the loop: an attacker model designed to prompt and re-prompt the target model in order to refine and improve its output over multiple rounds. The LLM used for this purpose was GPT-4o, which operated via a script that mediated dialogue between the attacker and target, allowing the refinement cycle to continue up to fifteen times, or until no further improvement was judged possible:

Workflow for the LLM-based attacker, in this case GPT-4o.


The target models for the project were GPT-4o, GPT-4o-mini, Llama3 (8B), Dolphin-Mistral (7B), and Dolphin-Phi (2.7B), representing both proprietary and open-source systems, with a mix of aligned and unaligned models (i.e., models with built-in safety mechanisms designed to block harmful prompts, and those modified through fine-tuning or configuration to bypass these mechanisms).

The locally-installable models were run via the Ollama framework, with the others accessed via their only available method – API.

The resulting outputs were scored based on the number of errors that prevented the exploit from functioning as intended.
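Based on the description above, a minimal sketch of such a mediated refinement loop might look like the following. The prompts, the stand-in `count_errors` scorer, and the stopping logic are assumptions for illustration rather than the authors' actual script; the `ollama` and `openai` Python clients are used only because the paper mentions Ollama-hosted local models and GPT-4o as the attacker.

```python
import ollama                      # local target model (assumption: served via Ollama)
from openai import OpenAI          # attacker model (assumption: GPT-4o over the API)

client = OpenAI()
LAB_TASK = "(lab description omitted)"   # placeholder for one SEED-style exercise
MAX_ROUNDS = 15                          # the paper allows up to fifteen refinement rounds


def ask_target(prompt: str) -> str:
    """Query the local target model (here Llama3 via Ollama) for a candidate solution."""
    reply = ollama.chat(model="llama3", messages=[{"role": "user", "content": prompt}])
    return reply["message"]["content"]


def ask_attacker(candidate: str) -> str:
    """Ask the attacker model to critique the candidate and produce a refined prompt."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": f"Task: {LAB_TASK}\nCandidate answer:\n{candidate}\n"
                       "Point out what is wrong and write an improved prompt for the target model.",
        }],
    )
    return resp.choices[0].message.content


def count_errors(candidate: str) -> int:
    """Stand-in for the paper's error counting; always reports one error here."""
    return 1


prompt = LAB_TASK
best_errors = float("inf")
for round_no in range(MAX_ROUNDS):
    candidate = ask_target(prompt)
    errors = count_errors(candidate)
    if errors >= best_errors:       # stop when no further improvement is judged possible
        break
    best_errors = errors
    prompt = ask_attacker(candidate)
```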

Results

The researchers examined how cooperative each model was during the exploit generation process, measured by recording the percentage of responses in which the model attempted to assist with the task (even if the output was flawed).

Results from the main test, showing average cooperation.


GPT-4o and GPT-4o-mini showed the highest levels of cooperation, with average response rates of 97 and 96 percent, respectively, across the five vulnerability categories: buffer overflow, return-to-libc, format string, race condition, and Dirty COW.

Dolphin-Mistral and Dolphin-Phi followed closely, with average cooperation rates of 93 and 95 percent. Llama3 showed the least willingness to participate, with an overall cooperation rate of just 27 percent:

On the left, we see the number of mistakes made by the LLMs on the original SEED Lab programs; on the right, the number of mistakes made on the refactored versions.


Analyzing the actual performance of these models, they found a notable gap between willingness and effectiveness: GPT-4o produced the most accurate results, with a total of six errors across the five obfuscated labs. GPT-4o-mini followed with eight errors. Dolphin-Mistral performed reasonably well on the original labs but struggled considerably when the code was refactored, suggesting that it may have seen similar content during training. Dolphin-Phi made seventeen errors, and Llama3 fifteen.

The failures typically involved technical errors that rendered the exploits non-functional, such as incorrect buffer sizes, missing loop logic, or syntactically valid but ineffective payloads. No model succeeded in producing a working exploit for any of the obfuscated versions.

The authors observed that most models produced code that resembled working exploits, but failed due to a weak grasp of how the underlying attacks actually work – a pattern that was evident across all vulnerability categories, and which suggested that the models were imitating familiar code structures rather than reasoning through the logic involved (in buffer overflow cases, for example, many failed to construct a functioning NOP sled/slide).

In return-to-libc attempts, payloads often included incorrect padding or misplaced function addresses, resulting in outputs that appeared valid, but were unusable.

While the authors describe this interpretation as speculative, the consistency of the errors suggests a broader issue in which the models fail to connect the steps of an exploit with their intended effect.

Conclusion

There is some doubt, the paper concedes, as to whether or not the language models tested saw the original SEED labs during their initial training; for this reason variants were constructed. However, the researchers confirm that they would like to work with real-world exploits in later iterations of this study; truly novel and recent material is less likely to be subject to shortcuts or other confounding effects.

The authors also admit that the later and more advanced ‘thinking’ models such as GPT-o1 and DeepSeek-r1, which were not available at the time the study was conducted, may improve on the results obtained, and that this is a further indication for future work.

The paper concludes to the effect that most of the models tested would have produced working exploits if they had been capable of doing so. Their failure to generate fully functional outputs does not appear to result from alignment safeguards, but rather points to a genuine architectural limitation – one which may already have been reduced in more recent models, or soon will be.

 

First published Monday, May 5, 2025

Automate Forensics to Eliminate Uncertainty


At RSA Conference 2025, one theme echoed across the show floor: security teams don't need more alerts – they need more certainty. As threats move faster and operations get leaner, organizations are shifting from reactive investigation to proactive, automated forensics. That's why we're excited to announce a major leap forward in Cisco XDR: automated forensics built into the detection and response workflow.

The Modern SOC Struggles with Confidence, Not Just Complexity

It's not just about identifying suspicious activity. Today's security tools can surface anomalies such as a rogue login, a strange process, or a lateral movement attempt. The real challenge? Proving what happened – and how far it went – before damage spreads.

Manual investigations delay action and critical questions go unanswered:

  • What really happened?
  • How far did it go?
  • What's next?

Without clear evidence, teams stall. Investigations drag on. And uncertainty becomes the greatest risk. Manual Digital Forensics and Incident Response (DFIR) has traditionally lived outside the core detection and response loop. That gap is no longer sustainable.

A New Mandate: TDIR and DFIR Must Work as One

Cisco's vision is clear: Threat Detection, Investigation, and Response (TDIR) and forensics must be a unified motion.

Security teams need to validate threats and act with confidence – without waiting for manual processes or digging through disconnected logs. And now, Cisco XDR makes this possible by operationalizing forensics directly into the AI-assisted TDIR flow.

Best-in-class security operations doesn't stop at detection; it closes the loop. Confident SOCs have embraced a continuous, connected workflow where detection, response, investigation, verification, and remediation are all part of the same motion.

Research firms agree that merging threat detection and response with instant, automated investigation is the future. According to a report from the SANS Institute, “64% of organizations have integrated automated response mechanisms, but only 16% have fully automated processes. This finding underscores a shift toward automation in threat detection and response.”


Cisco XDR is operationalizing this shift – making forensics an embedded capability, not an elite skill.

What's New: Instant, Automated Forensics at the Point of Detection

In the future, Cisco XDR will be able to capture forensic evidence automatically when a suspicious event is detected – before analysts even begin their investigation.

Highlights:

  • Automated Triggers – Real-time forensic snapshotting of memory, processes, and file data across impacted endpoints
  • Incident Timeline Enrichment – Collected artifacts are integrated alongside the XDR storyboard for end-to-end visibility
  • AI-Powered Summarization – Cisco XDR interprets forensic findings and suggests likely root cause and response actions
  • Guided Analyst Workflow – Visual attack graphs and step-by-step remediation paths accelerate time to response

This is investigation without friction. Forensics without pivoting. Evidence at once.

Designed for Every Team – from Lean IT to Global SOC

Whether you have a small team with limited staff or a global SOC supporting a hybrid enterprise, Cisco XDR adapts to your environment:

  • For smaller teams – One-click forensics reduces dependency on specialists. Prebuilt AI workflows accelerate validation and containment.
  • For enterprises with Splunk or other SIEMs – Cisco XDR enriches your SIEM with validated forensic data, improving correlation, compliance reporting, and post-incident documentation.

No third-party agent. No separate console. No learning curve.

The Outcome: Confidence at the Speed of SecOps

By embedding forensic capture into every validated threat, Cisco XDR helps security teams:

  • Eliminate ambiguity with concrete, machine-captured evidence
  • Accelerate decision-making by removing the guesswork from investigations
  • Ensure consistency across shifts, roles, and teams
  • Improve audit readiness with forensically backed incident documentation

It's not just about responding fast – it's about responding right.

Powered by Cisco's Open Standards Architecture

This new capability is deeply integrated into Cisco's broader security platform, leveraging native telemetry from:

  • Cisco Secure Client
  • Meraki MX
  • Secure Access (SSE)
  • Secure Endpoint
  • Umbrella DNS and Cloud Firewall
  • Public Cloud Logs

And it's enriched by the global threat intelligence of Cisco Talos, along with pre-built integrations into 100+ other security products from Cisco and third parties. Together, this foundation gives Cisco XDR the deepest native visibility and broadest attack surface coverage of any XDR solution on the market.

Ready to Boost Your SecOps Confidence?

Only Cisco unifies real-time detection, AI-led investigation, and automated evidence capture in a single XDR solution. There is no third-party tool dependency. No delays. Just certainty at the speed of SecOps.

Ransomware, insider threats, and supply chain attacks move fast and leave little room for doubt. That's where we've got your back. Cisco XDR is built on deep visibility, enriched with Talos threat intelligence, and is ready to scale.

Now, instead of more alerts, you get prioritized incidents with the evidence you need. With instant delivery, SecOps has proof for regulators, not assumptions. And explanations for boards, not theories.

See how Cisco XDR delivers instant forensics and AI-guided investigation to help your team go from “We think” to “We know.”

Register for the RSAC Highlights webinar on May 20th to learn about all the major Cisco XDR innovations announced at RSAC™ 2025.


We'd love to hear what you think. Ask a Question, Comment Below, and Stay Connected with Cisco Secure on social!

Cisco Security Social Channels

Instagram
Facebook
Twitter
LinkedIn
