Home Blog

MXene infused printed nanogenerator advances ecofriendly wearable power techniques


MXene infused printed nanogenerator advances ecofriendly wearable power techniques

by Clarence Oxford

Los Angeles CA (SPX) Jun 17, 2025






Researchers at Boise State College have launched a totally printed, environmentally pleasant triboelectric nanogenerator (TENG) that harvests biomechanical and environmental power whereas additionally appearing as a movement sensor. Constructed from a novel composite of Poly (vinyl butyral-co-vinyl alcohol-co-vinyl acetate) (PVBVA) and MXene (Ti3C2Tx) nanosheets, this system gives a sustainable and environment friendly various to traditional TENGs, which frequently rely on fluorinated polymers and complicated manufacturing processes.



TENGs, which convert mechanical power into electrical energy utilizing the triboelectric impact, have been initially developed by Prof. Zhong Lin Wang at Georgia Tech. These techniques generate power from movement and speak to between supplies, making them well-suited for wearable tech, IoT sensors, and self-powered electronics. The Boise State venture, led by Ph.D. candidate Ajay Pratap and supervised by Prof. David Estrada from the Micron College of Supplies Science and Engineering, demonstrates how additive manufacturing permits versatile, skin-compatible, high-performance units for real-world power and sensing functions.



The researchers formulated a printable PVBVA ink containing 5.5 mg/mL of MXene-an rising class of atomically skinny supplies. Their prototype achieved an open-circuit voltage of 252 V, a short-circuit present of two.8 uA, and a peak energy density of 760 mW/m2. These outcomes stem from the composite’s excessive dielectric fixed and superior cost switch capabilities, pushed by robust interfacial polarization and synergistic interactions between MXene and the polymer. The system maintained steady efficiency after greater than 10,000 mechanical flexing cycles.



“This analysis underscores the promise of mixing sustainable supplies with superior printing methods,” mentioned Ajay Pratap. “By eliminating dangerous solvents and incorporating MXene into an eco-friendly polymer matrix, we’ve created a scalable power harvesting system that isn’t solely environment friendly but in addition environmentally acutely aware.”



The crew additionally constructed a totally printed TENG prototype utilizing ethanol-based inks and silver electrodes. This model successfully detected a variety of human actions, together with strolling, knee bending, and leaping. It additionally harvested rainwater power and efficiently powered units resembling LEDs and stopwatches, showcasing its software breadth.



Prof. Estrada famous, “Ajay’s work highlights how next-generation power harvesting techniques can harness biomechanical movement to generate energy in actual time. His revolutionary strategy utilizing sustainable supplies and additive manufacturing paves the best way for self-powered wearable units that convert on a regular basis human exercise into helpful power.”



The analysis was backed by NASA EPSCoR, the U.S. Division of Vitality, and collaborators resembling NASA Ames, Idaho Nationwide Lab, and Drexel College, with enter from consultants throughout supplies science, mechanical engineering, and nanoelectronics.



Analysis Report:Direct writing of PVBVA/Ti3C2 Tx (MXene) triboelectric nanogenerators for power harvesting and sensing functions


Associated Hyperlinks

Boise State College Faculty of Engineering

Powering The World within the twenty first Century at Vitality-Day by day.com



A New Case of Cell Espionage


Final week, a number of investigations have confirmed that three journalists primarily based in Europe have been focused utilizing Graphite, a robust spy ware instrument developed by Israeli firm Paragon Options. In accordance with findings reported by Citizen Lab, the assaults concerned a zero-click exploit on iOS, permitting full compromise of the gadgets with out person interplay. This new case sheds mild on the rising sophistication of cellular spy ware and the rising danger it poses far past the journalism world.

Whereas journalists and human rights defenders are sometimes early victims of such applied sciences, the implications are broader: any group dealing with delicate knowledge, whether or not political, authorized, medical or company is doubtlessly susceptible. 

 

A New Era of Silent Assaults 

Graphite belongs to a brand new breed of spy ware able to working with out detection. As soon as deployed, it could possibly entry messages, name logs, pictures, contacts, microphone and digital camera feeds, GPS location, and app knowledge remodeling a telephone right into a surveillance gadget. And since Graphite can entry encrypted conversations, any particular person speaking with a focused gadget might also have their messages intercepted, making them oblique victims of the assault. 

What makes it notably harmful is the zero-click mechanism it leverages. Victims don’t must click on a malicious hyperlink or set up a rogue app. Within the case uncovered, the an infection seems to have taken place by means of Apple’s native messaging system, by exploiting an unknown vulnerability that has since been patched. 

This aligns with broader business observations: cellular platforms, as soon as thought-about comparatively secure, at the moment are being actively focused by way of OS-level vulnerabilities and trusted system apps. 

 

A Recurring Menace within the Cell Panorama 

Cell gadgets focus an unprecedented quantity of non-public {and professional} data. They’re continuously related, not often turned off, and more and more used as a major work instrument. But, they continue to be the least protected a part of most company infrastructures. 

Graphite shouldn’t be the primary spy ware to abuse that belief. Comparable instruments like Pegasus, Predator or Reign have proven how superior surveillance kits can quietly infiltrate smartphones. 

The chance shouldn’t be restricted to particular targets. So long as these instruments exist and stay obtainable on grey or black markets, any high-value profile could change into a goal, together with company leaders, compliance officers, healthcare professionals, and elected officers. 

 

Addressing Cell-Particular Threats 

This newest case is a reminder that conventional safety measures don’t suffice relating to cellular threats. Cell Machine Administration (MDM) options are important for imposing insurance policies, managing fleet configuration, and remotely wiping compromised gadgets however they don’t analyze threats, detect malicious conduct, or block ongoing assaults. 

Equally, Endpoint Detection and Response (EDR) platforms, whereas extremely efficient on workstations and servers, supply solely restricted visibility and protection relating to cellular working methods. They usually deal with processes and endpoints in desktop environments, not on mobile-specific dangers. 

To counter superior spy ware, organizations should undertake devoted Cell Menace Protection options that monitor app behaviors and assess danger in actual time with out ready for a breach to happen. 

As cellular threats evolve, the one viable technique is a layered protection that aligns administration, detection, and real-time response. 

Cisco capitalizes on Isovalent purchase, unveils new load balancer



The shopper deploys the Isovalent Load Balancer management airplane through automation and configures the specified variety of digital load-balancer home equipment, Graf stated. “The management airplane routinely deploys digital load-balancing home equipment through the virtualization or Kubernetes platform. The load-balancing layer is self-healing and helps auto-scaling, which implies that I can exchange unhealthy cases and scale out as wanted. The load balancer helps highly effective L3-L7 load balancing with enterprise capabilities,” he stated.

Relying on the infrastructure the load balancer is deployed into, the operator will deploy the load balancer utilizing acquainted deployment strategies. In a knowledge heart, this can be carried out utilizing a regular virtualization automation set up corresponding to Terraform or Ansible. Within the public cloud, the load balancer is deployed as a public cloud service. In Kubernetes and OpenShift, the load balancer is deployed as a Kubernetes Deployment/Operator, Graf stated. 

“Sooner or later, the Isovalent Load Balancer may even be capable to run on prime of Cisco Nexus good switches,” Graf stated. “Because of this the Isovalent Load Balancer can run in any setting, from information heart, public cloud, to Kubernetes whereas offering a constant load-balancing layer with a frictionless cloud-native developer expertise.”

Cisco has introduced a number of good switches over the previous couple of months on the seller’s 4.8T capability Silicon One chip. However the N9300, the place Isovalent would run, features a built-in programmable information processing unit (DPU) from AMD to dump complicated information processing work and release the switches for AI and huge workload processing.

For patrons, the Isovalent Load Balancer gives constant load balancing throughout infrastructure whereas being aligned with Kubernetes as the longer term for infrastructure. “A single load-balancing answer that may run within the information heart, in public cloud, and trendy Kubernetes environments. This removes operational complexity, lowers price, whereas modernizing the load-balancing infrastructure in preparation for cloud native and AI,” Graf stated.

As well as, it’s aligned with trendy utility improvement ideas. “It removes ‘ticket ops’ type load-balancing configuration the place utility groups must file tickets to get a load-balancing service. As a substitute, it permits utility groups to leverage trendy CI/CD deployment practices and accelerates deployment and time to marketplace for new functions,” Graf stated.

Managing the rising danger profile of agentic AI and MCP within the enterprise


Developments in synthetic intelligence proceed to offer builders an edge in effectively producing code, however builders and corporations can’t neglect that it’s an edge that may at all times minimize each methods.

The newest innovation is the appearance of agentic AI, which brings automation and decision-making to advanced growth duties. Agentic AI may be coupled with the lately open-sourced Mannequin Context Protocol (MCP), a protocol launched by Anthropic, offering an open commonplace for orchestrating connections between AI assistants and information sources, streamlining the work of growth and safety groups, which might turbocharge productiveness that AI has already accelerated. 

Anthropic’s rivals have completely different “MCP-like” protocols making their approach into the house, and because it stands, the web at giant has but to find out a “winner” of this software program race. MCP is Anthropic for AI-to-tool connections. A2A is Google, and in addition facilitates AI-to-AI comms. Cisco and Microsoft will each come out with their very own protocol, as effectively. 

However, as we’ve seen with generative AI, this new method to rushing up software program manufacturing comes with caveats. If not rigorously managed, it will probably introduce new vulnerabilities and amplify present ones, corresponding to vulnerability to immediate injection assaults, the era of insecure code, publicity to unauthorized entry and information leakage. The interconnected nature of those instruments inevitably expands the assault floor.

Safety leaders must take a tough have a look at how these dangers have an effect on their enterprise, being positive they perceive the potential vulnerabilities that consequence from utilizing agentic AI and MCP, and take the required steps to attenuate these dangers.

How Agentic AI Works With MCP

After generative AI took the world by storm beginning in November 2022 with the discharge of ChatGPT, agentic AI can seem to be the subsequent step in AI’s evolution, however they’re two completely different types of AI.

GenAI creates content material, utilizing superior machine studying to attract on present information to create textual content, photographs, movies, music and code. 

Agentic AI is about fixing issues and getting issues accomplished, utilizing instruments corresponding to machine studying, pure language processing and automation applied sciences to make choices and take motion. Agentic AI can be utilized, for instance, in self-driving automobiles (responding to circumstances on the street), cybersecurity (initiating a response to a cyberattack) or customer support (proactively providing assist to prospects). In software program growth, agentic AI can be utilized to write down giant sections of code, optimize code and troubleshoot issues.

In the meantime, MCP, developed by Anthropic and launched in November 2024, accelerates the work of agentic AI and different coding assistants by offering an open, common commonplace for connecting giant language fashions (LLMs) with information sources and instruments, enabling groups to use AI capabilities all through their surroundings with out having to write down separate code for every software. By primarily offering a standard language for LLMs corresponding to ChatGPT, Gemini, DALL•E, DeepSeek and lots of others to speak, it enormously will increase interoperability amongst LLMs.

MCP is even touted as a strategy to enhance safety, by offering a typical strategy to combine AI capabilities and automate safety operations throughout a corporation’s toolchain. Though it was handled as a general-purpose software, MCP can be utilized by safety groups to extend effectivity by centralizing entry, including interoperability with safety instruments and functions, and giving groups versatile management over which LLMs are used for particular duties.

However as with all highly effective new software, organizations shouldn’t simply blindly soar into this new mannequin of growth with out taking a cautious have a look at what may go improper. There’s a important profile of elevated safety dangers related to agentic AI coding instruments inside enterprise environments, particularly specializing in MCP. 

Productiveness Is Nice, however MCP Additionally Creates Dangers

Invariant Labs lately found a essential vulnerability in MCP that might permit for information exfiltration by way of oblique immediate injections, a high-risk challenge that Invariant has dubbed “software poisoning” assaults. Such an assault embeds malicious code instructing an AI mannequin to carry out unauthorized actions, corresponding to accessing delicate information and transmitting information with out the person being conscious. Invariant stated many suppliers and programs like OpenAI, Anthropic, Cursor and Zapier are susceptible to this sort of assault. 

Along with software poisoning, corresponding to oblique immediate injection, MCP can introduce different potential vulnerabilities associated to authentication and authorization, together with extreme permissions. MCP can even lack sturdy logging and monitoring, that are important to sustaining the safety and efficiency of programs and functions. 

The vulnerability considerations are legitimate, although they’re unlikely to stem the tide transferring towards the usage of agentic AI and MCP. The advantages in productiveness are too nice to disregard. In any case, considerations about safe code have at all times revolved round GenAI coding instruments, which might introduce flaws into the software program ecosystem if the GenAI fashions had been initially educated on buggy software program. Nevertheless, builders have been comfortable to utilize GenAI assistants anyway. In a latest survey by Stack Overflow, 76% of builders stated they had been utilizing or deliberate to make use of AI instruments. That’s a rise from 70% in 2023, even supposing throughout the identical time interval, these builders’ view of AI instruments as favorable or very favorable dropped from 77% to 72%.

The excellent news for organizations is that, as with GenAI coding assistants, agentic AI instruments and MCP capabilities may be safely leveraged, so long as security-skilled builders deal with them. The important thing emergent danger issue right here is that expert human oversight is not scaling at wherever close to the speed of agentic AI software adoption, and this pattern should course-correct, pronto.

Developer Schooling and Danger Administration Is the Key

Whatever the applied sciences and instruments in play, the important thing to safety in a extremely linked digital surroundings (which is just about each surroundings today) is the Software program Improvement Lifecycle (SDLC). Flaws on the code stage are a high goal of cyberattackers, and eliminating these flaws will depend on making certain that safe coding practices are de rigueur within the SDLC, that are utilized from the start of the event cycle. 

With AI help, it’s an actual risk that we’ll lastly see the eradication of long-standing vulnerabilities like SQL injection and cross-site scripting (XSS) after a long time of them haunting each pentest report. Nevertheless, most different classes of vulnerabilities will stay, particularly these referring to design flaws, and we’ll inevitably see new teams of AI-borne vulnerabilities because the expertise progresses. Navigating these points will depend on builders being security-aware with the abilities to make sure, as a lot as potential, that each the code they create and code generated by AI is safe from the get-go. 

Organizations must implement ongoing schooling and upskilling applications that give builders the abilities and instruments they should work with safety groups to mitigate flaws in software program earlier than they are often launched into the ecosystem. A program ought to make use of benchmarks to ascertain the baseline abilities builders want and measure their progress. It ought to be framework and language-specific, permitting builders to work in real-world situations with the programming language they use on the job. Interactive periods work greatest, inside a curriculum that’s versatile sufficient to regulate to modifications in circumstances.

And organizations want to substantiate that the teachings from upskilling applications have hit house, with builders placing safe greatest practices to make use of on a routine foundation. A software that makes use of benchmarking metrics to trace the progress of people, groups and the group total, assessing the effectiveness of a studying program in opposition to each inner and business requirements, would offer the granular insights wanted to actually transfer the needle is probably the most helpful. Enterprise safety leaders in the end want a fine-grained view of builders’ particular abilities for each code commit whereas displaying how effectively builders apply their new abilities to the job.

Developer upskilling has proved to be efficient in enhancing software program safety, with our analysis displaying that corporations that applied developer schooling noticed 22% to 84% fewer software program vulnerabilities, relying on elements corresponding to the scale of the businesses and whether or not the coaching targeted on particular issues. Safety-skilled builders are in the perfect place to make sure that AI-generated code is safe, whether or not it comes from GenAI coding assistants or the extra proactive agentic AI instruments.

The drawcard of agentic fashions is their means to work autonomously and make choices independently, and these being embedded into enterprise environments at scale with out acceptable human governance will inevitably introduce safety points that aren’t notably seen or simple to cease. Expert builders utilizing AI securely will see immense productiveness positive factors, whereas unskilled builders will merely generate safety chaos at breakneck velocity.

CISOs should scale back developer danger, and supply steady studying and abilities verification inside their safety applications to soundly implement the assistance of agentic AI brokers.

XDR nonetheless means a lot greater than some might understand – Cisco Weblog


One 12 months in the past, we wrote about Prolonged Detection and Response (XDR) as an rising safety class as a result of there had been inherent, unmet wants for organizations.

Right this moment, we see that Cisco has pushed this market to attain new heights and necessities with our method to XDR. With over 1,000 prospects, the true promise of XDR is being realized by means of:

  • Concentrate on correlated incidents vs. investigating each safety occasion
  • Recognition for the significance of community and cloud telemetry
  • Open, hybrid architectures supporting extra than simply native telemetry

Cisco XDR democratizes safety operations. As a complete, standalone safety device for organizations it delivers built-in integrations, constructed – in analytics, and sturdy, guided response. As a key a part of a Safety Operations platform, it extends capabilities into fraud detection, superior risk looking, and detection engineering.

Cisco is proud to announce that we’ve been named a Chief and Quick Mover in GigaOm’s Radar for Prolonged Detection and Response (XDR) — once more!

GigaOm established this Radar 4 (4) years in the past and invited Cisco XDR to take part our first 12 months in market. As Leaders within the 2024 and 2025 GigaOm Radars for Prolonged Detection and Response, Cisco has confirmed that our method to XDR is aligned with the wants of the market proper from the beginning.

It signifies that Cisco XDR is delivering on the true promise of XDR and never making baseless claims or lip-service. We’re redefining the market since 2023 by listening to prospects and delivering on their germane ache factors.

On this 12 months’s analysis, GigaOm calls out Cisco XDR’s strengths in Assault Path Visualization, Danger Prioritization, and Unified Telemetry. Every immediately helps Cisco XDR’s skill to ship a high-fidelity incident comprised of analyzed occasions that we outline are a danger to your group which we clearly visualize, utilizing your entire key telemetry sources and your present investments in safety structure – Cisco or third-party.

Cisco acknowledges that any group, no matter its maturity stage in safety operations, needs to confirm an answer’s capabilities to then belief it. We submit our continued Management within the GigaOm Prolonged Detection and Response Radar as verification proof.

We additionally invite you to strive Cisco XDR for your self with one in every of our self-guided demos (Cisco XDR Demos and Webinars — Cisco).

With Immediate Assault Verification, Assault Storyboard, and XDR Forensics on the horizon (at present in Non-public Preview), Cisco is dedicated to supporting your Safety Operations journey wherever you’re on that path right now: Cisco XDR Simply Modified the Sport, Once more – Cisco Weblog.

As an XDR resolution powered by built-in Forensics to help higher telemetry acquisition, visibility, and operational maturity, Cisco XDR is the apropos selection on your group. We are able to’t wait to associate with you!


We’d love to listen to what you suppose! Ask a query and keep related with Cisco Safety on social media.

Cisco Safety Social Media

LinkedIn
Fb
Instagram
X

Share: