
A Practitioner-Focused DevSecOps Assessment Approach


Success in a DevSecOps enterprise hinges on delivering value to the end user, not merely completing intermediate steps along the way. Organizations and programs often struggle to achieve this due to a variety of factors, such as a lack of clear ownership and accountability for the capability to deliver software, functional silos versus integrated teams and processes, a lack of effective tools for teams to use, and a lack of effective resources for team members to leverage to quickly get up to speed and improve productivity.

The absence of a central driving force can result in siloed units within a given organization or program, fragmented decision making, and a lack of defined key performance metrics. Consequently, organizations may be hindered in their ability to deliver capability at the speed of relevance. A siloed DevSecOps infrastructure, where disjointed environments are intertwined to form a complete pipeline, causes developers to expend significant effort to build an application without the support of documentation and guidance for working within the provided platforms. Teams cannot create repeatable solutions in the absence of an end-to-end integrated application delivery pipeline. Without one, efficiency suffers, and unnecessary practices bog down the entire process.

The first step in achieving the value DevSecOps can bring is to understand how we define it:

a socio-technical system made up of a set of both software tools and processes. It is not a computer-based system to be built or acquired; it is a mindset that relies on defined processes for the rapid development, fielding, and operations of software and software-based systems, using automation where feasible to achieve the desired throughput of developing, fielding, and sustaining new product features and capabilities.

DevSecOps is thus a mindset that builds on automation where feasible.

The objective of an effective DevSecOps assessment is to understand the software development process and make recommendations for improvements that will positively affect the value, quality, and speed of delivery of products to the end user in an operationally stable and secure manner. A comprehensive assessment of current capabilities must include both quantitative and qualitative approaches to gathering data and identifying precisely where challenges reside in the product delivery process. The scope of an assessment must consider all processes that are required to field and operate a software product as part of the value delivery processes. The aperture through which a DevSecOps assessment team focuses its work is wider than the tools and processes typically thought of as the software development pipeline. The assessment must include the broader context of the entire product delivery pipeline, including planning phases, where capability (or value) needs are defined and translated into requirements, as well as post-deployment operational phases. This wider view enables an assessment team to determine how well organizations deliver value.

There are a myriad of overlapping influences that can cause dysfunction within a DevSecOps enterprise. Looking from the outside, it can be difficult to peel back the layers and effectively find the major causes. This blog post focuses on how to conduct a DevSecOps assessment using an approach that applies four methodologies to analyze an enterprise from the perspective of the practitioners using the tools and processes to build and deliver valuable software. Taking the perspective of the practitioner allows the assessment team to surface the most immediately relevant challenges facing the enterprise.

A Four-Pronged Assessment Methodology

To frame the experience of a practitioner, a comprehensive assessment requires a layered approach. This kind of approach can help assessors gather enough data to understand both the full scope and the specific details of the developers’ experiences, both positive and negative. We take a four-pronged approach:

  1. Immersion: The assessment team immerses itself in the development process by either developing a small, representative application from scratch, joining an existing development team, or some other means of gaining firsthand experience and insight into the process. Avoiding special treatment is key to gathering real-world data, so the assessment team should find ways to become a “secret shopper” wherever possible. This also allows the assessment team to identify what the real, not just documented, process is to deliver value.
  2. Observation: The assessment team directly observes existing application development teams as they work to build, test, deliver, and deploy their applications to the end users. Observations should cover as much of the value-delivery process as practicable, such as user engagement, product design, sprint planning, demos, retrospectives, and software releases.
  3. Engagement: The assessment team conducts interviews and focused discussions with development teams and other relevant stakeholders to clarify and gather context for their experience and observations. Ask the practitioners to show the assessment team how they work.
  4. Benchmarking: The assessment team captures available metrics from the enterprise and its processes and compares them with expected results for similar organizations.

To achieve this, an assessment team can use ethnographic research methods as described in the Luma Institute Innovating for People System. Interviewing, fly-on-the-wall observation, and contextual inquiry allow the assessment team to watch product teams working, conduct follow-up interviews about what they observed, and ask questions about behavior and expectations that they did not observe. By using the walk-a-mile immersion technique, the assessment team can speak firsthand to their experiences using the organization’s current tools and processes.

These methods help ensure that the assessment team understands the process by getting firsthand experience and does not overly rely on documentation or the biases of observation or engagement subjects. They also enable the team to better understand what they are observing or hearing about from other practitioners and to identify the parts of the value delivery process where improvements are more readily available.

The Two Dimensions of Assessing DevSecOps Capabilities

To accurately assess DevSecOps processes, one needs both quantitative data (e.g., metrics) to pinpoint and prioritize challenges based on impact and qualitative data (e.g., experience and feedback) to understand the context and develop targeted solutions. While the assessment methodology discussed above provides a repeatable approach for collecting the necessary quantitative and qualitative data, it is not sufficient because it does not tell the assessor what data is needed, what questions to ask, what DevSecOps capabilities are expected, and so on. To address these questions while assessing an organization’s DevSecOps capabilities, the following dimensions should be considered:

  • a quantitative evaluation of an organization’s performance against academic and industry benchmarks of performance
  • a qualitative evaluation of an organization’s adherence to the established best practices of high-performing DevSecOps organizations

Within each dimension, the assessment team must look at several critical aspects of the value delivery process:

  • Value Definition: How are user needs captured and translated into products and features?
  • Developer Experience: Are the tools and processes that developers are expected to use intuitive, and do they reduce toil?
  • Platform Engineering: Are the tools and processes well integrated, and are the right components automated?
  • Software Development Performance: How effective and efficient are the development processes at building and delivering functional software?

Since 2013, Google has published an annual DevOps Research and Assessment (DORA) Accelerate State of DevOps Report. These reports assemble data from thousands of practitioners worldwide and compile them into a comprehensive report breaking down four-to-five key metrics to determine the overall state of DevSecOps practices across a wide variety of enterprise types and sectors. An assessment team can use these reports to quickly key in on the metrics and thresholds that research has shown to be significant indicators of overall performance. In addition to the DORA metrics, the assessment team can conduct a literature search for other publications that provide metrics related to a specific software architectural pattern, such as real-time resource-constrained cyber-physical systems.

To compare an organization or program to industry benchmarks, such as the DORA metrics or case studies, the assessment team must be able to gather organizationally representative data that can be equated to the metrics found in the given benchmark or case study. This can be accomplished in a combination of ways, including collecting data manually as the assessment team shadows the organization’s developers or stitching together data collected from automated tools and interviews. Once the data is gathered, visualizations such as the figure below can be created to show how the given organization or program compares to the benchmark.
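To make the benchmarking step concrete, the sketch below computes three DORA-style metrics from a handful of deployment records. It is a minimal illustration with made-up data and simplified metric definitions, not the survey methodology the DORA reports themselves use:

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical deployment records: (commit time, deploy time, deploy succeeded).
# In practice this data would be stitched together from CI/CD tooling,
# ticketing systems, or manual observation while shadowing developers.
deploys = [
    (datetime(2024, 6, 3, 9),   datetime(2024, 6, 4, 15),  True),
    (datetime(2024, 6, 5, 11),  datetime(2024, 6, 9, 10),  True),
    (datetime(2024, 6, 10, 8),  datetime(2024, 6, 11, 17), False),
    (datetime(2024, 6, 12, 14), datetime(2024, 6, 16, 9),  True),
]

# Lead time for changes: commit-to-deploy duration, summarized by the median.
lead_times = [deploy - commit for commit, deploy, _ in deploys]
median_lead_time = median(lead_times)  # timedelta(days=2, hours=14) here

# Deployment frequency: deployments per week over the observed window.
window_days = (deploys[-1][1] - deploys[0][1]).days or 1
deploys_per_week = len(deploys) * 7 / window_days

# Change failure rate: fraction of deployments that caused a failure.
failure_rate = sum(1 for _, _, ok in deploys if not ok) / len(deploys)
```

Plotting values like these against the performance bands published in the reports (elite, high, medium, low) produces the kind of benchmark comparison figure described above.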

From a qualitative perspective, the assessment team can use the SEI’s DevSecOps Platform Independent Model (PIM), which includes more than 200 requirements one would expect to see in a high-performing DevSecOps organization. The PIM allows programs to map their current or proposed capabilities onto the set of capabilities and requirements of the PIM to ensure that the DevSecOps ecosystem under consideration or assessment implements the best practices. For assessments, the PIM provides the ability for programs to find potential gaps by looking across their current ecosystem and processes and mapping them to requirements that express the level of quality of results expected. The figure below shows an example summary output of the qualitative evaluation in terms of the ten DevSecOps capabilities defined within the PIM and the overall maturity level of the organization under assessment. Refer to the DevSecOps Maturity Model for more information regarding the use of the PIM for qualitative evaluation.

Charting Your Course to DevSecOps Success

By employing a multi-faceted assessment methodology that combines immersion, observation, engagement, and benchmarking, organizations can gain a holistic view of their DevSecOps capability. Leveraging benchmarks like the DORA metrics and reference architectures like the DevSecOps PIM provides a structured approach to measuring performance against industry standards and identifying specific areas for improvement.

Purposefully taking the perspective of the practitioners tasked with using the tools and processes to deliver value helps the assessor focus their recommendations for improvements on the areas that are likely to have the highest impact on the delivery of value, as well as identify those parts of the process that detract from the delivery of value.

Remember, the journey toward a high-performing DevSecOps environment is iterative, ongoing, and focused on delivering value to the end user. By applying data-driven quantitative and qualitative methods in performing a two-dimensional DevSecOps assessment, an assessment team is well positioned to make unbiased observations and actionable strategic and tactical recommendations. Regular assessments are vital to track progress, adapt to evolving needs, and ensure you are consistently delivering value to your end users with speed, security, and efficiency.

Fusing Security into Networks: The Next Evolution in Enterprise Security


The challenge: security is breaking at the edges

Today’s enterprise networks extend far beyond the data center. They stretch across sprawling campus networks, remote branch offices, hybrid WANs, cloud services, and increasingly complex industrial IoT (OT) environments.

This distributed footprint has unlocked enormous agility and business value, but it has also expanded the attack surface exponentially.

Attackers no longer target just your core. They strike anywhere: at the user edge, across the WAN, inside factories, or through cloud-connected apps.

On their own, traditional bolt-on security architectures (firewalls, VPNs, and siloed point tools) cannot handle today’s dynamic, machine-speed world. They fall short in stopping modern threats as the network perimeter expands. These solutions can also create significant operational overhead: fragmented policies, overlapping dashboards, and complex integrations that put strain on already overburdened IT teams. By fusing security directly into the network, Cisco delivers stronger protection and radically simplifies day-to-day operations.

Modern enterprises need networks that are designed to:

  • Actively defend themselves
  • Stop today’s hybrid threats
  • Prepare for tomorrow’s quantum and AI-driven risks

This is the breakthrough Cisco delivers with its AI-Ready Secure Network, with security fused into the network.

The new threat landscape across enterprise domains

Across every domain, including campus, branch, WAN, and industrial edge, enterprises face five critical threat vectors:

  • Compromised users and devices
    Phishing, stolen credentials, rogue devices, and unmanaged endpoints form a critical attack vector at open access points across headquarters, branch offices, and industrial sites.
  • Lateral movement across environments
    Once attackers breach one point, they spread sideways (across LANs, SD-WAN overlays, cloud interconnects, and even IT-OT links) in search of high-value targets.
  • Industrial IoT and OT vulnerabilities
    Factories and critical infrastructure often run legacy or unprotected systems that attackers can hijack to disrupt operations or pivot into IT networks. Unlike end-user endpoints, which can usually support agents for Zero Trust enforcement, many IoT and OT devices lack an operating system or interface that supports agent-based controls. This makes it significantly harder to enforce identity, posture, and policy at the edge of industrial networks, which compounds the security challenge and requires enforcement mechanisms to be embedded into the network itself.
  • Infrastructure-level attacks
    The latest evolution in threat tactics targets the infrastructure itself: switches, routers, wireless controllers. In these cases, threat actors exploit firmware, OS-level flaws, and control-plane vulnerabilities to take over the network, not just move through it.
  • Quantum-era cryptographic risks
    Quantum computing threatens to break today’s encryption, endangering WAN tunnels, device authentication, and industrial communications.

Why bolted-on security no longer works

Traditional perimeter-based security models simply cannot keep up.

Today’s networks are hybrid, dynamic, decentralized, and moving at machine speed. Security should no longer be added onto a solution; it must be embedded directly into the infrastructure.

Cisco takes a distinctive approach to security: it turns the entire network into a defense system. Every router, switch, access point, and industrial device becomes an active participant in defending the enterprise. This architecture integrates AI, Zero Trust principles, quantum-resilient encryption, and embedded enforcement, working together to secure the enterprise from edge to core.

How Cisco fuses security into the network and tackles each threat head-on

At Cisco, we believe the only way to stay ahead is to build security into the network itself, from the hardware and firmware to user access and traffic flow. This includes Zero Trust and post-quantum encryption across LAN and WAN.

This is not aspirational; it is how our architecture works today.

We deliver multilayered security that is deeply integrated into the network fabric, always on and always aware. Here is how security comes together for network devices, network access, data, and applications.

Switches, routers, and access points, built to defend themselves

We start at the foundation, hardening the network device itself, because if the network hardware is not secure, nothing else matters. Our approach includes:

  • Secure Boot with quantum-safe algorithms ensures every switch, router, and access point starts with verified software.
  • A hardened SELinux kernel blocks privilege escalation and system-level exploits.
  • Cisco Live Protect, powered by extended Berkeley Packet Filter (eBPF) and Cisco Hypershield, delivers real-time runtime protection, stopping zero-day attacks like Salt Typhoon before they can take hold, and doing it without downtime.

This gives you resilient, self-defending infrastructure that stays protected, even against the unknown.

Every connection managed: dynamic, contextual, secure

Once the network device is secure, we control what connects to it and how. Whether it is a user, device, or IoT endpoint, access is always based on identity, posture, and context. For example:

  • Software-Defined Access (SDA) and Scalable Group Tags (SGTs) allow fine-grained segmentation that follows the user, not the IP address.
  • Least-privilege policies are enforced the moment something connects, reducing blast radius and blocking lateral movement.
  • Everything from corporate laptops to contractor tablets to IoT sensors can be onboarded and segmented in real time, with full policy control.
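As a generic illustration of the idea, identity- and posture-based segmentation can be sketched as a policy function that maps an endpoint to a group tag. The endpoint fields, rule names, and segments below are hypothetical stand-ins, not Cisco’s actual policy model or API:

```python
from dataclasses import dataclass

@dataclass
class Endpoint:
    identity: str     # authenticated user or device identity
    posture_ok: bool  # e.g., OS patched, endpoint agent healthy
    device_type: str  # "laptop", "tablet", "iot-sensor", ...

def assign_segment(ep: Endpoint) -> str:
    """Assign a segment (group tag) from identity, posture, and context,
    rather than from the endpoint's IP address."""
    if not ep.posture_ok:
        return "quarantine"              # failed posture check: isolate
    if ep.device_type == "iot-sensor":
        return "iot-restricted"          # least privilege: no lateral reach
    if ep.identity.startswith("contractor-"):
        return "contractor-limited"
    return "corp-standard"

print(assign_segment(Endpoint("contractor-anna", True, "tablet")))  # contractor-limited
```

Because the tag follows the identity rather than the address, the same least-privilege policy applies wherever the user or device connects.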

This is Zero Trust, operationalized in every environment.

Data defended in motion across every edge and cloud

Data is no longer static. It flows constantly across campus, branch, SD-WAN, Direct Internet Access (DIA), and multicloud environments. Cisco secures that data wherever it travels.

MACsec, WAN MACsec, and IPsec encryption with post-quantum readiness protects traffic in motion, including SD-WAN links and DIA connections, without sacrificing performance. With Cisco SD-WAN and Secure Access Service Edge (SASE), segmentation, identity-based access, and continuous threat inspection are extended to the cloud edge, ensuring secure connectivity regardless of path. Integrated Next-Generation Firewall (NGFW) capabilities at the WAN edge provide application-aware controls and threat prevention in line with traffic.

This is how we stop adversaries midstream, before data is lost or systems are compromised.

Every app session protected from edge to cloud

Apps live everywhere now (SaaS, private cloud, public cloud), and users expect seamless access from any location. We ensure that access is secure, continuous, and based on real-time trust.

Delivered through Cisco’s SASE architecture, Universal Zero Trust Network Access (ZTNA) applies continuous identity, posture, and risk assessments across every session, including over SD-WAN, Direct Internet Access, and remote connections. Whether on a managed laptop, personal device, or IoT endpoint, access to apps is segmented, encrypted, and policy enforced. Post-quantum-ready encryption secures these sessions end to end, while policy controls ensure that only authorized users reach approved apps.

The business benefits: resilient, future-ready security

What does the Cisco AI-Ready Secure Network Architecture deliver to enterprises?

  • Stronger, faster threat containment. Inline enforcement, per-port firewalling, NGFWs, Cyber Vision, and SGT-driven segmentation stop threats where they appear, minimizing risk and reducing response time.
  • Simpler, more efficient operations. With security embedded into the infrastructure, enterprises reduce point-tool sprawl, streamline management, and improve total cost of ownership.
  • Seamless user, workload, and machine experiences. Adaptive Zero Trust access and identity-driven segmentation keep authorized connections flowing smoothly, without unnecessary latency or friction.
  • Future-proof security posture. By leveraging Post-Quantum Cryptography (PQC), AI-powered detection, and Hypershield acceleration, Cisco customers position themselves not only to survive today’s attacks but to thrive in the quantum- and AI-powered future.

Why only Cisco can deliver this vision

Cisco uniquely combines:

  • An end-to-end portfolio spanning campus, branch, WAN, cloud, and industrial IoT
  • Deep SDA + SGT integration for scalable, identity-based segmentation
  • Hypershield-ready switches with per-port firewalling for embedded inline enforcement
  • NGFW innovation built into secure routers
  • Cyber Vision for deep OT asset visibility and protection
  • Quantum-resilient cryptography across every device and network layer
  • Global AI insights drawn from the world’s largest enterprise networking footprint

Where competitors stitch together point products, Cisco delivers a unified, AI-powered, quantum-ready architecture, transforming your entire network into your strongest security asset.

With Cisco, you are not just protecting infrastructure; you are building the foundation for faster innovation, resilient operations, and long-term competitive advantage.

A unified approach to modern threats

Attackers target every layer of the network, from firmware to endpoints. Security cannot be bolted on. It must be built in. Cisco transforms the network into a unified defense system, with embedded protection, centralized policy, and self-defending infrastructure. It is a smarter, simpler way to secure what matters. Built for today and ready for what’s next.

Discover how to streamline network and security, overcome key challenges, and boost IT efficiency with insights from the Enterprise Strategy Group (ESG) eBook, Network and Security Convergence: Assessing SASE Progress and Best Practices. Read the eBook.

 


 


Managing Security and Resilience Risks Across the Lifecycle


Software is a growing component of today’s mission-critical systems. As organizations become more dependent on software-driven technology, security and resilience risks to their missions also increase. Managing these risks is too often deferred until after deployment due to competing priorities, such as satisfying cost and schedule objectives. However, failure to address these risks early in the systems lifecycle can not only increase operational impact and mitigation costs, but it can also severely limit management options.

For Department of Defense (DoD) weapon systems, it is especially important to address software security and resilience risks. Proactively identifying and correcting software vulnerabilities and weaknesses minimizes the risk of cyberattacks, weapon system failures, and other disruptions that could jeopardize DoD missions. The GAO has identified software and cybersecurity as persistent challenges across the portfolio of DoD weapon systems. To address these challenges, acquisition programs should start managing a system’s security and resilience risks early in the lifecycle and continue throughout the system’s lifespan.

This post introduces the Security Engineering Framework, a detailed schema of software-focused engineering practices that acquisition programs can use to manage security and resilience risks across the lifecycle of software-reliant systems.

Software Assurance

Software assurance is a level of confidence that, throughout its lifecycle, software functions as intended and is free of vulnerabilities, either intentionally or unintentionally designed or inserted as part of the software. Software assurance is increasingly important to organizations across all sectors because of software’s growing influence in mission-critical systems. Managing software assurance is a challenge because of the growth in capability, complexity, and interconnection among software-reliant systems.

For example, consider how the size of flight software has increased over time. Between 1960 and 2000, the share of overall system functionality that software provides to military aircraft pilots increased from 8 percent to 80 percent. At the same time, the size of software in military aircraft grew from 1,000 lines of code in the F-4A to 1.7 million lines of code (MLOC) in the F-22 and 8 million lines in the F-35. This growth is expected to continue over time. As software exerts more control over complex systems (e.g., military aircraft), the potential risk posed by vulnerabilities will increase correspondingly.

Software Defects and Vulnerabilities: A Lifecycle Perspective

Figure 1 below highlights the rate of defect introduction and identification across the lifecycle. It was derived from data presented in the SEI report Reliability Validation and Improvement Framework. Studies of safety-critical systems, notably DoD avionics software systems, show that 70 percent of all errors are introduced during requirements and architecture design activities. However, only 20 percent of the errors are found by the end of code development and unit test, while 80.5 percent of the errors are discovered at or after integration testing. The rework effort to correct requirements and design problems in later phases can be as high as 300 to 1,000 times the effort of in-phase correction. Even after the rework, undiscovered errors are likely to remain.


Figure 1: Rate of Defect Introduction and Identification Across the Lifecycle

Given the complexities involved in developing large-scale, software-reliant systems, it is understandable that no software is free of risks. Defects exist even in the highest quality software. For example, best-in-class code can have up to 600 defects per MLOC, while average-quality code typically has around 6,000 defects per MLOC, and some of those defects are weaknesses that can lead to vulnerabilities. Research indicates that roughly 5 percent of software defects are security vulnerabilities. Consequently, best-in-class code can have up to 30 vulnerabilities per MLOC. For average-quality code, the number of security vulnerabilities can be as high as 300 per MLOC. It is important to note that the defect rates cited here are estimates that provide general insight into the issue of code quality and the number of vulnerabilities in code. Defect rates in specific projects can vary greatly. However, these estimates highlight the importance of reducing security vulnerabilities in code during software development. Secure coding practices, code reviews, and code analysis tools are critical ways to identify and correct known weaknesses and vulnerabilities in code.
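The arithmetic behind these estimates is simple enough to sanity-check in a few lines. The sketch below applies the cited defect densities and the 5 percent vulnerability fraction to a hypothetical codebase of 8 MLOC, the F-35-scale figure mentioned earlier:

```python
# Vulnerability-density estimates from the figures cited above:
# defects per million lines of code (MLOC), of which roughly 5%
# are security vulnerabilities.
VULN_FRACTION = 0.05  # ~5% of defects are security vulnerabilities

def estimated_vulnerabilities(defects_per_mloc: float, mloc: float) -> float:
    """Estimate total security vulnerabilities for a codebase."""
    return defects_per_mloc * VULN_FRACTION * mloc

# Best-in-class (600 defects/MLOC) vs. average (6,000 defects/MLOC)
# code, each applied to a hypothetical 8 MLOC codebase.
best_in_class = estimated_vulnerabilities(600, 8)     # 240.0
average_quality = estimated_vulnerabilities(6000, 8)  # 2400.0
```

Even under the best-in-class assumption, an aircraft-scale codebase would carry hundreds of estimated vulnerabilities, which is why reducing them during development matters.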

As illustrated in Figure 1, security and resilience must be managed across the lifecycle, starting with the development of high-level system requirements through operations and sustainment (O&S). Program and system stakeholders should apply leading practices for acquiring, engineering, and deploying secure and resilient software-reliant systems. In 2014, the SEI initiated an effort to document leading practices for managing security and resilience risks across the systems lifecycle, providing an approach for building security and resilience into a system rather than bolting them on after deployment. This effort produced several cybersecurity engineering solutions, most notably the Security Engineering Risk Analysis (SERA) method and the Acquisition Security Framework (ASF). Late last year, the SEI released the Security Engineering Framework.

Security Engineering Framework (SEF)

The SEF is a collection of software-focused engineering practices for managing security and resilience risks across the systems lifecycle, starting with requirements definition and continuing through O&S. It provides a roadmap for building security and resilience into software-reliant systems prior to deployment and sustaining those capabilities during O&S. The SEF builds on the foundational research of SERA and the ASF, providing in-depth guidance that elaborates on leading engineering practices and how to perform them.

SEF practices help ensure that engineering processes, software, and tools are secure and resilient, thereby reducing the risk that attackers will disrupt program and system information and assets. Acquisition programs can use the SEF to assess their current engineering practices and chart a course for improvement, ultimately reducing security and resilience risks in deployed software-reliant systems.

Security and Resilience

At its core, the SEF is a risk-based framework that addresses both security and resilience:

Risk management provides the foundation for managing security and resilience. In fact, risk management methods, tools, and techniques are used to manage both. However, security and resilience view risk from different perspectives: security considers risks from a protection standpoint, while resilience considers risk from a perspective of adapting to conditions, stresses, attacks, and compromises. As shown in Figure 2, there is some overlap between the risk perspectives of security and resilience. At the same time, security and resilience each have unique risks and mitigations.


Figure 2: Risk Perspectives: Security Versus Resilience

The SEF specifies practices for managing security and resilience risks. The perspective the organization adopts (security, resilience, or a combination of the two) influences the risks an acquisition organization considers during an assessment and the set of controls that are available for risk mitigation. Because of the related nature of security and resilience, the SEF (and the remainder of this blog post) uses the term security/resilience throughout.

SEF Structure

As illustrated in Figure 3, the SEF has a hierarchy of domains, goals, and practices:

  • Domains occupy the top level of the SEF hierarchy. A domain captures a unique management or technical perspective of managing security/resilience risks across the systems lifecycle. Each domain is supported by two or more goals, which form the next level of the SEF hierarchy.
  • Goals define the capabilities that a program leverages to build security/resilience into a software-reliant system. Related goals belong to the same SEF domain.
  • Practices occupy the final and most detailed level in the hierarchy. Practices describe actions that support the achievement of SEF goals. The SEF phrases practices as questions. Related practices belong to the same SEF goal.


Figure 3: SEF Organization and Structure

The SEF comprises 3 domains, 13 goals, and 119 practices. The next section describes the SEF's domains and goals.
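
Because SEF practices are phrased as questions, a program's self-assessment can be recorded as yes/no answers rolled up by goal and domain. The following sketch of the domain/goal/practice hierarchy is illustrative only; the practice wording below is invented, not actual SEF content:

```python
from dataclasses import dataclass, field

@dataclass
class Practice:
    question: str          # the SEF phrases each practice as a question
    satisfied: bool = False

@dataclass
class Goal:
    name: str
    practices: list[Practice] = field(default_factory=list)

    def coverage(self) -> float:
        """Fraction of this goal's practices the program satisfies."""
        if not self.practices:
            return 0.0
        return sum(p.satisfied for p in self.practices) / len(self.practices)

@dataclass
class Domain:
    name: str
    goals: list[Goal] = field(default_factory=list)

# Example assessment fragment (placeholder practice wording).
risk_mgmt = Goal("Goal 1.2: Engineering Risk Management", [
    Practice("Are security/resilience risks identified during design?", True),
    Practice("Are identified risks tracked to closure?", False),
])
domain1 = Domain("Domain 1: Engineering Management", [risk_mgmt])

print(f"{risk_mgmt.name}: {risk_mgmt.coverage():.0%} of practices satisfied")
```

Rolling up coverage per goal and per domain gives an acquisition program a simple way to chart its improvement over repeated assessments.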

Domain 1: Engineering Management

This domain provides a foundation for success by ensuring that security/resilience activities are planned and managed. The objective of Domain 1 is to manage security/resilience risks effectively in the system being acquired and developed.

Program and engineering managers combine their technical expertise with their business and mission knowledge to provide technical management and organizational leadership for engineering projects. Managers are tasked with planning, organizing, and directing an acquisition program's engineering and development activities. Engineering management is a specialized type of management that is needed to lead engineering or technical personnel and projects successfully. Domain 1 comprises the following three goals:

  • Goal 1.1: Engineering Activity Management. Security/resilience engineering activities across the lifecycle are planned and managed.
  • Goal 1.2: Engineering Risk Management. Security/resilience risks that can affect the system are assessed and managed during system design and development.
  • Goal 1.3: Independent Assessment. An independent assessment of the program or system is conducted.

Domain 2: Engineering Activities

This domain addresses the day-to-day practices that are essential for building security/resilience into a software-reliant system. The objective of Domain 2 is to integrate security/resilience into the program's existing engineering practices. All systems lifecycles address a common set of engineering activities, beginning with requirements specification and continuing through system O&S. Domain 2 expands the focus of a program's systems lifecycle model to include security/resilience. Domain 2 comprises the following eight goals:

  • Goal 2.1: Requirements. Security/resilience requirements for the system and its software components are specified, analyzed, and managed.
  • Goal 2.2: Architecture. Security/resilience risks resulting from the system and software architectures are assessed and mitigated.
  • Goal 2.3: Third-Party Components. Security/resilience risks that can affect third-party components are identified and mitigated.
  • Goal 2.4: Implementation. Security/resilience controls are implemented, and weaknesses and vulnerabilities in software code are assessed and managed.
  • Goal 2.5: Test and Evaluation. Security/resilience risks that can affect the integrated system are identified and remediated during test and evaluation.
  • Goal 2.6: Authorization to Operate. The operation of the system is authorized, and the residual risk to operations is explicitly accepted.
  • Goal 2.7: Deployment. Security/resilience is addressed in transition and deployment activities.
  • Goal 2.8: Operations and Sustainment. Security/resilience risks and issues are identified and resolved as the system is used and supported in the operational environment.

Domain 3: Engineering Infrastructure

This domain manages security/resilience risks in the engineering, development, test, and training environments. The objectives of Domain 3 are to use software, tools, and technologies that support the program's engineering and development activities and to manage security/resilience risks in the engineering infrastructure. Engineers and developers use a variety of software, tools, and technologies to support their design and development activities. Security/resilience engineering software, tools, and technologies must be procured, installed, and integrated with the program's existing engineering infrastructure.

The engineering infrastructure is the part of the IT infrastructure that supports engineering and development activities performed by personnel from the acquisition program, contractors, and suppliers. Consequently, the engineering infrastructure can be an attack vector into the software-reliant system that is being acquired and developed. IT support teams need to apply security/resilience practices when managing the engineering infrastructure to ensure that risk is managed appropriately. Domain 3 comprises the following two goals:

  • Goal 3.1: Engineering Software, Tools, and Technologies. Security/resilience engineering software, tools, and technologies are integrated with the engineering infrastructure.
  • Goal 3.2: Infrastructure Operations and Sustainment. Security/resilience risks in the engineering infrastructure are identified and mitigated.

SEF Practices and Guidance

SEF domains provide the organizing structure for the framework's technical content, which is the collection of goals and practices. The SEF's in-depth guidance for all goals and practices describes the capability represented by each goal, including its purpose, relevant context, and supporting practices. SEF guidance also defines the key concepts and background information needed to understand the intent of each practice.

Security Engineering Framework (SEF): Managing Security and Resilience Risks Across the Systems Lifecycle contains in-depth guidance for all goals and practices.

Partner with the SEI to Manage Security and Resilience Risks

The SEF documents leading engineering practices for managing security/resilience risks across the systems lifecycle. The SEI provides open access to SEF guidance, methods, and materials. Future work related to the SEF will focus primarily on transitioning SEF concepts and practices to the community. The SEI plans to work with DoD programs to pilot the SEF and incorporate lessons learned into future versions of the framework.

Finally, the SEF development team continues to seek feedback on the framework, including how it is being used and applied. This information will help shape the future direction of the SEF as well as the SEI's work on documenting leading practices for software security.

Composable Infrastructure and UCS X-Fabric in the AI Era


AI isn't just another workload; it's a seismic shift in how infrastructure must perform. What once powered databases and virtual machines now struggles to keep up with the demands of training massive models, running inference at scale, and visualizing real-time data.

The pace of innovation is relentless. Just this month, NVIDIA launched the RTX PRO 6000 Blackwell Server Edition, packing workstation-class visualization and AI acceleration into a compact 2U form factor. It's a clear signal that the hardware landscape is advancing at lightning speed, and static infrastructure can't keep up. Enterprises can't afford rigid designs that become obsolete as soon as the next GPU drops.

To thrive in this new era, enterprises need more than raw power. They need composability. Infrastructure must be modular, dynamic, and intelligent enough to adapt as fast as AI workloads do.

From racks and blades to composability

For years, organizations built around fixed server designs, rack or blade, that served well for predictable workloads. But AI has shattered that predictability. Today's workloads are dynamic, data intensive, and constantly evolving. Training models, running inference, and rendering high-performance visualizations demand far more flexibility than traditional architectures can offer.

That's where composable infrastructure changes things. Instead of building applications around the limits of hardware, composability lets infrastructure adapt to the needs of applications. Compute, GPU, storage, and networking resources become modular, shared, and dynamically allocated. This gives IT teams the power to scale, shift, and optimize in real time.

Introducing UCS X-Series with X-Fabric Technology 2.0: composability for the AI era

The new Cisco UCS X580p PCIe Node, together with X-Fabric Technology 2.0 cloud-operated by Cisco Intersight, delivers on the promise of true composability for the AI era. This is more than a product refresh; it's a strategic step toward the Cisco Secure AI Factory with NVIDIA, where infrastructure and cloud management work together as one, adapting seamlessly to workloads over time.

And it's built for what's next. This latest form factor of UCS X-Series supports GPUs like the NVIDIA RTX PRO 6000 Blackwell Server Edition, so customers can take advantage of cutting-edge acceleration without having to rip and replace infrastructure.

Here's what that means in practice:

  • AI-optimized infrastructure. The system supports GPU-accelerated workloads for training, inference, and high-performance visualization within a modular, composable architecture.
  • Independent resource scaling. CPUs and GPUs can be scaled independently, with up to eight GPUs per chassis and shared GPU pools accessible across nodes.
  • High-speed performance. PCIe Gen 5 delivers high-throughput performance with DPU-ready networking, optimized for the east-west GPU traffic that AI workloads generate.
  • Intelligent resource allocation. GPU resources are dynamically allocated through policy-based orchestration in Cisco Intersight, enabling optimal utilization and improved total cost of ownership.
  • Future-proof design. The modular architecture and disaggregated lifecycle management allow seamless integration of next-generation accelerators without requiring forklift upgrades.

This is the only modular server platform that unifies the latest GPUs and DPUs in a truly composable, cloud-managed system, operated and orchestrated by Cisco Intersight.

With Intersight, idle GPUs are a thing of the past. Policy-based allocation lets IT teams create a shared pool of GPU resources that can flex to meet demand. The result? GPUs go where they are needed most and waste is reduced, maximizing performance and return on investment for the organization.
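
Intersight's actual policy engine is proprietary, but the shared-pool idea itself is simple to sketch: a pool tracks free GPUs, and a per-workload-class cap keeps any one class from starving the others, so devices released by one workload can flex to another. A hypothetical illustration only; the class names and policy shape below are invented, not Cisco's API:

```python
class GpuPool:
    """Toy shared GPU pool with per-workload-class allocation caps."""

    def __init__(self, total_gpus: int, caps: dict[str, int]):
        self.free = total_gpus
        self.caps = caps                              # e.g. {"training": 6}
        self.held: dict[str, int] = {c: 0 for c in caps}

    def request(self, workload_class: str, count: int) -> int:
        """Grant up to `count` GPUs, limited by the pool and the class cap."""
        cap_left = self.caps[workload_class] - self.held[workload_class]
        granted = min(count, cap_left, self.free)
        self.held[workload_class] += granted
        self.free -= granted
        return granted

    def release(self, workload_class: str, count: int) -> None:
        """Return GPUs to the pool so other classes can use them."""
        count = min(count, self.held[workload_class])
        self.held[workload_class] -= count
        self.free += count

# Eight GPUs per chassis, shared between two workload classes.
pool = GpuPool(8, {"training": 6, "inference": 4})
print(pool.request("training", 8))   # prints 6: capped at 6 by policy
print(pool.request("inference", 4))  # prints 2: only 2 GPUs remain free
pool.release("training", 4)          # training scales down
print(pool.request("inference", 4))  # prints 2: inference flexes up to its cap
```

A real orchestrator would add queueing, preemption, and telemetry-driven policies, but the same cap-and-reclaim logic is what keeps GPUs from sitting idle.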

Why composability is critical for AI infrastructure

The promise of AI isn't realized by hardware alone; it's realized by running AI like a service. That requires three things:

  • Power. AI workloads demand massive parallel compute and GPU acceleration. Without sufficient performance, training slows, inference lags, and innovation stalls.
  • Flexibility. Modern workloads evolve rapidly. Infrastructure must support independent scaling of CPUs and GPUs to meet changing demands without overprovisioning or waste.
  • Composability. Intelligent orchestration is essential. With policy-driven management across clouds, composable infrastructure ensures resources are allocated where they are needed most, automatically and efficiently.

With UCS X-Series and X-Fabric Technology 2.0, customers get all three in a single chassis. As GPU and DPU technologies evolve, the infrastructure evolves with them. That's investment protection in action.

Building for what comes next

This launch is just one milestone in the Cisco composability journey. X-Fabric Technology 2.0 represents the next generation of a platform designed for continuous innovation.

As PCIe, GPU, and DPU technologies advance, including new accelerators like the NVIDIA RTX PRO 6000 Blackwell Server Edition, UCS X-Series will integrate them seamlessly, protecting investments and positioning customers for what comes next.

The future of infrastructure is composable. It's about freedom from silos, agility without compromise, and confidence that your data center can adapt as fast as your business does.

At Cisco, we're not just building servers for today. We're laying the foundation for the AI-driven enterprise of tomorrow.

Ready to see how Cisco and NVIDIA are redefining enterprise


AI researcher says the real danger is an AI that doesn't care if we live or die – NanoApps Medical



  • Eliezer Yudkowsky says superintelligent AI could wipe out humanity by design or by accident.
  • The researcher dismissed Geoffrey Hinton's "AI as mother" idea: "We don't have the technology."
  • Leaders, from Elon Musk to Roman Yampolskiy, have voiced similar doomsday fears.

AI researcher Eliezer Yudkowsky doesn't lose sleep over whether AI models sound "woke" or "reactionary."

Yudkowsky, the founder of the Machine Intelligence Research Institute, sees the real threat as what happens when engineers create a system that is vastly more powerful than humans and completely indifferent to our survival.

"If you have something that is very, very powerful and indifferent to you, it tends to wipe you out on purpose or as a side effect," he said in an episode of The New York Times podcast "Hard Fork" released last Saturday.

Yudkowsky, coauthor of the new book If Anyone Builds It, Everyone Dies, has spent two decades warning that superintelligence poses an existential risk to humanity.

His central claim is that humanity doesn't have the technology to align such systems with human values.

He described grim scenarios in which a superintelligence might deliberately eliminate humanity to prevent rivals from building competing systems, or wipe us out as collateral damage while pursuing its goals.

Yudkowsky pointed to physical limits like Earth's ability to radiate heat. If AI-driven fusion plants and computing centers expanded unchecked, "the humans get cooked in a very literal sense," he said.

He dismissed debates over whether chatbots sound as if they are "woke" or have certain political affiliations, calling them distractions: "There is a core difference between getting things to talk to you a certain way and getting them to act a certain way once they are smarter than you."

Yudkowsky also dismissed the idea of training advanced systems to act like mothers, a theory suggested by Geoffrey Hinton, often called the "godfather of AI," arguing it wouldn't make the technology safer. He argued that such schemes are unrealistic at best.

"We just don't have the technology to make it be nice," he said, adding that even if someone devised a "clever scheme" to make a superintelligence love or protect us, hitting "that narrow target is not going to work on the first try," and if it fails, "everybody will be dead and we won't get to try again."

Critics argue that Yudkowsky's perspective is overly gloomy, but he pointed to cases of chatbots encouraging users toward self-harm, saying that is evidence of a system-wide design flaw.

"If a particular AI model ever talks anybody into going insane or committing suicide, all the copies of that model are the same AI," he said.

Other leaders are sounding alarms, too

Yudkowsky isn't the only AI researcher or tech leader to warn that advanced systems could one day annihilate humanity.

In February, Elon Musk told Joe Rogan that he sees "only a 20% chance of annihilation" from AI, a figure he framed as optimistic.

In April, Hinton said in a CBS interview that there was a "10 to 20% chance" that AI could seize control.

A March 2024 report commissioned by the US State Department warned that the rise of artificial general intelligence could bring catastrophic risks, up to and including human extinction, pointing to scenarios ranging from bioweapons and cyberattacks to swarms of autonomous agents.

In June 2024, AI safety researcher Roman Yampolskiy estimated a 99.9% chance of extinction within the next century, arguing that no AI model has ever been fully secure.

Across Silicon Valley, some researchers and entrepreneurs have responded by reshaping their lives (stockpiling food, building bunkers, or spending down retirement savings) in preparation for what they see as a looming AI apocalypse.