We’re excited to announce the second version of the Databricks AI Security Framework (DASF 2.0), available for download now! Organizations racing to harness AI’s potential need both the ‘fuel’ of innovation and the ‘brakes’ of governance and risk management. The DASF bridges this gap, serving as a comprehensive guide to AI risk management and enabling secure, impactful AI deployments at your organization.
This blog provides an overview of the DASF, explores key insights gained since the original version was released, introduces new resources to deepen your understanding of AI security, and shares updates on our industry contributors.
What is the Databricks AI Security Framework, and what’s new in version 2.0?
The DASF is a framework and whitepaper for managing AI security and governance risks. It enumerates the 12 canonical AI system components, their respective risks, and actionable controls to mitigate each risk. Created by the Databricks Security and ML teams in partnership with industry experts, it bridges the gap between business, data, governance, and security teams, offering practical tools and actionable strategies to demystify AI, foster collaboration, and ensure effective implementation.
Unlike other frameworks, the DASF 2.0 builds on existing standards to provide an end-to-end risk profile for AI deployments. It delivers defense-in-depth controls that are straightforward for your organization to operationalize, and it can be applied to the data and AI platform of your choice.
In the DASF 2.0, we’ve identified 62 technical security risks and mapped them to 64 recommended controls for managing the risks of AI models. We’ve also expanded the mappings to leading industry AI risk frameworks and standards, including MITRE ATLAS, OWASP LLM & ML Top 10, NIST 800-53, NIST CSF, HITRUST, ENISA’s Securing ML Algorithms, ISO 42001, ISO 27001:2022, and the EU AI Act.
Operationalizing the DASF: check out the new compendium and the companion instructional video!
We’ve received valuable feedback as we’ve shared the DASF at industry events, workshops, and customer meetings. Many of you have asked for more resources to make it easier to navigate the DASF, operationalize it, and map your controls effectively.
In response, we’re excited to announce the release of the DASF compendium document (Google Sheet, Excel). This resource is designed to help operationalize the DASF by organizing and applying its risks, threats, controls, and mappings to industry-recognized standards from organizations such as MITRE, OWASP, NIST, ISO, HITRUST, and more. We’ve also created a companion instructional video that provides a guided walkthrough of the DASF and its compendium.
Our goal with these updates is to make the DASF easier to adopt, empowering organizations to implement AI systems securely and confidently. If you’re eager to dive in, our team recommends the following approach:
- Understand your stakeholders, deployment models, and AI use cases: Start with a business use case, leveraging the DASF whitepaper to identify the best-fit AI deployment model. Choose from 80+ Databricks Solution Accelerators to guide your journey. Deployment models include Predictive ML Models, Foundation Model APIs, Fine-tuned and Pre-trained LLMs, RAG, AI Agents with LLMs, and External Models. Ensure clarity on AI development within your organization, including use cases, datasets, compliance needs, processes, applications, and responsible stakeholders.
- Review the 12 AI system components and 62 risks: Understand the 12 AI system components, the traditional cybersecurity and novel AI security risks associated with each component, and the responsible stakeholders (e.g., data engineers, data scientists, governance officers, and security teams). Use the DASF to foster collaboration across these groups throughout the AI lifecycle.
- Review the 64 available mitigation controls: Each risk is mapped to prioritized mitigation controls, starting with perimeter and data security. These risks and controls are further aligned with 10 industry standards, providing additional detail and clarity.
- Use the DASF compendium to localize risks, control applicability, and risk impacts: Start by using the “DASF Risk Applicability” tab to identify the risks relevant to your use case by selecting one or more AI deployment models. Next, review the associated risk impacts, compliance requirements, and mitigation controls. Finally, document key details of your use case, including the AI use case description, datasets, stakeholders, compliance considerations, and applications.
- Implement the prioritized controls: Use the “DASF Control Applicability” tab of the compendium to review the applicable DASF controls and implement the mitigation controls on your data platform across the 12 AI components. If you’re using Databricks, we’ve included links with detailed instructions on how to deploy each control on our platform; for a taste of what that can look like, see the sketch just after this list.
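To make that last step concrete, here is a minimal sketch of one data-security control in the spirit of the DASF’s data-layer mitigations: tightening read access to a training dataset with Unity Catalog grants. This is an illustration under stated assumptions, not the authoritative implementation; it assumes a Unity Catalog-enabled workspace, and the catalog, schema, table, and group names are hypothetical. The per-control links in the DASF remain the definitive guide.

```python
# Illustrative sketch only: restrict read access to a training dataset
# via Unity Catalog grants. All catalog, schema, table, and group names
# are hypothetical. `spark` is the SparkSession Databricks notebooks provide.

TABLE = "ml_catalog.training.raw_features"

# Remove broad, workspace-wide access to the raw training data.
spark.sql(f"REVOKE ALL PRIVILEGES ON TABLE {TABLE} FROM `account users`")

# Grant the minimum privileges the ML engineering group needs to read it.
spark.sql("GRANT USE CATALOG ON CATALOG ml_catalog TO `ml-engineers`")
spark.sql("GRANT USE SCHEMA ON SCHEMA ml_catalog.training TO `ml-engineers`")
spark.sql(f"GRANT SELECT ON TABLE {TABLE} TO `ml-engineers`")
```

Least-privilege grants like these cover only one slice of the framework; the compendium walks through the full set of controls for each of the 12 components.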
Implement the DASF at your organization with new AI upskilling resources from Databricks
According to a recent Economist Impact study, surveyed data and AI leaders identified upskilling and fostering a growth mindset as key priorities for driving AI adoption in 2025. As part of the DASF 2.0 launch, we have resources to help you understand AI and ML concepts and apply AI security best practices at your organization.
- Databricks Academy training: We recommend taking the new AI Security Fundamentals course, now available on the Databricks Academy. This one-hour course is a great primer on the AI security topics highlighted in the DASF and a useful starting point before diving into the whitepaper. You’ll also receive an accreditation for your LinkedIn profile upon completion. If you’re new to AI and ML concepts, start with our Generative AI Fundamentals course.
- How-to videos: We have recorded DASF overview and how-to videos for quick consumption. You can find these videos on our Security Best Practices YouTube channel.
- In-person or virtual workshop: Our team offers an AI Risk Workshop, a live walkthrough of the concepts outlined in the DASF that focuses on overcoming obstacles to operationalizing AI risk management. This half-day event targets Director+ leaders in governance, data, privacy, legal, IT, and security functions.
- Deployment help: The Security Analysis Tool (SAT) monitors adherence to security best practices in Databricks workspaces on an ongoing basis. We recently upgraded the SAT to streamline setup and enhance its checks, aligning them with the DASF for improved coverage of AI security risks.
- DASF AI assistant: Databricks customers can configure a Databricks AI Security Framework (DASF) AI assistant right in their own workspace with no prior Databricks skills, interact with DASF content in plain language, and get answers; a sketch of what that interaction can feel like follows this list.
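To give a flavor of that plain-language interaction, here is a hedged sketch of asking a DASF-style question against a Databricks foundation model serving endpoint with the Databricks Python SDK. This is not the official assistant setup; the endpoint name and question are placeholders, and the assistant’s own configuration instructions are the supported path.

```python
# Hedged sketch (not the official DASF assistant setup): ask a DASF-style
# question against a Databricks foundation model serving endpoint.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.serving import ChatMessage, ChatMessageRole

w = WorkspaceClient()  # authenticates from the notebook or environment config

response = w.serving_endpoints.query(
    name="databricks-meta-llama-3-3-70b-instruct",  # placeholder endpoint name
    messages=[
        ChatMessage(
            role=ChatMessageRole.USER,
            content="Which DASF controls help mitigate training data poisoning?",
        )
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```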
Building a community with AI industry groups, customers, and partners
Ensuring that the DASF evolves in step with the current AI regulatory environment and the emerging threat landscape is a top priority. Since the launch of version 1.0, we have formed an AI working group of industry colleagues, customers, and partners to stay closely aligned with these developments. We want to thank our colleagues in the working group and our pre-reviewers, including Complyleft, The FAIR Institute, Ethriva Inc, Arhasi AI, Carnegie Mellon University, and Rakesh Patil from JPMC. You can find the complete list of contributors in the acknowledgments section of the DASF. If you would like to participate in the DASF AI Working Group, please contact our team at [email protected].
Here’s what some of our top advocates have to say:
“AI is revolutionizing healthcare delivery through innovations like the CLEVER GenAI pipeline, which processes over 1.5 million medical notes daily to classify key social determinants impacting veteran care. This pipeline is built on a strong security foundation, incorporating NIST 800-53 controls and leveraging the Databricks AI Security Framework to ensure compliance and mitigate risks. Looking ahead, we’re exploring ways to extend these capabilities through Infrastructure as Code and secure containerization strategies, enabling agents to be dynamically deployed and scaled from repositories while maintaining rigorous security standards.” – Joseph Raetano, Artificial Intelligence Lead, Summit Data Analytics & AI Platform, U.S. Department of Veterans Affairs
“DASF is the essential tool in transforming AI risk quantification into an operational reality. With the FAIR-AI Risk approach now in its second year, DASF 2.0 enables CISOs to bridge the gap between cybersecurity and business strategy, speaking a common language grounded in measurable financial impact.” – Jacqueline Lebo, Founder, AI Workgroup, The FAIR Institute and Risk Advisory Manager, Safe Security
“As AI continues to transform industries, securing these systems from sophisticated and unique cybersecurity attacks is more critical than ever. The Databricks AI Security Framework is a great asset for companies to lead from the front on both innovation and security. With the DASF, companies are equipped to better understand AI risks, and to find the tools and resources to mitigate those risks as they continue to innovate.” – Ian Swanson, CEO, Protect AI
“With the Databricks AI Security Framework, we’re able to mitigate AI risks thoughtfully and transparently, which is invaluable for building board and employee trust. It’s a game changer that allows us to bring AI into the business and be among the 15% of organizations getting AI workloads to production safely and with confidence.” — Coastal Community Bank
“Within the context of data and AI, conversations around security are few. The Databricks AI Security Framework addresses this often-neglected aspect of AI and ML work, serving as a best-in-class guide not only for understanding AI security risks, but also for how to mitigate them.” – Josue A. Bogran, Architect at Kythera Labs & Advisor to SunnyData.ai
“We have used the Databricks AI Security Framework to help enhance our organization’s security posture for managing ML and AI security risks. With the Databricks AI Security Framework, we are now more confident in exploring possibilities with AI and data analytics while ensuring we have the right data governance and security measures in place.” – Muhammad Shami, Vice President, Jackson National Life Insurance Company
Download the Databricks AI Security Framework 2.0 today!
The Databricks AI Security Framework 2.0 and its compendium (Google Sheet, Excel) are now available for download. To learn about upcoming AI Risk Workshops, or to request a dedicated in-person or virtual workshop for your organization, contact us at [email protected] or reach out to your account team. We also have more thought leadership content coming soon to provide further insights into managing AI governance. For more on managing AI security risks, visit the Databricks Security and Trust Center.