
DeepSeek R1 on Databricks | Databricks Blog


DeepSeek-R1 is a state-of-the-art open model that, for the first time, brings 'reasoning' capability to the open source community. Notably, the release also includes distillations of that capability into the Llama-70B and Llama-8B models, offering an attractive combination of speed, cost-effectiveness, and now 'reasoning' capability. We're excited to share how you can easily download and run the distilled DeepSeek-R1-Llama models in Mosaic AI Model Serving, and benefit from its security, best-in-class performance optimizations, and integration with the Databricks Data Intelligence Platform. Now, with these open 'reasoning' models, you can build agent systems that reason even more intelligently over your data.


Deploying DeepSeek-R1-Distill-Llama Models on Databricks

To download, register, and deploy the DeepSeek-R1-Distill-Llama models on Databricks, use the notebook included here, or follow the straightforward instructions below:


1. Spin up the required compute¹ and load the model and its tokenizer:

This process should take several minutes, as we download 32GB worth of model weights in the case of Llama 8B.
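A minimal sketch of this step, assuming the 8B distilled checkpoint from Hugging Face (swap in the 70B model ID if that is the variant you are deploying):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hugging Face model ID for the 8B distilled variant
model_id = "deepseek-ai/DeepSeek-R1-Distill-Llama-8B"

# Download the tokenizer and model weights onto the cluster
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps the memory footprint manageable
    device_map="auto",           # places weights on GPU if present, otherwise CPU
)
```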


2. Then, register the model and the tokenizer as a transformers model. mlflow.transformers makes registering models in Unity Catalog simple – just configure your model size (in this case, 8B) and the model name.
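A hedged sketch of the registration step; the catalog, schema, and registered model name below are placeholders, not the blog's exact values:

```python
import mlflow

# Register into Unity Catalog rather than the workspace model registry
mlflow.set_registry_uri("databricks-uc")

# catalog.schema.model_name – adjust to your own catalog and schema
registered_name = "main.default.deepseek_r1_distill_llama_8b"

with mlflow.start_run():
    mlflow.transformers.log_model(
        transformers_model={"model": model, "tokenizer": tokenizer},
        artifact_path="model",
        task="llm/v1/chat",  # chat-style signature so Model Serving exposes a chat API
        registered_model_name=registered_name,
    )
```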

¹ We used ML Runtime 15.4 LTS and a g4dn.4xlarge single-node cluster for the 8B model and a g6e.4xlarge for the 70B model. You don't need GPUs per se to deploy the model within the notebook, as long as the compute used has sufficient memory capacity.


3. To serve this model using our highly optimized Model Serving engine, simply navigate to Serving and launch an endpoint with your registered model!

Select served entity

Once the endpoint is ready, you can easily query the model via our API, or use the Playground to start prototyping your applications.
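A minimal sketch of querying the endpoint once it is ready; the endpoint name and workspace host are placeholders, and the OpenAI-compatible route shown here is one supported way of calling a Databricks Model Serving endpoint:

```python
from openai import OpenAI

client = OpenAI(
    api_key="<DATABRICKS_TOKEN>",                       # personal access token
    base_url="https://<workspace-host>/serving-endpoints",
)

response = client.chat.completions.create(
    model="deepseek_r1_distill_llama_8b",               # your serving endpoint name
    messages=[{"role": "user", "content": "How many r's are in 'strawberry'?"}],
    max_tokens=512,
)
print(response.choices[0].message.content)
```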

Playground demo

With Mosaic AI Model Serving, deploying this model is both simple and powerful, taking advantage of our best-in-class performance optimizations as well as integration with the Lakehouse for governance and security.

When to use reasoning models

One unique aspect of the DeepSeek-R1 series of models is their capacity for extended chain-of-thought (CoT), similar to the o1 models from OpenAI. You can see this in our Playground UI, where the collapsible "Thinking" section shows the CoT traces of the model's reasoning. This can lead to higher quality answers, particularly for math and coding, but at the cost of significantly more output tokens. We also recommend users follow DeepSeek's Usage Guidelines when interacting with the model.

These are early innings in learning how to use reasoning models, and we're excited to hear what new data intelligence systems our customers can build with this capability. We encourage our customers to experiment with their own use cases and let us know what you find. Look out for additional updates in the coming weeks as we dive deeper into R1, reasoning, and how to build data intelligence on Databricks.

Resources
