
Fine-Tuning NVIDIA NV-Embed-v1 on the Amazon Polarity Dataset Using LoRA and PEFT: A Memory-Efficient Approach with Transformers and Hugging Face


In this tutorial, we explore how to fine-tune NVIDIA's NV-Embed-v1 model on the Amazon Polarity dataset using LoRA (Low-Rank Adaptation) with PEFT (Parameter-Efficient Fine-Tuning) from Hugging Face. By leveraging LoRA, we efficiently adapt the model without modifying all of its parameters, making fine-tuning feasible on low-VRAM GPUs.
The implementation in this tutorial can be broken down into the following steps:

  1. Authenticating with Hugging Face to access NV-Embed-v1
  2. Loading and configuring the model efficiently
  3. Applying LoRA fine-tuning using PEFT
  4. Preprocessing the Amazon Polarity dataset for training
  5. Optimizing GPU memory usage with `device_map="auto"`
  6. Training and evaluating the model on sentiment classification

By the end of this guide, you'll have a fine-tuned NV-Embed-v1 model optimized for binary sentiment classification, demonstrating how to apply efficient fine-tuning techniques to real-world NLP tasks.

from huggingface_hub import login


login()  # Enter your Hugging Face token when prompted


import os
HF_TOKEN = "...."  # Substitute along with your precise token
os.environ["HF_TOKEN"] = HF_TOKEN


import torch
import torch.distributed as dist
from transformers import AutoModel, AutoTokenizer, TrainingArguments, Trainer
from datasets import load_dataset
from peft import LoraConfig, get_peft_model

First, we log into the Hugging Face Hub using your API token, set the token as an environment variable, and import the libraries needed for distributed training and fine-tuning transformer models with techniques like LoRA.
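If you prefer to skip the interactive prompt, a minimal non-interactive alternative (an addition to the original walkthrough) is to pass the token directly to the same login call, reusing the HF_TOKEN environment variable set above:

import os
from huggingface_hub import login

# Non-interactive login: read the token from the environment variable
# set earlier instead of typing it at the prompt.
login(token=os.environ["HF_TOKEN"])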

MODEL_NAME = "nvidia/NV-Embed-v1"
HF_TOKEN = "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"  # Substitute along with your precise token


tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, token=HF_TOKEN)
model = AutoModel.from_pretrained(
    MODEL_NAME,
    device_map="auto",  # Enable efficient GPU placement
    torch_dtype=torch.float16,  # Use FP16 for efficiency
    trust_remote_code=True,  # NV-Embed-v1 ships custom modeling code
    token=HF_TOKEN
)

This snippet sets the model name and authentication token, then loads the corresponding pretrained tokenizer and model from the Hugging Face model hub. It also configures the model to use automatic GPU allocation and FP16 precision for improved efficiency.
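As an optional sanity check (not part of the original walkthrough), you can inspect how device_map="auto" placed the submodules and roughly how much memory the FP16 weights occupy:

# Optional checks on the model loaded above: layer-to-device mapping
# and the approximate size of the FP16 weights in GB.
print(model.hf_device_map)
print(f"Approx. weight memory: {model.get_memory_footprint() / 1e9:.2f} GB")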

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["self_attn.q_proj", "self_attn.v_proj"],  
    lora_dropout=0.1,
    bias="none",
    task_type="FEATURE_EXTRACTION",
)


model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

With the above code, we configure a LoRA setup with the specified parameters (r=16, lora_alpha=32, and a dropout of 0.1) targeting the self-attention mechanism's query and value projection layers. We then integrate this configuration into the model using PEFT so that only the LoRA layers are trainable for feature extraction, and finally print the trainable parameter counts.
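To make the "low-rank" idea concrete, here is a toy numeric sketch (illustration only, with a hypothetical hidden size; it is not part of the training script). LoRA leaves each targeted projection weight W frozen and learns a small update, so the effective weight becomes W + (lora_alpha / r) * B @ A, where only A and B are trained:

import torch

d, r, alpha = 4096, 16, 32      # hypothetical hidden size; r and alpha match the config above
W = torch.randn(d, d)           # frozen pretrained projection weight
A = torch.randn(r, d) * 0.01    # trainable low-rank factor (small random init)
B = torch.zeros(d, r)           # trainable low-rank factor (zero init, so the update starts at 0)

W_effective = W + (alpha / r) * (B @ A)
print(W_effective.shape)                         # same shape as W
print(2 * d * r, "trainable values vs", d * d)   # far fewer trained parameters than the frozen weight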

dataset = load_dataset("amazon_polarity")


def tokenize_function(examples):
    return tokenizer(examples["content"], padding="max_length", truncation=True)


tokenized_datasets = dataset.map(tokenize_function, batched=True)

Here, we load the Amazon Polarity dataset, define a function to tokenize its "content" field with padding and truncation, and apply this function to convert the dataset into a tokenized format for model training.
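One optional, practical addition (an assumption, not in the original script): the Amazon Polarity training split contains roughly 3.6 million reviews, so for a quick experiment you may want to work with small shuffled subsets before handing the data to the Trainer:

# Optional: carve out small, shuffled subsets for a fast trial run.
small_train = tokenized_datasets["train"].shuffle(seed=42).select(range(10_000))
small_test = tokenized_datasets["test"].shuffle(seed=42).select(range(1_000))

These subsets can then be passed as train_dataset and eval_dataset in place of the full splits below.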

training_args = TrainingArguments(
    output_dir="./outcomes",
    evaluation_strategy="epoch",
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=1,
    save_strategy="epoch",
    save_total_limit=1,
    logging_dir="./logs",
    logging_steps=10,
    fp16=True,  # Mixed precision
)


trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["test"],
)

trainer.train()

With the above code, we set up the training parameters, such as output directories, batch sizes, logging, and FP16 mixed precision, using TrainingArguments, create a Trainer with the model and the tokenized train/test datasets, and finally initiate the training process.
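If you also want an accuracy figure during evaluation (not shown in the original code, and assuming the evaluated outputs are logits over the two polarity classes), a small compute_metrics function can be supplied to the Trainer:

import numpy as np

# Hedged sketch: assumes eval predictions arrive as logits over two classes.
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": float((preds == labels).mean())}

# Pass compute_metrics=compute_metrics when constructing the Trainer above.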

model.save_pretrained("./fine_tuned_nv_embed")
tokenizer.save_pretrained("./fine_tuned_nv_embed")
print("✅ Training Complete! Model Saved.")

Finally, we save the fine-tuned model and its tokenizer to the specified directory and print a confirmation message indicating that training is complete and the model is saved.
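To use the result later, here is a minimal reload sketch (assuming the directories saved above and the MODEL_NAME and HF_TOKEN variables defined earlier): load the frozen base model, attach the saved LoRA adapter with PEFT, and run a review through it.

import torch
from peft import PeftModel
from transformers import AutoModel, AutoTokenizer

# Reload the base model, then attach the fine-tuned LoRA adapter on top of it.
base_model = AutoModel.from_pretrained(
    MODEL_NAME, device_map="auto", torch_dtype=torch.float16,
    trust_remote_code=True, token=HF_TOKEN
)
model = PeftModel.from_pretrained(base_model, "./fine_tuned_nv_embed")
tokenizer = AutoTokenizer.from_pretrained("./fine_tuned_nv_embed")

inputs = tokenizer("This product exceeded my expectations!", return_tensors="pt").to(base_model.device)
with torch.no_grad():
    outputs = model(**inputs)  # embeddings / hidden states for the review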

By the end of this tutorial, we have successfully fine-tuned NV-Embed-v1 on the Amazon Polarity dataset using LoRA and PEFT, ensuring efficient memory usage and scalable adaptation. This tutorial highlights the power of parameter-efficient fine-tuning, enabling domain adaptation of large models without requiring massive computational resources. The approach can be extended to other transformer-based models, making it useful for custom embeddings, sentiment analysis, and other NLP-driven applications. Whether you are working on product review classification, AI-driven recommendation systems, or domain-specific search engines, this method lets you fine-tune large-scale models efficiently on a budget.


Here is the Colab Notebook for the above project.



Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.
