AMD Releases AMD-135M: AMD’s First Small Language Model Series Trained from Scratch on AMD Instinct™ MI250 Accelerators Using 670B Tokens



AMD has recently released its new language model, AMD-135M or AMD-Llama-135M, a notable addition to the landscape of AI models. Based on the LLaMA2 model architecture, the model has 135 million parameters and is optimized for performance on AMD’s latest GPUs, specifically the MI250. This release marks an important milestone for AMD in its effort to establish a strong foothold in the competitive AI industry.

Background and Technical Specs

AMD-135M is built on the LLaMA2 model architecture and integrates features that support a range of applications, particularly text generation and language comprehension. The model is designed to work seamlessly with the Hugging Face Transformers library, making it accessible to developers and researchers. It handles complex tasks with a hidden size of 768, 12 layers (blocks), and 12 attention heads while maintaining high efficiency. The activation function is SwiGLU, and layer normalization is based on RMSNorm. Its positional embedding uses the RoPE method, enhancing its ability to understand and generate contextual information accurately.
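As a rough illustration, the architecture described above maps onto a Hugging Face `LlamaConfig` along the following lines. This is a minimal sketch: the values not stated in the article (vocabulary size, MLP intermediate size, norm epsilon) are assumptions for illustration, not official numbers.

```python
# Sketch of the AMD-135M architecture as a Hugging Face LlamaConfig.
# Values not stated in the article are illustrative assumptions.
from transformers import LlamaConfig, LlamaForCausalLM

config = LlamaConfig(
    hidden_size=768,               # hidden dimension stated in the article
    num_hidden_layers=12,          # 12 transformer blocks
    num_attention_heads=12,        # 12 attention heads (multi-head attention)
    max_position_embeddings=2048,  # 2048-token context window (RoPE positions)
    hidden_act="silu",             # LLaMA2's SwiGLU MLP uses SiLU gating
    rms_norm_eps=1e-5,             # RMSNorm; epsilon is an assumption
    vocab_size=32000,              # assumption: LLaMA2's default vocabulary
    intermediate_size=2048,        # assumption: MLP width is not given in the article
)

model = LlamaForCausalLM(config)
print(sum(p.numel() for p in model.parameters()))  # lands at roughly 135M
```

With these assumed defaults, the parameter count comes out at roughly 135 million, consistent with the model’s name.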

The release of this model is not just about the hardware specifications but also about the software and datasets that power it. AMD-135M has been pretrained on two key datasets: SlimPajama and Project Gutenberg. SlimPajama is a deduplicated version of RedPajama, which includes sources such as CommonCrawl, C4, GitHub, Books, ArXiv, Wikipedia, and StackExchange. The Project Gutenberg dataset provides access to a vast repository of classical texts, enabling the model to learn diverse language structures and vocabularies.
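To give a concrete sense of the pretraining data, the sketch below streams a few SlimPajama records with the Hugging Face `datasets` library. The `cerebras/SlimPajama-627B` repository id and the record field names are assumptions about where and how the deduplicated corpus is hosted.

```python
# Sketch: streaming a few records from SlimPajama (the deduplicated
# RedPajama variant described above) via the Hugging Face datasets library.
# Repo id and field names are assumptions, not taken from the article.
from datasets import load_dataset

stream = load_dataset("cerebras/SlimPajama-627B", split="train", streaming=True)
for i, record in enumerate(stream):
    print(record["text"][:80])  # first 80 characters of the document text
    print(record["meta"])       # provenance metadata (CommonCrawl, C4, ...)
    if i == 2:
        break
```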

Key Features of AMD-135M

AMD-135M has notable features that set it apart from other models on the market. Some of the key features include:

  • Parameter Size: 135 million parameters, allowing for efficient processing and generation of text.
  • Number of Layers: 12 layers with 12 attention heads for in-depth analysis and contextual understanding.
  • Hidden Size: 768, offering the capability to handle a variety of language modeling tasks.
  • Attention Type: Multi-Head Attention, enabling the model to focus on different aspects of the input data simultaneously.
  • Context Window Size: 2048, ensuring the model can effectively manage longer input sequences.
  • Pretraining and Finetuning Datasets: The SlimPajama and Project Gutenberg datasets are used for pretraining, and the StarCoder dataset is used for finetuning, ensuring comprehensive language understanding.
  • Training Configuration: The model uses a learning rate of 6e-4 with a cosine learning rate schedule, and it has undergone multiple epochs of training and finetuning (see the sketch after this list).
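For readers who want to see what that last item looks like in code, here is a minimal sketch of an optimizer at a 6e-4 peak learning rate with a cosine schedule, using the Transformers scheduler helper. The optimizer type, warmup length, and total step count are illustrative assumptions, not AMD’s published values.

```python
# Sketch of the stated training configuration: peak LR 6e-4 decayed on a
# cosine schedule. Optimizer choice, warmup, and step counts are assumptions.
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # stand-in for the actual language model

optimizer = torch.optim.AdamW(model.parameters(), lr=6e-4)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=2_000,      # assumption: warmup length is not stated
    num_training_steps=100_000,  # assumption: total steps are not stated
)

for step in range(3):  # training-loop skeleton
    loss = model(torch.randn(4, 8)).pow(2).mean()  # placeholder loss
    loss.backward()
    optimizer.step()    # apply gradients at the current learning rate
    scheduler.step()    # advance the cosine decay by one step
    optimizer.zero_grad()
```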

Deployment and Usage

AMD-135M can be easily deployed and used through the Hugging Face Transformers library. For deployment, users can load the model using the `LlamaForCausalLM` and `AutoTokenizer` modules. This ease of integration makes it a good choice for developers looking to incorporate language modeling capabilities into their applications. Additionally, the model is compatible with speculative decoding for AMD’s CodeLlama, further extending its usability for code generation tasks; a hedged sketch of both steps follows below. This makes AMD-135M particularly useful for developers working on programming-related text generation and other NLP applications.
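The sketch below loads the model as described and then uses it as the draft model in Transformers’ assisted generation, which is one way to realize the speculative-decoding pairing the article mentions. The Hub repository ids are assumptions based on the article, not verified identifiers.

```python
# Sketch: loading AMD-135M via Transformers and using it as a draft model
# for speculative (assisted) decoding. Repo ids are assumptions.
from transformers import AutoTokenizer, LlamaForCausalLM

tokenizer = AutoTokenizer.from_pretrained("amd/AMD-Llama-135M")  # assumed repo id
draft = LlamaForCausalLM.from_pretrained("amd/AMD-Llama-135M")

# Plain text generation with the small model alone.
inputs = tokenizer("The AMD Instinct MI250 accelerator is", return_tensors="pt")
outputs = draft.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Speculative decoding: the 135M model drafts tokens that a larger CodeLlama
# model verifies. Assisted generation requires the draft and target to share
# a tokenizer/vocabulary, which the article implies holds for this pairing.
target = LlamaForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")  # assumed repo id
code_inputs = tokenizer("def quicksort(arr):", return_tensors="pt")
code_outputs = target.generate(**code_inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(code_outputs[0], skip_special_tokens=True))
```

Because the draft model only proposes tokens that the target model verifies, the output distribution matches the target model’s while the small model carries most of the decoding work.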

Performance Evaluation

The performance of AMD-135M has been evaluated using the lm-evaluation-harness on various NLP benchmarks, such as SciQ, WinoGrande, and PIQA. The results indicate that the model is highly competitive, offering performance comparable to other models in its parameter range. For instance, it achieved a pass rate of approximately 32.31% on the HumanEval dataset using MI250 GPUs, a strong result for a model of this size. This suggests that AMD-135M can serve as a reliable model for research and commercial applications in natural language processing.
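For readers who want to reproduce a comparable run, below is a minimal sketch using lm-evaluation-harness’s Python API (v0.4+ assumed) on the benchmarks named above. The Hub repository id is an assumption, and the article does not publish the exact invocation AMD used.

```python
# Sketch: evaluating the model with EleutherAI's lm-evaluation-harness
# (v0.4+ Python API assumed). Repo id and batch size are assumptions.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                   # Hugging Face model backend
    model_args="pretrained=amd/AMD-Llama-135M",   # assumed repo id
    tasks=["sciq", "winogrande", "piqa"],         # benchmarks named in the article
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)  # per-task accuracy and related metrics
```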

In conclusion, the release of AMD-135M underscores AMD’s commitment to advancing AI technologies and providing accessible, high-performance models for the research community. Its robust architecture and training recipe position AMD-135M as a serious competitor in the rapidly evolving landscape of AI models.


Check out the Model on Hugging Face and the Details. All credit for this research goes to the researchers of this project.



Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable to a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.


