
Malicious ML Models on Hugging Face Leverage Broken Pickle Format to Evade Detection


Feb 08, 2025 | Ravie Lakshmanan | Artificial Intelligence / Supply Chain Security


Cybersecurity researchers have uncovered two malicious machine learning (ML) models on Hugging Face that leveraged an unusual technique of "broken" pickle files to evade detection.

"The pickle files extracted from the mentioned PyTorch archives revealed the malicious Python content at the beginning of the file," ReversingLabs researcher Karlo Zanki said in a report shared with The Hacker News. "In both cases, the malicious payload was a typical platform-aware reverse shell that connects to a hard-coded IP address."


The technique has been dubbed nullifAI, as it involves clear-cut attempts to sidestep existing safeguards put in place to identify malicious models. The Hugging Face repositories are listed below:

  • glockr1/ballr7
  • who-r-u0000/0000000000000000000000000000000000000

It's believed that the models are more of a proof-of-concept (PoC) than an active supply chain attack scenario.

The pickle serialization format, commonly used for distributing ML models, has been repeatedly found to be a security risk, as it offers ways to execute arbitrary code as soon as the files are loaded and deserialized.
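To illustrate the underlying risk, here is a minimal Python sketch (not taken from the malicious models) showing how a pickled object's __reduce__ method makes the loader run an attacker-chosen function at deserialization time; the payload here is a harmless echo rather than a reverse shell.

```python
import os
import pickle

class Payload:
    """Toy stand-in for a malicious object embedded in a model file."""
    def __reduce__(self):
        # __reduce__ tells the unpickler how to rebuild this object:
        # here, by calling os.system() with an attacker-chosen command.
        return (os.system, ("echo arbitrary code ran during unpickling",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # the echo command runs as a side effect of loading
```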


The two models detected by the cybersecurity company are stored in the PyTorch format, which is essentially a compressed pickle file. While PyTorch uses the ZIP format for compression by default, the identified models were found to be compressed using the 7z format.

Consequently, this behavior made it possible for the models to fly under the radar and avoid getting flagged as malicious by Picklescan, a tool used by Hugging Face to detect suspicious pickle files.
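One rough way to see the difference is to inspect the container's magic bytes: a default torch.save() checkpoint begins with the ZIP signature, while a 7z archive carries a different one. The helper below is hypothetical (it is not part of Picklescan or PyTorch) and only checks those leading bytes.

```python
# Hypothetical helper, not part of Picklescan or PyTorch: it only inspects the
# container's magic bytes to spot checkpoints that are not standard ZIP files.
ZIP_MAGIC = b"PK\x03\x04"               # default torch.save() ZIP container
SEVENZIP_MAGIC = b"7z\xbc\xaf\x27\x1c"  # 7z archive signature

def sniff_model_container(path: str) -> str:
    with open(path, "rb") as f:
        header = f.read(6)
    if header.startswith(ZIP_MAGIC):
        return "zip (expected for a PyTorch checkpoint)"
    if header.startswith(SEVENZIP_MAGIC):
        return "7z (unusual for a PyTorch checkpoint)"
    return "unknown container format"
```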

"An interesting thing about this pickle file is that the object serialization (the purpose of the pickle file) breaks shortly after the malicious payload is executed, resulting in the failure of the object's decompilation," Zanki said.


Further analysis has revealed that such broken pickle files can still be partially deserialized owing to the discrepancy between Picklescan and how deserialization works, causing the malicious code to be executed even though the tool throws an error message. The open-source utility has since been updated to rectify this bug.

"The reason for this behavior is that the object deserialization is performed on pickle files sequentially," Zanki noted.

"Pickle opcodes are executed as they are encountered, until all opcodes are executed or a broken instruction is encountered. In the case of the discovered model, since the malicious payload is inserted at the beginning of the pickle stream, execution of the model wouldn't be detected as unsafe by Hugging Face's existing security scanning tools."
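The contrived sketch below reproduces that sequential behavior with a harmless print() call standing in for the reverse shell: the payload sits at the start of the stream, the trailing STOP opcode is then corrupted, and the call still runs before pickle.loads() raises an error.

```python
import pickle

class Beacon:
    def __reduce__(self):
        # Harmless stand-in for the reverse-shell payload described above.
        return (print, ("payload executed before the stream broke",))

stream = pickle.dumps(Beacon(), protocol=0)
broken = stream[:-1] + b"\xff"  # overwrite the trailing STOP opcode with junk

try:
    pickle.loads(broken)
except pickle.UnpicklingError as exc:
    # By the time this error surfaces, print() has already run: opcodes are
    # executed as they are read, before the stream is found to be broken.
    print("unpickling failed afterwards:", exc)
```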



