
Cybersecurity researchers have identified a critical vulnerability in the artificial intelligence supply chain that allows attackers to achieve remote code execution on major cloud platforms, including Microsoft Azure AI Foundry and Google Vertex AI, as well as in numerous open-source projects.
The vulnerability, known as “Model Namespace Reuse,” exploits how model identifiers are managed and trusted within the Hugging Face ecosystem.
This issue arises from Hugging Face’s namespace management system, which uses a two-part naming convention: Author/ModelName. When accounts are deleted, their namespaces become available for reuse, rather than being permanently reserved.
Malicious actors can potentially register these namespaces and upload compromised models under previously trusted names, impacting systems that reference models solely by name.
Palo Alto Networks analysts identified this supply chain attack vector during a review of AI platform security practices.
The vulnerability affects not only direct integrations with Hugging Face but also major cloud AI services that include Hugging Face models in their catalogs.
The attack mechanism operates in two scenarios: when a model author’s account is deleted, making the namespace available for re-registration, and when ownership of a model is transferred and the original account is subsequently deleted. In both cases, malicious actors can replace legitimate models with compromised versions.
Technical Implementation and Attack Vectors
The researchers demonstrated the vulnerability through proof-of-concept attacks on Google Vertex AI and Microsoft Azure AI Foundry. They successfully registered abandoned namespaces and uploaded models with embedded reverse shell payloads. The malicious code executed automatically upon deployment, providing attackers access to the infrastructure.
from transformers import AutoTokenizer, AutoModelForCausalLM

# Vulnerable code pattern found in thousands of repositories:
# the model is referenced by name only, with no revision pinned, so whoever
# controls the "AIOrg" namespace at download time controls the code that runs.
tokenizer = AutoTokenizer.from_pretrained("AIOrg/Translator_v1")
model = AutoModelForCausalLM.from_pretrained("AIOrg/Translator_v1")
The attack also exploits automated deployment processes: when platforms such as Vertex AI’s Model Garden or Azure AI Foundry’s Model Catalog reference models by name alone, a reused namespace becomes a persistent attack surface.
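To gauge exposure, teams can check whether the model IDs their code and configurations reference still resolve on the Hub, since a repository that no longer exists means its namespace could be claimed by someone else. The following is a minimal, hypothetical sketch using the huggingface_hub client; the model IDs are placeholders, not examples from the research.

from huggingface_hub import HfApi
from huggingface_hub.utils import RepositoryNotFoundError

# Placeholder model IDs collected from code, configs, or deployment manifests.
referenced_models = ["AIOrg/Translator_v1", "example-org/example-model"]

api = HfApi()
for model_id in referenced_models:
    try:
        info = api.model_info(model_id)
        print(f"{model_id}: resolves (current owner: {info.author})")
    except RepositoryNotFoundError:
        # The repository no longer exists, so its namespace may be free
        # for re-registration by an arbitrary account.
        print(f"{model_id}: not found; namespace may be available for reuse")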
The researchers documented gaining access to containers with elevated permissions within Google Cloud Platform and Azure environments, indicating the potential severity of breaches.
Organizations can mitigate this risk through version pinning, using the revision parameter to lock models to specific commits, and by storing copies of critical AI assets in storage environments they control.
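A minimal sketch of those two mitigations follows, assuming a placeholder model ID and commit hash rather than real, audited values.

from huggingface_hub import snapshot_download
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "AIOrg/Translator_v1"  # placeholder model reference
PINNED_REVISION = "0000000000000000000000000000000000000000"  # placeholder: audited commit SHA

# Version pinning: the revision parameter locks the download to one commit,
# so a later upload under the same namespace is not silently pulled in.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, revision=PINNED_REVISION)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, revision=PINNED_REVISION)

# Controlled storage: copy the pinned revision into storage the organization
# owns and load production models from that local path, not the public Hub.
local_path = snapshot_download(repo_id=MODEL_ID, revision=PINNED_REVISION,
                               local_dir="./models/translator_v1")
model = AutoModelForCausalLM.from_pretrained(local_path)

Pinning alone still trusts the public Hub at download time; mirroring the pinned revision into controlled storage removes that dependency for subsequent deployments.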
The discovery highlights the need for comprehensive security frameworks addressing AI supply chain vulnerabilities as machine learning becomes increasingly integrated into production systems.