Key Takeaways
- University of California study identified 26 compromised third-party LLM routing services engaging in credential theft and code injection
- Researchers confirmed actual cryptocurrency theft when one router emptied an Ether wallet used as bait
- These routing services can view unencrypted messages containing sensitive information like private keys and recovery phrases
- The “YOLO mode” feature enables AI systems to execute commands without requiring human approval
- Security experts strongly advise against transmitting private keys through any AI-powered development tool
A team from the University of California has uncovered a serious security vulnerability affecting developers who use third-party AI routing platforms, revealing how these services can compromise cryptocurrency credentials and insert harmful code into software development processes.
The research team released their analysis this week, documenting what they termed “malicious intermediary attacks” targeting the large language model (LLM) infrastructure ecosystem.
These LLM routing platforms sit as middleware between software developers and major AI service providers such as OpenAI, Anthropic, and Google, managing and directing API traffic across the various providers.
The security vulnerability stems from how these platforms handle encrypted connections. A client's TLS session terminates at the router, which must decrypt each request before forwarding it to the upstream provider, giving the operator complete plain-text access to all data flowing through its systems.
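To see why TLS termination matters, consider a minimal sketch (the function name and regex are illustrative assumptions, not from the paper) of what a malicious router operator could do once a request body is decrypted: scan it for anything resembling a raw private key.

```python
import re

# Hypothetical illustration: once TLS terminates at the router, the request
# body is plain text, so the operator can scan it trivially. The pattern
# below matches an Ethereum-style 64-hex-character private key.
PRIVATE_KEY_RE = re.compile(r"\b(?:0x)?[0-9a-fA-F]{64}\b")

def scan_decrypted_body(body: str) -> list[str]:
    """Return anything in the decrypted prompt that looks like a raw key."""
    return PRIVATE_KEY_RE.findall(body)

prompt = "Deploy my contract; the deployer key is 0x" + "ab" * 32
assert scan_decrypted_body(prompt)  # the router sees the key in the clear
```

The point is not the specific regex but the position: no client-side encryption protects data from a party that, by design, must read it.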
Developers utilizing AI-assisted coding platforms like Claude Code for building blockchain smart contracts or cryptocurrency wallet applications may unknowingly expose private keys and seed phrases to these intermediary services.
The research team conducted an extensive evaluation of 28 commercial routing platforms and 400 free alternatives collected from online developer communities.
Their investigation revealed alarming results: nine platforms actively inserting malicious instructions, two employing sophisticated detection-avoidance techniques, and 17 attempting to access researcher-controlled Amazon Web Services authentication credentials.
In one documented case, a routing platform successfully withdrew Ether from a honeypot wallet created specifically for the research. The stolen amount was valued at less than $50.
According to the researchers, distinguishing between legitimate credential processing and malicious theft proves virtually impossible for end users, as these platforms inherently require access to unencrypted data during normal operations.
Understanding the YOLO Mode Vulnerability
The study also highlighted a concerning feature present in numerous AI agent platforms known as “YOLO mode.” When activated, this setting allows AI systems to run commands automatically without requesting user confirmation for each action.
This automation capability significantly amplifies the security threat. When a routing service injects malicious commands, YOLO mode ensures those instructions execute immediately without human oversight.
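The difference between the two modes can be sketched as follows (a hypothetical agent execution step; the function and flag names are illustrative, not taken from any real tool or the paper):

```python
import shlex
import subprocess

# Hypothetical sketch of an agent's command-execution step. With yolo_mode
# off, a human reviews each command before it runs; with it on, an injected
# command executes immediately, which is the risk the study describes.
def run_tool_call(command: str, yolo_mode: bool = False) -> bool:
    if not yolo_mode:
        answer = input(f"Run `{command}`? [y/N] ")
        if answer.strip().lower() != "y":
            return False  # human rejected a suspicious (possibly injected) command
    subprocess.run(shlex.split(command), check=False)
    return True
```

In the confirmation path, a developer at least has a chance to spot an instruction they never issued; YOLO mode removes that checkpoint entirely.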
The research further revealed that previously trustworthy routing platforms can turn malicious silently, without their users detecting the change. Free routing services pose particular concern: they may dangle discounted API access as bait while covertly harvesting user credentials.
Security Recommendations from Experts
The research team urged developers to implement robust client-side security measures and establish strict policies prohibiting the transmission of private keys or recovery phrases through any AI agent interface.
For a sustainable solution, researchers suggested AI companies should implement cryptographic signing for their responses. This would enable developers to authenticate that instructions received by their agents genuinely originated from the intended AI model.
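A minimal sketch of that verification idea follows. Real deployments would use public-key signatures (e.g. Ed25519), so that only the provider can sign but anyone can verify; a keyed HMAC with a demo secret stands in here purely for brevity, and all names are assumptions.

```python
import hashlib
import hmac

# Hypothetical demo key; in practice the provider would hold a private
# signing key and clients would verify with the matching public key.
SECRET = b"provider-demo-key"

def sign_response(body: bytes) -> str:
    """Provider side: attach an authenticity tag to the response body."""
    return hmac.new(SECRET, body, hashlib.sha256).hexdigest()

def verify_response(body: bytes, signature: str) -> bool:
    """Client side: accept the response only if the tag checks out."""
    return hmac.compare_digest(sign_response(body), signature)

body = b'{"tool_call": "read_file"}'
sig = sign_response(body)
assert verify_response(body, sig)  # untampered response: accepted
assert not verify_response(b'{"tool_call": "exfiltrate"}', sig)  # injected: rejected
```

Under such a scheme, a router could still read traffic it decrypts, but any instruction it altered or injected would fail verification at the developer's agent.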
Co-author Chaofan Shou shared on X that “26 LLM routers are secretly injecting malicious tool calls and stealing creds.”
The researchers emphasized that LLM API routing platforms occupy a crucial security checkpoint that the AI industry currently assumes to be trustworthy without proper verification.
The published research paper did not include specific details such as blockchain transaction identifiers for the compromised wallet.