Microsoft files lawsuit against LLMjacking gang that bypassed AI safeguards

by CybrGPT

The civil suit against four members of Storm-2139 underscores an emerging trend that blends stolen LLM credentials with AI jailbreaking, producing financial gains for cybercriminals and losses for the companies they exploit.


Microsoft has filed a civil lawsuit against an international gang of cybercriminals that exploited stolen credentials to access generative AI services, including its own. The gang, tracked as Storm-2139, used the stolen credentials along with AI jailbreaking techniques to set up paid services of their own capable of generating content that bypassed built-in ethical safeguards and violated the terms of service of the abused large language models (LLMs).

The lawsuit and the activities it alleges shed light on the black market that has emerged around stolen credentials that grant access to AI chatbots or to cloud platforms where a range of LLMs can be deployed. Attacks that abuse LLM resources, often at significant financial cost to unsuspecting victims, have become known as LLMjacking.

“Storm-2139 is organized into three main categories: creators, providers, and users,” lawyers with Microsoft’s Digital Crimes Unit wrote in a blog post. “Creators developed the illicit tools that enabled the abuse of AI-generated services. Providers then modified and supplied these tools to end users, often with varying tiers of service and payment. Finally, users utilized these tools to generate violating synthetic content, often centered around celebrities and sexual imagery.”

Microsoft has managed to identify four of the 10 persons believed to be part of Storm-2139: Arian Yadegarnia, aka “Fiz,” of Iran; Alan Krysiak, aka “Drago,” of the United Kingdom; Ricky Yuen, aka “cg-dot,” of Hong Kong; and Phát Phùng Tấn, aka “Asakuri,” of Vietnam. Cg-dot is believed to be one of the two “creators,” while the other three were “providers” in the criminal operation.

The company said it has also identified two members based in the US, in Illinois and Florida, but for now, it’s keeping those identities secret because of ongoing criminal investigations.

Gang members out each other

Microsoft originally announced in January that it was taking legal action against cybercriminals abusing its AI services, and subsequently seized a website that was critical to the Storm-2139 operation. The seizure and the unsealed legal filings immediately generated chatter on the gang’s communication channels, with members and users speculating about whose identities might have been exposed. Gang members also shared the personal information and photographs of Microsoft’s lawyers.

“As a result, Microsoft’s counsel received a variety of emails, including several from suspected members of Storm-2139 attempting to cast blame on other members of the operation,” Microsoft’s Digital Crimes Unit said.

LLMjacking can cost organizations a lot of money

LLMjacking is a continuation of the cybercriminal practice of abusing stolen cloud account credentials for various illegal operations, such as cryptojacking — abusing hacked cloud computing resources to mine cryptocurrency. The difference is that large quantities of API calls to LLMs can quickly rack up huge costs, with researchers estimating potential costs of over $100,000 per day when querying cutting-edge models.
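
To put that figure in perspective, here is a minimal back-of-the-envelope sketch in Python; the per-token prices, request volume, and token counts are illustrative assumptions, not figures from any provider or from the research cited below:

```python
# Illustrative LLMjacking cost model. Every number below is an assumption
# chosen for the example, not a quoted price from any AI provider.

PRICE_PER_M_INPUT = 15.00    # USD per million input tokens (assumed frontier-model rate)
PRICE_PER_M_OUTPUT = 75.00   # USD per million output tokens (assumed)

requests_per_day = 500_000   # assumed volume pushed through a hijacked account
input_tokens = 1_000         # assumed average prompt size per request
output_tokens = 1_000        # assumed average completion size per request

daily_cost = requests_per_day * (
    input_tokens / 1e6 * PRICE_PER_M_INPUT
    + output_tokens / 1e6 * PRICE_PER_M_OUTPUT
)
print(f"Estimated cost to the account owner: ${daily_cost:,.0f}/day")  # ~ $45,000/day
```

Even under these modest assumptions the bill lands in the tens of thousands of dollars per day, and heavier prompts or larger volumes push it past the six-figure estimates researchers have cited.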

Last September, security firm Sysdig reported a tenfold increase in observed rogue requests to Amazon Bedrock APIs and a doubling of the number of IP addresses involved in such attacks.

Amazon Bedrock is an AWS service that allows organizations to easily deploy and use LLMs from multiple AI companies, augment them with their own datasets, and build agents and applications around them. The service supports a long list of API actions through which models can be managed and interacted with programmatically. Microsoft runs a similar service called Azure AI Foundry, and Google has Vertex AI.
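
As a rough illustration of that programmatic surface, the sketch below uses the AWS SDK for Python (boto3) to list the foundation models available to an account and to invoke one of them. The model ID and request body are assumptions for the example; each model provider on Bedrock defines its own payload schema.

```python
import json

import boto3

# Management-plane client: enumerate foundation models available to the account.
bedrock = boto3.client("bedrock", region_name="us-east-1")
for model in bedrock.list_foundation_models()["modelSummaries"]:
    print(model["modelId"])

# Data-plane client: send a prompt to one model.
# The model ID and body schema below are illustrative and provider-specific.
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
response = runtime.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [
            {"role": "user", "content": [{"type": "text", "text": "Summarize what Amazon Bedrock does."}]}
        ],
    }),
)
print(json.loads(response["body"].read()))
```

Anyone holding valid AWS credentials for the account can issue these same calls, which is why stolen keys translate so directly into LLM usage billed to the victim.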

Sysdig initially saw attackers abusing AWS credentials to access Bedrock models that victim organizations had already deployed, but later began seeing attackers attempt to enable and deploy new models in the compromised accounts themselves.
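
Defenders can watch for that kind of activity in their own accounts. The hedged sketch below uses boto3 to pull recent CloudTrail management events emitted by the Bedrock service, so unexpected model-management calls stand out; which specific event names matter depends on the environment, and Bedrock invocation (data-plane) logging must be enabled separately.

```python
from datetime import datetime, timedelta, timezone

import boto3

# Pull the last 24 hours of CloudTrail management events from the Bedrock service.
# Model-management activity in an account that never uses Bedrock is a common
# LLMjacking tell.
cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
end = datetime.now(timezone.utc)
start = end - timedelta(days=1)

paginator = cloudtrail.get_paginator("lookup_events")
pages = paginator.paginate(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "bedrock.amazonaws.com"}
    ],
    StartTime=start,
    EndTime=end,
)
for page in pages:
    for event in page["Events"]:
        print(event["EventTime"], event["EventName"], event.get("Username", "?"))
```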

Earlier this month, after the release of the DeepSeek R1 model, Sysdig detected LLMjacking attackers targeting it within days. The company also discovered over a dozen proxy servers that used stolen credentials across many different services, including OpenAI, AWS, and Azure.

“LLMjacking is no longer just a potential fad or trend,” the security company warned. “Communities have been built to share tools and techniques. ORPs [OpenAI Reverse Proxies] are forked and customized specifically for LLMjacking operations. Cloud credentials are being tested for LLM access before being sold.”
