Government-backed threat actors are currently using Google’s Gemini AI service to expand their capabilities, part of an effort by hackers of all skill levels to leverage publicly available generative artificial intelligence (genAI) models for crime and espionage.
That’s the conclusion of a report issued today by Google’s Threat Intelligence Group, which shows how threat actors are using AI and calls on governments and the private sector to work together to reduce the risks of abuse.
One goal of the report is to show infosec leaders that the abuse of AI is a current threat, not a theoretical one.
“While generative AI can be used by threat actors to accelerate and amplify attacks, they haven’t yet been able to use AI to develop novel capabilities,” wrote Kent Walker, president of global affairs at Google’s parent, Alphabet, in a blog accompanying the report.
“In other words, the defenders are still ahead — for now.”
How Gemini is used by threat actors
To illustrate the threat to genAI services in general, the report gives examples of how attackers are abusing Gemini:
- Iranian advanced persistent threat (APT) actors are the heaviest users of Gemini, using it for a wide range of purposes including research on defense organizations, vulnerability research, and creating content for campaigns. APT42 focused on crafting phishing campaigns, conducting reconnaissance on defense experts and organizations, and generating content with cybersecurity themes;
- Chinese APT actors are using Gemini to conduct reconnaissance, for scripting and development, to troubleshoot code, and to research how to obtain deeper access to target networks. They focused on topics such as lateral movement, privilege escalation, data exfiltration, and detection evasion;
- North Korean APT actors are using Gemini to support several phases of the attack lifecycle, including researching potential infrastructure and free hosting providers, reconnaissance on target organizations, payload development, and assistance with malicious scripting and evasion techniques. They also used Gemini to research topics of strategic interest to the North Korean government, such as the South Korean military and cryptocurrency;
- North Korean actors also used Gemini to draft cover letters and research jobs, activities that would likely support North Korea’s efforts to place clandestine IT workers in Western companies;
- Compared with actors from other countries, Russian APT actors showed only limited use of Gemini during the period of analysis. Their Gemini use focused on coding tasks, including converting publicly available malware into another programming language and adding encryption functions to existing code.
The report also gives examples of an unnamed APT threat actor trying, unsuccessfully, to get around Gemini’s safety protocols, which are designed to prevent abuse. Most genAI systems have such protocols, so these examples serve as a warning to AI model makers about how determined some threat actors are to get the most they can out of a publicly available AI service.
For example, the report notes, a threat actor copied publicly available prompts into Gemini and appended basic instructions to perform coding tasks. These tasks included encoding text from a file and writing it to an executable, and writing Python code for a distributed denial-of-service (DDoS) tool. Gemini provided Python code to convert Base64 to hex, but returned a safety-filtered response when the user entered a follow-up prompt requesting the same code as a VBScript.
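The benign half of that request is utterly routine, which underscores the report’s point that Gemini gave the actor speed rather than new capability. Here is a minimal Python sketch of a Base64-to-hex conversion of the kind described (the function name and sample input are illustrative assumptions, not details taken from the report):

```python
import base64

def base64_to_hex(b64_text: str) -> str:
    """Decode a Base64 string and return its raw bytes as hexadecimal text."""
    raw = base64.b64decode(b64_text)  # Base64 text -> raw bytes
    return raw.hex()                  # raw bytes -> hex string

if __name__ == "__main__":
    # Hypothetical sample input: "aGVsbG8=" is "hello" in Base64.
    print(base64_to_hex("aGVsbG8="))  # prints 68656c6c6f
```

Code at this level can be copied from any introductory tutorial, which is precisely the productivity-not-novelty pattern Google describes.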
The same group used a different publicly available jailbreak prompt to request Python code for a DDoS tool. Gemini returned a safety-filtered response stating that it could not assist, and the threat actor abandoned the session without attempting further interaction.
In addition, what the report calls information operations actors, which can include nation-state actors, are using Gemini to create misleading content, manipulate existing content, and optimize misinformation and disinformation, as well as for research and translation.
“Threat actors are experimenting with Gemini to enable their operations, finding productivity gains but not yet developing novel capabilities,” the report says. “At present, they primarily use AI for research, troubleshooting code, and creating and localizing content.”
That, Google’s Threat Intelligence Group said in an email to CSO, is the biggest takeaway from the report.
Recommendations for CISOs
Asked what CISOs should do in the wake of the report, Google said that organizations should check out Google’s Secure AI Framework (SAIF), which is a conceptual framework for secure AI systems.
Rather than enabling disruptive change, generative AI allows threat actors to move faster and at higher volume, the Google report concludes. “For skilled actors, generative AI tools provide a helpful framework, similar to the use of Metasploit or Cobalt Strike in cyber threat activity. For less skilled actors, they also provide a learning and productivity tool, enabling them to more quickly develop tools and incorporate existing techniques,” it says. “However, current LLMs [large language models] on their own are unlikely to enable breakthrough capabilities for threat actors. We note that the AI landscape is in constant flux, with new AI models and agentic systems emerging daily. As this evolution unfolds, GTIG anticipates the threat landscape to evolve in stride as threat actors adopt new AI technologies in their operations.”
In short, the report says, AI can be a useful tool for threat actors but it is not yet the game-changer it is sometimes portrayed to be.
In his blog, Alphabet executive Kent Walker specified actions the US should take to protect itself from the abuse of AI. One recommendation is ensuring that the government and the private sector spend the hundreds of billions of dollars needed to maintain the US lead in AI chips. Other recommendations include streamlining purchasing to enable government adoption of AI and other game-changing technologies, and heightened public-private collaboration on cyber defense.
Asked by email what advice Google has for other governments, the Threat Intelligence Group replied that most of the recommendations apply to any government: first, governments should move faster to innovate and adopt advanced technology; second, they should prioritize partnering more closely with the private sector to disrupt threats, both with their domestic industries and with global companies.
It’s not just Gemini — other AI systems also leveraged by cybercrooks
As today’s Google report noted, other cybersecurity companies have reported on the use of available AI systems by threat actors.
For example:
- in February, OpenAI, which is behind ChatGPT, said that, in partnership with Microsoft, it disrupted five state-affiliated actors that tried to use AI services in support of malicious cyber activities;
- in May, Deloitte described how AI is being used for sophisticated cyber attacks;
- in November, Microsoft posted a video discussion, recorded earlier in 2024, about how AI is already a significant factor in cyber threats;
- also in November, ConnectWise published a blog on how malicious actors are leveraging AI.
Asked in an email why it published a report after acknowledging others have done similar work, Google’s Threat Intelligence Group said its findings are specific to abuse of Gemini. “What’s consistent [with other reports] are the categories of activity we’ve seen,” the group said, noting, “rather than novel malicious techniques, we’re seeing bad actors lean on Gemini for common tasks like troubleshooting, research, and content generation.”