Google’s Threat Intelligence Group (GTIG) has issued a warning that cybercriminals from China, Iran, Russia, North Korea, and over a dozen other countries are using its artificial intelligence (AI) application, Gemini, to boost their hacking capabilities.
According to the GTIG report, published on Wednesday, state-sponsored hackers have been using the Gemini chatbot to improve their productivity in cyber espionage, phishing campaigns, and other malicious activities.
Google examined Gemini activity linked to known APT (Advanced Persistent Threat) actors and discovered that APT groups from over twenty countries have been using large language models (LLMs) primarily for research, target reconnaissance, the development of malicious code, and the creation and localization of content like phishing emails.
In other words, these hackers seem to use Gemini primarily as a research tool to enhance their operations rather than to invent new attack techniques. To date, no threat actor has successfully leveraged Gemini to develop entirely new cyberattack methods.
“While AI can be a useful tool for threat actors, it is not yet the gamechanger it is sometimes portrayed to be. While we do see threat actors using generative AI to perform common tasks like troubleshooting, research, and content generation, we do not see indications of them developing novel capabilities,” Google said in its report.
Google tracked this activity to more than ten Iran-backed groups, more than twenty China-backed groups, and nine North Korea-backed groups.
For instance, Iranian threat actors were the heaviest users of Gemini, turning to it for a wide range of purposes, including research on defense organizations, vulnerability research, and content creation for campaigns.
In particular, the group APT42 (which accounted for over 30% of Gemini use by Iranian APT actors) focused on crafting phishing campaigns targeting government agencies and corporations, conducting reconnaissance on defense experts and organizations, and generating content with cybersecurity themes.
Chinese APT groups primarily used Gemini for reconnaissance, scripting and development, troubleshooting code, and researching how to obtain deeper access to target networks through lateral movement, privilege escalation, data exfiltration, and detection evasion.
North Korean APT hackers were observed using Gemini to support multiple phases of the attack lifecycle, including researching potential infrastructure and free hosting providers, conducting reconnaissance on target organizations, developing payloads, and getting help with malicious scripting and evasion techniques.
“Of note, North Korean actors also used Gemini to draft cover letters and research jobs—activities that would likely support North Korea’s efforts to place clandestine IT workers at Western companies,” the company noted.
“One North Korea-backed group utilized Gemini to draft cover letters and proposals for job descriptions, researched average salaries for specific jobs, and asked about jobs on LinkedIn. The group also used Gemini for information about overseas employee exchanges. Many of the topics would be common for anyone researching and applying for jobs.”
Meanwhile, Russian APT actors demonstrated limited use of Gemini, primarily for coding tasks such as converting publicly available malware into different programming languages and incorporating encryption functions into existing code.
They may have avoided Gemini for operational security reasons, preferring to stay off Western-controlled platforms where their activity could be monitored, or opting for Russian-made AI tools instead.
Google says it has been implementing safeguards to curb this misuse, including building its AI systems with strong security measures and disrupting the activity of threat actors who have abused Gemini.
“We investigate abuse of our products, services, users and platforms, including malicious cyber activities by government-backed threat actors, and work with law enforcement when appropriate,” the company said. “Moreover, our learnings from countering malicious activities are fed back into our product development to improve safety and security for our AI models.”