Weaponized AI for Cyber Attacks! HKCERT Exposes Six Emerging AI-assisted Attacks

by CybrGPT
0 comment

Table of Contents

In recent years, artificial intelligence (AI) technology has advanced rapidly. Large language models (LLMs) and generative models have been widely applied in writing, reasoning, and generating images and videos. At the same time, the Hong Kong Computer Emergency Response Team Coordination Centre (HKCERT) warns that hackers are also weaponising AI for various cyberattacks, making defence much more difficult. The latest analysis reveals six AI-assisted attack methods, urging businesses and the public to stay alert.

 

  1. New Risks from Agentic AI

The recent rise of Agentic AI has transformed AI from being just a chatbot into a more powerful tool capable of directly operating computer systems. This means that complex attacks that previously required team collaboration can now be executed by a single hacker commanding multiple AI agents.

 

According to a threat intelligence report released by AI company Anthropic in August 2025, Agentic AI has evolved from merely providing suggestions to becoming an active participant capable of executing attacks. Researchers have named this new type of attack “Vibe-hacking”, noting that a cybercrime group used it to carry out infiltration and data extortion attacks on more than a dozen organisations. The group used AI to complete the entire process — from reconnaissance, infiltration, ransomware development, file theft, and content analysis to drafting ransom notes.

 

The emergence of this type of attack further lowers the barrier for hackers to carry out cyberattacks. It is predicted that in the future, organisations will face more frequent and highly sophisticated attacks driven by AI under the direction of hackers.

 

Furthermore, since the rise of Agentic AI, major vendors have begun integrating this capability into browsers. Users can issue direct commands via a chat interface, such as booking a restaurant or buying daily necessities. The AI-powered browser will then carry out web searches and, based on the results, make decisions and perform actions such as completing purchases.

 

AI company Perplexity’s Agentic AI browser Comet was recently found to be vulnerable to a technique where hidden, invisible text embedded in a webpage could serve as instructions to the AI, indirectly injecting commands. Without the user’s knowledge, the AI might perform extra actions — such as opening an email inbox to retrieve a verification code and uploading it to another site. Since Agentic AI actions are considered equivalent to user actions, traditional cross-site attack protections cannot block them. Users’ personal data may be silently and invisibly leaked while commands are being executed in the browser.

 

HKCERT Recommendation: Organisations and users must remain vigilant and continue to enhance their security awareness to counter increasingly sophisticated attacks. While Agentic AI still requires additional security controls at the application level to prevent it from performing unauthorised actions. However, as Agentic AI browsers are still an emerging technology, HKCERT recommends that when using Agentic AI browsers to handle sensitive data or transactions, users should review the operational steps involved, or avoid linking email accounts, personal information, or credit card details to the browser.

 

Comet is executing commands on the webpage that were hidden by being highlighted in white (Source: Brave)

 

  1. AI Cracking CAPTCHAs

When logging into a website, in addition to entering a username and password, users must often input a CAPTCHA to distinguish between real users and automated programs. CAPTCHAs may consist of distorted alphanumeric characters or require selecting specific images from multiple pictures.

 

In the past, hackers had to write their own algorithms to bypass them, which involved high execution costs. Now, with AI systems equipped with image analysis, cracking CAPTCHAs has become much easier. Simply upload the CAPTCHA image to an AI system and ask for the answer — the AI can respond quickly and with high accuracy. This means that hackers can write programs to have AI help them automatically bypass traditional CAPTCHAs, quickly go ahead to the next stage of attack, and make the CAPTCHA’s function of protecting the website virtually useless.

 

HKCERT Recommendation: For websites still using traditional CAPTCHAs, administrators should consider upgrading to interactive CAPTCHAs or behaviour‑based verification to enhance security and reduce the risk of automated attacks.

 

 

AI can solve CAPTCHAs with high accuracy.

 

Interactive CAPTCHAs reduce the risk of automated attacks

 

  1. AI-Assisted Website Analysis and Attacks

Hackers often search online for various login pages and attempt brute-force attacks to obtain credentials for infiltration. With AI help, parts of the process — from finding login pages to executing brute-force attacks — can be automated.

 

When multiple AI programs run simultaneously, they can scan dozens or even hundreds of websites at once. Earlier, a cybersecurity researcher had developed and released tools that could use AI to aid in penetration attacks. It is believed that hackers will soon follow suit and develop more efficient attack tools. Like AI cracking CAPTCHAs, this reveals that traditional website security will face greater defensive challenges in the AI era.

 

HKCERT Recommendation: Website administrators should strengthen security checks, enforce strict password policies (such as multi-factor authentication), and regularly review system logs to analyse suspicious activities and patch vulnerabilities early to prevent exploitation.

 

  1. Offense and defence of Distributed Denial-of-Service (DDoS) attacks in the AI era

Before the AI era, DDoS was mainly a brute-force attack driven by overwhelming network traffic, preventing the target’s services from accepting requests and paralyzing the target network. The countermeasures were also very straightforward: either using firewalls or cloud-based anti-DDoS solutions to route attack traffic into a “black hole”. In the AI era, however, attackers have more advanced tools, such as using AI to crack website CAPTCHAs and to automatically scan and attack web pages. Agentic AI can also monitor attacks in real time, automatically switching to other weak link in response to defensive strategies like rate limiting. They can also mimic human user behaviour to bypass traditional defences, implying that conventional approaches may no longer suffice.

 

In the future, beyond raw traffic volume, DDoS will emphasize precision and flexibility, aiming to achieve maximum damage with minimal traffic. Cybersecurity developers understand the idea of fighting fire with fire. They will train AI models with traffic data, knowledge of past attacks, and threat intelligence to analyze real-time network traffic, respond to attacks, and automatically adjust defensive strategies. More importantly, AI models can continuously collect network traffic as reference data for fine-tuning, making the system more accurate over long-term operation.

 

HKCERT Recommendation: In the AI era, DDoS attacks will become more immediate and complex. Network administrators should keep up with the latest trends in cyberattacks, regularly update threat intelligence, and consider adopting AI-enabled network protection systems to address new, AI-driven attacks in the future.

 

  1. AI-Driven Ransomware

Recently, a university research team developed an AI ransomware as a prototype , naming it PromptLock..  This research suggests that in the future, hackers may no longer need to write ransomware separately for different platforms — instead, infected devices connect to a large language model in real time, using preset prompts to generate attack code on the spot and execute it.

 

The study demonstrates that the mode of operation of AI ransomware need no longer be confined to executing prewritten code. Instead, it may automatically customise itself based on the target organisation’s system architecture, network environment, and security controls, to maximise its destructive impact — significantly increasing both the difficulty of defense and the potential scale of damage.

 

HKCERT Recommendation: The public should never download or run files or programs from unknown sources. The public should also install antivirus software or cybersecurity applications and update them regularly.

 

  1. AI-Generated Phishing Websites

AI web development technology is becoming increasingly sophisticated. With the right prompts, users can generate attractive and fully functional websites. For hackers conducting phishing scams, this lowers the barrier to entry — they can use AI tools to copy legitimate website pages, make slight modifications, and produce highly convincing phishing sites.

 

Even more concerning, some commercial AI development tools can directly publish websites to the internet, drastically shortening the time from creation to deployment. It can be expected that phishing attacks will continue to increase and that it will become more difficult to figure out authenticity based solely on webpage content. If the public is not careful, they may easily fall into the scammers’ trap.

 

HKCERT Recommendation: When browsing websites, always check the authenticity of the URL. If in doubt, never enter personal or sensitive information into suspicious sites.

 


AI-generated fake courier company webpage (Source: Proofpoint)

 

Conclusion

 

While AI technology brings convenience to society, it also provides new tools for cybercriminals. From cracking CAPTCHAs to automating penetration attacks, and even full-process Agentic AI attacks, threats are constantly evolving. Human effort alone can hardly keep up with the rapidly changing, complex attacks of the AI era; human expertise augmented by AI will be the emerging trend in countering hackers. HKCERT has also introduced AI tools to assist in detecting phishing websites. In August 2025, HKCERT used AI to conduct 3.5 billion scans and, in total, discovered several suspicious websites. This shows that businesses and users must enhance their security awareness and technology in parallel to counter the dangers posed by weaponised AI.

 

Reference Links:

 

[1] https://www.bleepingcomputer.com/news/security/malware-devs-abuse-anthropics1-claude-ai-to-build-ransomware/

[2] https://www.anthropic.com/news/detecting-countering-misuse-aug-2025

[3] https://brave.com/blog/comet-prompt-injection/
[4] https://github.com/AashiqRamachandran/i-am-a-bot

[5] https://arxiv.org/pdf/2405.07496

[6] https://developers.google.com/recaptcha/docs/v3
[7] https://github.com/MorDavid/BruteForceAI

[8] https://www.radware.com/blog/ddos-protection/the-future-of-ddos-mitigation/

[9] https://www.scworld.com/perspective/how-ai-has-changed-the-ddos-industry

[10] https://www.scworld.com/perspective/tips-for-defending-against-automated-and-ai-driven-ddos-attacks

[11] https://www.eset.com/blog/en/business-topics/threat-landscape/the-first-known-ai-written-ransomware/

[12] https://www.theregister.com/2025/09/05/real_story_ai_ransomware_promptlock

[13] https://arxiv.org/pdf/2508.20444v1

[14] https://cybersecuritynews.com/threat-actors-abuse-ai-website/

[15] https://www.proofpoint.com/us/blog/threat-insight/cybercriminals-abuse-ai-website-creation-app-phishing

Source link

You may also like

Leave a Comment

Stay informed with the latest in cybersecurity news. Explore updates on malware, ransomware, data breaches, and online threats. Your trusted source for digital safety and cyber defense insights.

BuyBitcoinFiveMinute

Subscribe my Newsletter for new blog posts, tips & new photos. Let’s stay updated!

© 2025 cybrgpt.com – All rights reserved.