Cybercriminals have been observed adopting AI-powered cloaking tools to bypass traditional security measures and keep phishing and malware sites hidden from detection.
According to new research from SlashNext, platforms like Hoax Tech and JS Click Cloaker are offering “cloaking-as-a-service” (CaaS), allowing threat actors to disguise malicious content behind seemingly benign websites.
Using advanced fingerprinting, machine learning and behavioral targeting, these tools selectively show scam pages only to real users while feeding safe content to automated scanners.
“I think that this is a clear example of a technology and set of tools being used in a bad way,” said Andy Bennett, CISO at Apollo Information Systems.
“Just like threat actors use encryption […], it is no surprise that they are taking an approach designed to help opportunistic marketeers […] and use it to target specific victims or evade detection.”
The technique, known as cloaking, is not new, but its use of AI represents a significant evolution. Hoax Tech uses fingerprint-based profiling and a self-learning AI engine to analyze hundreds of visitor data points in real time. Suspicious traffic is redirected to harmless pages. Real users are shown the intended scam.
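Conceptually, this kind of fingerprint-based traffic filtering works like a server-side gate that scores each visitor and routes suspected scanners to a decoy page. The sketch below is purely illustrative: the signal names and thresholds are invented for this example, whereas services like Hoax Tech reportedly score hundreds of data points with machine learning models.

```python
# Illustrative sketch of cloaking decision logic, NOT code from any real
# service. Signals and thresholds here are invented for demonstration.

SCANNER_UA_HINTS = ("bot", "crawler", "headless", "python-requests")

def classify_visitor(user_agent: str, asn_is_datacenter: bool,
                     mouse_events_seen: bool) -> str:
    """Return 'decoy' for suspected scanners, 'payload' for likely humans."""
    ua = user_agent.lower()
    if any(hint in ua for hint in SCANNER_UA_HINTS):
        return "decoy"      # known automation fingerprints get safe content
    if asn_is_datacenter and not mouse_events_seen:
        return "decoy"      # cloud-hosted traffic with no human interaction
    return "payload"        # everyone else is shown the intended scam page

# Example: a headless scanner is routed to the benign page
print(classify_visitor("Mozilla/5.0 HeadlessChrome", True, False))  # → decoy
```

Real CaaS engines make the same decision from far richer input, which is why single-signal filters on the defensive side are easy for them to fool.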
Sophisticated Cloaking Services Commercialized
JS Click Cloaker offers similar capabilities, evaluating more than 900 signals per click to determine legitimacy. Although it brands itself as a JavaScript-based tool, it reportedly avoids relying on JavaScript when cloaking against search engine crawlers such as Google’s.
Both services advertise features such as A/B testing, geographic filtering and real-time redirects.
“This research presents a critical evolution in the cyber-threat landscape,” said Mayuresh Dani, security research manager at Qualys.
“Platforms like Hoax Tech and JS Click Cloaker expose a significant escalation in threat actor capabilities.”
To counter these systems, Dani recommends:
- Implementing behavioral and runtime analysis tools
- Using multi-perspective and differential scanning
- Investing in adaptive, AI-powered defensive technologies
- Adopting zero-trust frameworks across networks
- Creating incident response plans for AI-based threats
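The multi-perspective and differential scanning idea can be sketched simply: fetch the same URL with several distinct client fingerprints (e.g. a datacenter scanner versus a residential browser profile) and flag the page if the responses diverge, since serving different visitors different content is the telltale of cloaking. The sketch below is a hypothetical illustration; the profile names are invented, and the simulated bodies stand in for real HTTP responses.

```python
import hashlib

# Hypothetical sketch of differential scanning: compare the page bodies
# that different scan profiles received. Divergent content across
# vantage points suggests the site is cloaking.

def content_fingerprint(body: bytes) -> str:
    """Hash a response body so page versions can be compared cheaply."""
    return hashlib.sha256(body).hexdigest()

def looks_cloaked(responses: dict[str, bytes]) -> bool:
    """responses maps a scan-profile name (e.g. 'datacenter-bot',
    'residential-browser') to the page body that profile received."""
    digests = {content_fingerprint(body) for body in responses.values()}
    return len(digests) > 1   # different visitors saw different pages

# Simulated responses: the scanner got a decoy, the "user" got the scam
simulated = {
    "datacenter-bot": b"<html>Welcome to our bakery!</html>",
    "residential-browser": b"<html>Enter your bank credentials</html>",
}
print(looks_cloaked(simulated))  # → True
```

In practice the comparison would tolerate benign variation (ads, timestamps) by diffing rendered structure rather than raw bytes, but the principle, scanning one URL from several perspectives and comparing outcomes, is the same.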
Bennett warned the risks extend beyond detection evasion.
“Using AI to detect the difference between a tool that is checking to see if a link in an email is malicious, and a real user who has clicked a link that made it through their email filter because no malicious activity was detected, is certainly next level,” Bennett said.
He added that attackers may increasingly personalize content per visitor in real time, further complicating detection.
Security Experts Urge Broader Defensive Measures
Trey Ford, CISO at Bugcrowd, noted some historical parallels.
“This is an age-old problem. Twenty years ago, attackers used FastFlux DNS to profile targets for risk and exploit mapping – now the AI-powered Cloaking Services are the modernized version of that capability,” Ford explained.
“The arms race that is detection and response can’t be entrusted to a singular tool or tier. […] Endpoint patching, hardening and browser protection remain critical points of control and monitoring.”
As cloaking techniques evolve, security teams are under growing pressure to adapt. Without multi-layered, behavior-aware defenses, malicious sites may continue to evade detection and reach users unchecked.