Security awareness training is being overmatched by cybercriminals who are enhancing attacks with generative AI — and moving phishing campaigns outside the inbox.
For years organizations have invested in security awareness training programs to teach employees how to spot and report phishing attempts. Despite those efforts, enterprise users were nearly three times as likely to land on phishing pages in 2024 as in the previous year, according to a report from security vendor Netskope.
Based on telemetry collected from its secure web gateway and cloud-based SASE platform, Netskope found that 8.4 out of every 1,000 users clicked on a phishing link every month during the past year, compared to 2.9 in 2023.
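The "three times" figure follows directly from the reported per-1,000-user monthly click rates; a quick back-of-the-envelope check:

```python
# Monthly phishing clicks per 1,000 users, as reported by Netskope.
clicks_2024 = 8.4
clicks_2023 = 2.9

ratio = clicks_2024 / clicks_2023
print(f"Year-over-year increase: {ratio:.1f}x")  # prints "Year-over-year increase: 2.9x"
```

At roughly 2.9x, the rate is just shy of triple the 2023 figure.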
“The main factors leading to this increase are cognitive fatigue (with users constantly being bombarded with phishing attempts) and the creativity and adaptability of the attackers in delivering harder-to-detect baits,” the company said in its annual Cloud and Threat report.
The rise of large language models (LLMs) almost certainly played a role in this surge as well, as attackers can now easily automate the creation of phishing lures that are more diverse, grammatically correct, and targeted for every organization.
Phishing via search engine results
A big part of phishing detection training inside organizations focuses on spotting phishing emails, but this is far from the only way attackers entice users to click on links that lead to fake websites trying to steal their credentials.
Based on Netskope’s data, the majority of phishing clicks came from various locations on the web, with search engines being a top referrer. Attackers have been highly successful in running malicious ads or using so-called SEO poisoning techniques to inject malicious links into the top search engine results for specific terms.
Other big referrers for phishing pages were shopping, technology, business, and entertainment websites. Attackers get malicious links onto such sites by spamming comment sections, by buying malicious ads that are then displayed on those sites through ad networks (a technique known as malvertising), or by compromising the sites themselves and injecting phishing pop-ups directly into pages.
“The variety of phishing sources illustrates some creative social engineering by attackers,” the Netskope researchers wrote. “They know their victims may be wary of inbound emails (where they are repeatedly taught not to click on links) but will much more freely click on links in search engine results.”
The top targets for phishing attacks have been credentials to cloud apps, with Microsoft 365 the most targeted at 42%, followed by Adobe Document Cloud (18%) and DocuSign (15%). Many phishing sites pose as login pages for these services but also offer sign-in options through other identity providers, including Office 365, Outlook, AOL, and Yahoo.
“There is no doubt that LLMs have played a role in attackers crafting more convincing phishing lures,” Ray Canzanese, director of Netskope Threat Labs, told CSO. “LLMs can provide better localization and more variety to try to evade spam filters and increase the probability of fooling the victim.”
Cybercriminals have even created specialized LLM-assisted chatbots such as WormGPT or FraudGPT that are being advertised and sold on underground forums, claiming to be capable of writing better phishing lures, among other features.
“More broadly, Netskope is seeing gen AI tools being used in targeted phishing campaigns, usually by mimicking a high-profile individual in the targeted organization,” Canzanese said. “The attacker sends messages generated using an LLM, or even uses deepfake audio and video.”
Deepfakes have been on the rise in enterprises, with 15% of executives recently reporting that their companies’ financial data had been targeted by cybercriminals via deepfake scams, according to a survey from Deloitte.