How Are Cybercriminals Exploiting AI?

Artificial intelligence (AI) has revolutionized numerous industries, offering unparalleled efficiency, accuracy, and innovation. However, it’s not just legitimate businesses that are reaping the benefits of AI. Cybercriminals, too, are harnessing this powerful technology to enhance their operations. A recent study by Intel 471 sheds light on the alarming ways in which criminals are using AI to further their nefarious goals, revealing a landscape where deepfake technology, advanced phishing scams, and AI-powered hacking tools are becoming increasingly prevalent.

AI and Cybercrime: A Disturbing Trend

The use of AI in cybercrime is no longer a futuristic scenario; it’s a present-day reality. According to Intel 471’s report, “Cybercriminals and AI: Not Just Better Phishing,” the discussion around AI among cybercriminals has significantly accelerated in 2023. These criminals are not just creating more convincing phishing emails; they are employing AI for a variety of malicious activities, including generating deepfake videos, defeating facial recognition systems, and summarizing stolen data from breaches.

Researchers have noted that threat actors are experimenting with AI in ways that mirror its use in legitimate industries. For instance, some criminals are integrating AI into hacking tools or creating malicious chatbots to facilitate their operations. The study highlights that the most significant impact AI has had on cybercrime is the increase in scams leveraging deepfake technology, which has led to devastating consequences, including loss of life.

Deepfakes and Scams: The Human Cost

Deepfake technology, which involves the use of AI to create highly realistic but fake audio and video content, has seen a surge in use among cybercriminals. One particularly egregious example is the rise of romance and sextortion scams perpetrated by a group known as the Yahoo Boys. Primarily based in Nigeria, these criminals use deepfakes to create convincing fake personas, gaining victims’ trust and persuading them to share compromising photos. These photos are then used to extort the victims, often leading to severe emotional distress and, in some cases, suicide.

The Intel 471 study reveals that deepfake offerings have become significantly cheaper and more accessible since January 2023. One threat actor claimed to produce deepfake content for between $60 and $400 per minute of video, depending on its complexity. Other criminals offer subscription services providing 300 face swaps per day for $999 per year. This affordability and accessibility are making it easier for cybercriminals to exploit vulnerable individuals on a massive scale.

Business Email Compromise and Document Fraud

AI’s role in cybercrime extends beyond deepfakes. Business email compromise (BEC) scams and document fraud are also areas where AI is making a significant impact. BEC scams typically involve intercepting and manipulating business communications to trick companies into transferring funds to the scammers’ accounts. The study highlights a tool developed by cybercriminals that uses AI to manipulate invoices: it detects and edits PDF documents, altering the bank account details so that payments are redirected to the scammers.

This AI-powered invoice manipulation tool is offered on a subscription basis for $5,000 per month or $15,000 for lifetime access. If it works as advertised, it represents a significant productivity gain for criminals, enabling them to execute scams more efficiently and at a larger scale.

AI in Data Breaches and Ransomware

Cybercriminals are also leveraging AI to enhance their data breach and ransomware operations. One criminal claimed to use Meta’s Llama large language model (LLM) to extract sensitive data from breached information, using it to pressure victims into paying ransoms. While not all claims of AI use in cybercrime can be verified, there is evidence that AI is being integrated into tools that scrape and summarize data from common vulnerabilities and exposures (CVE) advisories, helping criminals exploit vulnerabilities more effectively.

The Competitive Underground Market

The underground market for cybercriminal tools is highly competitive, with multiple vendors offering similar services. AI’s potential to enhance these tools gives criminals a commercial advantage, making their products more attractive to buyers. However, this competition also drives innovation, as criminals seek to outdo each other by incorporating the latest AI advancements.

The Intel 471 report cautions that not all claims of AI use in cybercrime may be accurate; some threat actors likely exaggerate their capabilities to attract customers. Skepticism applies outside the underground as well: when four University of Illinois Urbana-Champaign (UIUC) computer scientists claimed to have used OpenAI’s GPT-4 LLM to autonomously exploit real-world vulnerabilities, the lack of published detail about their methodology invited doubts about the claim.

Emerging Risks and Government Response

As AI becomes more integrated into cybercriminal activities, new risks are emerging. AI-powered tools can generate recommendations that direct users to malicious sites, and vulnerabilities in AI applications themselves can be exploited. Nation-states and other malicious entities are also using LLMs for various attacks, raising concerns about the broader implications of AI in cybercrime.

In response, government agencies worldwide are stepping up efforts to monitor and regulate AI to ensure its safety and security. The US Federal Communications Commission, the Department of Homeland Security, and the UK government are among those initiating measures to counter the growing threat posed by AI-enhanced cybercrime.

The Future of AI in Cybercrime

The Intel 471 report concludes that AI has already begun to play a significant role in cybercrime, and its influence is expected to grow. Deepfakes, phishing, BEC scams, and disinformation campaigns are all likely to increase as criminals continue to explore AI’s potential. The security landscape will change dramatically when AI can autonomously find and exploit vulnerabilities, posing a severe challenge to cybersecurity defenses.

While AI has been used in the security industry to fight spam and detect malware for years, its dual-use nature means it can aid both attackers and defenders. The availability of LLMs and AI models with fewer guardrails will determine how extensively cybercriminals can leverage AI for malicious purposes.
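To make the defensive side of that dual use concrete, here is a minimal, hypothetical sketch of the kind of machine-learning spam filtering the security industry has relied on for years: a toy text classifier built with scikit-learn, using TF-IDF features and a naive Bayes model. The tiny inline dataset and the model choice are illustrative assumptions only, not a description of any production system or of anything in the Intel 471 report.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy labeled data: 1 = spam, 0 = legitimate (illustrative examples only).
messages = [
    "Urgent: verify your account now to avoid suspension",
    "Claim your free prize, limited time offer",
    "Meeting moved to 3pm, agenda attached",
    "Here are the quarterly figures you asked for",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a naive Bayes classifier, chained in one pipeline.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(messages, labels)

# Score an incoming message; the second probability is the spam likelihood.
incoming = ["You have won a prize, click here to claim it"]
print(model.predict_proba(incoming)[0][1])

Real-world filters train on millions of messages and far richer features, but the basic pattern, turning text into features and scoring it with a learned model, is the same defensive technique that attackers are now trying to mirror on their side of the fence.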

Conclusion

The integration of AI into cybercriminal activities represents a significant and growing threat. From deepfake scams to AI-powered hacking tools, criminals are finding innovative ways to exploit AI for their gain. The affordability and accessibility of AI technologies make it easier for these criminals to operate on a larger scale, causing more harm to victims. As governments and cybersecurity experts work to counter these threats, it is crucial to remain vigilant and proactive in addressing the evolving landscape of AI-enhanced cybercrime. The balance of power between attackers and defenders will continue to shift, and staying ahead of these technological advancements will be key to ensuring a safer digital world.
