Emerging Threats from Generative AI to Cybersecurity

Potential threats include AI-crafted malware, deepfake technology, and more advanced social engineering. There are also growing concerns about data privacy within large language models (LLMs).

Despite these risks, many experts remain optimistic. They believe AI can strengthen cybersecurity defenses and help address talent shortages. This could allow teams to focus resources on more critical tasks. However, the balance between AI’s benefits and its dangers is still unclear. As AI continues to rise, malicious actors are adopting new cybercrime tactics. One emerging method is AI poisoning. In this attack, hackers manipulate the training data of AI systems. Their goal is to influence the decisions the model makes.

Generative AI is making it easier for a wide range of hackers to enter the global cyber underworld. It lowers the barrier to becoming a cybercriminal. People from various backgrounds can now engage in illicit cyber activity. This trend makes the job of cybersecurity professionals even more difficult worldwide.

How Generative AI is Helping Cybercriminals

Cybercriminals now have unprecedented ease in extracting money and data from individuals and organizations.

Affordable and user-friendly tools are now widely available. At the same time, personal data—like photos, voice recordings, and social media profiles—is easily accessible online. Combined with increased computational power, these factors have expanded the scope of cyber threats.

Even individuals without coding, design, or writing experience can now act maliciously. They only need to know how to provide prompts. By using natural language instructions with large language models (LLMs) like ChatGPT or image generators like StableDiffusion, they can quickly create new content.

AI’s automation capabilities make it easier to scale cyberattacks, such as phishing campaigns. These operations were once manual, slow, and expensive. As these attacks grow more frequent, their success rate increases—fueling more advanced and widespread cybercrime.

AI and the Evolution of Cyberattack Techniques

Password cracking: Passwords pose a problem due to human tendencies. Despite advice to create strong, unique passwords, many still opt for easily guessable ones, with “password” being the top choice in 2022, as reported by NordPass. People often use passwords with personal significance across multiple sites, providing hackers with crucial information for brute force attacks. Generative AI, like large language models (LLMs), accelerates this process by leveraging publicly available data, such as social media profiles, to generate relevant password options rapidly. A password-less future seems increasingly necessary.

CAPTCHA bypass: While CAPTCHA has long safeguarded websites against bots, recent research reveals that bots are now quicker and more accurate at solving CAPTCHA tests. However, new AI-driven strategies are emerging to counter this trend. One proposed method, presented at CAMLIS by Okta’s data science team, employs image-based narration completion. Users must select the image that best completes a short story, a task currently challenging for AI to accomplish affordably.

Prompt injection: Prompt injection targets applications built on AI models, not the models themselves. Coined by developer Simon Willison, this attack manipulates layers added by developers to override intended instructions. Successful prompt injections could lead to serious consequences, like an AI-run tweet bot sending threats. With businesses integrating large language models more often, the risk of prompt injection rises.

Voice cloning: Generative AI has disrupted voice authentication, once hailed as a promising secure identification method. Bad actors now require only a 3-second audio snippet to produce a natural-sounding voice replica capable of saying anything. In a striking demonstration, ethical hackers used a voice clone of a “60 Minutes” correspondent to deceive a staffer into divulging sensitive information in just five minutes, while cameras recorded the exchange. Efforts to counter these clones are underway, including Okta’s recent patent on detecting AI-generated voices.

Image and video manipulation: Celebrities like Tom Hanks, Oprah Winfrey and Martha Stewart have found themselves unwitting stars of AI deepfakes, as scammers exploit their likenesses to deceive the public. Beyond damaging a celebrity’s reputation, deepfakes undermine truth, fostering confusion in critical arenas such as global affairs and legal proceedings. The proliferation of inexpensive, user-friendly generative AI tools facilitates the widespread creation of deepfakes.

Text creation: In the AI era, traditional signals for spotting phishing emails, like grammatical errors, are rendered obsolete by generative AI’s ability to craft flawless text in multiple languages. This advancement fuels the proliferation of sophisticated and personalized phishing schemes, presenting a pervasive cybersecurity challenge.

Code generation: Generative AI streamlines code development, enabling cybercriminals with limited coding skills to orchestrate attacks efficiently. This reduced barrier to entry may attract more individuals to cybercrime and enhance operational effectiveness, highlighting the broader impact of AI on illicit activities.

The Surge in AI-Powered Phishing Attacks

The surge in phishing attacks, a longstanding menace in the digital realm, has reached new heights, and AI, particularly generative AI, is largely to blame. According to Zscaler’s findings in 2022, phishing incidents escalated by a staggering 47% compared to the preceding year, with generative AI playing a pivotal role.

Sophisticated phishing kits are now easily accessible through underground markets. These tools, combined with chatbot AI like ChatGPT, allow attackers to create highly targeted and convincing phishing campaigns faster than ever before. This mix of AI technologies gives malicious actors powerful new capabilities. As a result, they can exploit cybersecurity vulnerabilities with alarming efficiency. This poses major challenges for individuals and organizations trying to defend against these threats.

Economic Impact of Generative AI on Cybercrime

By 2025, CyberSecurity Ventures predicts that global cybercrime damage costs will skyrocket to an alarming $10.5 trillion annually, a substantial increase from the previous $3 trillion estimate. This exponential rise, projected at 15 percent per year over the next two years, is largely attributable to the proliferation of generative AI. AI technologies are actively contributing to the escalation of cyber threats, further exacerbating the already substantial economic and societal impacts of cybercrime.

AI-Driven Cybersecurity Defenses: Fighting AI with AI

The most effective defense against generative AI threats often lies within AI itself. As generative AI advances, it creates new cybersecurity challenges. Using AI-driven solutions is essential to stay ahead of evolving threats.

AI-powered systems can detect anomalies and patterns linked to malicious activity. This enables early detection and faster threat response. Machine learning allows these defenses to adapt and improve continuously. They evolve alongside new tactics used by cybercriminals. This makes computer systems more resilient against AI-driven cyber threats. The connection between AI and cybersecurity highlights its dual role in both enabling and preventing digital attacks in today’s connected world.

The Role of Encrypted Portable NAS Devices in Defense

Owning an encrypted portable Network Attached Storage (NAS) device can serve as a robust defense against Generative AI-assisted cyberattacks. With the proliferation of AI-driven cyber threats, securing sensitive data becomes imperative, and encrypted portable NAS devices provide a reliable solution. By storing data locally on the NAS rather than relying solely on cloud services, individuals and businesses can reduce their exposure to online vulnerabilities exploited by Generative AI. The encryption ensures that even if the device falls into the wrong hands, the data remains inaccessible without proper authorization. Additionally, portable NAS devices offer the flexibility to access and manage data securely from anywhere, mitigating the risks associated with centralized storage systems vulnerable to cyberattacks. In an era where data privacy is increasingly threatened by advanced AI technologies, encrypted portable NAS devices stand as a vital safeguard for protecting valuable information from malicious exploitation.

Ciphertex Data Security® offers a rugged, portable, encrypted NAS line called SecureNAS®, trusted by organizations worldwide in various industries, including Government, Military, Energy, Oil & Gas, Healthcare, Forensics, Aerospace, Media Entertainment, and Enterprise IT. The unparalleled security measures implemented in Ciphertex®‘s petabyte-scale SecureNAS® solutions safeguard sensitive information with utmost reliability. Advanced cryptographic algorithms are employed, bolstered by features like hardware-based encryption acceleration and key management protocols. SecureNAS® implements robust access controls, allowing administrators to finely tune permissions and monitor user activity, thus mitigating the risk of insider threats. Additionally, the hardware design of SecureNAS® systems integrates physical security mechanisms, such as a metal security door to protect against drive theft. SecureNAS® systems are certified in a military-approved environmental test lab and are engineered to withstand vibration, shock, or drop. They are equipped with a ruggedized chassis and a sturdy carry handle, making them ideal for field deployment. Custom hard cases for secure transport are also available. Ciphertex®‘s unwavering commitment to innovation and rigorous security standards renders SecureNAS® systems a trusted solution for safeguarding critical data assets against evolving threats.

Scroll to Top