Potential threats encompass AI-crafted elusive malware, deepfake technology, heightened levels of sophisticated social engineering, and a plethora of concerns regarding data privacy within large language models (LLMs). Nonetheless, there is also a palpable sense of optimism among experts regarding the capacity of AI to fortify cybersecurity measures and mitigate talent shortages, thereby enabling a more focused allocation of resources towards pivotal tasks. The ultimate balance between benefits and drawbacks remains uncertain.
With the event of AI on the rise, a new method adopted by malicious actors to perpetrate cybercrimes involves using attack vectors such as AI poisoning. This tactic entails attackers manipulating the training data of artificial intelligence systems with the explicit intent of influencing the resultant decisions made by the model.
Generative AI technology is poised to facilitate the entry of a wide array of hackers into the global cyber underworld, effectively lowering the bar to being a cybercriminal. This advancement opens up opportunities for individuals of various backgrounds to engage in illicit cyber endeavors, thereby making it more challenging for cybersecurity professionals worldwide.
How Generative AI is Helping Cybercriminals
Cybercriminals now have unprecedented ease in extracting money and data from individuals and organizations. The availability of affordable and user-friendly tools, combined with the abundance of publicly accessible personal data like photos, voice recordings, and social media details, alongside enhanced computational power, has expanded the scope of potential threats. Even individuals with no prior coding, design, or writing experience can rapidly escalate their capabilities by simply knowing how to provide prompts. By inputting natural language instructions into large language models (LLMs) such as ChatGPT or text-to-image models like StableDiffusion, they can effortlessly generate new content. Moreover, AI’s automation capabilities enable malicious actors to scale their operations, such as phishing campaigns, which were once laborious, manual, and costly endeavors. With the proliferation of attacks, the likelihood of their success increases, fueling the evolution of increasingly sophisticated cybercrimes.
Password cracking: Passwords pose a problem due to human tendencies. Despite advice to create strong, unique passwords, many still opt for easily guessable ones, with “password” being the top choice in 2022, as reported by NordPass. People often use passwords with personal significance across multiple sites, providing hackers with crucial information for brute force attacks. Generative AI, like large language models (LLMs), accelerates this process by leveraging publicly available data, such as social media profiles, to generate relevant password options rapidly. A password-less future seems increasingly necessary.
CAPTCHA bypass: While CAPTCHA has long safeguarded websites against bots, recent research reveals that bots are now quicker and more accurate at solving CAPTCHA tests. However, new AI-driven strategies are emerging to counter this trend. One proposed method, presented at CAMLIS by Okta’s data science team, employs image-based narration completion. Users must select the image that best completes a short story, a task currently challenging for AI to accomplish affordably.
Prompt injection: Prompt injection targets applications built on AI models, not the models themselves. Coined by developer Simon Willison, this attack manipulates layers added by developers to override intended instructions. Successful prompt injections could lead to serious consequences, like an AI-run tweet bot sending threats. With businesses integrating large language models more often, the risk of prompt injection rises.
Voice cloning: Generative AI has disrupted voice authentication, once hailed as a promising secure identification method. Bad actors now require only a 3-second audio snippet to produce a natural-sounding voice replica capable of saying anything. In a striking demonstration, ethical hackers used a voice clone of a “60 Minutes” correspondent to deceive a staffer into divulging sensitive information in just five minutes, while cameras recorded the exchange. Efforts to counter these clones are underway, including Okta’s recent patent on detecting AI-generated voices.
Image and video manipulation: Celebrities like Tom Hanks, Oprah Winfrey and Martha Stewart have found themselves unwitting stars of AI deepfakes, as scammers exploit their likenesses to deceive the public. Beyond damaging a celebrity’s reputation, deepfakes undermine truth, fostering confusion in critical arenas such as global affairs and legal proceedings. The proliferation of inexpensive, user-friendly generative AI tools facilitates the widespread creation of deepfakes.
Text creation: In the AI era, traditional signals for spotting phishing emails, like grammatical errors, are rendered obsolete by generative AI’s ability to craft flawless text in multiple languages. This advancement fuels the proliferation of sophisticated and personalized phishing schemes, presenting a pervasive cybersecurity challenge.
Code generation: Generative AI streamlines code development, enabling cybercriminals with limited coding skills to orchestrate attacks efficiently. This reduced barrier to entry may attract more individuals to cybercrime and enhance operational effectiveness, highlighting the broader impact of AI on illicit activities.
The surge in phishing attacks, a longstanding menace in the digital realm, has reached new heights, and AI, particularly generative AI, is largely to blame. According to Zscaler’s findings in 2022, phishing incidents escalated by a staggering 47% compared to the preceding year, with generative AI playing a pivotal role. The emergence of sophisticated phishing kits accessible through underground markets, coupled with the integration of chatbot AI tools such as ChatGPT, has enabled cyber attackers to craft highly tailored and convincing phishing campaigns at an unprecedented pace. This amalgamation of AI technologies has empowered malicious actors to exploit vulnerabilities in cybersecurity defenses with alarming efficiency, posing significant challenges to individuals and organizations alike in combating these nefarious activities.
By 2025, CyberSecurity Ventures predicts that global cybercrime damage costs will skyrocket to an alarming $10.5 trillion annually, a substantial increase from the previous $3 trillion estimate. This exponential rise, projected at 15 percent per year over the next two years, is largely attributable to the proliferation of generative AI. AI technologies are actively contributing to the escalation of cyber threats, further exacerbating the already substantial economic and societal impacts of cybercrime.
The most effective defense against the threats posed by Generative AI for computers often lies within AI itself. As generative AI continues to advance, presenting new challenges in cybersecurity, leveraging AI-driven solutions becomes imperative. AI-powered systems can be trained to detect anomalies and patterns indicative of malicious activity associated with generative AI, enabling proactive identification and mitigation of potential threats. By harnessing machine learning algorithms, AI-based defenses can continuously evolve and adapt to counter emerging tactics employed by malicious actors, bolstering the resilience of computer systems against the evolving landscape of cyber threats posed by generative AI. This symbiotic relationship between AI and cybersecurity underscores the pivotal role AI plays in both the perpetration and prevention of digital attacks in today’s interconnected world.
Owning an encrypted portable Network Attached Storage (NAS) device can serve as a robust defense against Generative AI-assisted cyberattacks. With the proliferation of AI-driven cyber threats, securing sensitive data becomes imperative, and encrypted portable NAS devices provide a reliable solution. By storing data locally on the NAS rather than relying solely on cloud services, individuals and businesses can reduce their exposure to online vulnerabilities exploited by Generative AI. The encryption ensures that even if the device falls into the wrong hands, the data remains inaccessible without proper authorization. Additionally, portable NAS devices offer the flexibility to access and manage data securely from anywhere, mitigating the risks associated with centralized storage systems vulnerable to cyberattacks. In an era where data privacy is increasingly threatened by advanced AI technologies, encrypted portable NAS devices stand as a vital safeguard for protecting valuable information from malicious exploitation.
Ciphertex Data Security® offers a rugged, portable, encrypted NAS line called SecureNAS®, trusted by organizations worldwide in various industries, including Government, Military, Energy, Oil & Gas, Healthcare, Forensics, Aerospace, Media Entertainment, and Enterprise IT. The unparalleled security measures implemented in Ciphertex®‘s petabyte-scale SecureNAS® solutions safeguard sensitive information with utmost reliability. Advanced cryptographic algorithms are employed, bolstered by features like hardware-based encryption acceleration and key management protocols. SecureNAS® implements robust access controls, allowing administrators to finely tune permissions and monitor user activity, thus mitigating the risk of insider threats. Additionally, the hardware design of SecureNAS® systems integrates physical security mechanisms, such as a metal security door to protect against drive theft. SecureNAS® systems are certified in a military-approved environmental test lab and are engineered to withstand vibration, shock, or drop. They are equipped with a ruggedized chassis and a sturdy carry handle, making them ideal for field deployment. Custom hard cases for secure transport are also available. Ciphertex®‘s unwavering commitment to innovation and rigorous security standards renders SecureNAS® systems a trusted solution for safeguarding critical data assets against evolving threats.