Surge in Email Phishing Attacks by 1,265% Following ChatGPT Release: SlashNext Reveals

Phishing Scams Surge with the Rise of Generative AI

Generative AI has rapidly transformed numerous aspects of daily life, but it has also fueled an increase in phishing scams. According to cybersecurity firm SlashNext, malicious phishing emails have surged by 1,265% since the launch of ChatGPT. Cybercriminals are not only building malware-generating AI tools such as WormGPT, Dark Bart, and FraudGPT on the dark web, but also finding ways to jailbreak ChatGPT, OpenAI’s flagship chatbot.

Phishing attacks are cyberattacks disguised as emails, texts, or social media messages from seemingly reputable sources. They can direct victims to malicious websites that trick them into approving transactions with their crypto wallets, resulting in lost funds.

The Alarming Statistics

SlashNext’s report reveals that an average of 31,000 phishing attacks were sent daily, and that credential phishing has risen by 967% since the last quarter of 2022. The report also states that 68% of all phishing attacks are text-based business email compromise (BEC) attempts, and that 39% of mobile-based attacks arrive via SMS phishing (smishing).

According to SlashNext CEO Patrick Harr, threat actors are leveraging generative AI tools like ChatGPT to craft sophisticated, highly targeted business email compromise and other phishing messages.

The Dangers of Phishing Attacks

Harr explains that phishing attacks primarily aim to steal users’ credentials, such as usernames and passwords. He warns that these attacks can also lead to the installation of more persistent ransomware. The Colonial Pipeline attack, for instance, was the result of a credential phishing attack in which hackers gained access to a user’s login information.

Fighting Fire with Fire

Harr suggests that cybersecurity professionals should adopt an offensive approach and combat AI-driven threats with AI itself. He recommends incorporating AI directly into security programs to continuously scan messaging channels for potential threats. SlashNext utilizes generative AI in its own tools to not only detect and block attacks but also predict how future attacks may occur.

However, Harr acknowledges that identifying cyber threats takes more than simply relying on ChatGPT; he emphasizes the need for a large language model application specifically tuned to detect nefarious threats.
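Harr does not detail how SlashNext implements this, but a minimal sketch of the idea, using a general-purpose LLM as a phishing classifier over incoming messages, might look like the following. The model name, prompt, and example message are illustrative assumptions, not SlashNext’s or OpenAI’s actual tooling.

# Hypothetical sketch: using a general-purpose LLM to classify incoming
# messages as phishing or benign. Model choice, prompt, and example input
# are illustrative assumptions, not SlashNext's product.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a security analyst. Classify the following message as "
    "'phishing' or 'benign' and give a one-sentence reason, in the form: "
    "LABEL | reason"
)

def classify_message(message_text: str) -> str:
    """Ask the LLM whether a single email or SMS body looks like phishing."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # assumed model; any chat-completion model works
        temperature=0,         # deterministic output for classification
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": message_text},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    suspicious = (
        "Your wallet has been locked. Verify your seed phrase at "
        "http://wallet-verify.example.com within 24 hours to avoid losing funds."
    )
    print(classify_message(suspicious))  # expected output along the lines of "phishing | ..."

In practice, a production system would run a classifier like this continuously over email, SMS, and collaboration channels and tune it on labeled phishing data, which is the gap Harr points to between a raw chatbot and a purpose-built detection application.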

AI Developers’ Efforts

While AI developers like OpenAI, Anthropic, and Midjourney have implemented measures to prevent their platforms from being used for malicious purposes such as phishing attacks and spreading misinformation, skilled individuals are still finding ways to circumvent these safeguards.

Last week, the RAND Corporation published a report indicating that terrorists could learn how to carry out biological attacks using generative AI chatbots. The chatbot did not provide instructions for building a weapon, but researchers found that jailbreaking prompts could manipulate it into explaining how such an attack could be carried out.

Researchers have also found that ChatGPT’s safeguards can be bypassed by prompting it in less commonly tested languages such as Zulu and Gaelic, allowing them to get the chatbot to explain how to get away with robbery.

In response to these challenges, OpenAI has called on offensive cybersecurity professionals, known as red teams, to identify security vulnerabilities in its AI models.

Enhancing Security Postures

Harr urges companies to reassess their security postures, stressing the importance of using generative AI-based tools not only for detection and response but also to block and prevent attacks before they occur.

Hot Take: Phishing Attacks Rise as Cybercriminals Exploit Generative AI

The rise of generative AI has opened new opportunities for cybercriminals to carry out phishing attacks. With phishing emails surging and AI chatbots being jailbroken, the threat landscape has become markedly more sophisticated. Cybersecurity professionals will need to take an offensive approach and fight AI-driven threats with AI, incorporating generative AI into security programs so that messaging channels are scanned continuously for potential attacks. Relying on AI detection alone is not enough, however; a comprehensive security posture calls for a large language model application tuned to detect and respond to nefarious threats. As the contest between cybercriminals and defenders continues, companies must remain vigilant and proactive in their defense against phishing scams.

