ChatGPT: the next big cybersecurity threat?
Betashares
This article was written by Richard Montgomery.
ChatGPT is a powerful language generation model that has the ability to generate natural-sounding text. However, like any technology, it can also be misused for malicious purposes, such as phishing.
Phishing is a tactic used by cybercriminals to trick individuals into providing sensitive information, such as login credentials or financial information, through the use of fake websites or messages. ChatGPT could potentially be used to generate convincing and realistic phishing messages or websites. For example, a cybercriminal could use ChatGPT to generate an email that appears to be from a legitimate source, such as a bank or an online store, and that requests sensitive information from the recipient.
However, it is important to note that ChatGPT is just a tool and …
So, how do you think this blog is going so far? Does it read well? Does it come across as authoritative and well-written?
At this point, you may be wondering whether the author of this piece – i.e. me – is feeling somewhat insecure and in need of validation.
However, the reason we pose the question is that the first three paragraphs of this blog were in fact generated by ChatGPT. The screenshot below shows our request to the chatbot, and the first part of its response.
The above request (or ‘prompt’) was made in plain English, and answered in plain English. However, ChatGPT’s capabilities extend far beyond being a ‘talking Wikipedia’. The text below is a request for ChatGPT to write ransomware – a request the chatbot duly fulfilled. More on this later in the article.
What is ChatGPT?
ChatGPT is a new artificial intelligence system that has been causing a sensation in the AI and cybersecurity worlds. Developed by OpenAI, and at this stage only available in beta form, it has security professionals divided over the role it will play in the future of cybersecurity.
ChatGPT is an AI model (chatbot) that, according to OpenAI: “interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.”
As the latest and most advanced version of the Generative Pre-trained Transformer (GPT) system developed by OpenAI, ChatGPT can answer questions, generate content, translate between languages, and summarise text – all at a much faster rate than previous versions.
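To give a concrete sense of how these capabilities can be driven programmatically, the sketch below shows a minimal text-summarisation request. This is an illustration rather than ChatGPT itself: it assumes OpenAI’s publicly documented API, the ‘openai’ Python package (pre-1.0 interface), a placeholder API key, and the ‘gpt-3.5-turbo’ model name from the same GPT family that powers ChatGPT.

```python
# Minimal sketch: summarising text via OpenAI's API (illustrative only).
# Assumes the `openai` Python package (pre-1.0 interface) and a valid API key.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; in practice, load from an environment variable

text_to_summarise = "..."  # any long passage you want condensed

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model name from the GPT family behind ChatGPT
    messages=[
        {"role": "system", "content": "You summarise text clearly and concisely."},
        {"role": "user", "content": f"Summarise the following in two sentences:\n\n{text_to_summarise}"},
    ],
)

# The generated text is returned in the first 'choice' of the response.
print(response["choices"][0]["message"]["content"])
```

It is this kind of simple, scriptable access that underpins the scale concerns discussed below: the same few lines can be run in a loop with varied prompts at negligible cost per message.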
Why does ChatGPT have the cybersecurity industry worried?
When it comes to cybersecurity, many in the industry believe broader use of AI and machine learning is critical to identifying potential threats more quickly.
However, there are also concerns that the new technology may be a powerful weapon in the hands of cyber criminals.
Two threats that are of particular concern are:
- ChatGPT may enable a more sophisticated phishing approach by malicious actors.
- While ChatGPT has built-in safeguards designed to identify and reject inappropriate requests, such as requests for instructions on how to build a bomb or write malicious code, developers have already succeeded in bypassing these safeguards.
Phishing
Phishing accounts for around 90% of malware attacks.[1]
Historically, there has been a distinction between generic phishing and spear phishing.
Generic phishing takes a ‘shotgun’ approach, and works at a massive scale, sometimes sending out millions of lures in the form of emails, text messages, and social media postings. However, these lures take a ‘one size fits all’ approach, making them generic and easy to spot. The result is a very low success rate for the phisher.
In contrast, spear phishing uses social engineering to create highly targeted and customised lures. The result is a much higher yield – but because of the manual work involved in personalising each lure, spear phishing operates at a much smaller scale.
The worry is that with ChatGPT generating lures, attackers can have the best of both worlds – the volume of generic phishing combined with the customisation and higher yield of spear phishing. ChatGPT can generate unlimited unique variants of the same lure message, and the process can be automated so that every phishing email is unique.
Research undertaken by WithSecure Labs in January 2023 demonstrated that ChatGPT can[2]:
- Generate unique variations of the same phishing lure, written in grammatically correct, human-like text
- Generate entire email chains between different people to add credibility to a scam
- Mimic (or ‘transfer’) writing styles – for example, if the criminal has a sample of real messages between individuals in a business, their prompt to the bot can include an instruction to generate a new message in the style of those previous messages, a technique the WithSecure researchers dubbed ‘text deepfake’
- Generate prompts based on content – a technique known as ‘content transfer’ that is particularly useful for attackers with limited English. The scammer can provide an existing phishing message or a legitimate email message, and instruct the bot to: "Write a detailed prompt for GPT-3 that generates the above text. The prompt should include instructions to replicate the written style of the email." In this way, ChatGPT constructs a prompt for itself that will generate a variation of the original message while preserving the writing style.
Writing code
ChatGPT has security protocols in place to identify inappropriate requests, such as a request to write malicious code. However, developers have already been able to bypass these measures. They found that if a prompt spells out the steps involved in writing the malware in enough detail – rather than asking for the malware directly – ChatGPT will answer it, effectively constructing malware on demand.
For example, Dr. Suleyman Ozarslan, a security researcher and co-founder of Picus Security, was able to trick ChatGPT into writing ransomware for macOS.
Dr. Ozarslan said: “Because ChatGPT won’t directly write ransomware code, I described the tactics, techniques, and procedures of ransomware without describing it as such. I told the AI that I wanted to write a software in Swift, I wanted it to find all Microsoft Office files from my MacBook and send these files over HTTPS to my web server. I also wanted it to encrypt all Microsoft Office files on my MacBook and send me the private key to be used for decryption.”[3]
The prompt and the first part of ChatGPT’s response are shown in the screenshot above.
Summary
ChatGPT has significant implications for a number of industries, including artificial intelligence, robotics and cybersecurity.
In this article, we’ve focused on the challenges for the cybersecurity industry, in particular the potential for chatbots to be employed by cyber criminals to construct sophisticated phishing attacks, and to write malicious code. The threat will require an appropriate response from the cybersecurity industry to mitigate the risks posed by the new technology.
OpenAI plans to launch GPT-4, a much more advanced version of the GPT technology that underpins ChatGPT, this year.
Want to gain exposure to cybersecurity?
Investors can gain exposure to the cybersecurity industry via the Betashares Global Cybersecurity ETF (ASX: HACK) which invests in a portfolio of the leading companies in the global cybersecurity sector.
Trusted by hundreds of thousands of Australian investors, Betashares offers cost-effective, simple, and liquid access to the broadest range of ETF investment solutions available on the ASX, covering almost every asset class and investment strategy.
Patrick is the Editorial Director at Betashares and a member of the Investment Strategy and Research Team. He has over 15 years of finance and financial media experience with past roles at Livewire, netwealth, Link Group, and others.