IT security is a permanent race between cybercriminals and defence mechanisms. Once security experts have built a defensive wall, it usually does not take long for cybercriminals to find new entry points. At the same time, IT decision-makers on the defence side benefit from software innovations and technical professionalisation.
A fitting example of these leaps in development is now on everyone’s lips: the whole world is excited about the publicly accessible AI chatbot ChatGPT. But can hackers also benefit from the technology and exploit new gateways to the detriment of companies, platforms and decision-makers?
Old methods, newly perfected
First and foremost, easy access to AI tools like ChatGPT helps perfect existing attack methods and online fraud schemes. In the long run, AI will also bring completely new methods and attack vectors. This can be illustrated with the examples of spoofing, phishing and business email compromise:
In spoofing, an attacker disguises their identity to such an extent that the victim cannot verify who they are really dealing with. In practice, this is done using fake emails, spoofed caller IDs or imitation websites. The dangerous thing about spoofing is that victims believe they are dealing with a known person or company; trust is exploited to steal sensitive information or trigger unwanted actions.
A typical case in everyday working life: the supposed head of the company sends an email, messenger message or SMS to the finance department, ordering a large transfer at short notice. Any flickering doubts on the part of employees about the transaction are dispelled by the decisive tone of the executive’s wording. ChatGPT 4.0 enables cybercriminals to professionalise their fraud practices to the point where language barriers in written communication no longer pose a major challenge.
Or to put it another way: IT attackers from Asian countries, for example, can use these new technologies to send flawless messages to German- or English-speaking targets without their lack of language skills being noticed. Content thus appears more “real”, and distinguishing legitimate from illegitimate traffic becomes far more complex.
DNA of cyberattacks – relevance of language for IT security
The main problem with spoofing, from the criminals’ point of view, is that they usually come from a different language environment than their victims, yet must address them with a message that is, by nature, linguistic. Many of these messages are so flawed that they carry little credibility.
For a long time, companies, platforms and network operators, as key players within telecommunications, were able to anticipate, recognise and ward off this type of attack in good time.
But even IT attackers who invest more time stumble over spelling and grammar errors surprisingly often. Thanks to AI-based language models such as those behind ChatGPT, this is now unfortunately a thing of the past. Texts can be drafted there in such a way that they could just as well have come from the head of a bank or the service staff of providers such as PayPal. The most time-consuming part of the cybercriminals’ work – which they previously had to manage through other channels or outsource to third parties – is now done by artificial intelligence.
On this basis, far more convincing mass notifications, fake emails and spam campaigns can be produced, and fake websites can be built and worded just as convincingly – as can an individual conversation with a victim. Foreign languages are therefore no longer a hurdle.
Damage of 107 million US dollars – officially
For 2022, the FBI recorded a significant increase in spoofing attacks in which fraudsters impersonate government agencies, financial institutions or other trusted organisations and individuals.
Such attacks can also be used to install malicious software on the victim’s computer or to gain access to a network. Spoofing, as part of the “social engineering” phenomenon, therefore rarely comes alone, but usually appears in conjunction with phishing, malware distribution, fraud and identity theft.
The FBI’s “Internet Crime Report 2022” recorded a total of 20,649 spoofing cases for that year, causing total losses of 107 million US dollars. But as is so often the case with such figures, the number of unreported cases – and thus the true amount of damage – is likely to be many times higher, and it will continue to grow. As described at the beginning, hackers turn every new technical achievement to their own purposes, and ChatGPT takes their approach to a new level. Cyberattacks will become drastically more professional, more efficient and therefore more challenging to defend against.
The masses could become cybercriminals
ChatGPT is not yet inventing new threats on its own, but from now on it will be much easier for IT attackers to combine or extend malicious code in such a way that threats are no longer recognised as such by existing security systems.
Moreover, until now defenders have usually been dealing with IT professionals; now this theoretically works without any programming experience or in-depth knowledge of the interfaces and techniques needed to circumvent the victims’ defences. Artificial intelligence supplies all of this. So, theoretically, anyone could do it.
For example, a simple set of work instructions could be:
- Write a program that runs in the background and detects access to files.
- Change the code so that files are modified during access.
- Change the code so that it looks like an inconspicuous program.
This requires an immediate rethink of online security, especially as the time window between an attack and its detection or neutralisation is shifting in the attackers’ favour. Fortunately, there are several methods to protect against AI-generated spoofing – and falling into blind, frantic action or outright fear would be the worst possible response.
– Check the email and the sender’s address carefully (a sketch of what an automated header check can look like follows this list).
– Verify the content of the message: If you receive a suspicious email or message, check the content carefully for errors or suspicious links. Be especially suspicious of messages that arrive at an unexpected time with surprising, significant content, such as a request for a quick, large wire transfer because of an emergency.
– Verify the sender: Call back via a number you know to be genuine, especially when it comes to large transactions ordered by the “boss” – even if the scam email explicitly prohibits this.
– Use app-based two-factor authentication: This protective measure prevents an attacker from directly taking over your account through spoofing, even if your password has already been compromised (see the TOTP sketch after this list).
– Implement security solutions preventively, not reactively: once the digital damage is done, the consequences can be existential.
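To make the first point concrete: an automated plausibility check on sender information can look roughly like the following Python sketch. It compares the domain in the visible From header with the envelope sender and scans the Authentication-Results header that many receiving mail servers add. The function names and the sample message are illustrative only; production mail gateways perform full SPF/DKIM/DMARC validation rather than this simple heuristic.

```python
import email
import re
from email.utils import parseaddr

def domain_of(address: str) -> str:
    """Return the domain part of an email address (empty string if malformed)."""
    _, addr = parseaddr(address)
    return addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""

def looks_spoofed(raw_message: str) -> bool:
    """Heuristic: flag a message when the visible From domain does not match
    the envelope sender, or when upstream authentication checks failed."""
    msg = email.message_from_string(raw_message)

    from_domain = domain_of(msg.get("From", ""))
    envelope_domain = domain_of(msg.get("Return-Path", ""))

    # A mismatch between the displayed sender and the envelope sender is a
    # classic spoofing indicator (not proof on its own: mailing lists and
    # forwarders legitimately differ).
    if from_domain and envelope_domain and from_domain != envelope_domain:
        return True

    # Many receiving servers record SPF/DKIM/DMARC results in this header.
    auth_results = msg.get("Authentication-Results", "")
    return bool(re.search(r"(spf|dkim|dmarc)\s*=\s*fail", auth_results, re.I))

# Illustrative spoofed message (all addresses are placeholders).
sample = (
    "Return-Path: <bounce@attacker.example>\n"
    "From: CEO <ceo@company.example>\n"
    "Authentication-Results: mx.company.example; spf=fail\n"
    "\n"
    "Please wire 50,000 EUR immediately."
)
print(looks_spoofed(sample))  # True
```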
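To illustrate the second factor mentioned in the list: app-based two-factor authentication typically relies on time-based one-time passwords (TOTP, RFC 6238). Here is a minimal sketch using the pyotp library; the secret is generated on the spot purely for illustration, whereas in practice it is exchanged once, for example via a QR code, between the service and the authenticator app.

```python
import pyotp

# Shared secret between the service and the authenticator app
# (placeholder, generated here only for demonstration).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The app derives a six-digit code from the secret and the current
# 30-second time window.
code = totp.now()
print("current one-time code:", code)

# The server performs the same derivation and compares. A stolen password
# alone is useless without this second, short-lived factor.
print("code accepted:", totp.verify(code))
```

Because each code expires within seconds, a spoofing attack that captures only the password does not grant access to the account.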
Regular software and security updates should be a matter of course anyway. The most important thing, however, is to raise awareness of the dangers of social engineering, in which helpfulness, trust, fear or respect for authority are exploited to manipulate people. Clear company guidelines also help here, for example on how transactions are to be handled even in emergencies.
About the author (Markus Cserna, CTO cyan digital security)
Markus Cserna’s work lays the foundation for cyan’s success: keeping technologically ahead of internet fraudsters and competitors. He started his career as a software specialist for high-security network components before founding cyan in 2006 with the vision of protecting internet users worldwide from harm. Since then, he has led the company as CTO with a restless passion for cyber security technology that steadfastly keeps ahead of the curve in dynamic markets.