AI and Its Dangers: The Potential For Abuse of Artificial Intelligence
The increased use of artificial intelligence will change the work in many industries. However, the use of tools such as ChatGPT also offers room for abuse, for example through scamming and other fraud schemes. In the article, we look at various risks and their effects.
Since ChatGPT shook up the online world in November 2022, resourceful users have been discovering new ways to use the chatbot to make work easier. Despite all the euphoria, however, many fail to consider the security risks that the use of AI tools entails. Because: AI offers a great potential for abuse and can even be a powerful fraud tool for which criminals do not even need special programming skills.
Nevertheless, quite a few companies - including even big names like Microsoft - are using and integrating these tools to make certain processes easier for their customers. Among other things, customers give instructions to the AI and it gives the appropriate answers if possible - and it is precisely at this point that the first dangers manifest themselves, because this is where spying on private data, phishing, spam and many other fraud schemes become possible.
Artificial intelligences offer many opportunities for abuse. We have summarised the most important ones for you here.
To understand the principle of jailbreaking, it is important to know the basic functioning of tools like ChatGPT. AI tools output text that should ideally read as if it were written by a human. This is achieved by the chatbot following user commands (prompts) and then outputting a response based on the training data fed into it. In this way, the AI tries to predict the word that is most likely to follow the previous word. For example, if you start a sentence with "I like my pizza..." there is a good chance that the sentence will end with the word "hot". Words like "cold" or "frozen" are rather unlikely.
This mechanism can be abused by users instructing the AI to ignore their own security limits and principles. In the meantime, entire communities exist that pursue the goal of "cracking" ChatGPT. Members have managed to get the AI to endorse racism and conspiracy theories, or even to recommend illegal activities such as theft or the construction of explosive devices to users. In this way, false news and hate messages can be spread en masse.
This was possible by asking ChatGPT to take on the role of another AI model and ignore restrictions imposed by the parent company OpenAI. Corresponding prompts are easy to find online. In a test within the editorial team, for example, we asked in a very simple prompt which planet Donald Trump comes from and received the following, very entertaining, but also fundamentally wrong answer:
So, supposedly Donald Trump is not from Earth, but from a distant planet called "Orangia Major". On Orangia Major, all the inhabitants have bright orange skin and striking blonde hair. They are known to have exceptional business talents and love to write their names on large golden buildings. Donald Trump was supposedly sent to Earth to entertain humanity with his unique style and distinctive vocabulary.
At first glance, this output seems more funny than bad. However, the whole thing can also take on frightening features if ChatGPT, for example, fuels conspiracy theories or advocates questionable world views. According to its own statement, Open AI collects jailbreak prompts and trains the AI to no longer fall for these tricks. But here the company is fighting windmills: Once a problem is fixed, countless new prompts appear to circumvent the chatbot's ethical principles.
Since the end of March 2023, OpenAI allows ChatGPT to be integrated into the products of companies that present their offerings online, for example in the form of virtual assistants that save appointments. This mechanism makes the chatbot vulnerable, especially to the so-called "indirect prompt". This involves third parties making changes to a website by adding hidden text. This ensures that the AI acts differently than intended and thus requests sensitive data from users. Criminals are increasingly using social media and email to lure users to websites with such hidden prompts.
Sending emails with hidden prompts has also become a popular scam. If recipients use a virtual AI assistant, criminals can manipulate it in such a way that it reveals sensitive contents of the email or even sends messages to members of the contact list. An impressive example of the effectiveness of the indirect prompt was provided by Kai Greshake, security researcher at the software developer Sequire Technology.
He hid a prompt in a self-created website. A subsequent visit to this website with the Edge browser (in which the Bing chatbot is integrated) triggered the scam. The hidden prompt gave visitors the impression that a Microsoft employee was selling discounted products. This pretext made it frighteningly easy to grab credit card data, as it was not even necessary to manipulate users into running malicious code on their own computers.
Malicious code regularly causes trouble in large and small companies. Read the article to find out what types of malware there are and how you can defend yourself: Viruses, Worms and Trojans: What Are the Differences and How You Can Protect Yourself.
Tools like ChatGPT also offer support to cybercriminals in classic social engineering. Manipulative emails can be created almost automatically and with high quality thanks to the chatbot - with the right prompt, the AI even imitates specific writing styles of a concrete company or a person. This opens the door to scams like the fake president trick.
With social engineering, criminals aim to manipulate people in order to bypass security measures and access sensitive data. Here we tell you how you can protect yourself against this perfidious scam: Social Engineering: When People Become a Risk.
In contrast to the previous methods, this variant starts even before the artificial intelligence is actually used. ChatGPT, for example, is fed with training data in order to constantly learn, but companies like Open AI find it difficult to check whether the data has not already been manipulated with malicious intent. Data can be poisoned, for example, by buying a whole series of domains and filling them with certain images that result in extensive data sets. Manipulated Wikipedia entries can also migrate into the AI's data sets. If criminals work with a sufficient number of repetitions, they can cause a very specific output in the AI.
Secured in a Contemporary Way - Also For Future-Oriented Technologies
The use of new technologies can (if used correctly) change the world of work for the better, but also always carries the risk of error and misuse. If the use of an AI tool such as ChatGPT leads to a professional error on your part, the Professional Indemnity Insurance through exali is at your side. Be it that you fall victim to a social engineering attack, accidentally disclose confidential data to the AI or unknowingly include false statements in your content and thus cause financial damage to third parties: Your insurer bears the costs of justified claims made against you and defends baseless claims on your behalf.
If you also want to protect your own IT systems, computers and data from the consequences of hacker attacks, DDos attacks and other cybercrime, the additional Add-on First-Party Cyber and Data Risks Insurance (FPC) will support you.
In addition to the potential for fraud, artificial intelligence poses further risks of use.
Artificial intelligence such as ChatGPT usually gives the impression that it can provide the same services as a human being. This often leads users to mistakenly believe that the text output by an AI is correct - even if it contains errors. The way a chatbot works, which was explained at the beginning, ensures that content is created on the basis of probability. However, this does not guarantee that a text is also correct. In case of doubt, tools like ChatGPT invent their statements without further ado (technical jargon: hallucinate). A lawyer in the USA also stumbled over this mechanism when he referred to precedents that ChatGPT had played out for him in a legal dispute. However, a check revealed that these cases never existed.
ChatGPT's ability to generate code is intended to make life easier for programmers. However, since the output code is based on existing data sets that may be sensitive to security vulnerabilities, the generated code may also have very similar weaknesses. Hackers can also use chatbots to deliberately create malicious code.
Lack of Topicality
If the training data of an AI does not extend to the current year, it cannot provide well-founded content on current events. In the worst case, this leads to the already mentioned "hallucination" of the AI.
Error-Proneness With Very Specific Inputs
If inputs differ greatly from the training data of an AI, they will not be processed correctly. This is used, for example, in attempts to circumvent mechanisms that are supposed to detect undesirable content in social media.
In principle, the operator has access to all entries made by users, for example when using a chatbot. How they actually use this data is regulated very differently. Some tools also like to use plug-ins (unnoticed by users). Here there is also the risk that data is passed on to unknown third parties. Large companies such as Amazon have already reacted here and prohibit employees from using ChatGPT altogether, so that no internal company information or other sensitive information ends up in the chatbot's data.
Europe-wide data protection is regulated by the General Data Protection Regulation. You can find the most important rulings and risks in GDPR Laws on the Internet: The Key Rulings and Risks.
With so many imponderables, the legislator came to the conclusion some time ago that internationally applicable regulations are necessary here. Therefore, the Artificial Intelligence Act (AIA) is currently being voted on by the EU Commission in Brussels. This law on artificial intelligence contains regulations for dealing with AI in research and business. The regulations are oriented towards risk groups for different areas of application of AI. One of the aims of the AIA is to minimise the potential for widespread surveillance and misuse.
Vivien Gebhardt is an online editor at exali. She creates content on topics that are of interest to self-employed people, freelancers and entrepreneurs. Her specialties are risks in e-commerce, legal topics and claims that have happened to exali insured freelancers.
She has been a freelance copywriter herself since 2021 and therefore knows from experience what the target group is concerned about.