Generative AI: Intelligent and Dangerous?
Have you heard of ChatGPT or Gemini, online programs that can create text content on their own, solve problems, and write code for you? This is possible thanks to generative artificial intelligence (AI) – a technology that, unfortunately, cybercriminals have already discovered for themselves.
In this article, you will learn what AI is, what cybercriminals can do with it, and how to protect yourself against new attack techniques.
1. What is Generative AI?
One example of artificial intelligence, or AI for short, which is the simulation of human intelligence, would be a computer program that can recognize objects in pictures – just like a food critic who can recognize the individual ingredients in a particular soup. Generative artificial intelligence, by contrast, would be the chef who prepares the soup: it creates new content. Let's stick with the chef example: a guest orders a soup, but the chef has never made it before. During her training, she consulted many recipes and used them to learn the most important principles of cooking. She also has plenty of experience in the kitchen and remembers the dishes she made that were particularly tasty. Our chef can now use these skills to tackle this new challenge.
AI works similarly: it learns from large amounts of sample data, from which it derives rules, for example, how to structure a business letter. During its learning phase, the AI also generates new content that is evaluated. Rules that lead to good content are used again in the future. AI learns by using a so-called neural network, a structure that mimics the behavior of neurons in the brain.
The artificial nerve cells forward data and are flexibly connected to each other. Data transfers that lead to correct results reinforce certain connection paths, while others are dismantled. New data then preferentially flows along the strong connection paths. This is how the network learns.

Data is the foundation of everything. For generative AI to generate new content, it needs a lot of training data. This training data can come from all kinds of data collections, such as digitized books, online news, or even texts from chats and online forums. This leads to a potential problem: if the training data has, for example, a racist or sexist bias, then the AI learns these prejudices too. That is why healthy suspicion is always advisable when using generative AI. The result is only as good as the training data. In other words, if poor-quality or biased data goes in, poor-quality information comes out.
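The idea of strengthening and weakening connections can be sketched with a toy example. The following Python snippet is a deliberately simplified illustration (the function name `train_neuron` and all values are made up for this sketch; real systems like ChatGPT use billions of connections and far more sophisticated training): a single artificial "neuron" learns the logical AND function by reinforcing connection strengths that lead to correct answers and weakening those that lead to errors.

```python
# Illustrative sketch only: one artificial "neuron" learning the logical AND.
# Connections (weights) that lead to correct results are strengthened;
# those that lead to errors are weakened - the same principle, in miniature,
# that a neural network uses during training.

def train_neuron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # connection strengths for the two inputs
    b = 0.0          # bias (the neuron's base tendency to fire)
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out
            # Reinforce or weaken each connection based on the error
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Training data: the AND truth table (output is 1 only for input 1,1)
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(samples)

def predict(x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

results = [predict(x1, x2) for (x1, x2), _ in samples]
print(results)  # the neuron has learned the AND rule: [0, 0, 0, 1]
```

After training, the neuron answers correctly for all four inputs, even though nobody programmed the AND rule explicitly; it was learned purely from examples and feedback.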
Let’s review an example of how AI creates new content.
a. Data
In three weeks, my best friend is coming to visit, and I know she loves good food. So, I’m reading lots and lots of cookbooks in preparation.
For generative AI to produce new results, it needs lots and lots of training data.
b. Patterns
I am trying to understand as many cooking techniques and principles as possible: What’s the difference between boiling and steaming? What goes well with carrots? What flavors work with parsley?
The AI neural network analyzes the data, recognizes patterns in it, and derives rules from it.
c. Training
Over the next few weeks, I will invite all my friends to try my home-cooked food. My only request is an honest review, and I’ll make sure to remember the dishes they liked the most.
Generative AI learns by generating new content using the rules it learns itself. If the result meets the expectations, the neural network can learn from it by strengthening the connections between its artificial neurons.
d. Results
My best friend just got here and wants to eat a soup I’ve never made before. But I have my own cooking rules by now, so I’ll just modify recipes and recombine ingredients to make something new. She is surprised that I can cook so well!
The generative AI was trained on a large amount of data, recognized patterns in it, and derived rules from them, which it stored in its neural network. It is now able to generate new content in response to any query.
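The four steps above can be sketched in miniature. The snippet below is a toy illustration, not how real generative AI works internally: instead of a neural network, a simple table of word pairs stands in for the "rules" derived from the training data, and new text is generated by repeatedly applying those rules. The training sentences are invented for this example.

```python
import random
from collections import defaultdict

# Toy sketch of the data -> patterns -> training -> results loop.
# (Real generative AI uses a neural network; here a simple table of
# which word follows which stands in for the learned "rules".)

# a. Data: a tiny invented training text
training_text = (
    "the chef makes soup . the chef tastes soup . "
    "the guest likes soup . the guest thanks the chef ."
)

# b. Patterns: record which words tend to follow each word
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# c./d. Generate a new sentence by repeatedly applying the learned rules
random.seed(42)  # fixed seed so the sketch is reproducible
word = "the"
sentence = [word]
while word != "." and len(sentence) < 12:
    word = random.choice(follows[word])
    sentence.append(word)
print(" ".join(sentence))
```

The generated sentence may never appear verbatim in the training text, yet every word transition in it was learned from the data, which is the essence of how generative models recombine learned patterns into new content.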
2. Cybercriminals & AI
Cybercriminals can do a lot of damage with generative AI. For example, they can quickly generate large amounts of disinformation to deliberately cast an organization in a bad light. Generative AI is also very good at compiling information that can then be used for targeted social engineering attacks, that is, attacks that exploit human weaknesses. Or cybercriminals may use the AI's ability to write software code to keep building new malware that standard virus scanners do not yet detect. This opens up new opportunities for criminals without much technical knowledge. So far, cybercriminals have divided the work: some write the malware and offer it for sale, while others use it to carry out cyberattacks. In the future, however, cybercriminals with less technical knowledge will likely be able to develop malware for their attacks themselves.
[Image: Cybercriminals in the AI era]
And of course, this is much cheaper. With the help of generative AI, criminals can also obtain information on things like vulnerabilities in network devices or software much more quickly. A single targeted attack can then be enough to cause great damage. The ability of generative AI to produce texts, images, and synthetic voices is another threat, and criminals have already started to exploit it. In the past, phishing emails, such as those pretending to come from your bank, were often full of spelling errors. When written by generative AI, these emails are usually error-free and look believable. Criminals can also use a voice created with generative AI to make fraudulent calls and even obtain sensitive information during the conversation. This works even better if the voice mimics a friend or colleague, which is no problem for an appropriately trained AI. Fake photos are another danger: a deepfake, meaning an artificially created image or video, could show a person in a compromising situation and thus make them susceptible to blackmail.
3. How to Protect Yourself?
The question now is: can you protect yourself against attacks with generative AI at all? AI-powered attacks on networks, or malware generated by artificial intelligence, are difficult to detect for employees who don't work with this kind of technology regularly. However, you can protect yourself against social engineering attacks such as fake texts, images, videos, or voices.
It helps to continue to adhere to cybersecurity best practices. Install updates regularly, be aware of the warning signs of phishing or social engineering, and of course, participate in security awareness training regularly. Generative AI can become a threat, but it can also be used to fight cybercrime. Protect yourself and your organization:
– Never input sensitive information into AI systems; if you are not sure whether information is sensitive, don't enter it.
– If you see someone else entering sensitive information into one of these systems, report it to the appropriate security or IT team immediately.
4. Conclusion
As artificial intelligence (or AI) programs become more integrated into our daily lives, the safety of sensitive information such as personal data, financial information, or even confidential communications is a growing concern. When you share sensitive data with AI, you risk it being stored, replicated, or accessed by unauthorized entities. Even AI systems with robust security can be vulnerable to cyberattacks. Stored data can be hacked, misused, or accidentally leaked. This can lead to serious issues like identity theft, financial fraud, fines, and lawsuits, which can have significant impacts on you and your organization.