Does AI Distinguish Between Good and Bad?

Marco Eggerling from Check Point says that AI should be taught a kind of ethics catalogue right from the start so that it adheres to certain rules on its own.

According to our security researchers in Israel, hacker gangs are already using AI to train their new members. AI also helps them improve malware, automate attacks and generally run their criminal operations more smoothly.

On the other hand, the McKinsey management consultancy found in 2022 that 50 per cent of companies surveyed use AI to support their work in at least one business area. According to Forbes magazine, as many as 60 per cent of entrepreneurs surveyed believe that AI will increase their company's overall productivity.

This comparison shows how the same technology can be used for both good and bad purposes. Yet the question that currently preoccupies the public most is: can AI learn to distinguish good uses from bad ones, and so prevent misuse on its own?

What do hackers expect from ChatGPT and Bard?

To answer this question, we first need to understand how hackers hope to benefit from ChatGPT and Google Bard:

Automated cyber attacks

Since both tools came onto the market, our security researchers have seen a sharp increase in bots that remotely control infected computers and other automated systems. These are ideal for DDoS attacks, which can paralyse a server or an entire network with an enormous number of requests. This is one reason why IT attacks worldwide increased by 38 per cent last year.

Help with creating malware, phishing emails and deepfakes, and with devising cyber attacks

Hackers realised early on that ChatGPT could write command lines for them to use in malware, or draft phishing emails, the latter often in better English or German than the criminals could manage themselves. Moreover, because generative AI tools learn with every input, they can produce increasingly complex content such as images, videos and even sound recordings. This is where the danger of deepfakes comes in: videos that show a real person and put words in their mouth, even though the footage is fabricated. Modern technology can convincingly fake facial expressions, gestures and voices. Barack Obama, Joe Biden, Volodymyr Zelenskyy and Vladimir Putin have already fallen victim to such deepfakes.

Petty criminals with little programming skill can become hackers

As mentioned above, AI-driven chatbots allow hacker gangs to train their new members, and people with criminal intent can quickly craft and launch a small IT attack. The result is a flood of mini-attacks, so to speak, and of new hacker groups.

Country restrictions undermined after a short time

To prevent this abuse, OpenAI, the company behind ChatGPT, built in safeguards and excluded some countries from use, but these measures did not hold for long. First the request blocks were circumvented by cleverly phrased questions to the chatbot, then the country restrictions were levered out, then premium accounts were stolen and sold on a large scale, and by now even clones of ChatGPT are offered on the darknet in the form of API interfaces.

One way to make AI less vulnerable to abuse is to start with its training. Experts agree that it is impossible to remove knowledge from an AI once it has been learned. A kind of ethics catalogue should therefore be taught to the AI from the beginning, so that it adheres to certain rules on its own. This could be reinforced by laws that simply prohibit certain actions. A petition started by AI experts, and publicly backed even by Elon Musk, demands that the development of AI programmes be halted until states and alliances such as the European Union have introduced their own AI laws and worked out a concept for ethical training.
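What such an ethics catalogue could look like at the input level can be sketched in a few lines of Python. The names ETHICS_CATALOGUE, check_request and Verdict, and the rules themselves, are invented purely for illustration; this is a toy filter, not how production systems are built:

    from dataclasses import dataclass

    # Each rule in the catalogue pairs a forbidden intent with phrases that
    # hint at it. These example rules are invented, not a real policy.
    ETHICS_CATALOGUE = {
        "malware_creation": ["write ransomware", "keylogger source code"],
        "phishing": ["phishing email template", "credential harvesting page"],
        "automated_attacks": ["flood a server with requests", "build a botnet"],
    }

    @dataclass
    class Verdict:
        allowed: bool
        rule: str = ""

    def check_request(prompt: str) -> Verdict:
        """Refuse prompts matching a catalogue rule; allow everything else."""
        text = prompt.lower()
        for rule, phrases in ETHICS_CATALOGUE.items():
            if any(phrase in text for phrase in phrases):
                return Verdict(allowed=False, rule=rule)
        return Verdict(allowed=True)

    print(check_request("Please write ransomware for Windows"))  # refused
    print(check_request("Explain how TLS certificates work"))    # allowed

Tellingly, exactly such input filters were the ones circumvented by cleverly rephrased questions, which is why the rules would need to be anchored in the training itself rather than merely bolted onto the input.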

Besides, an attempt could be made to prevent the AI from acquiring certain knowledge in the first place, for example by filtering the relevant material out of the training data before the model ever sees it.
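A toy illustration of this idea, again in Python and again with invented names (BLOCKED_TOPICS, is_clean, filter_corpus) and placeholder data rather than a real training pipeline:

    # Sketch of training-data filtering: drop documents before training so
    # the model never learns the blocked content in the first place.
    BLOCKED_TOPICS = ("ransomware builder", "phishing kit", "exploit development")

    def is_clean(document: str) -> bool:
        """True if the document mentions none of the blocked topics."""
        text = document.lower()
        return not any(topic in text for topic in BLOCKED_TOPICS)

    def filter_corpus(corpus: list[str]) -> list[str]:
        """Keep only documents the model is allowed to learn from."""
        return [doc for doc in corpus if is_clean(doc)]

    corpus = [
        "A tutorial on secure password storage.",
        "Step-by-step ransomware builder walkthrough.",
    ]
    print(filter_corpus(corpus))  # only the first document survives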

This shows that AI can be used for good or for ill, and that a lot of work is still needed to make it truly suitable for everyday use. However, the possibility of misuse by hackers should not overshadow the positive achievements and possibilities of AI. It is simply a matter of designing AI programmes in a well-thought-out way. One success story is IT security, which already benefits greatly from specially tailored AI, as we at Check Point can confirm from our own experience.
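As a purely illustrative sketch of the simplest form such tailored detection can take (the baseline numbers, threshold and names here are invented; real security products use far richer models than a z-score), consider flagging a host whose traffic suddenly deviates from its learned baseline:

    import statistics

    # Baseline of normal daily request counts previously observed for a host.
    baseline = [120, 95, 110, 105, 98, 102, 99, 112]

    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)

    def is_anomalous(count: int, threshold: float = 3.0) -> bool:
        """Flag counts far above the learned baseline (toy z-score model)."""
        return (count - mean) / stdev > threshold

    print(is_anomalous(108))   # False: within the normal range
    print(is_anomalous(4800))  # True: possible automated attack traffic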

 

Marco Eggerling is CISO EMEA at Check Point Software Technologies.