TikTok, WhatsApp, Metaverse: Ugh. ChatGPT: Yay.

A Gigamon study shows that companies are sceptical about the security of some tools – but not about the AI chatbot ChatGPT.

As with many new technologies, generative AI and Large Language Models (LLMs) like ChatGPT raise the question: are companies aware of the potential risks, and how do they deal with them? Gigamon wanted to find out and, as part of a worldwide study, asked CIOs and CISOs from 150 German companies how they assess the security of modern technologies. The result: three quarters of the companies surveyed have no security concerns at all about their employees using ChatGPT. Only five percent have banned the AI chatbot from their company, and another 20 percent are currently assessing the risks.

WhatsApp often banned in corporate environment

Interestingly, companies are less forgiving when it comes to other technologies. On the Metaverse and WhatsApp, CIOs and CISOs agree 100 percent that there are potential security risks. Accordingly, 67 percent of them have banned the instant messenger from the corporate environment; the Metaverse has been banned by two percent. In both cases, the rest are at least examining the possible cyber risks in order to reach a decision on use as soon as possible. TikTok fares similarly: in ten percent of the companies, the short-video app is taboo, and 89 percent are investigating its risk potential. Only one percent have no concerns and allow TikTok in the company.

This suggests that the security risks of the aforementioned platforms are widely known and that the majority of companies take them seriously. The situation is different with ChatGPT – even though the AI chatbot poses a far from negligible threat to companies. Internal company information or other sensitive data that employees share with ChatGPT can end up in the training data pool and be stolen in the course of an attack on OpenAI. During the ChatGPT outage in March 2023, a bug even made chat inputs visible to other users. There are also indirect risks: cyber criminals can use the AI tool, for example, to write convincing phishing emails, construct false identities or develop malware.

False sense of security

“The fact that users don’t have to download anything for ChatGPT gives them a false sense of security. Normally, staff are trained to watch out for suspicious emails and to avoid downloading unknown files or clicking on strange links. But AI chatbots can now be used to write authentic-looking applications, websites and emails that conceal fraudulent activity. This increases the risk of employees falling victim to an attack. Companies that do not want to do without ChatGPT must therefore prepare for the worst. The key to greater security – for example, within the framework of a zero-trust model – is comprehensive visibility down to the network level. This exposes potential blind spots where cyber criminals can hide, allowing security teams to detect and combat attacks more quickly,” advises Andreas Junck, Senior Sales Director DACH at Gigamon.

Given the current threat situation, blind spots within the IT stack are a major challenge for 52 percent of German CIOs and CISOs. Nevertheless, many of them still lack visibility: only 29 percent have a comprehensive visibility foundation across networks, systems and applications to support their zero-trust architecture, and just 21 percent have visibility into encrypted data.