ChatGPT and The 6 Risks Associated with its Business Use

Issues such as bias, privacy and cyber fraud need to be taken into account by legal and compliance professionals.

ChatGPT has become one of the most prominent technological phenomena in recent months. Its popularity is spreading rapidly, but it does not come without risks.

“The results generated by ChatGPT and other large language model (LLM) tools are prone to various risks,” says Ron Friedmann, senior analyst director at Gartner, who believes that “legal and compliance leaders need to assess whether these issues present a material risk to their company and what controls are needed.

Otherwise,” he said, “companies could be exposed to legal, reputational and financial consequences.

Specifically, Gartner identifies six main dangers associated with ChatGPT. One is the delivery of “fabricated and inaccurate responses”. The tool can provide inaccurate information that appears “superficially plausible”, according to its description.

Moreover, Friedmann believes that ChatGPT “is prone to ‘hallucinations’, including made-up answers that are incorrect and non-existent legal or scientific citations”. Answers should therefore be verified rather than taken as valid by default.

Bias” and the resulting discriminations also have the potential to condition the results. “Complete elimination of bias is likely to be impossible,” Friedmann notes, “but legal and compliance aspects need to keep abreast of the laws governing artificial intelligence bias and ensure that their guidance complies”.

“This may involve working with subject matter experts to ensure the output is reliable and with audit and technology functions to establish data quality controls,” he explains.

Then there are the issues of “data privacy and confidentiality”. Gartner calls on companies to be very careful about what information is linked to ChatGPT and warns that “sensitive, proprietary or confidential information” could end up embedded in “responses for users outside the company”.

The recommendation, in this case, is quite clear. It is to directly prohibit the entry of personal and confidential data into public artificial intelligence tools.

Similarly, “intellectual property and copyright risks” can arise if this type of data has been included in the training process of the tool, which automatically implies the violation of regulations. Gartner is blunt: “ChatGPT does not provide source references or explanations of how its output is generated”.

Another sensitive area concerns “cyber fraud”. Criminals are advancing with technology to perfect their attacks and are already misusing ChatGPT for their tasks. For example, to generate fake reviews.

Applications like this are susceptible to injection techniques that end up tricking the model into triggering tasks that were not developed in the first place, such as writing malware.

Finally, companies must also consider “consumer protection”. They risk losing the trust of their customers if they do not inform them about the use of ChatGPT for their customer services. They may even end up being accused of unfair practices. To avoid this, organisations need to be clear and make the relevant disclosures.