Pornographic Deepfakes, a Threat Beyond Privacy

Pornographic deepfakes not only violate their victims' privacy; they can also have serious psychological repercussions, as well as legal consequences for the perpetrators.

In recent months we have been discovering that the applications of artificial intelligence are almost endless. Some can help us in our daily work, as is the case with generative AI tools. However, this technology also has a dark side, as we are seeing with deepfakes.

Some time ago we warned that deepfakes were being used not only for entertainment, but also to generate disinformation about politicians, brands and products. We also warned about their potential for creating pornographic videos.

The case of the deepfakes of the Almendralejo girls shows us that this is a real threat, as this technology is within the reach of anyone, even children.

The study ‘State of deepfakes 2023’ by Home Security Heroes offers some shocking data. For example, it reveals that the total number of deepfake videos online has grown by 550% since 2019. It also highlights that deepfake pornography accounts for 98% of all deepfake videos online.

It also points out that creating a 60-second pornographic deepfake video requires nothing more than a clear photo of the victim’s face, and can be done in just 25 minutes at no financial cost. Another striking finding of the report is that 99% of the people targeted by deepfake pornography are women.

“The creation of sexual images generated by AI, or sexual deepfakes, is not yet covered as a specific crime in the Penal Code. However, if we understand gender violence as all violence exercised against women, the creation of this type of image is a tool and another form of violence against women,” explains Rocío Pina, lecturer in the Faculty of Psychology and Educational Sciences and the Criminology degree at the Universitat Oberta de Catalunya (UOC).

The creation of deepfakes violates two sets of rights: the right to data protection, and the rights to privacy, honour and self-image. “Data protection regulations are violated because there is a dissemination of information that, although essentially false, uses real personal data, such as a person’s face or sometimes even their voice. This often involves the processing of personal data without the consent of the person concerned. In addition, the violation of regulations also occurs when these AI creations are shared or disseminated to third parties, sometimes even through open social networks, again without the consent of the affected party,” explains Eduard Blasi, associate professor at the UOC’s Faculty of Law and Political Science and an expert in privacy rights.

He also points out that “these images often involve the processing of sensitive data, as the information is related to a person’s intimate or sexual life and, depending on the image generated using this technology, it can also cause damage to the person’s right to honour, privacy and/or self-image”.

On the other hand, these pornographic deepfakes can also have a psychological impact on their victims. “This new form of violence by digital means broadens its scope and magnifies the consequences for the victims who suffer it,” Pina stresses. She adds that cyber-violence against women directly affects their mental health. Furthermore, the UOC points out that various studies have found high rates of mental health problems – anxiety, depression, self-harm and suicide – as a result of digital abuse, regardless of whether the images involved are fake or real.

Equally, these deepfakes extend the threat to minors. According to a study by the Internet Watch Foundation, there is reasonable evidence that AI is being used to create nude images of children whose clothed photos have been uploaded online.