Generative AI and the Right to Honour in Spain: Legal Challenges and Avenues of Protection Against Disinformation and “Deepfakes”
The advent of generative artificial intelligence (AI), with its capacity to create hyper-realistic content such as “deepfakes”, presents unprecedented challenges to the protection of the rights to honour (ie, protection of reputation and dignity), privacy and personal image. This technology not only facilitates the growth of disinformation, but also enables the manipulation of reality in ways that can gravely undermine the reputation of individuals and organisations. This article examines how the current Spanish legal framework, principally Organic Law 1/1982, is being tested by these new technologies. It delves into the implications of deepfakes and AI-generated defamatory texts, the complexity of attributing liability, and the existing avenues of protection. Finally, it discusses legal and technical solutions, including the role of platforms and the necessity of adapted regulation, to ensure a balance between technological innovation and the safeguarding of fundamental rights in our current digital era.
One of the greatest challenges posed by generative AI, such as ChatGPT or Google Gemini, is its capacity for the creation of deepfakes: manipulated audiovisual or audio files that depict individuals saying or doing things that never occurred. This technology, alongside the automatic generation of false or defamatory information through advanced language models, has become a powerful tool for disinformation and identity impersonation. Deepfakes can be used to create injurious narratives, damaging professional or personal reputations and inflicting harm that is difficult to repair in the public and private spheres of Spanish citizens. The ease with which such content can be created and spread virally on social media and digital platforms heightens the risk, making the damage harder to contain and reverse.
In light of this situation, a fundamental question arises for the Spanish legal system: Is the current legal framework sufficient to address these new realities? The right to honour, personal and family privacy, and personal image, enshrined in Article 18.1 of the Spanish Constitution and developed by Organic Law 1/1982, of 5 May, finds itself at the heart of this dilemma. Although the law offers robust protection against illegitimate interferences, the automated, hyper-realistic, and often anonymous nature of AI-generated content presents unprecedented challenges in identifying liability and effectively enforcing the regulations.
The right to honour in Spain: regulatory framework
The rights to honour, personal and family privacy, and personal image constitute pillars of human dignity and are recognised and protected with constitutional rank in Spain. The Spanish Constitution, in its Article 18.1, establishes that: “The right to honour, personal and family privacy, and personal image is guaranteed.” This guarantee is not merely a declaration of intent by the Spanish legal system; rather, it confers upon these rights the category of “fundamental”, implying reinforced protection and the possibility of judicial redress in the event of their violation.
The legislative development of these constitutional rights materialised with Organic Law 1/1982, of 5 May, on the civil protection of the right to honour, personal and family privacy, and personal image. This law is the key legal instrument that defines what constitutes an illegitimate interference and establishes civil actions for its defence. Honour is conceived as good reputation: the esteem a person has for themselves and that which others hold for them. Personal and family privacy refers to the reserved sphere of an individual’s life and family environment, which should not be made public without consent. Personal image, for its part, protects an individual’s faculty to control the depiction of their physical appearance and the dissemination of their portrait or likeness.
Organic Law 1/1982 details a series of acts considered illegitimate interferences with these rights. For instance, Article 7 enumerates conduct such as the placement of recording or filming devices, the revelation of private data concerning an individual or family, or the capture, reproduction, or publication of a person’s image without their consent. It is essential to highlight that the mere existence of an illegitimate interference is sufficient to trigger the right to protection, without the need to prove economic damage; the law presumes the existence of prejudice with the interference. Judicial actions seek not only the cessation of the interference and, where applicable, rectification, but also compensation for the moral and material damages caused.
Nevertheless, the protection of these rights is not absolute and must be balanced with other fundamental rights, especially freedom of expression and freedom of information, enshrined in Article 20 of the Spanish Constitution. The Constitutional Court and the Supreme Court have developed extensive jurisprudence to determine the boundaries between these colliding rights. The balance tilts in favour of the freedoms of expression and information when the information is truthful, of general interest, and publicly relevant. However, these freedoms do not extend to the dissemination of injurious, libellous, or vexatious expressions that contribute nothing to public debate, nor to the dissemination of false facts that undermine honour. The difficulty lies in applying these criteria to the speed and complexity of information generated and disseminated in the current digital environment.
Generative AI as a threat to the right to honour
The emergence of generative AI has introduced a new, complex dimension to the ways in which the right to honour can be violated. The capacity of these technologies to create false content that is indistinguishable from reality represents an unprecedented threat to the reputation, credibility, and identity of those affected by these fabrications.
One of the most paradigmatic examples of this threat, as previously mentioned, is deepfakes. These audio, video, and image manipulations allow, for example, a person’s image to be inserted into a video in which they never appeared, or their voice to be cloned to make them utter phrases they never spoke. The implications for honour and image are direct and devastating.
Beyond visual and auditory deepfakes, large language models (LLMs) also pose a significant risk through AI-generated disinformation and defamatory text. These models can draft fake news articles, misleading reviews, or even detailed reports containing injurious or libellous information about individuals or entities that is entirely false. The sophistication of the writing makes these texts difficult to distinguish from those written by humans, facilitating the widespread dissemination of hoaxes and large-scale smear campaigns, thereby undermining public credibility and trust in information.
However, perhaps the greatest legal challenge presented by AI lies in the difficulty of attributing liability. When a deepfake or defamatory text is created using these technologies, who bears legal responsibility?
This chain of responsibility, which may span the user who prompts the creation of the content, the developer of the model, and the platform that hosts and disseminates it, is extensive and complex, hindering the legal prosecution of offenders and the reparation of harm for victims. Furthermore, virality and exponential damage are inherent characteristics of the digital environment: the damage to reputation is magnified with each further distribution of the fraudulently generated content, leaving victims in a vulnerable position.
Protection and solutions within the Spanish legal framework
In the face of the challenges posed by generative AI to the right to honour, the Spanish legal system offers various avenues of protection, although some of these require interpretation and application adapted to the new technological realities confronting us.
The primary mechanism of civil defence is Organic Law 1/1982. A victim of a deepfake or an AI-generated defamatory text can initiate a civil action for the protection against illegitimate interference with their right to honour or personal image. The courts, relying on established jurisprudence, would have to assess whether the artificially generated content constitutes an imputation of damaging facts or statements that violate the person’s reputation. The law permits the request for the immediate cessation of the interference, which translates into the removal of the content from the internet and any dissemination medium, as well as the publication of a judgment or rectification. Furthermore, the victim is entitled to compensation for the damages caused. To determine the amount of this compensation, the courts consider factors such as the severity of the injury, the dissemination of the content, the notoriety of the affected person, and the specific circumstances of the case. The possibility of requesting preliminary measures is crucial in these scenarios, as it allows the victim to request the judge to provisionally and urgently remove the content even before a definitive judgment is rendered, given the speed of propagation in the digital environment.
In civil deepfake cases, the principal difficulty will most likely be identifying the defendant. Where the defendant is based outside Spain and has no presence there, service of process must be carried out in accordance with the applicable treaties or regulations.
In addition to the civil route, the criminal route may be considered, where the investigative powers of the criminal judge may prove particularly useful in identifying the defendant. Yet complexity again arises in determining criminal authorship when AI is the instrument. Responsibility would, in principle, fall on the person who uses the AI with the intention of committing the crime, but the debate on the potential liability of developers, or of the AI system itself, remains open and would require legislative adaptation.
The creation and dissemination of AI-generated content with the intent to insult or defame could fall within the crimes of insult and slander under the Criminal Code, provided their requirements are met (eg, the false imputation of a crime or the expression of facts that gravely undermine dignity). Moreover, deepfakes of a sexual or violent nature could constitute crimes against moral integrity if illicitly obtained images are used.
Intermediaries and digital platforms can play a fundamental role in combating AI-generated disinformation. The Spanish Information Society Services and Electronic Commerce Law (LSSI) already established a liability regime for these actors, obliging them to remove illicit content once they have effective knowledge of its existence. However, the recent entry into force of the European Union Digital Services Act (DSA) significantly strengthens these obligations. The DSA imposes due diligence requirements on large platforms, such as the obligation to establish clear and effective mechanisms for users to report illegal content, the swift processing of these notifications (“notice and takedown”), and transparency regarding their moderation algorithms. This should facilitate the faster removal of deepfakes and defamatory AI-generated content, shifting part of the responsibility to the platforms.
Finally, generative AI underscores the essential need for legislative adaptation and specific regulation for AI. In addition to the EU DSA, the Artificial Intelligence Act (AI Act) seeks to classify AI systems according to their risk level and establish requirements for those deemed “high-risk”. While its primary focus is safety and reliability, it also addresses transparency and the need for AI-generated content to be recognisable. This could lead to an obligation for labels or watermarks that clearly identify artificial content, a technical measure that would assist users in distinguishing reality from artificial manipulation. Furthermore, the debate focuses on whether future regulations should contemplate the liability of the AI’s “designer” or “trainer” for the malicious use of their models, establishing limits or guarantees to prevent the creation of tools that can be easily used to violate fundamental rights.
Conclusion
The advent of generative AI has radically transformed the information and communication sector, presenting significant challenges to the protection of the rights to honour, privacy, and personal image in Spain. The capacity to create deepfakes and generate defamatory texts automatically and on a large scale not only undermines individuals’ reputations but also threatens social trust and the veracity of information in our digital era. Although Organic Law 1/1982 provides a solid legal framework for the defence of these rights, its application to the sophisticated and often anonymous attacks of generative AI reveals complexities in attributing liability and in the effectiveness of corrective measures.
The fight against disinformation and violations of honour in this new digital environment requires a comprehensive response. This implies not only the adaptation and flexible interpretation of Spain’s current legislation to address new forms of attack but also the strengthening of the role of intermediaries and digital platforms through regulations such as the EU DSA. However, the need for specific regulation on AI is evident, directly addressing the risks it poses to fundamental rights, establishing transparency mechanisms, and potentially imposing liabilities on developers and malicious users.
The right to honour in Spain is not shaped solely by legislative provisions. A substantial body of case law significantly complements and develops this right. Spanish courts have consistently engaged in balancing exercises, performing proportionality judgements between the right to honour and other fundamental rights, such as the right to information and the right to freedom of expression. This judicial interpretation and application of legal principles have created a comprehensive framework that defines the scope and limitations of these interconnected fundamental rights.
Ultimately, the protection of honour in our technological age demands a delicate balance between promoting technological innovation and safeguarding fundamental rights. The future of the right to honour will depend on the ability to adapt the legal and technological framework to the speed of AI advancements, ensuring that progress is not achieved at the expense of individuals’ dignity and reputation. Constant vigilance, public education, and co-operation among legislators, technologists, and society are fundamental to building a digital environment where truth and respect prevail.
Avda Diagonal, 437, 5º 1ª
08006 Barcelona
Spain
+34 93 388 25 34
info@giromartinez.com
www.giromartinez.com