The legal grounds for protecting privacy and confidentiality in England and Wales are the Human Rights Act 1998 and the European Convention on Human Rights.
European Convention on Human Rights
The key provisions of the European Convention on Human Rights regarding privacy are Articles 8 and 10, as follows.
Human Rights Act 1998
The Human Rights Act 1998 incorporated the European Convention on Human Rights into domestic law. Therefore, under Section 6(1) of the Human Rights Act, UK courts and tribunals are obliged to act in a way that is compatible with Articles 8 and 10 of the European Convention on Human Rights when carrying out their functions.
For a claimant to establish a case for misuse of private information, they would need to satisfy a two-stage test, as follows.
Stage 1 ‒ reasonable expectation of privacy
Here the claimant must prove that they objectively have/had a reasonable expectation of privacy in relation to the relevant information. The factors that the court is likely to consider when assessing whether a person has a reasonable expectation of privacy are:
Stage 2 – balancing exercise
If the claimant is able to establish a reasonable expectation of privacy, the court will then assess whether that expectation is outweighed by the defendant’s right to freedom of expression under Article 10 of the European Convention on Human Rights by conducting a balancing exercise of the competing interests. For further details of the criteria to be applied during the balancing exercise, please refer to 1.3 Privacy Deadlines and Defences.
Confidential Information
In respect of confidential information, there is an equitable doctrine in England and Wales that a person who receives information in confidence cannot take unfair advantage of it. For a claimant to establish a case for breach of confidence, they must demonstrate that:
It is worth noting that there are some limitations to this protection – for example, where information has been (or is deemed to have been) read in open court, or where the public interest in disclosure outweighs confidentiality.
The main remedies available in the English courts for misuse of private information or breach of confidence are as follows.
Interim Injunctions
These are temporary injunctions that are intended to last for a defined period of time – commonly until the trial of the action. An interim injunction can be obtained before the publication of the private information. However, it will not be granted by the court on mere suspicion or apprehension of disclosure of private information; the court must be satisfied that there is clear evidence for such a fear. The claimant must also prove that they are “more likely than not” to be successful at trial (Cream Holdings v Banerjee) in order to justify the pre-publication intrusion into the defendant’s right to freedom of expression under Article 10 of the European Convention on Human Rights, which can be a difficult hurdle to overcome.
Final Injunctions
A final injunction may be granted to a successful claimant in an action for misuse of private information (or breach of confidence), either to prevent the continuation of the wrongful publication or to prevent a threatened publication. The court has discretion as to the terms and length of the injunction.
Damages
This is an available remedy for successful claimants in order to compensate for the invasion of privacy and for the distress, injury to feelings and loss of dignity that may have arisen as a result of the breach of privacy or confidence – as well as for financial loss or, in specific cases, for the loss of control over the commercial exploitation of the claimant’s image.
The normal range of financial awards for misuse of private information varies greatly and is largely fact-specific. In early cases, such as Campbell (2002), awards were low. Model Naomi Campbell was awarded GBP2,500 in general damages and GBP1,000 in aggravated damages. However, in 2018, Sir Cliff Richard was awarded general damages totalling GBP210,000 in relation to the British Broadcasting Corporation (BBC)’s reports that he was being investigated over historical allegations of child sexual assault (Richard v BBC (2018)).
The limitation period for bringing a claim for the misuse of private information and breach of confidence is generally six years from the relevant private or confidential information being published. However, the right to the remedy of an injunction can be lost if a claimant does not pursue the claim promptly.
The right to freedom of expression (Article 10 of the European Convention on Human Rights) is most commonly relied upon by media defendants in claims brought against them for misuse of private information. The balancing exercise between Article 8 and Article 10 is conducted by the court.
In Von Hannover v Germany (No 2) and Axel Springer AG v Germany, Application No 39954/08, the European Court of Human Rights (ECtHR) set out the criteria to be applied during the balancing exercise. These include:
Defendants may also seek to rely on the defence that the “private” information was already in the public domain.
For breach of confidence claims, the iniquity defence may apply where the public interest overrides express or implied duties of confidentiality. This may include where confidentiality is used in order to hide criminal activity, mislead the public or cover up financial irregularities.
In England and Wales, the general rule is that hearings are to be held in public, in accordance with the principle of open justice. However, under Rule 39.2 of the Civil Procedure Rules 1998 (CPR), parties may apply for a private hearing – although a hearing will only be held in private if the court is satisfied that a private hearing is necessary to secure the proper administration of justice and that one of the following applies:
A party can also apply for an anonymity order. However, the court will only order for the identity of a person not to be disclosed where the court considers non-disclosure necessary to secure the proper administration of justice and in order to protect the interests of that person (CPR 39.2(4)).
In terms of bringing proceedings for misuse of private information in this jurisdiction, the courts of England and Wales generally have jurisdiction where the misuse of private information and/or the resulting damage has taken place within the jurisdiction. Other factors, including where the defendant is based, are also relevant.
The general rule is that the losing party pays the successful party’s costs. However, the court can depart from the general rule and – even when the losing party is ordered to pay the successful party’s costs ‒ recovery is usually between 60% and 70% of total costs.
The availability of the pre-publication injunction makes the UK an attractive jurisdiction for claimants. When pursuing a privacy injunction against a media publisher, there will often be dialogue concerning the possibility of undertakings and/or agreements as to what – if anything – may be publishable or restrained.
There are two types of defamation:
Libel
The law of defamation is based on common law, but is accompanied and modified by acts of Parliament – most recently and significantly, the Defamation Act 2013 (but also the Defamation Act 1996).
In order to bring a claim in defamation, the claimant needs to demonstrate the following.
The common law (see, in particular, Jameel v Dow Jones & Co Inc (2005)) requires that a defamation claim must involve the commission of a “real and substantial tort”. If the claimant does not satisfy this element, the claim will be struck out by the court on the basis that allowing it to proceed would constitute an abuse of the court’s process.
Moreover, case law has found that the publisher of a defamatory statement is not limited to the original author. By way of example, secondary publishers – who did not take an active editorial role but facilitated the availability of the defamatory content to third parties – may also potentially be responsible for publication. Secondary publishers could include social media platforms, bookshops, libraries and newsstands. Nonetheless, the Defamation Act 2013 has provided more protection to intermediaries (such as social media platforms).
Slander
In order to bring a successful claim for slander, there is an additional element. The claimant needs to establish “special damage” arising as the direct, natural and reasonable result of the publication of the words.
However, in some cases, it is not necessary to demonstrate special damage – namely, where:
These two categories are set out in Section 14 of the Defamation Act 2013.
Damages
The current ceiling for awards is approximately GBP350,000. However, awards tend to be far lower than this. To be at the top end of awards, the libel needs to be extremely serious (eg, imputations of murder or terrorism).
Compensatory damages
A successful claimant is likely to be awarded compensatory damages, which aim to remedy the claimant’s distress, vindicate their reputation, and reinstate any loss that occurred as a result of the libel. Factors that the court will take into account when determining the level of compensatory damages include the gravity of the libel, the injury caused to the claimant’s feelings, the extent and nature of the publication, and whether there are any mitigating factors.
Aggravated damages
This category of damages is intended to compensate a claimant for additional injury to their feelings resulting from poor conduct on the part of the defendant. In assessing aggravated damages, the court can consider all of the defendant’s conduct – from the time the libel is published up to and including the trial. Owing to the “injury to feelings” element of these damages, they are not available to companies.
Exemplary damages
Exemplary damages are punitive in nature and therefore are only awarded in exceptional circumstances, where appropriate to punish a defendant for deliberate conduct. The defendant’s state of mind is a key element when the court is considering exemplary damages.
Injunctions
In defamation cases, pre-publication interim injunctions restraining a defendant from publishing pending the trial of an action are generally unavailable, as the defendant need only put forward an arguable defence for publication if challenged. This rule against “prior restraint” of defamatory statements is centred on the principle that it is in the public interest to protect freedom of speech. Moreover, if a publisher does get it wrong, substantial damages are seen as an adequate remedy.
Nevertheless, final injunctions may be granted after trial to restrain further or future publication of the words complained of. The court must be satisfied that the words complained of are injurious to the claimant and there is reason to believe that the defendant may publish them further.
The limitation period for issuing a claim for defamation in the courts of England and Wales is generally one year.
Four main defences are available, as follows.
However, Section 3(5) states that the defence can be defeated by the claimant if they prove that the opinion was in fact not held by the defendant. This defence does not apply where the statement is one of fact, rather than of opinion.
See 1.4 Privacy Proceedings Forum Choice.
In terms of jurisdiction, the place of publication is an important factor. Where a defendant is domiciled elsewhere, the courts of England and Wales do not usually have jurisdiction unless the court is satisfied that – of all the jurisdictions where publication has taken place – England and Wales is the appropriate place to bring the action. Various factors are taken into account when determining this, including the extent of publication in each jurisdiction and the amount of damage in this jurisdiction.
See 1.5 Privacy Costs.
Defamation proceedings are becoming increasingly rare, given the “serious harm” test introduced by the Defamation Act 2013 and the cost and complexity of the claims. It has become usual for the determination of meaning to follow immediately after a claim is issued, which can mean that at least six months pass before a defence is filed.
Also, as mentioned in 4.5 SLAPPs, the UK has recently seen the introduction of the concept of SLAPPs (strategic litigation against public participation), which originated in the USA. Lawyers who deploy the law of defamation in order to restrain publication of information that may be in the public interest must ensure that they do so carefully and not in a way that may include illegitimate or heavy-handed threats. In addition to facing censure in the courts, UK solicitors can face regulatory investigation and sanctions arising from reports of SLAPPs.
The Protection from Harassment Act 1997 provides a legal basis in England and Wales to bring a civil claim for harassment where a person pursues a course of conduct that amounts to harassment of another and which that person knows or ought to know amounts to harassment (Section 1(1)). Section 1(2) of the Protection from Harassment Act 1997 states that the person whose course of conduct is in question ought to know that it amounts to or involves harassment of another if a reasonable person in possession of the same information would think the course of conduct amounted to or involved harassment of the other.
The Protection from Harassment Act 1997 creates both a criminal offence (Section 2) and a civil cause of action (Section 3) in respect of harassment and stalking. Although the Protection from Harassment Act 1997 does not explicitly define harassment, the common law has defined it as a persistent and deliberate course of unacceptable and oppressive conduct – targeted at another person – that is calculated to and does cause that person alarm, fear or distress.
If a claimant is successful, the court may award damages for anxiety caused by harassment and for any financial loss arising from the harassment.
Under Section 3 of the Protection from Harassment Act 1997, the court may grant an injunction for the purpose of restraining the defendant from pursuing any conduct that amounts to harassment. The order made by the court may prohibit specific conduct and/or harassment in general. An exclusion zone can also be ordered by the court, forbidding the defendant to go within a specified area around the claimant’s home or place of work where it is necessary to make the injunction effective. In particularly urgent cases, an interim injunction may be obtained.
The limitation period for bringing an action under the Protection from Harassment Act 1997 is usually six years from the harassing conduct.
There is a statutory defence (Section 1(3) of the Protection from Harassment Act 1997) if it is shown that the course of conduct complained of was:
Moreover, a course of conduct is required to bring a successful action in harassment. Thus, if there is only a single incident, the action will most likely fail.
Harassment is actionable under civil and criminal law in England and Wales, as set out in the Protection from Harassment Act 1997. Both the civil and criminal route can be pursued in parallel.
In criminal proceedings, if the defendant is found guilty, they may face criminal penalties such as up to six months’ imprisonment (for non-violent harassment) or a fine or both. The criminal route may be preferable for those who wish to focus on the punishment of the offender – rather than compensation for the victim – and on deterrence of future harassment.
As stated in 1.4 Privacy Proceedings Forum Choice, parties can apply for private hearings and anonymity orders. However, these are only ordered in exceptional circumstances.
In civil harassment proceedings, the losing party will usually be ordered by the court to pay the successful party’s costs.
In criminal harassment proceedings, the court will order that the successful party’s costs be paid from Central Funds, which is money provided by Parliament.
The Leveson Inquiry investigated the conduct of the UK press in 2011 following the phone-hacking scandal. Even though the Leveson Inquiry recommended a new form of regulation, it was not adopted by the UK press. After the Leveson Inquiry, press misconduct appears to have diminished – although news publishers remain fiercely independent and sometimes intrusive. A number of publishers have suffered financially owing to a loss of advertising revenue and sales. The growing influence of social media has affected the political influence of news publishers.
In the UK, the five most influential news providers – whether online, newspapers or broadcasters ‒ are:
There are currently two press regulators in the UK – namely, the Independent Press Standards Organisation (IPSO) and Impress. However, both are voluntary for publishers and therefore their utility is limited. Many publications have signed up to IPSO, whereas only a small number of publications have joined Impress. Impress is “Leveson-compliant” and was recognised by the Press Recognition Panel (PRP) in October 2016 as an “approved” regulator.
IPSO
The procedure for making a complaint to IPSO is as follows.
Impress
Impress provides for the following complaints process.
There are statutory defences for third-party hosts in the following legislation.
The Economic Crime and Corporate Transparency Act 2023, which came into force on 26 October 2023, contains provisions designed to combat SLAPPs brought in response to reports of economic crime.
There is currently no legislation in relation to non-economic crime SLAPPs, but the Solicitors Regulation Authority (SRA) can refer serious misconduct to the Solicitors Disciplinary Tribunal, which has unlimited fining powers. The SRA has also published a warning notice for solicitors along with its thematic review conclusions, which provide guidance for lawyers.
There is no equivalent of the USA’s Securing the Protection of our Enduring and Established Constitutional Heritage (SPEECH) Act in England and Wales.
The legal grounds for protecting data subjects’ rights are contained in the Data Protection Act 2018 and the UK GDPR, which retain the principles of the EU General Data Protection Regulation (GDPR) in domestic law post-Brexit.
Civil Claims
In terms of a civil claim, the Data Protection Act 2018 and the UK GDPR provide a statutory cause of action ‒ for example, where a data controller has processed a claimant’s personal data in a way that does not comply with the data protection principles.
Regulatory Grounds
If an individual makes a complaint directly to the organisation that has been processing their data and ‒ after one month ‒ the organisation has refused to respond, that individual can make a complaint to the Information Commissioner’s Office (ICO). Once the complaint has been processed, the ICO may:
In the UK, the following remedies are available to data subjects in the event of a breach of their data privacy rights:
Data privacy damages are normally in the low thousands. The UK Supreme Court’s decision in Lloyd v Google marked a significant setback for data privacy representative actions.
The limitation period for bringing a data protection claim for compensation is usually six years from the date of the breach. Provided there is an underlying public interest behind the publication, the Data Protection Act 2018 and the UK GDPR contain an exemption for journalists/the media.
See 1.4 Privacy Proceedings Forum Choice for information relating to private or anonymised court proceedings.
The Data Protection Act 2018 contains provisions for various criminal offences that can be prosecuted in the criminal courts, with maximum penalties of an unlimited fine.
See 1.5 Privacy Costs.
Regulatory, Legislative and Technological Factors Shaping the Fast-Changing Reputation Law Landscape
With the evolution of technology, the reputation law landscape is changing. Technology is being used to develop more sophisticated reputational weapons, which are being deployed by bad actors. The challenge for reputation lawyers, the courts, and governments in jurisdictions around the world is to keep pace by identifying the threats and adapting the existing toolset to meet those challenges, as well as introducing new legislation and regulation where that is not possible.
Disinformation
Disinformation is one limb of the unholy trinity of weapons now being deployed to attack clients’ reputations. But what does it comprise?
Disinformation has taken centre stage as one of the most concerning and challenging trends in the sphere of reputation management.
Who uses disinformation and why?
Disinformation is used by bad actors ‒ from disgruntled former business partners seeking revenge to political adversaries trying to influence governments against their targets ‒ seeking retribution or political/commercial competitive advantage by (for example):
What does a disinformation campaign look like?
Disinformation has been deployed in numerous contexts with devastating results. It can take the form of paid content placed on an ostensibly credible platform by a third party, which is then relied upon as a sound primary source to secure further negative coverage in more “established” national or international media. Suddenly, what started as a small, contained media issue is now everywhere, setting off alarm bells for clients and their advisers.
The existence of these false allegations in the media can then drive third-party action: a client may suddenly find themselves without access to banking facilities, generating difficult questions from commercial partners that leave transactions or assets at risk. In some cases, clients could be investigated by concerned regulators (at home or abroad) or named in Parliament by unwitting members who have effective immunity for their statements. Claims made online can even impact clients’ freedom and wealth. Inaccurate media articles have been used by government authorities as a basis to request the execution of Interpol Red Notices seeking a person’s location and arrest or as a basis to impose asset-freezing sanctions.
Bad actors’ current technological capabilities are evolving at a rapid pace and the scope for damage is huge. Recently, an elaborate scam saw a finance worker at a multinational firm pay USD25 million to fraudsters after attending a video call with deepfake recreations of his colleagues, including the organisation’s chief financial officer.
Critically, disinformation can be insidious or proliferate at lightning pace. It can build a body of inaccurate, damaging information over months (lending it false credibility) or it can have a global impact within hours. The existence of online translation tools means foreign languages no longer act as a barrier to dissemination. Ostensibly credible websites can in fact be clones of the originals, used to divert traffic from legitimate sources and attract a high volume of “hits” from unsuspecting readers.
The internet has no borders; hence, clients who have international portfolios and footprints are vulnerable to a multi-jurisdictional game of “whack-a-mole” with publications around the world. As fraudulent activities become more sophisticated and accessible, the risks continue to grow.
Managing the risk
Tackling disinformation can be highly complex, but it can be effectively managed with the right tools. There are usually multiple ways to stem the flow of disinformation, including targeting the originating publication, social media platforms and search engines using a variety of legal claims, often in conjunction with non-legal strategies ‒ for example, targeted strategic communications campaigns using forensic intelligence to demonstrate the falsity of the allegations.
AI and reputational concerns
The rapid rise of AI generative communications, exemplified by tools such as ChatGPT and increasingly realistic deepfakes, holds the potential to inflict unprecedented damage on the reputations of individuals, companies and governments.
How do generative AI systems work?
Generative AI systems learn patterns and relationships in datasets (often collected from websites and other online sources) to create new text, images, video and audio. These computer systems then adapt without following explicit instructions, using algorithms and statistical models to analyse and draw inferences from new inputs and improve the quality of their outputs over time.
Each of the newly available generative AI tools relies on huge amounts of human-generated personal data. AI models’ sources range from news websites and academic journals to social media platforms and online forums. Generative AI can, however, also source data from online content that is far from reliable: a recent investigation by the Washington Post revealed that the data sources for Google’s AI models have included the white nationalist website VDARE, the Russian state-sponsored propaganda site RT, and the far-right American news site Breitbart. Wikipedia has also been shown to be a hugely important data source for AI systems.
AI and data concerns
As a result of the way AI systems function and “learn”, some have raised questions about how these datasets are collected, processed and stored, and whether the data collected infringes IP rights. Without proper controls, AI systems have the capacity to scrape and republish personal data from online sources without consent, giving rise to concerns about the potential for data breaches or the malicious use of personal and sensitive information.
Furthermore, AI cannot always distinguish fact from fiction, and it is only as accurate as the data it learns from. Different AI tools sometimes provide contrasting and conflicting answers to the same question because their answers are generated from multiple sources blended together in different ways. Some content generated by AI tools has also been found to be inaccurate or misleading (and even a “hallucination” of information that does not exist). The tendency of AI systems to use Wikipedia as a primary data source renders the output even more vulnerable if the information on the Wikipedia page in question is itself inaccurate or outdated.
What are the reputational risks?
As Google integrates its generative AI system into its search engine (which accounts for the vast majority of all online searches), those seeking information about individuals or businesses on Google are potentially presented with inaccurate and/or defamatory information that has been generated from inaccurate data sources. This type of inadvertent defamation can be enormously damaging for clients.
AI can also be used in a far more sinister way ‒ it is now incredibly easy to create and proliferate false information using AI with minimal budget. Deepfakes are a prime example. Deepfakes use AI to make audio or video content of someone by manipulating their face or body, with increasingly sophisticated and convincing results. They can place people in locations they were never at, with people they have never met, and saying or doing things they would never have done.
Bad actors can also use AI to influence SEO so that negative content features more highly on search engines. Take, for example, the case of Taylor Swift ‒ who in early January 2024 was subject to an enormous volume of explicit AI-generated images circulating on X (formerly Twitter), which led to the social media platform’s suspension of all searches of the singer over the course of that weekend in an effort to limit circulation.
AI and disinformation
AI can also be used as a tool to create the above-mentioned type of disinformation campaign, enabling users to produce a high volume of false and defamatory information at little to no cost and within a matter of minutes. AI-powered bots can then disseminate this disinformation across multiple fora (including social media), creating the “whack-a-mole” problem highlighted earlier in this article.
The ability of generative AI to scrape personal data from other online sources can also be used to manufacture convincing fake profiles and narratives, such as the previously mentioned elaborate scam that caused the finance worker to pay USD25 million to fraudsters.
Legal tools to combat the risks posed by generative AI
2025 is likely to see an increase in the assessment of potential claims arising out of generative AI systems, together with clarification of the regulatory landscape in respect of both legal liability and legal protections for AI users. Many existing legal tools may provide recourse to those who find their reputations damaged by output generated by generative AI systems. As with all legal claims, the first step will be to establish the correct defendant and the relevant cause(s) of action.
The correct defendant could be the person responsible for creating the software, the person(s) responsible for creating the defamatory output or the person(s) responsible for distributing the defamatory content. Where those responsible have attempted to conceal their identities, under English law, a claimant could seek a Norwich Pharmacal order against a third party they believe holds information allowing them to identify a wrongdoer (such as the creator of the generative AI system). Choosing which defendant to pursue will depend on various factors, including the applicability of the law in the relevant jurisdiction, which remedies might be available (including injunctions, damages, and accounts of profits), and the scope for enforcement.
Potential causes of action can include the traditional claims of defamation, misuse of private information, harassment under the Protection from Harassment Act 1997, and tortious interference. Data protection rights (including the “right to erasure” enshrined in the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018) and the data protection obligations on the companies responsible for creating and maintaining generative AI systems are of enormous significance in this context.
From an IP perspective, copyright infringement is likely to be an effective tool. Several copyright cases involving AI are due to be heard in the USA in 2025 (including Thomson Reuters et al v ROSS Intelligence; Concord Music et al v Anthropic; and The New York Times Co v Microsoft et al), which will undoubtedly have wider ramifications.
Law enforcement intervention might be possible in the UK under the Computer Misuse Act 1990, the Malicious Communications Act 1988, and even the Fraud Act 2006. These all provide the police with powers to arrest individuals for illegal online communications.
Finally, the implementation of the Online Safety Act 2023 in 2025 (the manner of which is still being formulated by Ofcom) will aim to provide greater protection for children and adults from the circulation of illegal and harmful content, including content generated by AI. At a more global level, with a view to spearheading international co-operation on the issue of AI safety, at the end of 2024 the UK government hosted a conference in Silicon Valley to engage with AI developers on implementing commitments made at earlier AI summits to tackle the risks created by AI. The UK government looks set to build on this in 2025, while engaging with AI regulation being considered and implemented by the EU, the USA, and others.
Data protection as a reputational tool
In the reputational sphere, data protection ‒ as a relatively new legislative regime ‒ has been somewhat underexplored compared to the regimes governing defamation and/or privacy claims. In recent months, however, data protection has surfaced as a contender to be one of the more powerful tools in the claimant reputation toolkit. Some of the reasons for this are explored here.
What does it cover?
A common complaint by clients is the limited redress available in circumstances where publishers have published inaccurate information about them that ‒ although not strictly defamatory in accordance with the statutory and common-law definitions ‒ is nevertheless causing them harm. Another common complaint is that a client’s historic personal information has been published online at some stage and now features prominently in search results for that client’s name.
Under the UK GDPR and the Data Protection Act 2018, companies that hold (or process) personal data belonging to individuals are required to comply with a number of high-level “principles”, including a requirement to ensure that the data they are processing is:
Claims in data protection therefore provide a legal mechanism through which a claimant can potentially require the publisher to rectify and/or erase that data. In addition, the limitation period for bringing a claim in data protection is six years, as opposed to the one-year limitation period for defamation claims.
Jurisdiction
In the past decade, it has become increasingly difficult for claimants ‒ especially international claimants ‒ to sue for international libel in England and Wales. The Defamation Act 2013, which came into force in January 2014, provides that (emphasis added) “[a] court does not have jurisdiction to hear and determine an action to which this section applies unless the court is satisfied that, of all the places in which the statement complained of has been published, England and Wales is clearly the most appropriate place in which to bring an action in respect of the statement”.
Although this provision was originally intended to preclude claimants with a tenuous connection to the jurisdiction from seeking to use the English courts to vindicate their reputations (commonly termed “libel tourism”), it also ‒ perhaps unfairly ‒ imposes a higher hurdle for those claimants who have ample connection with the jurisdiction and are suffering harm here, but who also have connections (personal or professional) elsewhere. Furthermore, in circumstances where defamatory and inaccurate information is largely published online and read in multiple jurisdictions, establishing jurisdiction in the English courts can seem an insurmountable task for an international claimant. These challenges were considered in the recent judgment of Steyn J in Parish v Wikimedia Foundation [2024] EWHC 2301 (KB), where an order granting the claimant permission to serve the defendant outside the jurisdiction was set aside in part because he had failed to satisfy the test under Section 9 of the Defamation Act 2013.
By contrast, Section 9 of the Defamation Act 2013 does not apply to data protection claims. The reversion to the common-law rules governing service and jurisdiction post-Brexit means that, in order to establish jurisdiction in the English courts, an international claimant must still (quite properly) satisfy the forum conveniens test ‒ ie, it must satisfy the court that England and Wales is the proper place in which to bring the claim (the appropriate forum being the one in which the case may most suitably be tried “for the interests of all the parties and the ends of justice” (see Soriano v Forensic News LLC [2021] EWCA Civ 1952)). However, even though the language of the forum conveniens test resembles Section 9 of the Defamation Act, the latter modifies and amplifies the test by:
The impact of the distinctions between the two tests is that claimants with global reputations are likely to find it more difficult to ground jurisdiction in a libel claim than in a data protection claim. Evidence of this is perhaps found in the judgment of HHJ Parkes KC in Pacini and Another v Dow Jones & Co Inc [2024] EWHC 1709 (KB), a strike out application brought by the owners of The Wall Street Journal against a data protection claim brought by Joseph Pacini and Carsten Geyer, which notably did not include a challenge by the defendant on the ground of forum conveniens. (The judgment also confirmed that there was no issue in principle with a claimant bringing a legitimate data protection claim even where it was motivated in large part by reputational concerns.)
Potential defences
Unlike in defamation claims, the burden of proof in data protection claims is on the claimant. As such, if the claimant is unable to prove that the allegations are inaccurate, the claim will fail.
Assuming the claimant can meet that challenge, there are several “exemptions” or defences set out in Schedules 1 and 2 of the Data Protection Act 2018 that a defendant might seek to rely on, allowing them to disregard their obligations to process the claimant’s data accurately under the data protection legislation. These include processing data for:
The jurisprudence on the application of these exemptions is sparse, and there may be good reason for that. In order for a defendant to rely on any of them, it would need to prove that complying with the relevant data protection principle (eg, that the data it is processing is processed adequately and accurately) is incompatible with, or would prevent or seriously impair the achievement of, the purposes in question. On its face, that is a very high test ‒ arguably higher than the defences of truth, honest opinion, public interest or common-law privileges that are available in defamation proceedings.
Potential remedies
The remedies potentially available under a successful data protection claim are in many ways equal to the remedies available under a successful defamation claim. If bringing a claim on the basis of non-compliance with the accuracy principle, a successful claimant would likely obtain a judgment declaring the allegations were inaccurate. A successful data protection claimant could also obtain an order for rectification or erasure, akin to a final injunction under a defamation claim.
Data protection claimants are also entitled to seek orders for disclosure concerning the recipients or categories of recipients to whom their personal data has been disclosed, including recipients or categories of recipients in third countries or international organisations. (So, if there is reason to believe an organisation is responsible for disseminating misinformation or disinformation, the organisation may be required to identify the person(s) with whom they have shared that information so that this can be challenged.)
The courts are also proving to be inclined to adopt a versatile and flexible approach to data protection remedies. In Hurbain v Belgium (App No 57292), the Grand Chamber of the European Court of Human Rights upheld the decision of the lower courts that an order to anonymise an article (which referred to a person’s involvement in a fatal road traffic accident for which they were subsequently convicted) in a newspaper’s electronic archive did not breach the applicant publisher’s right to freedom of expression under Article 10 of the European Convention on Human Rights.
Compensation is also available under the legislation for material or non-material damage, including distress. The question of whether reputational damages are available under data protection claims (and non-defamation claims more generally) is, however, still in flux. The High Court has taken different approaches to the issue of reputational harm damages in non-defamation claims, particularly in recent misuse of private information cases (Nicklin J at first instance in ZXC v Bloomberg LP [2019] EWHC 970 (QB); cf Richard v BBC [2018] EWHC 1837 (Ch); cf Warby J (as he then was) in Sicri v Associated Newspapers Ltd [2020] EWHC 3541 (QB)). HHJ Parkes KC considered this issue in the context of data protection in Pacini and Another v Dow Jones & Co Inc [2024] EWHC 1709 (KB) and concluded that the issue required appellate court intervention. In short – watch this space!
Outlook
The rise of disruptive new technology often forces regulators to play catch-up, leaving parties exposed before the establishment of new legislation and regulation. While governments and courts attempt to keep up, there remain several tools available – both inside and outside the courtroom – that can be leveraged by reputation lawyers on behalf of their clients to minimise these new risks and ensure that any targeted individuals or businesses can protect their reputations. One thing is for sure: in the face of new technology and increasingly sophisticated bad actors, the legal landscape will have to continue to evolve in order to remain effective, and reputation lawyers will need to keep a close watch on these developments.
Tower 42
25 Old Broad Street
London
EC2N 1HQ
UK
+44 (0) 20 3301 5700
communications@kobrekim.com
www.kobrekim.com