Data Protection & Cybersecurity 2019

The Data Protection & Cybersecurity guide provides expert legal commentary on the key issues for businesses involved in the data protection and cybersecurity sector. The guide covers the important developments in the most significant jurisdictions.

Last Updated: March 28, 2019


Author



Sidley Austin LLP is a global law firm committed to providing excellent client service, fostering a culture of co-operation and mutual respect, and creating opportunities for lawyers of all backgrounds. With over 1,900 lawyers in 20 offices around the world, talent and teamwork are central to Sidley’s successful results for clients in all types of legal matters, from complex transactions to 'bet-the-company' litigation to cutting-edge regulatory issues. Sidley values the recognition it has received for its leadership across a wide cross-section of legal services around the globe.


Where Does Privacy Go from Here – Digital Governance?

Government regulation has traditionally been a lagging indicator of problems that have already materialised in a country’s economy. Data privacy regulation for the private sector has developed differently. Legislative hearings, regulatory guidance, and international conferences do not dwell on the injuries that people actually experience. Policymakers focus instead on preventing the possibility that companies will misuse their customers’ (or employees’) information in a way that could cause harm. These risks are theoretically plausible, but how often do they really happen? 

Given the comprehensive data protection regimes already enacted in the EU, California, and Japan among other jurisdictions – and now being actively considered in other parts of the United States, India and elsewhere – the question of what types of data-related injuries people need to be protected against must be studied more carefully, and some real-world answers must be provided. Today's policymakers should not get away with simply assuming that everything to do with a private entity’s collection or use of personal data is a potential problem, and that a sledgehammer approach is the only proper response to preserve informational autonomy and dignity.

Of course, real data privacy dangers do exist. The manipulation of citizens’ political views through micro-targeting for the dissemination of false, distorted or inflammatory information through social media has demonstrated that certain data practices can be highly abusive and damaging to society. Identifying and preventing harmful data practices should be the priority for privacy regulation.

Kudos to the European Data Protection Board for issuing a statement on 13 March 2019, on a serious privacy risk regarding “the use of personal data in the course of political campaigns”. The statement reads: "Predictive tools are used to classify or profile people’s personality traits, characteristics, mood and other points of leverage to a large extent, allowing assumptions to be made about deep personality traits, including political views and other special categories of data. The extension of such data processing techniques to political purposes poses serious risks, not only to the rights to privacy and to data protection, but also to trust in the integrity of the democratic process. The Cambridge Analytica revelations illustrated how a potential infringement of the right to protection of personal data could affect other fundamental rights, such as freedom of expression and freedom to hold opinions and the possibility to think freely without manipulation."

Too often, however, privacy and data protection regulation and enforcement proceed without clear specification of what real harm – tangible or intangible – the government is protecting citizens from.

Rigorous assessment of potential problems is especially necessary, now and going forward, because digital practices based on artificial intelligence and machine learning are increasingly opaque. In some cases, the government may not be in the best position to first identify the potential harm. As these problems may be less and less likely to come to the attention of regulators and citizens, corporations themselves will have to protect against unfair or destructive digital practices through their own internal 'digital governance.' Moreover, the importance of identifying 'what is the harm?' becomes ever-more acute as artificial intelligence and machine learning take on increasing responsibility for critical decisions that affect all of us. 

To foster digital governance, the regulators, the regulated and civil society must develop a reasoned approach to identifying the harm and recognising proportional distinctions between varying degrees of harms. Otherwise, it will not be realistically possible for companies to engage in meaningful internal digital governance. In other words, individuals cannot really be protected if governments mandate that private entities treat all collections and uses of personal data as sensitive and meriting a high level of restriction and regulation. It is thus increasingly imperative that governments provide guidelines as to what digital practices are truly abusive, dangerous, offensive, discriminatory, exclusionary or anti-democratic. Remarkably, to this point, most governments have made little to no effort to identify practices that are actually harmful or reflect actual citizen concerns or consumer experiences. 

The leading comprehensive privacy regimes in effect around the world today – for example, the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) – do not do a good job of targeting actual privacy abuses or dangerous data practices, and may perversely limit data processing that is beneficial to the economy and society at large. The best hope for guidance in this regard may emerge from the US Federal Trade Commission’s (FTC) ongoing hearings on Competition and Consumer Protection in the 21st Century and the pending Request for Comments on Developing the Administration’s Approach to Consumer Privacy promulgated by the US Department of Commerce’s National Telecommunications and Information Administration (NTIA).

Specifically, the NTIA proposed the following framework for public consideration: "Instead of creating a compliance model that creates cumbersome red tape – without necessarily achieving measurable privacy protections – the approach to privacy regulations should be based on risk modeling and focused on creating user-centric outcomes. Risk-based approaches allow organisations the flexibility to balance business needs, consumer expectations, legal obligations, and potential privacy harms, among other inputs, when making decisions about how to adopt various privacy practices. Outcome-based approaches also enable innovation in the methods used to achieve privacy goals. Risk and outcome-based approaches have been successfully used in cyber-security, and can be enforced in a way that balances the needs of organisations to be agile in developing new products, services, and business models with the need to provide privacy protections to their customers, while also ensuring clarity in legal compliance."

The easy cases for identifying privacy harm involve, naturally enough, data security and data quality. People’s financial data and account information clearly need to be rigorously protected to prevent theft and fraudulent accounts. We know these risks are real because funds actually do get stolen, payment cards and lines of credit are wrongfully opened, and time and expense are incurred in rectifying the practical consequences of identity theft. Likewise, cavalier, flawed or biased data practices that affect a person’s reputation, credit score, employment record, etc, hurt people by curtailing their job prospects, access to loans, insurability, housing opportunities, and so on.

The GDPR, for instance, does not identify or address actual risks or harms of data privacy transgressions to any considerable extent. Rather, the GDPR speaks generally of the need to protect the “fundamental rights and freedoms” of data subjects. The only significant discussion of specific privacy risks or harms arises in the context of data security and automated profiling. The GDPR endows data subjects with the right to be free from “automated processing [that] produces legal effects concerning him or her,” or similarly significant effects. The text explains that such automated processing is risky to the extent it could result in denied credit applications or restricted job recruiting. Indeed, in this regard, GDPR Recitals 71 and 75 identify examples where privacy violations can cause actual, real-world damage to people.

The GDPR provides a small number of specific examples of harm: "[…] automatic refusal of an online credit application or e-recruiting practices without any human intervention […] discriminatory effects on natural persons on the basis of racial or ethnic origin, political opinion, religion or beliefs, trade union membership, genetic or health status or sexual orientation […] identity theft or fraud, financial loss, damage to reputation, loss of confidentiality of personal data protected by professional secrecy, unauthorized reversal of pseudonymisation, or any other significant economic or social disadvantage."

As companies embark on establishing programmes for 'digital governance,' they should focus on preventing these serious harms that are clearly identified in the GDPR, along with those discussed in the FTC’s 2016 report – Big Data: A Tool for Inclusion or Exclusion? – and give due consideration to those harms that we may not yet even fully understand.

What is Digital Governance?

Simply stated, digital governance is corporate oversight of technologies that use personal or sensitive information, make autonomous decisions or exercise human-like responsibilities. The concept addresses disruptive technologies including artificial intelligence (AI), connected devices (IoT, cars, ubiquitous sensors, etc), and machine learning. The process should entail setting up organisational structures, internal processes and policies, and ethical and moral standards that advance:

  • legal compliance for privacy, data protection, cyber-security and fair digital practices;
  • fiduciary standards and preservation of assets;
  • shareholder value;
  • reputation, company values, stakeholder expectations;
  • innovation, economic productivity, competitiveness;
  • transparency, intelligibility, explainability;
  • fairness, inclusion, personal autonomy and human dignity and agency.

This is now critically important because disruptive technologies like AI, pervasive connected devices (IoT), autonomous vehicles and autonomous decision-making capabilities will challenge regulatory, consumer, marketplace and political expectations as no prior data technology has before.

For example, one leading tech company (Microsoft) has publicly identified these potential risk factors for its investors to consider: "Issues in the use of artificial intelligence in our offerings may result in reputational harm or liability […] As with many disruptive innovations, AI presents risks and challenges that could affect its adoption, and therefore our business. AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by [the company] or others could impair the acceptance of AI solutions. These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm.

"Some AI scenarios present ethical issues. If we enable or offer AI solutions that are controversial because of their impact on human rights, privacy, employment, or other social issues, we may experience brand or reputational harm […] We may experience backlash from customers, government entities, advocacy groups, employees, and other stakeholders that disagree with our product offering decisions or public policy positions. Damage to our reputation or our brands may occur from […] [p]ublic scrutiny of our decisions regarding user privacy, data practices, or content [and] [d]ata security breaches." Accordingly, the need for digital governance is manifest – at the highest levels of the corporate hierarchy.

To establish digital governance programmes, companies must first structure themselves accordingly. This means putting someone, perhaps a chief digital officer, in charge of the issues raised by new technologies that use personal information or do jobs once exclusively the preserve of human beings. Companies should also consider forming a digital governance, or perhaps technology, committee of the Board of Directors to address these issues. Establishing a compliance and ethics programme for fair digital practices should also be undertaken, along with assigning and empowering qualified personnel to staff it, granting them appropriate resources, and putting in place appropriate training on key issues of digital fairness. Naturally, commensurate reporting, escalation and accountability are key to success.

Secondly, companies need to have a full picture of what they are doing. This means 'mapping' current data and digital technology uses and understanding the company’s digital practices and exposure. What sorts of data and digital technologies are significant to the company’s productivity, profitability and competitiveness? Where may it be possible to monetise data and digital technologies like AI? What are the relevant risks – including privacy and security risks – of doing so? Only a full understanding of what a company is doing will allow intelligent decision-making about what it should be doing.
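By way of illustration only, a 'mapping' exercise of this kind is often recorded as a structured inventory of processing activities. The sketch below is a hypothetical example, not a prescribed format or any regulator's template; the class and field names (ProcessingActivity, data_categories, privacy_risks and so on) are invented for the illustration.

    # Hypothetical sketch of one record in a data/technology inventory.
    # Field names are illustrative assumptions, not regulatory terms.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ProcessingActivity:
        name: str                    # what the activity is
        data_categories: List[str]   # types of personal data involved
        uses_ai_or_ml: bool          # whether AI/machine learning is relied on
        monetisation_potential: str  # note on commercial value, if any
        privacy_risks: List[str]     # privacy risks identified for this activity
        security_risks: List[str]    # security risks identified for this activity
        legal_bases: List[str] = field(default_factory=list)  # e.g. consent, contract

    # Example entry that a digital governance function might review and escalate
    activity = ProcessingActivity(
        name="Automated credit pre-screening",
        data_categories=["financial data", "employment history"],
        uses_ai_or_ml=True,
        monetisation_potential="Faster, cheaper lending decisions",
        privacy_risks=["automated decision with legal effects", "potential bias"],
        security_risks=["exposure of financial account data"],
        legal_bases=["contract"],
    )
    print(activity)

The format of the inventory matters far less than the discipline it imposes: for each use of data or digital technology, the company records what it does, why it does it, and what could go wrong.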

Thirdly, companies must create an organisational culture that values fair digital practices. As a baseline, this requires identifying and ensuring compliance with relevant legal obligations – but it also requires much more. Companies should take steps so that the development and deployment of digital technologies account for ethical and moral considerations by, for example, assessing impacts and risks prospectively with fair digital practice assessments and establishing digital practices review boards. Companies should also evaluate and audit outcomes, including by establishing expectations for reporting on these issues to senior leadership and, where appropriate, the Board.   

Digital governance will not be easy or obvious to implement, but it will be necessary to protect companies and their stakeholders from ever-more complex – and even mysterious – risks of disruptive technology. It will also, of course, help comply with today’s duties to safeguard privacy and cyber-security.
