Artificial Intelligence 2024

The Artificial Intelligence 2024 guide provides the latest legal information on industry use of AI, machine learning, AI regulatory regimes and legislative developments.

Last Updated: May 28, 2024


Author



Bird & Bird delivers expertise covering a full range of legal services through more than 1,400 lawyers and legal practitioners across a worldwide network of 32 offices. The firm has built a stellar, global reputation from a deep industry understanding of key sectors and through its sophisticated, pragmatic advice. Bird & Bird is a global leader in advising organisations being changed by digital technology, as well as advising companies who are carving the world’s digital future. AI and generative AI are a cornerstone of the firm’s legal practice, which helps clients leverage AI and digital technology against a backdrop of increasing regulation. Bird & Bird’s longstanding strengths in data protection, commercial and IP law – allied with the team’s technology and communications expertise – mean the firm is ideally placed to work with clients in order to help them reach their full potential for growth.


Widespread AI Adoption

Widespread adoption of AI is on the horizon, following increased familiarity with generative AI and the potential of large language models (LLMs) in 2023. In 2024, there will be a significant market shift from organisations becoming familiar with AI to actively implementing it. AI is expected to remain a key driver of global economic performance throughout the decade.

Predictive and Generative AI

Both predictive and generative AI play an important role in this transformation. Generative AI has gained popularity owing to its ability to deliver impressive results in various consumer domains. Businesses are using AI to gain competitive advantage and drive innovation by offering unique products, services and experiences.

However, the distinction between predictive and generative AI can be blurred in practice. Generative AI can use insights from predictive AI-based data analysis to generate content. Predictive AI analyses data to make predictions and recommendations based on patterns and trends, whereas generative AI creates new content that resembles human-generated output. The combination of these approaches enables tailored content creation based on analysed insights.

Importance of AI

AI is widely recognised as a key driver of future growth, competitiveness and job creation. The market potential for AI is significant, with CEOs recognising its transformative impact on business as surpassing even the internet revolution. AI is seen as having the potential to address major societal challenges, from clean energy to medical advances (eg, finding cures for diseases such as cancer).

For countries, embracing AI is about securing a leading position in the global AI innovation race. By being early adopters in their respective sectors, countries can gain a competitive advantage and strengthen their economies. However, barriers to innovation pose risks, as lack of support or regulatory barriers (eg, restrictive or unclear laws) can lead to significant market share losses and prevent certain business models and local economies from remaining competitive.

Reasons for Regulating AI

The response to these risks is often some form of regulation that provides guardrails to mitigate AI-related risks, but also enables businesses to thrive. However, there are significant challenges to regulating AI, including the need to identify the specific risks that regulation needs to address without stifling innovation. The contrasting policy approaches of the EU and the USA highlight the complexity of the debate. In addition, issues related to the opacity of AI and its rapid advancement exacerbate the risks associated with AI, to which forthcoming regulations should provide solutions.

The opacity of AI models, especially LLMs, poses challenges in understanding and explaining their behaviour, with which researchers are still grappling. AI is often compared to physics in the early 20th century, when many experimental results remained unexplained and continued to surprise researchers. Similarly, many experimental results in AI are not fully understood, and conducting experiments often leads to unexpected outcomes. This lack of transparency increases risks related to decision accuracy, bias, and the difficulty of explaining AI-based decisions.

In addition, the rapid pace of AI developments – driven by factors such as the growth in computing power (based on Moore’s Law) and algorithmic innovations – makes it difficult to understand the technology, identify risks, and establish effective risk mitigation measures through regulation.

This is compounded by the fact that real-world risks arise from faulty data, biases in coding, and challenges related to human perception when AI is used in business scenarios such as setting insurance premiums or automating decisions. As a result, there is growing recognition that the immense power of AI requires restrictions and regulations to address its unregulated use and mitigate such risks.

Discussions Around AI Regulation

Since it has become clear that AI requires restrictions and regulations for the reasons mentioned, a global political debate about the control and governance of AI has been sparked. The outcome of this debate will determine who becomes the dominant player in shaping the regulation of this transformative technology, with long-lasting implications that will be difficult to change.

Key AI lobbying issues have remained consistent, with some alignment between Western governments. However, conflicts remain, such as the disagreement between the USA and the EU over the extent of binding legislation. The EU sees the US executive order on AI as laudable but unenforceable, while US policymakers argue that the EU AI Act is overly burdensome and stifles innovation. The ongoing debate between addressing long-term versus short-term risks remains unresolved. While concerns are being raised about the potential negative consequences of AI, there is growing support for immediate oversight of existing AI systems, particularly in the USA.

Voluntary Commitments Rather Than Regulation

In response to the challenges of AI regulation, numerous voluntary AI governance initiatives have been established. By way of example, prominent US AI companies have agreed to comply with voluntary AI safeguards facilitated by President Joe Biden’s administration in July 2023 in order to ensure the safety of their AI products prior to release. These companies have extensive knowledge of the technology and are motivated to prevent societal risks, such as the spread of misinformation and propaganda.

However, the history of technology regulation has shown that voluntary commitments often have limited effectiveness. As a result, there is a growing need for binding legislation in the form of regulations to address AI-related concerns.

Regulatory Approaches to AI in General

The need for AI regulation is unlikely to diminish in 2024, as the impact of AI remains uncertain in the context of important global elections, from the USA and EU to the Philippines and India. Concerns about losing control of AI are growing as it is recognised that AI is not just a distant future prospect but, rather, a present reality.

However, developing a comprehensive legislative and policy agenda for AI takes time and careful consideration. Despite numerous voluntary AI governance initiatives, it is challenging for policymakers to keep pace with technological advances and evolving AI regulations. Jurisdictions must strike a balance between fostering innovation and effectively regulating the associated risks in various AI applications, such as healthcare and finance. Regulating this rapidly evolving and not fully understood technology runs the risk of regulations becoming outdated before implementation. Legislators continue to face a learning curve in understanding AI and its implications.

AI Regulatory Efforts Around the World

AI regulatory efforts are gaining momentum around the world, with the EU AI Act expected to come into force soon and the White House issuing an executive order that is already prompting changes in US federal agency practices. Japan and Canada are also developing their own plans for enhanced oversight and regulatory frameworks.

In contrast, the UK government has acknowledged the need for future legislation on AI, emphasising the importance of understanding the risks involved before enacting comprehensive regulations. The UK has shown a willingness to consider industry-led recommendations and was praised for hosting the AI Safety Summit in 2023.

China has taken an important step by introducing enforceable regulations for various AI applications, with a focus on regulating specific applications individually. In contrast, the EU AI Act takes a comprehensive, cross-sectoral approach to AI regulation.

Applying Existing Regulation to AI

In the area of AI regulation, companies around the world face the challenge of applying existing regulations to AI technologies. Even in the absence of AI-specific legislative initiatives, AI is subject to existing regulations, particularly cross-sectoral and technology-neutral ones. However, applying these regulations to AI is complex.

Cross-sector and technology-neutral regulations, such as those on data protection and IP, offer advantages in terms of universal applicability and avoiding protection gaps and inconsistencies. However, they tend to be more abstract than sector-specific requirements, leading to greater legal uncertainty. The interpretation of these rules in the context of new technologies remains unclear, owing to limited practical testing.

Efforts are underway to address these issues. By way of example, the OECD has established an expert group to improve the effectiveness and coherence of AI and privacy regulations. Objectives include clarifying key terms, identifying overlaps in current regulations, defining separate boundaries for AI and privacy, and providing guidance to stakeholders on how to navigate existing laws and foster innovation within defined boundaries.

Early regulatory support, in the form of guidelines that contextualise abstract rules for specific sectors, is critical to providing legal certainty while fostering innovation. Regulators around the world have begun to address AI and issue guidelines to make existing regulations more applicable to AI technologies.

Regulatory Enforcement and Litigation Around AI

The legal focus on AI is evident as regulators worldwide seek to understand and regulate this technology, with data protection regulators taking a leading role. Data protection is particularly important in the context of generative AI, as evidenced by the ongoing investigations of LLM providers by data protection regulators in countries such as Italy, Canada, Brazil and South Korea.

Data protection regulators have become key players in AI regulation, owing to the broad scope of data protection triggered by the fact that personal data is often involved in AI systems. There is significant overlap between AI governance and data protection governance, which further strengthens the role of data protection regulators in AI regulation. They will continue to play an important role in overseeing the governance of AI systems, particularly in relation to the processing of personal data.

Litigation also arises in the context of AI, particularly in relation to the application of IP laws. By way of example, the New York Times has filed a lawsuit against generative AI providers, alleging copyright infringement. This lawsuit is significant in the ongoing litigation over the unauthorised use of published content to train AI technologies. The New York Times’ lawsuit is the first of its kind by a major American media organisation against these companies, which are responsible for popular AI platforms such as ChatGPT. Copyright-related class actions have also been filed by creators in the creative industries against generative AI providers, reflecting concerns about the potential impact of AI on their work.

The Outlook

The intersection of law and AI technology encompasses a range of topics, and it is fascinating to witness the ongoing evolution both of AI as a technology and of the legal landscape. This includes the creation of new AI-specific laws and standards, as well as the application of existing laws by regulators and courts charged with enforcement.

Organisations must closely monitor these developments in order to navigate the ever-changing regulatory environment, mitigate legal risks and gain a competitive advantage. Implementing AI best practice compliance strategies that address key issues is critical for effectiveness, manageability and proportionality.

Furthermore, it is interesting to observe which countries will emerge as dominant players in shaping AI regulation. Different countries have different approaches to AI regulation, leading to international partnerships and standard-setting competition. The G7 countries are developing their own AI governance guidelines, in line with the prevailing perspectives in the EU. Meanwhile, China has launched the Global AI Governance Initiative (GAIGI) through the Belt and Road Initiative, which emphasises targeted and iterative regulation. The outcome of this competition between Western-style and Chinese-style regulations is likely to affect business practices and introduce a new dimension of competition for data, computing power and AI talent between G7 and GAIGI nations, involving more than 150 countries.
