The Artificial Intelligence 2025 guide features close to 30 jurisdictions and provides the latest legal information on industry use of AI (including in employment, healthcare, finance and manufacturing), AI-specific legislation and regulation, machine learning and generative AI, liability for AI, IP issues in the space, and cybersecurity and ESG concerns.
Last Updated: May 22, 2025
Widespread AI Adoption
Widespread adoption of AI is no longer just on the horizon but is actively underway, driven by the increasing maturity and integration of generative AI, especially large language models (LLMs). Throughout 2024 and early 2025, organisations moved beyond exploration and experimentation, embedding AI directly into their core processes and services across diverse industries. Advances in multimodal capabilities, improvements in model accuracy, early agentic behaviour, increased availability of tailored, domain-specific AI solutions, and greater regulatory clarity – especially following the entry into force of the EU AI Act – have significantly accelerated adoption. AI has solidified its role as a pivotal driver of economic performance and innovation, with its impact expected to deepen throughout the remainder of the decade.
Predictive and Generative AI
Both predictive and generative AI continue to drive transformation, with generative AI becoming increasingly sophisticated and embedded in business processes and consumer applications. In 2024, generative AI expanded significantly, propelled by multimodal capabilities and enhanced model reliability, allowing businesses to deliver innovative and highly personalised products, services and experiences.
The distinction between predictive and generative AI remains nuanced. Predictive AI analyses data to forecast outcomes based on identified patterns, while generative AI produces original, human-like content. However, integration has intensified, as generative models now regularly incorporate predictive insights, resulting in more context-aware, tailored outputs that meet specific business needs.
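To make the distinction concrete, the following minimal Python sketch (illustrative only; the data, figures and toy models are hypothetical and are not drawn from this guide) contrasts a predictive model, which forecasts a value from patterns in historical data, with a generative model, which produces new content resembling its training data.

```python
# Minimal, self-contained sketch contrasting the two paradigms.
# The sales figures and tiny text corpus are hypothetical toys.
import random

# --- Predictive AI: forecast an outcome from patterns in past data ---
# Fit an ordinary least-squares line to monthly sales, predict next month.
sales = [100.0, 104.0, 110.0, 115.0, 119.0]          # observed history
n = len(sales)
xs = list(range(n))
mean_x, mean_y = sum(xs) / n, sum(sales) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, sales)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x
forecast = intercept + slope * n                      # predicted month 6
print(f"Predictive output: {forecast:.1f}")           # a single forecast value

# --- Generative AI: produce new content resembling the training data ---
# Sample fresh text from a bigram model built over a tiny corpus.
corpus = "the model writes text the model learns text".split()
bigrams = {}
for a, b in zip(corpus, corpus[1:]):
    bigrams.setdefault(a, []).append(b)

word, generated = "the", ["the"]
for _ in range(5):
    word = random.choice(bigrams.get(word, corpus))   # fall back to corpus
    generated.append(word)
print("Generative output:", " ".join(generated))      # a novel word sequence
```

The contrast in outputs mirrors the legal distinction: the predictive model returns a single forecast derived from identified patterns, whereas the generative model emits new content, which is precisely the behaviour that raises the IP and liability questions discussed later in this guide.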
Importance of AI
AI remains widely recognised as a central driver of growth, competitiveness and employment. Since 2024, its role has evolved from transformative potential to practical necessity, with CEOs now actively embedding AI across operations; many expect its impact to surpass that of past technological shifts such as the internet. Recent breakthroughs in healthcare, sustainable energy management and personalised services illustrate AI’s increasing effectiveness in addressing critical societal challenges.
Countries continue to compete intensely to secure leading positions in the global AI innovation race, recognising early adoption as essential for economic leadership. However, regulatory frameworks – most notably the EU AI Act – could either bolster or limit innovation, and the uncertainty surrounding them significantly influences market competitiveness.
Reasons for Regulating AI
Managing AI-related risks typically involves establishing regulatory guardrails that balance risk mitigation with fostering innovation.
Since the entry into force of the EU AI Act in August 2024, the debate has intensified around regulating AI without hindering rapid technological progress. The contrasting regulatory approaches of the EU and the USA, together with emerging global standards, underscore the complexity of creating harmonised rules.
The opacity and increasing sophistication of AI – especially multimodal LLMs – continue to challenge regulators and businesses. Researchers still grapple with explaining AI behaviour, akin to physics’ historical struggle with unexplained phenomena. This complexity exacerbates issues around accuracy, bias and explainability. Additionally, rapid technological progress driven by algorithmic advances and expanding computing power makes it increasingly difficult for regulation to effectively anticipate and mitigate risks.
Practical challenges such as faulty datasets, algorithmic biases and human oversight issues remain critical, highlighting the ongoing necessity for evolving, flexible and responsive regulation.
Discussions Around AI Regulation
The global debate around AI regulation has intensified significantly following major political shifts, particularly the US administration’s policy pivot in early 2025 towards aggressive deregulation and strategic dominance in AI development. This change in US policy has unsettled traditional alliances, pushing Europe towards reconsidering its strict regulatory stance to avoid isolation in the global AI arms race.
Key lobbying issues remain familiar, yet the regulatory gap between the USA and EU has widened. The EU is cautiously considering deregulation or more flexible interpretations of the AI Act to bolster competitiveness. These efforts extend to established regulations such as the General Data Protection Regulation (GDPR), as the EU aims to enhance the competitiveness of its businesses against rivals in the United States, China and other regions.
European policymakers now face the delicate balance between safeguarding ethical standards and fostering innovation in an increasingly competitive global market. Meanwhile, unresolved tensions persist between addressing immediate AI risks and confronting long-term existential threats, with many stakeholders advocating increased oversight to mitigate short-term risks amidst this rapidly shifting geopolitical context.
Voluntary Commitments Rather Than Regulation
In response to the persistent complexities of AI regulation, voluntary AI governance initiatives have continued to emerge, especially following the US administration’s recent shift towards deregulation and a preference for industry-led oversight. Prominent AI companies in the USA have renewed their commitments to voluntary safeguards, advocating self-regulation to ensure product safety and prevent societal harms such as misinformation.
However, historical experience indicates that voluntary measures alone often have limited effectiveness. The EU, initially advocating binding legislation through the AI Act, now faces pressure to consider more flexible approaches or hybrid models – combining binding regulations with voluntary commitments – to maintain its competitiveness in the global AI landscape reshaped by recent US policy changes.
AI Regulatory Efforts Around the World
AI regulatory efforts are gaining momentum globally, with the EU AI Act – the world’s first comprehensive AI regulation – having entered into force last year. The Brussels effect has already shown partial success, with South Korea becoming the latest country to enact a national AI law. The “Basic Act on the Development of Artificial Intelligence and Establishment of Foundation for Trust” will come into force on 22 January 2026. Similar to its EU counterpart, the Basic Act employs a risk-based approach to regulate the development, deployment and operation of AI systems, with more stringent requirements applying only to specific “high-risk” use cases.
Meanwhile, the US administration’s shift towards industry-led oversight has reignited debates on AI governance, sparking concern about insufficient safeguards. Japan and Canada have progressed their respective oversight plans, emphasising risk-based approaches and ethical principles.
The UK remains cautious, reiterating the need for thorough risk assessments before finalising comprehensive legislation, yet it continues to accept industry-led recommendations. China has broadened its targeted, application-specific regulations, further refining enforcement mechanisms for generative AI and other key use cases. Europe’s cross-sectoral approach, now partially operational under the AI Act, is undergoing adjustments to maintain global competitiveness while preserving its commitment to robust regulatory standards.
Applying Existing Regulation to AI
Companies worldwide continue to grapple with applying existing regulations to AI technologies. Even with new AI-specific legislation emerging in some regions, AI systems remain subject to cross-sectoral and technology-neutral rules, such as data protection and intellectual property laws. However, this application remains complex and requires careful interpretation.
Cross-sectoral and technology-neutral regulations still offer broad coverage that prevents gaps and inconsistencies. Nevertheless, they can appear abstract when applied to cutting-edge AI solutions, creating uncertainty in the absence of extensive case law or precedent. Recent guidance from various national data protection authorities and the partial applicability of the EU AI Act have begun to clarify certain aspects, though grey areas persist.
In parallel, organisations such as the Organisation for Economic Co-operation and Development (OECD) have published preliminary recommendations aimed at improving the coherence of AI and privacy regulations. These recommendations focus on clarifying key terms, identifying overlapping legal provisions, and providing practical direction for stakeholders. Early regulatory support, in the form of targeted guidelines, remains crucial to ensure both legal certainty and innovation. Regulators around the globe are steadily issuing interpretative materials to align existing laws with AI’s evolving capabilities.
Regulatory Enforcement and Litigation Around AI
The legal spotlight on AI has intensified as global regulators seek to understand and govern the technology. Data protection authorities continue to lead enforcement efforts due to the frequent processing of personal data in AI systems, particularly generative AI. Ongoing investigations by regulators in Europe, the Americas and Asia underscore the importance of safeguarding personal data when using LLMs. This overlap between AI governance and data protection has broadened since 2024, with authorities increasingly co-ordinating their oversight in response to new regulatory frameworks, such as the partial implementation of the EU AI Act.
Litigation around AI has also expanded, especially in the area of intellectual property (IP). The landmark lawsuit by a major American media organisation against generative AI providers – filed in 2023 – progressed through initial hearings in 2024 and spurred further copyright-related class actions by creators in various industries. These legal actions target the alleged unauthorised use of copyrighted material in AI training datasets, raising questions about fair use and compensation for content creators. Courts in multiple jurisdictions are beginning to address these claims, demonstrating that IP-related disputes are likely to shape AI’s regulatory environment. In the LAION case, the Regional Court of Hamburg also handed down Germany’s first judgment on issues of text and data mining. As a result, companies developing or deploying AI must remain vigilant and adapt their compliance strategies to navigate evolving enforcement landscapes.
The Outlook
The intersection of law and AI technology continues to evolve, propelled by new AI-specific legislation and the refinement of existing rules. The partial applicability of the EU AI Act in early 2025, alongside fresh governance frameworks emerging in the USA, has prompted organisations worldwide to reconsider their compliance strategies. Adopting best-practice AI governance remains key to navigating regulatory complexity, reducing legal exposure and enhancing competitiveness.
A notable shift is the intensifying competition over global AI regulation and standards. G7 countries have taken steps to refine their AI governance guidelines, following the spirit of European oversight but allowing limited scope for voluntary commitments. Simultaneously, China’s Global AI Governance Initiative (GAIGI) has attracted additional partners across Asia, Africa and Latin America, focusing on iterative and application-specific rules. This diverging regulatory landscape underscores a broader geopolitical contest for influence over data, computing capacity and AI talent. Multinational enterprises must now consider where to locate their AI operations, recognising that diverse legal regimes require strategic adaptability. Ultimately, 2025 has made it clear that stakeholders globally – governments, industry leaders and innovators – must collaborate to shape an enduring, stable framework that supports both ethical advancement and sustained growth in AI.