Artificial Intelligence 2025

Last Updated May 22, 2025

USA – Washington, DC

Trends and Developments


Authors



Steptoe is an international firm with more than 500 lawyers and offices in Beijing, Brussels, Chicago, Hong Kong, Houston, London, Los Angeles, New York, San Francisco and Washington, DC. Its Artificial Intelligence (AI) group brings together a deep bench of more than 20 attorneys who advise on multijurisdictional AI regulations, with a focus on regulatory response, government relations, litigation and transactional matters. It represents industry leaders and the most consequential companies that develop AI technologies, trade groups advocating for policy decisions, and companies seeking to navigate the fast-changing legal framework for AI adoption. The firm helps clients shape the AI landscape with regular guidance on output bias, data privacy protection, cybersecurity, government contracts, corporate governance, high-risk activities, intellectual property, FCC-related matters, defence and export controls. It hosts monthly webinars with in-house counsel, executives, academics and policymakers to discuss the latest developments and trends in AI, digital law and policy.

Emerging Trends and Developments in US Artificial Intelligence Law and Policy: Views From Capitol Hill

How we got here: the current landscape in Washington, DC

The United States is a major player in the global development of artificial intelligence (AI) and has a significant impact on national (and global) conversations on the regulation of AI. The United States is still in the early stages of regulating AI and has no comprehensive federal legislation or regulatory framework. It has also taken a largely deregulatory approach towards AI at the national level, concerned that over-regulation might stifle US innovation. However, certain important trends in the regulatory landscape have emanated from Washington, DC, and it is vital to be apprised of them.

Understanding these trends is particularly important as AI developers, businesses and legal professionals strive to navigate the shift in priorities from the Biden administration to the Trump administration, and the current developments in Congress with respect to AI legislation. These trends are relevant not only to developments in Washington, DC but also to those at the state level throughout the country.

A series of Executive Orders has set the stage for AI policy. During his first term, President Donald Trump issued Executive Order 13859, Maintaining American Leadership in Artificial Intelligence, in February 2019, setting forth many principles and goals for US AI policy. In October 2023, President Joe Biden issued Executive Order 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “Biden EO”). The Biden administration prioritised establishing ethical guidelines, promoting transparency, and preventing civil liberties violations and discrimination. In alignment with these objectives, the Biden administration released a Blueprint for an AI Bill of Rights, which promoted five principles:

  • safe and effective systems;
  • algorithmic discrimination protections;
  • data privacy;
  • notice and explanation; and
  • human alternatives, consideration and fallback.

The federal approach to AI, however, has changed significantly since the re-election of President Trump in 2024. The Trump administration moved quickly to repeal the Biden EO and issued its own executive order and memoranda on AI in its place. Executive Order 14179, Removing Barriers to American Leadership in Artificial Intelligence (the “Trump EO”), emphasises innovation and economic competitiveness over regulatory oversight. Specifically, the Trump EO states that “[i]t is the policy of the United States to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security”. The new AI policy is therefore focused more on global dominance and security, and less on the issues of risk, safety and discrimination that animated the Biden EO and the laws and agreements that have taken root overseas, such as the EU AI Act.

President Trump’s EO directs the Assistant to the President for Science and Technology (APST), the Special Advisor for AI and Crypto, and the Assistant to the President for National Security Affairs (APNSA) to review “all policies, directives, regulations, orders and other actions” taken pursuant to the Biden EO and “suspend, revise or rescind” any actions that may be inconsistent with the policy set forth in the Trump EO. It also directs those individuals to create an AI Action Plan, which must be completed by the end of July 2025. The administration has made its views on European regulation and the Biden EO clear through Vice-President JD Vance, who stated at the AI Action Summit in Paris in February 2025 that “with the President’s recent executive order on AI, we’re developing an AI Action Plan that avoids an overly precautionary regulatory regime while ensuring that all Americans benefit from the technology and its transformative potential”.

Although various US federal agencies have been actively engaged in issuing guidance on AI, such guidance might be significantly scaled back under the Trump administration, with its focus on speeding up AI development in the United States while slowing down regulation. Guidance for specific industries or issues – for example, communications and intellectual property – has the highest likelihood of remaining in effect. Federal export controls relating to AI and AI-related hardware will also likely remain in effect and continue to expand, particularly in response to recent reports of advancements in China. At the state level, a patchwork of restrictions and requirements is coming into effect and will likely continue to expand as more states grapple with issues posed by AI.

Federal Trends in AI Legislation and Regulation

Notwithstanding the general trend towards deregulation in Washington, DC, some regulation and legislation can be expected to take root in the United States.

Regulation at the federal level

As noted above, the Trump EO directs the heads of certain agencies and departments to develop an AI Action Plan to achieve the policy set forth in the executive order. Pursuant to the Trump EO, the Office of Science and Technology Policy (OSTP) moved quickly to solicit public input on the development of the Action Plan. The open comment period closed on 15 March 2025, with many AI stakeholders submitting their perspectives on how the US government should approach the emerging technology. The submissions converged around several core themes, including:

  • infrastructure and energy development;
  • federal pre-emption of state AI laws;
  • export controls to maintain US competitiveness against rivals such as China;
  • promoting domestic AI adoption;
  • safeguarding national security; and
  • defining clear copyright and licensing frameworks for AI models and datasets.

Over the next several weeks and months, the Trump administration will comb through these comments and begin to finalise the Action Plan.

As the Action Plan is finalised and agencies begin implementing the Trump EO, one can expect additional federal regulatory actions on AI. However, there is still some uncertainty within this space, as the fate of agency regulations and actions directed by the Biden administration remains unclear.

Federal legislation on AI

On the legislative side, one can expect to see some movement in Congress on legislation addressing certain non-controversial AI topics. For example, the Senate unanimously passed the TAKE IT DOWN Act (S 146), which bolsters protections for those targeted by non-consensual intimate depictions, including AI-generated deepfakes. The legislation was endorsed by First Lady Melania Trump, subsequently passed the House of Representatives, and was signed into law by President Trump on 19 May 2025.

Additionally, Reps Zach Nunn (R-IA) and Jim Himes (D-CT) introduced the bipartisan AI PLAN Act (HR 2152), which would “require the federal government to create a strategy to prevent AI-generated scams and data theft that threaten economic and national security”. Although passage during the current 119th Congress is unlikely, this non-controversial legislation could gain bipartisan support and attention in future sessions.

Sens Mike Rounds (R-SD) and Martin Heinrich (D-NM) are launching a new bipartisan initiative called the American Science Acceleration Project (ASAP), which will promote investing in and building the infrastructure needed to speed up the development of AI for scientific research. There is wide bipartisan support in Congress for expanding data centres and energy production to support AI development, and the private sector has already started to address these challenges. In January 2025, multiple companies announced Project Stargate, which is intended to invest USD500 billion over four years to build new AI infrastructure.

By contrast, AI laws that regulate risks, such as safety and discrimination in decision-making, are less likely to advance in the United States any time soon. Although dozens of AI-related bills have been introduced in the current 119th Congress, it is unlikely that comprehensive federal AI legislation will become law this session, consistent with the Trump administration’s deregulatory posture and the current Republican majority. As a result, states are more likely to continue to pass their own AI regulations and laws within their borders. In 2024, for example, Colorado passed its own comprehensive legislation, the Consumer Protections for Artificial Intelligence Act, which requires developers of high-risk AI systems to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.

However, while states are more likely than the federal government to pass laws regulating discrimination and other risks posed by AI, Congress is actively considering pre-empting state laws to occupy the field and prevent states from legislating on AI. There are also already signs that states may follow a more deregulatory path. Virginia, for example, recently passed a bill much in the vein of Colorado’s AI law, though it was vetoed by the Governor out of concerns that it was “burdensome” and would stifle innovation, consistent with the Trump administration’s position. Even Colorado’s law was signed with “reservations” from its Governor, and there are signs that the Colorado law might be scaled back in the coming months as well. That said, numerous states have passed AI-focused laws, and many more have been proposed, spanning issues of safety and security, transparency of AI models and training data, deepfakes and more.

Like states across the country that have proposed or taken some degree of action on AI, the District of Columbia has taken several regulatory and legislative actions on AI, but no major comprehensive steps. In February 2024, Washington, DC Mayor Muriel Bowser announced Order 2024-028, Articulating DC’s Artificial Intelligence Values and Establishing Artificial Intelligence Strategic Benchmarks. The Order outlines six AI values, establishes an Advisory Group on AI Values Alignment, creates an AI Taskforce, and identifies internal government standards and strategic plans for government use of AI.

The Stop Discrimination by Algorithms Act has been introduced in the DC Council repeatedly over several years (most recently as B25-0114 in the 2023–2024 legislative session). It would place restrictions on algorithmic decision-making to prevent discrimination, much like the Colorado AI law and the vetoed Virginia AI bill. The Act has so far failed to pass, owing in part to a negative reception from industry groups.

AI, Intellectual Property, and Copyright: Emerging Federal Trends and Key Developments

The proliferation of AI, especially generative AI, raises complex questions about intellectual property and copyright, including whether AI-created works deserve copyright or patent protection, as well as questions of authorship and ownership. Federal regulators and courts have been grappling with these questions as the law in this area continues to develop. Because patents and copyrights are uniquely federal issues within US law, regulators, courts and policymakers in Washington, DC have a unique role to play in these areas.

Patents and intellectual property

In the patent area, there have been significant AI-related developments over the past year. On the heels of Thaler v Vidal, which held that inventorship of a patent requires a human inventor, the United States Patent and Trademark Office (USPTO) released its “Inventorship Guidance on AI-Assisted Inventions” on 11 February 2024. This guidance clarified that a human inventor must make a “significant contribution” to any invention for it to be patentable, even if generative AI was involved in the inventive process. The USPTO also provided hypothetical examples and scenarios illustrating how it might handle inventorship questions involving the use of AI in various contexts, such as mechanical or pharmaceutical inventions.

On 11 April 2024, the USPTO released a notice titled “Guidance on Use of Artificial Intelligence-Based Tools in Practice Before the United States Patent and Trademark Office”, which addressed how attorneys and patent agents can ethically use AI when prosecuting and obtaining patents before the office. Finally, on 14 January 2025, the USPTO announced a new AI Strategy “to guide the agency’s efforts toward fulfilling the potential of AI within USPTO operations and across the intellectual property ecosystem”. However, this strategy document was promptly pulled from the USPTO’s website when President Trump took office and is currently being re-evaluated.

Copyright

Courts are currently navigating how to apply copyright laws to AI-generated content, dealing with issues such as infringement, rights of use, and ownership of AI-generated works. The outcome of these cases is likely to depend on the court’s interpretation of the fair use doctrine, which permits the use of copyrighted work without the owner’s permission for purposes such as criticism, commentary, news reporting, teaching, scholarship or research, as well as for transformative uses that repurpose the copyrighted material in a manner not originally intended.

The US District Court for the District of Delaware recently concluded in Thomson Reuters v Ross Intelligence that training AI models on copyrighted datasets (specifically, Westlaw headnotes) did not constitute fair use in the particular circumstances of that case, which involved direct competitors in the market for legal research tools. However, the Court left open the possibility that using datasets in generative AI systems that create new content could qualify as fair use in other contexts. Specifically, the opinion did not address how the fair use analysis would apply where copyrighted data was used for research or non-commercial purposes, or where other types of content were used to train AI models. Dozens of other cases involving copyrighted training data, spanning books, music, movies, source code and numerous other media types, have also raised the fair use question but remain pending.

Apart from fair use, there is also the question of using AI to generate works of authorship, which is similar to the question of patent inventorship addressed above. In March 2025, the US Court of Appeals for the District of Columbia Circuit confirmed in Thaler v Perlmutter that a work created entirely by an AI system is not subject to copyright. The Court’s ruling was unequivocal that an AI system “cannot be the recognised author of a copyrighted work because the Copyright Act of 1976 requires all eligible work to be authored in the first instance by a human being”. At the same time, the decision noted that the originality requirement of copyright “still incentivises humans... to create and to pursue exclusive rights to works they make with the assistance of artificial intelligence”, leaving the door open for copyright protections in contexts where generative AI was used to generate outputs.

While the courts grapple with applying copyright laws to both AI training data and AI-generated content, the United States has begun to expand its regulatory regime to protect individuals from AI-related harms, such as copyright infringement. The US Copyright Office (the “Office”) is leading the discussion around AI and copyright protection.

In Spring 2023, the Office held four listening sessions on the use of AI to generate works in the creative field. The sessions focused on literary works, visual arts, audiovisual works, and music and sound recordings.

In August 2023, after hosting public listening sessions and webinars, the Office published a notice of inquiry in the Federal Register. The Office received several comments that discussed issues such as copyrighted data used to train AI systems, bias, system accountability, and existential harm to or the replacement of human creators.

The Office is currently issuing a multi-part Report analysing issues relating to AI and copyright, with each part published as it is completed. On 31 July 2024, the Office released Part 1 of the Report, addressing digital replicas. It discusses the growing problem of digitally created videos, images or audio recordings that falsely depict an individual, and finds that existing legal protections are insufficient against this threat.

On 29 January 2025, the Office released Part 2 of the Report, addressing the copyrightability of outputs created using generative AI. Notably, the Office has been registering works created with the assistance of generative AI tools. In doing so, the Office’s practice (to date) has been to grant copyright protection to those portions of a creative work that are based on a human’s creativity, but to refuse protection for the portions that were directly generated by the AI tool. This approach is particularly significant for software development, where the coding efficiencies gained from generative AI are substantial but the drawback of potentially having no copyright in the generated code may be even greater.

On 9 May 2025, the Office released a draft of Part 3 of its Report, which addresses the legal implications of training AI models on copyrighted works, including licensing considerations and liability allocation. The draft provides a nuanced view of fair use in the context of AI model training, noting that, while some uses of copyrighted data to train AI models may constitute fair use, other uses may require a licence. The next day, President Trump fired the Register of Copyrights, Shira Perlmutter.

Forthcoming guidance on AI and copyright is also likely to come from the Trump administration’s AI Action Plan. Several responses to the OSTP’s Request for Information on the AI Action Plan focused on copyright issues. Major technology and AI companies, for example, argued that US copyright law – particularly the fair use doctrine – is vital for US innovation, while creators argued for free-market licensing frameworks that fairly connect content creators and AI developers.

AI, International Trade and Government Contracts: Emerging Trends and Key Developments

The National Institute of Standards and Technology framework

In collaboration with the private and public sectors, the National Institute of Standards and Technology (NIST) developed a framework to manage the risks that AI poses to individuals, organisations and society. On 26 July 2024, NIST released its “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile” (NIST AI 600-1) (the “GAI RMF”). Developed as a companion to NIST’s AI Risk Management Framework (the “AI RMF”), the GAI RMF identifies 12 risks that are novel to or exacerbated by the use of generative AI (GAI), along with more than 400 possible actions that developers can take to manage them. The 12 risks are:

  • chemical, biological, radiological and nuclear (CBRN) information or capabilities;
  • confabulation (or “hallucinations”);
  • dangerous, violent or hateful content;
  • data privacy;
  • environmental impacts;
  • harmful bias and homogenisation;
  • human-AI configuration;
  • information integrity;
  • information security;
  • intellectual property;
  • obscene, degrading and/or abusive content; and
  • value chain and component integration.

Although the Biden EO directed the creation of this framework, the NIST GAI RMF is unlikely to be withdrawn. It is a voluntary framework that companies widely use to guide their internal AI risk management programmes. NIST frameworks are also often used within the federal government and in government contracts, and can end up permeating commercial operations when imposed as mandatory requirements for government contractors.

On 14 February 2025, NIST announced the creation of a new “Cyber AI Profile” to provide risk management guidance related to “Cybersecurity of AI Systems, AI-enabled Cyber Attacks, and AI-enabled Cyber Defense”. As cyber threats grow more targeted and advanced with the help of AI, companies are placing more guardrails around their online operations to protect against attacks. The Profile is intended to be the first comprehensive roadmap addressing cybersecurity risks related to AI usage and their mitigation. NIST published a concept paper on the Cyber AI Profile, which will build on the NIST Cybersecurity Framework 2.0, and is now reviewing public comments on the draft.

Export controls

In the final months of the Biden administration, the Department of Commerce (DOC) issued several far-reaching export control updates. On 2 December 2024, the DOC issued two rules that added 140 companies to the Entity List, expanded the scope of the Foreign Direct Product Rule, and restricted exports of emerging technologies such as high-bandwidth memory. On 15 January 2025, the DOC released its “Framework for Artificial Intelligence Diffusion”, which added a new control on AI model weights for certain advanced closed-weight dual-use AI models. The next day, the DOC released the “Foundry Due Diligence Rule”, which revised export regulations for Outsourced Semiconductor Assembly and Test companies.

The Trump administration rescinded the AI diffusion rule in May 2025, but export controls remain a key focus. In its America First Trade Policy Memorandum, the administration signalled heightened export controls and outbound investment restrictions targeting AI, as well as increased use of Information and Communications Technology and Services (ICTS) rules. Further, the administration’s America First Investment Policy Memorandum suggested heightened scrutiny of AI transactions by the Committee on Foreign Investment in the United States (CFIUS). Outbound investment rules are likely to be strengthened.

Government contracts

In government contracting, the federal government has been focused on expanding its control over and rights in AI models and datasets, and this trend will likely continue. The federal government has recognised that it has vast amounts of data that can be leveraged to train AI models, but has been giving conflicting signals as to whether these resources should be made widely available to spur commercial development or be tightly controlled for national security purposes and to promote the federal government’s own development of AI tools.

Most recently, in a proposed rule for handling controlled unclassified information, the federal government indicated that contractors should be prohibited from using government-provided data for their own internal purposes, including training and improving AI models. If implemented, this prohibition would fundamentally change how software companies do business with the federal government, as they often use feedback and information learned in performance to improve their offerings. Information that contractors collect in performance, such as through public engagement with AI systems, may also be restricted under the rule.

The federal government is also in the process of expanding limitations on ex-US access to AI used for defence and intelligence purposes, as well as ex-US access to AI systems more generally when used by the federal government. In addition, the federal government is increasingly imposing localisation requirements that mandate that such systems and corresponding datasets be maintained in the United States, including when offered through cloud environments.

Steptoe LLP

1330 Connecticut Avenue, NW
Washington, DC 20036
USA

+1 202 429 3000

pcastillo@steptoe.com
www.steptoe.com
