Artificial Intelligence 2025

Last Updated May 22, 2025

USA – New York

Trends and Developments


Authors



A&O Shearman boasts an artificial intelligence (AI) team that comprises leading global experts working at the intersection of AI and intellectual property (IP), privacy and regulatory law. It covers all types of AI and the specific issues they raise from a risk management and contracting perspective. It has worked with sovereigns to develop AI policies, and with businesses of all sizes across industries on effective and responsible AI solutions. It has also advised on AI-focused transactions and disputes. The team has considerable experience with a wide range of new and emerging AI-related and AI-based technologies, and in assisting clients with their product development and the formulation of go-to-market strategies as well as clients who are users and consumers of others’ AI tech. It brings specialist expertise on matters centred in Silicon Valley, New York, the UK and several other global jurisdictions, advising a wide range of clients on AI matters.

Introduction to AI in New York

The State of New York (NY), and New York City (NYC) specifically, has made significant strides in recent years to become a leader in the responsible use of artificial intelligence (AI) technology by balancing the regulation of AI use and propelling AI innovation. This dual approach aims to harness the benefits of AI while mitigating potential risks and ethical concerns associated with its deployment.

AI Landscape in NYC

NYC has a robust framework for AI innovation. NYC is the largest metropolitan economy in the world, boasting a USD2 trillion gross metropolitan product. This impressive economic output is the product of NYC being a hub for diverse industries, including finance, healthcare, entertainment, energy and tech. The concentration of successful businesses in NYC reflects a city that encourages innovation and productivity – outcomes that AI has been shown to help achieve. NYC industry leaders appear eager to embrace AI to promote further efficiency and success.

NYC is home to industry giants and over 25,000 tech-enabled start-ups, including thousands of AI-driven start-ups. Recent trends suggest that there is intensified interest in AI-driven businesses in NYC. In 2023 alone, approximately one third of venture capital raised by NYC start-ups was directed to AI. This indicates that investors recognise the value and potential of AI in the city’s dynamic and competitive market.

Moreover, NYC has a rich talent pool of AI experts, researchers and entrepreneurs, who benefit from the city’s world-class academic institutions, research centres and innovation hubs. Some notable examples of AI-related initiatives and organisations in NYC include:

  • the AI Now Institute at New York University (NYU), which conducts interdisciplinary research on the social and ethical implications of AI;
  • the Center for Data Science at NYU, which offers cutting-edge education and research in data science and AI;
  • the Data Science Institute at Columbia University, which fosters collaboration and innovation in data science and AI across disciplines and sectors;
  • the Cornell Tech campus on Roosevelt Island, which brings together faculty, students and industry partners to create digital technologies and solutions for societal and economic impact;
  • the Partnership on AI, a global coalition of over 100 organisations, including leading tech companies, civil society groups and academic institutions, which aims to ensure that AI is developed and used in a responsible and beneficial manner;
  • the NYC AI Collective, a community of over 70 AI start-ups that collaborate, share best practices and advocate for the AI industry in NYC;
  • the NYC Mayor’s Office of Data Analytics, which leverages data and analytics to improve city services, operations and policies; and
  • the NYCx initiative, a programme that invites innovators and entrepreneurs to solve urban challenges using emerging technologies, including AI.

These examples illustrate the breadth and depth of the AI landscape in NYC, which offers a conducive environment for AI innovation, adoption and impact.

The State of NY’s AI Leadership: How the State Supports Innovation and Ethics

The State of NY has become an AI leader partly because of the state government’s support and vision. The State of NY has spearheaded and helped fund several bold AI-related initiatives that aim to foster collaboration, research, education and entrepreneurship in this rapidly evolving domain.

The Empire AI Consortium

One of the State of NY’s flagship initiatives is the Empire AI Consortium, which was established in April 2024 as a partnership between public and private universities across the state to create a state-of-the-art AI computing centre housed at SUNY’s University at Buffalo. Academic institutions that are members of the Empire AI Consortium can leverage sophisticated computing resources that may otherwise be prohibitively expensive, to help solve complex problems and promote innovation.

The Empire AI Consortium was formed with an investment of over USD400 million in public and private funding, including up to USD250 million from the State of NY in grants and other funding. In February 2025, Governor Hochul's fiscal year 2026 budget proposed a further USD90 million for the Empire AI Consortium, to be matched by USD50 million in private funding and USD25 million from the State University of New York System over ten years. This fiscal commitment emphasises the state's dedication to remaining at the forefront of AI innovation.

The Empire AI Consortium also aligns with the State of NY’s emphasis on using AI ethically and within an environmental, social and governance (ESG) framework. Cognisant that training and using AI consumes significant energy and contributes to climate-warming greenhouse gas emissions, the Empire AI Consortium was built to be sustainably powered by clean and renewable hydropower from Niagara Falls. To help offset the environmental impact and carbon footprint of the Empire AI Consortium, it also recirculates the heat it generates to warm student housing at SUNY’s University at Buffalo.

NYCEDC

Another key player in the State of NY's AI leadership is the New York City Economic Development Corporation (NYCEDC), a public-benefit corporation that serves as the official economic development organisation for NYC. In January 2025, the NYCEDC published a study and action plan that outlines NYC's strategy to further develop AI opportunities and applications in various domains. The study and action plan also address the challenges and risks of AI (such as ethical, legal and social implications) and propose solutions and recommendations to help ensure that AI is used responsibly and equitably.

One of the main goals of the NYCEDC’s study and action plan is to develop a diverse, AI-ready workforce that can meet the growing demand for AI skills and talent in the city. The NYCEDC proposes to achieve this goal by investing in AI education and training programmes, creating AI career pathways and pipelines, and supporting AI entrepreneurship and innovation ecosystems. The NYCEDC also aims to foster collaboration and engagement among various stakeholders, such as academia, private institutions, government and the public, to promote a shared vision and understanding of AI’s potential and challenges.

AI governance

In addition, in January 2024, NYC established a Steering Committee to oversee AI use within city government and an AI Advisory Network with representatives from different sectors (including private corporations and academia) to support NYC’s AI efforts on a consultative basis.

The State of NY’s Regulatory Approach to AI

The State of NY has established a multi-layered approach to AI regulation, focusing primarily on government usage while beginning to address broader applications. There are no comprehensive AI laws yet, but the State of NY and NYC have addressed specific risks such as bias and deception. The State of NY has implemented several innovative measures to monitor and regulate AI deployment across public agencies, with distinct approaches at state and city levels. This regulatory landscape continues to evolve as lawmakers and agencies develop more comprehensive frameworks.

Key regulators

Multiple regulators oversee AI-related issues in the State of NY. Some of these include:

  • the New York State government – oversees AI use in state agencies, ensuring compliance with legislative requirements;
  • the New York City government – implements city-specific AI policies and initiatives;
  • the New York State Attorney General – enforces consumer protection and anti-discrimination laws in cases involving AI;
  • the Department of Consumer and Worker Protection (DCWP) – enforces the city’s AI hiring bias law (Local Law 144); and
  • the New York Department of Financial Services (NYDFS) – regulates AI in the financial sector, providing guidance on mitigating cybersecurity risks.

State government regulatory framework

The State of NY’s government has enacted significant legislation and executive initiatives to govern AI use in state agencies. In 2023, the State of NY enacted NY SB 1042A, a law that criminalised the intentional dissemination or publication of deepfake images or videos and created a right for victims to sue. In late 2024, Governor Kathy Hochul signed the Legislative Oversight of Automated Decision-making in Government Act (the “LOADinG Act”), requiring state agencies to conduct thorough assessments of any software using AI techniques. Additionally, the LOADinG Act requires the following:

  • state agencies must publicly disclose when they use AI or automated decision-making systems, including those already in use;
  • direct human review and oversight of AI systems is required;
  • biennial reports detailing AI usage must be submitted to the governor;
  • state agencies are prohibited from replacing government workers with AI systems; and
  • an approval process must be established for any new automated decision-making system.

These requirements reflect the State of NY’s emphasis on human verification alongside technological advancement, ensuring that critical decisions affecting citizens’ welfare remain subject to human judgement.

NYC’s regulatory approach

NYC has been a pioneer in local AI regulation. The city enacted Local Law 144 of 2021, which prohibits employers from using an AI-driven tool to make hiring or promotion decisions in NYC, unless:

  • the AI-driven tool has undergone a bias audit by an independent evaluator not more than one year prior to its use; and
  • at least ten business days prior to its use, candidates are notified that an AI-driven tool is being used, how it is proposed to be used, and the specific job qualifications and characteristics that it will use when assessing the candidate.

The law is enforced by the DCWP; although it took effect on 1 January 2023, enforcement did not begin until 5 July 2023. The law aims to ensure that such tools do not unfairly disadvantage candidates in the job market. Employers that breach Local Law 144 are liable for a civil penalty of up to USD500 for a first violation and each additional violation occurring on the same day; subsequent violations carry penalties of between USD500 and USD1,500.
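By way of illustration only, the arithmetic at the heart of a bias audit can be sketched as follows: each demographic category's selection rate is compared against the highest category's rate to produce an impact ratio. The category names and figures below are hypothetical, and the sketch is not the DCWP's prescribed audit methodology or reporting format.

```python
# Illustrative impact-ratio arithmetic for auditing an automated employment
# decision tool. Hypothetical data; not the DCWP's prescribed methodology.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants in a category that the tool selected."""
    return selected / applicants

def impact_ratios(results: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each category's selection rate divided by the highest category's rate."""
    rates = {cat: selection_rate(sel, total) for cat, (sel, total) in results.items()}
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical audit data: category -> (selected, total applicants)
data = {
    "category_a": (48, 100),  # 48% selection rate
    "category_b": (30, 100),  # 30% selection rate -> impact ratio of 0.625
}

print({cat: round(ratio, 3) for cat, ratio in impact_ratios(data).items()})
```

A ratio well below 1.0 for any category (here, 0.625 for category_b) is the kind of disparity an independent evaluator would flag for further scrutiny.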

NYDFS AI cybersecurity guidance

The New York Department of Financial Services (NYDFS) has proactively regulated AI in the financial sector. In October 2024, the NYDFS issued guidance to help state-regulated financial institutions mitigate cybersecurity risks posed by AI. While this guidance does not implement new requirements, it provides a framework for regulated entities to meet existing compliance obligations under the NYDFS cybersecurity regulation (23 NYCRR Part 500). The guidance specifically addresses AI-specific risks and emphasises that financial institutions must implement robust governance frameworks, ensure third-party risk management, and use AI responsibly.

Proposed legislation and trends

State of NY lawmakers continue to develop new regulatory approaches to AI. They are expected to continue tightening AI rules in areas such as employment, finance and consumer protection, while co-ordinating with emerging federal standards. Key proposals to note include the following.

  • A01952: requires employers or employment agencies to notify candidates of the use of Automated Employment Decisions Tools and allow them to request an alternative selection process or accommodation.
  • A03930: regulates AI use in rental housing and loans. It requires annual disparate impact analysis for housing applicant selection tools and public disclosure of the analysis summary.
  • SB365: the New York Privacy Act would require that companies disclose automated decision-making with materially detrimental effects, allow consumers to contest negative decisions and obtain human review. 

Procurement and Use of AI

In October 2023, NYC unveiled an AI action plan, the first of its kind for a major US city. The plan establishes a holistic, adaptable framework for AI governance. More specifically, the city aims to:

  • prioritise public engagement in order to educate and empower New Yorkers;
  • train public servants who use, manage or make decisions about AI tools on best practices, and develop a range of new AI learning resources in support of that aim;
  • streamline AI acquisition processes to ensure responsible procurement, including by pursuing new citywide contracts for high-demand AI tools to benefit from competitive terms and pricing, and by developing AI-specific procurement standards; and
  • build public confidence and maintain accountability to industry stakeholders by publishing an annual progress report on the city’s implementation of the AI action plan.

The NYC Office of Technology and Innovation (OTI) published “Preliminary Use Guidance: Generative Artificial Intelligence” in May 2024. Rather than prohibiting the use of AI by city personnel, the preliminary guidance outlines key considerations to guide agencies and their personnel towards the ethical, transparent and effective deployment of AI technologies. In practice, agencies should vet AI tools against city standards before use.

NYC also incorporated stipulations for “responsible AI procurement” within its overarching governance objectives, which may serve as a key reference for private sector organisations. Private vendors looking to sell generative AI (GAI) solutions to the public sector must be prepared to meet stringent requirements. The city’s procurement approach emphasises that vendors should ensure that their GAI products include features or documentation addressing performance, transparency and explainability, fairness, privacy protection and cybersecurity safeguards. Vendors should anticipate security reviews and be prepared to demonstrate how their models protect data and mitigate bias or harm.

Challenges to Harnessing AI in the State of NY

The State of NY and its constituents have been proactive in attempting to harness AI, but have nonetheless faced certain challenges in doing so.

In June 2023, US District Judge P. Kevin Castel of the Southern District of NY ordered certain attorneys to pay a fine of USD5,000 for submitting a legal brief that included six fictitious case citations generated by ChatGPT. This incident underscores the potential pitfalls of relying on AI-generated content without thorough verification, highlighting the need for stringent oversight and validation mechanisms when integrating AI into legal practices.

In March 2023, NYC launched the first phase of “MyCity”, an online chatbot that employs Microsoft’s Azure AI and is designed to help NYC residents manage childcare, career and business services. Mayor Eric Adams promoted the MyCity chatbot as a technology that “allowed us to cut through government red tape, help 43,000 families gain eligibility for childcare, and support hundreds of thousands of job seekers”. However, a year later, reports circulated that the MyCity chatbot was generating hallucinations that instructed users to break the law. For example, the MyCity chatbot advised users that landlords are not required to accept tenants on rental assistance (despite NYC Human Rights Law Section 8-101 stipulating that it is illegal to discriminate based on a person’s lawful source of income) and that employers cannot take a cut of their workers’ tips (despite NY Labor Law Section 196-d and the NY Department of Labor Hospitality Industry Wage Order stipulating that tips are the property of the employee). The MyCity chatbot has since been updated to include a disclaimer that it may occasionally provide incomplete or inaccurate responses.

AI hallucinations can be caused by various factors, including flawed training data sources, limitations of the AI model, and biases introduced by the developers of the AI tool. Although NYC has taken steps to address such factors, including by passing NY Local Law 144, it is difficult to predict the measures that will be needed as developers continue to innovate and find new use cases for AI. This ongoing challenge necessitates continuous monitoring and updating of AI systems to ensure their reliability and accuracy.

In 2019, the NY Police Department (NYPD) signed a contract with Voyager Labs for its Voyager Analytics and Genesis tools. While the NYPD has publicly stated that it uses such tools in accordance with its social media policy and does not otherwise use AI in a predictive manner, certain non-profits such as the Surveillance Technology Oversight Project (STOP) have decried the use as “invasive” and “alarming”. The NYPD recently reported that it has explored the use of other AI tools to assist in its policing efforts and acknowledged that it deployed AI technology to assist with the manhunt for Luigi Mangione, a suspect in a homicide committed in NYC on 4 December 2024.

The rapid proliferation of AI in various industries, particularly digital infrastructure, may also stress the energy infrastructure of the State of NY and NYC. According to a 2024 report from the New York Independent System Operator (NYISO), NYC could experience an energy shortfall by 2032 if current electrification plans proceed as expected. Notably, the report does not yet account for the potential energy demands from major AI initiatives due to the undetermined scale of these projects. Addressing the energy requirements for AI projects while adhering to the State of NY’s ESG framework will require innovation and development in AI as well as other industries.

Best Practices for AI Adoption in the Corporate Context

General principles

Businesses are rapidly developing and deploying AI tools, driven in part by third-party investors seeking AI-driven businesses that can scale swiftly, at the same time as regulations and laws applicable to AI are being introduced and adopted at an increasing pace. This confluence means that companies may inadvertently expose themselves to liability. In order to reduce risk exposure stemming from the use of AI, businesses should do the following.

  • Ensure that the data used to train AI models is sourced ethically and legally. Verify the accuracy, currency and reliability of the data to avoid biases and inaccuracies. Always remember the computer science adage, “garbage in, garbage out”.
  • Always label AI-generated content, even if it has been edited by employees. Include a header or footnote in documents, or a notice on webpages, that use or were created with GAI.
  • Before full-scale implementation in a production environment, AI systems should undergo pilot testing to identify potential issues. However, pilot testing alone is insufficient. AI systems should be continuously monitored to ensure that they continue to operate as intended.
  • Protect personal data by complying with privacy regulations and implementing robust data security measures. This may involve establishing data governance policies, including assurances of data privacy and security.
  • Maintain the confidentiality of sensitive information processed by AI tools to prevent unauthorised access and data breaches.
  • Adopt guidelines for transparency, bias mitigation and data privacy, as recommended by the State of NY’s Emerging Technology Advisory Board (ETAB).
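As a minimal illustration of the labelling practice above, GAI output can be tagged with a disclosure notice before distribution, with the label retained even after human editing. The notice wording and function name below are assumptions for the sketch, not a mandated format.

```python
# Minimal sketch: attach a disclosure notice to AI-generated text before it is
# distributed. The notice wording is illustrative, not a legally mandated format.

def label_ai_output(text: str, edited_by_human: bool = False) -> str:
    """Prepend a disclosure notice; keep the label even after human edits."""
    notice = "Notice: this content was generated with the assistance of AI"
    if edited_by_human:
        notice += " and subsequently edited by staff"
    return f"[{notice}.]\n\n{text}"

print(label_ai_output("Q3 summary of permit applications.", edited_by_human=True))
```

Embedding the disclosure in the output itself, rather than tracking it separately, helps ensure the label survives as content is copied between documents and systems.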

Guidance for legal professionals on AI use

The New York City Bar Association issued Formal Opinion 2024-5 in August 2024. It addresses key points such as:

  • the impact of GAI on access to justice;
  • ethical issues posed by AI in specific practice areas and contexts; and
  • the potential need for new rules or amendments to address GAI challenges.

Most importantly, the opinion outlines how the New York Rules of Professional Conduct (NYRPC) apply to the use of GAI in legal practice. For example, Rule 1.1 requires that lawyers provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation. As applied to a lawyer seeking to use GAI, the opinion explains that the rule would require the lawyer to:

  • understand the limitations of the specific GAI tool being used;
  • carefully and critically review its output for bias or inaccuracies;
  • supplement its output with human research and analysis; and
  • always apply professional judgement.

Conclusion

The State of NY has prioritised driving economic growth and technological transformation through the ethical and safe use of AI. The State of NY’s efforts to integrate AI into various sectors demonstrate the potential benefits and the challenges associated with this technology. While AI can streamline processes and provide significant advantages, it also requires careful management and oversight to prevent misuse and ensure accuracy. As AI continues to evolve, the State of NY must remain vigilant and adaptive to address the complexities and ethical considerations accompanying its use.

A&O Shearman

599 Lexington Avenue
New York
NY 10022
USA

+1 212 848 4000

information@aoshearman.com
www.aoshearman.com