Introduction to AI in New York
The State of New York (NY), and New York City (NYC) specifically, has made significant strides in recent years to become a leader in the responsible use of artificial intelligence (AI) technology by balancing the regulation of AI use with the promotion of AI innovation. This dual approach aims to harness the benefits of AI while mitigating the potential risks and ethical concerns associated with its deployment.
AI Landscape in NYC
NYC has a robust framework for AI innovation. NYC is the largest metropolitan economy in the world, boasting a USD2 trillion gross metropolitan product. This impressive economic output is the product of NYC being a hub for diverse industries, including finance, healthcare, entertainment, energy and tech. The concentration of successful businesses in NYC reflects a city that encourages innovation and productivity – outcomes that AI has been shown to help achieve. NYC industry leaders appear eager to embrace AI to promote further efficiency and success.
NYC is home to industry giants and over 25,000 tech-enabled start-ups, including thousands of AI-driven start-ups. Recent trends suggest that there is intensified interest in AI-driven businesses in NYC. In 2023 alone, approximately one third of venture capital raised by NYC start-ups was directed to AI. This indicates that investors recognise the value and potential of AI in the city’s dynamic and competitive market.
Moreover, NYC has a rich talent pool of AI experts, researchers and entrepreneurs, who benefit from the city’s world-class academic institutions, research centres and innovation hubs. Some notable examples of AI-related initiatives and organisations in NYC include:
These examples illustrate the breadth and depth of the AI landscape in NYC, which offers a conducive environment for AI innovation, adoption and impact.
The State of NY’s AI Leadership: How the State Supports Innovation and Ethics
The State of NY has become an AI leader partly because of the state government’s support and vision. The State of NY has spearheaded and helped fund several bold AI-related initiatives that aim to foster collaboration, research, education and entrepreneurship in this rapidly evolving domain.
The Empire AI Consortium
One of the State of NY’s flagship initiatives is the Empire AI Consortium, which was established in April 2024 as a partnership between public and private universities across the state to create a state-of-the-art AI computing centre housed at SUNY’s University at Buffalo. Academic institutions that are members of the Empire AI Consortium can leverage sophisticated computing resources that may otherwise be prohibitively expensive, to help solve complex problems and promote innovation.
The Empire AI Consortium was formed with an investment of over USD400 million in public and private funding, including up to USD250 million from the State of NY in grants and other funding. In February 2025, Governor Hochul's fiscal year 2026 budget proposal allocated an additional USD90 million to the Empire AI Consortium, to be matched by USD50 million in private funding and USD25 million from the State University of New York system over ten years. This fiscal commitment emphasises the state's dedication to remaining at the forefront of AI innovation.
The Empire AI Consortium also aligns with the State of NY’s emphasis on using AI ethically and within an environmental, social and governance (ESG) framework. Cognisant that training and using AI consumes significant energy and contributes to climate-warming greenhouse gas emissions, the Empire AI Consortium was built to be sustainably powered by clean and renewable hydropower from Niagara Falls. To help offset the environmental impact and carbon footprint of the Empire AI Consortium, it also recirculates the heat it generates to warm student housing at SUNY’s University at Buffalo.
NYCEDC
Another key player in the State of NY’s AI leadership is the New York City Economic Development Corporation (NYCEDC), a public-benefit corporation that serves as NYC’s official economic development organisation. In January 2025, the NYCEDC published a study and action plan outlining NYC’s strategy to further develop AI opportunities and applications across various domains. The study and action plan also address the challenges and risks of AI (such as its ethical, legal and social implications) and propose solutions and recommendations to help ensure that AI is used responsibly and equitably.
One of the main goals of the NYCEDC’s study and action plan is to develop a diverse, AI-ready workforce that can meet the growing demand for AI skills and talent in the city. The NYCEDC proposes to achieve this goal by investing in AI education and training programmes, creating AI career pathways and pipelines, and supporting AI entrepreneurship and innovation ecosystems. The NYCEDC also aims to foster collaboration and engagement among various stakeholders, such as academia, private institutions, government and the public, to promote a shared vision and understanding of AI’s potential and challenges.
AI governance
In addition, in January 2024, NYC established a Steering Committee to oversee AI use within city government and an AI Advisory Network with representatives from different sectors (including private corporations and academia) to support NYC’s AI efforts on a consultative basis.
The State of NY’s Regulatory Approach to AI
The State of NY has established a multi-layered approach to AI regulation, focusing primarily on government usage while beginning to address broader applications. There are no comprehensive AI laws yet, but the State of NY and NYC have addressed specific risks such as bias and deception. The State of NY has implemented several innovative measures to monitor and regulate AI deployment across public agencies, with distinct approaches at state and city levels. This regulatory landscape continues to evolve as lawmakers and agencies develop more comprehensive frameworks.
Key regulators
Multiple regulators oversee AI-related issues in the State of NY. Some of these include:
State government regulatory framework
The State of NY’s government has enacted significant legislation and executive initiatives to govern AI use in state agencies. In 2023, the State of NY enacted NY SB 1042A, a law that criminalised the intentional dissemination or publication of deepfake images or videos and created a right for victims to sue. In late 2024, Governor Kathy Hochul signed the Legislative Oversight of Automated Decision-making in Government Act (the “LOADinG Act”), requiring state agencies to conduct thorough assessments of any software using AI techniques. Additionally, the LOADinG Act requires the following:
These requirements reflect the State of NY’s emphasis on human verification alongside technological advancement, ensuring that critical decisions affecting citizens’ welfare remain subject to human judgement.
NYC’s regulatory approach
NYC has been a pioneer in local AI regulation. The city enacted Local Law 144 of 2021, which prohibits employers from using an AI-driven tool to make hiring or promotion decisions in NYC unless the tool has undergone an independent bias audit within one year of its use, a summary of the audit results has been made publicly available, and candidates or employees have been notified of its use.
The law is enforced by the NYC Department of Consumer and Worker Protection (DCWP). Although the law took effect on 1 January 2023, enforcement did not begin until 5 July 2023, giving employers time to ensure that such tools do not unfairly disadvantage applicants in the job market. Employers that breach Local Law 144 are liable for a civil penalty of up to USD500 for a first violation and each additional violation occurring on the same day; any subsequent violations carry a penalty of between USD500 and USD1,500.
NYDFS AI cybersecurity guidance
The New York Department of Financial Services (NYDFS) has proactively regulated AI in the financial sector. In October 2024, the NYDFS issued guidance to help state-regulated financial institutions mitigate cybersecurity risks posed by AI. While this guidance does not implement new requirements, it provides a framework for regulated entities to meet existing compliance obligations under the NYDFS cybersecurity regulation (23 NYCRR Part 500). The guidance specifically addresses AI-specific risks and emphasises that financial institutions must implement robust governance frameworks, ensure third-party risk management, and use AI responsibly.
Proposed legislation and trends
State of NY lawmakers continue to develop new regulatory approaches to AI. They are expected to continue tightening AI rules in areas such as employment, finance and consumer protection, while co-ordinating with emerging federal standards. Key proposals to note include the following.
Procurement and Use of AI
In October 2023, NYC unveiled an AI action plan, the first of its kind for a major US city. The plan sets out a roadmap for establishing a holistic, adaptable framework for AI governance. More specifically, the city aims to:
The NYC Office of Technology and Innovation (OTI) published “Preliminary Use Guidance: Generative Artificial Intelligence” in May 2024. Rather than prohibiting the use of AI by city personnel, the preliminary guidance outlines key considerations to guide agencies and their personnel towards the ethical, transparent and effective deployment of AI technologies. In practice, agencies should vet AI tools against city standards before use.
NYC also incorporated stipulations for “responsible AI procurement” within its overarching governance objectives, which may serve as a key reference for private sector organisations. Private vendors looking to sell generative AI (GAI) solutions to the public sector must be prepared to meet stringent requirements. The state’s procurement approach emphasises that vendors should ensure that their GAI products include features or documentation addressing performance, transparency and explainability, fairness, privacy protection and cybersecurity safeguards. Vendors should anticipate security reviews and be prepared to demonstrate how their models protect data and mitigate bias or harm.
Challenges to Harnessing AI in the State of NY
The State of NY and its constituents have been proactive in attempting to harness AI, but these efforts have not been without challenges.
In June 2023, US District Judge P. Kevin Castel of the Southern District of NY ordered certain attorneys to pay a fine of USD5,000 for submitting a legal brief that included six fictitious case citations generated by ChatGPT. This incident underscores the potential pitfalls of relying on AI-generated content without thorough verification, highlighting the need for stringent oversight and validation mechanisms when integrating AI into legal practice.
In March 2023, NYC launched the first phase of “MyCity”, an online chatbot that employs Microsoft’s Azure AI and is designed to help NYC residents manage childcare, career and business services. Mayor Eric Adams promoted the MyCity chatbot as a technology that “allowed us to cut through government red tape, help 43,000 families gain eligibility for childcare, and support hundreds of thousands of job seekers”. However, a year later, reports circulated that the MyCity chatbot was generating hallucinations that instructed users to break the law. For example, the MyCity chatbot advised users that landlords are not required to accept tenants on rental assistance (despite NYC Human Rights Law Section 8-101 stipulating that it is illegal to discriminate based on a person’s lawful source of income) and that employers can take a cut of their workers’ tips (despite NY Labor Law Section 196-d and the NY Department of Labor Hospitality Industry Wage Order stipulating that tips are the property of the employee). The MyCity chatbot has since been updated to include a disclaimer that it may occasionally provide incomplete or inaccurate responses.
AI hallucinations can be caused by various factors, including flawed training data, limitations of the AI model, and biases introduced by the AI tool’s developers. Although NYC has taken steps to address such factors, including by enacting Local Law 144, it is difficult to predict what measures will be needed as developers continue to innovate and develop new AI use cases. This ongoing challenge necessitates continuous monitoring and updating of AI systems to ensure their reliability and accuracy.
In 2019, the NY Police Department (NYPD) signed a contract with Voyager Labs for its Voyager Analytics and Genesis tools. While the NYPD has publicly stated that it uses such tools in accordance with its social media policy and does not otherwise use AI in a predictive manner, certain non-profits such as the Surveillance Technology Oversight Project (STOP) have decried the use as “invasive” and “alarming”. The NYPD recently reported that it has explored the use of other AI tools to assist in its policing efforts and acknowledged that it deployed AI technology to assist with the manhunt for Luigi Mangione, a suspect in a homicide committed in NYC on 4 December 2024.
The rapid proliferation of AI in various industries, particularly digital infrastructure, may also stress the energy infrastructure of the State of NY and NYC. According to a 2024 report from the New York Independent System Operator (NYISO), NYC could experience an energy shortfall by 2032 if current electrification plans proceed as expected. Notably, the report does not yet account for the potential energy demands from major AI initiatives due to the undetermined scale of these projects. Addressing the energy requirements for AI projects while adhering to the State of NY’s ESG framework will require innovation and development in AI as well as other industries.
Best Practices for AI Adoption in the Corporate Context
General principles
Businesses are rapidly developing and deploying AI tools, driven in part by third-party investors seeking out AI-driven businesses that can scale swiftly. At the same time, regulations and laws applicable to AI continue to be introduced and adopted. Together, these trends mean that companies may inadvertently expose themselves to liability. To reduce risk exposure stemming from the use of AI, businesses should do the following.
Guidance for legal professionals on AI use
The New York City Bar Association issued Formal Opinion 2024-5 in August 2024. It addresses key points such as:
Most importantly, the opinion outlines how the New York Rules of Professional Conduct (NYRPC) apply to the use of GAI in legal practice. For example, Rule 1.1 requires that lawyers provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation. If applied to a lawyer seeking to use GAI, the opinion explains that the rule would require the lawyer to:
Conclusion
The State of NY has prioritised driving economic growth and technological transformation through the ethical and safe use of AI. The State of NY’s efforts to integrate AI into various sectors demonstrate the potential benefits and the challenges associated with this technology. While AI can streamline processes and provide significant advantages, it also requires careful management and oversight to prevent misuse and ensure accuracy. As AI continues to evolve, the State of NY must remain vigilant and adaptive to address the complexities and ethical considerations accompanying its use.
599 Lexington Avenue
New York
NY 10022
USA
+1 212 848 4000
information@aoshearman.com
www.aoshearman.com