Trends in Artificial Intelligence and the Law: A Texas Focus
Everything’s bigger in Texas, and the size and scope of artificial intelligence (AI) expansion in Texas looks set to follow suit, beginning with OpenAI announcing its “Stargate” data centre (slated to start construction in 2025 in the middle of the state, at a cost of USD100 billion, with completion targeted for mid-2026). Federal and state lawmakers are rushing to keep up with the advance of this radical new technology. Texas officials, meanwhile, are already aggressively targeting AI and social media companies. After suing Meta for allegedly violating Texas law by unlawfully collecting and using the biometric identifiers of millions of Texans without proper consent, the Texas Attorney General entered into a landmark settlement in July 2024 under which the company agreed to pay over USD1 billion and to notify the Office of the Attorney General before deploying technology implicating Texas’s biometric privacy laws.
Steptoe, a law firm with a Houston, Texas office, a national and international footprint, and lawyers well versed in the language of AI, is at the forefront of representing AI companies and companies that use AI as they navigate evolving relationships with state, national and foreign governments and with other industries. The following is a summary discussion of important laws and regulations from these fora, for businesses seeking to understand how AI, and the law surrounding AI, might affect their development.
Texas: Enforcement Actions and Developing Law
Existing law and aggressive enforcement
Texas approaches AI regulation with proactive government enforcement actions alongside the legislature’s ongoing consideration of AI policy. In its 2025 session, the Texas legislature deliberated on the Texas Responsible AI Governance Act (TRAIGA), which proposes a framework for regulating AI systems within the state. A brief summary of existing Texas law provides helpful background.
The Texas Data Privacy and Security Act applies to AI developers and users. It is proactively enforced by the Texas Attorney General, who investigates potential violations, including those by international AI companies. Enforcement actions have focused and will likely continue to focus on data privacy and safety practices, especially those concerning children, protected health information and national security.
Further, the Attorney General’s office has expressed concerns that AI platforms could be used to undermine American AI dominance and to illicitly obtain the data of Texans. For example, in February 2025 Texas Attorney General Ken Paxton initiated a significant investigation into DeepSeek, a Chinese AI company. The investigation centres on potential violations of the Texas Data Privacy and Security Act, with allegations suggesting that DeepSeek might be a proxy for the Chinese Communist Party (CCP). Citing security concerns and the company’s alleged allegiance to the CCP, Attorney General Paxton directed that DeepSeek’s platform be banned from all devices within the Office of the Attorney General. As part of this investigation, the Attorney General issued Civil Investigative Demands (CIDs) to technology giants Google and Apple, requesting their analysis of the DeepSeek application and any related documentation submitted by the company before its app was made available to consumers.
The Texas Attorney General’s office similarly filed a lawsuit against TikTok in January 2025 for allegedly engaging in deceptive marketing practices by representing its app as safe for minors, despite the presence of inappropriate and explicit content. This action followed a previous lawsuit against TikTok for alleged violations of the Securing Children Online Through Parental Empowerment (SCOPE) Act.
In January 2025, Attorney General Paxton likewise launched investigations into character.ai and numerous other social media platforms, focusing on their data privacy and safety practices concerning children under the SCOPE Act and the Texas Data Privacy and Security Act. Additionally, in September 2024, Texas reached a settlement with Pieces Technologies, a healthcare AI company, to resolve allegations of false and misleading statements regarding the accuracy of its AI products. Specific to data security, the Attorney General’s office filed a lawsuit against Allstate in January 2025, alleging the unlawful collection and sale of Texas residents’ driving data without their informed consent.
These enforcement actions by the Texas Attorney General underscore a proactive and assertive approach to applying existing data privacy laws to the realm of AI. The Attorney General’s willingness to scrutinise both domestic and international AI companies signals stringent application of Texas’s existing data security and privacy laws across sectors.
Agency regulation
The widespread adoption of AI raises fundamental ethical and legal considerations related to algorithmic bias, transparency in decision-making processes, and overall accountability for AI systems across various industries – including the law. To address these challenges, the Texas legislature established the Artificial Intelligence Advisory Council in 2023, through HB 2060. The Council was tasked with studying the use of AI within state agencies and with providing recommendations for safeguarding privacy, upholding civil liberties and promoting ethical AI practices. HB 2060 required state agencies to submit regular inventory reports of all automated decision systems that are being developed, used or procured by them. The law required the Council to provide a report to the legislature in December 2024.
Although the Council’s initial term has expired, Texas’s proposed AI legislation (discussed below) would reinstate the Council and direct it, among other things, to continue evaluating AI use by state agencies, to recommend safeguards for privacy and civil liberties, and to identify regulatory barriers to AI innovation.
These directives highlight Texas’s dual goals of curbing improper AI use while simultaneously embracing and encouraging the sector’s development.
The Texas Responsible AI Governance Act (TRAIGA) and its potential impacts
In its biennial 2025 session, the Texas legislature addressed new AI issues through TRAIGA, which proposes a comprehensive framework for regulating AI systems within the state.
The Bill outlines specific obligations for government agencies developing and deploying AI systems. It further sets out “prohibited uses” of AI that cover both commercial and governmental AI models. The Bill prohibits using AI systems “with the intent to unlawfully discriminate against a protected class”, but makes clear that “disparate impact alone” is insufficient to establish discrimination under the law.
Although, under the Bill, the government would be prohibited from using AI to conduct biometric analyses and “social scoring” – that is, the use of AI to evaluate or classify people based on social behaviour – the Bill does not prohibit commercial social scoring. This means the law would permit companies to use personal data for targeted marketing, but would not permit the state to use that data to gauge trustworthiness or propensity for criminal activity.
Notably, TRAIGA would make it unlawful for anyone to “block, ban, remove, de-platform, demonetise, debunk, de-boost, restrict, or otherwise discriminate against a user based on the user’s political speech, or modify or manipulate a user’s content or posting for the purpose of censoring the user’s political speech”. This prohibition would apply to the computer system itself – creating a duty to comply for developers as well as users.
TRAIGA does not, in its current form, provide a private right of action. Instead, TRAIGA empowers the Texas Attorney General to enforce the law, although allegedly non-compliant entities have a “right to cure” within 60 days before the Attorney General brings suit. Uncured violations would be expensive under the Bill, with fines ranging from USD10,000 to USD200,000 for violative uses of AI. Additionally, deployers and developers of AI who continue to violate the Bill would be subject to daily fines ranging from USD2,000 to USD40,000.
Emerging Issues in AI and the Law
Texas is rapidly establishing itself as a hub for the AI industry, attracting significant investment and fostering a thriving tech ecosystem. This growth is evidenced by the increasing establishment of new data centres across the state, particularly within the Dallas-Fort Worth metropolitan area, which is becoming a major centre for AI infrastructure.
The state has witnessed substantial investments in generative AI technologies, and projections indicate a significant increase in AI-related job opportunities in Texas over the next decade. This rapid expansion of the AI industry in Texas presents several emerging legal and regulatory challenges. One of the most pressing concerns is the strain on the state’s energy and water resources due to the immense power and cooling demands of large-scale data centres that support AI operations. The existing energy infrastructure may require significant upgrades, especially in transmission capabilities, to adequately support the growing electricity needs of the AI sector.
The increasing reliance on interconnected digital infrastructure also elevates the risks of cybersecurity threats targeting energy systems and AI operations. Data privacy is a critical concern, given the vast amounts of data collected and processed by AI systems and data centres, necessitating compliance with existing and potential future privacy regulations.
Industry and data ownership impacts
AI is poised to have a profound impact on energy law, influencing regulations, infrastructure and consumption patterns across the energy sector. AI-powered predictive tools already play a crucial role in modernising the energy grid by anticipating and mitigating disruptions caused by extreme weather events or cyber-attacks, thereby enhancing grid resilience and ensuring a consistent power supply. AI is also finding applications in energy contracts, including streamlining procurement processes, ensuring regulatory compliance and enhancing the monitoring of contract performance. In the oil and gas industry, upstream operating companies use AI to predict equipment repair needs and safe drilling locations. In the healthcare industry, insurers already use AI to more efficiently review claims. Further, across other industries, companies use AI to streamline employment decisions.
However, companies seeking to acquire sufficient data to run large language models (LLMs) must at the same time comply with applicable federal and state laws. Different data ownership models come with different levels of risk and control. Some examples are discussed below.
Exclusive data ownership, where a single entity such as a tech company or government controls the data, provides the highest level of protection and control. When an LLM is trained on a company’s own data, the company not only keeps a tighter grip on the outputs generated from its proprietary datasets but also enjoys the strongest protection against inadvertent leakage of sensitive information and trade secrets outside the company. However, centralised ownership is a high-cost approach. Further, if a company’s proprietary data was itself produced by flawed processes, an AI model trained on that data will replicate and entrench the resulting biases. As states continue to develop laws punishing uses of AI that produce biased or discriminatory results, those flaws may pose liability risks in employment and healthcare.
To some degree, centralised ownership supplemented with open-source training data mitigates concerns over biased results, and the threat of enforcement actions that may follow. Although the outputs can become less company-specific, this approach still offers a high level of control for adhering to regulations such as the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA).
Another solution emerging in industry is shared data ownership. In this model, businesses collaborate to share data, diversifying the pool of data on which a model is trained while maintaining high-quality, industry-specific results informed by proprietary data. Under this model, however, clearly defined data governance protocols are essential, as risks related to intellectual property ownership are heightened.
Each of the above models provides some level of control over data security. By contrast, open-access data use offers the least control: parties risk inadvertent disclosure of trade secrets and confidential information. Furthermore, given the potential for inadvertent leaks of private data, this model increases the risk of enforcement actions of the sort discussed above.
The power of LLMs is intrinsically linked to the data they are trained on and interact with. The selection and management of this data are critical for the successful deployment and legal use of these models. A well-defined data strategy is therefore paramount for businesses looking to leverage the capabilities of LLMs effectively and responsibly.
Employment issues
The increasing adoption of AI in various aspects of employment has led to a growing number of legal challenges, both in Texas and across the nation. AI models are being used in hiring decisions for tasks such as resume screening and candidate evaluation, but concerns about algorithmic bias have resulted in significant litigation. Studies have indicated the presence of racial and gender biases in certain AI hiring tools, and lawsuits have been filed against companies such as Workday, alleging that their AI-powered screening systems lead to discriminatory outcomes.
The Equal Employment Opportunity Commission (EEOC) has actively engaged in enforcement efforts to combat discrimination and bias arising from the use of AI in employment processes. The EEOC reached a settlement in a case involving allegations of age discrimination stemming from the use of an AI hiring tool.
Beyond hiring, significant legal challenges arise from the “black box” nature of many AI systems used in employment decisions, making it difficult for employers to understand and explain how these tools arrive at their conclusions. The authors forecast substantial increases in litigation related to these issues, especially as states adopt reporting requirements for companies that use AI in employment decisions.
Copyright and advertising
The legal landscape surrounding copyright issues related to AI-generated content is marked by ongoing debate and considerable uncertainty. A central point of contention is whether works autonomously created by AI can be protected under copyright law, which traditionally requires human authorship. The US Copyright Office and multiple courts have taken the position that purely AI-generated content does not meet the human authorship requirement and is therefore not eligible for copyright protection. The Copyright Office additionally mandates the disclosure of AI’s involvement when applying for copyright registration of works containing AI-generated elements.
Numerous cases have emerged concerning the use of copyrighted materials to train AI models, including high-profile lawsuits such as Thomson Reuters v ROSS Intelligence, The New York Times v OpenAI and actions by music publishers against companies such as Anthropic. Proposed legislation such as the Generative AI Copyright Disclosure Act of 2024 seeks to address these issues by potentially requiring AI developers to disclose the copyrighted works used in training their models, though Congress has yet to enact any such legislation.
The use of AI in advertising and marketing has also drawn legal scrutiny. The Federal Trade Commission (FTC) has issued guidelines to prevent deceptive or unfair practices related to AI in advertising, cautioning against misleading claims about the capabilities of AI products and the failure to disclose when content is AI-generated. The FTC has specifically prohibited the use of fake or AI-generated consumer reviews and testimonials, aiming to maintain the integrity of online marketplaces.
There is also concern about the potential for misuse of AI-generated deepfakes in political advertising, with several states – including Texas – considering regulations or mandatory disclosure requirements to address this issue. For example, in Texas, the legislature considered a statute that would impose criminal penalties for the use of AI in political advertisements without proper disclosure.
Beyond Texas: Data Security, Privacy and AI
In addition to complying with state laws impacting companies’ use of AI, Texas industries must also comply with federal law. The absence of a unified federal data privacy law has led to a fragmented regulatory landscape. Despite bipartisan efforts, attempts to establish a comprehensive federal privacy law have faced legislative hurdles, often due to disagreements on key issues, such as federal pre-emption of state laws and the inclusion of a private right of action for consumers.
Still, several existing federal statutes, although drafted with legacy technology in mind, establish foundational principles for data security and privacy that indirectly impact the development and deployment of new AI technologies. Examples include HIPAA in the healthcare sector and the Children’s Online Privacy Protection Act (COPPA) with respect to children’s data.
The White House under President Biden issued an Executive Order on AI with the goals of establishing safety and security standards, protecting privacy, advancing civil rights and promoting innovation. That order directed various federal agencies to conduct studies and prepare reports on the impact of AI, and in certain instances to issue guidance on the responsible adoption of AI technologies. It also invoked the Defense Production Act (DPA) to mandate that developers of AI foundation models posing risks to national security notify the federal government and share the results of their safety testing. Within the first 100 days of President Trump’s second term, however, the White House rescinded that order and issued a new executive order aimed at removing regulatory barriers to American leadership in AI.
In light of this federal government patchwork, the National Institute of Standards and Technology (NIST) developed an AI Risk Management Framework. Though it has no legal effect, the Framework provides a set of guidelines to assist organisations in managing risks associated with AI across various axes.
In addition to federal law, Texas industries that extend to other jurisdictions may encounter AI-specific legislation impacting their operations. Although few states have comprehensive AI-specific acts, 113 state bills related to AI were enacted into law across 45 states in 2024 alone. The approaches to AI regulation vary significantly across these states, ranging from the expansive frameworks seen in Colorado and California, to sector-specific regulations such as Illinois’s employment-focused rules, to transparency-focused laws such as Utah’s. Texas businesses with a national footprint must remain informed about, and prepared to comply with, AI regulations beyond the state’s borders.
Conclusion
The legal and regulatory landscape surrounding AI is in constant flux, reflecting the rapid advancement of AI technology and ongoing efforts to establish effective governance frameworks. Texas, with its burgeoning AI sector and proactive regulatory stance, is playing a significant role in shaping that landscape. Looking ahead, AI law will undoubtedly continue to evolve as the technology matures and its societal impacts become more fully understood. Ongoing legislative efforts at the state and potentially federal levels, coupled with continued enforcement actions and judicial interpretations, will shape the future of AI governance.
Navigating the intersection of AI and law presents numerous challenges. Balancing the promotion of technological progress with ethical considerations, protecting data privacy and security in the face of increasingly sophisticated AI applications, and grappling with complex questions of intellectual property rights in AI-generated content are all paramount concerns. The diverse approaches being taken by different states across the USA highlight the need for effective counsel who can help companies navigate these ever-changing waters. Steptoe has six US offices – on all three coasts and in Chicago – including an industry-leading office in Houston, as well as offices in Europe and Asia, and its AI-experienced lawyers stand ready to help clients expand into this exciting new technological field and to represent them before governments and in state, federal and arbitral litigation. If you need Texas lawyers to respond to a government AI-related investigation or an AI-driven business dispute, call Steptoe’s Houston lawyers for expert, client-focused representation.
1330 Connecticut Avenue, NW
Washington, DC 20036
USA
+1 202 429 3000
pcastillo@steptoe.com
www.steptoe.com