Texas Enacts the Texas Data Privacy and Security Act, Adding to the Patchwork of US Consumer Privacy Laws
The Texas Data Privacy and Security Act (TDPSA) went into effect on 1 July 2024. As a result, Texas joined a growing number of states that have enacted comprehensive consumer privacy laws. Similar to other states, the TDPSA gives consumers rights to confirm whether their personal data is being processed; to access, correct, and delete that data; to obtain a copy of it; and to opt out of targeted advertising, the sale of personal data, and certain profiling.
Businesses must honour those rights and cannot discriminate against consumers who choose to exercise them. Businesses must also disclose certain information in a privacy policy.
Unlike the California Consumer Privacy Act of 2018, the TDPSA does not create a private right of action; violations are enforced exclusively by the Texas Attorney General.
Texas Adopts New Law to Promote Children’s Privacy
Texas is one of several states that have enacted laws imposing requirements on app stores and app developers related to minors' use of mobile apps. The Texas App Store Accountability Act would require app stores to verify users' ages at account creation and to obtain parental consent before minors can download apps or make in-app purchases. The Act is designed to prevent inadvertent privacy violations caused by inaccurate self-reporting; a child, for instance, would be unable to enter a false age to download an app not intended for minors.
Texas’s law was scheduled to go into effect on 1 January 2026. However, it is currently subject to litigation in the United States District Court for the Western District of Texas. On 23 December 2025, a federal judge issued preliminary injunctions preventing the law from taking effect.
Uncertain Future for Numerous New Texas AI Laws
Texas has passed a number of new laws relating to the use of artificial intelligence, including:
These laws are in limbo, however. On 11 December 2025, President Trump issued an executive order threatening to pull federal broadband funding from states that enact “onerous” AI laws. Texas was awarded USD3.3 billion in federal funding to expand broadband access in the state, and the threat has put lawmakers on both sides of the aisle in conflict with the federal government.
Data-Light AI Lets You Compete With Less Personal Information
The legal community, and general counsel especially, have spent the last decade hearing a single refrain from the business side: more data means better decisions; more data means better personalisation; more data means more growth. Privacy and cybersecurity teams have been the necessary brakes, asking whether the organisation really needs to keep what it collects and whether it can defend those practices when something goes wrong. The question of when data shifts from an asset to a liability has been a battlefield within many organisations, often without a clear answer. That tension is not going away; it is changing form.
AI is accelerating the shift from data as fuel to models as leverage. That shift does not end privacy and cybersecurity risk; it changes what you must govern, what you must document, and what you must be able to explain to regulators, legislators, and fact finders. In a world of AI agents and advanced automation, the most sophisticated organisations will be those that can move quickly with minimal retained personal data, while still proving that their systems are controlled, auditable, and reasonable.
None of this means that AI eliminates the need for data; it does not. Rather, AI changes where data lives, how long it should live there, and what it should be allowed to do. What follows is a flexible approach to privacy and cybersecurity designed for unpredictability: threats evolve faster than policies, and regulations evolve faster than roadmaps. Your governance must be built for change at low cost, without requiring a small army of outside developers to rewrite core controls every time the world shifts.
Texas Biometric Laws Weigh on a Data-Light AI-Driven Approach
AI-driven identity and personalisation strategies often drift toward biometric identifiers because they are convenient for authentication, fraud controls, and user experience. In Texas, the Capture or Use of Biometric Identifier Act (CUBI) creates a direct statutory compliance hook: a person may not capture an individual’s biometric identifier for a commercial purpose without first informing the individual and obtaining consent. The statute also restricts disclosure, requires reasonable care in storage and transmission, and imposes a destruction requirement tied to the purpose for which the biometric identifier was collected. The Texas Attorney General highlights these requirements and emphasises that enforcement sits with the AG and may include civil penalties per violation.
For governance design, this matters because “data light” architectures can unintentionally become “biometric heavy” architectures. A defensible programme therefore treats biometric ingestion as a controlled exception with heightened notice, consent, retention limits, and audit logging, rather than as an incidental by-product of modern AI tooling.
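For illustration only, here is a minimal sketch of such a controlled exception in Python. The request fields, gate logic, and audit structure are hypothetical assumptions, not requirements drawn from CUBI or any particular system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class BiometricRequest:
    subject_id: str
    purpose: str            # documented commercial purpose
    notice_given: bool      # individual informed before capture
    consent_recorded: bool  # affirmative consent on file
    retention_days: int     # destruction deadline tied to the purpose

def authorise_capture(req: BiometricRequest, audit_log: list) -> bool:
    """Treat biometric ingestion as a controlled exception: refuse unless
    notice, consent, and a purpose-bound retention limit are all present,
    and log every decision for later audit."""
    approved = req.notice_given and req.consent_recorded and req.retention_days > 0
    now = datetime.now(timezone.utc)
    audit_log.append({
        "subject": req.subject_id,
        "purpose": req.purpose,
        "approved": approved,
        "destroy_by": (now + timedelta(days=req.retention_days)).isoformat(),
        "logged_at": now.isoformat(),
    })
    return approved
```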
The New Premise: You Cannot Forecast the Next Threat, Statute, or Theory of Liability
Most compliance programmes assume the future will look like the past. That assumption is now the liability. Cybersecurity threats mutate weekly. AI systems change behaviour as prompts, plugins, tools, and downstream integrations evolve. Meanwhile, lawmakers are moving toward risk-based AI rules that require impact assessments, disclosures, and documented mitigation, even when the technology stack is still in flux.
The winning move is not to hard-code today’s requirements into brittle systems. The winning move is to build an adaptable governance and technical architecture that lets you change quickly, prove what you did, and explain why it was reasonable. That is the posture that translates across regulators and juries, even when statutes and threats mutate.
The Core Idea: AI Can Reduce the Need to Retain Detailed Personal Data, but It Cannot Eliminate Accountability
Instead of retaining detailed consumer data, create virtual consumer avatars, score interactions, and use that scored experience to test marketing and deliver personalisation, while retaining only contact information and regulatory essentials. The strategic objective is clear: minimise retained personal data while preserving the ability to learn, optimise, and personalise.
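As a simplified sketch of that idea, the Python below folds event-level records into aggregate avatar scores and never persists the raw events; the field names and scoring are hypothetical assumptions, not a description of any production system.

```python
from collections import defaultdict

def build_avatars(raw_events):
    """Fold raw interaction events into per-contact, per-segment scores.
    Only contact information and the aggregate score are retained;
    the event-level records are discarded after scoring."""
    totals = defaultdict(lambda: {"n": 0, "sum": 0.0})
    for event in raw_events:  # e.g. {"email": ..., "segment": ..., "engagement": 0.8}
        key = (event["email"], event["segment"])
        totals[key]["n"] += 1
        totals[key]["sum"] += event["engagement"]
    return {key: round(v["sum"] / v["n"], 3) for key, v in totals.items()}

avatars = build_avatars([
    {"email": "a@example.com", "segment": "offers", "engagement": 0.8},
    {"email": "a@example.com", "segment": "offers", "engagement": 0.4},
])
# {("a@example.com", "offers"): 0.6} -- the underlying events are gone.
```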
This is directionally aligned with data minimisation principles. Regulators, including the FTC, have framed good security and privacy as collecting only what you need, keeping it safe, and disposing of it securely. That is both a compliance theme and a litigation risk theme. The less you retain, the less you lose in a breach and the less you must defend in discovery.
But legal counsel for an organisation should also see the trap. Even if you delete raw event-level consumer data, you may still be “processing” personal data through models, derived profiles, and inferences. Many regimes treat inferences as personal data if they can be linked to a person, even indirectly. AI does not make personal data disappear. It changes its shape. That is why a “data-light” strategy must be paired with governance that addresses profiling, transparency, discrimination risk, and security of the model life cycle.
So the practical answer is to refine the avatar concept into a defensible privacy-enhancing architecture. You can reduce retention while keeping learning and personalisation, but you have to do it in a way that is explainable, auditable, and adaptable.
Privacy and Cybersecurity in a “Model-First” World: What Changes, What Stays
The most important change is that models become regulated assets. The NIST AI Risk Management Framework frames AI risk management as a socio-technical discipline with core functions to govern, map, measure, and manage. It explicitly calls for accountable and transparent systems, explainability, privacy enhancement, and security and resilience. Those are not optional qualities if you want to survive regulatory scrutiny.
The second change is that privacy programmes cannot stop at data inventories and retention schedules. You now need model inventories, training data provenance, documentation of intended use, and monitoring of drift and downstream integrations. The EU AI Act’s emphasis on documentation, transparency, and risk categorisation is consistent with this trajectory. In many organisations, the best privacy policy in the world will fail if the model supply chain is a black box.
The third change is that cybersecurity must extend to the AI life cycle. Traditional security focuses on networks, endpoints, and data stores. AI introduces new attack surfaces, including prompt injection, model inversion, data poisoning, and supply chain compromises through third-party components. Even when using privacy-preserving approaches like federated learning, literature recognises that model updates can leak sensitive information without additional protections.
What stays the same is the core legal narrative. Reasonable care. Foreseeability. Good faith controls. Truthful disclosure. If you can show that your organisation collected only what it needed, protected what it kept, disposed of what it did not need, and governed AI decisions with documentation and oversight, you are building a posture that survives both regulators and juries.
Flexibility as a Compliance Strategy, Not a Buzzword
An organisation cannot hard-code privacy and cybersecurity into rigid systems, because future threats and regulations cannot be predicted. The next wave of enforcement will reward organisations that can adjust quickly and prove they did so responsibly. The strategy is to design flexibility into both policy and technology.
This approach highlights the need for an AI governance programme that includes safety testing, documentation, and harm mitigation, and points to standards frameworks such as NIST and ISO as part of compliance thinking. That is consistent with what regulators are asking for: not perfection, but a programme.
Flexibility is also about cost and speed. A programme that requires outside contractors to make routine changes is not agile; it is a bottleneck. The operational goal should be to empower internal owners to adjust retention schedules, change model routing rules, update prompts and guardrails, and change vendor integrations without a rebuild. This is governance through configuration, not governance through heroic engineering projects.
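As a purely illustrative sketch of governance through configuration, the Python below shows the kinds of controls an internal owner might edit without a rebuild; every name and value is a hypothetical assumption.

```python
# Policy lives in configuration; code only reads it. Changing a retention
# period, a model route, or a guardrail is an edit here, not a rebuild.
GOVERNANCE_CONFIG = {
    "retention_days": {
        "marketing_events": 30,
        "contact_info": 1095,
    },
    "model_routing": {
        "consumer_chat": "model_b",   # swap vendors by changing one value
        "fraud_scoring": "model_a",
    },
    "guardrails": {
        "blocked_inputs": ["biometric_identifier", "precise_geolocation"],
        "require_human_review": ["credit_decisions"],
    },
}

def route_model(task: str) -> str:
    """Resolve which model serves a task from configuration alone."""
    return GOVERNANCE_CONFIG["model_routing"][task]
```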
Portability and Vendor Independence
Vendor lock-in is not only a procurement problem; it becomes a privacy and cybersecurity problem the moment your vendor’s model behaviour changes, your risk tolerance changes, or a regulator asks you to switch approaches quickly. If you cannot move your data, your features, and your model outputs to another system, you have lost operational control.
A defensible programme therefore includes data and algorithmic portability. At a minimum, you should be able to export your feature definitions, model documentation, and decision logs in usable form. You should know which parts of your pipeline are proprietary and which parts are replaceable. You should also be able to demonstrate that you evaluated vendor risk and monitored vendor changes as part of your governance function.
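A minimal sketch of what portable record-keeping might look like follows, assuming a vendor-neutral JSON Lines file for decision logs; the schema is an assumption for illustration, not a standard.

```python
import json
from datetime import datetime, timezone

def log_decision(path: str, model_id: str, inputs: dict, output, rationale: str) -> None:
    """Append one automated decision to a JSON Lines log that can be
    exported, replayed, or handed to a successor vendor in usable form."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,    # ties the decision back to the model registry
        "inputs": inputs,
        "output": output,
        "rationale": rationale,  # human-readable basis for the decision
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```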
Visibility is equally important. In an agent ecosystem, your systems interact with vendor agents and customer agents in ways that can create emergent behaviour. If you cannot see those interactions and reconstruct them later, you cannot explain them. Under regimes emphasising transparency and accountability, that lack of visibility becomes a compliance failure, not merely a technical inconvenience.
What Is Your Story: Can You Explain Your AI Programme to a Regulator, Legislator, or Judge?
The most persuasive compliance posture is a coherent story backed by evidence. Your story should be consistent across privacy, cybersecurity, and AI governance. It should also be consistent across technical and legal audiences. The NIST AI RMF is useful here because it gives you a vocabulary for governance, risk mapping, measurement, and management that can be translated into board-level reporting and into regulator-facing documentation.
A credible story typically includes the following elements:
These elements track directly to themes in the Colorado AI Act around reasonable care, documentation, and impact assessments, and to the EU AI Act’s emphasis on documentation and transparency. They also track to the FTC’s framing of security as collecting only what you need, protecting it, and disposing of it securely.
The point is not to memorise a framework. The point is to build an operational system of proof. When the question comes, you answer with documents, logs, testing artifacts, and governance records, not with aspiration.
Practical Guardrails for the “Data-Light AI” Future
Below are guardrails that high-level counsel can insist on without pretending to be engineers. They are principles that translate into controls, and they can be audited.
Guardrail 1: Collect less, keep less, and prove it
Create retention schedules that are actually enforced technically, not just written. Build deletion into pipelines and verify it. Use the FTC’s “scale-down” logic as a simple executive narrative: keep only what you need and dispose of the rest securely.
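A minimal sketch of technically enforced retention follows, assuming each record carries a creation timestamp and a data class mapped to the schedule; the classes and periods are hypothetical.

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = {"marketing_events": 30, "support_tickets": 365}

def purge_expired(records):
    """Delete records past their retention deadline and return proof:
    the surviving records plus a purge count that can be logged and
    later shown to an auditor."""
    now = datetime.now(timezone.utc)
    kept, purged = [], 0
    for rec in records:
        limit = timedelta(days=RETENTION_DAYS[rec["data_class"]])
        if now - rec["created_at"] > limit:
            purged += 1  # in production: hard-delete, then verify deletion
        else:
            kept.append(rec)
    return kept, purged
```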
Guardrail 2: Treat models and prompts as regulated assets
Maintain a registry of models, prompts, tools, and agents used in production. Document intended use and prohibitions. Require change control for material updates, especially when outputs drive consumer interactions or decisions.
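For illustration, a registry entry for such assets might look like the sketch below; the fields are assumptions that mirror the documentation described above, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class RegisteredAsset:
    asset_id: str          # e.g. "fraud-model-v3" or "support-prompt-v12"
    asset_type: str        # "model", "prompt", "tool", or "agent"
    intended_use: str      # documented, approved use
    prohibited_uses: list  # explicit prohibitions
    owner: str             # accountable internal owner
    change_log: list = field(default_factory=list)

    def record_change(self, description: str, approver: str) -> None:
        """Material updates require a named approver: lightweight change
        control that leaves an auditable trail."""
        self.change_log.append({"change": description, "approved_by": approver})
```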
Guardrail 3: Require explainability that matches the risk
Not every system needs deep interpretability. High-risk and consequential contexts do require a stronger ability to explain outputs, document inputs, and show oversight. Risk-based governance is the theme across modern AI regulation and frameworks.
Guardrail 4: Build portability and exit options into procurement
Contract for access to logs, documentation, and exportable artifacts. Avoid architectures where you cannot migrate feature stores or decision logic. Portability is not a luxury in an enforcement environment.
Guardrail 5: Use privacy-enhancing technologies where they reduce exposure without erasing accountability
Differential privacy, federated learning, secure computation, and encryption can reduce exposure, but they do not eliminate governance needs. Use them as part of a measured programme, and document why you used them and what tradeoffs you accepted.
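As a toy illustration of one technique named above, the sketch below adds Laplace noise to a count, the core move in differential privacy; the epsilon value is illustrative, and choosing and documenting it is exactly the tradeoff decision this guardrail describes.

```python
import random

def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    """Return a noisy count satisfying epsilon-differential privacy for a
    counting query (sensitivity 1). The difference of two exponential
    draws is Laplace-distributed with scale 1/epsilon; smaller epsilon
    means more noise and stronger privacy."""
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise
```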
The Competitive Advantage Is Not Data; It Is Governed Adaptability
AI and agents will reduce the business need to retain detailed personal data in many contexts, and they will expand the ability to test markets and personalise experiences. The legal and security risks do not disappear. They relocate to models, to supply chains, to explainability, and to governance.
A winning organisation will be able to say, with evidence, that it designed for minimal retention, strong security, and rapid change. It will be able to show that it can modify policy and technical controls quickly without calling outside contractors for every change. It will be able to explain to a regulator or jury how the system worked, what was foreseeable, and why the choices were reasonable. That is the “one story” approach, and it is the most durable strategy you can buy in an unpredictable world.
2626 Cole Avenue
Suite 300
Dallas
Texas 75204
USA
+1 (214) 263 7500
myarbrough@buchalter.com
www.buchalter.com/