AI and Privacy: What Is Next for India?
Introduction
India is widely regarded as being in its “techade”, where technology is expected to drive economic growth and business efficiency. Artificial intelligence (AI) is already playing a central role in this shift. Indian businesses are no longer testing AI at the margins but are actively using it across core operations to improve productivity, manage risk and scale faster. Industry reports (eg, the Nasscom AI Adoption Index) show that a significant number of Indian enterprises now have AI tools deployed in sectors such as banking, healthcare, retail, manufacturing and technology services. At the same time, this rapid adoption raises concerns about its impact on human employment.
For businesses, the attraction of AI lies in its ability to process large volumes of data quickly and turn it into useful actionables. AI is used to detect fraud, predict demand, personalise customer interactions, automate internal processes and support management decisions that would otherwise require substantial time and human effort. In consumer-facing sectors, AI powers user algorithms, shopping indices, targeted advertising and recommendation engines, which influence what customers see, buy and engage with.
This rapid adoption of AI has made data a critical business asset. Most AI systems depend on continuous access to data, including customer information, behavioural patterns and, sometimes, sensitive personal details. As a result, questions around how data is collected, used and safeguarded are no longer a purely legal issue but are also a concern for boardrooms, since they directly affect trust and brand value.
As AI becomes central to business growth and efficiency, organisations must focus not only on what AI can do but also on how it is used. This requires clear internal oversight, thoughtful data practices and an awareness of various privacy risks that arise from large-scale data use. Going forward, businesses that balance innovation with care and responsibility will be better suited to grow, build customer trust and stand out in the market. In the long run, trust and responsible AI use will be key to becoming a leader in the AI market.
AI Adoption Across Key Business Sectors in India
Banking and financial
AI is a widely used technology in the banking and financial services sector – statistics indicate that close to two thirds of banks and financial institutions operating within India have already integrated AI into their operations (Nasscom AI Adoption Index).
Banks have been developing their own “generative AI”; initially, its usage was restricted to customer services. Various chatbots have now been incorporated for rendering customer services and providing financial advice. Some banks also use AI-based automation and voice recognition tools to perform compliance checks, data entry, know-your-customer (KYC) verification, email responses and other customer support services.
AI is also now being widely used in fraud detection to spot unusual patterns in banking activity, such as rapid micro-transactions or logins from suspicious devices. The Reserve Bank of India recently launched the Digital Payments Intelligence Platform (DPIP), which harnesses AI to detect fraudulent transactions and prevent fraudulent payments across the Unified Payments Interface (UPI). Banks and non-banking financial companies (NBFCs) use AI tools such as Feedzai and Verafin (available in the public domain) to secure payments and stop fraudulent transactions, and for identity verification, scam prevention, account monitoring, and the detection of malware and phishing attempts. These systems help reduce manual oversight, speed up decision-making and improve the accuracy of risk assessments.
Healthcare and Pharmaceuticals
Hospitals and healthcare systems in India are using AI to support diagnostic imaging, clinical decision support, predictive patient risk profiling, personalised treatment recommendations and administrative automation. Leading healthcare networks are deploying AI tools such as Dax Copilot and Storyline AI (available in the public domain) to transcribe clinical notes, generate discharge summaries, support telemedicine and assist with diagnostic workflows, helping address rising patient volumes.
AI has emerged as one of the most powerful tools in the pharmaceutical industry as well. AI models such as AlphaFold2 (AF2) and NovaFold AI (available in the public domain) are used for protein structure prediction, molecular design, drug design, etc. This allows potential drug targets to be identified more efficiently, which helps prioritise compounds for experimental testing. Beyond discovery, AI also predicts patient eligibility and helps with strategising dosage to identify safer candidates for such experimental testing, reducing the need for extensive animal testing.
Manufacturing
Manufacturing and industrial sectors are increasingly adopting AI to improve operations and reduce downtime. AI-enabled systems support predictive maintenance of equipment and quality control through automated defect detection and optimisation of production schedules. AI is also at the heart of robotics and automation. A widely known example is Tesla’s Optimus humanoid robot, which has been developed to carry out tasks such as lifting and assembly, or work that may otherwise be dangerous for humans. While AI adoption in manufacturing currently trails behind banking and healthcare, it is increasing as companies see the benefits of AI in reducing waste and enhancing supply chains.
Retail
Consumer goods and retail businesses are using AI for consumer insights, personalised marketing, etc. Retailers use algorithms to analyse large volumes of sales and consumer data, tailoring promotions for different customer segments. Similarly, consumer goods companies are embedding AI into product development to respond swiftly to changes in consumer preferences.
Marketing
In marketing, AI is no longer optional – it is a core capability. Smart data analytics, targeted campaigns, content creation, ad creation, research and behavioural analysis of customers would be impractical at today’s scale without AI. Various AI tools have been deployed to execute strategies within minutes rather than weeks.
Technology and Telecommunication
In this sector, AI plays a crucial role in enhancing efficiency, reliability and customer experience. Telecommunication companies use AI to optimise network performance by predicting traffic patterns, managing bandwidth and detecting faults, thereby preventing service disruptions. On the customer side, AI powers chatbots and virtual assistants, handles service queries, troubleshoots issues and personalises plans based on usage patterns.
Facial recognition technology uses computer vision and machine-learning algorithms to identify or verify individuals based on their facial features. By analysing patterns including the distance between the eyes, facial contours and unique facial landmarks, AI tools can accurately match faces against stored databases. This is being used in areas such as smartphone authentication, surveillance and security, access control and digital payments. In workplaces, facial recognition is used for device authentication and access control, in retail environments for customer analytics, and in digital services for onboarding and verification.
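The matching step described above can be illustrated with a simplified sketch: a probe embedding is compared against stored templates and accepted only if the best similarity score clears a threshold. The identities, three-dimensional vectors and threshold below are purely hypothetical (real systems use embeddings of 128 or more dimensions and vendor-tuned thresholds).

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_face(probe, enrolled, threshold=0.8):
    """Return the enrolled identity whose stored template is most
    similar to the probe embedding, if it clears the threshold."""
    best_id, best_score = None, -1.0
    for identity, template in enrolled.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= threshold else None

# Illustrative, hypothetical templates (not real biometric data)
enrolled = {"alice": [0.9, 0.1, 0.3], "bob": [0.1, 0.8, 0.5]}
print(match_face([0.88, 0.12, 0.31], enrolled))  # prints "alice"
```

The threshold embodies a privacy-relevant design choice: set too low, the system produces false matches; set too high, it fails legitimate users and invites over-collection of additional samples.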
Apart from the above, generative AI is specifically used across businesses for reviewing documents, extracting key information, summarising documents, drafting and interpreting legal clauses, obtaining a nuanced understanding of language and context, applying business logic, sending automated emails, linking information, etc.
While the adoption of AI across key business sectors offers significant potential for advancement, it is also accompanied by a number of drawbacks.
High-Risk AI Applications in Business Practices
AI-related privacy risks can be understood better when examined through real-world business use cases. Certain AI applications warrant closer scrutiny as they are both widely used by businesses and inherently high-risk from a privacy perspective. These risks can be divided into two broad categories:
Technological risks
Generative AI
Generative AI poses significant privacy risks across business sectors, as it relies on vast datasets that may contain sensitive or personal information. This increases the risk of data leakage or unintended disclosure through generated outputs.
In business environments, generative AI can inadvertently reproduce confidential data, trade secrets or personally identifiable information if the underlying training process is not properly governed. Prompts containing proprietary or customer data, entered by employees to generate business ideas or logic, may persist in model outputs or chat histories visible to other users, thereby exposing confidential information. Additionally, limited transparency around how generative AI models store, learn from and retain data makes privacy compliance more complex.
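One practical control against the prompt-leakage risk described above is to mask direct identifiers before a prompt leaves the organisation for a third-party model. The sketch below is a minimal, hypothetical illustration using two regular-expression patterns; production deployments would rely on far broader PII-detection tooling.

```python
import re

# Illustrative patterns only; real deployments use broader PII detection.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
}

def redact_prompt(prompt: str) -> str:
    """Mask common direct identifiers before a prompt is sent to an
    external generative AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact_prompt("Draft a reply to priya@acme.in, phone +91 98765 43210."))
# prints: Draft a reply to [EMAIL], phone [PHONE].
```

Redaction of this kind reduces, but does not eliminate, exposure: context surrounding the masked tokens can still reveal confidential business logic, which is why such filters complement rather than replace vendor contracts and usage policies.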
Facial recognition technology
Businesses adopt facial recognition because it reduces friction, lowers dependency on passwords and enables rapid identification at scale – but they face privacy risks arising from the nature of the data itself. Facial data is deeply personal, permanently linked to an individual and cannot be replaced if compromised. Once it is collected, misuse or leakage can carry long-term consequences.
Most privacy issues in facial recognition arise from unclear purpose definition and over-collection of data. Risks are significantly reduced when organisations clearly specify where and why facial data is used, restrict its application to narrowly defined functions and avoid repurposing it across unrelated systems. Meaningful opt-in mechanisms, rather than default or passive consent, are critical.
From a technical standpoint, privacy risks can be reduced by converting facial images into encrypted biometric templates, keeping the data only for as long as necessary and avoiding centralised storage. Regular testing for accuracy and bias is equally important. In practice, however, the greatest reliance is placed on internal policies. These should clearly define acceptable and prohibited uses and prevent the gradual expansion of facial recognition into areas where it was never intended.
Sectoral risks arising from the use of AI
Lending and banking operations
Banks’ and NBFCs’ reliance on AI, if implemented without precautions, can have long-term consequences for customers: these institutions handle some of their customers’ most sensitive data, and even a slight oversight can expose customers to cyberthreats. At the same time, AI’s ability to accelerate banking and credit decisions is a major attraction for these institutions and their customers alike.
Most privacy issues in this sector emanate from similar concerns surrounding the hoarding and use of data, especially credit and banking details. Using these datasets beyond the purpose for which they were retained – for example, to train AI models – can result in bias or, worse, in such data falling into the wrong hands. Data minimisation and purpose limitation – the two core principles of data privacy – must be applied here too, to ensure that such risks are nipped in the bud.
Healthcare and pharmaceuticals
AI tools used in the healthcare sector for processing patient data rely on highly sensitive information and directly affect individuals. As already discussed, the benefits of AI in the healthcare sector are not negligible. Contributions this significant also come with their own risks, such as data breaches, algorithmic bias, and uncertainty around consent and the extent of its application. Data anonymisation can be a starting point for training AI models, ensuring that the model is fed enough data to learn effectively while raw patient data is not put at risk.
Since most consumers are unaware of the extent to which their data is used, it is imperative that such extent be clearly delineated and explained to them so that they are in a position to give informed consent, and that businesses do not lack accountability when questioned.
The AI-Privacy Gap in Business Operations
The AI-privacy gap in business operations does not arise because organisations are unaware of privacy risks but because AI systems fundamentally change how data is collected, combined and reused within organisations. Unlike traditional IT systems, AI models are rarely limited to a single dataset or a single purpose. Businesses routinely feed AI systems with data drawn from consumer interactions, transaction histories, third-party datasets and publicly available information to improve accuracy and performance. Over time, this creates complex data chains that are difficult to map, monitor and control.
One of the most significant gaps appears at the point of data aggregation. Businesses often collect data for a defined operational purpose such as customer onboarding, service delivery or compliance, but later reuse the same data to train or fine-tune AI models for analytics, personalisation or prediction. While this may make commercial sense, it creates uncertainty around whether individuals are aware of, or expect, such secondary uses.
A closely related issue is the use of third-party and cloud-based AI tools. Many Indian businesses rely on external platforms for generative AI, analytics, facial recognition or customer engagement. While these tools offer speed and scalability, they often operate as “black boxes”. Businesses may have limited visibility into how prompts, inputs or datasets are stored, whether they are retained for model improvement, or how long they persist across systems. This lack of transparency increases the risk of unintended data exposure, especially when sensitive personal or commercial information is used in routine business workflows.
The AI-privacy gap is further widened by the dynamic nature of AI systems. Unlike static software, AI models evolve as they are trained on new data or adapted for new use cases. Insights generated by AI, such as behavioural predictions or risk scores, may go beyond the data that was originally collected. In many cases, even development teams may not fully anticipate the types of inferences that AI systems can generate once deployed at scale. This creates challenges for businesses trying to align AI outputs with governance standards.
Importantly, this gap is not unique to India but is amplified by the speed of adoption. Industry reports (Deloitte Asia Pacific survey) consistently show that Indian businesses are among the fastest adopters of AI and generative AI globally, often outpacing the maturity of internal governance frameworks. In many cases, business value is realised faster than risk controls can be put in place. As a result, governance mechanisms tend to mature only after systems are already in use, increasing exposure to privacy and reputational risks.
In India, this AI-privacy gap is in part shaped by the Digital Personal Data Protection Act, 2023 (DPDPA), which operates as an indirect but influential constraint on how businesses deploy AI. While the DPDPA does not regulate AI systems as such, it governs how personal data used by organisations – including data processed through AI tools – is collected, processed, stored and shared. In practice, this means that businesses remain responsible for the downstream effects of AI-driven data processing, regardless of whether AI systems are developed internally or sourced from third-party vendors.
Alongside the DPDPA, India’s emerging AI governance approach relies heavily on policy principles rather than binding rules. The “India AI Governance” framework emphasises responsible use, transparency, fairness, safety and human oversight, signalling how regulators and courts are likely to evaluate AI deployment in the future. For businesses, this creates a mixed landscape. On one hand, companies have the freedom to design AI governance frameworks that align with their specific use cases and risk profiles. On the other hand, the absence of clarity makes it difficult to assess long-term compliance exposure. Businesses must therefore operate with the understanding that today’s voluntary guidelines may form the basis of tomorrow’s regulatory benchmarks.
Managing AI Risks in an Interim Governance Landscape
The absence of a concrete AI regulation shifts the onus onto organisations to navigate an interim governance landscape where risk management must be driven internally. For many businesses, this has shifted the focus from formal compliance to internal operational structures. AI systems are increasingly embedded across customer-facing functions, internal decision-making and core commercial processes, making it essential to evaluate not just whether AI is used but also how it is used and monitored over time.
Contractual and technical safeguards play a critical role here. Businesses must ensure that agreements with technology providers and service vendors address data protection responsibilities, security standards and limitations on data reuse.
As already mentioned, internal policies also play a central role in this phase. Businesses are increasingly expected to articulate their own standards for responsible AI use. Clear internal guidance on acceptable applications, human oversight and escalation mechanisms helps to ensure that AI systems support business objectives without creating unmanaged privacy or ethical risks. These measures allow organisations to operate responsibly while remaining adaptable to future legal developments in AI.
Corporate AI Guidelines
Given that Indian businesses are increasingly relying on generative AI across functions such as content creation, analytics, customer engagement and internal productivity, the privacy and governance risks associated with these systems have become particularly pronounced. Generative AI is associated with risks such as violations of user privacy, infringement of user rights, loss of trade secrets and breach of contractual privacy clauses.
It is therefore imperative that every company develop an internal AI policy. This policy must clearly delineate permitted uses of AI and expressly prohibit impermissible applications. Informed and responsible use of AI should be encouraged in order to increase efficiency and innovation, and the risks associated with AI should be explained in the policy. A list of approved AI tools must be circulated and a formal process for approving additional tools must be established. The privacy policy of any AI tool should be thoroughly reviewed before its use is approved.
Most privacy risks are best managed through deliberate design choices. Wherever possible, organisations can limit exposure by removing direct identifiers, using anonymised or pseudonymised datasets and adopting training methods that do not require raw data to leave secure environments. Clear communication with users about how their data is used, combined with consent processes that are genuinely understandable rather than buried in documentation, plays a critical role in building trust. These measures must be reinforced through strong internal governance such as the policy guideline itself, routine audits of AI systems and strict controls on data sharing, particularly when working with external technology vendors.
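The removal of direct identifiers mentioned above can be sketched with a simple pseudonymisation step: replacing a customer identifier with a keyed, non-reversible token before the record enters an analytics or AI pipeline. The key name, record fields and 16-character token length below are hypothetical choices for illustration only.

```python
import hmac
import hashlib

# Hypothetical secret key; in practice this would live in a key vault,
# held separately from the pseudonymised dataset, so the mapping can be
# governed (or destroyed) independently of the data itself.
SECRET_KEY = b"example-key-held-outside-the-dataset"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible token.
    The same input always yields the same token, so records can still be
    linked for analytics without exposing the raw identifier."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"customer_id": "IN-98231", "email": "user@example.com", "spend": 4200}
safe_record = {
    "customer_token": pseudonymise(record["customer_id"]),
    "spend": record["spend"],  # the email, a direct identifier, is dropped entirely
}
print(safe_record)
```

Note that pseudonymised data of this kind generally remains personal data under data protection law, since the key holder can re-link tokens to individuals; the technique reduces exposure rather than removing the data from the regulatory perimeter.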
Conclusion
Fundamentally, applying India’s existing legal framework to AI-driven business activity requires documentation and adaptability. Businesses that proactively embed privacy and AI governance into their operational culture are better positioned to respond to regulatory change. Companies must proactively interpret privacy obligations, embed AI governance into internal decision-making and adopt self-regulatory practices that address risks not yet fully captured by law.
The risks associated with AI do not arise from its use alone but from the absence of clear internal boundaries around how data is collected, processed, shared and retained. The most effective risk management strategies, therefore, lie in shaping how AI is designed and governed within organisations.
A recurring lesson across AI risk management practices is that less invasive systems are often the most sustainable. Transparency in how AI tools function, restraint in data collection, meaningful consent and clear opt-out mechanisms consistently reduce privacy concerns. Techniques such as anonymisation, restricted data retention, decentralised training models and limited reuse of data will allow organisations to retain the benefits of AI while considerably reducing the chances of misuse and leaks.
However, it is also important to acknowledge the trade-off between data minimisation and performance. In many cases, AI systems that rely on richer and more diverse datasets can deliver more accurate and reliable results. Expecting high-quality results from severely data-restricted systems is often unrealistic. The real challenge, therefore, is not to eliminate data use but to be deliberate about it. Organisations must focus on collecting only data that is relevant to the task at hand.
Equally important is the role of organisational AI policies. Without clearly articulated internal rules, AI systems tend to expand beyond their original purpose, incorporate new datasets and be used in ways that were never intended. Corporate AI guidelines that define permitted use cases, restrict high-risk applications, vet third-party tools and set minimum transparency standards act as practical safeguards.
Ultimately, reducing AI-related privacy risks is less about reactive controls and more about organisational discipline. Until clearer and more comprehensive regulation comes into the picture, the balance between innovation and self-regulation will remain central to how businesses responsibly leverage AI.
4/2 Millers Road
Level 3
Bangalore 560052
India
+91 80 4377 9955
admin@sdlaw.co.in www.sdlaw.co.in/