Artificial Intelligence in India: An Introduction
Ranked by Stanford University's Global and National AI Vibrancy Tool as one of the top four countries leading in AI (along with the US, China and the UK), India is a force to be reckoned with in shaping the global AI economy. The 2024 nasscom-BCG report indicates that the AI market in India is likely to achieve a 25–35% compound annual growth rate through 2027, growing from USD7–9 billion in 2023 to USD17–22 billion by 2027. India is also the tenth largest economy globally in AI funding according to Stanford's Global AI Index 2024, which reveals that India received nearly USD1.4 billion in private investments in 2023. The government has introduced several initiatives under the IndiaAI Mission, one of which is to build indigenous foundational AI models trained on Indian datasets that align with global AI standards while addressing the unique challenges and opportunities of the Indian context.
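As a rough back-of-the-envelope check (our calculation, not a figure from the report), the projected range is consistent with the stated growth rate: applying the lower-bound 25% compound annual growth rate to the 2023 estimates over the four years to 2027 gives approximately USD17 billion and USD22 billion respectively.

\[
7 \times 1.25^{4} \approx 17.1, \qquad 9 \times 1.25^{4} \approx 22.0 \quad \text{(USD billion)}
\]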
While the rapidly evolving AI landscape presents an incredible opportunity, it also poses significant risks, including vulnerabilities and unresolved issues under existing laws and regulations. At present, India does not have a specific or overarching law for AI regulation; accordingly, existing legal frameworks apply to different AI applications and outcomes. So far, the government has maintained that it does not intend to hinder innovation by overregulating this sector; rather, it intends to examine the gaps in the existing legal framework and introduce corresponding AI-related measures in the proposed legislation, the “Digital India Act”, to strengthen India’s legal and regulatory framework for AI.
Nonetheless, organisations and businesses will have to take current laws and regulations into consideration to address the potential risks and vulnerabilities of using AI. This article outlines recent trends and developments in India relating to AI technologies.
Emerging AI litigation landscape
Civil litigation seeking to enforce private rights against violations involving AI is increasing, as is litigation seeking government action to regulate the use of AI, particularly deepfakes.
Civil litigation
Copyright infringement
Copyright infringement claims are being litigated in several countries globally, including India. India’s first and only AI-related copyright infringement action to date involved leading news publisher ANI Media (www.aninews.com) suing OpenAI before the Delhi High Court in 2024.
ANI alleged that OpenAI’s training process for its large language models (LLMs) illegally used ANI's copyrighted content through web scraping, both directly from its website and through its subscribers’ licensed reproductions. ANI's claims centred on three main issues:
In addition to challenging the jurisdiction of the Indian courts on the ground that it has no direct operations in India and that no storage or training takes place within India (as OpenAI’s servers are situated abroad), OpenAI argued that the non-expressive storage or use of copyright-protected works for training does not amount to copyright infringement but is rather transformative, likening its model’s learning process to human learning from a textbook. OpenAI has, however, removed ANI’s pages from its training data as part of its opt-out process.
This case has attracted significant attention, with several interventions by rights-owner organisations supporting ANI (such as the Federation of Indian Publishers, the Digital News Publishers Association and the Indian Music Industry) and by an AI start-up (Flux Labs AI) supporting OpenAI. Two amici curiae (one from academia and one a practising lawyer) have been appointed to assist the court. They have taken contrasting views on many issues, although both agree that the use of copyrighted works for training is not infringement per se, as it only extracts unprotectable ideas, facts and the like.
Arguments on an interim injunction are currently ongoing before Justice Amit Bansal of the Delhi High Court.
Personality rights violations
Personality rights litigation has emerged as the most prominent category of AI-related cases in India. These cases primarily involve celebrities seeking protection against the unauthorised use of their name, image, voice or other personal attributes in AI-generated content. The Delhi High Court has played a pivotal role in recognising and enforcing personality rights in the context of AI.
In Anil Kapoor v Simply Life India & Ors (2023), the Delhi High Court recognised that technological tools, including AI, make it possible for unauthorised users to imitate a celebrity’s persona. Justice Prathiba Singh noted that the use of AI, machine learning, deepfakes and face morphing to create unauthorised videos or images for commercial gains constitutes a violation of personality rights. The court granted an ex parte ad interim injunction and issued directions to:
This judicial recognition of personality rights in the AI context has been further reinforced in subsequent cases. In Jaikishan Kakubhai Saraf alias Jackie Shroff v The Peppy Store & Ors (2024), the court specifically addressed the issue of an AI chatbot using attributes of Bollywood actor Jackie Shroff’s persona without consent. By way of an ad interim order, the court restrained the defendants from exploiting the plaintiff’s personality rights through AI and other technologies.
Voice cloning and AI voice models
A significant development in personality rights litigation came with the case of Arijit Singh v Codible Ventures LLP and Others (2024) before the Bombay High Court. This case specifically addressed the use of AI tools to synthesise artificial recordings of a well-known Bollywood singer’s voice, including the creation of AI models that mimic the singer, the conversion of text/speech into the singer’s voice, and unauthorised song creation.
Justice R.I. Chagla, by way of an ex parte ad interim injunction order, emphasised that, for personality and publicity rights protection, the plaintiff must be a recognised celebrity and identifiable from the defendant’s unauthorised use of their attributes. The court acknowledged the plaintiff’s fame and ruled that the plaintiff’s attributes are protectable, noting that the unauthorised creation of merchandise, domains and GIFs using these attributes was illegal. Significantly, the court observed that the use of AI tools to recreate the plaintiff’s voice and likeness was particularly concerning as it could harm their career if used for defamatory purposes.
Deepfakes and public trust – a subset of personality rights
A disturbing trend in AI use involves cases where deepfakes are used to spread misinformation or promote dubious products by exploiting the public’s trust in well-known personalities. Several recent cases highlight this concerning development, as follows.
Judicial approach to personality rights cases
The judicial approach to personality rights cases involving AI has been characterised by the following.
Litigation seeking government action
Public interest litigation and regulatory framework
A significant trend in AI litigation in India is the use of public interest litigation (PIL) to push for comprehensive regulatory frameworks. Several PIL cases have been filed seeking judicial intervention to address the challenges posed by AI, particularly deepfakes, as follows.
The petitioners in these cases have emphasised that India is lagging behind other jurisdictions in regulating deepfakes and AI-generated content. They have argued that AI platforms are absolving themselves of liability by claiming intermediary status, despite playing a direct role in generating the content. The petitioners have also questioned the efficacy of advisories published by the Ministry of Electronics and Information Technology (MeitY).
The Delhi High Court has taken these concerns seriously, directing the Union of India to file a detailed report on measures taken to address the issues raised. The court has consistently observed that “the public needs to know what is being done” and indicated that it might take it upon itself to appoint a well-equipped committee to deal with the concerns if the government fails to do so. In November 2024, the court ordered the Union of India to name nominees to a committee that would examine grievances related to deepfakes and study regulations and statutory frameworks in foreign jurisdictions.
Platform blocking and access restrictions
A more direct regulatory approach is evident in Bhavna Sharma v Union of India (2025), where a PIL seeks an order directing the Indian government to block DeepSeek on security and privacy grounds. While the Delhi High Court has not yet issued substantive orders, it has asked the central government to report on the consultations being undertaken. Interestingly, news reports suggest that the court orally observed, “if you are so aggrieved by DeepSeek, do not use it”, indicating a potential reluctance to impose broad blocking measures without careful consideration.
Legislative gaps and judicial innovation
The current wave of AI litigation highlights significant gaps in India’s legislative framework. There is no comprehensive statutory recognition of personality rights in India. Courts have been filling these gaps through judicial innovation, but a more systematic legislative response may be necessary. Moreover, deepfakes are not adequately regulated in India, and some even argue that the Copyright Act, 1957, which was last amended long before the advent of generative AI, does not explicitly address issues such as AI training data, ownership of AI-generated content or AI-enabled copyright infringement.
Future AI litigation will likely require courts to balance competing interests more explicitly. Such interests include the following.
Data protection and AI
AI use and deployment by corporations is increasingly a matter of regulatory compliance, requiring proactive preparation through processes and measures that comply with a complicated web of regulations across various areas and sectors. At the top of the list of applicable compliances is India’s data protection legal framework, since machine learning and generative AI rely on “training data” to create AI models, and such training data in most cases includes personal data.
In India, the interplay between AI and data protection is currently governed by the “SPDI Rules”, ie, the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, framed under Section 43A of the Information Technology Act, 2000 (IT Act). However, since the SPDI Rules are dated and, at best, rudimentary, Parliament passed the Digital Personal Data Protection Act, 2023 (DPDP Act) in August 2023 to serve as the first dedicated legislation for data protection and privacy in India. The DPDP Act will replace the SPDI Rules once it comes into force, which is expected this year.
The DPDP Act provides a principles-based framework for the “processing” of “digital personal data” (ie, data in digital form about an individual who is identifiable by or in relation to such data). The term “processing” is defined very broadly to include wholly or partly automated processing of personal data, thereby bringing AI technologies and applications within its regulatory scope.
The Finance Ministry recently issued an advisory against the use of generative AI models, citing risks to the confidentiality of government data and documents. Subsequently, a PIL in the matter of Bhavna Sharma v Union of India [W.P.(C) 1762/2025] was filed before the Delhi High Court against the joint owners and operators of the AI application “DeepSeek”, claiming non-compliance with the provisions of the SPDI Rules and the DPDP Act, including the lack of appropriate safeguards protecting Indian users’ rights, such as consent mechanisms and a grievance redressal system.
While issuing notice in the matter, the court acknowledged the broader risks posed by AI, stating: “AI is a dangerous tool in anybody’s hands, whether it is Chinese or American, it does not make a difference. It is not that the government is unaware of these things, they are very well aware.” The outcome of this matter is of utmost relevance to generative AI tools and companies.
In view of the above, the key considerations and implications under the DPDP Act include the following.
Government recommendations for AI governance
A report on AI Governance Guidelines Development (the “AI Report”) was published in January 2025 for public consultation by a subcommittee of an advisory group created by the government to analyse gaps and offer recommendations for developing a regulatory framework for AI governance in India. The AI Report proposes principles for responsible AI governance, drawing inspiration from global frameworks such as the OECD AI Principles and NITI Aayog’s principles for responsible AI (a government initiative). The subcommittee conducted a gap analysis, examined the issues and concerns surrounding deepfakes, cybersecurity and privacy in general, and proposed recommendations such as adopting a whole-of-government approach to AI governance, establishing an inter-ministerial AI co-ordination committee and a technical secretariat (to serve as a technical advisory body), setting up an incident database, etc.
One of the key recommendations of the AI Report is the need for the proposed legislation – the Digital India Act, which will overhaul technology laws in India – to suitably empower the government with appropriate regulatory and technical capacity and capability to minimise risks of harm from malicious use of emerging technologies, including AI. The report further suggests that there is a need for the government to review and strengthen the mechanisms for the redress and adjudication of matters concerning digital technologies (including the risks posed by AI applications).
Conclusion
India is steadfast in its approach to harnessing the potential of AI to bolster its economy. While regulators are debating the appropriate method of AI regulation and governance (be it legislation, techno-legal measures or voluntary guidelines), there is an urgent need for organisations deploying AI in India to factor in existing legal frameworks, regulations and compliances. Proactive measures based on legal, regulatory and ethical considerations will not only put organisations ahead of the curve but will also ensure trust and accountability while preserving the foundations of innovation.
8th Floor, VJ Business Tower
Plot No A-6, Sector 125
Noida, Uttar Pradesh
201301
India
+91 120 4633 900
info@saikrishnaassociates.com
www.saikrishnaassociates.com