Artificial Intelligence 2025

Last Updated May 22, 2025

India

Trends and Developments


Authors



Saikrishna & Associates is a Tier-1 full-service firm with approximately 180 lawyers across offices in New Delhi, Noida, Mumbai and Bangalore, delivering top-notch and innovative solutions to a diverse array of Indian and international clients, catering to their unique business objectives. The firm is known for its “thought leaders”, who combine industry understanding and legal acumen to provide practical and holistic advice, and for its ability to handle highly complex matters involving voluminous and often technical evidence. With expertise in IP, TMT, data protection & privacy, trade & regulatory compliance, various torts (such as personality rights, hot news doctrine, commercial misappropriation, unfair competition, trade secrets), consumer laws, criminal law, and antitrust, the firm is uniquely placed to provide comprehensive advice on AI issues to clients ranging from e-commerce giants, Big Tech corporations and cloud computing MNCs to video streaming platforms, SaaS providers, gaming intermediaries and social media intermediaries.

Artificial Intelligence in India: An Introduction

Having been ranked by Stanford University's Global and National AI Vibrancy Tool as one of the top four countries leading in AI (along with the US, China and the UK), India is a force to be reckoned with for shaping the global AI economy. The 2024 nasscom-BCG report indicates that the AI market in India is likely to achieve a 25–35% compound annual growth rate through 2027 (from USD7–9 billion in 2023 to USD17–22 billion by 2027), and India is the tenth largest economy globally in AI funding based on Stanford’s Global AI Index 2024, which reveals that India received nearly USD1.4 billion in private investments in 2023. The government has introduced several initiatives under the IndiaAI Mission, one of them being to build foundational AI models trained on Indian datasets to establish indigenous AI models that align with global AI standards while addressing the unique challenges and opportunities within the Indian context.

While the rapidly evolving AI landscape presents an incredible opportunity, it also poses significant risks, with there being vulnerabilities and unresolved issues under law and regulations. At present, India does not have a specific or overarching law for AI regulation; accordingly, existing legal frameworks apply to different AI applications and outcomes. So far, the government has maintained that it does not intend to hinder innovation by overregulating this sector, but it does intend to examine the gaps in the existing legal framework and introduce corresponding AI-related measures in the proposed legislation, the “Digital India Act”, to strengthen India’s legal and regulatory framework for AI.

Nonetheless, organisations and businesses will have to take current laws and regulations into consideration to address the potential risks and vulnerabilities of using AI. This article outlines the recent Indian trends and developments relating to AI technologies.

Emerging AI litigation landscape

Civil litigation seeking the enforcement of private rights against AI violations is increasing, as is litigation seeking government action to regulate the use of AI, particularly deepfakes.

Civil litigation

Copyright infringement

Copyright infringement claims are being litigated in several countries globally, including India. India’s first and only AI-related copyright infringement action to date involved leading news publisher ANI Media (www.aninews.com) suing OpenAI before the Delhi High Court in 2024.

ANI alleged that OpenAI’s training process for its large language models (LLMs) illegally used ANI’s copyrighted content through web scraping, both directly from its website and through its subscribers’ licensed reproductions. ANI’s claims centred on three main issues:

  • the unauthorised storage and use of copyright-protected material for LLM training;
  • verbatim reproduction in AI outputs; and
  • false attribution that could harm its reputation and spread misinformation.

In addition to challenging the jurisdiction of the Indian courts on the ground that it has no direct operations in India and that no storage or training takes place within India (as OpenAI’s servers are situated abroad), OpenAI argued that the non-expressive storage or use of copyright-protected works for training does not amount to copyright infringement but is transformative, likening its model’s learning process to human learning from a textbook. OpenAI has, however, removed ANI’s pages from its training data as part of its opt-out process.

This case has attracted significant attention, with several interventions by rights owner organisations supporting ANI (such as the Federation of Indian Publishers, the Digital News Publishers Association and the Indian Music Industry), and an AI start-up (Flux Labs AI) supporting OpenAI. Two amici curiae (one from academia and one a practising lawyer) have been appointed to assist the court. They have taken contrasting views on many issues, although both agree that the use of copyrighted works for training is not infringement per se, as it only extracts unprotectable ideas, facts and the like.

Arguments on an interim injunction are currently ongoing before Justice Amit Bansal of the Delhi High Court.

Personality rights violations

Personality rights litigation has emerged as the most prominent category of AI-related cases in India. These cases primarily involve celebrities seeking protection against the unauthorised use of their name, image, voice or other personal attributes in AI-generated content. The Delhi High Court has played a pivotal role in recognising and enforcing personality rights in the context of AI.

In Anil Kapoor v Simply Life India & Ors (2023), the Delhi High Court recognised that technological tools, including AI, make it possible for unauthorised users to imitate a celebrity’s persona. Justice Prathiba Singh noted that the use of AI, machine learning, deepfakes and face morphing to create unauthorised videos or images for commercial gains constitutes a violation of personality rights. The court granted an ex parte ad interim injunction and issued directions to:

  • domain name operators to suspend unauthorised domains;
  • internet service providers (ISPs) to remove infringing links; and
  • the Department of Telecommunications (DoT) and Ministry of Electronics and Information Technology (MeitY) to issue blocking orders to block infringing content.

This judicial recognition of personality rights in the AI context has been further reinforced in subsequent cases. In Jaikishan Kakubhai Saraf alias Jackie Shroff v The Peppy Store & Ors (2024), the court specifically addressed the issue of an AI chatbot using attributes of Bollywood actor Jackie Shroff’s persona without consent. By way of an ad interim order, the court restrained the defendants from exploiting the plaintiff’s personality rights through AI and other technologies.

Voice cloning and AI voice models

A significant development in personality rights litigation came with the case of Arijit Singh v Codible Ventures LLP and Others (2024) before the Bombay High Court. This case specifically addressed the use of AI tools to synthesise artificial recordings of a well-known Bollywood singer’s voice, including the creation of AI models that mimic the singer, the conversion of text/speech into the singer’s voice, and unauthorised song creation.

Justice R.I. Chagla, by way of an ex parte ad interim injunction order, emphasised that, for personality and publicity rights protection, the plaintiff must be a recognised celebrity and identifiable from the defendant’s unauthorised use of their attributes. The court acknowledged the plaintiff’s fame and ruled that the plaintiff’s attributes are protectable, noting that the unauthorised creation of merchandise, domains and GIFs using these attributes was illegal. Significantly, the court observed that the use of AI tools to recreate the plaintiff’s voice and likeness was particularly concerning as it could harm their career if used for defamatory purposes.

Deepfakes and public trust – a subset of personality rights

A disturbing trend in AI use involves cases where deepfakes are used to spread misinformation or promote dubious products by exploiting the public’s trust in well-known personalities. Several recent cases highlight this concerning development, as follows.

  • Medical misinformation: in Dr Devi Prasad Shetty & Anr v Medicine Me & Ors (2024), a renowned cardiac surgeon sought action against defendants creating and circulating fake videos that used his likeness to promote dubious drugs for medical conditions or give false health tips. The Delhi High Court granted an ex parte ad interim order restraining the defendants from violating the plaintiff’s publicity and personality rights.
  • Media personalities: in Rajat Sharma and Anr v Tamara Doc and Ors (2024), a well-known Indian journalist and his media company took action against defendants who were creating doctored videos with distorted images and voice to promote purported drugs allegedly formulated by eminent doctors or certified by the government. The court noted that, given the plaintiff’s position as a trusted voice in Indian households, such misrepresentation posed significant risks not just to his reputation but also to public health and safety.
  • Entertainment industry: in Manchu Bhakthavatsalam Naidu Alias Mohan Babu v Phanumantu and Ors (2024), a prominent Telugu actor sought protection against defendants using AI to morph/superimpose his face and create manipulated audio-video clips, including sexually explicit material and defamatory content. The court passed comprehensive ad interim restraining orders against the defendants.

Judicial approach to personality rights cases

The judicial approach to personality rights cases involving AI has been characterised by the following.

  • Broad protection: courts have been willing to grant broad protection to personality rights, recognising various attributes of a person’s identity, including name, image, voice and mannerisms.
  • Ex parte ad interim injunctions: given the potential for rapid spread and damage from AI-generated content, courts have frequently issued ex parte ad interim injunctions to provide immediate relief.
  • Platform responsibility: courts have issued directions to internet platforms like Meta and Google to take down infringing content and provide details of defendants, who usually hide behind a veil of secrecy to avoid detection.
  • Government direction: DoT and MeitY have often been directed to issue blocking orders to ISPs for infringing websites, demonstrating the courts’ willingness to involve governmental agencies in enforcement.

Litigation seeking government action

Public interest litigation and regulatory framework

A significant trend in AI litigation in India is the use of public interest litigation (PIL) to push for comprehensive regulatory frameworks. Several PIL cases have been filed seeking judicial intervention to address the challenges posed by AI, particularly deepfakes, as follows.

  • Kanchan Nagar and Ors v Union of India: in this PIL filed in the Delhi High Court, a group of artists (including a model, a photographer and a stock photography studio owner) have claimed that their original artwork was being used by generative AI models to produce deepfakes and AI-generated output without permission, violating the Copyright Act. The petition seeks amendments to the Copyright Act and the IT Act to address these concerns, as well as directions to identify and block public access to unregulated applications enabling the creation of AI-generated images. The petition is still pending, and the Union of India has been directed to file its response.
  • Chaitanya Rohilla v Union of India (2023): this PIL seeks directions to identify websites giving access to deepfake AI, to block these websites, and to make them accountable for usage. It also requests the formulation of guidelines to develop a mechanism and framework for AI regulation.
  • Rajat Sharma v Union of India (2024): filed by a renowned Indian journalist who himself had been the subject of deepfake videos, this petition seeks directions for the formulation of guidelines and mechanisms for the regulation of generative AI content.

The petitioners in these cases have emphasised that India is lagging behind other jurisdictions in regulating deepfakes and AI-generated content. They have argued that AI platforms are absolving themselves of liability by claiming intermediary status, despite playing a direct role in generating the content. The petitioners have also questioned the efficacy of advisories published by MeitY.

The Delhi High Court has taken these concerns seriously, directing the Union of India to file a detailed report on measures taken to address the issues raised. The court has consistently observed that “the public needs to know what is being done” and indicated that it might take it upon itself to appoint a well-equipped committee to deal with the concerns if the government fails to do so. In November 2024, the court ordered the Union of India to name nominees to a committee that would examine grievances related to deepfakes and study regulations and statutory frameworks in foreign jurisdictions.

Platform blocking and access restrictions

A more direct regulatory approach is evident in Bhavna Sharma v Union of India (2025), where a PIL seeks the blocking of DeepSeek by the Indian government due to security and privacy concerns. While the Delhi High Court has not yet issued substantive orders, it has asked the central government to report on the consultations being undertaken. Interestingly, news reports suggest that the court orally observed, “if you are so aggrieved by DeepSeek, do not use it”, indicating a potential reluctance to impose broad blocking measures without careful consideration.

Legislative gaps and judicial innovation

The current wave of AI litigation highlights significant gaps in India’s legislative framework. There is no comprehensive statutory recognition of personality rights in India. Courts have been filling these gaps through judicial innovation, but a more systematic legislative response may be necessary. Moreover, deepfakes are not adequately regulated in India, and some argue that the Copyright Act, 1957 (last amended long before the advent of generative AI) does not explicitly address issues such as AI training data, ownership of AI-generated content, or AI-enabled copyright infringement.

Future AI litigation will likely require courts to balance competing interests more explicitly. Such interests include the following.

  • Innovation v protection: ensuring that regulatory measures do not stifle legitimate AI innovation while providing adequate protection for individual rights.
  • Free expression v harm prevention: balancing the right to free expression with the need to prevent harm from AI-generated misinformation or deepfakes.
  • Technological reality v legal fiction: reconciling the technical realities of how AI systems work with legal concepts like the idea-expression distinction, fair recompense for authorship, and copyright infringement.

Data protection and AI

AI use and deployment by corporations is becoming a topic of regulatory compliance, requiring proactive preparation involving processes and measures that comply with a complicated web of regulations across various areas and sectors. At the top of the list of applicable compliances is the data protection legal framework in India, since machine learning and generative AI rely on “training data” to create AI models, and such training data, in most cases, includes personal data within its purview.

In India, the interplay between AI and data protection is currently governed by the “SPDI Rules” – the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011, issued under Section 43A of the Information Technology Act, 2000 (IT Act). However, since the SPDI Rules are dated and rudimentary at best, Parliament passed the Digital Personal Data Protection Act, 2023 (DPDP Act) in August 2023 to serve as India’s first dedicated legislation for data protection and privacy. The DPDP Act will replace the SPDI Rules once it comes into force this year.

The DPDP Act provides a principles-based framework for the “processing” of “digital personal data” (ie, data in digital form about an individual who is identifiable by or in relation to such data). The term “processing” has been defined very broadly to include wholly/partly automated processing of personal data, therefore including AI technologies and applications in its scope of regulation.

The Finance Ministry recently issued an advisory against using generative AI models, citing risks to the confidentiality of government data and documents. Subsequently, a PIL in the matter of Bhavna Sharma v Union of India [W.P.(C) 1762/2025] was filed before the Delhi High Court against the joint owners and operators of the AI application “DeepSeek”, claiming non-compliance with the provisions of the SPDI Rules and the DPDP Act, including the lack of appropriate safeguards protecting Indian users’ rights, such as consent mechanisms and a grievance redressal system.

While issuing notice in the matter, the court has acknowledged broader risks posed by AI, stating: “AI is a dangerous tool in anybody’s hands, whether it is Chinese or American, it does not make a difference. It is not that the government is unaware of these things, they are very well aware.” The outcome of this matter is of utmost relevance to generative AI tools and companies.

In view of the above, the key considerations and implications under the DPDP Act include the following.

  • The DPDP Act applies where the processing of digital personal data takes place within the territory of India (where the personal data is collected in digital form, or in physical form and subsequently digitised), and to processing outside India if such processing is in connection with any activity related to offering goods/services in India. Accordingly, if AI companies process personal data outside India and do not offer any goods/services in India, the DPDP Act will not apply to them.
  • Data scraping is a common practice in training AI models. Under the current legal framework (ie, the SPDI Rules), it often does not involve obtaining the informed consent of a Data Principal. Under the upcoming framework (ie, the DPDP Act), consent is the primary basis for processing personal data. The DPDP Act, however, excludes publicly available personal data (caused to be made public either by the individual themselves or by an entity under a legal obligation) from its scope, implying that AI companies scraping publicly available personal data for AI training or profiling may not be required to comply with data fiduciary obligations. To apply this exemption, however, a Data Fiduciary will have to verify the source of publication of the personal data, which is rarely practically feasible. In addition, what constitutes “caused to be made public” is undefined, leaving the exemption’s scope and application unclear. The Minister of State for Electronics and Information Technology recently responded to a query regarding the web scraping of publicly available user data by social media companies, stating that the DPDP Act mandates organisations processing personal data – including those engaged in web scraping – to implement robust compliance measures, including obtaining consent for specified purposes before processing digital personal data and respecting individual rights. This answer by the Minister appears to contradict the DPDP Act’s exemption for publicly available personal data.
  • The DPDP Act will be operationalised through the delegated legislation of “rules” issued thereunder. On 5 January 2025, MeitY published “Draft Rules” for public consultation. These draft rules include several open-ended and problematic requirements for a “Significant Data Fiduciary” (SDF) – a separate category of Data Fiduciary, to be notified by the government, that will have enhanced obligations. Specifically, the draft rules propose that an SDF must observe due diligence to verify that “algorithmic software” deployed by it for hosting, display, uploading, modification, publishing, transmission, storage, updating or sharing of the personal data it processes is not likely to pose a risk to the rights of Data Principals. This clause, however, has not been supplemented with any clarification regarding its scope and meaning, leading to ambiguity among SDFs deploying AI training models.
  • The draft rules also propose data localisation requirements for SDFs by mandating that personal data specified by the central government (on the basis of the recommendations of a committee constituted by it) is processed subject to the restriction that the personal data and the traffic data pertaining to its flow are not transferred outside the territory of India. This would pose a particular problem, from an ease of doing business perspective, if personal data for AI training is included within its ambit.

Government recommendations for AI governance

A report on AI Governance Guidelines Development (“AI Report”) was published in January 2025 for public consultation by a subcommittee of an advisory group created by the government to analyse gaps and offer recommendations for developing a regulatory framework for AI governance in India. The AI Report proposes principles for responsible AI governance, drawing inspiration from global frameworks such as the OECD AI Principles and NITI Aayog’s work on Responsible AI (a government initiative). The subcommittee conducted a gap analysis and examined the issues and concerns surrounding deepfakes, cybersecurity and privacy in general, and proposed recommendations such as adopting a whole-of-government approach to AI governance, establishing an interministerial AI co-ordination committee and technical secretariat (to serve as a technical advisory body), and setting up an incident database.

One of the key recommendations of the AI Report is the need for the proposed legislation – the Digital India Act, which will overhaul technology laws in India – to suitably empower the government with appropriate regulatory and technical capacity and capability to minimise risks of harm from malicious use of emerging technologies, including AI. The report further suggests that there is a need for the government to review and strengthen the mechanisms for the redress and adjudication of matters concerning digital technologies (including the risks posed by AI applications).

Conclusion

India is steadfast in its approach to harnessing the potential of AI to bolster its economy. While regulators are discussing appropriate methods for AI regulation and governance (be it legislation, techno-legal measures or voluntary guidelines), there is an urgent and immediate need to factor in existing legal frameworks, regulations and compliances for AI deployment by organisations in India. Proactive measures based on legal, regulatory and ethical considerations will not only push organisations to get ahead of the curve but will also ensure trust and accountability while protecting the cornerstone of innovation.

Saikrishna & Associates

8th Floor, VJ Business Tower
Plot No A-6, Sector 125
Noida, Uttar Pradesh
201301
India

+91 120 4633 900

info@saikrishnaassociates.com
www.saikrishnaassociates.com