There are no specific Indian laws that govern AI or its use. Legal considerations arise from different legal frameworks, including:
Key industry applications of AI and machine learning are expected to be in the healthcare, agriculture, education, telecom, infrastructure, and transportation sectors, with the government specifically focusing on these industries. No specific law defines, differentiates between, or regulates predictive and generative AI.
Healthcare
Integration of AI models within the healthcare industry is expected to increase access to, and the affordability of, quality healthcare in rural areas. Key applications include:
Agriculture
AI in agriculture is expected to improve farm productivity and reduce wastage of produce through the development of new farming methods, improvements in administration, and reductions in the costs of transportation and distribution. These measures are expected to positively impact farmers’ income and livelihood.
Telecommunications
The Telecom Regulatory Authority of India (TRAI), which is the nodal telecommunications regulator, has directed telecommunications service providers to implement the use of AI within telecommunications systems and networks in order to detect and prevent spam and phishing attacks that occur through phone networks, calls, and SMS.
Education
AI is expected to improve access and quality of education across India. Educational institutions have begun implementing AI solutions within their systems to tailor systems for students (such as changing language preferences) based on their needs. AI is also expected to improve the quality of online education systems, which ‒ in turn ‒ will enhance access to education.
Infrastructure
State governments are expected to adopt AI systems within their city planning departments to improve public utility systems, public safety, and general management and administration. AI is also expected to bring about the development of “smart cities”, where residents will have access to better transportation and housing through AI-powered solutions.
Transportation
In line with infrastructure planning, the adoption of AI is also expected to reduce road congestion and accidents. Similarly, AI models are being used to create sustainable means of transportation and to optimise travel routes.
The Indian government has created programmes and initiatives to promote the use of AI and guide relevant stakeholders to implement and adopt AI, as follows.
India does not presently have AI-specific legislation. The Indian government has proposed the enactment of the Digital India Act (DIA), which is intended to replace the IT Act. Although the government has not issued a formal draft, the DIA is expected to regulate emerging technologies, define and regulate high-risk AI systems, and legislate on the ethical use of AI-based tools. Certain types of AI may be treated as “intermediaries” under the DIA, with the safe harbour protections offered under the IT Act likely to be extended to such types of AI.
The Indian government has also issued an advisory to certain intermediaries that incorporate AI in their products and services to ensure their algorithms do not produce discriminatory results. Although the MeitY’s Minister of State has unofficially indicated that this advisory is specifically targeted towards larger intermediaries, the scale on which AI is deployed ‒ especially by start-ups in India ‒ may be impacted in the medium term by this advisory.
No AI-specific legislation has been enacted in India.
Indian regulatory bodies have issued several White Papers, policies, reports, and recommendations on the use and adoption of AI, as follows.
This is not applicable in India.
Indian data, information, and content laws do not explicitly regulate AI.
However, India’s newly introduced data protection law, the DPDPA (which is yet to be brought into force), is noteworthy, as it entirely exempts personal data that is made publicly available by individuals themselves or by someone else under a legal obligation. Though the scope of “publicly available” data is yet to be defined, this exemption could potentially help foster AI development.
The National Strategy for Artificial Intelligence, released by NITI Aayog, proposes a blockchain-based decentralised data marketplace ‒ ensuring traceability, access controls, regulatory compliance ‒ and a price discovery mechanism for data to balance privacy concerns with the need for a large supply of data for AI training.
The “DEPA Training Framework”, issued by NITI Aayog, permits persons to receive large training datasets for analysis or training. Data shared through this framework must be aggregated and de-identified. Organisations that provide or disclose data are tasked with both seeking consent from data subjects to share such data and aggregating and de-identifying the data prior to disclosure. Participation in this ecosystem is subject to approvals from self-regulatory organisations, implementation of defined security and privacy controls, and contractual arrangements among participating entities. Participation permits large-scale processing of structured de-identified data for training AI models and may offer the ability to commercialise the sharing of such data.
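The framework does not itself prescribe a particular de-identification technique. By way of illustration only, the following Python sketch shows one way a data provider might aggregate and de-identify records prior to disclosure; the column names, salt, and salted-hash approach are illustrative assumptions and are not drawn from the DEPA framework.

```python
import hashlib

import pandas as pd

# Hypothetical raw records held by a data provider; the column names are
# illustrative assumptions, not fields prescribed by the DEPA framework.
records = pd.DataFrame({
    "customer_id": ["C001", "C002", "C003", "C004"],
    "city": ["Bengaluru", "Bengaluru", "Mumbai", "Mumbai"],
    "monthly_spend": [12000, 18000, 9000, 15000],
})

SALT = "rotate-this-secret"  # assumption: salted hashing as the de-identification step


def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:12]


# De-identify the direct identifier, then aggregate so that only
# group-level statistics leave the data provider.
records["customer_id"] = records["customer_id"].map(pseudonymise)
aggregated = records.groupby("city")["monthly_spend"].agg(["count", "mean"])
print(aggregated)
```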
As the DIA is intended to replace the IT Act, it will regulate the entire digital ecosystem in India. Objectives of the DIA include the development of the Indian digital economy, innovation, and ensuring India is considered a trusted player for digital products and solutions. It will define and regulate high-risk AI systems, promote the ethical use of AI-based tools, and develop accountability standards. It will also attempt to prevent user harm such as cybercrimes targeting women and children, regulate addictive technology, protect minors, provide users with digital rights, and curb the spread of fake news and misinformation.
In practice, organisations – especially larger companies that process or target a large number of users or roll out AI-enabled products or tools ‒ are actively taking measures to address commercial risks that may arise from the use of AI. These measures may also incorporate AI-specific terms and conditions to disclaim the use of AI, such as generative AI, to prevent liability for the results produced by such tools. Please see 11. Legal Issues With Predictive and Generative AI and 12. AI Procurement for further details.
There have been no conclusive judicial precedents on IP rights in respect of the use of AI.
Some Indian courts have recognised the benefits of using AI-powered tools to assist with investigations in cases involving missing persons and child pornography. However, courts have questioned the accuracy and reliability of AI-powered chatbots such as ChatGPT where parties have sought to establish certain facts through results generated by such tools.
Courts have not prescribed any definitions or standards for describing AI and machine-learning tools at this stage.
MeitY
The MeitY is the apex ministry established by the central government to regulate and be responsible for the development and facilitation of the use of AI in India. It has established a separate division known as the “Emerging Technologies Division”, which works towards fostering and promoting the usage of emerging technologies in India. In order to develop a framework to regulate AI, it has also constituted four committees on AI, which have published reports on issues such as ethics, cybersecurity, and re-skilling individuals.
NITI Aayog
The NITI Aayog is a public policy think tank established by the Indian government. It was primarily tasked with creating a national strategy on developing and implementing AI and related technologies in India, which was published in 2018. It has also published the “Responsible AI” approach document, which prescribed certain principles to guide relevant organisations on how to use AI in an effective and responsible manner.
NASSCOM
NASSCOM is a non-government body that works with the MeitY and other stakeholders to promote the responsible use of AI in India. NASSCOM AI is an initiative undertaken by the organisation to foster the creation, development, and sustainable use of AI in India. NASSCOM has, among other articles and reports, released the Guidelines on Generative AI, which provide a common set of standards and protocols that may be adopted by stakeholders while implementing generative AI tools within their services.
Other Regulatory Agencies
There are also regulatory authorities, such as TRAI, the Securities and Exchange Board of India, and the Insurance Regulatory and Development Authority of India, which are actively formulating recommendation papers and guidelines to regulate the use of AI in their respective sectors. However, these entities are expected to play a more active role once the government enacts specific legislation with regard to AI.
The MeitY defines machine learning as “algorithms and techniques that allow computers to ‘learn’ from and make predictions based on data”. It also refers to machine learning as “a branch of AI that specifically studies algorithms [that] learn and improve from training examples”.
Although it has not provided a definition of AI, the NITI Aayog has stated that AI machines may be classified as those that have the ability to perform cognitive tasks such as thinking, perceiving, learning, problem-solving and decision-making in a similar manner to humans.
As no regulations have been enacted with regard to AI, these definitions merely act as guiding principles on how the regulators conceptualise AI. At this stage, it will primarily be the courts’ responsibility to identify whether particular software constitutes AI and ‒ in the absence of legislation ‒ to what extent that characterisation affects the legal rights and liabilities of the parties to a dispute.
MeitY
The MeitY is actively taking steps to regulate AI and address the issues through various policies, strategies and working committee reports. The MeitY’s AI Committee reports aim to prevent harms such as the weaponisation of AI, cybersecurity risks, and privacy and ethical issues that arise from AI.
The MeitY’s objectives are:
NITI Aayog
NITI Aayog aims to provide a roadmap for the creation, development and use of AI in India and to guide stakeholders on how to use these technologies in a responsible and sustainable manner.
NASSCOM
NASSCOM primarily aims at preventing or mitigating the following risks with regard to AI:
NASSCOM’s objectives are:
No enforcement actions have occurred yet.
The primary bodies that have addressed AI standards are as follows.
The Bureau of Indian Standards (BIS) has also constituted a committee that is drafting standards, which are in the process of being published.
The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) are the primary international standard-setting bodies whose certifications are sought by businesses in India. The BIS is in the process of adopting ISO/IEC 42001:2023, which specifies requirements for establishing, implementing, maintaining, and continually improving an artificial intelligence management system (AIMS) within organisations.
Government authorities are actively using and leveraging AI. Examples include:
Indian courts have not yet issued notable decisions on the use of AI by the government.
In 2022, the Indian government announced the launch of 75 AI products and technologies during a symposium organised by the Ministry of Defence for protection from future threats and for the development and peace of humanity.
While the rise of AI-generated content has prompted discussions in relation to copyright ownership and product liability, there has been no conclusive judicial or regulatory guidance on IP concerns in connection with AI. The Indian government has clarified that the existing IP rights regime in India adequately addresses and protects AI-generated works.
There has been limited Indian regulatory or judicial guidance on data protection concerns as well. India’s existing data protection regime under the IT Act prescribes light-touch compliance for the processing of personal data. Once brought into force, the DPDPA will exempt the processing of publicly available personal data from its ambit entirely, which mitigates certain data protection concerns.
Generally, the emerging issues raised by generative AI are as follows.
Please refer to 8.1 Emerging Issues in Generative AI.
Please see 8.1 Emerging Issues in Generative AI for more details.
Present Indian data protection laws do not address the use of AI or the rights that individuals have in relation to output produced by AI. Individuals’ rights are limited to the right to access and correct their information. They do not have the right to delete the information that is provided to an organisation, but they may opt out from the provision of their personal data, which will entitle the organisation to deny services to them. Provided that AI tools have procured training data in a manner consistent with data protection requirements, and users of such tools ensure compliance with data protection requirements while processing and inputting personal data, existing data protection risks may be mitigated.
The soon-to-be-implemented DPDPA provides individuals with rights to correction and erasure of their personal data, provided that processing is based on the consent of the data subject. AI output would accordingly need to be corrected or erased upon request, to the extent that it contains personal data about the individual and the generation occurs on the basis of consent. Considering that the DPDPA exempts the processing of publicly available data from its ambit, organisations that use AI tools may not need to factor in significant data protection compliance if data sources are appropriately identified.
AI is increasingly being used in various aspects of the practice of law, to streamline processes, improve efficiency, and enhance decision-making. As lawyers are, at times, required to analyse a wide range of resources (eg, emails, documents, and communications) to identify key pieces of information, the use of AI-powered tools to perform such functions significantly increases efficiency. Lawyers are also relying on AI tools to improve research functions, as such tools have the ability to analyse vast amounts of information with relative accuracy and speed.
AI is also being utilised in various automated support services. Chatbots equipped with natural language processing capabilities can provide instant responses to common legal queries, offer preliminary legal advice, and assist in client onboarding processes. Additionally, AI-driven legal research platforms can efficiently search through vast databases of case law, statutes, and legal precedents to provide relevant insights and analysis to lawyers, thereby facilitating informed decision-making and strategy development. However, courts have questioned the accuracy and reliability of AI-powered chatbots such as ChatGPT where parties have sought to establish certain facts through results generated by such tools.
Separately, the integration of AI in litigation and other practices of law also raises the following related ethical issues.
More details about the risks of the use of AI are set out in 11. Legal Issues With Predictive and Generative AI.
Owing to the lack of specific AI regulations in India, there is limited precedent on the determination of liability with regard to the provision and use of AI. In this lacuna, traditional liability theories continue to apply.
As regulatory efforts for AI are in their early stages, it is unclear how courts will adjudicate such cases. Accordingly, it becomes important for organisations to:
Further clarity on these aspects is expected once the government enacts specific legislation or provides guidance on AI and its associated liability.
As India does not have regulations that are targeted towards the use of AI, the imposition and allocation of liability for the unlawful use of such tools is determined commercially at this stage.
Bias in algorithms presents both technical and legal challenges that have significant implications for consumers, companies, and regulatory bodies. From a technical standpoint, bias in AI algorithms refers to systematic and unjust discrimination against certain groups or individuals based on factors such as race, gender, or socioeconomic status. These biases may also violate anti-discrimination laws. Discrimination based on protected attributes such as gender is unlawful, and algorithms that perpetuate discrimination may expose organisations to legal liability.
Biases may also lead to consumer grievances, as AI models may impose discriminatory pricing or favour certain consumer groups over others. Addressing bias in algorithms will require collaboration between industry stakeholders and regulatory bodies to ensure that AI systems are fundamentally fair.
Data protection and privacy concerns with regard to AI and related technologies stem from the manner in which these tools collect information from available sources. As publicly available data is typically not categorised as personal data, AI models may be extensively trained by using such sources. However, with the development of AI and machine learning models, it becomes increasingly likely that an individual’s personal information may be used by such technologies to produce the required results. By way of example, an individual’s social media accounts may be analysed by AI-backed tools to create a behaviour profile of the individual in order to determine purchasing trends.
The primary concern in such use cases is with regard to the individual’s consent. As AI tools may procure data from all available sources on the internet, it becomes increasingly difficult to obtain an individual’s consent. To account for such issues, data protection laws in the EU and other major jurisdictions provide individuals with the right to object to automated decision-making carried out without human oversight. However, this right has not been included under the DPDPA at this stage ‒ although the government may introduce such rights through subsequent regulations.
Similarly, the use of AI models raises cybersecurity concerns. Traditional encryption and baseline security measures are proving to be inadequate with the development of advanced technology such as AI and quantum computing. Adopting more enhanced encryption measures is advisable.
AI tools are capable of using an individual’s biometric features to develop photographs and videos that are artificially rendered in their entirety. This is a significant concern for celebrities and politicians, as malicious actors may use such technology to disseminate misinformation and defame them.
Current Indian data protection laws classify biometric information as sensitive personal data and require organisations to obtain written consent from the individual prior to processing such information. The DPDPA also requires consent to be obtained in such situations (unless used in employment-related matters). However, where AI tools are deployed to gather vast amounts of biometric information from public and non-public sources, it is difficult to verify whether consent was obtained and, if so, whether it was adequate to the extent of the personal data processing undertaken by the tool. As advanced generative AI tools may become available to the public, regulatory action may be required to protect the privacy and reputation of individuals.
Though it may be difficult to enforce the lawful use of generative AI tools, regulators may impose obligations on the owners of AI models to build non-removable identification marks into AI-generated photographs and videos. This will help viewers distinguish between real and artificially rendered products. However, this may not completely prevent mala fide actors from removing these protections by breaching security measures. Accordingly, appropriate penalties and sanctions for the breach of AI-related regulations must also be in place to ensure deterrence.
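No marking technique has been prescribed by Indian regulators, and truly non-removable marks would require robust watermarking methods that remain an area of active research. Purely to illustrate the concept, the following sketch (file names are hypothetical) stamps a visible “AI-generated” label onto an image using the Pillow library; unlike the tamper-resistant marks contemplated above, a visible overlay of this kind is trivially removable.

```python
from PIL import Image, ImageDraw


def label_ai_generated(src_path: str, dst_path: str) -> None:
    """Stamp a visible provenance label onto an AI-generated image.

    Conceptual illustration only: a visible overlay can be cropped or
    painted out, whereas the non-removable marks discussed above would
    require tamper-resistant watermarking techniques.
    """
    img = Image.open(src_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Place the label near the bottom-left corner using the default font.
    draw.text((10, img.height - 20), "AI-generated", fill=(255, 255, 255))
    img.save(dst_path)


label_ai_generated("render.png", "render_labelled.png")  # hypothetical file names
```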
Data protection regulations do not provide individuals the right to object to automated decision-making activities undertaken by organisations. By way of example, where an employer deploys AI to screen candidates on the basis of the information provided by them, the candidate does not have a statutory claim to seek a review of such decisions. This may adversely affect individuals, as AI models may have inherent biases or errors. Although affected individuals may object to these decisions by claiming that the AI model is defective or that the results were produced based on biased models, it is impractical to raise legitimate claims where the manner in which the organisation arrived at the decision is not disclosed to the public.
The government’s proposal for the DIA, which is intended to repeal and replace the existing overarching law on information technology, indicates that it may provide all individuals with the right to object to automated decision-making. However, at this stage, no developments have been reported with regard to these aspects.
Indian data protection laws require the disclosure of the personal datasets collected and the purposes for the processing of such data. This would also apply in cases where AI and machine learning tools are incorporated within services to improve efficiency and decrease human dependency in cases of errors or customer grievances. However, under current and proposed regulations, there are no specific obligations imposed on organisations to disclose the fact that the service uses AI to deliver particular features.
By way of example, an organisation that implements an AI chatbot on its website or operates an AI-powered customer service number is not statutorily required to disclose this information. However, as current data protection laws and the DPDPA encourage transparency with regard to the processing of personal data, it is recommended to make these disclosures to all affected individuals. These disclosures may be made through a publicly available privacy policy.
The use of AI may lead to various anti-competitive practices by larger organisations, as follows.
Key concerns that arise in respect of offerings by AI suppliers, and that ought to be factored into transactional contracts with customers, follow. The ability to negotiate these effectively depends significantly on commercial bargaining power, together with technical capabilities at both parties’ ends. Any transaction documents should be supplemented by practical due diligence and risk-focused procurement processes.
Data Concerns
Enterprise customers may overlook the fact that tools offered by AI suppliers involve large-scale processing and learning of their proprietary and confidential data. Contracts should clearly specify usage restrictions, if any. Customers may choose to deploy on-premises solutions or local instances to limit a supplier’s ability to access data and may also seek restrictions on processing or even learning undertaken by machine learning tools.
Data protection concerns – especially in respect of personal data that may have been used to initially train the AI solution or otherwise processed in connection with a customer’s use of the solution – would subsist in any commercial conversation. The contractual framework should factor responsibility for compliance, identification of legal bases, anonymisation where possible, and corresponding data subject rights.
Cybersecurity
Shared infrastructure utilised by AI suppliers ‒ especially in the context of AI-as-a-service solutions ‒ may be leveraged by threat actors to gain access to large-scale data and information. Contracts should define security protocols, adequately define breach response processes and timelines, and allocate liability among parties for these issues. Any contract negotiation must be supplemented by security due diligence and monitoring to mitigate risk.
Disclaimers of Liability
AI suppliers will typically seek to disclaim liability in respect of results generated through the use of AI. For instance, suppliers may choose to disclaim liability in respect of the control, endorsement, recommendation, or representations on the efficacy, appropriateness, or suitability of the AI. Similar disclaimers on the use of generative AI ‒ including in respect of input data (where liability is shifted to customers), editorial rights, and incomplete, inaccurate, inappropriate, or offensive output ‒ ought to be factored in. Depending on negotiating power, suppliers may choose to disclaim liability on training data; this would typically be resisted by larger enterprise customers.
Bias
Customers may insist on guarantees that results are free from bias, accurate, and in compliance with applicable law. These must be balanced against a supplier’s actual ability to control training data, as well as data protection and IP constraints in respect of that data. Suppliers are likely to be required to conduct impact assessments and undergo regular product testing to ensure compliance with industry standards.
Indian companies have begun adopting AI models within their recruitment and employee monitoring processes to facilitate quicker and more objective evaluation of candidates and employees. Large organisations that receive several applications implement AI-based tools within their recruitment systems to allow mass screening of applications. Typically, a few instructions are fed into the system, which detects whether minimum parameters are present within candidate applications, as illustrated in the sketch below. This allows the organisation to short-list a large number of applications within a few hours or days. Similarly, companies are increasingly relying on AI models to evaluate the work performance and attendance of employees.
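By way of illustration only, the following sketch shows the kind of first-pass rule-based check such screening systems perform; the skills and experience thresholds are hypothetical parameters, and real tools would first parse free-text resumes into structured fields.

```python
# Hypothetical minimum parameters that an HR team might feed into a
# screening tool; real deployments would use far richer criteria.
REQUIRED_SKILLS = {"python", "sql"}
MIN_YEARS_EXPERIENCE = 3


def passes_first_screen(application: dict) -> bool:
    """Return True if an application meets the configured minimum parameters.

    Assumes resume fields have already been extracted into a dictionary.
    """
    skills = {s.lower() for s in application.get("skills", [])}
    years = application.get("years_experience", 0)
    return REQUIRED_SKILLS <= skills and years >= MIN_YEARS_EXPERIENCE


candidate = {"skills": ["Python", "SQL", "Spark"], "years_experience": 4}
print(passes_first_screen(candidate))  # True
```

Crude parameters of this kind also show how qualified candidates may be excluded (for example, a synonym for a required skill would fail the check), which underlines the risks discussed below.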
Although such measures are generally advantageous and improve efficiency, they come with the risk of decisions being influenced by algorithmic biases and technical errors. By way of example, interview systems managed by AI may not recognise certain accents or mannerisms, which may lead to oversights and inaccurate evaluations. Companies should consider undertaking regular impact assessments to mitigate these risks.
With the increase in remote working models, employee monitoring efforts have increased ‒ in particular, the use of AI-backed monitoring software. Tools for automatically detecting office attendance, employee working hours, etc, implement AI to achieve a more advanced level of monitoring with increased accuracy.
However, as such tools are fed with pre-decided parameters, there may be instances where an employee’s behaviour is not factored in. By way of example, if an office uses face recognition technology to record employee attendance, there may be instances where certain physical features are not capable of being identified by the software. These technical errors and biases may have a direct impact on employees, and such tools must therefore be relied upon with caution.
Further, data protection concerns arising from the use of such tools must be taken into consideration prior to their implementation. Adequate consents and notices must be provided to all employees, and the extent of data collected by the AI software must be limited. By way of example, facial images processed through this technology must be used solely for the purpose of employee monitoring.
Digital platforms such as cab aggregators and food delivery service providers have begun to adopt AI systems within their business operations. Cab aggregators typically employ AI to ensure route optimisation, implement dynamic pricing, and detect fraudulent bookings. Similarly, food delivery service providers are adopting these tools for features such as demand prediction, delivery route optimisation, AI-powered customer support, and customer feedback analysis.
At this stage, these practices are governed solely by data protection, consumer, and competition regulations. By way of example, cab aggregators that use AI to implement dynamic pricing must ensure that such processes are based on fair pricing models and are not discriminatory towards certain consumer groups. Further, digital platforms that act as intermediaries must also conduct due diligence exercises to ensure that their platforms do not fall foul of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021. Certain intermediaries that incorporate AI in their products and services were also specifically directed by the government to ensure that their algorithms do not produce discriminatory results.
However, these measures apply largely to the general operation of the platforms. With the increasing reliance on AI tools, regulators may need to prescribe sector-specific laws to ensure responsible use of AI by large digital platforms.
Use of AI by Financial Services Companies
Financial services companies in India utilise AI for a range of applications, including fraud detection, risk assessment, customer service, credit scoring, and investment management.
AI algorithms analyse vast amounts of data to identify patterns, trends and anomalies, thereby enabling financial institutions to make data-driven decisions, automate processes and offer personalised services to customers.
Apart from risks such as algorithmic biases and privacy concerns, businesses in the financial sector must also take into account the risks of biases in repurposed data. Such companies may repurpose historical data for training AI algorithms, potentially introducing biases present in the data. Biases in repurposed data could lead to discriminatory practices, resulting in unfair treatment of certain customer groups (such as for credit facility offerings). To address these concerns, the Reserve Bank of India may publish formal guidelines for the use of AI tools by financial services companies and other regulated entities.
The government has not prescribed specific regulations governing AI in healthcare. However, the Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare published by the Indian Council of Medical Research aims to establish an ethical framework for the development, deployment, and adoption of AI-based solutions in healthcare. The guidelines cover ethical principles, stakeholder guidance, an ethics review process, and governance of AI use.
The healthcare industry may leverage AI for the following use cases:
No specific regulations with regard to the use of AI in autonomous vehicles have been prescribed in India. As the use of autonomous vehicles is in its early stages in India, regulations associated with such technology are expected to be conceptualised over the coming few years.
No specific regulations with regard to the manufacturing of AI have been prescribed in India. However, manufacturers will need to take into consideration minimum standards prescribed by applicable standard-setting bodies and general consumer protection laws when developing AI for general commercial use.
Though India does not have any regulations that specifically govern the use of AI in professional services, businesses in the legal, accounting, consulting and other professional fields must take into account various aspects such as liability and professional responsibility, confidentiality, IP, client consent, and regulatory compliance while using AI.
Liability
Service providers using AI tools are responsible for the accuracy and integrity of the outcomes generated by these tools. Courts may impose liability on professionals for errors or misconduct arising from the use of AI. Professionals must ensure that AI systems are appropriately designed, implemented, and monitored to meet professional standards and ethical obligations.
Confidentiality
Service providers may have contractual obligations to maintain client confidentiality and protect sensitive information. AI systems used in professional services must adhere to strict data privacy and security standards to safeguard client data from unauthorised access or disclosure. Professionals must ensure that AI systems comply with relevant data protection and cybersecurity laws and regulations to meet minimum standards of security.
IP Rights
The implications of using AI for IP rights, including the ownership of AI-generated works and the protection of proprietary algorithms or datasets, must also be taken into account by such service providers. As Indian laws do not provide AI-specific regulations for these aspects, parties will need to contractually determine ownership and licensing of IP developed through the provision of services.
Client Consent
Service providers must obtain adequate consent from customers prior to using AI tools or algorithms to process personal data.
Regulatory Compliance
Businesses using AI to perform professional services must ensure compliance with applicable laws, regulations, and industry standards governing their practice areas. This includes regulatory requirements related to data protection, privacy, security, financial reporting, and professional conduct.
Indian courts have generally held that works produced without human creativity or involvement are not eligible for patent or copyright protection.
In India, trade secret protections can be obtained without application or registration. In the context of AI, trade secret protections could extend to output, datasets, unique algorithms, and machine learning techniques. However, the effectiveness of such protection in the AI context is contingent upon restricted access to AI outputs, which may not necessarily be helpful where the desired outcome is commercial exploitation of AI-generated content.
A noteworthy development occurred when an AI-based app, RAGHAV, was recognised as a co-author of an artwork. However, the Indian Copyright Office later issued a notice of withdrawal of the registration of the AI author and requested that the human author provide details regarding RAGHAV’s legal status.
Separately, in July 2021, the Parliamentary Standing Committee on Commerce in its 161st report recognised the inadequacies of the Copyright Act in addressing ownership, authorship, and inventorship concerning AI-generated works. The committee recommended a review of existing IP rights laws to incorporate AI-generated works within their purview. However, in February 2024, the Minister of State for Commerce and Industry clarified that the current patent and copyright frameworks are well equipped to protect AI-generated works and related innovations.
There are risks of copyright infringement, as works or products created using generative AI tools (such as those offered by OpenAI) may include copyrighted material. Further, issues with regard to ownership of the output may arise: the general rule under Indian copyright law is that the creator of the content owns the IP rights to that content, and where AI plays a significant role in generating the content, ownership issues become complicated.
Companies can advise their board of directors to create a comprehensive AI strategy and governance framework that would consider and address the following:
By addressing these key issues, corporate boards of directors can better identify and mitigate risks in the adoption of AI, enabling the organisation to leverage AI technologies effectively while safeguarding against potential challenges and pitfalls.
Please refer to 16.1 Advising Directors.
Artificial Intelligence in the Indian Workspace: Developing Guardrails
Artificial intelligence (AI) has emerged as a transformative force in the Indian workspace, revolutionising industries across the spectrum from finance to healthcare. With a burgeoning technology ecosystem and a skilled workforce, India stands at the forefront of AI adoption and innovation. Early adopters of this technology, which include both enterprises and workforce members, have integrated AI into work for a variety of reasons ‒ from personal and professional upskilling to increased efficiency and automation, employee monitoring, and recruitment. This rapid integration of AI technologies into various facets of the Indian economy has brought the lack of regulatory guardrails around its use into sharp focus, especially within the Indian workspace and workforce. The advantages that AI offers must necessarily be balanced against larger risks, ranging from bias and job displacement to data protection concerns.
This article sets out the current legal landscape that governs AI use in the Indian workforce, which – in the absence of dedicated AI-specific legislation – serves as a framework for promoting responsible and ethical AI deployment. The article also aims to identify scenarios where existing regulations may fall short, underscoring the need for tailored approaches to address emerging challenges. By examining both the strengths and limitations of existing frameworks, this review aims to empower stakeholders in fostering a culture of ethical AI usage within Indian workplaces.
Present legal framework
Employment laws
A majority of Indian employment laws predate the 21st century and are typically unsuccessful in regulating the use of technology in employment processes. While most Indian labour laws and labour courts are employee-centric and offer actionable rights to individuals, the actual exercise of these rights is ineffective in a technological context, given that most laws do not contemplate such use.
In recent years, the Indian government has rewritten and reorganised the existing labour and employment laws into new sets of labour codes, which have not yet been implemented. Much like their predecessors, these codes lack the ability to regulate the use of technological tools and AI in employment and recruitment-related matters. While certain laws (such as the Rights of Persons with Disabilities Act 2016) explicitly prohibit certain types of discrimination and bias, and the Industrial Disputes Act 1947 as well as state laws establish processes for addressing lay-offs resulting from AI-related job displacement, the majority of employment laws are inadequately equipped to address concerns regarding the use of AI.
Data protection laws
Since 2011, the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules 2011 (the “SPDI Rules”) have set out India’s data protection framework. The SPDI Rules are a light-touch set of rules issued under the larger framework of the Information Technology Act 2000 (the “IT Act”). The SPDI Rules largely regulate a subset of personal information called sensitive personal data or information (SPDI), comprising passwords, biometric information, medical information, sexual orientation, and financial information. The SPDI Rules require businesses to seek consent for the collection and processing of SPDI. Obligations with regard to the provision of privacy policies, implementation of reasonable security practices and procedures, and data subject rights apply to all forms of personal data. Since the SPDI Rules have not been significantly enforced in an employment context, the effectiveness of the framework’s enforcement mechanism has been frequently questioned.
In 2023, India enacted the Digital Personal Data Protection Act 2023 (DPDPA). The law is expected to be implemented in 2024. Unlike the SPDI Rules, the DPDPA will govern all forms of personal data equally. The DPDPA largely provides two grounds for processing personal data: the consent of the data subject, and certain specified “legitimate uses”, which include processing for employment purposes.
Employers are generally expected to rely on the latter to process personal data in most contexts; however, it is presently unclear if employers may rely on this ground to process personal data at the recruitment stage (ie, before the commencement of the employment) or once the employment relationship is terminated.
Reliance on the “employment purposes” ground of processing significantly decreases the obligations placed on employers in relation to their use of personal data. Employers using this ground do not need to provide employees with privacy notices, obtain consent, or undertake any other transparency obligations (including the provision of data subject rights). Employers theoretically have an unfettered right to use the personal data of employees.
Separately, the DPDPA does not apply to data that is made publicly available by individuals. Accordingly, any data that is obtained from a public source and used by an employer is not governed by the DPDPA. This gives employers the general ability to access and otherwise process personal data concerning their employees that is publicly available without adherence to any data protection standards.
IT laws
The IT Act is another key legislation that governs the use of AI in India. The IT Act is particularly relevant, as it provides a framework for the access and usage of electronic records, digital signatures, and other forms of electronic communication that are key factors in the development and deployment of AI systems.
While the IT Act restricts unauthorised access to another’s computer system, this requirement in an employment context is untested. Recent Indian jurisprudence indicates that workplace privacy is confined to non-public spaces, with no reasonable expectation of privacy for certain types of personal data, such as demographic data. As a best practice, employers ought to avoid the use of AI tools that intrude into employees’ personal assets or personal online spaces.
Key uses – suggested guardrails
Recruitment
Several AI-powered recruitment tools, such as resume parsing, biometric processing, and specialised video interviews, aid HR teams in streamlining recruitment processes. However, the use of such tools should be carefully structured to ensure compliance with data protection rules and scraping-related requirements.
Generally, where recruitment data is obtained from individuals (such as through resume submissions or direct applications), obtaining clear consent from the individual regarding the use of AI in respect of data ‒ coupled with disclosures ‒ would mitigate data protection concerns. An organisation that obtains recruitment data from third parties should ensure these parties have the necessary consents for its proposed use.
On the other hand, a twofold risk arises when employers use AI tools that scrape public sources of data for recruitment-related activities: the first stems from the IT Act, which prohibits unauthorised access to computer systems, and the second is a contractual breach of terms and conditions or end-user licence agreements that may prohibit scraping from certain websites.
Stakeholders should therefore, prior to the collection of data from public sources, ensure that data collection activities are reliable, compliant with law, and do not lead to third-party infringement claims. Employers must also ensure that the use of third-party AI tools is underpinned by adequate contractual protections in respect of third-party rights.
Employee monitoring
A growing number of employers have begun to use AI to monitor employee activities for various purposes, including productivity measurements, implementing work-from-home policies, offering incentives and benefits, detecting misconduct, and ensuring compliance with internal policies. Such use brings with it its own set of ethical and privacy considerations.
As the first step in implementing AI-powered employee monitoring tools, organisations ought to ensure that the use of such tools is transparent. While the SPDI Rules and DPDPA mandate a limited form of transparency on the data protection front, communicating the purpose, scope, and consequences of monitoring and the type of data or activity being monitored ‒ as well as the monitoring methods ‒ establishes a responsible framework for use. Indian law does not require organisations to consider whether alternative and less intrusive monitoring would be more appropriate; in the absence of such requirements, the ability to demonstrate reasons for the use of monitoring tools is a useful step to mitigate challenges.
Employers should also ensure compliance with data minimisation principles while undertaking monitoring activities. Though the DPDPA offers a broad employment exemption that does not strictly require data minimisation, employers may choose to, as a best practice, ensure that personal data processed in connection with AI monitoring is only used for purposes disclosed to employees; any other use ought to be subject to reasonable disclosures. Employers should also ensure that data subject to monitoring is accurate and up to date.
The SPDI Rules and the DPDPA offer a limited set of data subject rights. Though employees do not have rights to access, correct, or seek the erasure of personal data that is processed for employment-related purposes under the DPDPA, as a best practice and in line with global trends, employers may choose to facilitate such rights. Tools should offer data subject rights functionalities and, where applicable, employees should be made aware of how to exercise these rights.
Organisations should also be cognisant of the fact that AI tools may not always produce accurate outputs. Any decisions taken on the basis of monitoring tools ‒ including the offering of benefits, incentives, or employment-related decisions ‒ should involve human participants, with clearly defined standard operating procedures that set out the extent to which outputs or decisions generated by such tools may be relied upon.
Use of generative AI by employees
Employers have also begun grappling with the use of generative AI and large language models by employees. While the accuracy, abilities, and output capacities of generative AI increase at a mammoth pace, employers must be wary of possible issues that arise with such use, including claims of IP infringements, ownership, transparency, bias, discrimination, and ethical considerations. As a general note, Indian law does not prohibit the use of generative AI.
If employers permit employees to use generative AI, establishing policies around its use may mitigate risks that arise. Employers that choose to harness the use of generative AI should consider risk assessments that involve security checks, due diligence of generative AI tools, analyses of the tools’ data sources and training mechanisms to mitigate data protection and third-party IP infringement risks. A list of approved generative AI tools is a useful mechanism for overseeing compliance.
Employers may choose to dictate the data sources that employees can use within tools – for instance, personal data or confidential information that is subject to access controls ought not to be shared with these tools – as well as the manner in which generated outputs may be utilised. Any use of generated output should ideally be subject to human involvement or approval. Quality control measures that set out accuracy, reliability, and adherence to internal content standards ought to supplement generative AI use. This may be achieved through the institution of specific content-related policies, especially concerning the use of trade marks, company-copyrighted materials and designs. Organisations may choose to ‒ where appropriate ‒ attribute generated output to the use of generative AI tools, so that the exact source and nature of errors are clearly identified and mitigated.
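One practical control over the data sources mentioned above is a redaction layer that masks obvious identifiers before a prompt leaves the organisation. The sketch below is illustrative only: the patterns and placeholder labels are assumptions, and production-grade personal data detection requires far broader coverage.

```python
import re

# Illustrative patterns only; a real deployment would need far wider
# coverage (names, addresses, identity numbers, and so on).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b(?:\+91[-\s]?)?\d{10}\b"),
}


def redact(prompt: str) -> str:
    """Mask obvious personal identifiers before a prompt is sent to a
    third-party generative AI tool."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt


print(redact("Email priya@example.com or call 9876543210 about the draft."))
# Email [EMAIL] or call [PHONE] about the draft.
```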
By addressing these concerns in their generative AI use policies, employers can ensure that their use of generative AI is responsible, ethical, and beneficial for the stakeholders involved.
Preventing bias
While AI has proven its mettle in streamlining processes, reducing costs, and increasing efficiency, potential risks such as bias and hallucinations have begun to arise in the use of such tools within an employment context. Although no instances of such bias have been reported in private organisations in India yet, increased use may result in unforeseen biases and hallucinations.
i) Bias in AI tools
One of the most significant risks of using AI tools in employment processes is the potential for bias. AI tools will only be as unbiased as the data they are trained on. If the data used to train these tools is biased, the output will eventually perpetuate the same biases present within the training data. This can lead to discriminatory practices that may violate anti-discrimination laws.
To ensure that third-party AI tools do not contain bias, organisations should require that tool providers constantly monitor tools for bias and remain responsible for identifying and mitigating any biases discovered. When carrying out evaluations of AI tools, organisations ought to pay close attention to samples of training data to ensure that these samples do not portray biases. To reduce bias, employers must also ensure that the data used to train these tools is diverse and representative of the workforce. Organisations may also choose to select tools with transparent decision-making processes that permit audits and the identification of potential biases, as sketched below.
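By way of illustration, a simple audit of a screening tool’s outputs might compare selection rates across groups defined by a protected attribute. The sketch below computes a disparate impact ratio; the 0.8 threshold is the US “four-fifths” heuristic, referenced purely as an illustrative benchmark, as Indian law prescribes no numeric standard.

```python
from collections import Counter

# Hypothetical screening outcomes, grouped by a protected attribute.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

selected = Counter(group for group, passed in outcomes if passed)
totals = Counter(group for group, _ in outcomes)
rates = {group: selected[group] / totals[group] for group in totals}

# Disparate impact ratio: lowest selection rate divided by the highest.
# The 0.8 cut-off is an illustrative benchmark only.
ratio = min(rates.values()) / max(rates.values())
print(rates, ratio, "review recommended" if ratio < 0.8 else "ok")
```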
ii) Hallucinations in AI tools
Another potential risk of using AI tools in employment processes is the potential for hallucinations. Hallucinations occur when AI tools make incorrect predictions or statements, which may occur owing to incomplete or missing data. To reduce the risk of hallucinations, employers must ensure that the data used to train these tools is accurate and complete. Employers must also ensure that these tools are regularly reviewed and updated to reflect changes in the workforce.
Organisations that identify bias or hallucinations in their AI tools ought to take immediate action to address these issues. This may involve retraining the tool on more representative data, updating the tool to reflect changes in the workforce, or discontinuing the use of the tool altogether.
Employers must also be prepared to address legal issues that may arise from the use of biased or hallucinating AI tools. This may include defending against discrimination claims or addressing concerns raised by regulatory authorities.
Conclusion
AI tools can be a valuable asset for employers in streamlining recruitment and performance review processes. However, employers must remain vigilant in identifying and addressing potential risks such as bias and hallucinations. Establishing a governance framework and setting out standards for the use of AI within organisations mitigates the legal and ethical risks that arise with such use.