Artificial Intelligence 2024

Last Updated May 28, 2024

India

Law and Practice

Authors



Spice Route Legal

Spice Route Legal has market-leading TMT, IP, and data protection practices that combine to offer unparalleled expertise in AI, offering pragmatic, business-oriented legal advice across industries. As AI continues to redefine the world, Spice Route Legal’s team has advised some of the most innovative companies at the intersection of technology, media, telecommunications, financial services, aviation, and life sciences on the risks, risk mitigation, accountability, transparency, and governance frameworks for the use of AI.

There are no specific Indian laws that govern AI or its use. Legal considerations arise from different legal frameworks, including:

  • the Information Technology Act 2000 (the “IT Act”) and rules issued under this law, which set out cybersecurity, data protection, and intermediary liability frameworks in India. This law also contains penal provisions that would apply in instances of unlawful training, creation, and use of AI;
  • the Indian Contract Act 1872, which governs contractual relationships that underline the provision of AI solutions and services;
  • the Sale of Goods Act 1930, which sets out a statutory framework in respect of the sale of goods – the modalities and provision of AI solutions may characterise them as goods under this law, which would trigger provisions on implied warranties, the ability to seek damages for specific types of product breaches, the ability to repudiate contracts in certain instances, etc;
  • the Consumer Protection Act 2019, which governs consumer rights, prohibits unfair trade practices, regulates misleading advertisements, and imposes obligations and liability on service providers for deficiencies in products and services;
  • the Digital Personal Data Protection Act 2023 (the “DPDPA”), which was enacted in August 2023 but is not yet in effect ‒ once in effect, this law will supersede data protection rules issued under the IT Act and set out the legal framework for data protection concerns that arise from the creation, training, and use of AI;
  • the Copyright Act 1957, which governs the copyright framework for the development and use of AI;
  • the Patents Act 1970, which would govern the patentability of AI inventions; and
  • the Indian Penal Code 1860, which sets out the substantive criminal law in India, and which would apply in instances of unlawful training, creation, and use of AI.

Key industry applications of AI and machine learning are expected to be in the healthcare, agriculture, education, telecom, infrastructure, and transportation sectors, with the government specifically focusing on these industries. No specific law defines, differentiates between, or regulates predictive and generative AI.

Healthcare

Integration of AI models within the healthcare industry is expected to increase access to, and the affordability of, quality healthcare in rural areas. Key applications include:

  • early detection and diagnosis supported by AI-powered analytics tools and imaging methods;
  • remote monitoring of patients through the use of robotics and IoT devices;
  • improvements in administrative functions such as record-keeping, appointment bookings, bill payments, etc; and
  • research and drug discovery.

Agriculture

AI in agriculture is expected to improve farm productivity and reduce wastage of produce through the development of new farming methods, improvements in administration, and reductions in the costs of transportation and distribution. These measures are expected to positively impact farmers’ income and livelihood.

Telecommunications

The Telecom Regulatory Authority of India (TRAI), which is the nodal telecommunications regulator, has directed telecommunications service providers to implement the use of AI within telecommunications systems and networks in order to detect and prevent spam and phishing attacks that occur through phone networks, calls, and SMS.

Education

AI is expected to improve access to, and the quality of, education across India. Educational institutions have begun implementing AI solutions to tailor their systems to students’ needs (such as changing language preferences). AI is also expected to improve the quality of online education systems, which ‒ in turn ‒ will enhance access to education.

Infrastructure

State governments are expected to adopt AI systems within their city planning department to improve public utility systems, public safety, and general management and administration. AI is also expected to bring about the development of “smart cities”, where residents will have access to better transportation and housing through AI-powered solutions.

Transportation

In line with infrastructure planning, the adoption of AI is also expected to ease road congestion and reduce accidents. Similarly, AI models are being used to create sustainable means of transportation and to optimise travel routes.

The Indian government has created programmes and initiatives to promote the use of AI and guide relevant stakeholders to implement and adopt AI, as follows.

  • The Ministry of Electronics and IT (the “MeitY”) has initiated the FutureSkills PRIME programme in collaboration with the National Association of Software and Service Companies (“NASSCOM”). The programme is intended to educate individuals on topics such as AI and data science through certification courses.
  • National Institution for Transforming India (NITI) Aayog, a central government-backed policy think tank, published the National Strategy for Artificial Intelligence in June 2018, which proposes to develop an ecosystem for the development and implementation of AI in India. It has also established “Centres of Research Excellence” to encourage research and development of AI and related technologies by connecting entities such as start-ups, enterprises, venture capitalists, government, and policy groups.
  • The central government has established the “National AI Portal”, which is a repository of AI-based research reports, news articles, government strategies, and other similar initiatives.
  • The MeitY ‒ in collaboration with the Ministry of Human Resource Development ‒ has launched the Responsible AI for Youth programme, aimed at educating younger individuals on the applications of AI in industries such as agriculture, transportation, and rural development (among other sectors).
  • The Department of Science and Technology is developing the National Mission on Interdisciplinary Cyber-Physical Systems, which intends to drive research and development of technologies such as AI, IoT devices, and quantum computing.

India does not presently have AI-specific legislation. The Indian government has proposed the enactment of the Digital India Act (DIA), which is intended to replace the IT Act. Although the government has not issued a formal draft, the DIA is expected to regulate emerging technologies, define and regulate high-risk AI systems, and legislate on the ethical use of AI-based tools. Certain types of AI may be treated as “intermediaries” under the DIA, with safe harbour protections offered under the IT Act likely to be extended to such types of AI.

The Indian government has also issued an advisory to certain intermediaries that incorporate AI in their products and services to ensure algorithms do not produce discriminatory results. Although the MeitY’s Minister of State has unofficially indicated that this advisory is specifically targeted towards larger intermediaries, the scale on which AI is deployed ‒ especially by start-ups in India ‒ may be affected in the medium term by this advisory.

No AI-specific legislation has been enacted in India.

Indian regulatory bodies have issued several White Papers, policies, reports, and recommendations on the use and adoption of AI, as follows.

  • National Strategy for Artificial Intelligence by NITI Aayog ‒ the strategy addresses how transformative technologies can impact India’s social and inclusive growth, specifically focusing on the healthcare, agriculture, education, infrastructure, and transportation sectors. It identifies the barriers to development and deployment of AI, including the lack of expertise in research and application of AI, the absence of an enabling data ecosystem, high resource cost and low awareness, and privacy concerns.
  • Responsible AI: Part 1 - Principles for Responsible AI by NITI Aayog ‒ this document analyses ethical considerations and discusses principles concerning the responsible management of AI systems that may be leveraged by relevant stakeholders in India. These principles include safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and protection and reinforcement of positive human values. It also highlights the harms that may arise from the use of AI such as lack of understanding of AI, challenges in explaining AI systems, potential exclusion of citizens in AI systems used for delivering important services and benefits, difficulty in assigning accountability, and privacy and security risks.
  • Responsible AI: Part 2 - Operationalising Principles for Responsible AI by NITI Aayog ‒ this document highlights the role of the Indian government during the development of AI to build trust in technology and be accountable to the public. It also proposes to provide a plan of action that the government and businesses should adopt to develop and maintain responsible AI standards.
  • Guidelines for Generative AI by NASSCOM ‒ these are guidelines established for persons engaged in the design, development, and use of generative AI technologies to promote and facilitate the responsible development and use of AI. It also highlights potential harms associated with generative AI technologies.
  • Recommendations on Leveraging Artificial Intelligence and Big Data in Telecommunication Sector by TRAI – this document highlights the importance of establishing a nationwide regulatory framework for AI that is applicable across all industries. It primarily states that AI regulations must not be confined to specific industries (eg, the telecommunications industry) and must instead regulate the use of AI generally based on its level of risk, with high-risk applications that directly impact humans subject to mandatory legal safeguards. The recommendations also propose the creation of a statutory body, the Artificial Intelligence and Data Authority of India (AIDAI), to oversee the development of responsible AI and regulate its use in India.

This is not applicable in India.

This is not applicable in India.

This is not applicable in India.

Indian data, information, and content laws do not explicitly regulate AI.

However, India’s newly introduced data protection law, the DPDPA (which is not yet in force), is noteworthy, as it entirely exempts from its scope personal data that is made publicly available by individuals themselves or by someone else under a legal obligation. Though the scope of “publicly available” data is yet to be defined, this exemption could potentially help foster AI development.

The National Strategy for Artificial Intelligence, released by NITI Aayog, proposes a blockchain-based decentralised data marketplace ‒ ensuring traceability, access controls, regulatory compliance ‒ and a price discovery mechanism for data to balance privacy concerns with the need for a large supply of data for AI training.

The “DEPA Training Framework”, issued by NITI Aayog, permits persons to receive large training datasets for analysis or training. Data shared through this framework must be aggregated and de-identified. Organisations that provide or disclose data are tasked with both seeking consent from data subjects to share such data and aggregating and de-identifying the data prior to disclosure. Participation in this ecosystem is subject to approvals from self-regulatory organisations, implementation of defined security and privacy controls, and contractual arrangements among participating entities. Participation permits large-scale processing of structured de-identified data for training AI models and may offer the ability to commercialise the sharing of such data.

As the DIA is intended to replace the IT Act, it will regulate the entire digital ecosystem in India. Objectives of the DIA include the development of the Indian digital economy, innovation, and ensuring India is considered a trusted player for digital products and solutions. It will define and regulate high-risk AI systems, develop ethical use of AI-based tools and develop accountability standards. It will also attempt to prevent user harm such as cybercrimes targeting women and children, regulate addictive technology, protect minors, provide users with digital rights, and curb the spreading of fake news and information.

In practice, organisations – especially larger companies that process or target a large number of users or roll out AI-enabled products or tools ‒ are actively taking measures to address commercial risks that may arise from the use of AI. These measures may also incorporate AI-specific terms and conditions to disclaim the use of AI, such as generative AI, to prevent liability for the results produced by such tools. Please see 11. Legal Issues With Predictive and Generative AI and 12. AI Procurement for further details.

There have been no conclusive judicial precedents on IP rights in respect of the use of AI.

Some Indian courts have recognised the benefits of using AI-powered tools to assist with investigations in cases involving missing persons and child pornography. However, courts have questioned the accuracy and reliability of AI-powered chatbots such as ChatGPT where parties have sought to establish certain facts through results generated by such tools.

Courts have not prescribed any definitions or standards for describing AI and machine-learning tools at this stage.

MeitY

The MeitY is the apex ministry established by the central government to regulate and be responsible for the development and facilitation of the use of AI in India. It has established a separate division known as the “Emerging Technologies Division”, which works towards fostering and promoting the usage of emerging technologies in India. In order to develop a framework to regulate AI, it has also constituted four committees on AI, which have published reports on issues such as ethics, cybersecurity, and re-skilling individuals.

NITI Aayog

The NITI Aayog is a public policy think tank established by the Indian government. It was primarily tasked with creating a national strategy on developing and implementing AI and related technologies in India, which was published in 2018. It has also published the “Responsible AI” approach document, which prescribed certain principles to guide relevant organisations on how to use AI in an effective and responsible manner.

NASSCOM

NASSCOM is a non-government body that works with the MeitY and other stakeholders to promote the responsible use of AI in India. NASSCOM AI is an initiative undertaken by the organisation to foster the creation, development and sustainable use of AI in India. NASSCOM has, among other articles and reports, also released the Guidelines on Generative AI, which provides a common set of standards and protocols that may be adopted by stakeholders while implementing generative AI tools within their services.

Other Regulatory Agencies

There are also regulatory authorities, such as TRAI, the Securities and Exchange Board of India, and the Insurance Regulatory and Development Authority of India, which are actively formulating recommendation papers and guidelines to regulate the use of AI in their respective sectors. However, these entities are expected to play a more active role once the government enacts specific legislation with regard to AI.

The MeitY defines machine learning as “algorithms and techniques that allow computers to “learn” from and make predictions based on data”. It also refers to it as “a branch of AI that specifically studies algorithms [that] learn and improve from training examples”.

Although it has not provided a definition of AI, the NITI Aayog has stated that AI machines may be classified as those that have the ability to perform cognitive tasks such as thinking, perceiving, learning, problem-solving and decision-making in a similar manner to humans.

As no regulations have been enacted with regard to AI, these definitions merely act as guiding principles on how the regulators conceptualise AI. At this stage, it will primarily be the courts’ responsibility to identify whether particular software is considered AI and ‒ in the absence of legislation ‒ to determine what impact this will have on the legal rights and liabilities of the parties involved in a dispute.

MeitY

The MeitY is actively taking steps to regulate AI and address associated issues through various policies, strategies, and working committee reports. The MeitY’s AI Committee reports aim to prevent harms such as the weaponisation of AI, cybersecurity risks, and privacy and ethical issues that arise from AI.

The MeitY’s objectives are:

  • to promote the adoption of emerging technologies in India;
  • to promote the use of robotics and other emerging technologies in the fields of manufacturing, healthcare, agriculture, and national security; and
  • to improve the efficacy of data-driven governance across various government entities.

NITI Aayog

NITI Aayog aims to provide a roadmap for the creation, development and use of AI in India and to guide stakeholders on how to use these technologies in a responsible and sustainable manner.

NASSCOM

NASSCOM primarily aims at preventing or mitigating the following risks with regard to AI:

  • increase in misinformation, disinformation, and hateful comments;
  • infringement of third-party IP rights;
  • privacy harms through violations of data protection regulations;
  • dissemination of harmful social, economic, and political biases;
  • surge in malicious cyber-attacks; and
  • environmental degradation.

NASSCOM’s objectives are:

  • to provide a uniform set of regulations for all stakeholders;
  • to promote the use of generative AI responsibly; and
  • to prevent generative AI from being deployed without appropriate safeguards and regulations.

No enforcement actions have occurred yet.

The primary bodies that have addressed AI standards are as follows.

  • The Bureau of Indian Standards (BIS) – the BIS is India’s national standard-setting body and is tasked with proposing minimum standards and benchmarks for goods and services. It has published several standards on artificial intelligence, including:
    1. IS/ISO/IEC 20546:2019, titled Information Technology: Big Data ‒ Overview and Vocabulary;
    2. IS/ISO/IEC/TR 20547-1:2020, titled Information Technology: Big Data Reference Architecture ‒ Part 1: Framework and Application Process;
    3. IS/ISO/IEC 20547-3:2020, titled Information Technology: Big Data Reference Architecture ‒ Part 3: Reference Architecture;
    4. IS/ISO/IEC/TR 24028:2020, titled Information Technology: Artificial Intelligence ‒ Overview of Trustworthiness in Artificial Intelligence;
    5. IS/ISO/IEC/TR 24029-1:2021, titled Artificial Intelligence (AI) ‒ Assessment of the Robustness of Neural Networks ‒ Part 1: Overview;
    6. IS/ISO/IEC/TR 24030:2021, titled Information Technology: Artificial Intelligence (AI) ‒ Use Cases;
    7. IS/ISO/IEC/TR 24372:2021, titled Information Technology: Artificial Intelligence (AI) ‒ Overview of Computational Approaches for AI Systems; and
    8. IS/ISO/IEC 24668:2022, titled Information Technology: Artificial Intelligence ‒ Process Management Framework for Big Data Analytics.

The BIS has also constituted a committee that is drafting further standards for publication.

  • The Telecom Engineering Centre (TEC) – the TEC is an agency tasked with setting minimum standards and providing certifications to telecommunications products under the authority of the Department of Telecommunications. In July 2023, the TEC published the Fairness Assessment and Rating of Artificial Intelligence Systems. It prescribes guidelines and standards on how to ensure fairness in AI systems and lays down parameters to identify and measure biases.
  • The Automotive Research Association of India (ARAI) – ARAI is an autonomous body focused on research and development, affiliated with the Ministry of Heavy Industries. It aims to create hardware and software tools that incorporate AI- and machine-learning-based algorithms within vehicles to improve efficiency and sustainability. It has established the Centre for System Development using Artificial Intelligence and Machine Learning Techniques to achieve these objectives.

The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) are the primary international standard-setting bodies whose certifications are sought by businesses in India. The BIS is in the process of adopting ISO/IEC 42001:2023, which specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organisations.

Government authorities are actively using and leveraging AI. Examples include:

  • use of the Supreme Court Portal for Assistance in Court’s Efficiency (SUPACE) to assist judges with case management and decision-making;
  • use of the Supreme Court Vidhik Anuvaad Software (SUVAS), which leverages machine learning to translate Supreme Court decisions into regional languages;
  • development of facial recognition systems that can identify anti-social elements, even in low-resolution images, in restricted zones as well as public places;
  • use of Face Recognition System under Disguise (FRSD), along with other AI-based systems, developed for the Indian Army; and
  • development of a facial recognition-based solution called ASTR, used to disconnect more than 3.6 million mobile connections that were obtained using fake or forged documents.

Indian courts have not yet issued notable decisions on the use of AI by the government.

In 2022, the Indian government announced the launch of 75 AI products and technologies during a symposium organised by the Ministry of Defence for protection from future threats and for the development and peace of humanity.

While the rise of AI-generated content has prompted discussions in relation to copyright ownership and product liability, there has been no conclusive judicial or regulatory guidance on IP concerns in connection with AI. The Indian government has clarified that the existing IP rights regime in India adequately addresses and protects AI-generated works.

There has been limited Indian regulatory or judicial guidance on data protection concerns as well. India’s existing data protection regime under the IT Act prescribes light-touch compliance for the processing of personal data. Once enacted, the DPDPA will exempt processing of publicly available personal data from its ambit entirely, which mitigates certain data protection concerns.

Generally, the emerging issues raised by generative AI are as follows.

  • Training data ‒ any use of training data should occur in a manner that does not infringe the IP rights of the owner of such data. Collection and processing of such data should be subject to checks on the legality of the data sources from both an IP and data protection angle. These data sources should either be in the public domain, made available under appropriate licence conditions, or collected on the basis of consent.
  • Input data ‒ AI providers may seek to ensure that responsibilities and corresponding liabilities arising from data or information that is inputted into generative AI tools vest with users. Users run the risk of infringing third-party IP rights when using third-party input data. Any use should therefore be structured in a manner that is either licensed appropriately or that falls within the scope of fair dealing exceptions under the Copyright Act 1957. From a data protection angle, input of personal data may require users’ consent unless specific exceptions apply. The person responsible for input data should also ensure compliance with applicable data protection principles. Enterprises that use generative AI solutions should factor these aspects into governance structures. AI providers may choose to impose contractual obligations on users to ensure that these responsibilities are adequately covered in contractual terms.
  • Output data ‒ Indian courts have recognised the importance of human creativity or involvement for any work to be eligible for copyright protection. Most AI solutions choose, through their terms of use, to provide that persons who use the tools own any generated data, remain responsible for its use (including verifying its legality and appropriateness), and remain liable for such use. Users should, from an IP perspective, ensure that use occurs in a manner that does not infringe the copyright of the owners of input data. Separately, at this stage, the exercise of data subject rights in respect of output data is less clear; it is likely that both the user and the AI provider that continues to process generated/output data would have to comply with data subject requests, unless statutory exemptions apply.

Please refer to 8.1 Emerging Issues in Generative AI.

Please see 8.1 Emerging Issues in Generative AI for more details.

Present Indian data protection laws do not address the use of AI or the rights that individuals have in relation to output produced by AI. Individuals’ rights are limited to the right to access and correct their information. They do not have the right to delete the information that is provided to an organisation, but they may opt out from the provision of their personal data, which will entitle the organisation to deny services to them. Provided that AI tools have procured training data in a manner consistent with data protection requirements, and users of such tools ensure compliance with data protection requirements while processing and inputting personal data, existing data protection risks may be mitigated.

The soon-to-be-implemented DPDPA provides individuals with rights to correction and erasure of their personal data, provided that processing is based on consent of the data subject. AI output would accordingly need to be corrected or erased when requested, to the extent it contains personal data about the individual, if the generation occurs on the basis of consent. Considering that the DPDPA exempts processing of publicly available data from its ambit, organisations that use AI tools may not need to factor significant data protection compliance if data sources are appropriately identified.

AI is increasingly being used in various aspects of the practice of law, to streamline processes, improve efficiency, and enhance decision-making. As lawyers are, at times, required to analyse a wide range of resources (eg, emails, documents, and communications) to identify key pieces of information, the use of AI-powered tools to perform such functions significantly increases efficiency. Lawyers are also relying on AI tools to improve research functions, as such tools have the ability to analyse vast amounts of information with relative accuracy and speed.

AI is also being utilised in various automated support services. Chatbots equipped with natural language processing capabilities can provide instant responses to common legal queries, offer preliminary legal advice, and assist in client onboarding processes. Additionally, AI-driven legal research platforms can efficiently search through vast databases of case law, statutes, and legal precedents to provide relevant insights and analysis to lawyers, thereby facilitating informed decision-making and strategy development. However, courts have questioned the accuracy and reliability of AI-powered chatbots such as ChatGPT where parties have sought to establish certain facts through results generated by such tools.

Separately, the integration of AI in litigation and other practices of law also raises the following related ethical issues.

  • Bias ‒ AI tools may not always produce accurate or reliable results, leading to potential errors or biases in decision-making. Legal professionals must evaluate whether AI-powered systems are sufficiently reliable to build into their advice.
  • Transparency – the decision-making process of AI is often unclear and poses challenges to transparency and accountability in the legal profession. Legal professionals that rely on AI systems must ensure that these systems are transparent in their processes and adequately document their procedure and data collection practices.
  • Data protection – legal professionals who rely on AI tools must also ensure that adequate data protection measures are implemented to safeguard sensitive client information from unauthorised access or disclosure.

More details about the risks of the use of AI are set out in 11. Legal Issues With Predictive and Generative AI.

Owing to the lack of specific AI regulations in India, there is limited precedent on the determination of liability with regard to the provision and use of AI. In this lacuna, traditional liability theories would continue to apply.

  • Contractual liability ‒ parties to a contract for the provision of AI-powered tools and services such as software licence agreements or software development agreements will need to contractually determine and allocate liability. This allocation would depend significantly on bargaining power and technical capabilities at each party’s end.
  • Tortious liability – the requirements for imposition of liability for harm resulting from AI-enabled products or services are as follows.
    1. Strict liability – certain AI products, such as autonomous vehicles, may need to be regulated on the principles of strict liability, owing to the difficulty of determining liability where accidents are solely attributable to a failure of the vehicle’s AI systems. In such circumstances, liability for the accident may need to be borne by the AI developer, regardless of any absence of fault or involvement on its part.
    2. Product liability – the Consumer Protection Act 2019 imposes liability for the defect in a good or deficiency in a service on the product manufacturer (the actual developer of the AI), product service provider (the organisation that leverages AI to provide services), and product seller (the consumer-facing entity that sells AI-powered products or provides professional services), depending on the type of harm caused to the consumer. By way of example, if any harm or loss was caused by an inherent defect in the AI’s development, liability may be imposed on the manufacturer, whereas liability may also be attributable to the service provider if the AI software was used in a manner that infringes third-party IP rights.

As regulatory efforts for AI are in their early stages, it is unclear as to how courts will adjudicate on such cases. Accordingly, it becomes important for organisations to:

  • contractually allocate liability; and
  • procure adequate errors and omissions and professional liability insurance where their business is reliant on AI tools and products.

Further clarity on these aspects is expected once the government enacts specific legislation or provides guidance on AI and its associated liability.

As India does not have regulations that are targeted towards the use of AI, the imposition and allocation of liability for the unlawful use of such tools is determined commercially at this stage.

Bias in algorithms presents both technical and legal challenges that have significant implications for consumers, companies, and regulatory bodies. From a technical standpoint, bias in AI algorithms refers to the systematic and unjust discrimination towards certain groups or individuals based on factors such as race, gender, or socioeconomic status. These biases may also violate anti-discrimination laws. Discrimination based on protected attributes such as gender is unlawful and algorithms that perpetuate discrimination may expose organisations to legal liability.

Biases may also lead to consumer grievances, as AI models tend to impose discriminatory pricing or favour certain consumer groups over others. Addressing bias in algorithms would require collaboration between industry stakeholders and regulatory bodies to ensure that AI systems are fundamentally fair.

Data protection and privacy concerns with regard to AI and related technologies stem from the manner in which these tools collect information from available sources. As publicly available data is typically not categorised as personal data, AI models may be extensively trained by using such sources. However, with the development of AI and machine learning models, it becomes increasingly likely that an individual’s personal information may be used by such technologies to produce the required results. By way of example, an individual’s social media accounts may be analysed by AI-backed tools to create a behaviour profile of the individual in order to determine purchasing trends.

The primary concern in such use cases is with regard to the individual’s consent. As AI tools may procure data through all available sources on the internet, it becomes increasingly difficult to obtain an individual’s consent. Data protection laws in the EU and other major jurisdictions provide individuals the right to object to any automated decision-making that was carried out without human oversight to account for such issues. However, this right has not been included under the DPDPA at this stage ‒ although the government may introduce such rights through subsequent regulations.

Similarly, the use of AI models raises cybersecurity concerns. Traditional encryption and baseline security measures are proving inadequate against advanced technologies such as AI and quantum computing. Adopting enhanced encryption measures is advisable.

AI tools are capable of using an individual’s biometric features to develop photographs and videos that are entirely artificially rendered. This is a significant concern for celebrities and politicians, as bad actors may use such technology to disseminate misinformation and defame individuals.

Current Indian data protection laws classify biometric information as sensitive personal data and require organisations to obtain written consent from the individual prior to processing such information. The DPDPA also requires consent to be obtained in such situations (unless used in employment-related matters). However, where AI tools are deployed to gather vast amounts of biometric information from public and non-public sources, it is difficult to verify whether consent was obtained and, if so, whether it was adequate to the extent of the personal data processing undertaken by the tool. As advanced generative AI tools may become available to the public, regulatory action may be required to protect the privacy and reputation of individuals.

Though it may be difficult to enforce the lawful use of generative AI tools, regulators may impose obligations on the owners of AI models to build non-removable identification marks into AI-generated photographs and videos. This will help viewers distinguish between real and artificially rendered products. However, this may not completely prevent mala fide actors from removing these protections by breaching security measures. Accordingly, appropriate penalties and sanctions for breaches of AI-related regulations must also be in place to ensure deterrence.

Data protection regulations do not provide individuals the right to object to automated decision-making activities undertaken by organisations. By way of example, where an employer deploys AI to screen candidates on the basis of the information provided by them, the candidate does not have a statutory claim to seek a review of such decisions. This may adversely affect individuals, as AI models may have inherent biases or errors. Although affected individuals may object to these decisions by claiming that the AI model is defective or that the results were produced based on biased models, it is impractical to raise legitimate claims where the manner in which the decision was arrived at by the organisation is not disclosed to the public.

The government’s proposal for the DIA, which is intended to repeal and replace the existing overarching law on information technology, indicates that it may provide all individuals with the right to object to automated decision-making. However, at this stage, no developments have been reported with regard to these aspects.

Indian data protection laws require the disclosure of the personal datasets collected and the purposes for the processing of such data. This would also apply in cases where AI and machine learning tools are incorporated within services to improve efficiency and decrease human dependency in cases of errors or customer grievances. However, under current and proposed regulations, there are no specific obligations imposed on organisations to disclose the fact that the service uses AI to deliver particular features.

By way of example, an organisation that implements an AI chatbot on its website or operates an AI-powered customer service number is not statutorily required to disclose this information. However, as current data protection laws and the DPDPA encourage transparency with regard to the processing of personal data, it is recommended to make these disclosures to all affected individuals. These disclosures may be made through a publicly available privacy policy.

The use of AI may lead to various anti-competitive practices by larger organisations, as follows.

  • Scope for abuse of dominant position ‒ larger organisations may leverage their resources and market share to obtain and implement AI tools within their business operations. As these companies have the ability to enter into exclusive agreements with manufacturers and distributors, AI technology may not be available for the general market. Such practices are considered anti-competitive and would be prohibited by competition laws in India.
  • Predatory pricing – the adoption of AI in certain operations has been shown to drastically reduce costs and expenses. Companies that have the ability and resources to acquire these tools may be able to significantly lower the prices of goods and services offered, which tends to eliminate competitors in the market.
  • Discriminatory pricing – as AI has the ability to impose dynamic pricing and implement pricing strategies without human oversight, there are risks of biases affecting these decisions. By way of example, pricing strategies implemented by AI models may prefer certain customer profiles over others owing to biases that are present within learning algorithms. This in turn may lead to discriminatory pricing in relation to the products offered by the organisation.

The following are key concerns that arise in offerings by AI suppliers and ought to be factored into transactional contracts with customers. The ability to effectively negotiate these depends significantly on commercial bargaining power, together with technical capabilities at both parties’ ends. Any transaction document should be supplemented by practical due diligence and risk-aware procurement processes.

Data Concerns

Enterprise customers may overlook the fact that tools offered by AI suppliers involve large-scale processing and learning of their proprietary and confidential data. Contracts should clearly specify usage restrictions, if any. Customers may choose to deploy on-premises solutions or local instances to limit a supplier’s ability to access data and may also seek restrictions on processing or even learning undertaken by machine learning tools.

Data protection concerns – especially in respect of personal data that may have been used to initially train the AI solution or otherwise processed in connection with a customer’s use of the solution – would subsist in any commercial conversation. The contractual framework should factor in responsibility for compliance, identification of legal bases, anonymisation where possible, and corresponding data subject rights.

Cybersecurity

Shared infrastructure utilised by AI suppliers – especially in the context of AI-as-a-service solutions – may be leveraged by threat actors to gain access to large-scale data and information. Contracts should define security protocols, adequately define breach response processes and timelines, and allocate liability among the parties for these issues. Any contract negotiation must be supplemented by security due diligence and monitoring to mitigate risk.

Disclaimers of Liability

AI suppliers typically seek to disclaim liability in respect of results generated through the use of AI. For instance, suppliers may choose to disclaim liability in respect of the control, endorsement, or recommendation of the AI, or representations on its efficacy, appropriateness, or suitability. Similar disclaimers on the use of generative AI – including in respect of input data (where liability is shifted to customers), editorial rights, and incomplete, inaccurate, inappropriate, or offensive output – ought to be factored in. Depending on negotiating power, suppliers may choose to disclaim liability for training data. This would typically be resisted by larger enterprise customers.

Bias

Customers may insist on guarantees that results are free from bias, accurate, and in compliance with applicable law. These must be balanced against a supplier’s ability to control training data and to address data protection and IP concerns in respect of that data. Suppliers are likely to be required to conduct impact assessment tests and undergo regular product testing to ensure compliance with industry standards.

Indian companies have begun adopting AI models within their recruitment and employee monitoring processes to facilitate quicker and more objective evaluation of candidates and employees. Large organisations that receive several applications implement AI-based tools within their recruitment systems to allow mass screening of applications. Typically, a few instructions are fed into the system, which detects whether minimum parameters are present within candidate applications. This allows the organisation to short-list a large number of applications within a few hours or days. Similarly, companies are increasingly relying on AI models to evaluate the work performance and attendance of employees.

Although such measures are generally advantageous and improve efficiency, they come with the risk of decisions being influenced by algorithmic biases and technical errors. By way of example, interview systems managed by AI may not recognise certain accents or mannerisms, which may lead to oversights and inaccurate evaluations. Companies should consider undertaking regular impact assessments to mitigate these risks.

With the increase in remote working models, employee monitoring efforts have increased ‒ in particular, the use of AI-backed monitoring software. Tools for automatically detecting office attendance, employee working hours, etc, implement AI to achieve a more advanced level of monitoring with increased accuracy.

However, as such tools are fed with pre-decided parameters, there may be instances where an employee’s behaviour is not factored in. By way of example, if an office uses face recognition technology to record employee attendance, there may be instances where certain physical features are not capable of being identified by the software. These technical errors and biases may have a direct impact on employees, and such tools must therefore be relied upon with caution.

Further, data protection concerns arising from the use of such tools must be taken into consideration prior to their implementation. Adequate notices must be provided to, and consents obtained from, all employees, and the extent of data collected by the AI software must be limited. By way of example, facial images processed through this technology must be used solely for the purpose of employee monitoring.

Digital platforms such as cab aggregators and food delivery service providers have begun to adopt AI systems within their business operations. Cab aggregators typically employ AI to ensure route optimisation, implement dynamic pricing, and detect fraudulent bookings. Similarly, food delivery service providers are adopting these tools for features such as demand prediction, delivery route optimisation, AI-powered customer support, and customer feedback analysis.

At this stage, these practices are solely governed by data protection, consumer, and competition regulations. By way of example, cab aggregators that use AI to implement dynamic pricing must ensure that such processes are based on fair pricing models and are not discriminatory towards certain consumer groups. Further, digital platforms that act as intermediaries must also conduct due diligence exercises to ensure that their platform does not fall afoul of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021. Certain intermediaries that incorporate AI in their products and services were also specifically directed by the government to ensure that their algorithms do not produce discriminatory results.

However, these measures apply largely to the general operation of the platforms. With the increasing reliance on AI tools, regulators may need to prescribe sector-specific laws to ensure the responsible use of AI by large digital platforms.

Use of AI by Financial Services Companies

Financial services companies in India utilise AI for a range of applications, including fraud detection, risk assessment, customer service, credit scoring, and investment management.

AI algorithms analyse vast amounts of data to identify patterns, trends and anomalies, thereby enabling financial institutions to make data-driven decisions, automate processes and offer personalised services to customers.

Apart from risks such as algorithmic biases and privacy concerns, businesses in the financial sector must also take into account the risks of biases in repurposed data. Such companies may repurpose historical data for training AI algorithms, potentially introducing biases present in the data. Biases in repurposed data could lead to discriminatory practices, resulting in unfair treatment of certain customer groups (such as for credit facility offerings). To address these concerns, the Reserve Bank of India may publish formal guidelines for the use of AI tools by financial services companies and other regulated entities.

The government has not prescribed specific regulations governing AI in healthcare. However, the Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare published by the Indian Council of Medical Research aim to establish an ethical framework for the development, deployment, and adoption of AI-based solutions in healthcare. The guidelines cover ethical principles, stakeholder guidance, an ethics review process, and the governance of AI use.

The healthcare industry may leverage AI for the following use cases:

  • AI in Software as a Medical Device (SaMD) tools can greatly improve the precision, efficiency and effectiveness of medical devices, as AI and ML can process extensive data, identify patterns and make accurate predictions, leading to faster and more personalised diagnoses and treatments;
  • robotic surgery (ie, surgery conducted through the use of robots and advanced software) offers medical practitioners various alternatives for performing invasive surgeries in safer ways – although the use of such tools presents legal challenges, such as questions on liability for medical misconduct and negligence, and data protection concerns;
  • the discovery of new drugs and improvements in their efficacy; and
  • improvements in diagnoses through increased accuracy and faster detection.

No specific regulations with regard to the use of AI in autonomous vehicles have been prescribed in India. As the use of autonomous vehicles is in its early stages in India, regulations governing such technology are expected to be conceptualised over the coming years.

No specific regulations with regard to the manufacturing of AI have been prescribed in India. However, manufacturers will need to take into consideration minimum standards prescribed by applicable standard-setting bodies and general consumer protection laws when developing AI for general commercial use.

Though India does not have any regulations that specifically govern the use of AI in professional services, businesses in the legal, accounting, consulting and other professional fields must take into account various aspects such as liability and professional responsibility, confidentiality, IP, client consent, and regulatory compliance while using AI.

Liability

Service providers using AI tools are responsible for the accuracy and integrity of the outcomes generated by these tools. Courts may impose liability on professionals for errors or misconduct arising from the use of AI. Professionals must ensure that AI systems are appropriately designed, implemented, and monitored to meet professional standards and ethical obligations.

Confidentiality

Service providers may have contractual obligations to maintain client confidentiality and protect sensitive information. AI systems used in professional services must adhere to strict data privacy and security standards to safeguard client data from unauthorised access or disclosure. Professionals must ensure that AI systems comply with relevant data protection and cybersecurity laws and regulations to meet minimum standards of security.

IP Rights

The implications of using AI for IP rights, including the ownership of AI-generated works and the protection of proprietary algorithms or datasets, must also be taken into account by such service providers. As Indian laws do not provide AI-specific regulations for these aspects, parties will need to contractually determine the ownership and licensing of IP developed through the provision of services.

Client Consent

Service providers must obtain adequate consent from customers prior to using AI tools or algorithms to process personal data.

Regulatory Compliance

Businesses using AI to perform professional services must ensure compliance with applicable laws, regulations, and industry standards governing their practice areas. This includes regulatory requirements related to data protection, privacy, security, financial reporting, and professional conduct.

Indian courts have generally held that results produced without human creativity or involvement are not eligible for patent or copyright protection.

In India, trade secret protection can be obtained without application or registration. In the context of AI, trade secret protection could extend to output, data sets, unique algorithms, and machine learning techniques. However, the effectiveness of such protection in the AI context is contingent upon restricted access to AI outputs, which may not necessarily be helpful where the desired outcome is the commercial exploitation of AI-generated content.

A noteworthy development occurred when an AI-based app, RAGHAV, was recognised as a co-author of an artwork. However, the Indian Copyright Office later issued a notice of withdrawal of the registration of the AI author and requested that the human author provide details regarding RAGHAV’s legal status.

Separately, in July 2021, the Parliamentary Standing Committee on Commerce in its 161st report recognised the inadequacies of the Copyright Act in addressing ownership, authorship, and inventorship concerning AI-generated works. The committee recommended a review of existing IP rights laws to incorporate AI-generated works within their purview. However, in February 2024, the Minister of State for Commerce and Industry clarified that the current patent and copyright frameworks are well equipped to protect AI-generated works and related innovations.

There are risks of copyright infringement, as works or products created using OpenAI’s tools may include copyrighted material. Further, issues with regard to ownership of the output may arise, considering that the general rule under Indian copyright law is that the creator of the content owns the IP rights to that content – the AI may play a significant role in generating the content, which complicates content ownership issues.

Companies can advise their board of directors to create a comprehensive AI strategy and governance framework that would consider and address the following:

  • Checklists and due diligence – detailed checklists that have been prepared in consultation with all stakeholders should be followed prior to the development, usage, and roll-out of AI or AI-enabled products and services. These checklists ought to consider the nature and use of AI, the sector the company operates in, whether the subject matter of the AI is regulated, and whether the customers that the company works with are regulated entities (among other factors). Companies should conduct regular due diligence with legal and business teams to analyse laws applicable to the AI products and services, as well as conduct a review of datasets that are embedded into the AI products and services that are rolled out to ensure there are no data protection, IP, bias, and ethical concerns.
  • Data protection and cybersecurity issues ‒ companies should advise their board to consider all the data protection risks that may arise through the use of AI and generative AI models, particularly with regard to the data that has been input into these systems, whether there will be implications if personal data has been fed into these systems, the rights associated with such data, and whether relevant licences have been obtained for the use of such data. Companies should also ensure the adoption and implementation of robust data governance practices to manage the collection, storage, processing and sharing of data used in AI systems. These policies must also address cybersecurity risks associated with AI, including potential vulnerabilities in AI systems, the potential for data breaches, and exposure to cyber-threats. AI systems should be designed with security in mind, with measures implemented to protect against unauthorised access, malicious attacks, and data breaches.
  • IP issues ‒ please refer to 8. Generative AI and 15. Intellectual Property for details on IP risks that companies should consider. The impact of employing AI on IP rights – specifically, the ownership of AI-generated content and safeguarding proprietary algorithms or datasets – should generally be considered.
  • Competition issues ‒ AI-powered systems may have inherent biases that lead to unfair or inaccurate results. In instances where companies use these tools to assist with their business policies and operations, there is a likelihood of inherent biases fundamentally affecting these processes. By way of example, AI-backed pricing policies may produce anti-competitive results such as the imposition of discriminatory pricing owing to algorithmic bias towards certain segments of the customer base. Organisations must ensure that their AI governance framework sets up adequate safeguards to prevent or mitigate the risk of these issues.
  • Advisory committees ‒ advisory committees should be set up with relevant senior commercial stakeholders and technical experts to advise on the range of issues surrounding AI, taking into account the risk levels of the AI offerings.
  • Bias and fairness issues ‒ in instances where the organisation is looking to onboard third-party AI service providers to assist with their internal and business operations, the following measures may be undertaken:
    1. Provisions should be included within the services contract that allow the organisation to inform the service provider of any bias within results produced by the AI software and seek remediation of these issues. Additional obligations that contractually require the service provider to conduct routine monitoring of their software to detect bias and other deficiencies may be imposed as well.
    2. The organisation may insist on the inclusion of audit rights under the service agreement to allow inspection of the back-end functioning of the software and associated documents and other relevant materials (subject to confidentiality obligations). This ensures fairness and transparency in the decision-making processes of the AI software.
    3. Service providers may also be required to ensure that any data used to train or improve the AI software must be legally obtained (ie, the service provider must have all relevant permissions and licences or the data must be publicly available for such purposes).

By addressing these key issues, corporate boards of directors can better identify and mitigate risks in the adoption of AI, enabling the organisation to leverage AI technologies effectively while safeguarding against potential challenges and pitfalls.

Please refer to 16.1 Advising Directors.

Spice Route Legal

14th Floor
SKAV 909 Building
Lavelle Road
Ashok Nagar
Bengaluru
Karnataka 560001
India

contact@spiceroutelegal.com www.spiceroutelegal.com/

Trends and Developments


Authors



Spice Route Legal has market leading TMT, IP and data protection practices that combine to offer unparalleled expertise in AI, offering pragmatic, business-oriented legal advice across industries. As AI continues to redefine the world, Spice Route Legal’s team has advised some of the most innovative companies at the intersection of technology, media, telecommunications, financial services, aviation, and life sciences on the risks, risk mitigation, accountability, transparency and governance frameworks for the use of AI.

Artificial Intelligence in the Indian Workspace: Developing Guardrails

Artificial intelligence (AI) has emerged as a transformative force in the Indian workspace, revolutionising industries across the spectrum from finance to healthcare. With a burgeoning technology ecosystem and a skilled workforce, India stands at the forefront of AI adoption and innovation. Early adopters of this technology, which include both enterprises and workforce members, have integrated AI into work for a variety of reasons ‒ from personal and professional upskilling to increased efficiency and automation, employee monitoring, and recruitment. This rapid integration of AI technologies into various facets of the Indian economy has brought the lack of regulatory guardrails around its use into sharp focus – especially within the Indian workspace and workforce. The advantages that AI offers must necessarily be balanced against larger risks, ranging from bias and job displacement to data protection concerns.

This article sets out the current legal landscape that governs AI use in the Indian workforce, which – in the absence of dedicated AI-specific legislation – serves as a framework for promoting responsible and ethical AI deployment. The article also aims to identify scenarios where existing regulations may fall short, underscoring the need for tailored approaches to address emerging challenges. By examining both the strengths and limitations of existing frameworks, this review aims to empower stakeholders in fostering a culture of ethical AI usage within Indian workplaces.

Present legal framework

Employment laws

A majority of Indian employment laws predate the 21st century and are typically unsuccessful in regulating the use of technology in employment processes. While most Indian labour laws and labour courts are employee-centric and offer actionable rights to individuals, the actual exercise of these rights is ineffective in a technological context, given that most laws do not contemplate such use.

In recent years, the Indian government has rewritten and reorganised the existing labour and employment laws into new sets of labour codes, which have not yet been implemented. Much like their predecessors, these codes lack the ability to regulate the use of technological tools and AI in employment and recruitment-related matters. While certain laws (such as the Rights of Persons with Disabilities Act 2016) explicitly prohibit certain types of discrimination and bias, and the Industrial Disputes Act 1947 as well as state laws establish processes for addressing lay-offs resulting from AI-related job displacement, the majority of employment laws are inadequately equipped to address concerns regarding the use of AI.

Data protection laws

Since 2011, the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules 2011 (the “SPDI Rules”) have set out India’s data protection framework. The SPDI Rules are a light-touch set of rules issued under the larger framework of the Information Technology Act 2000 (the “IT Act”). The SPDI Rules largely regulate sensitive personal data or information (SPDI), a subset of personal information comprising passwords, biometric information, medical information, sexual orientation, and financial information. The SPDI Rules require businesses to seek consent for the collection and processing of SPDI. Obligations with regard to the provision of privacy policies, the implementation of reasonable security practices and procedures, and data subject rights apply to all forms of personal data. Since the SPDI Rules have not been significantly enforced in an employment context, the effectiveness of the framework’s enforcement mechanism has been frequently questioned.

In 2023, India enacted the Digital Personal Data Protection Act 2023 (DPDPA). The law is expected to be implemented in 2024. Unlike the SPDI Rules, the DPDPA will govern all forms of personal data equally. The DPDPA largely provides two grounds for processing personal data:

  • consent that is free, specific, informed, unconditional, unambiguous, and expressed with a clear affirmative action; or
  • legitimate uses, which include the use of personal data for employment purposes.

Employers are generally expected to rely on the latter to process personal data in most contexts; however, it is presently unclear whether employers may rely on this ground to process personal data at the recruitment stage (ie, before the commencement of employment) or once the employment relationship is terminated.

Reliance on the “employment purposes” ground of processing significantly decreases the obligations placed on employers in relation to their use of personal data. Employers using this ground do not need to provide employees with privacy notices, obtain consent, or undertake any other transparency obligations (including the provision of data subject rights). Employers theoretically have an unfettered right to use the personal data of employees.

Separately, the DPDPA does not apply to data that is made publicly available by individuals. Accordingly, any data that is obtained from a public source and used by an employer is not governed by the DPDPA. This gives employers the general ability to access and otherwise process personal data concerning their employees that is publicly available without adherence to any data protection standards.

IT laws

The IT Act is another key piece of legislation that governs the use of AI in India. The IT Act is particularly relevant, as it provides a framework for the access and usage of electronic records, digital signatures, and other forms of electronic communication that are key factors in the development and deployment of AI systems.

While the IT Act restricts unauthorised access to another’s computer system, the application of this restriction in an employment context remains untested. Recent Indian jurisprudence indicates that workplace privacy is confined to non-public spaces, with no reasonable expectation of privacy for certain types of personal data, such as demographic data. As a best practice, employers ought to avoid the use of AI tools that intrude into employees’ personal assets or personal online spaces.

Key uses – suggested guardrails

Recruitment

Several AI-powered recruitment tools, such as resume parsing, biometric processing, and specialised video interviews, aid HR teams in streamlining recruitment processes. However, the use of such tools should be carefully structured to ensure compliance with data protection rules and scraping-related requirements.

Generally, where recruitment data is obtained from individuals (such as through resume submissions or direct applications), obtaining clear consent from the individual regarding the use of AI in respect of data ‒ coupled with disclosures ‒ would mitigate data protection concerns. An organisation that obtains recruitment data from third parties should ensure these parties have the necessary consents for its proposed use.

On the other hand, a twofold risk arises when employers use AI tools that scrape public sources of data for recruitment-related activities: the first arises from the IT Act, which prohibits unauthorised access to computer systems, and the second is a contractual breach of terms and conditions or end-user licence agreements that may prohibit scraping from certain websites.

Stakeholders should therefore, prior to the collection of data from public sources, ensure that data collection activities are reliable, compliant with law, and do not give rise to third-party infringement claims. Employers must also ensure that the use of third-party AI tools is underpinned by adequate contractual protections in respect of third-party rights.

Employee monitoring

A growing number of employers have begun to use AI to monitor employee activities for various purposes, including productivity measurements, implementing work-from-home policies, offering incentives and benefits, detecting misconduct, and ensuring compliance with internal policies. Such use brings with it its own set of ethical and privacy considerations.

As the first step in implementing AI-powered employee monitoring tools, organisations ought to ensure that the use of such tools is transparent. While the SPDI Rules and the DPDPA mandate only a limited form of transparency from a data protection standpoint, communicating the purpose, scope, and consequences of monitoring, the type of data or activity being monitored, and the monitoring methods establishes a responsible framework for use. Indian law does not require organisations to consider whether alternative and less intrusive monitoring would be more appropriate; in the absence of such a requirement, the ability to demonstrate reasons for the use of monitoring tools is a useful step to mitigate challenges.

Employers should also ensure compliance with data minimisation principles while undertaking monitoring activities. Though the DPDPA offers a broad employment exemption that does not strictly require data minimisation, employers may choose to, as a best practice, ensure that personal data processed in connection with AI monitoring is only used for purposes disclosed to employees; any other use ought to be subject to reasonable disclosures. Employers should also ensure that data subject to monitoring is accurate and up to date.

The SPDI Rules and the DPDPA offer a limited set of data subject rights. Though employees do not have rights to access, correct, or seek the erasure of personal data that is processed for employment-related purposes under the DPDPA, as a best practice and in line with global trends, employers may choose to facilitate such rights. Tools should offer data subject rights functionalities and, where applicable, employees should be aware of how to exercise these rights.

Organisations should also be cognisant of the fact that AI tools may not always produce accurate outputs. Any decisions made on the basis of outputs from monitoring tools – including the offering of benefits or incentives, or other employment-related decisions – should involve human participants, with clearly defined standard operating procedures that specify the extent to which outputs or decisions generated by such tools may be relied upon.

Use of generative AI by employees

Employers have also begun grappling with the use of generative AI and large language models by employees. While the accuracy, abilities, and output capacities of generative AI continue to grow at a rapid pace, employers must be wary of issues that may arise with such use, including claims of IP infringement, ownership, transparency, bias, discrimination, and ethical considerations. As a general note, Indian law does not prohibit the use of generative AI.

If employers permit employees to use generative AI, establishing policies around its use may mitigate risks that arise. Employers that choose to harness generative AI should consider risk assessments that involve security checks, due diligence on generative AI tools, and analyses of the tools’ data sources and training mechanisms to mitigate data protection and third-party IP infringement risks. A list of approved generative AI tools is a useful mechanism for overseeing compliance.

Employers may choose to dictate the data sources that employees can use within tools – for instance, personal data or confidential information that is subject to access controls ought not to be shared with these tools – as well as the manner in which generated outputs may be utilised. Any use of generated output should ideally be subject to human involvement or approval. Quality control measures that set out accuracy, reliability, and adherence to internal content standards ought to supplement generative AI use. This may be achieved through the institution of specific content-related policies, especially concerning the use of trade marks, company-copyrighted materials and designs. Organisations may choose to ‒ where appropriate ‒ attribute generated output to the use of generative AI tools, so that the exact source and nature of errors are clearly identified and mitigated.

By addressing these concerns in their use of generative AI policy, employers can ensure that their use of generative AI is responsible, ethical, and beneficial for involved stakeholders.

Preventing bias

While AI has proven its mettle in streamlining processes, reducing costs, and increasing efficiency, potential risks such as bias and hallucinations have begun to arise in the use of such tools within an employment context. Although no instances of such bias have been reported in private organisations in India yet, increased use may result in unforeseen biases and hallucinations.

i) Bias in AI tools

One of the most significant risks of using AI tools in employment processes is the potential for bias. AI tools will only be as unbiased as the data they are trained on. If the data used to train these tools is biased, the output will perpetuate the biases present within the training data. This can lead to discriminatory practices that may violate anti-discrimination laws.

To ensure that third-party AI tools do not contain bias, organisations should ensure that tool providers constantly monitor tools for bias and remain responsible for identifying and mitigating any biases discovered. When carrying out evaluations of AI tools, organisations ought to also pay close attention to samples of training data to ensure that these samples do not portray biases. To reduce bias, employers must also ensure that the data used to train these tools is diverse and representative of the workforce. Organisations may also choose to select tools with transparent decision-making processes that permit audits and identification of any potential biases.

ii) Hallucinations in AI tools

Another potential risk of using AI tools in employment processes is the potential for hallucinations. Hallucinations occur when AI tools make incorrect predictions or statements, which may occur owing to incomplete or missing data. To reduce the risk of hallucinations, employers must ensure that the data used to train these tools is accurate and complete. Employers must also ensure that these tools are regularly reviewed and updated to reflect changes in the workforce.

Organisations that identify bias or hallucinations in their AI tools ought to take immediate action to address these issues. This may involve retraining the tool on more representative data, updating the tool to reflect changes in the workforce, or discontinuing the use of the tool altogether.

Employers must also be prepared to address legal issues that may arise from the use of biased or hallucinating AI tools. This may include defending against discrimination claims or addressing concerns raised by regulatory authorities.

Conclusion

AI tools can be a valuable asset for employers in streamlining recruitment and performance review processes. However, employers must remain vigilant in identifying and addressing potential risks such as bias and hallucinations. Establishing a governance framework within organisations and setting out standards for the use of AI mitigates the legal and ethical risks that accompany these tools.

Spice Route Legal

14th Floor
SKAV 909 Building
Lavelle Road
Ashok Nagar
Bengaluru
Karnataka 560001
India

contact@spiceroutelegal.com www.spiceroutelegal.com