Contributed By Spice Route Legal
There are no specific Indian laws that govern AI or its use; instead, legal considerations arise under a number of existing legal frameworks.
Key industry applications of AI and machine learning are expected to be in the healthcare, agriculture, education, telecom, infrastructure, and transportation sectors, with the government specifically focusing on these industries. No specific law defines, differentiates between, or regulates predictive and generative AI.
Healthcare
Integration of AI models within the healthcare industry is expected to increase access to, and the affordability of, quality healthcare in rural areas. Key applications include:
Agriculture
AI in agriculture is expected to improve farm productivity and reduce wastage of produce through the development of new farming methods, improvements in administration, and reductions in transportation and distribution costs. These measures are expected to positively impact farmers’ income and livelihood.
Telecommunications
The Telecom Regulatory Authority of India (TRAI), the nodal telecommunications regulator, has directed telecommunications service providers to implement AI within telecommunications systems and networks in order to detect and prevent spam and phishing attacks carried out through phone calls and SMS.
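By way of illustration only ‒ TRAI’s direction does not prescribe any particular technique ‒ the following is a minimal sketch in Python (using scikit-learn) of the kind of SMS spam classifier a provider might deploy; the sample messages, labels, and model choice are all hypothetical.

# Minimal sketch of an SMS spam classifier (illustrative only)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled samples: 1 = spam/phishing, 0 = legitimate
messages = [
    "You have WON Rs 10,00,000! Click http://example.test to claim",
    "URGENT: your account is blocked, verify at http://example.test",
    "Your OTP is 482913. Do not share it with anyone.",
    "Meeting moved to 3pm, see you in the conference room",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Score an incoming message before delivery; 1 flags likely spam
print(model.predict(["Claim your FREE prize now at http://example.test"])[0])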
Education
AI is expected to improve access to, and the quality of, education across India. Educational institutions have begun implementing AI solutions to tailor their systems to students’ needs (such as by changing language preferences). AI is also expected to improve the quality of online education systems, which ‒ in turn ‒ will enhance access to education.
Infrastructure
State governments are expected to adopt AI systems within their city planning departments to improve public utility systems, public safety, and general management and administration. AI is also expected to bring about the development of “smart cities”, where residents will have access to better transportation and housing through AI-powered solutions.
Transportation
In line with infrastructure planning, the adoption of AI is also expected to reduce road congestion and accidents. Similarly, AI models are being used to create sustainable means of transportation and to optimise travel routes.
The Indian government has created programmes and initiatives to promote the use of AI and guide relevant stakeholders to implement and adopt AI, as follows.
India does not presently have AI-specific legislation. The Indian government has proposed the enactment of the Digital India Act (DIA), which is intended to replace the IT Act. Although the government has not issued a formal draft, the DIA is expected to regulate emerging technologies, define and regulate high-risk AI systems, and legislate on the ethical use of AI-based tools. Certain types of AI may be treated as “intermediaries” under the DIA, with the safe harbour protections offered under the IT Act likely to be extended to such types of AI.
The Indian government has also issued an advisory to certain intermediaries that incorporate AI in their products and services to ensure that their algorithms do not produce discriminatory results. Although the MeitY’s Minister of State has unofficially indicated that this advisory is specifically targeted at larger intermediaries, the scale on which AI is deployed ‒ especially by start-ups in India ‒ may nevertheless be impacted in the medium term.
No AI-specific legislation has been enacted in India.
Indian regulatory bodies have issued several White Papers, policies, reports, and recommendations on the use and adoption of AI, as follows.
This is not applicable in India.
This is not applicable in India.
This is not applicable in India.
Indian data, information, and content laws do not explicitly regulate AI.
However, India’s newly introduced data protection law, the DPDPA (which is yet to be enforced) is noteworthy, as it entirely exempts personal data that is made publicly available by individuals themselves or by someone else under a legal obligation. Though the scope of “publicly available” data is yet to be defined, this exemption could potentially help foster AI development.
The National Strategy for Artificial Intelligence, released by NITI Aayog, proposes a blockchain-based decentralised data marketplace ‒ ensuring traceability, access controls, and regulatory compliance ‒ and a price discovery mechanism for data, to balance privacy concerns with the need for a large supply of data for AI training.
The “DEPA Training Framework”, issued by NITI Aayog, permits persons to receive large training datasets for analysis or training. Data shared through this framework must be aggregated and de-identified. Organisations that provide or disclose data are tasked both with seeking consent from data subjects to share such data and with aggregating and de-identifying it prior to disclosure. Participation in this ecosystem is subject to approvals from self-regulatory organisations, implementation of defined security and privacy controls, and contractual arrangements among participating entities. Participation permits large-scale processing of structured de-identified data for training AI models and may offer the ability to commercialise the sharing of such data.
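A minimal sketch of what aggregation and de-identification prior to disclosure might look like follows, in Python with pandas; the framework does not mandate these exact steps, and the column names and the small-group suppression threshold are assumptions.

# Minimal sketch of aggregation and de-identification before data sharing
import pandas as pd

records = pd.DataFrame({
    "name":      ["A. Rao", "B. Singh", "C. Das", "D. Iyer"],   # hypothetical data
    "phone":     ["98xxxx0001", "98xxxx0002", "98xxxx0003", "98xxxx0004"],
    "district":  ["Pune", "Pune", "Nagpur", "Pune"],
    "age_band":  ["30-39", "30-39", "40-49", "30-39"],
    "claim_amt": [12000, 15000, 9000, 11000],
})

# 1. Drop direct identifiers before any disclosure.
deidentified = records.drop(columns=["name", "phone"])

# 2. Aggregate so only group-level statistics leave the organisation.
aggregated = deidentified.groupby(["district", "age_band"]).agg(
    n=("claim_amt", "size"), mean_claim=("claim_amt", "mean")
).reset_index()

# 3. Suppress small groups that could re-identify individuals (threshold assumed).
shareable = aggregated[aggregated["n"] >= 2]
print(shareable)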
As the DIA is intended to replace the IT Act, it will regulate the entire digital ecosystem in India. Objectives of the DIA include the development of the Indian digital economy, innovation, and ensuring India is considered a trusted player for digital products and solutions. It will define and regulate high-risk AI systems, promote the ethical use of AI-based tools, and develop accountability standards. It will also attempt to prevent user harms such as cybercrimes targeting women and children, regulate addictive technology, protect minors, provide users with digital rights, and curb the spread of fake news and misinformation.
In practice, organisations – especially larger companies that process or target a large number of users or roll out AI-enabled products or tools ‒ are actively taking measures to address commercial risks that may arise from the use of AI. These measures may also incorporate AI-specific terms and conditions to disclaim the use of AI, such as generative AI, to prevent liability for the results produced by such tools. Please see 11. Legal Issues With Predictive and Generative AI and 12. AI Procurement for further details.
There have been no conclusive judicial precedents on IP rights in respect of the use of AI.
Some Indian courts have recognised the benefits of using AI-powered tools to assist with investigations in cases involving missing persons and child pornography. However, courts have questioned the accuracy and reliability of AI-powered chatbots such as ChatGPT where parties have sought to establish certain facts through results generated by such tools.
Courts have not prescribed any definitions or standards for describing AI and machine-learning tools at this stage.
MeitY
The MeitY is the apex ministry established by the central government to regulate, and be responsible for, the development and facilitation of the use of AI in India. It has established a separate division, known as the “Emerging Technologies Division”, which works to foster and promote the use of emerging technologies in India. In order to develop a framework to regulate AI, it has also constituted four committees on AI, which have published reports on issues such as ethics, cybersecurity, and the re-skilling of individuals.
NITI Aayog
The NITI Aayog is a public policy think tank established by the Indian government. It was primarily tasked with creating a national strategy for developing and implementing AI and related technologies in India, which was published in 2018. It has also published the “Responsible AI” approach document, which prescribes principles to guide organisations on using AI in an effective and responsible manner.
NASSCOM
NASSCOM is a non-government body that works with the MeitY and other stakeholders to promote the responsible use of AI in India. NASSCOM AI is an initiative undertaken by the organisation to foster the creation, development, and sustainable use of AI in India. NASSCOM has, among other articles and reports, also released the Guidelines on Generative AI, which provide a common set of standards and protocols that stakeholders may adopt when implementing generative AI tools within their services.
Other Regulatory Agencies
There are also regulatory authorities ‒ such as TRAI, the Securities and Exchange Board of India, and the Insurance Regulatory and Development Authority of India ‒ that are actively formulating recommendation papers and guidelines to regulate the use of AI in their respective sectors. However, these entities are expected to play a more active role once the government enacts specific legislation with regard to AI.
The MeitY defines machine learning as “algorithms and techniques that allow computers to ‘learn’ from and make predictions based on data”. It also describes machine learning as “a branch of AI that specifically studies algorithms [that] learn and improve from training examples”.
Although it has not provided a definition of AI, the NITI Aayog has stated that AI machines may be classified as those that have the ability to perform cognitive tasks such as thinking, perceiving, learning, problem-solving and decision-making in a similar manner to humans.
As no regulations have been enacted with regard to AI, these definitions merely act as guiding principles on how the regulators conceptualise AI. At this stage, it will primarily be the courts’ responsibility to identify whether a particular piece of software constitutes AI and ‒ in the absence of legislation ‒ to what extent it affects the legal rights and liabilities of the parties to a dispute.
MeitY
The MeitY is actively taking steps to regulate AI and to address associated issues through various policies, strategies, and working committee reports. The MeitY’s AI Committee reports aim to prevent harms such as the weaponisation of AI, cybersecurity risks, and privacy and ethical issues that arise from AI.
The MeitY’s objectives are:
NITI Aayog
NITI Aayog aims to provide a roadmap for the creation, development and use of AI in India and to guide stakeholders on how to use these technologies in a responsible and sustainable manner.
NASSCOM
NASSCOM primarily aims at preventing or mitigating the following risks with regard to AI:
NASSCOM’s objectives are:
No enforcement actions have occurred yet.
The primary bodies that have addressed AI standards are as follows.
The Bureau of Indian Standards (BIS) has also constituted a committee that is drafting AI standards, which are in the process of being published.
The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) are the primary international standard-setting bodies whose certifications are sought by businesses in India. The BIS is in the process of adopting ISO/IEC 42001:2023, which specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organisations.
Government authorities are actively using and leveraging AI. Examples include:
Indian courts have not yet issued notable decisions on the use of AI by the government.
In 2022, the Indian government announced the launch of 75 AI products and technologies during a symposium organised by the Ministry of Defence for protection from future threats and for the development and peace of humanity.
While the rise of AI-generated content has prompted discussions in relation to copyright ownership and product liability, there has been no conclusive judicial or regulatory guidance on IP concerns in connection with AI. The Indian government has clarified that the existing IP rights regime in India adequately addresses and protects AI-generated works.
There has been limited Indian regulatory or judicial guidance on data protection concerns as well. India’s existing data protection regime under the IT Act prescribes light-touch compliance for the processing of personal data. Once enacted, the DPDPA will exempt processing of publicly available personal data from its ambit entirely, which mitigates certain data protection concerns.
Generally, the emerging issues raised by generative AI are as follows.
Please refer to 8.1 Emerging Issues in Generative AI.
Please see 8.1 Emerging Issues in Generative AI for more details.
Present Indian data protection laws do not address the use of AI or the rights that individuals have in relation to output produced by AI. Individuals’ rights are limited to the right to access and correct their information. They do not have the right to delete the information that is provided to an organisation, but they may opt out from the provision of their personal data, which will entitle the organisation to deny services to them. Provided that AI tools have procured training data in a manner consistent with data protection requirements, and users of such tools ensure compliance with data protection requirements while processing and inputting personal data, existing data protection risks may be mitigated.
The soon-to-be-implemented DPDPA provides individuals with rights to correction and erasure of their personal data, provided that processing is based on the data subject’s consent. AI output would accordingly need to be corrected or erased on request, to the extent it contains personal data about the individual and the generation occurs on the basis of consent. Considering that the DPDPA exempts the processing of publicly available data from its ambit, organisations that use AI tools may not need to factor in significant data protection compliance if data sources are appropriately identified.
AI is increasingly being used in various aspects of the practice of law, to streamline processes, improve efficiency, and enhance decision-making. As lawyers are, at times, required to analyse a wide range of resources (eg, emails, documents, and communications) to identify key pieces of information, the use of AI-powered tools to perform such functions significantly increases efficiency. Lawyers are also relying on AI tools to improve research functions, as such tools have the ability to analyse vast amounts of information with relative accuracy and speed.
AI is also being utilised in various automated support services. Chatbots equipped with natural language processing capabilities can provide instant responses to common legal queries, offer preliminary legal advice, and assist in client onboarding processes. Additionally, AI-driven legal research platforms can efficiently search through vast databases of case law, statutes, and legal precedents to provide relevant insights and analysis to lawyers, thereby facilitating informed decision-making and strategy development. However, courts have questioned the accuracy and reliability of AI-powered chatbots such as ChatGPT where parties have sought to establish certain facts through results generated by such tools.
Separately, the integration of AI in litigation and other practices of law also raises the following related ethical issues.
More details about the risks of the use of AI are set out in 11. Legal Issues With Predictive and Generative AI.
Owing to the lack of specific AI regulations in India, there is limited precedent on the determination of liability with regard to the provision and use of AI. In this lacuna, traditional liability theories continue to apply.
As regulatory efforts for AI are in their early stages, it is unclear as to how courts will adjudicate on such cases. Accordingly, it becomes important for organisations to:
Further clarity on these aspects is expected once the government enacts specific legislations or provides guidance on AI and its associated liability.
As India does not have regulations that are targeted towards the use of AI, the imposition and allocation of liability for the unlawful use of such tools is determined commercially at this stage.
Bias in algorithms presents both technical and legal challenges, with significant implications for consumers, companies, and regulatory bodies. From a technical standpoint, bias in AI algorithms refers to systematic and unjust discrimination against certain groups or individuals based on factors such as race, gender, or socioeconomic status. Such biases may violate anti-discrimination laws: discrimination based on protected attributes such as gender is unlawful, and algorithms that perpetuate it may expose organisations to legal liability.
Biases may also lead to consumer grievances, as AI models may impose discriminatory pricing or favour certain consumer groups over others. Addressing bias in algorithms will require collaboration between industry stakeholders and regulatory bodies to ensure that AI systems are fundamentally fair; one common fairness check is sketched below.
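By way of illustration only, the following is a minimal Python sketch of the demographic parity difference ‒ the gap in favourable-outcome rates between groups ‒ of the kind an organisation might compute when auditing a pricing or approval model; no Indian law prescribes this (or any) metric, and the outcomes and group labels are hypothetical.

# Minimal fairness check: demographic parity difference (illustrative only)
def demographic_parity_difference(outcomes, groups):
    """Gap between the highest and lowest favourable-outcome rates across groups."""
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(group_outcomes) / len(group_outcomes)
    return max(rates.values()) - min(rates.values())

# Hypothetical model decisions: 1 = favourable (eg, loan approved)
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]  # hypothetical protected attribute

print(f"parity gap: {demographic_parity_difference(outcomes, groups):.2f}")
# A large gap (here 0.50) may warrant review of the model for bias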
Data protection and privacy concerns with regard to AI and related technologies stem from the manner in which these tools collect information from available sources. As publicly available data is typically not categorised as personal data, AI models may be extensively trained by using such sources. However, with the development of AI and machine learning models, it becomes increasingly likely that an individual’s personal information may be used by such technologies to produce the required results. By way of example, an individual’s social media accounts may be analysed by AI-backed tools to create a behaviour profile of the individual in order to determine purchasing trends.
The primary concern in such use cases is the individual’s consent. As AI tools may procure data from all available sources on the internet, it becomes increasingly difficult to obtain an individual’s consent. To account for such issues, data protection laws in the EU and other major jurisdictions provide individuals the right to object to automated decision-making carried out without human oversight. However, this right has not been included under the DPDPA at this stage ‒ although the government may introduce such rights through subsequent regulations.
Similarly, the use of AI models raises cybersecurity concerns. Traditional encryption and baseline security measures are proving to be inadequate with the development of advanced technology such as AI and quantum computing. Adopting more enhanced encryption measures is advisable.
AI tools are capable of using an individual’s biometric features to develop photographs and videos that are artificially rendered in their entirety. This is a significant concern for celebrities and politicians, as malicious actors may use such technology to disseminate misinformation and defame them.
Current Indian data protection laws classify biometric information as sensitive personal data and require organisations to obtain written consent from the individual prior to processing such information. The DPDPA also requires consent to be obtained in such situations (unless used in employment-related matters). However, where AI tools are deployed to gather vast amounts of biometric information from public and non-public sources, it is difficult to verify whether consent was obtained and, if so, whether it was adequate to the extent of the personal data processing undertaken by the tool. As advanced generative AI tools may become available to the public, regulatory action may be required to protect the privacy and reputation of individuals.
Though it may be difficult to enforce the lawful use of generative AI tools, regulators may impose obligations on the owners of AI models to build non-removable identification marks into AI-generated photographs and videos. This will help viewers distinguish between real and artificially rendered products (a simple form of such marking is sketched below). However, it may not completely prevent mala fide actors from removing these protections by breaching security measures. Accordingly, appropriate penalties and sanctions for breaches of AI-related regulations must also be in place to ensure deterrence.
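For illustration, a minimal sketch of stamping a visible provenance label on an AI-generated image follows, assuming Python with the Pillow library; no Indian regulation currently mandates this, and a visible label ‒ unlike the non-removable marks contemplated above ‒ is trivially cropped or edited out, so robust (eg, steganographic) watermarking would be needed in practice.

# Minimal sketch: visible provenance label on an AI-generated image
from PIL import Image, ImageDraw

image = Image.new("RGB", (512, 512), "white")  # stand-in for AI-generated output
draw = ImageDraw.Draw(image)
draw.text((10, 492), "AI-GENERATED", fill=(128, 128, 128))  # corner label
image.save("output_marked.png")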
Data protection regulations do not provide individuals the right to object to automated decision-making activities undertaken by organisations. By way of example, where an employer deploys AI to screen candidates on the basis of the information provided by them, the candidate does not have a statutory claim to seek a review of such decisions. This may adversely affect individuals, as AI models may have inherent biases or errors. Although affected individuals may object to these decisions by claiming that the AI model is defective or that the results were produced based on biased models, it is impractical to raise legitimate claims where the manner in which the decision was arrived at by the organisation is not disclosed to the public.
The government’s proposal for the DIA, which is intended to repeal and replace the existing overarching law on information technology, indicates that it may provide all individuals with the right to object to automated decision-making. However, at this stage, no developments have been reported with regard to these aspects.
Indian data protection laws require the disclosure of the personal datasets collected and the purposes for the processing of such data. This would also apply in cases where AI and machine learning tools are incorporated within services to improve efficiency and decrease human dependency in cases of errors or customer grievances. However, under current and proposed regulations, there are no specific obligations imposed on organisations to disclose the fact that the service uses AI to deliver particular features.
By way of example, an organisation that implements an AI chatbot on its website or operates an AI-powered customer service number is not statutorily required to disclose this information. However, as current data protection laws and the DPDPA encourage transparency with regard to the processing of personal data, it is recommended to make these disclosures to all affected individuals. These disclosures may be made through a publicly available privacy policy.
The use of AI may lead to various anti-competitive practices by larger organisations, as follows.
The following are key concerns that affect offerings by AI suppliers and ought to be factored into transactional contracts with customers. The ability to effectively negotiate these depends significantly on commercial bargaining power, together with the technical capabilities at both parties’ ends. Any transaction document should be supplemented by practical due diligence and risk-focused procurement processes.
Data Concerns
Enterprise customers may overlook the fact that tools offered by AI suppliers involve large-scale processing and learning of their proprietary and confidential data. Contracts should clearly specify usage restrictions, if any. Customers may choose to deploy on-premises solutions or local instances to limit a supplier’s ability to access data and may also seek restrictions on processing or even learning undertaken by machine learning tools.
Data protection concerns ‒ especially in respect of personal data that may have been used to initially train the AI solution or that is otherwise processed in connection with a customer’s use of the solution ‒ would subsist in any commercial conversation. The contractual framework should allocate responsibility for compliance and address the identification of legal bases, anonymisation where possible, and corresponding data subject rights.
Cybersecurity
Shared infrastructure utilised by AI suppliers ‒ especially in the context of AI-as-a-service solutions ‒ may be leveraged by threat actors to gain access to large-scale data and information. Contracts should define security protocols, adequately define breach response processes and timelines, and allocate liability among the parties for these issues. Any contract negotiation must be supplemented by security due diligence and monitoring to mitigate risk.
Disclaimers of Liability
AI suppliers typically seek to disclaim liability in respect of results generated through the use of AI. For instance, suppliers may choose to disclaim liability in respect of the control, endorsement, or recommendation of ‒ or representations on ‒ the efficacy, appropriateness, or suitability of the AI. Similar disclaimers on the use of generative AI ‒ including in respect of input data (where liability is shifted to customers), editorial rights, and incomplete, inaccurate, inappropriate, or offensive output ‒ ought to be factored in. Depending on negotiating power, suppliers may also seek to disclaim liability for training data; this would typically be resisted by larger enterprise customers.
Bias
Customers may insist on guarantees that results are free from bias, accurate, and in compliance with applicable law. These must be balanced against a supplier’s ability to control training data, and against data protection and IP concerns in respect of that data. Suppliers are likely to be required to conduct impact assessments and undergo regular product testing to ensure compliance with industry standards.
Indian companies have begun adopting AI models within their recruitment and employee monitoring processes to facilitate quicker and more objective evaluation of candidates and employees. Large organisations that receive several applications implement AI-based tools within their recruitment systems to allow mass screening of applications. Typically, a few instructions are fed into the system, which detects whether minimum parameters are present within candidate applications (a simple version is sketched below). This allows the organisation to short-list a large number of applications within a few hours or days. Similarly, companies are increasingly relying on AI models to evaluate the work performance and attendance of employees.
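By way of illustration only, the following is a minimal Python sketch of such rule-based screening; the parameters and applications are hypothetical, and production systems are considerably more sophisticated (and correspondingly more prone to the biases discussed below).

# Minimal sketch of rule-based application screening (illustrative only)
MINIMUM_PARAMETERS = {"python", "5 years", "bengaluru"}  # hypothetical criteria

def meets_minimum_parameters(application_text: str) -> bool:
    """Return True if every required parameter appears in the application."""
    text = application_text.lower()
    return all(param in text for param in MINIMUM_PARAMETERS)

applications = [
    "Python developer, 5 years experience, based in Bengaluru",
    "Java developer, 3 years experience, based in Pune",
]
shortlist = [a for a in applications if meets_minimum_parameters(a)]
print(shortlist)  # only the first application is short-listed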
Although such measures are generally advantageous and improve efficiency, they come with the risk of decisions being influenced by algorithmic biases and technical errors. By way of example, interview systems managed by AI may not recognise certain accents or mannerisms, which may lead to oversights and inaccurate evaluations. Companies should consider undertaking regular impact assessments to mitigate these risks.
With the increase in remote working models, employee monitoring efforts have increased ‒ in particular, the use of AI-backed monitoring software. Tools for automatically detecting office attendance, employee working hours, etc, implement AI to achieve a more advanced level of monitoring with increased accuracy.
However, as such tools are fed with pre-decided parameters, there may be instances where an employee’s behaviour is not factored in. By way of example, if an office uses face recognition technology to record employee attendance, there may be instances where certain physical features are not capable of being identified by the software. These technical errors and biases may have a direct impact on employees, and such tools must therefore be relied upon with caution.
Further, data protection concerns arising from the use of such tools must be taken into consideration prior to their implementation. Adequate consents and notices must be provided to all employees, and the extent of data collected by the AI software must be limited. By way of example, facial images processed through this technology must be used solely for the purpose of employee monitoring.
Digital platforms such as cab aggregators and food delivery service providers have begun to adopt AI systems within their business operations. Cab aggregators typically employ AI to optimise routes, implement dynamic pricing, and detect fraudulent bookings. Similarly, food delivery service providers are adopting these tools for features such as demand prediction, delivery route optimisation, AI-powered customer support, and customer feedback analysis.
At this stage, these practices are solely governed by data protection, consumer, and competition regulations. By way of example, cab aggregators that use AI to implement dynamic pricing must ensure that these processes are based on fair pricing models and are not discriminatory towards certain consumer groups. Further, digital platforms that act as intermediaries must also conduct due diligence exercises to ensure that their platforms do not fall afoul of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021. Certain intermediaries that incorporate AI in their products and services were also specifically directed by the government to ensure that their algorithms do not produce discriminatory results.
However, these measures apply largely to the general operation of the platforms. With the increasing reliance on AI tools, regulators may need to prescribe sector-specific laws to ensure the responsible use of AI by large digital platforms.
Use of AI by Financial Services Companies
Financial services companies in India utilise AI for a range of applications, including fraud detection, risk assessment, customer service, credit scoring, and investment management.
AI algorithms analyse vast amounts of data to identify patterns, trends and anomalies, thereby enabling financial institutions to make data-driven decisions, automate processes and offer personalised services to customers.
Apart from risks such as algorithmic biases and privacy concerns, businesses in the financial sector must also take into account the risks of biases in repurposed data. Such companies may repurpose historical data for training AI algorithms, potentially introducing biases present in the data. Biases in repurposed data could lead to discriminatory practices, resulting in unfair treatment of certain customer groups (such as for credit facility offerings). To address these concerns, the Reserve Bank of India may publish formal guidelines for the use of AI tools by financial services companies and other regulated entities.
The government has not prescribed specific regulations governing AI in healthcare. However, the Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare published by the Indian Council of Medical Research aims to establish an ethical framework for the development, deployment, and adoption of AI-based solutions in healthcare. The guidelines cover ethical principles, stakeholder guidance, an ethics review process, and governance of AI use.
The healthcare industry may leverage AI for the following use cases:
No specific regulations with regard to the use of AI in autonomous vehicles have been prescribed in India. As the use of autonomous vehicles is in its early stages in India, regulations associated with such technology are expected to be conceptualised over the coming years.
No specific regulations with regard to the manufacturing of AI have been prescribed in India. However, manufacturers will need to take into consideration minimum standards prescribed by applicable standard-setting bodies and general consumer protection laws when developing AI for general commercial use.
Though India does not have any regulations that specifically govern the use of AI in professional services, businesses in the legal, accounting, consulting and other professional fields must take into account various aspects such as liability and professional responsibility, confidentiality, IP, client consent, and regulatory compliance while using AI.
Liability
Service providers using AI tools are responsible for the accuracy and integrity of the outcomes generated by these tools. Courts may impose liability on professionals for errors or misconduct arising from the use of AI. Professionals must ensure that AI systems are appropriately designed, implemented, and monitored to meet professional standards and ethical obligations.
Confidentiality
Service providers may have contractual obligations to maintain client confidentiality and protect sensitive information. AI systems used in professional services must adhere to strict data privacy and security standards to safeguard client data from unauthorised access or disclosure. Professionals must ensure that AI systems comply with relevant data protection and cybersecurity laws and regulations to meet minimum standards of security.
IP Rights
The implications of using AI for IP rights ‒ including the ownership of AI-generated works and the protection of proprietary algorithms or datasets ‒ must also be taken into account by such service providers. As Indian law does not provide AI-specific regulations on these aspects, parties will need to contractually determine the ownership and licensing of IP developed through the provision of services.
Client Consent
Service providers must obtain adequate consent from customers prior to using AI tools or algorithms to process personal data.
Regulatory Compliance
Businesses using AI to perform professional services must ensure compliance with applicable laws, regulations, and industry standards governing their practice areas. This includes regulatory requirements related to data protection, privacy, security, financial reporting, and professional conduct.
Indian courts have generally held that works produced without human creativity or involvement are not eligible for patent or copyright protection.
In India, trade secret protection can be obtained without application or registration. In the context of AI, trade secret protection could extend to output, datasets, unique algorithms, and machine learning techniques. However, the effectiveness of such protection in the AI context is contingent upon restricted access to AI outputs, which may not necessarily be helpful where the desired outcome is the commercial exploitation of AI-generated content.
A noteworthy development occurred when an AI-based app, RAGHAV, was recognised as a co-author of an artwork. However, the Indian Copyright Office later issued a notice of withdrawal of the registration of the AI author and requested that the human author provide details regarding RAGHAV’s legal status.
Separately, in July 2021, the Parliamentary Standing Committee on Commerce, in its 161st report, recognised the inadequacies of the Copyright Act in addressing ownership, authorship, and inventorship concerning AI-generated works. The committee recommended a review of existing IP rights laws to incorporate AI-generated works within their purview. However, in February 2024, the Minister of State for Commerce and Industry clarified that the current patent and copyright frameworks are well equipped to protect AI-generated works and related innovations.
There are risks of copyright infringement, as works or products created using generative AI tools (such as those offered by OpenAI) may include copyrighted material. Further, issues with regard to ownership of the output may arise: the general rule under Indian copyright law is that the creator of content owns the IP rights to that content, but AI may play a significant role in generating the content, which complicates ownership.
Companies can advise their board of directors to create a comprehensive AI strategy and governance framework that would consider and address the following:
By addressing these key issues, corporate boards of directors can better identify and mitigate risks in the adoption of AI, enabling the organisation to leverage AI technologies effectively while safeguarding against potential challenges and pitfalls.
Please refer to 16.1 Advising Directors.
14th Floor
SKAV 909 Building
Lavelle Road
Ashok Nagar
Bengaluru
Karnataka 560001
India