Artificial Intelligence 2024

Last Updated August 19, 2024

South Africa

Law and Practice

Authors

Spoor & Fisher was established in 1920 and is a renowned specialist intellectual property (IP) law firm with extensive expertise in all aspects of IP. This includes trade marks, copyright, patents, registered designs and anti-counterfeiting measures. The firm also excels in handling the intellectual property aspects of commercial transactions and offers robust litigation services in these fields. With a strong focus on innovation, Spoor & Fisher is deeply invested in the intersection of AI and IP, staying at the forefront of technological advancements and their implications for IP law.

Contract Law

Typical examples of the application of contract law in the AI context include:

  • regulation of terms of use of proprietary information and intellectual property for AI system training and development, including compensation;
  • regulating proprietary rights to intellectual property arising from the development of AI systems and outputs and varying, to the extent capable of alteration, the incidence of proprietary rights that arise by operation of the law; and
  • allocation of risk between AI tool developers, providers and deployers for liability arising from the deployment and use of an AI system and its outputs (for example, claims arising from intellectual property, personal information and human rights violations) by regulating limitations on liability, warranties and indemnities.

It is not possible to “contract out of statute” unless the applicable statute permits such alteration.

Delict (Tort) and Product Liability

Delictual liability for harm is determined by applying a test for negligence based on a four-step enquiry of what a reasonable person, or in the case of an expert, the reasonable expert, ought to have done in the same circumstances. Wrongfulness, fault, causation and loss must be proved. The negligent act or omission must have caused the harm. Liability arises for the wrongful and negligent use of, or reliance on, an AI tool which causes loss.

See 10.1 Theories of Liability.

Privacy and Data Protection

The Protection of Personal Information Act, 2013 (POPIA) generally prevents processing, disclosure and cross-border transfer of personal information without the consent of the data subject or in the absence of a legitimate exclusion. 

Principles of transparency, explainability, data minimisation and data security underpin POPIA and apply in the AI context in the following way:

  • AI systems should be transparent about data usage and processing methods;
  • individuals should have access to understandable explanations of how AI systems make decisions that affect them, particularly where personal information is involved;
  • AI systems should only collect and retain the minimum amount of personal data necessary to achieve their objectives; and
  • strong security measures should be implemented to protect personal data from unauthorised access, theft or misuse, including encryption, access controls, regular security audits and compliance with industry standards (a simplified sketch follows this list).
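
By way of illustration only, the following Python sketch shows how the data-minimisation and data-security principles might be applied before personal information reaches an AI pipeline; the field names and the use of the cryptography library's Fernet cipher are our own assumptions, not requirements of POPIA.

```python
# Illustrative sketch only: applying POPIA-style data minimisation and
# encryption-at-rest before personal information enters an AI pipeline.
# Field names and the Fernet cipher are assumptions for illustration.
import json
from cryptography.fernet import Fernet

# Data minimisation: retain only the fields strictly needed for the stated purpose.
ALLOWED_FIELDS = {"age_band", "province", "product_interest"}  # hypothetical

def minimise(record: dict) -> dict:
    """Drop any personal information not required for the processing purpose."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

# Data security: encrypt whatever personal information must be retained.
key = Fernet.generate_key()  # in practice, managed in a key vault
cipher = Fernet(key)

record = {"name": "T. Ndlovu", "id_number": "8001015009087",
          "age_band": "35-44", "province": "Gauteng",
          "product_interest": "home loans"}

minimal = minimise(record)
encrypted = cipher.encrypt(json.dumps(minimal).encode("utf-8"))
# Only the encrypted, minimised record is stored; access controls and regular
# audits (not shown) would govern who may call cipher.decrypt(encrypted).
```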

Intellectual Property

Appropriate licences are required to authorise the use of intellectual property for the training and development of AI systems.

AI systems may be protected by IP law in various ways. Firstly, we expect that an AI invention would be patentable if it provides a technical solution to a specific technical problem. Generally, the mere automation of a process which has traditionally been done through a mental/manual process by a technical expert would not be considered patentable. When applying for patent protection, however, the inventor should not be cited as a machine.

The following works associated with AI models may qualify as protected works under the Copyright Act 1978:

  • the source code (a “computer program” under the Copyright Act) inherent in the AI model;
  • a compilation of information (a “literary work” under the Copyright Act) on which an AI model is trained;
  • input prompts that are not copied; and
  • outputs (“computer-generated works” under the Copyright Act).

Employment Law

The primary laws governing employee rights and workplace health and safety, and that are of general application, are:

  • the Labour Relations Act, which seeks to give effect to and regulate the fundamental human right to fair labour practices enshrined in section 23 of the South African Constitution;
  • the Employment Equity Act 1998, which seeks to eliminate unfair discrimination in the workplace, ensure the implementation of employment equity in the workplace and achieve a diverse workforce representative of the South African population;
  • the Occupational Health and Safety Act 1993, which requires (a) an employer to provide and maintain, as far as is reasonably practicable, a working environment that is safe and without risk to the health of its employees; and (b) the establishment of safety representatives and safety committees in the workplace and lines of reporting;
  • the Mine Health and Safety Act 1996, which requires mines to be designed, constructed and equipped to provide a safe and healthy working environment and imposes reporting requirements;
  • the Compensation for Occupational Injuries and Diseases Act 1993, for employee claims for injuries sustained during the course and scope of employment;
  • the Skills Development Act 1998;
  • the Skills Development Levies Act 1999;
  • the Unemployment Insurance Act 2001;
  • the Unemployment Insurance Contributions Act 2002; and
  • the National Minimum Wage Act 2018.

Criminal Law

In South Africa, commercial crimes include:

  • fraud, being the common law crime of unlawfully and intentionally making a misrepresentation which causes actual prejudice or which is potentially prejudicial to another; and
  • offences such as money laundering, terrorist financing, bribery and corruption, market abuse and insider trading.

Key pieces of legislation include the Financial Intelligence Centre Act 2001 and the Prevention of Organised Crime Act 1998.

Our criminal law also recognises the commission of unlawful conduct against the personality (dignity, reputation and privacy) of a person as a crime. Fabrications (or “hallucinations”), inaccuracies, errors, bias or discrimination produced by AI systems may therefore have criminal repercussions.

The Cybercrimes Act 2020 criminalises offences relating to cybercrimes as well as the disclosure of data messages which incite or threaten violence or damage to property.

The Electronic Communications and Transactions Act 2002 regulates the admissibility of digital evidence in court proceedings.

AI is increasingly being applied to enhance efficiency in a variety of sectors and in a variety of ways, for example:

  • legal services (see 9.1 AI in the Legal Profession and Ethical Considerations);
  • financial services (see 14.2 Financial Services);
  • healthcare (see 14.3 Healthcare);
  • education, where generative AI classroom tools are used to adapt passive educational materials into dynamic, active learning modules in the form of quizzes, flashcards, games and interactive content;
  • manufacturing (see 14.5 Manufacturing);
  • retail, where AI tools are used to: (a) analyse user profiles and historic purchasing behaviour; (b) predict future consumer purchasing behaviour and patterns; (c) suggest products or services; and (d) provide customer support in the form of chatbots;
  • agriculture, where farmers use sensors, drones and satellites to collect real-time data on their crops and where AI tools are used to analyse and predict crop yields and determine optimal planting times;
  • marketing, where AI tools are used for deep data analytics, hyper-personalising the customer experience, enhancing predictive modelling and automated advertisement buying and where generative AI tools are used for content creation; and
  • social media, where AI tools are used for personalised news and content feeds.

In December 2023, President Cyril Ramaphosa committed to investing USD53 million (and up to USD265 million by 2030 from pooling resources with the private sector) toward PhD programmes focused on bringing “critical skills in areas like artificial intelligence research, advanced biotechnology, fuel cell development, batteries and other storage, and next-generation mining.” (Munyaradzi Makoni, “Cyril Ramaphosa Unveils R1bn PhD Initiative,” Research Professional News (blog), 14 December 2023). 

Please also see 3.1 General Approach to AI Specific Legislation, and in particular, the collaboration between the South African government and universities in relation to the establishment of AI centres of excellence.

South Africa’s AI policy began with the Presidential Commission on the Fourth Industrial Revolution (PC4IR) and its 2019 recommendation to establish an AI Institute (in the form of a public-private partnership). In 2022, the Department of Communications and Digital Technologies (DCDT) founded the AI Institute of South Africa (AIISA) together with the University of Johannesburg (UJ) and Tshwane University of Technology (TUT). AIISA has launched two AI centres of excellence since its establishment in 2022: the UJ AI Hub and the TUT AI Hub. In 2024, the DCDT published a draft discussion document, entitled “South Africa's Artificial Intelligence Planning (for general discussion purposes)” (the “SAAIP Draft Discussion Document”).

South Africa has not, however, yet formalised any laws and/or policy documents for the regulation of AI, and the release of the SAAIP Draft Discussion Document appears premature.

See 3.1 General Approach to AI-Specific Legislation.

See 3.1 General Approach to AI-Specific Legislation.

There is no applicable information in this jurisdiction.

There is no applicable information in this jurisdiction.

There is no applicable information in this jurisdiction.

No data, information or content laws have been amended or newly introduced to foster AI technologies, nor have any non-binding recommendations or directives been issued which facilitate this objective.

See 3.1 General Approach to AI-Specific Legislation.

No key judicial decisions have been handed down regarding AI, including with respect to generative AI and intellectual property rights.

See 4.1 Judicial Decisions.

The following regulatory bodies play a leading role in regulating the legal impacts of AI:

  • the Information Regulator, an independent body established in terms of POPIA;
  • the National Consumer Commission (NCC), established in terms of the Consumer Protection Act, 2008 (CPA);
  • the Consumer Affairs Committee;
  • the Department of Communications; and
  • the Competition Commission, a statutory body established in terms of the Competition Act 1998 (the “Competition Act”).

See 5.3 Regulatory Objectives, 11.2 Data Protection and Privacy, 11.6 Anti-competitive Conduct, 14.2 Financial Services, 14.3 Healthcare and 14.6 Professional Services.

We have not seen regulatory agencies make use of their own definitions of AI yet.

The regulators referred to in 5.1 Regulatory Agencies seek to prevent harm and promote the objectives as set out below:

  • the Information Regulator seeks to prevent non-compliance with the provisions of POPIA and to ensure that data subjects’ personal information is protected;
  • the NCC seeks to protect the interests of consumers and ensure accessible, transparent and efficient redress for consumers;
  • the Consumer Affairs Committee seeks to prevent unsolicited communications and to protect the interests of consumers in relation to the sale of goods or services from websites;
  • the Department of Communications maintains a register of the names and addresses of suppliers of cryptography products or services and the names of those products or services, with a brief description;
  • the Competition Commission investigates, controls and evaluates restrictive business practices, abuse of dominant positions and mergers in order to achieve equity and efficiency in the South African economy;
  • the Financial Sector Conduct Authority, established in terms of the Financial Sector Regulation Act, 2017 (FSRA), regulates and supervises financial institutions, promotes financial inclusion and monitors the extent to which the financial system is delivering fair outcomes for financial customers, with a focus on the fairness and appropriateness of financial products and financial services and the extent to which they meet the needs and reasonable expectations of financial customers; and
  • the South African Health Products Regulatory Authority (SAHPRA) regulates health products intended for human and animal use; the licensing of manufacturers, wholesalers and distributors of medicines, medical devices, radiation emitting devices, radioactive nuclides and the conducting of clinical trials.

We have not yet seen any enforcement or other regulatory action in relation to AI.

See 9.1 AI in the Legal Profession and Ethical Considerations and 14.5 Manufacturing.

See 14.5 Manufacturing.

The use of AI by the government has the benefit of improved efficiency in service delivery, and policy and decision making. Appropriate policy and legal frameworks regulating its application and use are necessary, however, given the risk of interference with fundamental human rights, for example, the use and application of:

  • automated decision making (ADM) tools risks interference with the right to equality (see 14.2 Financial Services); and
  • AI surveillance tools and the use of biometrics risks interference with the right to privacy (see 11.3 Facial Recognition and Biometrics).

Principles of procedural fairness, lawfulness and transparency should underpin policy and legal frameworks.

Closed-circuit television (CCTV) surveillance cameras equipped with facial recognition software and the use of drones and body-worn cameras in private and public spaces in South African metropolitan areas are increasingly commonplace. CCTV surveillance is used for safety and security, criminal investigative and evidentiary purposes. According to the Institute for Security Studies, “VumaCam’s licence plate recognition system in Johannesburg uses over 2,000 cameras and is connected to the South African Police Service’s national database of suspicious or stolen vehicles. Bidvest Protea Coin’s Scarface in South Africa uses facial recognition for the real-time detection of potential suspects. Its data can be used as evidence in criminal cases”.

The use of CCTV cameras is not specifically regulated in South Africa, although POPIA stipulates that personal information may only be processed with the consent of the data subject, or where the data subject is a minor, the consent of a competent person. See 11.2 Data Protection and Privacy.

We would expect to see guidelines from the Information Regulator that:

  • balance public safety and security imperatives against the Constitutional right to privacy enshrined in South Africa’s Bill of Rights; and
  • establish best practices for installing CCTV systems in public spaces using standards that recognise the need for a reasonable expectation of privacy, having regard to the particular location.

The South African government has biometric identification systems in the departments of agriculture, correctional services, home affairs, police services and social assistance (see 11.3 Facial Recognition and Biometrics).

No key judicial decisions have been handed down regarding AI, including with respect to government use of AI.

AI tools have many applications for national security, including:

  • analysing and gathering intelligence from vast datasets, such as global communication traffic, satellite imagery and social media posts, to identify patterns and anticipate, investigate and prevent crime;
  • analysing images and videos for identification of suspects and areas of heightened criminal activity;
  • translating foreign language communications and analysing tone in communications by AI-driven natural language processing (NLP) models;
  • use in drones for intelligence, surveillance and reconnaissance; and
  • use in autonomous vehicles and weapon systems.

POPIA includes exemptions from certain of its provisions. POPIA does not apply to the processing of personal information by or on behalf of a public body which involves national security (including activities that are aimed at assisting in the identification of the financing of terrorist and related activities), defence or public safety, to the extent that adequate safeguards have been established in legislation for the protection of such personal information.

Personal information is required to be collected directly from the data subject. However, where the collection of personal information from another source is necessary in the interests of national security, then this requirement need not be complied with.

Further processing of personal information must be compatible with the purpose for which it was collected. Further processing that is necessary in the interests of national security, however, is deemed not to be incompatible with the purpose of collection.

The notification obligations in favour of data subjects when collecting the data subject’s personal information need not be discharged if non-compliance is necessary in the interests of national security.

Emerging issues raised by generative AI include:

  • reliance on outputs characterised by “convincing” fabrication, inaccuracies, errors, bias and/or discrimination;
  • uncertainty around accountability and liability;
  • use of user inputs by providers of generative AI tools;
  • inadequate protective measures for AI inputs by providers of generative AI tools (confidentiality undertakings may be absent or materially diluted);
  • third party IP rights violations and personal information rights violations in use of datasets and/or outputs;
  • onerous contractual terms imposed on the use of generative AI tools by providers and their “affiliates”, for example application-based opt-out processes and undertakings by users to defend third-party claims; and
  • the identification of the inventor of an AI-generated invention and the author of a generative AI work.

Our Patent Office has granted a patent for an invention in which the inventor is cited as “DABUS (Device for the Autonomous Bootstrapping of Unified Sentience)” and notes that “the invention was autonomously generated by an artificial intelligence”. However, it is questionable whether this patent should have been granted by the Registrar since our legislation provides that there must be a transfer of rights in the invention from the inventor to the applicant, in a situation where the inventor is not the applicant. While we expect that the law will eventually evolve to accommodate an AI inventor, the current law does not make clear provision for this scenario and we are not aware of any amendments to the Patents Act in the pipeline aimed at addressing this issue.

Conversely, our Copyright Act expressly acknowledges that certain works may be computer generated (ie, where it is not possible to attribute the resultant work directly to the efforts of any individual causing the work to be made) and stipulates that the author of a literary, dramatic, musical or artistic work or computer program which is computer-generated, is the person by whom the arrangements necessary for the creation of the work were undertaken. We have no case law that is useful in interpreting who this would be for a generative AI work.

Given that bias and discrimination in training information are often repeated, if not amplified, in generative AI outputs, we expect to see an increase in disputes pertaining to outputs that identify a data subject in association with incorrect information or false claims.

See 1.1 General Legal Background.

There is uncertainty regarding the ownership of copyright in the outputs of generative AI models. It is not clear whether the author and first owner of the copyright in such output is the provider of the AI model or the person who submits the prompt. In some cases, terms and conditions of use will seek to regulate IP ownership. Any transfer of copyright must comply with the requirements of the Copyright Act, in order to be valid.

An objective similarity between an original work and another work will likely lead to an inference of copying, and our courts will then consider the following:

  • the extent to which the value of the original is sensibly diminished;
  • the nature and objects of the selection made, the quantity and value of the materials used, and the degree to which the use may prejudice the sale, diminish the profits or supersede the objects of the original work;
  • the importance of the copied portion of the work, even where only a small amount has been copied; and
  • the defendant’s intent to steal, for the purpose of saving himself labour.

We have no case law on the interpretation of the law with regard to the infringement of copyright in training material. We expect that a key factor will be whether or not the use of the training material is “transformative”. In other words, an AI system should use the training material to find patterns or insights required to guide its creation of new works, instead of merely copying the training material.

Data may only be used to develop or train AI models in compliance with POPIA. Also, data may be provided under restricted licence terms and/or under conditions of confidentiality, in which case an AI provider must use the information in a manner that does not result in breach of its contractual obligations.

If an AI model is trained on personal information expressed in a manner where the individual identities of data subjects are revealed, the permission of the data subjects for the particular purpose must be obtained. If permission was obtained from data subjects for a specific purpose, then the use of the data for AI testing must align with that purpose.

Subject to certain exceptions, data subjects have the right to be notified that their personal information is being collected and the person responsible for collecting the information is obliged to provide various facts regarding the collection, such as who is collecting it and the purpose for which it is being collected. 

A data subject may request a responsible party to:

  • correct or delete personal information about the data subject in its possession or under its control that is inaccurate, irrelevant, excessive, out of date, incomplete, misleading or obtained unlawfully; or
  • destroy or delete a record of personal information about the data subject that the responsible party is no longer authorised to retain.

AI is increasingly being used in the legal profession to conduct legal research by analysing vast amounts of legal data, contract analysis to identify key aspects and issues, document review and analysis, document creation, the prediction of legal outcomes based on historical data, e-discovery, taxation and conflict clearance. Although still limited mostly to internal usage, chatbots are increasingly used to provide or assist with legal advice.

South Africa does not have any legislation or regulations specifically governing the use of AI in the legal profession. However, the Legal Practice Act 2014 regulates the professional conduct of legal practitioners so as to ensure accountable conduct and establishes the Legal Practice Council (LPC). The LPC sets the standards and regulates the professional conduct of legal practitioners. The LPC also administers the enrolment of legal practitioners and investigates any misconduct by legal practitioners. Legal practitioners must ensure that professional services rendered in whatever manner, including through the use of AI, meet the standards set by the LPC in order to avoid disciplinary action.

The General Council of the Bar (“Bar Council”) deals with matters affecting the advocates’ profession. It seeks to maintain the high standards, professional integrity and independence which are established hallmarks of the Bar.

The use of AI in the legal profession raises several ethical concerns. Lawyers are expected to apply professional judgment and appropriate skill and expertise, as opposed to merely relying on AI outputs. Confidentiality is of the utmost importance in the legal profession and lawyers are required to ensure that confidentiality is not breached through the use of any AI tool. The use of AI tools should also be disclosed to clients, and client consent should be obtained in advance before using them on client matters, to ensure transparency, especially when handling sensitive information and personal data. As officers of the court, lawyers are also required to uphold the law and ensure that legislation, rules and regulations on the use of AI are strictly adhered to. AI has also increased the risk of unqualified persons providing legal advice that is reserved for legal professionals.

See 1.1 General Legal Background.

Where loss is caused by a defect or failing in an AI tool, as opposed to by the person operating the AI system, liability may lie with the developer or provider of the tool. Clear and meaningful explanations of the results of AI tools are therefore essential. However, it may not necessarily be possible to determine how an algorithm reached a particular outcome. It may therefore be difficult to allocate responsibility to AI tool developers or providers if it is not possible for them to foresee how an AI tool might cause harm.

The particular outcome may be unpredictable given that algorithms are adaptive to environmental inputs. The more autonomously an AI tool operates, the more difficult it becomes to allocate liability to humans.

AI tool developers and providers would seek to exclude or limit liability contractually in advance of deployment and use.

South African law recognises that, in certain cases where damage is caused wrongfully but where fault is absent, the wrongdoer is nevertheless liable on the basis of increased risk of harm, either in the seriousness of the harm, or in its high degree of probability. Liability in these cases is typically imposed by statute or by our courts.

The CPA regulates product liability and imposes strict liability for product defects upon all parties in the supply chain, in this way allocating risk to those for whose benefit the risk is created. The CPA provides for a number of defences and the CPA does not apply to transactions for the supply or promotion of goods and services to the State.

See 11.5 Transparency and 14.5 Manufacturing.

South Africa does not have any proposed regulations governing the imposition and allocation of liability in relation to AI systems.

Algorithms are not objective and may exhibit systematic or unfair deviation of outputs from the intended or expected outputs. This bias can arise from various sources, such as the datasets used in training, the personal views of users relied on in reinforcement learning, the design or implementation of the algorithm, or the use of the algorithm and its outputs. Bias in algorithms, biometric categorisation and automated decision-making could affect various consumer areas, including social media and content curation, credit scoring, finance and lending, insurance risk assessment and pricing, healthcare and treatment recommendations, education and admissions, hiring and employment, e-commerce, migration, law enforcement, criminal justice, the democratic process and autonomous systems.

Bias in algorithms may result in:

  • violation of personality rights;
  • violation of privacy rights and POPIA; and
  • unfair discrimination or harm to individuals or groups, which ultimately may infringe the basic human rights protected in the Bill of Rights in the South African Constitution.

Companies that use or provide biased algorithms may face potential liability for violating laws or regulations, infringing contracts or warranties, breaching duties of care or fiduciary obligations, causing harm or damage to consumers or third parties, or infringing human rights or ethical principles. For example, organisations may be liable for discrimination under human rights laws or damages claims under the law of delict where bias in algorithms causes damage. Organisations should also be wary of reputational losses where bias is involved, considering the demand for accountability and transparency from consumers.

There are no specific initiatives to develop standards or frameworks for addressing algorithmic bias that are specific to South Africa.

The processing of personal information by AI systems poses significant risks to data subjects and to society at large. On the one hand, AI models require vast amounts of rich, varied and representative data to comprehend patterns and generalise effectively, and failure to train an AI model in this manner results in outputs that are biased or perpetuate discriminatory outcomes. On the other hand, the rights of data subjects to protect their identity and to prevent unauthorised access by others to sensitive information must be protected. AI models must be trained and used in a transparent and fair manner to foster trust in this new technology.

Furthermore, automated decision-making by AI algorithms and machine learning models without direct human supervision lacks transparency and accountability, which makes it difficult to understand how decisions were made and to challenge unfair outcomes. For this reason, POPIA prescribes that data subjects may not (except in certain limited circumstances) be subject to decisions which result in legal consequences for them, or which affect them to a substantial degree, where the decision is based solely on the automated processing of personal information intended to provide a profile of the person, such as their creditworthiness, location, health, personal preferences or conduct. So, for example, a business would not be able to make a decision on the creditworthiness of a customer using only an AI system (see the sketch below).
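
By way of illustration only, the following Python sketch shows how a business might structure a credit decision so that the AI output is never the sole basis for a decision with legal consequences; the scoring model, threshold and field names are hypothetical assumptions, not a prescribed POPIA mechanism.

```python
# Illustrative sketch only: a hypothetical credit decision pipeline in which
# an AI score is never the sole basis for a decision with legal consequences.
# The model, threshold and field names are assumptions for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    approved: bool
    solely_automated: bool
    reviewer: Optional[str]

def ai_credit_score(applicant: dict) -> float:
    """Placeholder for a hypothetical machine-learning scoring model."""
    return 0.42  # dummy score for illustration

def decide(applicant: dict) -> Decision:
    score = ai_credit_score(applicant)
    provisional_approval = score >= 0.5  # hypothetical cut-off
    # A creditworthiness decision has legal consequences for the data subject,
    # so the provisional AI outcome is routed to a human for final review
    # rather than being applied automatically.
    return Decision(approved=provisional_approval,
                    solely_automated=False,
                    reviewer="credit officer")

print(decide({"monthly_income": 30000, "existing_debt": 5000}))
```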

Regarding data security for AI models, effective safeguards are essential to mitigate the risks of data breaches and unauthorised access, which are fundamental principles outlined in POPIA. AI systems must therefore employ mechanisms, such as encryption and authentication, to combat cyberattacks and restrict unauthorised access to personal information. The implementation of data security measures must nevertheless be balanced with the need for data accessibility and usability in AI applications, so as not to impede data sharing, hinder innovation or limit the potential benefits of AI technology.

A balanced approach to data security should therefore be adopted, anonymising personal information where possible and balancing the rights of data subjects with the principles of data accessibility, portability and usability to realise the full potential of AI.
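
By way of illustration only, the Python sketch below shows one common pseudonymisation technique (a keyed hash over a direct identifier) that could support this balanced approach; the field names and key handling are our own assumptions. Note that pseudonymised data may still constitute personal information under POPIA where re-identification remains possible.

```python
# Illustrative sketch only: pseudonymising a direct identifier before training
# data is shared with an AI pipeline. A keyed hash (HMAC) replaces the
# identifier so records remain linkable without revealing identity; the
# secret key and field names are assumptions for illustration.
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # in practice, held by the responsible party

def pseudonymise(identifier: str) -> str:
    """Return a keyed hash of the identifier; reversal requires SECRET_KEY."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"id_number": "8001015009087", "province": "Gauteng", "claim_amount": 12500}
safe_record = {**record, "id_number": pseudonymise(record["id_number"])}
# safe_record can now be used for model training without directly exposing the
# data subject's identity; re-identification requires access to SECRET_KEY.
print(safe_record)
```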

The term "biometrics" is defined in:

  • POPIA as “[a] technique of personal identification that is based on physical, physiological or behavioural characterisation including blood typing, fingerprinting, DNA analysis, retinal scanning and voice recognition”; and
  • the Births and Deaths Registration Act, 1992 as “photographs, fingerprints (including palm prints), hand measurements, signature verification or retinal patterns that may be used to verify the identity of individuals”.

Under POPIA:

  • the collection of biometric information and of information based on race, sex or ethnic origin; and
  • the processing of personal information concerning a data subject’s criminal behaviour or biometric information,

are generally prohibited.

The prohibition in respect of the collection and processing of biometric information or criminal behaviour is subject to exceptions:

  • where the data subject has consented to such collection, the prohibition does not apply;
  • if it “is necessary for the establishment, exercise or defence of a right or obligation in law” (which clearly includes law enforcement), the prohibition does not apply;
  • bodies charged by law with applying criminal law fall outside of the prohibition.

Responsible parties who have obtained such biometric information in accordance with the law fall outside of the prohibition.

Automated decision-making (ADM) tools are designed to:

  • prioritise or rank information according to specific criteria, eg, predictive law enforcement software to identify high-crime areas or predictive product recommendations customised to user preferences;
  • categorise information, for example classifying financial services customers according to risk or classifying beneficiaries of social grants according to particular algorithmic design criteria;
  • associate or link information to enable predictions; and
  • filter information to include or exclude data.

Biased data and algorithmic design criteria risk value-laden or discriminatory decisions. Algorithmic processes may also be opaque and decision making may therefore be difficult to trace or interpret. Decisions that have a high impact on human rights, whether individually or as a collective community, are particularly high risk. 

As stated in 11.2 Data Protection and Privacy, under POPIA, a data subject cannot be subject to a decision that has legal consequences or affects them substantially, based solely on the automated processing of personal information for profiling purposes, such as creditworthiness, reliability, location, health, personal preferences, or conduct. POPIA also requires organisations to provide data subjects with sufficient information about the underlying logic of such automated processing to enable them to make representations about such a decision.

The GDPR’s guiding principles require the use of legally compliant and non-discriminatory ADM, disclosure and traceability of automated decisions, and human oversight and review of automated decisions.

South Africa does not have specific legislation or regulations governing the substitution of human services for AI technologies. The CPA, however, requires organisations to be transparent about the nature of their goods and services. This could potentially be interpreted to require disclosure when AI is used to interact with consumers. The CPA further prohibits “unconscionable, unfair, unreasonable, unjust or improper trade practices”, and “deceptive, misleading, unfair and fraudulent conduct”. The use of chatbots and other AI technologies to influence or manipulate consumers unduly may constitute a contravention of the CPA. 

Organisations making use of AI technologies such as chatbots should also consider their obligations under POPIA in the collection and processing of personal data, including confidentiality obligations. Automated services such as chatbots also have the potential to cause damage to users, and organisations should take care not to be negligent in rendering services through them. Negligence could see organisations being held liable for damages suffered by users. Automated services could also create reputational risk due to errors, inconsistencies, inaccuracies, bias and other factors that affect the quality of the service rendered.

Firms are increasingly using AI technology in the form of pricing algorithms to set prices, solve various market challenges and achieve efficiencies. These pricing algorithms process large amounts of market data (ie, demand, supply, customer information and competitor prices) and optimise the pricing decisions of firms. Having gained traction in the airline industry over many years, the use of pricing algorithms as a tool to set prices is not a new phenomenon in global commerce. Over time, the increased use of multi-sectoral price-setting algorithms has become the centre of global competition law discourse. Whereas firms may argue that the use of pricing algorithms to set prices may have advantages for the consumer, such as gathering market intelligence in order to enable the innovation of products that will ultimately benefit consumers, it is widely understood that their purpose may be profit maximisation.

Similar to other global competition law regulators, our Competition Commission (the “Commission”) is concerned that firms may use pricing algorithms to achieve sinister gains. In its paper on Competition in the Digital Economy, the Commission postulates that algorithms may enable firms to engage in exclusionary anti-competitive behaviour through the use of self-preference algorithms, as well as facilitate collusive agreements on price and other trading conditions. This, according to the Commission, poses the risk that firms may be placed in a better position to engage in cartel conduct without easy detection. Price-setting algorithms may be used to facilitate collusion in the following manner (a simplified illustration follows the list):

  • to implement and monitor an express collusive agreement between competitors by instructing the software to set and adjust prices in a particular manner;
  • competitors may use the same third-party supplier of AI technology, which may result in the same pricing algorithm and therefore the same pricing; and
  • competitors may use independent algorithms that tacitly coordinate with one another resulting in price fixing.
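
By way of illustration only, the toy Python sketch below shows how two independently designed pricing algorithms could converge on, and sustain, the same higher price without any agreement between the firms; the pricing rule and figures are invented for this purpose and do not reflect any actual firm's software.

```python
# Toy illustration only: two independent "follow-the-leader" pricing
# algorithms converging on the higher price without any agreement between
# the firms. Prices and the pricing rule are invented for illustration.

def next_price(own: float, rival: float) -> float:
    # Never undercut; always match whichever price is higher.
    return max(own, rival)

price_a, price_b = 1.50, 2.00  # arbitrary starting prices
for period in range(5):
    price_a, price_b = next_price(price_a, price_b), next_price(price_b, price_a)
    print(f"period {period}: firm A {price_a:.2f}, firm B {price_b:.2f}")

# Both algorithms settle at 2.00 after one period. There is no "meeting of
# minds" between the firms, which is why regulators worry that parallel
# algorithms may produce collusive outcomes that are hard to prosecute.
```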

Price fixing is prohibited under section 4(1)(b)(i) of the Competition Act 89 of 1998, which provides that the direct or indirect fixing of price or other trading conditions is prohibited. Price fixing is a form of cartel conduct that is per se prohibited and cannot be justified or defended on the basis of any technological, efficiency or other pro-competitive gains resulting from the relevant conduct. This means that whether the agreement results in actual anti-competitive effects or whether the parties may not have enforced it is not considered when determining a section 4(1)(b)(i) contravention. Proving that a cartel was implemented through a price-setting algorithm requires similar evidence applicable to traditional cartels, being that there was a “meeting of minds” between those alleged to have participated in the cartel.

There is limited precedent dealing with these concerns; however, competition regulators around the world are increasingly working towards legislation to regulate the anti-competitive effects that may result from the use of pricing algorithms. The Commission has acknowledged that in order to successfully detect and prosecute AI-related cartel conduct, it must have the requisite skills, tools and jurisdiction. It specifically indicated that it intends to develop appropriate tools for detecting digital cartels and assessing the effects of agreements amongst competitors; pilot a tender bid-rigging detection programme; and build and staff a cartels forensic lab. This is an indication that there are ongoing discussions aimed at improving the Commission’s capacity to ensure optimum outcomes when addressing competition concerns arising from the use of AI.

Intellectual property, confidentiality and personal information violations, and ownership of intellectual property rights in outputs are risks that should be regulated in agreements between customers and AI suppliers. These agreements should therefore regulate:

  • the confidentiality of customer inputs and outputs and liability for breach of confidentiality undertakings;
  • the use of inputs by the supplier and protective measures in place for maintaining confidentiality of inputs; and
  • liability for claims in respect of third-party rights in outputs and datasets (however arising, including for breach of intellectual property rights and personal information violations).

Any other terms and conditions which also apply to the AI service must be specified, eg, general online services terms and conditions, product terms and conditions and supplemental terms and conditions.

Technology platforms for talent acquisition (recruitment), candidate screening and evaluation, and employee monitoring, learning, evaluation and talent optimisation (development), such as Workday and Wamly, assist in reducing the time spent manually processing data and performing administrative tasks. These AI systems process personal information, which is regulated by POPIA.

Employment equity requirements impose demographic preferences on employers. These preferences may not be present in the AI system. Demographics should also not be applied as the only basis for rejection as this may constitute discrimination.

There is an increase globally in the use of monitoring and evaluation tools in the workplace. The extent to which these tools are used in the South African context is not known.

Interguard is a monitoring system for on-site and remote workers, which assesses and reports on the computer activity of a remote worker, measures productivity and idle time, and supports overall management of the worker’s time. Teramind allows employers to conduct screen recordings and live views of employee computers, and to track emails and keystrokes. Other AI systems that provide a similar function are Hubstaff and AgenTrack.

These systems have access to employer data and information. This information can be sensitive and/or confidential. The security integrity of the system is important for an employer to ensure that its content and data are protected and cannot be shared or leaked to parties without approval or outside of the organisation. Each system needs to be assessed for its security functions on a case-by-case basis. POPIA considerations may also apply.

AI algorithms are used in digital platform companies to customise and enhance recommendation systems, anticipate customer preferences, optimise delivery routes, forecast demand and automate customer service resulting in delivering personalised experiences, improving operational efficiency, and boosting customer satisfaction.

AI-powered chatbots and virtual assistants can handle customer inquiries and support tasks, facilitating seamless order processing.

Concerns regarding data privacy, algorithmic bias and labour practices necessitate regulatory oversight, transparency and ethical governance of AI to ensure equitable and responsible utilisation within digital platform ecosystems. While these advancements hold promise for improving service quality and reducing costs, they also raise concerns about job displacement and the need for regulatory frameworks to ensure equitable access, fair labour practices, re-skilling and upskilling.

The use of technology to monitor employees or workers may enhance accountability, productivity and safety. However, organisations should consider the risks attached to the implementation of technology that may expose the organisation to potential legal and financial risk. Organisations are advised to implement policies or standard operating procedures to mitigate these risks and ensure consistency in application.

Examples of AI tools in the financial services sector include:

  • chatbots to assist customers by asking questions, directing enquiries and complaints and assisting with administrative tasks;
  • fraud detection and prevention by assisting in identifying irregularities or discrepancies in financial transactions;
  • facial recognition for app logins;
  • automation of routine transactions;
  • analysis of customer behaviours;
  • forecasting and prediction;
  • risk calculation, assessment and management;
  • portfolio construction; and
  • the development of highly personalised and efficient products, eg, mobile banking, e-wallet payments, banking capabilities in retail outlets.

The overarching objective of the FSRA is to promote financial stability. The FSRA aims, to this end, to establish, in conjunction with the specific financial sector laws, a regulatory and supervisory framework that promotes:

  • the safety and soundness of financial institutions;
  • the fair treatment and protection of financial customers;
  • the efficiency and integrity of the financial system; and
  • financial inclusion and confidence in the financial sector.

See 5.3 Regulatory Objectives.

The objectives of the National Credit Act, 2005 (NCA) include:

  • promoting the development of a credit market that is accessible to all South Africans, particularly to those who have historically been unable to access credit;
  • encouraging responsible borrowing;
  • discouraging reckless credit granting; and
  • promoting equity in the credit market by balancing the respective rights and responsibilities of credit providers and consumers.

The National Credit Regulator, established in terms of the NCA, is mandated to monitor credit availability, price and market conditions, conduct and trends.

The CPA protects consumers and imposes obligations on suppliers of consumer goods or services. The CPA’s scope is wide, although there are exemptions from its application. The CPA applies to general banking products and services but not to financial products or financial services that are subject to a financial sector law regulated by the Financial Sector Conduct Authority.

Fundamental risks of AI tools in the financial services sector include:

  • opacity/the “black box” effect (see 11.4 Automated Decision-Making);
  • automation bias (see 11.4 Automated Decision-Making), which can arise from biased training data and result in biased and discriminatory decisions; and
  • the security of data and increased cyber threats as information becomes increasingly granular and more valuable.

The risk of using repurposed data is that the original purpose and the new purpose may not be compatible. Also, the original data may have been biased, exclusionary or discriminatory, thus perpetuating biased and discriminatory outcomes.

Under POPIA, further processing or repurposing of personal information must be in accordance or compatible with the purpose for which it was collected. Certain instances of further processing are deemed not to be incompatible with the original purposes. In other cases, to assess whether further processing is compatible with the purpose of collection, the responsible party must take account of:

  • the relationship between the purpose of the intended further processing and the purpose for which the information has been collected;
  • the nature of the information concerned;
  • the consequences of the intended further processing for the data subject;
  • the manner in which the information has been collected; and
  • any contractual rights and obligations between the parties.

Traditionally, in the healthcare sector, liability arises from medical malpractice in the form of negligence, employer vicarious liability, product liability and unauthorised disclosure of private patient information.

AI tools in this sector have a wide range of applications including:

  • diagnosis and prognosis prediction;
  • personalised treatment, ie, matching interventions to individual patient traits, for example patient genetics, medical history, behaviours and biological responses;
  • drug discovery and development;
  • medical imaging;
  • medical devices in an AI system;
  • robotic surgery;
  • patient management; and
  • hospital administration.

In addition, natural language processing (NLP) technologies can, for example, be applied to:

  • analysing clinical notes;
  • extracting medical information from electronic health records; and
  • improving clinical decision support systems.

With the advent of these technologies in this sector:

  • injury to or the death of a patient may result from data bias, repurposed data, incorrect diagnoses or treatment recommendations or decisions by AI systems, decision-making in unforeseen circumstances, over-reliance on AI systems, or the failure or malfunction of robotic devices;
  • privacy and security issues are exacerbated due to the collection and analysis of sensitive patient data, raising concerns about unauthorised access to information or data breaches;
  • care delivery may be disrupted during system downtime or technical issues;
  • determining liability and accountability in cases of errors or omissions becomes complex where humans and AI collaborate;
  • challenges with data interoperability and concerns about data ownership may impede information exchange and co-ordinated care between healthcare providers; and
  • disparities in access to robotic surgery technology may widen healthcare inequalities, prompting concerns about fair distribution and affordability.

Given high concentrations of sensitive data, the areas posing the most risk of misuse of sensitive data, patient privacy breaches and cybersecurity attacks in digital healthcare include:

  • the use of centralised electronic health records (EHRs) which contain vast amounts of personal health information and provide access to patient information across healthcare providers and locations;
  • telemedicine platforms; and
  • medical devices connected to the internet, eg, insulin pumps or pacemakers.

From a compliance perspective, businesses in South Africa that operate in this sector must ensure that their use and safeguarding of data and their AI systems are compliant with POPIA and, where applicable, regulations like the United States Health Insurance Portability and Accountability Act (HIPAA) and the EU General Data Protection Regulation (GDPR), which impose stringent controls on data access, encryption and anonymisation to protect patient confidentiality.

South Africa has not yet formalised any laws and/or policy documents for the regulation of AI but we would expect to see the development of laws and regulations that:

  • require robust data security measures, encryption, regular audits and staff training to safeguard sensitive data and ensure the integrity of digital healthcare systems, redundancy plans and clear data governance policies;
  • require rigorous evaluation of data, ongoing monitoring and proactive measures to address bias throughout AI development and deployment; and
  • restrict the degree of autonomy afforded to AI during surgical procedures, particularly in respect of decision-making in unforeseen circumstances.

See 5.3 Regulatory Objectives.

Autopilot systems in cars use AI technologies to automate driving functions like steering, acceleration and braking, promising safety and convenience benefits. However, regulating these systems entails addressing concerns regarding reliability, accountability and ethical implications.

Given that autonomous vehicles gather vast amounts of data, including location information, driving patterns and even audiovisual recordings of the vehicle’s surroundings, POPIA will be engaged.

South Africa does not have specific regulations governing the use of AI in autonomous vehicles. We would expect to see the development of laws and regulations that:

  • mandate vehicle safety standards, transparency and user education on the capabilities and limitations of autopilot systems to prevent misuse and dependency;
  • mandate the implementation of cybersecurity measures against potential cyber threats and attacks which could compromise system operations and/or the integrity and confidentiality of data gathered by autonomous vehicles;
  • provide ethical guidelines, frameworks and standards for AI decision-making in critical and potentially life-threatening scenarios, such as avoiding collisions or mitigating the severity of accidents; and
  • regulate liability and insurance for accidents or malfunctions, having regard to the level of autonomy and the specific capabilities of the autonomous vehicle at the time of the incident.

Ethical dilemmas arise when determining how AI should prioritise competing objectives, such as protecting occupants versus minimising harm to pedestrians or other road users.

Questions of fairness, transparency, accountability and the moral values embedded in AI algorithms must be addressed to ensure that AI-driven decisions align with societal norms and ethical principles.

Efforts to promote international harmonisation for global collaboration and consistency in regulations and standards have been evident across various sectors, including trade, health and technology.

South Africa actively participates in international forums and organisations such as the United Nations, the World Trade Organization (WTO), and regional bodies like the African Union (AU) and the Southern African Development Community (SADC).

These platforms serve as avenues for South Africa to engage with other nations and collaborate on developing frameworks, standards and regulations that facilitate international trade, ensure product safety and quality, and promote sustainable development.

South Africa aligns its regulations with international standards set by organisations like the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) to facilitate the export and import of goods and services. Additionally, South Africa is a signatory to various multilateral trade agreements, such as the African Continental Free Trade Area (AfCFTA) and bilateral trade agreements, which aim to harmonise trade rules and regulations across participating countries.

In the manufacturing sector, the integration of AI into manufacturing processes impacts:

  • product quality and safety;
  • labour, more specifically workplace health and safety; job displacement due to automation; reskilling and upskilling adaptation for evolving roles in AI-enabled manufacturing environments; and
  • data privacy and security.

The CPA regulates consumers' rights to safe, good quality goods that are in good working order; liability for defective or unsafe goods; and general consumer protection. The CPA does not apply where goods or services are promoted or supplied to the State, or where the consumer is a juristic person whose asset value or annual turnover, at the time of the transaction, equals or exceeds the threshold value determined by the Minister.

The National Consumer Commission (NCC) may recommend to the Minister of Trade and Industry (“Minister”) that a particular code of conduct is to be recognised as the code which regulates the conduct of persons conducting business within a particular industry. The Minister may, by regulation, prescribe an industry code on the recommendation of the NCC or withdraw all or part of a previously prescribed industry code, on the recommendation of the NCC. In addition, the Minister is empowered to prescribe an industry code regulating the interaction between or among persons conducting business within an industry. The NCC is mandated to consult more widely in the industry than the persons who made the original proposal, and the code is required to be published for public comment, thus ensuring that any persons conducting business within the relevant industry are afforded the opportunity to raise objections.

The Consumer Product Safety Recall Guidelines, 2012, established in terms of the CPA (the “Recall Guidelines”), apply to all products sold to consumers that may be defective or unsafe and require suppliers (which include manufacturers, importers, distributors and retailers) to adopt a system that will ensure the efficient and effective recall of unsafe consumer products from consumers and from within the supply chain. The Recall Guidelines provide for voluntary (supplier) recall and compulsory (NCC) recall.

The South African Bureau of Standards (SABS) is a statutory body established in 1945 as South Africa’s national standardisation body. It continues to operate under the Standards Act, 2008, and its primary function is the development, maintenance, promotion and dissemination of South African National Standards (SANS), South African Technical Standards (SATS), South African Technical Reports (SATR) and other relevant publications. SABS is associated with various international and regional standards bodies, including the IEC, IEEE, ISO, ITU, ASTM, WSSN, EBU, ETSI, CEN, CENELEC, UN/ECE and SADCSTAN.

Although the SABS certification scheme is voluntary by nature, for a number of products SABS certification (ie, the use of the SABS Mark of Approval under licence) is mandatory, imposed by regulators for the protection of public interest, human, animal or plant health and safety, the safety of the environment, prevention of unfair trade practices and national security.

See 1.1 General Legal Background which sets out the primary laws governing employee rights and workplace health and safety.

Use and safeguarding of data and AI systems must comply with POPIA and where applicable, HIPAA and the GDPR.

South Africa has not yet formalised any laws or policy documents for the regulation of AI, but the above laws are broad and can be applied in the AI context. A manufacturer must therefore conform to the existing regulatory framework and adapt its products and services accordingly.

See 9.1 AI in the Legal Profession and Ethical Considerations.

South Africa does not have any legislation or regulations specifically governing the use of AI in the professions. However, there are a number of statutory and voluntary professional bodies regulating professions in South Africa (in addition to the LPC), for example:

  • the Health Professions Council (HPC);
  • the South African Nursing Council (SANC);
  • the South African Council For Educators (SACE);
  • the South African Council for Social Service Professions (SACSSP);
  • the South African Geomatics Council (SAGC);
  • the Institute of Mine Surveyors of Southern Africa (IMSSA);
  • the Engineering Council of South Africa (ECSA);
  • the South African Council for Natural Scientific Professions (SACNASP);
  • the South African Institute of Chartered Accountants (SAICA);
  • the Independent Regulatory Board of Auditors (IRBA);
  • the South African Institute of Taxation (SAIT);
  • the Institute of Certificated and Chartered Statisticians of South Africa (ICCSSA);
  • the South African Board for People Practices (SABPP) (HR profession);
  • the Public Relations Institute of Southern Africa (PRISA);
  • the Southern African Tourism Services Association (SATSA);
  • the Federated Hospitality Association of South Africa (FEDHASA);
  • the Southern African Marketing Research Association (SAMRA); and
  • the Library and Information Association of South Africa (LIASA).

Please see 8.1 Emerging Issues in Generative AI.

There are three requirements for information to qualify as a trade secret:

  • the information must not only relate to, but also be capable of application in, trade or industry;
  • the information must be secret or confidential, ie, only available and known to a closed circle of people; and
  • the information must, objectively, be of economic value to the plaintiff.

Trade secrets in the form of confidential and proprietary AI technology and data should be disclosed under conditions of confidentiality only.

Typical non-disclosure agreements require the receiving party:

  • not to disclose the information except under specified circumstances (eg, to employees who have a need to know, or if required by law);
  • not to use the information for any purpose other than the purpose for which it was disclosed; and
  • to maintain the confidentiality of the information by implementing appropriate, reasonable technical and organisational measures to prevent unlawful access to the information by others, having regard to generally accepted information security practices and procedures and, if applicable, in terms of any specific industry or professional rules and regulations.

Please see 8.1 Emerging Issues in Generative AI.

Users creating works and products using OpenAI's tools are not given any assurance that they have unencumbered title to outputs and are entitled to use them freely. This exposes users to third-party claims for infringement of intellectual property rights.

The New York Times litigation against OpenAI and Microsoft in New York is illustrative. In December 2023, the New York Times (NYT) instituted proceedings against OpenAI and Microsoft, alleging that they copied and used NYT’s work and “massive investment in journalism” without permission or payment to create generative AI tools and products, such as Microsoft’s Copilot (formerly Bing Chat) and OpenAI’s ChatGPT, that compete with it. NYT’s claims against Microsoft and OpenAI are for copyright infringement, vicarious copyright infringement and contributory copyright infringement; the removal of copyright management information (in contravention of the Digital Millennium Copyright Act); unfair competition by misappropriation; and trade mark dilution.

In setting the strategic direction of the company, boards must become informed about the opportunities and risks of using AI, including ethical and reputational risks. Each individual director must also discharge their fiduciary duties and their duty of care, skill and diligence, and should continuously develop the competence to lead ethically and effectively.

The company’s strategic direction must be implemented through:

  • policies, including data governance, cybersecurity and safety and AI usage policies; and
  • board and employee training programmes. 

These policies should address legal and ethical/reputational risks.

Boards should adopt global best practices in the use of AI, the frontrunners being the EU AI Act and the OECD AI Principles.

The King IV Code on Corporate Governance, which is mandatory for publicly listed companies, provides that boards should:

  • lead ethically and effectively;
  • govern technology and information in a way that supports the organisation setting and achieving its strategic objectives, and to this end, recommends that technology and information management should result in an information architecture that supports confidentiality, integrity and availability of information; the protection of privacy of personal information; and the continual monitoring of security of information; and
  • govern compliance with applicable laws and adopted, non-binding rules, codes and standards in a way that supports the organisation being ethical and a good corporate citizen.

AI usage policies and training should identify those AI tools deployed and authorised by an organisation and should regulate compliance obligations, confidentiality, protection of personal information, human oversight, transparency, monitoring and updates.
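
By way of illustration only, the sketch below (in Python) shows one way an organisation might record its register of authorised AI tools and the policy controls attached to each, in machine-readable form. The tool names and control fields are entirely hypothetical assumptions for the example, not drawn from any statute or standard:

```python
# Hypothetical register of AI tools authorised under an AI usage policy.
# Tool names and control fields are illustrative only.
AUTHORISED_TOOLS = {
    "example-chat-assistant": {
        "may_process_personal_information": False,
        "human_oversight_required": True,
        "last_reviewed": "2024-06-01",
    },
    "example-document-summariser": {
        "may_process_personal_information": True,
        "human_oversight_required": True,
        "last_reviewed": "2024-05-15",
    },
}

def check_usage(tool: str, involves_personal_information: bool) -> bool:
    """Return True if the proposed use falls within the policy register."""
    entry = AUTHORISED_TOOLS.get(tool)
    if entry is None:
        return False  # tool not authorised by the organisation
    if involves_personal_information and not entry["may_process_personal_information"]:
        return False  # use would breach the policy's personal-information controls
    return True

assert check_usage("example-chat-assistant", involves_personal_information=False)
assert not check_usage("unvetted-tool", involves_personal_information=False)
```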

Data governance policies and frameworks, and their implementation, should regulate the data used in AI tools, addressing collection, storage, sharing and access control; ensure that data is used ethically; promote transparency and accountability; and mitigate the risks associated with data misuse or privacy violations.
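
Again purely by way of illustration, a minimal sketch of how access control and an accountability audit trail over datasets might be enforced in code; the role names, dataset names and logging approach are our assumptions, not prescribed by POPIA or any standard:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("data-governance-audit")

# Hypothetical mapping of organisational roles to permitted datasets.
ACCESS_CONTROL_LIST = {
    "data-scientist": {"anonymised_training_data"},
    "compliance-officer": {"anonymised_training_data", "customer_records"},
}

def request_access(role: str, dataset: str) -> bool:
    """Grant or refuse access and record the decision for accountability."""
    allowed = dataset in ACCESS_CONTROL_LIST.get(role, set())
    audit_log.info(
        "%s role=%s dataset=%s granted=%s",
        datetime.now(timezone.utc).isoformat(), role, dataset, allowed,
    )
    return allowed

request_access("data-scientist", "customer_records")      # refused and logged
request_access("compliance-officer", "customer_records")  # granted and logged
```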

Cybersecurity and safety policies and their implementation should ensure the robust, secure and safe functioning of AI systems throughout their lifecycle, including by requiring encryption, authentication and access controls.
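
As a simple illustration of the kind of technical measure such a policy might require, the sketch below encrypts data at rest using the open-source Python cryptography library (our choice for the example; any comparable, well-vetted library would serve), with key management deliberately left out of scope:

```python
# Requires the third-party package: pip install cryptography
from cryptography.fernet import Fernet

# In practice the key would live in a key-management system, never in code.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"model training record containing personal information"
encrypted = cipher.encrypt(record)      # ciphertext safe to store at rest
decrypted = cipher.decrypt(encrypted)   # only holders of the key can read it

assert decrypted == record
```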

Spoor & Fisher

11 Byls Bridge Boulevard
Building No. 14
Highveld Ext 73
Centurion
Pretoria, 0157
South Africa

+27 012 676 1111

info@spoor.com
www.spoor.com/