Contract Law
Typical examples of the application of contract law in the AI context include:
It is not possible to “contract out of statute” unless the applicable statute permits such alteration.
Delict (Tort) and Product Liability
Delictual liability for harm is determined by applying a test for negligence based on a four-step enquiry into what a reasonable person, or, in the case of an expert, the reasonable expert, ought to have done in the same circumstances. Wrongfulness, fault, causation and loss must be proved. The negligent act or omission must have caused the harm. Liability arises for the wrongful and negligent use of, or reliance on, an AI tool which causes loss.
See 10.1 Theories of Liability.
Privacy and Data Protection
The Protection of Personal Information Act, 2013 (POPIA) generally prevents processing, disclosure and cross-border transfer of personal information without the consent of the data subject or in the absence of a legitimate exclusion.
Principles of transparency, explainability, data minimisation and data security underpin POPIA and apply in the AI context in the following way:
Intellectual Property
Appropriate licences are required to authorise the use of intellectual property for the training and development of AI systems.
AI systems may be protected by IP law in various ways. Firstly, we expect that an AI invention would be patentable if it provides a technical solution to a specific technical problem. Generally, the mere automation of a process which has traditionally been performed through a mental/manual process by a technical expert would not be considered patentable. When applying for patent protection, the inventor should not, however, be cited as a machine.
The following works associated with AI models may qualify as protected works under the Copyright Act 1978:
Employment Law
The primary laws governing employee rights and workplace health and safety, and that are of general application, are:
Criminal Law
In South Africa, commercial crimes include:
Key pieces of legislation include the Financial Intelligence Centre Act 2001 and the Prevention of Organised Crime Act 1998.
Our criminal law also recognises unlawful conduct against the personality (dignity, reputation and privacy) of a person as a crime. Fabrications (or “hallucinations”), inaccuracies, errors, bias or discrimination produced by AI systems may have criminal repercussions.
The Cybercrimes Act 2020 criminalises offences relating to cybercrimes as well as the disclosure of data messages which incite or threaten violence or damage to property.
The Electronic Communications and Transactions Act 2002 regulates the admissibility of digital evidence in court proceedings.
AI is increasingly being applied to enhance efficiency in a variety of sectors and in a variety of ways, for example:
In December 2023, President Cyril Ramaphosa committed to investing USD53 million (and up to USD265 million by 2030 from pooling resources with the private sector) toward PhD programmes focused on bringing “critical skills in areas like artificial intelligence research, advanced biotechnology, fuel cell development, batteries and other storage, and next-generation mining.” (Munyaradzi Makoni, “Cyril Ramaphosa Unveils R1bn PhD Initiative,” Research Professional News (blog), 14 December 2023).
Please also see 3.1 General Approach to AI-Specific Legislation, and in particular, the collaboration between the South African government and universities in relation to the establishment of AI centres of excellence.
South Africa’s AI policy began with the Presidential Commission on the Fourth Industrial Revolution (PC4IR) and its recommendation to establish an AI Institute (in the form of a public-private partnership) in 2019. In 2022, the Department of Communications and Digital Technologies (DCDT) founded the AI Institute of South Africa (AIISA) together with the University of Johannesburg (UJ) and Tshwane University of Technology (TUT). AIISA has launched two AI centres of excellence since its establishment in 2022: the UJ AI Hub and the TUT AI Hub. In 2024, the DCDT published a draft discussion document, entitled “South Africa's Artificial Intelligence Planning (for general discussion purposes)” (the “SAAIP Draft Discussion Document”).
South Africa has not yet formalised any laws or policy documents for the regulation of AI, however, and the release of the SAAIP Draft Discussion Document appears premature.
See 3.1 General Approach to AI-Specific Legislation.
See 3.1 General Approach to AI-Specific Legislation.
There is no applicable information in this jurisdiction.
There is no applicable information in this jurisdiction.
There is no applicable information in this jurisdiction.
No data, information or content laws have been amended or newly introduced to foster AI technologies, nor have any non-binding recommendations or directives been issued which facilitate this objective.
See 3.1 General Approach to AI-Specific Legislation.
No key judicial decisions have been handed down regarding AI, including with respect to generative AI and intellectual property rights.
See 4.1 Judicial Decisions.
The following regulatory bodies play a leading role in regulating the legal impacts of AI:
See 5.3 Regulatory Objectives, 11.2 Data Protection and Privacy, 11.6 Anti-competitive Conduct, 14.2 Financial Services, 14.3 Healthcare and 14.6 Professional Services.
We have not seen regulatory agencies make use of their own definitions of AI yet.
The regulators referred to in 5.1 Regulatory Agencies seek to prevent harm and promote the objectives as set out below:
We have not yet seen any enforcement or other regulatory action in relation to AI.
See 9.1 AI in the Legal Profession and Ethical Considerations and 14.5 Manufacturing.
See 14.5 Manufacturing.
The use of AI by the government has the benefit of improved efficiency in service delivery, policy making and decision making. Appropriate policy and legal frameworks regulating its application and use are necessary, however, given the risk of interference with fundamental human rights, for example, the use and application of:
Principles of procedural fairness, lawfulness and transparency should underpin policy and legal frameworks.
Closed-circuit television (CCTV) surveillance cameras equipped with facial recognition software, together with drones and body-worn cameras, are increasingly commonplace in private and public spaces in South African metropolitan areas. CCTV surveillance is used for safety and security, criminal investigative and evidentiary purposes. According to the Institute for Security Studies, “VumaCam’s licence plate recognition system in Johannesburg uses over 2,000 cameras and is connected to the South African Police Service’s national database of suspicious or stolen vehicles. Bidvest Protea Coin’s Scarface in South Africa uses facial recognition for the real-time detection of potential suspects. Its data can be used as evidence in criminal cases”.
The use of CCTV cameras is not specifically regulated in South Africa, although POPIA stipulates that personal information may only be processed with the consent of the data subject or, where the data subject is a minor, with the consent of a competent person. See 11.2 Data Protection and Privacy.
We would expect to see guidelines from the Information Regulator that:
The South African government has biometric identification systems in the departments of agriculture, correctional services, home affairs, police services and social assistance (see 11.3 Facial Recognition and Biometrics).
No key judicial decisions have been handed down regarding AI, including with respect to government use of AI.
AI tools have many applications for national security, including:
POPIA includes exemptions from certain of its provisions. POPIA does not apply to the processing of personal information by or on behalf of a public body which involves national security, including activities that are aimed at assisting in the identification of the financing of terrorist and related activities, defence or public safety, to the extent that adequate safeguards have been established in legislation for the protection of such personal information.
Personal information is required to be collected directly from the data subject. However, where the collection of personal information from another source is necessary in the interests of national security, then this requirement need not be complied with.
Further processing of personal information must be compatible with the purpose for which it was collected. Further processing that is necessary in the interests of national security, however, is deemed not to be incompatible with the purpose of collection.
The notification obligations in favour of data subjects when collecting the data subject’s personal information need not be discharged if non-compliance is necessary in the interests of national security.
Emerging issues raised by generative AI include:
Our Patent Office has granted a patent for an invention in which the inventor is cited as “DABUS (Device for the Autonomous Bootstrapping of Unified Sentience)” and notes that “the invention was autonomously generated by an artificial intelligence”. However, it is questionable whether this patent should have been granted by the Registrar since our legislation provides that there must be a transfer of rights in the invention from the inventor to the applicant, in a situation where the inventor is not the applicant. While we expect that the law will eventually evolve to accommodate an AI inventor, the current law does not make clear provision for this scenario and we are not aware of any amendments to the Patents Act in the pipeline aimed at addressing this issue.
Conversely, our Copyright Act expressly acknowledges that certain works may be computer generated (ie, where it is not possible to attribute the resultant work directly to the efforts of any individual causing the work to be made) and stipulates that the author of a literary, dramatic, musical or artistic work or computer program which is computer-generated, is the person by whom the arrangements necessary for the creation of the work were undertaken. We have no case law that is useful in interpreting who this would be for a generative AI work.
Given that bias and discrimination in training information are often repeated, if not amplified, in generative AI outputs, we expect to see an increase in disputes pertaining to outputs that identify a data subject in association with incorrect information or false claims.
See 1.1 General Legal Background.
There is uncertainty regarding the ownership of copyright in the outputs of generative AI models. It is not clear whether the author and first owner of the copyright in such output is the provider of the AI model or the person who submits the prompt. In some cases, terms and conditions of use will seek to regulate IP ownership. Any transfer of copyright must comply with the requirements of the Copyright Act, in order to be valid.
An objective similarity between an original work and another work will likely lead to an inference of copying, in which case our courts consider the following:
We have no case law on the interpretation of the law with regard to the infringement of copyright in training material. We expect that a key factor will be whether or not the use of the training material is “transformative”. In other words, an AI system should use the training material to find patterns or insights required to guide its creation of new works, instead of merely copying the training material.
An AI provider may only use data to develop or train AI models in compliance with POPIA. Also, data may be provided under restricted licence terms and/or under conditions of confidentiality, in which case an AI provider must use the information in a manner that does not result in a breach of its contractual obligations.
If an AI model is trained on personal information expressed in a manner where the individual identities of data subjects are revealed, the permission of the data subjects for the particular purpose must be obtained. If permission was obtained from data subjects for a specific purpose, then the use of the data for AI testing must align with that purpose.
Subject to certain exceptions, data subjects have the right to be notified that their personal information is being collected and the person responsible for collecting the information is obliged to provide various facts regarding the collection, such as who is collecting it and the purpose for which it is being collected.
A data subject may request a responsible party to:
AI is increasingly being used in the legal profession to conduct legal research by analysing vast amounts of legal data, contract analysis to identify key aspects and issues, document review and analysis, document creation, the prediction of legal outcomes based on historic data, e-discovery, taxation and conflict clearance. Although still limited mostly to internal usage, chatbots are increasingly used to provide or assist with legal advice.
South Africa does not have any legislation or regulations specifically governing the use of AI in the legal profession. However, the Legal Practice Act 2014 regulates the professional conduct of legal practitioners so as to ensure accountable conduct and establishes the Legal Practice Council (LPC). The LPC sets the standards and regulates the professional conduct of legal practitioners. The LPC also administers the enrolment of legal practitioners and investigates any misconduct by legal practitioners. Legal practitioners must ensure that professional services rendered in whatever manner, including through the use of AI, meet the standards set by the LPC in order to avoid disciplinary action.
The General Council of the Bar (“Bar Council”) deals with matters affecting the advocates’ profession. It seeks to maintain the high standards, professional integrity and independence which are established hallmarks of the Bar.
The use of AI in the legal profession raises several ethical concerns. Lawyers are expected to apply professional judgement and appropriate skill and expertise, as opposed to merely relying on AI outputs. Confidentiality is of the utmost importance in the legal profession and lawyers are required to ensure that confidentiality is not breached through the use of any AI tool. The use of AI tools should also be disclosed to clients and client consent should be obtained before using them on client matters, to ensure transparency, especially when handling sensitive information and personal data. As officers of the court, lawyers are also required to uphold the law and ensure that legislation, rules and regulations on the use of AI are strictly adhered to. AI has also increased the risk of unqualified persons providing legal advice that is reserved for legal professionals.
See 1.1 General Legal Background.
Where loss is caused by a defect or failing in an AI tool, as opposed to the person operating the AI system, liability may lie with the developer or provider of the tool. Clear and meaningful explanations of the results of AI tools are therefore essential. However, it may not necessarily be possible to determine how an algorithm reached a particular outcome. It may therefore be difficult to allocate responsibility to AI tool developers or providers if it is not possible for them to foresee how an AI tool might cause harm.
The particular outcome may be unpredictable given that algorithms are adaptive to environmental inputs. The more autonomously an AI tool operates, the more difficult it becomes to allocate liability to humans.
AI tool developers and providers would seek to exclude or limit liability contractually in advance of deployment and use.
South African law recognises that, in certain cases where damage is caused wrongfully but where fault is absent, the wrongdoer is nevertheless liable on the basis of increased risk of harm, either in the seriousness of the harm, or in its high degree of probability. Liability in these cases is typically imposed by statute or by our courts.
The Consumer Protection Act, 2008 (CPA) regulates product liability and imposes strict liability for product defects upon all parties in the supply chain, in this way allocating risk to those for whose benefit the risk is created. The CPA provides for a number of defences and does not apply to transactions for the supply or promotion of goods and services to the State.
See 11.5 Transparency and 14.5 Manufacturing.
South Africa does not have any proposed regulations governing the imposition and allocation of liability in relation to AI systems.
Algorithms are not objective and may exhibit systematic or unfair deviation of outputs from the intended or expected outputs. This bias can arise from various sources, such as the datasets used in training, the personal views of users used in reinforcement learning, the design or implementation of the algorithm, or the use of the algorithm and its outputs. Bias in algorithms, biometric categorisation and automated decision-making could affect various consumer areas, including social media and content curation, credit scoring, finance and lending, insurance and risk assessment and insurance pricing, health care and treatment recommendations, education and admissions, hiring and employment, e-commerce, migration, law enforcement, criminal justice, the democratic process and autonomous systems.
Bias in algorithms may result in:
Companies that use or provide biased algorithms may face potential liability for violating laws or regulations, breaching contracts or warranties, breaching duties of care or fiduciary obligations, causing harm or damage to consumers or third parties, or infringing human rights or ethical principles. For example, organisations may be liable for discrimination under human rights laws or damages claims under the law of delict where bias in algorithms causes damage. Organisations should also be wary of reputational losses where bias is involved, considering the demand for accountability and transparency from consumers.
There are no specific initiatives to develop standards or frameworks for addressing algorithmic bias that are specific to South Africa.
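Although no South African standard prescribes a metric, bias of this kind can be surfaced with simple monitoring checks. The sketch below is purely illustrative: the data is hypothetical, and the disparate-impact ratio used here is a rough screening measure borrowed from practice in other jurisdictions, not a legal test.

```python
# Hypothetical outcome data: approval decisions per demographic group.
outcomes = {
    "group_a": {"approved": 80, "total": 100},
    "group_b": {"approved": 50, "total": 100},
}

def selection_rate(group: dict) -> float:
    return group["approved"] / group["total"]

# Disparate-impact ratio: the selection rate of the least-favoured group
# divided by that of the most-favoured group.
rates = [selection_rate(g) for g in outcomes.values()]
ratio = min(rates) / max(rates)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.62 here; values well below 1.0 warrant scrutiny
```

A low ratio does not itself establish unlawful discrimination, but a check of this kind can flag outputs that deserve human investigation before they cause the harms described above.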
The processing of personal information by AI systems poses significant risks to data subjects and to society at large. On the one hand, AI models require vast amounts of rich, varied and representative data to comprehend patterns and generalise effectively, and failure to train an AI model in this manner results in outputs that are biased or perpetuate discriminatory outcomes. On the other hand, the rights of data subjects to protect their identity and to prevent unauthorised access by others to sensitive information must be protected. AI models must be trained and used in a transparent and fair manner to foster trust in this new technology.
Furthermore, automated decision-making by AI algorithms and machine learning models without direct human supervision lacks transparency and accountability, which makes it difficult to understand how decisions were made and to challenge unfair outcomes. For this reason, POPIA prescribes that data subjects may not (except in certain limited circumstances) be subject to decisions resulting in legal consequences which affect them to a substantial degree, where the decision is based solely on the automated processing of personal information intended to provide a profile of such person, such as creditworthiness, location, health, personal preferences or conduct. So, for example, a business would not be able to make a decision on the creditworthiness of a customer using only an AI system.
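By way of illustration only (all names, scores and thresholds below are hypothetical and not drawn from POPIA or any actual system), a credit provider’s decision pipeline might gate model outputs behind human review along these lines:

```python
from dataclasses import dataclass

@dataclass
class CreditAssessment:
    applicant_id: str
    model_score: float  # output of a hypothetical scoring model, 0.0-1.0

def decide(assessment: CreditAssessment, human_reviewed: bool) -> str:
    """Illustrative gate: a decision with legal consequences is only
    finalised once a human has reviewed the model output, consistent
    with POPIA's restriction on solely automated decision-making."""
    if not human_reviewed:
        # Never act on the model output alone; route for human review.
        return "REFER_TO_HUMAN_REVIEWER"
    return "APPROVE" if assessment.model_score >= 0.6 else "DECLINE"

# The model score alone never finalises the outcome.
print(decide(CreditAssessment("A-001", 0.72), human_reviewed=False))  # REFER_TO_HUMAN_REVIEWER
print(decide(CreditAssessment("A-001", 0.72), human_reviewed=True))   # APPROVE
```

The design point is simply that the automated output is treated as an input to a human decision, not as the decision itself.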
Regarding data security for AI models, effective safeguards are essential to mitigate the risks of data breaches and unauthorised access, which are fundamental principles outlined in POPIA. AI systems must therefore employ mechanisms, such as encryption and authentication, to combat cyberattacks and restrict unauthorised access to personal information. The implementation of data security measures must nevertheless be balanced with the need for data accessibility and usability in AI applications, so as not to impede data sharing, hinder innovation or limit the potential benefits of AI technology.
A balanced approach to data security should therefore be adopted, anonymising personal information where possible and balancing the rights of data subjects with the principles of data accessibility, portability and usability to realise the full potential of AI.
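As a purely illustrative sketch of this balancing exercise (all names and values below are hypothetical), direct identifiers might be replaced with keyed hashes before records are used for training. Note that this is pseudonymisation rather than true anonymisation: the key holder can re-link the records, so POPIA may still apply to the output.

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # hypothetical; store in a key vault, not in source

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).
    Pseudonymisation, not anonymisation: the holder of the key can
    re-link records, so POPIA may still apply to the result."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"id_number": "8001015009087", "income": 42000, "defaulted": False}
training_record = {**record, "id_number": pseudonymise(record["id_number"])}
print(training_record)  # identifier replaced; remaining fields stay usable for training
```

Approaches of this kind preserve the analytical utility of the data while reducing, though not eliminating, the privacy risk to data subjects.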
The term "biometrics" is defined in:
Under POPIA:
are generally prohibited.
The prohibition in respect of the collection and processing of biometric information or criminal behaviour is subject to exceptions:
Responsible parties who have obtained such biometric information in accordance with the law fall outside of the prohibition.
Automated decision-making (ADM) tools are designed to:
Biased data and algorithmic design criteria risk value-laden or discriminatory decisions. Algorithmic processes may also be opaque and decision making may therefore be difficult to trace or interpret. Decisions that have a high impact on human rights, whether individually or as a collective community, are particularly high risk.
As stated in 11.2 Data Protection and Privacy, under POPIA, a data subject cannot be subject to a decision that has legal consequences or affects them substantially, based solely on the automated processing of personal information for profiling purposes, such as creditworthiness, reliability, location, health, personal preferences, or conduct. POPIA also requires organisations to provide data subjects with sufficient information about the underlying logic of such automated processing to enable them to make representations about such a decision.
The GDPR’s guiding principles require the use of legally compliant and non-discriminatory ADM, disclosure and traceability of automated decisions, and human oversight and review of those decisions.
South Africa does not have specific legislation or regulations governing the substitution of human services for AI technologies. The CPA, however, requires organisations to be transparent about the nature of their goods and services. This could potentially be interpreted to require disclosure when AI is used to interact with consumers. The CPA further prohibits “unconscionable, unfair, unreasonable, unjust or improper trade practices”, and “deceptive, misleading, unfair and fraudulent conduct”. The use of chatbots and other AI technologies to influence or manipulate consumers unduly may constitute a contravention of the CPA.
Organisations making use of AI technologies such as chatbots should also consider their obligations under POPIA in the collection and processing of personal data, including confidentiality obligations. Automated services such as chatbots also have the potential to cause damage to users, and organisations should take care not to be negligent in rendering services through them. Negligence could see organisations being held liable for damages suffered by users. Automated services could also result in reputational risk due to errors, inconsistencies, inaccuracies, bias and other factors that affect the quality of the service rendered.
Firms are increasingly using AI technology in the form of pricing algorithms to set prices, solve various market challenges and achieve efficiencies. These pricing algorithms process large amounts of market data (ie, demand, supply, customer information and competitor prices) and optimise the pricing decisions of firms. Having gained traction in the airline industry over many years, the use of pricing algorithms as a tool to set prices is not a new phenomenon in global commerce. Over time, the increased use of multi-sectoral price-setting algorithms has become the centre of global competition law discourse. While firms may argue that the use of pricing algorithms to set prices may have advantages for the consumer, such as gathering market intelligence in order to enable the innovation of products that will ultimately benefit consumers, it is widely understood that their purpose may be profit maximisation.
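The mechanics can be shown with a deliberately simplified and entirely hypothetical repricing rule. When every firm in a market runs a similar rule against the same competitor data, prices can stabilise above competitive levels without any explicit agreement, which is the concern discussed below.

```python
def reprice(own_cost: float, competitor_prices: list[float], min_margin: float = 0.05) -> float:
    """Hypothetical repricing rule: undercut the cheapest rival by 1%,
    but never price below cost plus a minimum margin. Run by every firm
    in a market, such rules can converge on stable supra-competitive
    prices with no explicit agreement between the firms."""
    floor = own_cost * (1 + min_margin)
    target = min(competitor_prices) * 0.99
    return round(max(floor, target), 2)

print(reprice(own_cost=100.0, competitor_prices=[130.0, 125.5, 128.0]))
```

The sketch is not a model of any real system; it simply illustrates how interdependent algorithmic pricing can produce coordinated outcomes that competition regulators find difficult to distinguish from collusion.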
Similar to other global competition law regulators, our Competition Commission (the “Commission”) is concerned that firms may use pricing algorithms to achieve sinister gains. In its paper on Competition in the Digital Economy, the Commission postulates that algorithms may enable firms to engage in exclusionary anti-competitive behaviour through the use of self-preference algorithms, as well as facilitate collusive agreements on price and other trading conditions. This, according to the Commission, poses the risk that firms may be placed in a better position to engage in cartel conduct without easy detection. Price setting algorithms may be used to facilitate collusion in the following manner:
Price fixing is prohibited under section 4(1)(b)(i) of the Competition Act 89 of 1998, which provides that the direct or indirect fixing of a price or other trading conditions is prohibited. Price fixing is a form of cartel conduct that is per se prohibited and cannot be justified or defended on the basis of any technological, efficiency or other pro-competitive gains resulting from the relevant conduct. This means that neither the actual anti-competitive effects of the agreement nor the fact that the parties may not have enforced it is considered when determining a section 4(1)(b)(i) contravention. Proving that a cartel was implemented through a price-setting algorithm requires evidence similar to that applicable to traditional cartels, namely that there was a “meeting of minds” between those alleged to have participated in the cartel.
There is limited precedent dealing with these concerns; however, competition regulators around the world are increasingly working towards legislation to regulate the anti-competitive effects that may result from the use of pricing algorithms. The Commission has acknowledged that, in order to successfully detect and prosecute AI-related cartel conduct, it must have the requisite skills, tools and jurisdiction. It has specifically indicated that it intends to develop appropriate tools for detecting digital cartels and assessing the effects of agreements among competitors, pilot a tender bid-rigging detection programme, and build and staff a cartels forensic lab. This is an indication that there are ongoing discussions aimed at improving the Commission’s capacity to ensure optimum outcomes when addressing competition concerns arising from the use of AI.
Intellectual property, confidentiality and personal information violations, and ownership of intellectual property rights in outputs are risks that should be regulated in agreements between customers and AI suppliers. These agreements should therefore regulate:
Any other terms and conditions which also apply to the AI service must be specified, eg, general online services terms and conditions, product terms and conditions and supplemental terms and conditions.
Technology platforms for talent acquisition (recruitment), candidate screening and evaluation, and employee monitoring, learning, evaluation and talent optimisation (development), such as Workday and Wamly, reduce the time spent manually processing data and performing administrative tasks. These AI systems process personal information, which is regulated by POPIA.
Employment equity requirements impose demographic preferences on employers. These preferences may not be present in the AI system. Demographics should also not be applied as the only basis for rejection as this may constitute discrimination.
There is an increase globally in the use of monitoring and evaluation tools in the workplace. The extent to which these tools are used in the South African context is not known.
InterGuard is a monitoring system for on-site and remote workers, which assesses and reports on the computer activity of a remote worker, measures productivity and idle time, and supports overall management of the worker’s time. Teramind allows employers to conduct screen recordings and live views of employee computers, and to track emails and keystrokes. Other AI systems that provide a similar function are Hubstaff and AgenTrack.
These systems have access to employer data and information. This information can be sensitive and/or confidential. The security integrity of the system is important for an employer to ensure that its content and data are protected and cannot be shared or leaked to parties without approval or outside of the organisation. Each system needs to be assessed for its security functions on a case-by-case basis. POPIA considerations may also apply.
AI algorithms are used in digital platform companies to customise and enhance recommendation systems, anticipate customer preferences, optimise delivery routes, forecast demand and automate customer service, thereby delivering personalised experiences, improving operational efficiency and boosting customer satisfaction.
AI-powered chatbots and virtual assistants can handle customer inquiries and support tasks, facilitating seamless order processing.
Concerns regarding data privacy, algorithmic bias and labour practices necessitate regulatory oversight, transparency and ethical governance of AI to ensure equitable and responsible utilisation within digital platform ecosystems. While these advancements hold promise for improving service quality and reducing costs, they also raise concerns about job displacement and the need for regulatory frameworks to ensure equitable access, fair labour practices, re-skilling and upskilling.
The use of technology to monitor employees or workers may enhance accountability, productivity and safety. However, organisations should consider the risks attached to the implementation of technology that may expose the organisation to potential legal and financial risk. Organisations are advised to implement policies or standard operating procedures to mitigate these risks and ensure consistency in application.
Examples of AI tools in the financial services sector include:
The overarching objective of the Financial Sector Regulation Act, 2017 (FSRA) is to promote financial stability. The FSRA aims, to this end, to establish, in conjunction with the specific financial sector laws, a regulatory and supervisory framework that promotes:
See 5.3 Regulatory Objectives.
The objectives of the National Credit Act, 2005 (NCA) include:
The National Credit Regulator, established in terms of the NCA, is mandated to monitor credit availability, price and market conditions, conduct and trends.
The CPA protects consumers and imposes obligations on suppliers of consumer goods or services. The CPA's scope is wide, although there are exemptions from its application. The CPA applies to general banking products and services but not to financial products or financial services that are subject to a financial sector law regulated by the Financial Sector Conduct Authority.
Fundamental risks of AI tools in the financial services sector include:
The risk of using repurposed data is that the original purpose and the new purpose may not be compatible. Also, the original data may have been biased, exclusionary or discriminatory, thus perpetuating biased and discriminatory outcomes.
Under POPIA, further processing or repurposing of personal information must be in accordance with, or compatible with, the purpose for which it was collected. Certain instances of further processing are deemed not to be incompatible with the original purposes. In other cases, to assess whether further processing is compatible with the purpose of collection, the responsible party must take account of:
Traditionally, in the healthcare sector, liability arises from medical malpractice in the form of negligence, employer vicarious liability, product liability and unauthorised disclosure of private patient information.
AI tools in this sector have a wide range of applications including:
In addition, natural language processing (NLP) technologies can, for example, be applied to:
With the advent of these technologies in this sector:
Given high concentrations of sensitive data, the areas posing the most risk of misuse of sensitive data, patient privacy breaches and cybersecurity attacks in digital healthcare include:
From a compliance perspective, businesses in South Africa that operate in this sector must ensure that their use and safeguarding of data and their AI systems are compliant with POPIA and, where applicable, regulations like the United States Health Insurance Portability and Accountability Act (HIPAA) and the EU General Data Protection Regulation (GDPR), which impose stringent controls on data access, encryption and anonymisation to protect patient confidentiality.
South Africa has not yet formalised any laws and/or policy documents for the regulation of AI but we would expect to see the development of laws and regulations that:
See 5.3 Regulatory Objectives.
Autopilot systems in cars use AI technologies to automate driving functions like steering, acceleration and braking, promising safety and convenience benefits. However, regulating these systems entails addressing concerns regarding reliability, accountability and ethical implications.
Given that autonomous vehicles gather vast amounts of data, including location information, driving patterns and even audiovisual recordings of the vehicle’s surroundings, POPIA will be engaged.
South Africa does not have specific regulations governing the use of AI in autonomous vehicles. We would expect to see the development of laws and regulations that:
Ethical dilemmas arise when determining how AI should prioritise competing objectives, such as protecting occupants versus minimising harm to pedestrians or other road users.
Questions of fairness, transparency, accountability and the moral values embedded in AI algorithms must be addressed to ensure that AI-driven decisions align with societal norms and ethical principles.
Efforts to promote international harmonisation for global collaboration and consistency in regulations and standards have been evident across various sectors, including trade, health and technology.
South Africa actively participates in international forums and organisations such as the United Nations, the World Trade Organization (WTO), and regional bodies like the African Union (AU) and the Southern African Development Community (SADC).
These platforms serve as avenues for South Africa to engage with other nations and collaborate on developing frameworks, standards and regulations that facilitate international trade, ensure product safety and quality, and promote sustainable development.
South Africa aligns its regulations with international standards set by organisations like the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) to facilitate the export and import of goods and services. Additionally, South Africa is a signatory to various multilateral trade agreements, such as the African Continental Free Trade Area (AfCFTA) and bilateral trade agreements, which aim to harmonise trade rules and regulations across participating countries.
In the manufacturing sector, the integration of AI into production processes impacts:
The CPA regulates consumers' rights to safe, good quality goods that are in good working order; liability for defective or unsafe goods; and general consumer protection. The CPA does not apply where goods or services are promoted or supplied to the State, or where the consumer is a juristic person whose asset value or annual turnover, at the time of the transaction, equals or exceeds the threshold value determined by the Minister.
The National Consumer Commission (NCC) may recommend to the Minister of Trade and Industry (“Minister”) that a particular code of conduct is to be recognised as the code which regulates the conduct of persons conducting business within a particular industry. The Minister may, by regulation, prescribe an industry code on the recommendation of the NCC or withdraw all or part of a previously prescribed industry code, on the recommendation of the NCC. In addition, the Minister is empowered to prescribe an industry code regulating the interaction between or among persons conducting business within an industry. The NCC is mandated to consult more widely in the industry than the persons who made the original proposal, and the code is required to be published for public comment, thus ensuring that any persons conducting business within the relevant industry are afforded the opportunity to raise objections.
The Consumer Product Safety Recall Guidelines, 2012, established in terms of the CPA (the “Recall Guidelines”), apply to all products sold to consumers that may be defective or unsafe and require suppliers (which include manufacturers, importers, distributors and retailers) to adopt a system that will ensure the efficient and effective recall of unsafe consumer products from consumers and from within the supply chain. The Recall Guidelines provide for voluntary (supplier) recall and compulsory (NCC) recall.
The South African Bureau of Standards (SABS) is a statutory body established in 1945 as South Africa's national standardisation body. It continues to operate under the Standards Act, 2008 and its primary function is the development, maintenance, promotion and dissemination of South African National Standards (SANS), South African Technical Standards (SATS), South African Technical Reports (SATR) and other relevant publications. SABS is associated with various international and regional standards bodies, including IEC, IEEE, ISO, ITU, ASTM, WSSN, EBU, ETSI, CEN, CENELEC, UN/ECE and SADCSTAN.
Although the SABS certification scheme is voluntary by nature, for a number of products SABS certification (ie, the use of the SABS Mark of Approval under licence) is mandatory, imposed by regulators for the protection of public interest, human, animal or plant health and safety, the safety of the environment, prevention of unfair trade practices and national security.
See 1.1 General Legal Background which sets out the primary laws governing employee rights and workplace health and safety.
Use and safeguarding of data and AI systems must comply with POPIA and where applicable, HIPAA and the GDPR.
South Africa has not yet formalised any laws and/or policy documents for the regulation of AI, but the above laws are broad and can be applied in the AI context, and manufacturers must conform to the existing regulatory framework and adapt their products and services accordingly.
See 9.1 AI in the Legal Profession and Ethical Considerations.
South Africa does not have any legislation or regulations specifically governing the use of AI in the professions. However, there are a number of statutory and voluntary professional bodies regulating professions in South Africa (in addition to the LPC), for example:
Please see 8.1 Emerging Issues in Generative AI.
There are three requirements for information to qualify as a trade secret:
Trade secrets in the form of:
should be disclosed under conditions of confidentiality only.
Typical non-disclosure agreements require the receiving party:
Please see 8.1 Emerging Issues in Generative AI.
Users creating works and products using OpenAI’s models are not provided with any assurances that they have unencumbered title to outputs and are entitled to use outputs freely. This exposes users to third-party claims for infringement of intellectual property rights.
The New York Times litigation against OpenAI and Microsoft in New York is illustrative. In December 2023, the New York Times (NYT) instituted proceedings against OpenAI and Microsoft, alleging that they are copying and using NYT’s work and “massive investment in journalism” without permission or payment to create generative AI tools and products, like Microsoft’s Copilot (formerly Bing Chat) and OpenAI’s ChatGPT, that compete with it. NYT’s claims against Microsoft and OpenAI are for copyright infringement, vicarious copyright infringement, contributory copyright infringement, the removal of copyright management information (in contravention of the Digital Millennium Copyright Act), unfair competition by misappropriation and trade mark dilution.
In setting the strategic direction of the company, Boards must become informed about the opportunities and risks of using AI, including ethical and reputational risk. Each individual director must also discharge their fiduciary duties and duty of care, skill and diligence and should continuously develop their competence to lead ethically and effectively.
The company’s strategic direction must be implemented through:
These policies should address legal and ethical/reputational risks.
Boards should adopt global best practices in the use of AI, the frontrunners being the EU AI Act and the OECD AI Principles.
The King IV Code on Corporate Governance, which is mandatory for publicly listed companies, provides that boards should:
AI usage policies and training should identify those AI tools deployed and authorised by an organisation and should regulate compliance obligations, confidentiality, protection of personal information, human oversight, transparency, monitoring and updates.
Data governance policies and frameworks and their implementation should regulate data used in tools, addressing collection, storage, sharing and access control and should regulate the use of data ethically, promote transparency and accountability, and mitigate the risks associated with data misuse or privacy violations.
Cybersecurity and safety policies and their implementation should regulate the robust, secure and safe functionality of AI systems throughout their lifecycle, including by requiring encryption, authentication and access controls.
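By way of illustration only, the sketch below shows symmetric encryption of sensitive records at rest using the third-party Python cryptography package (an assumption; any equivalent mechanism would serve), with key handling deliberately simplified:

```python
# Requires the third-party package:  pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, generate and hold the key in a key management system
cipher = Fernet(key)

sensitive = b"applicant_id=A-001;health_status=..."
token = cipher.encrypt(sensitive)  # ciphertext safe to store at rest
print(cipher.decrypt(token))       # readable only by holders of the key
```

Coupled with authentication and access controls over who may hold the key, measures of this kind give practical effect to the policy requirements described above.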
11 Byls Bridge Boulevard
Building No. 14
Highveld Ext 73
Centurion
Pretoria, 0157
South Africa
+27 012 676 1111
info@spoor.com
www.spoor.com/