Artificial Intelligence 2024 Comparisons

Last Updated May 28, 2024

Contributed By Allende & Brea

Law and Practice

Allende & Brea is one of the leading full-service law firms in Argentina. Founded in 1957, the firm advises companies of all sizes on an ongoing basis, and acts as special counsel across a broad range of commercial transactions carried out in Argentina or abroad. The firm’s AI practice comprises one partner, six associates and three paralegals, who specialise in the legal issues related to AI and machine learning. The team provides comprehensive legal services to a diverse range of clients, including tech start-ups, established technology companies, AI developers, and end-users. It focuses on navigating the complex legal landscape that surrounds the rapidly evolving field of AI and is also developing in-house programs. Among the clients Allende & Brea has assisted in AI matters are Amazon, Anthropic, Brevity, Franklin Templeton, Ferrero, MetLife, Meta, OpenAI, Thermo Fisher and TikTok.

In Argentina, the general background legislation that applies to artificial intelligence (AI) includes:

  • the Argentine Constitution, which establishes fundamental rights and principles such as the right to privacy and the exercise of the habeas data action;
  • the Civil and Commercial Code, which includes provisions on contracts, torts and liability that are applicable to agreements involving AI and to liability arising from damages caused by AI;
  • the Personal Data Protection Act, which addresses the processing of personal data, including data processed by AI technologies;
  • the Intellectual Property Act, which is particularly relevant to copyright protection regarding AI-generated works;
  • the Labour Act, which is applicable to the use of AI in employment relationships (eg, in recruitment and hiring, performance management, and employee monitoring);
  • the Consumer Protection Act, which establishes principles relevant to the safeguarding of consumers of AI products and services, such as transparency and safety; and
  • the Criminal Code, which provides the legal framework for criminal offences which is relevant when AI is used to commit crimes, such as cybercrimes.

In Argentina, AI and machine learning (ML) are applied in agriculture to optimise production (eg, by analysing climate conditions and historical data on crop yield), as well as to improve crop monitoring and control (eg, by using sensors and cameras to collect real-time data on soil conditions and air quality, among other factors). Further, the manufacturing industry has incorporated AI and ML systems to optimise the production of goods and services, while monitoring the activities deployed in factories.

Likewise, since 2021, AI has been consistently used in the judiciary. In this respect, the Artificial Intelligence Laboratory of the Prosecutor’s Office of Buenos Aires has developed different predictive technologies which assist lawyers – eg, in the preparation of a tax observation in retirement proceedings. Another use case for AI has been the health sector, where, especially since the pandemic, it has been applied in the management of medical appointments (via “chatbots”).

AI and ML algorithms have also been implemented by sports clubs, allowing them to analyse athlete performance, injuries and match history and, hence, optimise decision-making (eg, on which player best suits the club’s needs or a particular position).

Furthermore, algorithms help the financial sector to analyse investment portfolios according to historical data and market conditions – eg, by identifying their profitability and potential risks.

In addition, e-commerce has improved since AI and ML were implemented, as they allow businesses to predict consumer behaviour and personalise marketing and customer experience.

In 2023, by means of Law 27,738, the National Plan on Science, Technology and Innovation 2030 was created to define, organise and communicate the policies, strategies and instruments for all public and private actors that constitute the National Science, Technology and Innovation System (SNCTI).

Likewise, the National Agency for the Promotion of Research, Technological Development and Innovation, under the Office of the Chief of Staff, launched two calls aimed at the development of the knowledge economy, by training human resources and promoting technology-based companies. One of these proposals seeks to encourage the generation and implementation of innovative knowledge of AI in the productive sector. The other proposal seeks to develop new scientific solutions, based on AI and data science, to increase export capacity.

Moreover, both national and provincial governments promote AI through various initiatives, such as:

  • the National Institute of Public Administration’s course on AI, which is addressed to public officials and employees of the public administration;
  • the government of Buenos Aires’s free online AI course, which is open to the public; and
  • the Agency for the Access to Public Information’s regular participation in conferences and programmes related to the use of AI.

Please note that there is no specific legislation on artificial intelligence in Argentina.

Although there is no specific legislation on AI, Argentina tends to follow the European Union’s regulations on the use of AI – eg, in the guidelines for the use of AI launched by the Undersecretariat of Information Technologies under the Office of the Chief of Staff (the “Undersecretariat”), which establish the principles of security, non-discrimination, sustainability, privacy, data protection, human supervision, transparency, explainability, responsibility, accountability, education and governance, among others. For further information, please see 3.3 Jurisdictional Directives.

By means of Provision 2/2023, the Undersecretariat approved the “Recommendations for a Reliable Use of Artificial Intelligence”, consisting of guidelines for the development of AI systems under the principles of security, non-discrimination, sustainability, privacy, data protection, human supervision, transparency, explainability, responsibility, accountability, education and governance, among others. Although the recommendations are addressed to the public sector, they may also be adopted by private sector entities that develop AI technology.

In addition, through Resolution 161/2023, the Agency for Access to Public Information created the “Programme for Transparency and Protection of Personal Data in the Use of Artificial Intelligence”, aimed at the promotion of processes of analysis, regulation and the strengthening of government capabilities in order to support the development and use of AI, both in the public and private sectors.

Also, through Administrative Decision 750/2023, the executive branch ordered the implementation of an inter-ministerial roundtable on AI, in order to discuss, analyse and examine the impact of AI in Argentina, considering its advantages for private activity and public policy, as well as the risks it poses to individuals and society.

In this sense, government bodies have already issued non-binding directives to address AI matters. Binding resolutions, however, have not been released.

This is not applicable in Argentina.

This is not applicable in Argentina.

This is not applicable in Argentina.

No Argentine laws have been amended to foster AI technology. However, recent bills of law have contemplated AI issues in their provisions.

In particular, Jujuy province is currently in the process of a constitutional amendment that seeks to recognise the right to use AI systems and technologies.

Nonetheless, case law has already started to target AI issues as discussed in 4. Judicial Decisions.

Numerous bills of law have been introduced in Argentina to regulate AI, following the European Union’s regulations on the matter. Such bills mainly seek to provide guiding principles for the use of AI, including transparency, explainability, security, privacy, data protection, human supervision, responsibility and accountability. Also, the bills aim to appoint an authority with supervisory and sanctioning powers.

A few bills attempting to amend the Criminal Code, by seeking to penalise sexual offences committed with the aid of AI systems, have also been submitted.

Some of the key judicial decisions involving AI matters in Argentina are outlined below:

Denegri, Natalia Ruth c/ Google Inc.

Model/presenter Natalia Denegri sued Google, requesting that the company delete all websites linking Ms Denegri to the “Coppola case” – in particular, those that featured video clips of Ms Denegri being verbally abusive to guests on an Argentine TV talk show covering the drugs case against Guillermo Coppola.

Although this case was not specifically centred on AI, the Supreme Court did raise awareness of the use of AI by search engines. In this sense, the Court stated that the algorithm is not neutral, but rather displays people’s preferences. The Court also highlighted the importance of AI becoming more transparent and understandable to its users.

Rodriguez, María Belen v Google

A model filed a lawsuit for damages against two internet search engines, claiming that her image had been used commercially and without her authorisation and that, furthermore, her personal rights had been infringed upon by being linked to websites with erotic or pornographic content. She also requested the cessation of such use and the removal of the links.

Although the ruling did not deal with AI specifically, it did target the use of thumbnails, and the responsibility of platforms.

Observatorio de Derecho Informático Argentino (ODIA) and Others Against the Government of Buenos Aires

ODIA sued the government of Buenos Aires, alleging that the amendments introduced to Law No 5.688 (which implemented the Fugitive Facial Recognition System, a system that uses AI to identify fugitives in public areas) were unconstitutional and contrary to various international treaties signed by Argentina. The claim stated that such amendments affected the rights of privacy, assembly, equality, non-discrimination, criminal guarantees, honour and image, among others, of the inhabitants of Buenos Aires.

The court granted the petition, as it considered:

  • the non-establishment of a Special Commission in the legislature of Buenos Aires;
  • the lack of reports from the Ombudsman of Buenos Aires;
  • the absence of an impact study on citizens’ rights prior to the implementation of the system;
  • the flaws in the databases that fed the system; and
  • the exclusion of citizen participation.

In conclusion, the court did not focus on the system itself, but on the consequences of its premature implementation and its use under precarious conditions regarding respect for people’s rights and guarantees.

Argentine courts have not yet defined AI.

Argentina does not have a specific regulatory agency dedicated to AI matters. However, the Agency for Access to Public Information, through the National Directorate for the Protection of Personal Data, as the local data protection authority, may play a significant role when the use of AI technologies and algorithms involves the processing of personal data, by enforcing data protection regulations.

In addition, the Undersecretariat for Consumer Protection may also address AI matters in B2C relationships within the provision of AI products and services.

The bodies discussed in 5.1 Regulatory Agencies have not issued a specific definition of AI. However, within the Recommendations for a Reliable Use of Artificial Intelligence, the Undersecretariat has adopted the definitions issued by the OECD and the United Nations.

According to the OECD’s Expert Group on AI, artificial intelligence is:

“a machine-based system capable of influencing the environment by producing an outcome (predictions, recommendations or decisions) for a specific set of objectives. It utilizes data and inputs from machines and/or humans to (i) perceive real and/or virtual environments; (ii) abstract these perceptions into models through automated analysis (e.g., machine learning) or manually; and (iii) use model inference to formulate options for outcomes. AI systems are designed to operate at different levels of autonomy”.

Additionally, the United Nations defines artificial intelligence as “the ability of a computer or computer-enabled robotic system to process information and produce results similar to human thought processes in learning, decision-making, and problem-solving”.

The Agency for Access to Public Information aims to enforce data protection regulations in Argentina and is hence responsible for:

  • ensuring that AI systems comply with the principles on data protection – ie, legality, minimisation, purpose, conservation, information, security and confidentiality, among others; and
  • supervising the use of AI so that it does not result in privacy violations, identity theft, discrimination or data breaches.

The Undersecretariat for Consumer Protection seeks to guarantee compliance with the Consumer Protection Act by:

  • ensuring that consumers are treated with dignity and provided with certain, clear and detailed information; and
  • supervising the adequate provision of AI products and services within B2C relationships.

No enforcement or other regulatory actions related to AI have taken place in Argentina.

By means of Provision 2/2023, the Undersecretariat has established standards related to AI in order to guarantee its ethical use by all sectors of society, including the principles of security, non-discrimination, sustainability, privacy, data protection, human supervision, transparency, explainability, responsibility, accountability, education and governance.

The international background that inspired the recommendations issued by the Undersecretariat includes the following:

  • The “Recommendation on the Ethics of Artificial Intelligence” of the United Nations Educational, Scientific and Cultural Organization (UNESCO), to which Argentina has adhered.
  • The Asilomar AI Principles, which state that AI should be managed based on three axes: research problems, ethics and values, and long-term problems.
  • The OECD Principles on AI, to which Argentina has subscribed, and which aim to guide governments, organisations and individuals in the design and management of AI systems, so that the interests of individuals are prioritised.

Furthermore, the Agency for Access to Public Information has issued, together with Uruguay’s data protection authority, a set of Guidelines for data protection impact assessments, which has been recently recognised as a relevant standard by local courts.

The government of Argentina has tried to implement AI programs for security purposes. However, some of the programs have been challenged before the courts, as citizens believe that constitutional rights may be at risk. The government has also deployed AI programs to provide banks and other financial institutions with ID cross-checking services.

The Buenos Aires city government has implemented an AI chatbot system (known as “Boti”) to aid citizens with daily inquiries and requests.

The Buenos Aires city government has launched its “Artificial Intelligence Plan”, which encourages the use of AI in the public and productive sectors, promoting strategic projects. In addition, it seeks to mitigate risks and promote benefits in the labour market, with continuous training initiatives and a focus on social security and gender inclusion.

The Ministry of Justice of Buenos Aires has established the “National Comprehensive Programme for Artificial Intelligence in Justice”. The programme seeks to:

  • harness emerging technologies, such as artificial intelligence;
  • expedite judicial processes;
  • improve access to justice; and
  • ensure transparent and efficient management.

Please see 4.1 Judicial Decisions, specifically the section on Observatorio de Derecho Informático Argentino (ODIA) and Others Against the Government of Buenos Aires.

AI has been used in the field of national security for a variety of purposes:

  • Surveillance and monitoring – AI systems are employed for monitoring borders, ports, airports and other sensitive areas.
  • Cybersecurity – AI systems are used to detect and prevent cyber-attacks, identify patterns of malicious activity in networks and protect critical infrastructure against cyber threats.
  • Biometric identification – facial recognition systems and other biometric technologies are used to identify suspicious people or those who are wanted by the authorities in national security databases.
  • Crime prediction and prevention – AI is used to predict the occurrence of crime and allocate resources more efficiently in crime prevention; this includes the identification of crime patterns in historical data and the allocation of police patrols.
  • Intelligence data analysis – AI is used to analyse large volumes of intelligence data, including open-source information and data collected by intelligence agencies; this helps in identifying potential threats and in making strategic decisions.

Generative AI poses a number of challenges, especially with regard to the creation of synthetic content, such as text, images and videos:

  • Misinformation – the ability to generate compelling content can be used to create and spread misinformation.
  • Copyright violations – automated content generation may violate copyrights by reproducing protected works.
  • Violation of the privacy of individuals – automated content generation may create images or disseminate data of individuals without their consent.
  • Creation of inappropriate content – generative AI can be used to create inappropriate content, such as fake pornographic images or hate speech. In this regard, the Olympia Act (an amendment to the Protection of Women Act) states that the diffusion of synthetic erotic content without prior consent constitutes a form of digital violence. In addition, the Criminal Code punishes any form of production, commercialisation, distribution and possession of sexual content involving minors.

The input assets used in the AI process (sometimes known as the “training data”) can be protected through the inclusion of specific provisions within the terms and conditions of AI services.

Output assets in the AI process could eventually be protected under copyright law, once registered before the Copyright Office. However, it should be noted that the Copyright Office has not yet issued specific criteria for the valid registration of works involving AI.

Please see 8.1 Emerging Issues in Generative AI.

The Personal Data Protection Act (PDPA) protects data subjects’ rights of access, rectification, update and deletion of their personal data.

When exercising the right of rectification, the data subject must submit evidence in order to prove that the data is inaccurate, and the data controller shall confirm the alleged inaccuracy. If the information provided by the data subject is not sufficient for the data controller to verify the accuracy of the data, the latter would be relieved from its obligation to rectify the personal data.

Regarding the right to deletion, the PDPA states that this will not apply when the deletion of the data could cause damage to the rights or legitimate interests of third parties, or when there is a legal obligation to retain the data. Also, the right of deletion shall not apply if the data is not false or discriminatory or if the evidence provided by the data subject does not sufficiently support the request for deletion.

AI for Research and Analysis

The main uses of AI in the practice of law centre on legal research and case analysis.

Legal research using AI involves tools that search for legal precedents, case law and doctrine relevant to cases. For example, in Argentina, lawyers have access to a tool called “DoctIA”. This tool uses AI to search for case law from the Supreme Court of Justice free of charge and online. It allows legal professionals to efficiently and accurately find judicial precedents relevant to their cases.

There are examples of AI software that are used to analyse the details of a case and predict possible judicial outcomes. In 2017, the Prosecutor’s Office of Buenos Aires developed “PROMETEA”, a system that applies AI to automatically prepare judicial opinions. In particular, the tool automates repetitive tasks and applies AI to automatically draft legal opinions in cases analogous to those for which multiple judicial precedents already exist.

Ethical Concerns

In relation to the ethical issues related to the use of AI in the practice of law, concerns have been raised regarding lack of transparency and human supervision, which make it difficult to understand how legal decisions are made. Also, the use of AI in the practice of law involves the processing of large amounts of personal data in Argentina, which raises concerns about the privacy and information security of data subjects.

Liability for personal injury or commercial harm arising from AI-enabled technologies can be imposed through extra-contractual civil liability and contractual liability. These theories may require different elements for liability to be imposed, such as the existence of a harm, a causal link between the conduct and the harm, and the foreseeability of the harm.

It is important to note that companies can purchase specific insurance policies to cover potential liability arising from harm caused by the use of AI.

In the context of the supply chain, different participants – such as manufacturers of AI technologies, service providers and software developers – can be held liable. Liability may vary depending on contractual agreements between the parties and insurance policy provisions.

Since AI is not a legal person, liability arising from the acts or omissions of AI technology acting autonomously may be attributed to the company selling or providing AI products or services. This can be based on principles of product liability, negligence or breach of contract that are embodied in the Civil and Commercial Code.

There are two bills of law regarding the imposition of liability arising from the use of AI-derived technologies:

  • Bill of law 4436-D-2023, which amends the Criminal Code to criminalise the production and distribution of sexual images of minors, including those generated by AI systems; and
  • Bill of law 4411-D-2023, which amends the Criminal Code to address digital or telematic violence against women – it proposes to penalise the production and distribution of false or adulterated images, including those generated by AI, with the aim of causing physical, psychological, economic, sexual or moral harm.

Currently, there is no specific legislation in Argentina that directly addresses bias in algorithms. However, bias in algorithms may constitute a violation of fundamental rights, such as equality and non-discrimination, enshrined in the National Constitution and in international treaties ratified by Argentina.

In consumer areas, bias in algorithms can generate significant risks, especially in decisions related to automated customer services.

Although there is no specific regulation in this area, there are non-binding guidelines to address bias in algorithms, such as the inclusion of principles of transparency, fairness and non-discrimination.

AI presents a number of risks to the protection of personal data:

  • AI systems may lack transparency, making it difficult for data subjects to understand how their personal data is used and what it will be used for;
  • AI tools may allow wider and automated access to data subjects’ information;
  • automated processing of personal data by AI systems may increase the risk of security breaches or unauthorised access; and
  • data may be perpetually stored as long as it is useful for the AI system.

AI access to personal data does, however, also promise potential benefits:

  • AI enables the personalisation of products and services according to individual user needs, which improves user experience and increases operational efficiency of businesses; and
  • access to large data sets enables innovation and the development of new AI technologies and applications that can benefit society in areas such as health, education and transport.

The main issues arising from the processing of personal data and machine-generated data without direct human supervision are that there may be errors in the patterns or data used to analyse the data and, therefore, in the outcome itself.

This may lead to biased decisions, lack of accountability and potential discrimination. Hence, human oversight is crucial to address these challenges.

Facial recognition and biometric information are considered sensitive data under the PDPA and complementary provisions. Hence, those provisions will apply.

As a general rule, in order to lawfully collect said data and process it, data controllers should require the express, prior and informed consent of data subjects. The lack of consent may result in the data controller’s liability.

The Agency for Access to Public Information has issued specific guidelines on video surveillance. For these specific cases, data controllers are not obliged to collect prior express consent to collect this type of data. However, data controllers must inform data subjects that they are being recorded by using a specific sign, and register the database with the National Database Registry.

The PDPA specifically regulates the use of automated decision-making within judicial rulings or governmental acts that involve the assessment of human behaviour.

Under Resolution 4/2019, the Agency for Access to Public Information has issued a set of guidelines on the use of personal data. Regarding automated decisions, it establishes that in the event that a data controller makes decisions based solely on automated processing that produce detrimental legal effects on the data subject or negatively affect them in some way, the data subject shall have the right to request from the controller an explanation of the logic applied in that decision.

Also, the PDPA provides that the data subject has the right not to be subject to a decision based solely or partly on the processing of data by automated means, including profiling and inference, which produces pernicious legal effects, significantly adversely affects him or her, or has discriminatory effects. Partially automated or semi-automated decisions are those in which there is no significant human intervention. The data subject has the right to request a review by a human person of decisions taken on the basis of the processing which affect his or her interests, including decisions aimed at defining his or her personal, professional, consumer, credit or other qualities.

In Argentina, there is no specific regulation of the use of chatbots and other technologies to replace services provided by natural persons. There is also no specific regulation requiring mandatory disclosure of such chatbots in Argentina. However, the following legislation is applicable:

  • the Argentine Constitution;
  • the Civil and Commercial Code;
  • the PDPA;
  • the Labour Act;
  • the Consumer Protection Act; and
  • the Criminal Code.

Price-setting practices usually involve web-scraping, as companies use sophisticated AI systems to monitor the prices set by their competitors and act upon that information. Although there is no specific regulation, the Agency for Access to Public Information has issued a guideline with other data protection authorities, highlighting the risks of web-scraping practices. Hence, companies could be held liable for engaging in anti-competitive conduct by using AI.

The use of AI may create risks in transactional contracts between customers and AI suppliers where the AI products promise accuracy and completeness. AI suppliers should ensure that all AI products are subject to human supervision and should address the fact that outcomes may be biased or limited to the database entries on which the AI relies.

AI may be used as part of a hiring process. Although patterns established by algorithms have the specific aim to facilitate and accelerate different procedures, there are possible risks associated with discrimination in this area. Section 17 of the Labour Act prohibits any type of discrimination among workers based on sex, race, nationality, religious beliefs, political affiliation, union membership or age. In this sense, if the outcome is based on discriminating data, employers may be held liable.

The Labour Act provides that the person who has been discriminated against may request the cessation of the discriminatory conduct, or may initiate civil or administrative proceedings. Although candidates for a job are not technically employees, they can still initiate proceedings under Section 24.

Although less often used, AI could be a valuable part of termination practices in big companies conducting mass lay-offs, where a case-by-case analysis for every individual is very difficult to achieve. For cases of mass lay-offs, Section 247 of the Labour Act provides a specific, objective procedure. This pattern could be introduced into an AI tool, and the outcome would not be considered discriminatory.

AI may be used to measure productivity and optimise processes. However, if software is deployed to constantly measure and optimise productivity, it may generate additional pressure on employees and affect their wellbeing.

In this sense, under local dispositions, constant monitoring (for example, by video-recording or software) is prohibited. Indeed, Section 16 of the Remote Work Act specifically prohibits the use of monitoring software.

Limits on the constant monitoring of remote workers benefit them, as these limits protect their privacy and reduce the risk of lay-offs for arbitrary reasons. However, because employers have limited ability to monitor their employees remotely, such positions may become scarce.

Under the Consumer Protection Act, all information provided to consumers will be binding within the B2C relationship. As a result, if a company uses AI to communicate with consumers or channel requests from them, all information given by the AI will be binding on the company.

Financial services companies usually use AI to provide banking-platforms and credit and loan facilities, as it helps them in profiling users and optimising and enhancing their services. Also, companies use AI to analyse investment portfolios according to historical data and market conditions.

There is no specific regulation on the use of AI for financial services. However, if a company uses AI to render a service, the AI will be considered a tool. As a result, financial services companies could be held liable for any misuse or tort resulting from the use of AI tools, if it were to affect the regular provision of the service.

Some of the risks involved in the use of AI in financial services include fraud, identity theft, and errors in insurance, loans and credit grants.

Biased repurposed data may result in discriminatory practices. However, provided that the company subjects AI decision processes to human monitoring, risks should be minimised.

In Argentina, AI systems used in healthcare are considered medical devices, and therefore, the National Administration of Drugs, Food, and Medical Technology (ANMAT) is responsible for regulating these systems. ANMAT has issued a guide for the industry on Software as a Medical Device (SaMD) and Machine Learning Medical Devices (MLMD).

Indeed, the ANMAT approved ENTELAI PIC, a tool developed in Argentina for diagnosing COVID-19 through the analysis of chest X-rays, with the description “Software for automated processing of medical images”. Additionally, ANMAT is involved in Artificial Intelligence Medical Devices (AIMD), a working group within the International Medical Device Regulators Forum (IMDRF) with a focus on managing medical devices based on AI.

The potential risks of the use of AI in the healthcare space include the misdiagnosis of patients and errors being introduced in input data or processing parameters. For this reason, healthcare professionals should review and analyse results given by AI systems in order to demonstrate their diligent conduct. Lack of such monitoring could result in professional liability.

Data Protection

In terms of data processing, the PDPA is the main legislation on personal data protection issued in Argentina, in addition to the Regulatory Decree 1558/2001 and the regulations issued by the local data protection authority, the Agency of Access to Public Information (AAIP). Health data is considered as sensitive data. The sensitive nature of health data is reinforced by the protection afforded to the patient under Law No 26,529 on the Rights of Patients in their Relationship With Health Professionals and Institutions (“Law 26,529”).

However, if the data in question does not identify a specific data subject, it is not considered to be personal data under the PDPA.

Under local law, the concept of health data is broad and includes much more than explicit health data. Health data begins with the information about the individual collected when they enter or consult the health system and all the data generated during the provision of such assistance. Undoubtedly, a patient’s name and surname and their attendance at a certain doctor or clinic, due to their specialty, may reveal health data.

Medical records

The medical record is the document that stores the entirety of the patient’s health data. The controllers of this data are the doctor who creates and updates it with each visit, consultation or clinical event, and the hospital or clinic where the doctor works. Everything that appears in the medical record must be considered health data. Section 12 of Law 26,529 defines the medical record as “the obligatory chronological, foliated and complete document in which all actions performed on the patient by healthcare professionals and assistants are recorded”. This document is subject to confidentiality and privacy obligations under Section 2, paragraphs c and d of Law 26,529 and Section 2 of Regulatory Decree 1089/2012.

Genetic data

Finally, it is also important to consider genetic data as health data. In this sense, Resolution 255/2023 issued by the AAIP determines that genetic data is data related to the inherited or acquired genetic characteristics of a human person that provide information about their physiology or health.

Genetic data is considered sensitive personal data, if it univocally identifies a natural person and/or if it may be used to derive or disclose information that is related to the health or physiology of the data subject or whose use may be potentially discriminatory to the data subject. When genetic data is considered sensitive personal data, higher levels of security, confidentiality, access restrictions, use and circulation must be implemented in its treatment. There is no specific further regulation on the use of AI to process health data.

There are no regulations governing the use of AI in autonomous vehicles. The Argentine government had initially considered the regulation of autonomous vehicles in a bill of law, but the bill lost parliamentary status.

Due to the lack of a specific legal framework, liability will be determined by the general provisions of the Civil and Commercial Code. In this sense, the guardian (owner) of the car will be strictly liable in the event of an accident or incident involving an autonomous vehicle.

In terms of data privacy and security, vehicle companies will only be able to collect data from data subjects provided that (i) they express their prior and informed consent, (ii) the company provides full information and transparency on how the data is collected and the purposes for said collection, and (iii) subjects give their further consent to data being disclosed to third parties.

There is no specific regulation regarding AI in manufacturing. In terms of product quality control and safety, companies will be liable for any malfunctioning of the product, or harm caused in the regular use of said product.

In terms of workforce impact, data privacy and security, AI cannot be used to monitor employees when manufacturing goods – eg, companies cannot constantly monitor employees using AI to check how many breaks they take.

There is no specific regulation regarding the use of AI in professional services. As a result, AI will be considered a tool when rendering professional services.

Under the Civil and Commercial Code, professionals have an obligation of means – ie, they must commit to making a diligent effort and display competence to fulfil a task or goal, without guaranteeing the outcome.

In this sense, when AI tools are used for professional purposes, the professional using them will be potentially liable for negligence if they do so knowing that the outcome may be inaccurate or incomplete, and they do not review or control that outcome.

There are no judicial or agency decisions in Argentina relating to whether AI technology can be an inventor or co-inventor for patent purposes, or an author or co-author for copyright and moral rights purposes.

According to the Confidentiality Law No 24,766 (which incorporated the principles set forth by Section 39 of the TRIPS Agreement), confidential information (including AI technologies and data) shall not be disclosed in a way that is contrary to fair commercial practices, as long as such information: (i) is not generally known or easily accessible, (ii) has commercial value by being secret, and (iii) is subject to reasonable measures to keep it secret.

In this respect, the Confidentiality Law considers as contrary to fair commercial practices the following acts: breach of contract, abuse of trust, instigation to commit an infringement and negligence in the acquisition of undisclosed information.

There are no developments in the scope of intellectual property protection for works of art and works of authorship generated by AI in Argentina.

There are no regulations or judicial decisions on the intellectual property issues related to the use of AI in Argentina. Nevertheless, the distinction between “responsibility” and “execution” made by Provision 2/2023 of the Undersecretariat should be noted. This distinction provides that, since AI does not have consciousness, AI systems, even though able to execute actions, lack true decision-making power; therefore these decisions, and the responsibility for them, must necessarily lie with the person in charge.

Boards of directors should address the following:

  • ethical aspects in design and data modelling;
  • how the ethical design of AI models is validated;
  • how an appropriate level of information security is established;
  • what aspects should be considered when establishing traceability;
  • what aspects should be considered for systems to be auditable;
  • how monitoring can be carried out and what should be monitored;
  • what general aspects should be considered regarding the existence of ethical incidents; and
  • what precautions from an ethical point of view are advisable to consider for the control of internal users.

It is not clear what would be considered best practice, but based on recent experience, companies should ensure compliance with data protection and consumer protection laws to avoid any investigations or claims.

Allende & Brea

Maipu 1300, 11th floor
C1006ACT
Buenos Aires
Argentina

+54 11 4318 9986

+54 11 4318 999

ppalazzi@allende.com www.allende.com.ar
