Artificial Intelligence 2025

Last Updated May 22, 2025

Contributed By Dannemann Siemsen

Law and Practice

Dannemann Siemsen is recognised as a leading firm in Latin America, with a team of experts that has been dedicated to the protection of intellectual property (IP) since 1900. The firm delivers first-rate IP services and litigation across all industry sectors, while also assisting IP firms from various regions. With the largest patent team in Brazil, Dannemann Siemsen brings together specialists from diverse technical backgrounds, enabling the development of comprehensive strategies to safeguard clients’ innovations. The firm’s trade mark department leverages advanced technology to manage cases from clearance searches to appeals, combining strategic thinking with robust portfolio management and monitoring structures – including in the digital environment. Dannemann Siemsen proudly celebrates 125 years of experience founded on knowledge, respect, ethical values, diversity and a continuous commitment to staying ahead of future developments.

Brazil does not yet have a general law regulating the use and development of artificial intelligence (AI). In December 2024, the Federal Senate approved Bill No 2,338/2023, which seeks to fill this legislative gap. The Bill is currently under review by the House of Representatives and is expected to be enacted – possibly with amendments – sometime in 2025.

In the meantime, the development and deployment of AI in Brazil are governed by existing legal frameworks, including the Civil Code, the Consumer Protection Code, the General Data Protection Law (Lei Geral de Proteção de Dados Pessoais; LGPD) and the Copyright Law, among others.

AI technologies are being applied across a wide range of sectors, including healthcare, finance, manufacturing and the legal field, among others.

In agriculture, predictive AI is being used to assist farmers in forecasting weather patterns, optimising irrigation and managing crop yields. Machine learning models analyse data from sources such as soil composition analyses, satellite imagery and historical weather records to generate actionable insights. Generative AI, on the other hand, is employed to create synthetic data for training models and to simulate crop growth under varying conditions. These technologies contribute to increased agricultural productivity, reduced environmental impact and improved resource management.

In the retail and e-commerce sector, predictive AI plays a key role in optimising inventory management, forecasting demand and enabling personalised marketing strategies. By analysing consumer behaviour, machine learning algorithms can anticipate purchasing trends with greater accuracy. Generative AI is also making an impact by producing product descriptions, designing marketing content and creating virtual try-on experiences for customers. As a result, retailers benefit from enhanced sales performance, lower operational costs and a more personalised shopping experience.

Federal and state governments in Brazil have taken an active role in promoting the adoption and advancement of AI for industrial use.

At the federal level, initiatives such as the Brazilian Artificial Intelligence Plan (Plano Brasileiro de Inteligência Artificial; PBIA 2024–28) demonstrate a strong commitment to AI innovation. The plan allocates approximately BRL23 billion (around USD4 billion) over four years to support measures including infrastructure development, capacity building, business innovation and regulatory improvement.

The Brazilian Development Bank (Banco Nacional de Desenvolvimento Econômico e Social; BNDES) has also allocated funding to support AI research and development. For instance, the BNDES Garagem programme provides financial support to start-ups developing innovative AI solutions.

Efforts to attract talent have likewise intensified. The federal government has introduced programmes to draw in skilled AI professionals from abroad, including tax incentives and streamlined visa processes. Initiatives such as the Science Without Borders (Ciência sem Fronteiras) programme have been expanded to cover AI-related fields, fostering international collaboration and knowledge exchange.

Several Brazilian states have launched their own AI strategies to complement federal initiatives. For example, São Paulo, Rio de Janeiro, Pernambuco and Minas Gerais are investing in AI research hubs and innovation centres to strengthen local industries.

Incentives for industry include not only direct investment and tax relief but also regulatory sandboxes established by agencies such as the National Data Protection Authority (Autoridade Nacional de Proteção de Dados; ANPD). These sandboxes enable companies to test innovative AI applications within a more flexible regulatory framework, thereby encouraging experimentation and accelerating market adoption.

In 2022, amidst ongoing regulatory developments in Europe, the Brazilian Federal Senate established a Commission of Legal Experts to support the drafting of a substitute bill on AI. The commission was tasked with formulating an alternative legislative proposal that consolidated the AI-related bills then under consideration in the National Congress (Bill Nos 5,051/2019, 21/2020 and 872/2021).

The commission held public hearings with the participation of over 50 experts, including representatives from government, industry, civil society and academia. In May 2023, the commission finalised its report, which formed the basis for Bill No 2,338/2023, Brazil’s proposed general framework for AI regulation. The Bill was approved by the Senate on 10 December 2024 and is currently under review by the House of Representatives.

Bill No 2,338/2023 aims to establish a general legal framework for the development, deployment and use of AI in Brazil. It sets out principles such as transparency, accountability, safety, human oversight and non-discrimination, and introduces risk-based classifications for AI systems – ranging from minimal to unacceptable risk. It outlines obligations for developers and operators according to the level of risk involved and mandates impact assessments for high-risk systems. The Bill also proposes the creation of a national authority to oversee AI governance and enforcement. Its overarching goal is to foster innovation while safeguarding fundamental rights and ensuring ethical, responsible AI use across sectors.

In Brazil, while comprehensive AI-specific legislation is still under development, several government bodies have issued non-binding guidelines, recommendations and frameworks to guide the ethical and responsible use of AI. These directives aim to address the challenges and opportunities posed by AI technologies, laying the groundwork for future regulatory efforts. Two relevant examples follow.

Ethical Guidelines for AI in Public Administration

Issuing body

The Brazilian Federal Court of Accounts (Tribunal de Contas da União; TCU) is the issuing body.

Scope

In July 2024, the TCU issued the Ethical Guidelines for AI in Public Administration, intended as a non-binding framework for public agencies. These guidelines emphasise transparency, accountability and fairness in AI-driven decision-making processes. They apply to AI systems used in areas such as public service delivery, law enforcement and resource allocation.

Objectives

The guidelines seek to ensure that AI systems employed in public administration are transparent and accountable; to prevent bias and discrimination in AI-based decisions; and to promote the ethical use of AI, thereby enhancing public trust in government institutions.

Guidelines for AI in Education

Issuing body

The Ministry of Education (Ministério da Educação; MEC) is the issuing body.

Scope

Also in July 2024, and under the framework of the PBIA, the MEC published recommendations for the use of AI in education. These focus particularly on personalised learning, student assessment and administrative processes. The guidelines stress the importance of protecting student privacy, ensuring equitable access to AI technologies and embedding ethical considerations into their use.

Objectives

The MEC aims to promote the use of AI to improve learning outcomes and personalise education while ensuring ethical and responsible deployment. Recognising that effective use of AI depends on the skills of educators, the guidelines recommend that teacher training programmes incorporate digital literacy and AI-related competencies. These recommendations align with broader educational policies under Brazil’s National Education Plan (Plano Nacional da Educação; PNE) and the Brazilian Digital Transformation Strategy (E-Digital). A central goal is to ensure that AI contributes to reducing educational inequalities – particularly between urban and rural areas – and to promoting inclusive, high-quality education for all students.

Enacted in 2018, the LGPD now plays a central role in regulating AI-related data processing in Brazil. In response to rapid technological advancements, the ANPD has been actively issuing guidance and launching public consultations on the intersection between AI and data protection.

For example, the ANPD opened a consultation on how AI systems process personal data, with the aim of enhancing transparency, safeguarding sensitive data (particularly that of minors) and ensuring that users are able to effectively exercise their rights. These non-binding recommendations serve to guide organisations towards practices that both uphold individual rights and promote responsible AI innovation.

The ANPD has also issued further guidance on matters such as transparency in privacy policies and the importance of user-friendly opt-out mechanisms when personal data is used for AI training purposes.

On the copyright front, Brazil is revisiting its long-standing legal framework to better balance intellectual property (IP) rights with the need to foster AI research and development. Currently, the Brazilian Copyright Law (Law No 9,610/1998) does not explicitly address text and data mining (TDM) for research or AI training purposes. However, recent policy discussions – including public hearings held by a Senate commission – have led to proposals for the regulation of a TDM exception.

Bill No 2,338/2023 introduces a specific provision that would exempt certain TDM activities from infringement claims, provided they are carried out by research institutions or entities acting in the public interest. However, the current wording of the Bill has drawn significant criticism for potentially hindering AI training by private or for-profit organisations. This is because the exception does not extend to commercial actors, thereby excluding a crucial segment of the innovation ecosystem that depends on large-scale data processing to develop competitive AI solutions. Discussions on the TDM exception and its scope are still ongoing in Congress, and the final wording of the provision remains subject to change.

Bill No 2,338/2023 represents Brazil’s most comprehensive legislative effort to regulate AI. As mentioned in 3.1 General Approach to AI-Specific Legislation, it adopts a risk-based approach to AI governance, inspired in part by the EU AI Act, and classifies AI systems into different levels of risk – limited, high and prohibited – each with corresponding obligations. High-risk AI systems, for example, are subject to transparency, oversight and impact assessment requirements, particularly where their use may significantly affect fundamental rights or public safety.

The Bill seeks to prevent harms to fundamental rights while aiming to foster an innovation-friendly environment that supports the development and deployment of both predictive and generative AI systems. It introduces general principles for AI development and use in Brazil, including transparency, accountability, safety, fairness, non-discrimination and human oversight.

To strengthen individual protection, the Bill guarantees a set of data subject rights, including the right to:

  • access information about how AI systems affect individuals;
  • data protection in accordance with the LGPD;
  • explanation of automated decisions;
  • challenge decisions made by AI; and
  • human review of significant automated outcomes.

These provisions are particularly relevant in contexts involving public administration, law enforcement, finance and health.

The Bill also addresses systemic risk, which may arise from general-purpose or generative AI systems, and provides for special safeguards in this regard.

In terms of legal infrastructure, the Bill proposes the creation of a national authority for AI governance, with the power to issue regulations, oversee compliance and apply sanctions. It also sets out obligations for different actors in the AI supply chain.

One of the Bill’s most debated elements is the proposed exception for TDM, as explained in 3.6 Data, Information or Content Laws. The provision would allow certain TDM activities without prior authorisation from rights holders, but only when conducted by public interest entities such as research institutions. While this exception is intended to facilitate scientific progress and innovation, it has been criticised for excluding private or for-profit organisations – a limitation that could restrict the development of generative AI models, which rely on large-scale datasets. Debates over the scope of this exception are ongoing in Congress, and the final text may still undergo amendments.

Brazil’s regulatory process has been marked by multi-stakeholder engagement, with contributions from academia, the private sector, civil society and government institutions. This participatory model is seen as a way to ensure that the law remains adaptable and aligned with international standards, while also addressing the country’s specific social and economic context.

Brazilian courts have begun to address issues at the intersection of AI, IP and data protection, even as dedicated AI litigation remains in its early stages. Some key judicial pronouncements and interpretative statements have set initial benchmarks for how existing legal frameworks apply to AI technologies.

The Brazilian Superior Court of Justice (Superior Tribunal de Justiça; STJ) has adopted an expansive interpretation of the copyright limitations and exceptions provided in the Copyright Law, considering fundamental rights and the social function of property as established in the Federal Constitution. This approach was further reinforced by Interpretative Statement 115, issued by the Federal Justice Council, which emphasises that the limitations of copyright must be applied flexibly to ensure that the social function of copyright is not undermined by overly restrictive protection. These interpretations support the use of TDM for research and AI training – especially for generative AI systems – without unduly harming the economic interests of copyright holders. This understanding may eventually be overruled by the new TDM rules proposed under Bill No 2,338/2023 (see 3.6 Data, Information or Content Laws).

In Brazil, the ANPD, established under the LGPD, serves as the primary authority overseeing personal data protection. With the introduction of Bill No 2,338/2023, the ANPD is proposed to assume a central role in the regulation and supervision of AI systems, particularly concerning data protection and privacy matters. This expanded mandate includes overseeing how AI systems process personal data, ensuring transparency, upholding user rights and issuing guidelines to steer industry practices.

The proposed legislation also envisions the creation of the National System for AI Governance and Regulation (Sistema Nacional de Regulação e Governança da Inteligência Artificial; SIA), a co-ordinated system involving multiple existing agencies contributing their expertise. This system is designed to ensure that AI systems deployed across various sectors comply with existing laws while promoting a harmonised national approach to AI governance. According to the Bill, the ANPD would co-ordinate the SIA.

It is important to note that the Bill is still under consideration, and its provisions may evolve as it progresses through the legislative process. Stakeholders are encouraged to stay informed about developments to understand the final structure and roles defined within Brazil’s AI regulatory framework.

In Brazil, several regulatory agencies and governmental bodies have issued non-binding directives to guide the ethical and responsible use of AI. While these guidelines do not carry the force of law, they play a significant role in shaping industry practices and informing future legislation. In addition to the ANPD, notable examples include the following.

Brazilian Health Regulatory Agency (Agência Nacional de Vigilância Sanitária; ANVISA)

Scope

ANVISA has established specific requirements for approving software as a medical device (SaMD), which encompasses certain AI-driven healthcare solutions. In 2022, ANVISA issued Board Resolutions No 657/2022 and No 751/2022, defining SaMD as products or applications intended for medical indications that function independently of hardware medical devices. These regulations outline criteria for risk classification and the necessary evaluation processes for safety and efficacy. AI systems, especially predictive tools like diagnostic algorithms, must undergo rigorous clinical validation and performance testing. The guidelines also emphasise ongoing monitoring and risk management throughout the product’s life cycle.

Objectives

Objectives include the following:

  • ensure that AI technologies used in healthcare meet high standards of safety and efficacy;
  • promote the adoption of AI technologies to improve healthcare outcomes and reduce costs;
  • encourage the development and adoption of AI technologies that can enhance healthcare delivery while aligning with international standards; and
  • provide a consistent framework that facilitates regulatory assessments and approvals, helping to streamline the innovation process.

Central Bank of Brazil (Banco Central do Brasil; BACEN) and Brazilian Securities and Exchange Commission (Comissão de Valores Mobiliários; CVM)

Scope

BACEN and CVM have been proactive in fostering innovation within the financial sector, particularly concerning fintech developments. BACEN has established regulatory frameworks for new banking models, such as direct credit companies and peer-to-peer lending companies, and has introduced rules for foreign investment in fintech. These initiatives demonstrate a commitment to facilitating growth and innovation in the financial sector. While neither regulator has yet issued specific non-binding guidelines for AI in financial services – including credit scoring, fraud detection and investment advisory – the regulatory environment indicates an openness to integrating AI technologies, provided they adhere to principles of transparency, fairness and risk management.

Objectives

Objectives include the following:

  • ensure that AI systems used in financial services are transparent and explainable;
  • prevent bias and discrimination in credit scoring and lending decisions;
  • enhance the security and reliability of AI-driven financial systems; and
  • create a regulatory environment that supports innovation while safeguarding consumer interests.

In Brazil, the enforcement landscape concerning AI is evolving alongside broader digital and data protection frameworks.

In mid-2024, the ANPD initiated proceedings against social media platforms regarding the processing of personal data for training their generative AI models. These actions were based on concerns that the platforms’ reliance on “legitimate interest” as a legal basis did not sufficiently safeguard user rights – particularly in relation to transparency, clear user notification and accessible opt-out mechanisms. Following the submission of a compliance plan by one platform, the ANPD authorised most data processing activities, with the exception of those involving the personal data of minors. The compliance plan included measures such as issuing clear notifications to users about data processing for AI training, updating privacy policies with specific information, informing users of their right to object and refraining from processing the personal data of minors.

No fine was issued, though the ANPD is empowered to impose fines of up to 2% of a company’s revenue in Brazil, capped at BRL50 million per violation, for non-compliance with the LGPD.
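
By way of illustration only, the relationship between the percentage-based fine and the statutory cap can be expressed as a simple calculation. The Python sketch below uses hypothetical revenue figures; it is not a statement of how the ANPD actually doses fines, which also depend on factors such as the severity of the infraction:

```python
def lgpd_max_fine(annual_revenue_brl: float) -> float:
    """Upper bound of an LGPD fine for a single violation:
    2% of the company's revenue in Brazil, capped at BRL50 million."""
    PERCENTAGE = 0.02
    CAP_BRL = 50_000_000.00
    return min(annual_revenue_brl * PERCENTAGE, CAP_BRL)

# Hypothetical figures: the cap binds once Brazilian revenue exceeds BRL2.5 billion.
print(lgpd_max_fine(1_000_000_000))  # 20000000.0 (2% of BRL1 billion)
print(lgpd_max_fine(5_000_000_000))  # 50000000.0 (capped at BRL50 million)
```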

If and when the Bill is enacted, non-compliance with the provisions of Bill No 2,338/2023 – Brazil’s proposed AI Act – could lead to remedial orders, including the suspension of services or mandatory system adjustments, in addition to financial penalties similar to those under the LGPD. These measures aim to prevent harms such as discrimination, privacy breaches and inadequate accountability in automated decision-making (ADM) processes, while also fostering innovation through the establishment of clearer compliance standards.

The Brazilian National Standards Organization (Associação Brasileira de Normas Técnicas; ABNT) develops technical standards known as Brazilian Standards (Normas Brasileiras; NBRs) to harmonise the quality, security and safety of AI systems. As Brazil’s exclusive representative in international bodies such as the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), the ABNT plays a crucial role in aligning Brazilian standards with global best practices. These technical guidelines cover aspects such as system reliability, interoperability, risk assessment and ethical design, providing an important reference for industries implementing AI.

Further, the Brazilian Observatory for Artificial Intelligence (OBIA) is part of the PBIA. The OBIA aims to monitor AI usage in the country and supports regulatory and governance processes related to AI.

For companies operating in Brazil, adherence to international frameworks such as ISO/IEC standards or the NIST AI Risk Management Framework enhances consumer confidence and facilitates cross-border transactions by offering a shared technical language and recognised benchmarks for quality. The ABNT acts as a conduit between international standards and local industry practices, ensuring that companies can effectively integrate global best practices into domestic operations.

The OECD AI Principles, which emphasise fairness, accountability and transparency, are also highly relevant. While these principles broadly align with Brazilian legal and ethical values, Brazil’s specific cultural and social contexts may necessitate tailored approaches. For instance, concerns such as algorithmic bias and discrimination may carry particular weight in Brazil due to its diverse population and historical social inequalities. Companies must therefore adapt international standards to reflect these local nuances and realities.

In sectors such as healthcare and finance, Brazilian regulators and industry bodies – such as ANVISA and the Brazilian Federation of Banks (Federação Brasileira de Bancos; Febraban) – have developed sector-specific requirements that may not fully align with international standards. For example, AI-driven medical devices must comply with ANVISA’s stringent approval procedures, which may exceed international norms. Likewise, financial institutions are subject to guidelines issued by BACEN, which may impose additional restrictions on the use of AI in areas such as credit scoring and fraud detection.

Brazilian public authorities at national, state and local levels are increasingly deploying AI to optimise service delivery and administrative processes. AI tools are employed for data analytics, process automation and decision support in areas such as tax administration, public health management and social welfare programmes. These systems enhance transparency, enable predictive maintenance of public infrastructure and facilitate citizen interactions through AI-powered chatbots. For administrative law purposes, AI assists in monitoring regulatory compliance and managing documentation workflows, thereby streamlining internal processes.

Law enforcement and judicial bodies have begun integrating AI technologies into criminal investigations and judicial processes. AI is utilised for forensic analysis, predictive policing and managing case backlogs. Government agencies use facial recognition and biometric systems for various purposes, including border control, law enforcement and identity verification in public services. In some instances, authorities deploy facial recognition systems to identify suspects in crowded public spaces. However, these tools have raised substantial concerns about potential errors in facial recognition – such as misidentification – which may lead to wrongful detention or discrimination. Moreover, the risk of racial and ethnic bias in algorithmic decision-making has sparked public debate and judicial scrutiny.

AI is finding applications in civil litigation and court administration – for example, by assisting with legal research, automating document review and supporting decision-making in civil disputes. These tools help reduce case processing times and improve the consistency of judicial outcomes. Nonetheless, their use must be balanced against the rights to due process and transparency, ensuring that individuals affected by automated decisions have adequate means to challenge or seek human intervention.

In recruitment, governmental agencies are increasingly adopting AI-driven tools to screen applicants for public positions and contractor roles. These systems analyse large datasets to identify qualified candidates and assess performance metrics, aiming to make the hiring process more efficient. However, such ADM raises issues regarding transparency and potential bias. Ensuring that these tools comply with anti-discrimination principles and the LGPD is critical to prevent inadvertent exclusion of qualified candidates.

The Smart Sampa programme, initiated by São Paulo City Hall, aims to enhance public security through the deployment of facial recognition cameras. In May 2023, the São Paulo Court of Justice (Tribunal de Justiça de São Paulo; TJSP) lifted a suspension on the Smart Sampa bidding process, which had been previously halted due to concerns over privacy violations and potential misuse of collected data. The court’s decision allowed the bidding process to continue.

During Carnival 2025, the Public Defender’s Office of São Paulo requested the suspension of Smart Sampa, arguing that the technology’s use could be discriminatory and inconsistent. It also recommended that the system’s application be limited to genuine situations of risk, supported by relevant justification. Despite these concerns, the mayor decided to maintain the use of Smart Sampa during the festivities.

Civil organisations have expressed concerns regarding the potential for racial bias and privacy infringements associated with the use of facial recognition technology in public spaces. In response, São Paulo’s government has updated the Smart Sampa tender to include measures such as a 90% match threshold for detections, involvement of trained human reviewers and enhanced data protection protocols to comply with the LGPD.
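
As a purely illustrative sketch of how such tender safeguards can be operationalised – not drawn from the actual Smart Sampa specifications – the snippet below discards candidate matches under the 90% similarity threshold and routes the remainder to a human review queue rather than triggering any automated action; all field names and values are assumptions:

```python
from dataclasses import dataclass

MATCH_THRESHOLD = 0.90  # 90% similarity floor, per the updated tender

@dataclass
class CandidateMatch:
    camera_id: str
    person_ref: str   # reference to a watchlist entry (hypothetical field)
    score: float      # similarity score in [0.0, 1.0]

def route_matches(candidates: list[CandidateMatch]) -> list[CandidateMatch]:
    """Keep only matches at or above the threshold and send them to a
    human review queue; no automated action is taken on any match."""
    return [c for c in candidates if c.score >= MATCH_THRESHOLD]

review_queue = route_matches([
    CandidateMatch("cam-01", "entry-123", 0.95),  # queued for human review
    CandidateMatch("cam-02", "entry-456", 0.72),  # discarded: below threshold
])
```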

The implementation of Smart Sampa reflects a broader trend in Latin American cities adopting AI-based surveillance tools to improve public safety. However, these initiatives have sparked debates about the balance between security enhancements and the protection of civil liberties.

The Brazilian Artificial Intelligence Strategy (Estratégia Brasileira de Inteligência Artificial; EBIA), published in 2021 by the Ministry of Science, Technology and Innovation (Ministério da Ciência, Tecnologia e Inovação; MCTI), recognises national defence and public security as strategic sectors that should benefit from the integration of AI technologies. Although the document does not present a detailed implementation roadmap, it outlines key objectives for these areas. Among them are the promotion of research and development of AI applications for national defence purposes and the encouragement of AI adoption in areas such as situational awareness, risk analysis and decision support systems. These objectives reflect an early but clear recognition by Brazilian authorities of the potential role AI can play in enhancing national security capabilities.

The current uses of AI for national security matters in Brazil include border surveillance, cybersecurity and cyber defence, autonomous aerial and ground surveillance (high-capacity drones) and public security operations, among other initiatives.

For IP, see 15. Intellectual Property, and for data protection, see 8.2 Data Protection and Generative AI.

Under Brazil’s LGPD, data subjects have the right to request correction of incomplete, inaccurate or outdated personal data. In the context of generative AI, this right may be invoked when a model generates false or defamatory outputs relating to an individual.

The LGPD also grants the right to data deletion, but this presents challenges in AI systems, especially those based on deep learning. Once personal data is used in training, it is embedded in the model’s parameters and cannot be easily isolated. While regulators do not expect full model deletion, companies may adopt solutions such as machine unlearning or retraining with filtered datasets. The focus is on mitigating harm rather than eliminating the entire model.
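
A minimal sketch of the “retraining with filtered datasets” approach mentioned above: records linked to data subjects who have exercised the right to deletion are removed from the corpus before the next training run, so their data no longer influences the retrained model’s parameters. The record layout and identifiers are hypothetical:

```python
# Hypothetical records: each training example carries the identifier of the
# data subject it was derived from, enabling targeted removal.
training_corpus = [
    {"subject_id": "u-001", "text": "..."},
    {"subject_id": "u-002", "text": "..."},
    {"subject_id": "u-001", "text": "..."},
]

deletion_requests = {"u-001"}  # subjects who exercised the right to deletion

# Filter the corpus, then retrain on the reduced dataset so the deleted
# subject's data is no longer embedded in the model's parameters.
filtered_corpus = [
    r for r in training_corpus if r["subject_id"] not in deletion_requests
]
assert all(r["subject_id"] != "u-001" for r in filtered_corpus)
```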

The LGPD further requires that personal data be used only for specific, legitimate and clearly defined purposes. In AI contexts, organisations must specify in advance how data will be used – whether for training, fine-tuning or generating content. Any deviation may require renewed consent. The principle of data minimisation also applies, mandating that only necessary data be processed. Given AI’s reliance on large datasets, firms must apply anonymisation or pseudonymisation techniques to maintain utility while reducing identifiability.
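
One widely used pseudonymisation technique consistent with these principles replaces direct identifiers with a keyed hash and drops the fields the model does not need. The sketch below is illustrative only – the field names, the use of a CPF number as the identifier and the key-handling arrangement are all assumptions, not a compliance recipe:

```python
import hashlib
import hmac

SECRET_KEY = b"stored-separately-from-the-dataset"  # e.g. held in a key vault

def pseudonymise(record: dict) -> dict:
    """Replace the direct identifier (here, a CPF number) with a keyed hash
    and retain only the fields the model actually needs (minimisation)."""
    token = hmac.new(SECRET_KEY, record["cpf"].encode(), hashlib.sha256).hexdigest()
    # Name, address and other unnecessary fields are deliberately dropped.
    return {"subject_token": token, "purchase_history": record["purchase_history"]}

print(pseudonymise({
    "cpf": "123.456.789-00", "name": "Ana", "address": "Rua X, 100",
    "purchase_history": [42.0, 17.5],
}))
```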

Finally, the LGPD requires organisations to conduct data protection impact assessments (DPIAs) to assess and mitigate risks in data processing activities, including those involving AI systems.

In early 2025, the Federal Council of the Brazilian Bar Association (Ordem dos Advogados do Brasil; OAB) approved guidelines to steer the ethical use of generative AI in legal practice. These recommendations require lawyers to supervise AI-generated outputs, ensure client confidentiality and comply with the principles set out in the OAB Code of Ethics and Discipline.

The National Council of Justice (Conselho Nacional de Justiça; CNJ) has issued resolutions – such as Resolution 332/2020 and Resolution 615/2025 – to promote transparency, accountability and human oversight in the use of AI in judicial processes. While these measures are not binding statutory law, they serve as influential normative frameworks within the legal community.

AI tools are already being used to efficiently search databases containing case law, legislation and legal commentary. Generative AI models and specialised legal research platforms assist with drafting briefs, contracts and legal memoranda, as well as with tracking deadlines, managing caseloads and monitoring changes in judicial processes through integrated platforms.

Brazilian jurisprudence and civil code doctrine have traditionally relied on negligence-based principles. Under this framework, a party may be held liable if it fails to exercise reasonable care in the design, development, deployment or supervision of an AI system. In the context of AI, negligence may arise from a failure to implement adequate human oversight or to intervene when the system generates erroneous or harmful outputs.

Under Brazil’s Consumer Defence Code (Law No 8,078/1990), manufacturers and service providers may be held strictly liable for harm caused by defective products, regardless of fault. AI systems, when offered as products or services to consumers, may fall within this regime if a defect results in personal injury or commercial loss.

To mitigate these risks, parties increasingly resort to contractual risk allocation and specialised insurance policies, in line with global trends.

Bill No 2,338/2023 establishes that civil liability for damage caused by AI systems will be subject to the rules set out in the Civil Code or the Consumer Defence Code, depending on the case. The Bill also provides for a fine of up to BRL50 million (approximately USD9 million) or 2% of the company’s turnover, applicable to developers, suppliers and deployers of AI systems. Other sanctions include suspension or prohibition of the system’s operation.

Bias in algorithms can have serious consequences in consumer-facing sectors where fairness is essential:

  • credit and finance – biased credit scoring models may lead to unfair loan denials or disproportionately high interest rates for certain demographic groups;
  • employment – automated hiring tools and performance evaluation systems can perpetuate discrimination by favouring candidates from specific backgrounds;
  • healthcare and insurance – algorithms used to determine eligibility or premium rates may result in unequal access to essential services, particularly for vulnerable populations; and
  • social services – in areas such as social security or welfare benefits, biased decision-making can unjustly exclude individuals in genuine need of support.

Where biased outcomes cause consumer harm, companies may face legal action or regulatory sanctions – particularly if they failed to implement appropriate measures to detect and mitigate bias. Industry responses, including technical audits, de-biasing tools and collaborative initiatives to establish best practices, are central to managing these risks.

Under Brazil’s LGPD, the processing of biometric data – such as that used in facial recognition systems – generally requires the explicit consent of the data subject. However, the LGPD provides for alternative legal bases that may permit processing without consent, including compliance with a legal obligation or the protection of the life or physical safety of the data subject or a third party.

Liability under existing laws not specific to AI, such as the Consumer Defence Code and the Civil Code, can be triggered if facial recognition systems cause harm or discriminatory outcomes. Companies may be held strictly liable for defects or found negligent in system design and oversight.

Industry-specific risks include substantial fines, reputational damage and potential exclusion from regulated markets if systems fail to meet stringent regulatory requirements. Therefore, companies must invest in robust compliance frameworks, conduct regular audits and implement appropriate risk management strategies to mitigate exposure to sanctions.

Under Brazil’s LGPD, data subjects have the right to request an explanation regarding the criteria and procedures used in ADM systems, particularly those that significantly affect their personal, professional, consumer or credit profile. They are also entitled to request a review of decisions made solely through automated processing of personal data. While the LGPD does not mandate human review in every instance, these provisions aim to promote transparency, safeguard individual rights and help prevent discriminatory or biased outcomes.

Failure to disclose the use of ADM systems may significantly increase an organisation’s exposure to both civil lawsuits – brought by affected individuals – and regulatory enforcement actions. In addition to potential financial penalties, such non-compliance can result in serious reputational damage.

Although Bill No 2,338/2023 currently classifies certain applications – such as chatbots used for customer service – as low risk, it encourages entities to adopt robust internal governance measures and conduct algorithmic impact assessments to ensure non-discrimination and fairness. These assessments support compliance and enable companies to self-regulate by identifying and mitigating risks before they cause harm.

AI systems, particularly those operating as recommendation engines or chatbots, often employ machine learning algorithms that analyse user data to generate tailored suggestions. Technologies that issue undisclosed suggestions can manipulate consumer behaviour by exploiting psychological triggers, such as subliminal cues, behavioural nudges and opaque personalisation. If found to be misleading or infringing on consumer rights, such practices could trigger liability under consumer protection laws or attract antitrust scrutiny, especially if they result in significant harm or distort competition.

Proposals and industry guidelines currently under discussion, reflected in public consultations by the ANPD, consider introducing safe harbour provisions that would allow companies some limited flexibility in non-disclosure, mainly if disclosure would reveal proprietary algorithms or trade secrets. These safe harbours aim to balance transparency with the need to protect IP and innovation, while ensuring that consumers retain the ability to challenge or seek human review of decisions when they are materially affected.

AI systems introduce specific risks relating to the reliability and accuracy of their outputs, particularly in high-stakes or consumer-facing applications. Given the probabilistic nature of machine learning, outputs may be incorrect or misleading. To manage this, contracts should include performance warranties, set out accuracy parameters and establish clear mechanisms for human oversight, output validation and remedial action, where necessary.

A further concern relates to the lack of transparency inherent in many AI systems. This affects not only user trust but also regulatory compliance and internal governance. Contractual arrangements should therefore require the supplier to provide detailed documentation on model logic, decision criteria and system limitations, alongside audit rights or access to meaningful explanations when material decisions are automated.

The use of training data presents another legal risk, particularly where datasets include personal data or third-party copyrighted content. If such data is used without proper authorisation, it may expose the customer to claims under data protection or IP law. Contracts must therefore contain warranties of lawful data use, data provenance assurances and, where appropriate, indemnities covering infringement or regulatory breaches.

AI systems may produce biased or discriminatory outcomes, especially in areas like recruitment, financial services and public services. In such contexts, the parties should allocate responsibility for bias testing, fairness assessments and ongoing monitoring, and require compliance with applicable anti-discrimination legislation. These obligations can be supported by audit clauses and regular compliance reporting.

Regulatory compliance is also critical, given that AI systems may be subject to multiple legal regimes including the LGPD and forthcoming AI-specific legislation, such as Bill No 2,338/2023. To manage this, contracts should impose general compliance obligations, require the supplier to monitor applicable laws and allocate liability for regulatory breaches, including the possibility of fines or administrative sanctions.

Ascribing liability in the context of AI is particularly complex. Where multiple actors contribute to the development, integration or deployment of the system, the contract must provide for clear allocation of responsibility, including limitations of liability and specific indemnities for losses resulting from defective systems, personal data misuse or unlawful outputs.

IP rights over AI-generated content also require careful consideration. The contract should define ownership of the outputs, whether the customer holds exclusive rights and the extent to which the supplier may retain rights over improvements or residual learning. Licensing terms, use restrictions and post-termination provisions should be carefully drafted to avoid ambiguity.

Finally, security is another material concern. AI systems are vulnerable to specific threats, such as adversarial inputs, data poisoning and model inversion attacks. Contracts should include obligations for technical safeguards, compliance with cybersecurity standards and incident notification duties, particularly where personal data or sensitive use cases are involved.
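
As one narrow illustration of such a technical safeguard – aimed at tampering with training data at rest, one possible vector for data poisoning – a contract might require the supplier to verify dataset integrity against a separately stored manifest before each training run. The file names, manifest format and digests below are hypothetical:

```python
import hashlib
from pathlib import Path

# Hypothetical manifest: SHA-256 digests recorded when the dataset was
# approved, stored and access-controlled separately from the data itself.
MANIFEST = {
    "train_part_0.csv": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify_dataset(data_dir: Path) -> None:
    """Refuse to start training if any file's digest deviates from the
    manifest, which may indicate tampering (e.g. data poisoning)."""
    for name, expected in MANIFEST.items():
        digest = hashlib.sha256((data_dir / name).read_bytes()).hexdigest()
        if digest != expected:
            raise RuntimeError(f"Integrity check failed for {name}; aborting run.")

# verify_dataset(Path("datasets/approved"))  # run before each training job
```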

Automated systems have increasingly been adopted by human resources (HR) departments to streamline recruitment and performance evaluation processes. These technologies can significantly reduce the time required for preliminary screening and decision-making, allowing employers to act more swiftly in both hiring and termination procedures. By relying on structured, quantitative data, such systems also aim to introduce greater consistency and predictability into decision-making, reducing the degree of subjectivity traditionally present in human-led evaluations.

However, despite their potential, such systems are not inherently neutral. AI algorithms are only as objective as the data on which they are trained, and there remains a significant risk that historical biases – whether based on gender, race, social background or other factors – may be inadvertently embedded in these tools. Compounding this issue is the limited explainability of many AI systems, particularly those based on complex machine learning models. This lack of transparency can make it difficult for affected individuals to understand the rationale behind their exclusion from a recruitment process or the grounds for termination, thereby limiting their ability to challenge or correct potentially unlawful outcomes.

From a Brazilian legal perspective, employers must navigate these innovations carefully. Brazilian labour law and the LGPD impose strict duties on employers to prevent discrimination and ensure fairness in employment practices. Should an AI system contribute to decisions that are found to be discriminatory or lacking in due process, employers may face not only reputational damage but also labour claims and administrative penalties. Moreover, under the LGPD, the improper use or inadequate protection of personal data – especially where sensitive data is involved – can lead to significant financial liability, including fines and compensation for moral damages. Employers may also bear legal responsibility if decisions made through automated systems are not subject to adequate human oversight or review, particularly in cases where the outcome materially affects the data subject.

In this context, companies operating in Brazil should implement robust governance frameworks to monitor the performance and fairness of AI systems used in employment. This includes documenting decision-making criteria, ensuring transparency with candidates and employees, and maintaining procedures for human intervention where needed. These measures are not only good practice from a risk management standpoint but may also be essential to demonstrating compliance with both employment and data protection legislation.

The use of AI in employee evaluation and monitoring also raises important legal and ethical considerations. While such tools may enhance productivity oversight, track key performance indicators or detect behavioural anomalies, their deployment must be carefully calibrated to respect employee dignity and privacy.

Under Brazilian labour law and prevailing jurisprudence, employees have a right to intimacy and honour, which may be infringed by excessive or intrusive monitoring practices. Additionally, under the LGPD, monitoring systems that collect personal data – including behavioural or biometric information – must be clearly justified, proportionate and accompanied by transparent notice to the data subject. Employers must ensure that employees are informed about what data is being collected, for what purpose and how it will be used in performance assessments or disciplinary processes. Where AI tools are used to issue automated evaluations or trigger disciplinary measures, companies must implement mechanisms for human validation, ensuring that workers have the opportunity to respond, contest or seek clarification. Failure to do so could not only breach data protection obligations but also support claims for moral damages or constructive dismissal under labour law principles.

The classification of platform workers has been a contentious issue within the Brazilian legal system. While some regional labour courts have reclassified platform workers as employees, entitling them to associated labour benefits, the Federal Supreme Court (Supremo Tribunal Federal; STF) has taken a more nuanced stance, emphasising the need to adapt labour laws to the evolving dynamics of the modern economy.

In the regulatory sphere, Bill No 2,338/2023 seeks to establish a comprehensive framework for the development and use of AI in Brazil. This proposed legislation underscores the importance of transparency, particularly concerning AI systems that could impact fundamental rights or public safety. It mandates that developers and users of AI systems ensure their operations are fair, transparent and comprehensible. While the Bill does not explicitly require digital platforms to disclose the inner workings of their algorithms, it does promote the adoption of best practices related to AI explainability and transparency.

Given these developments, digital platform companies operating in Brazil should proactively assess and, if necessary, adjust their AI systems and operational practices. Ensuring compliance with existing labour laws and forthcoming AI regulations will be crucial to mitigate legal risks and uphold ethical standards in the deployment of AI technologies.

Brazilian financial services companies are increasingly integrating AI to streamline operations, enhance decision-making and manage risk. In practice, these institutions employ machine learning and predictive analytics for functions such as credit scoring, fraud detection, customer segmentation, algorithmic trading and overall risk management.

In Brazil, financial institutions leveraging AI must navigate a range of regulatory frameworks, notably the LGPD and the regulations issued by BACEN.

Repurposing legacy data for AI training poses significant risks, particularly where historical datasets reflect discriminatory practices. This not only exposes financial institutions to litigation and regulatory scrutiny but also undermines the principles of fairness and equality. Even where unintentional, such bias can give rise to claims of discriminatory practices under both labour and consumer protection laws, potentially resulting in costly lawsuits and administrative sanctions. Incidents involving biased decision-making or data breaches can severely damage a bank’s reputation, erode customer trust and ultimately lead to a loss of business and investor confidence.

ANVISA is responsible for overseeing the safety and efficacy of SaMD and AI-powered medical devices used in diagnostics, treatment support and robotic surgery. Products classified as medical devices – including those incorporating AI – must undergo rigorous clinical validation, risk assessment and post-market surveillance. Additional regulatory requirements apply to technologies used in telemedicine and remote care, contributing to a broader regulatory framework governing the digital health ecosystem.

AI-driven robotic surgical systems offer enhanced precision and enable minimally invasive procedures, improving surgical outcomes. However, their use also raises important concerns, particularly regarding system failures, lack of algorithmic transparency and difficulties in assigning liability when errors occur.

Where patient data is used to train AI algorithms in healthcare, several critical issues must be addressed. These include obtaining informed consent, ensuring transparency, applying effective anonymisation or de-identification techniques, verifying data quality and representativeness, and enabling secure data sharing in compliance with data protection legislation.

AI algorithms are increasingly used to analyse medical images, laboratory results and patient histories to support early diagnosis and treatment planning. They can forecast disease progression, identify patient-specific risk factors and generate personalised treatment recommendations. AI also contributes to the efficiency of healthcare administration, streamlining appointment scheduling, patient triage and resource allocation. In addition, natural language processing (NLP) is applied to extract insights from electronic health records (EHRs) and to automate the documentation of clinical notes.

In Brazil, the National Traffic Department (Departamento Nacional de Trânsito; Denatran) and the National Land Transport Agency (Agência Nacional de Transportes Terrestres; ANTT) are responsible for overseeing vehicle safety and operational standards. With the integration of AI into vehicles, these authorities are increasingly considering international guidelines, such as those developed by the United Nations Economic Commission for Europe (UNECE), to establish performance, cybersecurity and data handling requirements.

Determining liability in incidents involving autonomous vehicles in Brazil involves a combination of product liability, traffic laws and contractual provisions. If an accident is caused by a malfunction in the AI system or a failure in the vehicle’s software, manufacturers and software providers may be held strictly liable. Conversely, if the incident results from improper human intervention or negligence – particularly in semi-autonomous systems where human oversight is mandated – the driver or operator may bear responsibility.

To address cybersecurity concerns, it is essential for manufacturers to implement comprehensive measures, including end-to-end encryption, regular vulnerability assessments and strict access controls, to protect against data breaches and hacking attempts.

The deployment of AI in autonomous vehicles also raises ethical considerations. While Brazil has not yet established definitive legal standards for these dilemmas, regulators and industry stakeholders are actively engaged in discussions to ensure that AI systems are designed to make decisions that respect human rights and minimise harm.

The National Institute of Metrology, Quality and Technology (Instituto Nacional de Metrologia, Qualidade e Tecnologia; Inmetro) and other sector-specific regulatory bodies are responsible for overseeing product safety standards in Brazil. AI systems integrated into manufacturing processes must comply with existing quality and safety regulations applicable to the final products. Where AI technologies influence product performance or reliability, regulators may assess both the physical and algorithmic components under applicable technical standards.

Under Brazil’s Consumer Defence Code, manufacturers are held strictly liable for defects that cause harm to consumers. Where an AI system contributes to a product malfunction – by affecting its performance, safety or quality – liability may extend not only to the manufacturer but also to the software developer or system integrator, particularly where a defect in the algorithm or data processing has played a causal role.

The introduction of AI in manufacturing also presents opportunities for upskilling, as workers are trained to operate, maintain and supervise advanced automated systems. The automation of hazardous or repetitive tasks can contribute to improved workplace safety and reduced physical strain. However, increased automation may also lead to a reduction in lower-skilled roles, highlighting the need for effective retraining programmes and ongoing social dialogue. These measures are essential to support labour transitions and promote inclusive workforce development in the context of digital transformation.

Although there is no legislation in Brazil specifically regulating the use of AI in professional services, several legal and regulatory instruments – such as the LGPD, the Civil Code and sector-specific guidelines – are applicable to the deployment of AI in legal, financial, consulting and other advisory contexts.

Professionals who make use of AI tools within their services are required to exercise a high standard of due diligence, particularly in verifying the accuracy and reliability of AI-generated outputs. Where erroneous or biased recommendations are produced by such systems, liability may extend to both the individual professional and their firm, especially where there has been a failure to implement appropriate oversight and quality control mechanisms.

Furthermore, AI systems used in the provision of professional services must comply with the same strict standards of confidentiality and client data protection that apply to traditional professional conduct. This includes ensuring that personal or sensitive information processed by AI tools is safeguarded in accordance with both the LGPD and applicable codes of ethics governing the profession.

Apart from discussions on AI inputs and outputs, which will be addressed later, it is possible to protect AI systems as such. Under current legislation, computer programmes considered “software per se” are not eligible for patent protection. However, computer-implemented methods – including those based on AI – that produce a verifiable technical effect applicable to an industrial activity may be eligible for protection.

The guidelines of the Brazilian National Institute of Industrial Property (Instituto Nacional da Propriedade Industrial; INPI) specify that AI techniques, including machine learning and deep learning tools, may qualify as inventions when applied to the resolution of technical problems. This approach is consistent with that adopted in other jurisdictions, requiring the invention to go beyond an abstract idea by demonstrating practical application and measurable technical utility.

Thus, AI-assisted and AI-related inventions must meet the legal requirements for patentability – namely, novelty, inventive step and industrial applicability – in addition to demonstrating a specific technical effect. This effect must result from the application of AI to the solution of a real technical problem. Accordingly, the patent application must clearly describe the technical functioning of the employed technology, as well as the nature and relevance of the technical problem being addressed.

With regard to the legal and formal requirements for patentability, descriptive sufficiency plays a central role in the assessment of AI-generated and AI-assisted inventions. The specification must contain sufficient detail to enable a person skilled in the art to understand and reproduce the invention, as required by Article 24 of the Industrial Property Law (Law No 9,279/1996). However, machine learning algorithms – particularly those that evolve during training – pose additional challenges, as their internal logic may change dynamically based on input data. As a rule, such changes or additions to the specification are considered to constitute the introduction of new matter after filing, which is prohibited under Article 32 of Law No 9,279/1996. Therefore, the patent application must detail, at the time of filing, the AI model used, the architecture adopted, the training methods employed and the relevant characteristics of the training data, including volume, origin, preprocessing and representativeness. This approach is essential to demonstrate descriptive sufficiency from the filing date, ensuring technical clarity and legal certainty during the examination of the patent application.
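
As a purely illustrative aid – not an INPI form or filing requirement – the disclosure elements enumerated above can be thought of as a structured record fixed at the filing date. Every field name and value below is a hypothetical example:

```python
from dataclasses import dataclass, field

@dataclass
class FilingDisclosure:
    """Hypothetical checklist of the technical details that, per Article 24
    of Law No 9,279/1996, should be fixed in the specification at filing."""
    model_type: str               # the AI model/technique employed
    architecture: str             # e.g. layers, topology, hyperparameters
    training_method: str          # e.g. supervised learning, fine-tuning
    data_volume: str              # size of the training data
    data_origin: str              # provenance of the training data
    preprocessing: list[str] = field(default_factory=list)
    representativeness: str = ""  # how the data covers the problem domain

disclosure = FilingDisclosure(
    model_type="convolutional neural network",
    architecture="5 convolutional blocks + 2 dense layers",
    training_method="supervised learning on labelled sensor images",
    data_volume="120,000 images",
    data_origin="proprietary factory-floor cameras",
    preprocessing=["normalisation", "augmentation by rotation"],
    representativeness="covers all 14 defect classes seen in production",
)
```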

Finally, and no less importantly, the current definition of a person skilled in the art – both for the purpose of analysing inventive step and for examining patentability – still refers to an individual with ordinary knowledge in the relevant field, rather than someone assisted by AI, with full access to AI systems and training data, and with routine capacity for work and experimentation. This distinction could significantly affect the objectivity of inventive step assessments and the rigour applied during examination.

In Brazil, the designation of AI systems as inventors in patent applications has been the subject of legal scrutiny. In August 2022, the INPI issued a legal opinion stating that only natural persons can be named as inventors, thereby excluding AI entities from this role. This position was applied in the case of the AI system DABUS, where the INPI rejected the application listing DABUS as the inventor, reinforcing the requirement of human inventorship under current Brazilian law.

In response to the evolving discourse on AI and IP, Brazilian lawmakers have introduced legislative measures to address this gap. Notably, Bill No 303/2024 proposes amendments to the Industrial Property Law to allow AI systems to be recognised as inventors and holders of patent rights. The Bill aims to adapt Brazilian legislation to technological advancements by acknowledging AI’s role in innovation. However, as of March 2025, this Bill remains under discussion and has not yet been enacted into law.

Regarding copyright, there is currently no specific legislation or judicial decision in Brazil addressing the recognition of AI systems as authors or co-authors of creative works. The existing legal framework implies that authorship is attributed to natural persons, and no public records indicate court rulings or agency decisions that confer authorship status to AI entities. Therefore, under current Brazilian law, AI-generated works do not receive copyright protection, as authorship is reserved for human creators.

To enforce trade secret protection, Brazilian law requires that the information is not generally known or easily accessible and that the holder has taken appropriate steps to keep it confidential. In the context of AI, this often involves implementing robust internal policies, restricting access to sensitive information, and utilising non-disclosure agreements (NDAs) with employees and third parties. Additionally, non-compete clauses may be employed to restrict former employees or partners from entering into competition using proprietary AI technologies or data. It is essential for organisations to ensure that such contractual provisions comply with Brazilian labour laws and are reasonable in scope and duration to be enforceable. While trade secret protection offers a flexible and cost-effective means to protect AI innovations, it is important to note that this form of IP does not prevent independent discovery or reverse engineering by others. Therefore, companies must weigh the benefits and limitations of trade secret protection against other forms of IP rights.

In Brazil, the protection of works of art and authorship generated by AI is contingent upon the extent of human involvement in their creation. The prevailing legal framework attributes authorship exclusively to natural persons, meaning that purely AI-generated works are not eligible for copyright protection. However, if a human provides a prompt embodying significant creativity – using AI merely as an instrument rather than as the originator of the creative expression – the resulting work may qualify for copyright protection, with the human regarded as the author. This perspective aligns with broader international discussions on the necessity of human authorship in copyright law, although no specific judicial decisions have yet been issued on the point in Brazil.

See the foregoing sections in 15. Intellectual Property.

Acqui-hires involve tech giants acquiring AI start-ups not solely for their technology or market share, but primarily to secure their human capital and specialised expertise. In Brazil, the Competition Authority (Conselho Administrativo de Defesa Econômica; CADE) has begun scrutinising these non-traditional transactions, as they may impact competitive dynamics. Regulators are concerned that such deals could be used strategically to pre-empt future innovation by removing promising start-ups from the market. While acqui-hires may lead to efficiencies and accelerated innovation, they can also result in market concentration if tech giants systematically absorb AI talent, thereby diminishing the pool of independent innovators and stifling competition.

This potential abuse of data-driven market power is increasingly attracting the attention of antitrust regulators. Although Brazil’s competition law (Law No 12,529/2011) provides a legal basis for addressing abuses of market power, there is a growing call for reforms specifically tailored to digital markets and data-driven practices.

Lawmakers are considering proposals to update Brazil’s cybersecurity framework to explicitly address risks emerging from AI systems. These legislative efforts aim to impose additional obligations on companies developing or deploying AI – particularly those utilising large language models (LLMs) – to counteract the increased speed and scope of potential cyber-attacks.

AI technologies, and LLMs in particular, have lowered the technical and financial barriers to entry for malicious actors, enabling the rapid creation of sophisticated malware, social engineering schemes and advanced data analytics for use in cyber-attacks. Forthcoming legislation may require AI systems to incorporate “security by design” principles, including advanced verification measures, robust encryption and regular security audits.
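As a purely illustrative sketch of one possible “security by design” measure, the snippet below verifies the integrity and authenticity of requests to a hypothetical AI service endpoint using keyed signatures. The endpoint, key management and names are assumptions for illustration, not requirements drawn from any pending bill.

```python
import hashlib
import hmac
import os

# Illustrative "security by design" measure: verify that a request to an
# AI service carries a valid keyed signature before it is processed.
# Key handling and names are hypothetical placeholders.

API_KEY = os.environ.get("AI_SERVICE_KEY", "change-me").encode()

def sign_request(body: bytes) -> str:
    """Produce a keyed SHA-256 signature over the request body."""
    return hmac.new(API_KEY, body, hashlib.sha256).hexdigest()

def verify_request(body: bytes, signature: str) -> bool:
    """Constant-time comparison to resist timing attacks."""
    return hmac.compare_digest(sign_request(body), signature)

if __name__ == "__main__":
    payload = b'{"prompt": "summarise this contract"}'
    sig = sign_request(payload)
    assert verify_request(payload, sig)
    assert not verify_request(payload, "forged")
    print("signature checks passed")
```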

Training and operating AI models require substantial computational power, leading to increased energy consumption. Brazil’s energy matrix, largely based on renewable sources such as hydroelectricity, may help mitigate some of these impacts. Nevertheless, the growing demand for data centres and cloud computing presents significant challenges. Conversely, AI-driven applications can optimise resource use, support environmental monitoring and enhance efficiency in sectors such as agriculture and logistics. These and other environmental, social and governance (ESG) considerations are reflected in the PBIA for 2024–28, which aims to develop sustainable and socially oriented technologies.

Implementing effective AI governance in Brazil requires the establishment of a comprehensive legal framework that delineates clear rights and obligations for all stakeholders involved in the AI ecosystem. The proposed Bill No 2338/2023 aims to create such a framework by defining the roles of developers, deployers and distributors of AI systems, and by introducing risk categorisation to manage AI applications appropriately. This Bill underscores the importance of principles such as transparency, accountability and non-discrimination, which are essential for fostering trust and safeguarding fundamental rights. To operationalise these principles, organisations should conduct thorough risk assessments of their AI systems, implement robust data governance policies and ensure compliance with existing regulations like the LGPD. Additionally, establishing internal oversight committees can help monitor AI deployments and address ethical concerns proactively. Lastly, investing in education and capacity-building initiatives can empower individuals and organisations to understand and navigate the complexities of AI.
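To make the risk-categorisation idea concrete, the sketch below triages hypothetical AI use cases into the broad tiers discussed in the Bill (excessive risk, high risk and other applications). The tier labels follow the Bill’s general terminology, but the example use cases and their mapping to tiers are illustrative assumptions only.

```python
from enum import Enum

# Illustrative triage using the broad risk tiers discussed in Bill No 2338/2023;
# the mapping of use cases to tiers is an assumption for illustration only.

class RiskTier(Enum):
    EXCESSIVE = "excessive risk (prohibited)"
    HIGH = "high risk (additional obligations)"
    OTHER = "other (general principles apply)"

# Hypothetical mapping of example use cases to tiers.
EXAMPLE_TIERS = {
    "social scoring of citizens": RiskTier.EXCESSIVE,
    "credit scoring": RiskTier.HIGH,
    "recruitment screening": RiskTier.HIGH,
    "spam filtering": RiskTier.OTHER,
}

def triage(use_case: str) -> RiskTier:
    """Return the assumed tier for a use case, defaulting to OTHER."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.OTHER)

for case in EXAMPLE_TIERS:
    print(f"{case}: {triage(case).value}")
```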

Dannemann Siemsen

Av Rodolfo Amoedo, 300
Barra da Tijuca 22620-350
Rio de Janeiro – RJ
Brazil

+55 21 2237 8700

+55 21 2237 8922

mail@dannemann.com.br

www.dannemann.com.br