Artificial Intelligence 2024

Last Updated May 28, 2024

Poland

Law and Practice

Authors



Sołtysiński Kawecki & Szlęzak (SK&S) is one of Poland's leading full-service law firms. With more than 180 attorneys, the firm provides the highest standard of legal services in all areas of business activity and is well-reputed for the quality of its work and innovative approach to complex legal problems. Since the 1990s, SK&S has been closely associated with the ever-changing technology sector, especially the dynamically developing IT industry. The firm provides high-quality legal services to both individuals and companies, covering the full scope of TMT issues. The team works alongside the firm's fintech, IP/IT, privacy and tax teams to provide an innovative interdisciplinary service and to help businesses use state-of-the-art technologies in a safe, cost- and time-effective manner. SK&S was the founding member of the New Technologies Association.

There are no general provisions of Polish law that would specifically apply to AI.

AI is currently qualified as software, and the laws applicable to software should be used to regulate AI, eg:

  • Contracts made with the use of AI should be treated like those made with pre-programmed algorithms.
  • No category of AI agents or entities can hold legal capacity or be legally liable.
  • Tort liability for using AI vis-à-vis a third party should be attributed to users and/or providers of such AI.
  • Privacy laws, including GDPR, apply to the processing of personal data by AI.
  • Polish law does not recognise AI-generated creations or inventions as such. As a rule, only a human being can be the author of a copyrighted work or the inventor of an invention protected by industrial property law; however, AI output may be covered by so-called derivative (neighbouring) rights (eg, video or sound recordings).
  • The use of AI in an employment context must comply with all employment regulations (including non-discrimination rules) and respect employees' rights.
  • Consumer laws apply to the offering or use of AI by consumers (eg, to terms and conditions).

AI and machine learning ("ML") have been applied in various sectors in Poland, but AI deployment is rather slow and focuses on automation and efficiency.

In our experience, key industry applications are currently based on generative AI ("GAI") and include:

  • Chatbots and virtual assistants (both generally available chatbots and dedicated systems) – eg, internal chatbots that provide responses grounded in a given entity's knowledge base (see the sketch after this list).
  • Supporting back-office functions (eg, software development, summarisation of meetings or documents, drafting/reviewing, updating and verifying internal databases, preparing marketing content).
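
For readers less familiar with how such "grounded" chatbots work, below is a minimal sketch of the retrieval step: the assistant answers only on the basis of documents retrieved from the organisation's own knowledge base. All names (KNOWLEDGE_BASE, retrieve, answer) and the keyword-overlap scoring are simplified assumptions for illustration, not a description of any specific product.

```python
# Illustrative sketch only: a chatbot "grounded" in an internal knowledge base.
# The knowledge base, scoring method and prompt wording are all assumptions.

KNOWLEDGE_BASE = {
    "leave-policy": "Employees are entitled to 26 days of paid leave per year.",
    "vpn-setup": "Install the corporate VPN client and sign in with your SSO account.",
}

def retrieve(query: str, top_k: int = 1) -> list[str]:
    """Rank knowledge-base documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    ranked = sorted(
        KNOWLEDGE_BASE.values(),
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def answer(query: str) -> str:
    """Build a prompt instructing the model to rely only on retrieved context."""
    context = "\n".join(retrieve(query))
    return (
        "Answer using ONLY the context below; otherwise say 'I don't know'.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

print(answer("How many days of paid leave do employees get?"))
```

In production deployments the keyword overlap would typically be replaced by vector-based semantic search, but the point is the same: the model is instructed to rely only on the entity's own, verified content.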

There are multiple cross-industry initiatives concerning new technologies in Poland.

The Polish government's initiatives to facilitate and support the adoption and advancement of AI for industry use are limited and include:

  • Adoption of an AI Policy by the Polish Council of Ministers in 2020, which is a strategic plan to advance AI development for societal, economic, and scientific benefits.
  • Establishment of the PL/AI "Artificial Intelligence for Poland" advisory group in early 2024 upon the initiative of the Deputy Prime Minister and Minister of Digital Affairs.
  • Implementation of tax incentives for robotisation, with plans to establish a dedicated AI support fund in 2025.
  • Activities of the National Centre for Research and Development to support AI innovation (eg, by awarding grants and through IDEAS NCBR, an R&D centre).
  • Establishment of a Standing Subcommittee on Artificial Intelligence and Transparency of Algorithms in the Polish Parliament.

There are no AI-specific regulations in Poland; hence, the EU-wide AI legislation will apply, as Poland has opted to regulate AI at the EU level. Some local acts will have to be adopted in Poland to implement the Regulation laying down harmonised rules on artificial intelligence ("AI Act") and amending certain Union legislative acts (2021/0106) – see section 3.7 Proposed AI-Specific Legislation and Regulations.

No AI-specific legislation has been enacted in Poland.

AI Policy

The AI Policy focuses on creating transparent and accountable algorithms for use in public administration, enhancing data access, and applying AI to healthcare and environmental protection. There is an ongoing discussion regarding an amendment of this Policy.

Position of the Polish Financial Supervision Authority (“FSA”) on the provision of robo-advisory services (2020)

The guidelines emphasise the user's control over the use of AI and its responsibility for clear communication with clients. A human should make the final decision.

Recommendations on AI in the financial sector (2022)

The Ministry's Working Group on AI (Subgroup for the Financial Sector) identified several barriers to using AI and provided recommendations in the identified fields.

Recommendations for the use of AI in justice and law enforcement (2024)

The document suggests adopting AI to modernise and speed up the judicial system – including digitalising records, automating transcriptions, drafting orders, using chatbots, searching for case precedents, drafting decisions, and implementing electronic delivery and translation systems.

Guidelines on the responsible use of generative AI in research (2024)

In March 2024, the European Commission – together with the European Research Area countries and stakeholders – published guidelines focusing on research quality, honest GAI use, respect for participants, and accountability in research.

Communication on information processing by supervised entities using public or hybrid cloud computing services

The Polish Financial Supervision Authority published these recommendations in January 2020. They do not focus on AI technology itself but are of great importance for AI implementations, as most AI is made available through the public cloud.

In April 2024, the Ministry of Digital Affairs started consultations on implementing the AI Act.

Poland currently has no AI-specific laws, hence no related inconsistencies.

A legislative project implementing the text and data mining exceptions provided for by the Digital Single Market Directive is pending but has not yet been adopted.

Not applicable (this question concerns US law only).

No special local Polish laws have been introduced or amended to foster AI development. On the contrary, Poland is already behind in implementing the Directive on copyright and related rights in the Digital Single Market, which envisages the text and data mining exemptions relevant to AI training.

While the EU's primary aim is to regulate AI through the EU AI Act, AI must also comply with other EU regulations, including:

  • GDPR. Both the AI Act and the GDPR apply to AI solutions, leading to potential legislative overlap. Practices banned under the AI Act could also violate the GDPR's rules on automated personal data processing. The AI Act reinforces GDPR rights with its own legal instruments, emphasising transparency and effective human oversight of AI systems.
  • Digital Services Act – see section 14.1 Digital Platform Companies.
  • The Data Act specifies the basic data-sharing models and enables access to data to train, fine-tune or verify AI models. AI may also be needed to analyse the vast amounts of data generated by connected products and services – particularly to identify patterns, potential improvements or inventions earlier.
  • The Digital Single Market Directive – see section 3.2 Jurisdictional Law.

The EU's most significant pending AI legislation is the AI Act.

Polish companies usually use existing AI models already implemented in their solutions. At present, due to the costs, they usually do not fine-tune existing models or create their own. However, as the availability of various models (from simple ones, which may be stored on mobile devices, to very complex ones) increases, Polish companies are expected to start building their own models as well.

The AI Act provides many new obligations regarding the use of AI systems in the EU. It will apply not only to entities based in EU countries but also to entities outside the EU that want to introduce AI systems in the EU, as well as to entities based outside the EU where the output of their AI systems is intended to be used within the EU.

Prohibited and high-risk AI systems

The development and use of certain AI systems will be prohibited in the EU. The AI Act also introduces a category of high-risk systems, which will be allowed as long as providers adopt additional safeguards, in particular risk management and data governance systems. High-risk AI systems include remote biometric identification systems, systems used for evaluation or admission in education or employment, credit scoring systems, etc.

Timeline and steps to be taken

Provisions regarding prohibited AI systems will apply six months after the AI Act enters into force, ie, by the beginning of 2025. The obligations regarding high-risk AI systems will apply 24 months after the AI Act enters into force (or 36 months for systems already required to undergo conformity assessment under EU law).

First preparations should include the following:

  • reviewing current projects and future development plans to assess whether an AI system may fall into a prohibited or high-risk category;
  • if an AI system may qualify as high-risk, assessing:
    1. whether it is possible to change it; or
    2. whether it falls within a specific exception provided by the AI Act; and
  • establishing AI governance and an implementation task force.

New regulators

The AI Act provides for the establishment of an AI Office and for new AI-related powers for authorities at member state level. Businesses will need to adapt to this new area of regulation and to the new powers of these authorities.

There have been no decisions related to AI in Poland.

Not applicable in Poland.

EU

  • European Artificial Intelligence Office (“AI Office”): The AI Office (established on January 24, 2024) will enforce the rules for general-purpose AI models, except high-risk models.
  • European Commission (EC): The EC regulates AI in the EU, emphasising ethical development, data protection, consumer rights, and competition to promote innovation and responsible use.
  • European AI Board: The AI Act also provides for the establishment of the European Artificial Intelligence Board, comprising one representative from each member state, to support the Commission and member states in implementing the AI Act effectively.
  • Scientific panel: This panel is composed of experts chosen by the Commission on the basis of up-to-date scientific or technical expertise in the AI domain. The scientific panel will advise and support the AI Office and member states.

Poland

  • Polish AI office: It has not yet been decided whether this new office will be created or whether its competencies will be awarded to one of the existing offices. Based on the results of a pre-consultation completed in May 2024, the Ministry of Digital Affairs has suggested that the most likely scenario at present is the creation of a new body responsible for overseeing and certifying AI systems in various sectors.
  • The coordination of AI implementation is the responsibility of the Ministry of Digital Affairs.
  • In order to effectively monitor and coordinate AI implementation, an AI Policy Task Force is planned to be established at the Committee of the Council of Ministers for Digital Affairs.
  • Standing Subcommittee on AI and Transparency of Algorithms. The Polish Parliament formed a Subcommittee to discuss AI’s societal impact and assess related opportunities and risks.
  • The Working Group on Artificial Intelligence (GRAI), established by the Ministry of Digital Affairs, aims to identify strategies to foster the right environment for AI development in Poland's private and public sectors, as well as in scientific research.

Additionally, several governmental bodies and institutions play roles in regulating aspects related to AI. Key entities are:

  • Personal Data Protection Office (UODO): UODO regulates data protection in Poland and plays a vital role in overseeing AI that involves personal data collection and processing.
  • Office of Electronic Communications (UKE): UKE oversees Poland’s telecom sector, potentially regulating AI in telecom infrastructure and services to ensure data, privacy, and security compliance.
  • Polish FSA (KNF): KNF oversees Poland's financial sector. In the past, it has regulated the use of new technologies in the financial sector (including cloud computing), and it is expected to issue recommendations on embracing AI.

A legal definition of AI in national legislation and international conventions has yet to be developed.

According to the OECD, an AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments; AI systems differ in their levels of autonomy and adaptiveness after deployment. The EU ultimately decided to follow the OECD definition in the AI Act (see 3.7 Proposed AI-Specific Legislation and Regulations).

Harmonising the definition of AI across member states and legal documents applicable in a given country is crucial for consistency, legal clarity, and responsible AI adoption. Businesses should actively engage with policymakers to shape coherent and adaptive frameworks.

Each agency focuses on its own area of activity. Please refer to section 5.2 Technology Definitions.

So far, enforcement and other regulatory actions have been limited in scope. For instance, UODO is investigating a complaint about ChatGPT (see 8.3 Data Protection and Generative AI).

Poland

The Polish Committee for Standardisation (PKN) creates and approves Polish Standards (PN) and plans to establish a separate Technical Committee on AI. It will be the lead Polish committee for cooperation with the international standardisation committees ISO/IEC JTC 1/SC 42 (AI) and CEN/CLC/JTC 21 (AI).

See also information about the AI Policy – section 2.2 Involvement of Governments in AI Innovation.

International

  • OECD - Framework for the Classification of AI systems: The framework allows users to zoom in on specific risks typical of AI, such as bias, explainability and robustness, yet it is generic in nature. It facilitates nuanced and precise policy debate.
  • International Organisation for Standardisation (ISO): ISO/IEC 42001:2023 (Information technology – Artificial intelligence – Management system) specifies a certifiable AI management system framework under which AI-based products or services can be developed as part of an AI assurance ecosystem. AI standards enhance transparency, data quality, and system reliability, mitigating risks and maximising rewards.
  • European Standards Organisations (ESOs): CEN, CENELEC, and ETSI are responsible for setting up EU standards.
  • The National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) is intended for voluntary use and aims to improve the ability to incorporate trustworthiness considerations into designing, developing, using, and evaluating AI products, services, and systems.

The standards are usually intended for voluntary use by businesses, but in some sectors, compliance with such standards is deemed essential for the credibility and reliability of products.

In Poland, the decisive standards are those set by ISO, and any discrepancies with Polish Standards (PN) are likely to be inconsequential (similar to the situation with cloud computing, which is based on ISO and SOC I and II standards). Polish companies will need to adhere to international standards if they wish to offer their services beyond Polish borders.

The Polish government is engaging with AI across various domains; known examples include:

  • Virtual Assistant – a chatbot using the GPT model, which will assist citizens in using the mObywatel app (a mobile app offering access to digital official services and electronic documents);
  • Virtual Civil Servant – advanced software for e-offices forming part of the Information Systems for eGovernment;
  • PLLuM (Polish Large Language Model) – an initiative of Polish scientific institutions, supported by the Ministry of Digital Affairs, which aims to create a Polish large language model; and
  • STIR – a system comprising a set of algorithms introduced to analyse the data that financial institutions (including banks) are obliged to provide to the Polish tax authorities.

AI is not yet widely used in the criminal justice system, but its use is expected to become much more widespread. Currently, the most common example of AI use in this area is the general legal information system at the disposal of judges and prosecutors. This system includes a search function (for rulings, literature, etc) based on AI algorithms.

In October 2023, the Ethics and Legal Subgroup of the Working Group on Artificial Intelligence (GRAI) under the Ministry of Digital Affairs issued a report entitled “Recommendations for the application of artificial intelligence in the judiciary and prosecution”. The contemplated use of AI systems in the criminal justice system is focused on supporting, rather than replacing, judges, prosecutors and other legal professionals.

Not applicable in Poland.

The AI Policy (see 2.2 Involvement of Governments in AI Innovation) emphasises the importance of AI in national security and encourages cooperation between the private and military sectors to address defence needs.

According to the National Security Strategy of the Republic of Poland (2020), AI creates new development opportunities for Poland while generating previously unknown risks.

GAI raises many issues; at the moment, the key ones include:

  • data protection;
  • confidentiality and IP;
  • sector-specific regulations (such as banking or other financial services regulations, health rules, etc);
  • ethical concerns (eg, ensuring lack of bias or discrimination);
  • technical risks (eg, prompt injection, training data poisoning, lack of proper training data, etc);
  • general operational risks (eg, overreliance, possible service disruptions, lack of employees with AI skills and lack of technological resources).

As for IP, see 8.2 IP and Generative AI. For personal data, see 8.3 Data Protection and Generative AI.

Copyright protection

AI-generated results

Copyright protection of AI-generated output is limited, since only humans can be considered authors under Polish law (as in most countries). AI output as such cannot be considered a copyrightable work.

It may be argued that AI is only a tool and that AI-generated output can be a copyrightable work, as long as human involvement in its creation was significant and all creative choices and decisions were made by a human, as with photography. This can apply only in specific cases and should be assessed case by case.

With regard to derivative (neighbouring) rights, such as rights to video or sound recordings (protection may be granted, eg, to recordings of birds' sounds), the rights may be allocated to individuals or companies.

The position with respect to output ownership depends on market practice. For example, the MidJourney T&Cs provide that users of the commercial (paid) version of the platform are the sole owners of the content, while free-version users only receive a licence for personal use of such content. Users are responsible for the input and are obliged to indemnify MidJourney in the event of any third-party rights violation caused by such non-compliant input. The Microsoft Product Terms state that Microsoft does not own the output of Generative AI Services.

The output may infringe copyrights or other third-party IP rights. Therefore, some providers, following Microsoft's customer copyright commitment, offer indemnification should a third party claim that AI output infringes its IP rights. Such indemnification is usually subject to certain conditions (eg, using filters or content-monitoring tools).

Moreover, providers usually specify in their T&Cs whether they use the output for training the model.

AI input

The input provided to AI systems can be protected as a standard work under copyright law, as long as it meets all the copyright law requirements (eg, originality and creativity). T&Cs usually specify whether the input will be used to train the model.

AI system itself

An AI system itself can be protected under copyright law as software. In Poland, as in the whole EU, software is protected similarly to literary works, with some necessary modifications due to the nature of computer programs.

Training data

Training data can be protected by copyright or by database protection laws. However, there are discussions as to whether the use of training data constitutes an infringement of copyright or may be based on the text and data mining exemptions (once implemented in Poland). It is argued that training AI models is similar to learning from a book and consists of learning ideas or concepts that are not protected by copyright. However, no court cases have been brought in Poland against model developers.

Trade secrets, know-how

Input, training data, software, and output can be considered trade secrets as long as the definition of the trade secret is met (eg, they have economic value, are kept confidential, and security measures are introduced to maintain their confidentiality). This may be relevant in cases where AI output cannot be protected (eg, software output).

The main data protection issues connected to GAI are:

  • Lack of transparency of complex GAI models (the "black box" problem), which does not allow data controllers to assess the processing (in particular, to explain automated decision-making under Art. 22 GDPR). To a certain extent, this risk may be addressed by more information provided by the models' developers.
  • Lack of data minimisation, especially at the model-training stage. Proper training data, including anonymised information, may limit this risk during model training or fine-tuning. When using GAI, users should implement policies and, where feasible, use technical tools such as filters (see the sketch after this list) to mitigate the risk of personal data being provided to the model unintentionally.
  • Lack of purpose limitation. Using personal data for training or fine-tuning models may not be consistent with the purposes for which such data were collected. Controllers should assess whether they may process the data for such purposes and, if necessary, comply with additional requirements (Art. 6.4 of the GDPR).
  • Ensuring data accuracy in AI is challenging due to the risk of generating incorrect responses. It is important to design models based on correct data, with safeguards and output filters, and for businesses to verify outputs and train users in prompt creation.
  • Data subject rights should be respected, and there are no specific exceptions for GAI. Some AI companies provide guidelines on how data subjects' rights can be exercised with respect to their technology.
  • It seems that false personal data in outputs could still be considered personal data and rectified upon request. However, there are also opposing views, according to which such outputs are merely probability-based projections about the data subject.
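
As an illustration of the "technical tools (eg, filters)" mentioned above, the following is a minimal sketch of a pre-submission filter that redacts obvious personal data from prompts before they reach an external GAI service. The patterns (e-mail address, Polish PESEL number, Polish-format phone number) are deliberately naive assumptions and would need to be far more robust in practice.

```python
import re

# Illustrative sketch only: redact obvious personal data from a prompt
# before it is sent to an external GAI service. Patterns are simplified.

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PESEL": re.compile(r"\b\d{11}\b"),  # Polish national identification number
    "PHONE": re.compile(r"(?<!\d)(?:\+48[ -]?)?\d{3}[ -]?\d{3}[ -]?\d{3}(?!\d)"),
}

def redact(prompt: str) -> str:
    """Replace each match with a typed placeholder so the model never sees it."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact Jan at jan.kowalski@example.pl or +48 601 234 567."))
# -> Contact Jan at [EMAIL REDACTED] or [PHONE REDACTED].
```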

Where possible, the rights of data subjects should be respected; in particular, a request to delete data should not entail the deletion of the whole model. However, the model should be reviewed so that it does not generate future output containing false data or the data of a person who has requested deletion.

Guidance in this scope is expected following a complaint filed in Poland in September 2023 claiming a lack of data subject rights fulfilment and transparency in ChatGPT.

There are currently no guidelines or recommendations from the Bar Associations or the Ministry of Justice on the use of AI by legal professionals; however, the Bar Associations are preparing a proposal. There is no case law either.

As for AI solutions, most law firms use standard AI tools, particularly for translation or for achieving efficiency and improving the quality of work (eg, summarisation of meetings, drafting clauses, etc). The most common seems to be Copilot (including M365 Copilot). AI tools are also used in e-discovery to identify relevant documents. Other solutions (eg, those supporting the drafting of litigation documents) are often not adjusted to Polish law and are therefore of limited use.

Legal practitioners should ensure that:

  • any AI-generated content is verified by a human;
  • the confidentiality of the client's data is protected, and AI tools that use input to train models or that do not offer an appropriate level of protection are prohibited within the firm; and
  • appropriate procedures and policies are implemented.

Currently, the use of AI is subject to standard liability rules set forth in national laws:

Fault-based liability

The aggrieved person must prove the fault of the liable person, the damage and the causal link between the fault and the damage in order to bring a successful liability claim. In the context of AI, relying on this principle is difficult, as it may often not be easy to identify the responsible person or the cause (given the lack of access to the model's "construction"). Compliance with the AI manufacturer's instructions may shield users from fault, while altering AI code or using it for unintended purposes could attribute fault to the user.

Strict (risk-based) liability

In specific cases, Polish law attributes liability to certain persons without the aggrieved person needing to prove fault. A person who operates an enterprise powered by natural forces (such as steam, gas, electricity or liquid fuels) is liable for injuries or property damage resulting from its operation, without fault having to be proven. However, this provision does not apply to AI, as providers of AI systems or models will not qualify as operators of enterprises powered by natural forces.

Strict liability also applies to a vehicle's possessor, though it is still necessary to prove causation. Thus, its application will remain difficult where the damage is caused by an AI system installed in the vehicle.

Liability for defective products

The current rules, based on the implementation of Council Directive 85/374/EEC, do not allow AI systems themselves to be considered "products" within the scope of the provisions on defective products, as products are limited to movable items and AI systems cannot be classified as movable products. However, please see section 10.2 Regulatory.

Contractual liability

If the aggrieved party uses AI under a contract, it may claim damages for breach of contract; it has to prove the breach, the damage and causation. There is a presumption that the alleged perpetrator is liable for the damage, although they may prove otherwise. The difficulties outlined above in claiming damages caused by AI also apply to contractual liability. However, providers of AI solutions often offer additional measures, such as indemnities for third-party claims related to the infringement of IP rights by output generated by the provided AI (see 8.2 IP and Generative AI). On the other hand, in most business transactions the parties limit the provider's liability (eg, to 12 months' remuneration). Limitations or exclusions of liability will not be effective in contracts with consumers or, in B2B relations, where the damage is caused wilfully.

Liability for infringement of personal interests

This is a fault-based liability. It is attributable to a person who uses AI systems, or the creations of AI systems, without proper verification or with malicious intent, eg, using deepfake technology. Besides typical financial claims, the plaintiff may request that the infringer publish a given statement in the media.

Insurance position

At the moment, there is no obligatory insurance similar to that required for the use of vehicles. Due to the confidentiality of model training, it is rather unlikely that insurance coverage will become an affordable standard solution in the foreseeable future.

Currently, two legislative initiatives in the EU are relevant from the perspective of liability for using AI.

Directive on the liability for defective products

The European Parliament adopted a new directive on March 12, 2024. While similar to the existing 1985 Directive on defective products, it introduces changes that could significantly impact liability for AI systems.

It will cover not only physical goods but also software (including, in particular, damage consisting of destroyed or corrupted data). The burden of proof will be simplified, and the aggrieved party will be able to claim both material damage and medically confirmed psychological harm.

The Directive of the European Parliament and the Council on adapting non-contractual civil liability rules to AI

The Directive, introduced in September 2022, is currently halted but may be reintroduced after the 2024 European Parliament elections.

Bias characterisations and risks

Algorithmic bias occurs when a system discriminates against a specific group or individual as a result of various factors, eg, biased training data, discrimination in data collection, a biased training team, defective model parameters, or inappropriate deployment. Bias has not been explicitly defined in the Polish legal system. AI bias can affect the personal interests and freedoms of individuals, for example by discriminating against them in a recruitment process or credit scoring, which may lead to claims for compensation and an erosion of consumer trust.

Regulations, industry efforts

The GDPR introduces a human oversight requirement for processes that qualify as automated decision-making, unless an exception (eg, authorisation by law) applies. Human oversight can take different forms: (i) Human-in-the-Loop (HITL), which involves human intervention in every decision cycle of the system; (ii) Human-on-the-Loop (HOTL), which allows human intervention during system design and monitoring; and (iii) Human-in-Command (HIC), which enables overseeing the overall AI activity and deciding when and how to use it in specific situations. See also section 13. AI in Employment.
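
By way of illustration, the sketch below shows one possible shape of a Human-in-the-Loop (HITL) gate, in which every automated proposal is held for explicit human confirmation before it takes effect. All names and the example decision are hypothetical; this is one way of operationalising oversight under the assumptions stated in the comments, not a prescribed compliance mechanism.

```python
from dataclasses import dataclass

# Illustrative sketch only: a HITL gate where every automated proposal
# requires explicit human confirmation. Names and values are hypothetical.

@dataclass
class Decision:
    subject: str
    outcome: str        # eg, "reject_credit_application"
    model_score: float  # model confidence, shown to the reviewer

def ai_propose(subject: str) -> Decision:
    """Stand-in for a model call; returns a proposed, not final, decision."""
    return Decision(subject, "reject_credit_application", 0.91)

def human_review(decision: Decision) -> bool:
    """A human reviewer confirms or overrides every single proposal (HITL)."""
    reply = input(
        f"Approve '{decision.outcome}' for {decision.subject} "
        f"(score {decision.model_score:.2f})? [y/n] "
    )
    return reply.strip().lower() == "y"

proposal = ai_propose("applicant-1042")
final = proposal.outcome if human_review(proposal) else "escalate_to_manual_review"
print("Final decision:", final)
```

Under the HOTL and HIC models, the `human_review` step would instead sit at the design/monitoring level or at the level of deciding whether to use the system at all, rather than inside each decision cycle.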

Under Art. 10 of the AI Act, in the development of high-risk AI systems that use data for training, validation and testing, it is crucial to adhere to quality criteria, such as data governance, examination for biases that are likely to affect the health and safety of persons, and ensuring data relevance, representativeness, accuracy and completeness. The statistical properties of these datasets should align with the system's intended purpose and the specific groups affected by its use. Even when an AI system is not deemed high-risk, it remains crucial for organisations to carry out their own risk evaluations to mitigate any potential adverse outcomes, including bias.

Transparency requirements arising from Art. 13 of the AI Act also aim to ensure clarity and minimise bias in AI.

Art. 15 of the AI Act requires that high-risk AI systems that continue to learn after being placed on the market or put into service be developed in such a way as to eliminate or reduce as far as possible the risk of possibly biased outputs influencing input for future operations (feedback loops), and to ensure that any such feedback loops are duly addressed with appropriate mitigation measures.

Risks

GDPR compliance risks and potential solutions are described in section 8. Generative AI above.

In cases where a human does not supervise AI, these risks are greater, especially regarding AI's potential lack of quality training data, lack of context/human understanding, potentially no ethical considerations, or risk of bias. Also, it may be difficult to interpret and explain such decisions.

In terms of ensuring security for AI systems, there is a risk that such systems may be difficult to secure due to their complexity. Also, new threats appear concerning such technology, such as the possibility of creating inaccuracies in personal data (hallucinations) or the difficulties in respecting data subject rights requests (eg, data deletion requests).

Benefits

AI can provide benefits for protecting data, such as enhanced personalisation (eg, in marketing or healthcare) and efficiency (eg, faster data-driven decision-making).

AI may also ensure greater security through its efficiency (the possibility to handle large volumes of data), adaptive learning (learning from new threats) and advanced threat detection (quick and accurate threat detection).

Legal issues, liability

The AI Act will provide restrictions on using real-time and retrospective facial recognition:

  • Most notably, biometric categorisation systems that categorise individual natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation will be prohibited (with some exceptions for law enforcement), and the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes will be prohibited (unless the use is strictly necessary for the objectives listed in the AI Act and specific obligations are met). AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage will also be prohibited.
  • Many biometric systems would be considered high-risk AI systems and will be subject to specific rules (eg, emotion recognition biometrics, biometric categorisation based on sensitive attributes).
  • Post-remote biometric identification systems may be considered high-risk AI systems and are subject to specific rules. Importantly, law enforcement authorities may take no decision that produces an adverse legal effect on a person based solely on the output of such systems.
  • As a rule, deployers of emotion recognition or biometric categorisation systems should inform the natural persons exposed thereto of the operation of the system.
  • Member states will be able to adopt stricter rules regarding facial recognition in public spaces.

Currently, the use of such systems is subject to GDPR and local laws implementing the Data Protection Law Enforcement Directive (LED).

Risks

  • Biological biometric data are uniquely personal and irreplaceable if compromised. The storage of such data poses a heightened risk of cyber attacks, with severe consequences – identity theft or unauthorised access to secure systems – if the data is breached. This risk is lower for behavioural biometric data.
  • Current AI systems used for facial recognition may generate unreliable results, in particular for people of certain races or ethnic origins; the systems may also perform less accurately for women.
  • Use of facial recognition technology can lead to privacy infringements and discrimination.
  • There is a high risk of misuse of such technologies.
  • Ethical concerns over facial recognition involve debates on surveillance and mass tracking.

Automated Decision-Making (ADM) employs algorithms and AI systems to automate tasks traditionally requiring human insight, learning from data to predict outcomes.

Risks

In terms of risks, ADM systems can inherit biases (section 11.1 Algorithmic Bias). ADM models often lack transparency, leading to mistrust.

Applicable rules – Art. 22 GDPR

  • GDPR restricts automated decision-making that produces legal effects or significantly affects individuals.
  • Individuals have the right not to be subject to decisions based solely on automated processing. Exceptions exist, eg, a decision necessary to conclude or perform a contract.
  • Failure to comply could result in liability under GDPR.
  • The Article 29 Data Protection Working Party Guidelines on Automated Individual Decision-Making and Profiling for the purposes of Regulation 2016/679 specify the rules resulting from the GDPR.

European Parliament Resolution on Automated Decision-Making Processes

  • Existing regulatory frameworks cover aspects relevant to services incorporating ADM, including consumer protection, ethics, and liability.

DSA

  • The DSA aims to enhance algorithmic transparency and accountability, targeting intermediary services with additional specific requirements for online platforms and VLOPs.
  • Noncompliance may lead to liability under the DSA.

AI Act

  • Deployers must inform individuals when high-risk AI systems are used in decision-making that affects them, eg, those affecting the terms of work-related relationships.
  • The AI Act establishes, under specific conditions, a right to a meaningful explanation when an AI system is used to make a decision.
  • Noncompliance may lead to liability under the AI Act.

Currently, there are no specific rules. The AI Act will introduce the following requirements:

  • Transparency for AI systems:
    1. Providers must design AI systems that directly interact with natural persons in a way that informs users they are interacting with an AI system.
    2. Exceptions apply when it is obvious to a reasonably well-informed person that they are interacting with an AI system.
    3. Legal authorisation may exempt certain AI systems used for criminal offence detection, prevention, investigation, or prosecution.
    4. How a high-risk AI system functions should be transparent so that the user can comprehend and use the output suitably.
  • Marking outputs:
    1. AI-generated outputs (such as audio, image, video, or text) must be marked in a machine-readable format (see the sketch after this list). Some exceptions apply (eg, for artistic activity or if the system used does not significantly alter the input data). Legislators may also exempt certain AI systems; stricter rules apply to deepfakes.
  • Text content for public information:
    1. Deployers must disclose if text content published with the purpose of informing the public on matters of public interest is artificially generated or manipulated.
    2. Exceptions apply when authorised by law for criminal offence detection or if the content undergoes human review.
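
To illustrate what a machine-readable marking might look like in practice, below is a minimal sketch that attaches a JSON provenance record to AI-generated text. The field names (ai_generated, model, content_sha256) are assumptions for illustration; the AI Act does not prescribe a specific format, and industry content-provenance standards may ultimately fill this gap.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative sketch only: a machine-readable provenance record attached
# to AI-generated text. Field names are assumptions, not a mandated format.

def mark_ai_output(text: str, model_name: str) -> dict:
    """Wrap generated text with machine-readable provenance metadata."""
    return {
        "content": text,
        "provenance": {
            "ai_generated": True,
            "model": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        },
    }

record = mark_ai_output("Draft press summary ...", model_name="example-llm-v1")
print(json.dumps(record, indent=2))
```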

It is currently unclear whether the use of AI technology in price-setting may have a similar effect to the use of traditional pricing algorithms.

Risks

  • Competitor Monitoring and Price Adjustment:
    1. Competitors may use AI-based pricing algorithms to continuously monitor and precisely adjust their prices based on market conditions. This may lead to higher prices and reduced competition (see the sketch after this list).
    2. AI tools may provide more analytical capabilities, thus improving price trend-setting.
  • Confidentiality risk:
    1. Use of models trained on the inputs provided by clients may lead to accidental disclosure of pricing mechanisms to competitors.
  • Discrimination:
    1. AI-driven pricing may differentiate prices between customers; although this could result in improved market efficiencies, it might also raise issues related to fairness and equality.
  • Enforcement:
    1. The intricacy of AI-driven algorithms may make it challenging for antitrust regulators to pinpoint potential anti-competitive behaviour. Existing legal regulations and enforcement methods may need to be revised to tackle these issues effectively.
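
The repricing mechanism described in the first risk above can be illustrated with a toy sketch (all values invented): a seller's algorithm continuously undercuts the lowest observed competitor price down to a floor. When several such loops interact on a market, prices can drift toward tacit alignment, which is precisely the antitrust concern.

```python
# Toy sketch (all values invented) of an algorithmic repricing loop:
# reprice to just below the cheapest competitor, never below the floor
# and never above the current price.

FLOOR = 80.0      # seller's minimum acceptable price
UNDERCUT = 0.99   # reprice to 1% below the cheapest competitor

def reprice(competitor_prices: list[float], current: float) -> float:
    """One monitoring-and-adjustment cycle of the pricing algorithm."""
    cheapest = min(competitor_prices)
    return max(FLOOR, min(current, cheapest * UNDERCUT))

price = 100.0
for observed in ([98.0, 101.5], [96.0, 97.0], [95.0, 95.5]):
    price = reprice(observed, price)
    print(f"Observed competitor prices {observed} -> new price {price:.2f}")
```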

The procurement of AI technology in Poland involves two groups of entities: private entities and public entities.

Private entities

For private entities, the principle of freedom of contract applies. The key issues that currently have to be addressed in contracts for the procurement of AI solutions include: whether the input or output data are used to train the models; ownership of the output; liability and warranties; IP-related issues (in particular, dealing with third-party claims relating to the model or output); the use of personal data (processor, controller, joint controller); data flows; the location of the input, output and model; abuse monitoring and content filtering; and the possibility of suspending services in cases of abuse or other violations.

Public entities

Public entities may be obliged to conclude contracts for AI technology under public procurement rules, and they often provide a template contract in the contracting documents. As they are not AI experts, the proposed contracts may not address the relevant risks and may not be consistent with market standards. It is therefore recommended that public entities carry out two-step procedures or negotiations in order first to precisely identify their goals and the ways in which they may be achieved.

Employers utilise tech and software for recruitment and termination processes to save costs and time. The use of AI in employment is unregulated in Poland, so employers must adhere to general anti-discrimination and privacy laws:

  • Discrimination in employment is prohibited, including during hiring or termination. Technology use must avoid any discrimination unrelated to work.
  • Employers are responsible for the technology they use and can face claims for compensation or damages if its use has a discriminatory effect.
  • The Polish Labour Code specifies which personal data may be processed in hiring and employment; technology use, including AI, must not violate privacy regulations.

Employers use various digital tools to evaluate employees' performance, track working time, and review employees' work or use of resources. If technology is used to monitor employees' email and/or activity, such monitoring should be introduced formally via relevant internal policies and procedures, announced to employees, and applied within that framework. It cannot infringe the secrecy of correspondence or an employee's personal interests.

A breach of these rules may give rise to employees' claims for infringement of their rights through illegal monitoring, including claims for compensation. It may also constitute a breach of privacy laws.

The use of AI on digital platforms should be assessed in light of the Digital Services Act ("DSA"). Above all, the following conclusions should be kept in mind:

  • In principle, providing GAI models or solutions based on such models (eg, conversational solutions such as chats) will not constitute an intermediary service within the meaning of the DSA.
  • However, to the extent that providers of intermediary services are also providers or deployers of AI systems, they may be obliged to take certain actions under the AI Act.
  • Information generated or modified by AI that is not marked in accordance with the AI Act should be considered illegal content within the meaning of the DSA. Also, even if properly marked, information may be illegal under the DSA, particularly if it refers to content that is illegal or describes illegal activities.
  • Due to varying requirements posed by the regulations, a risk analysis conducted under the DSA may not be sufficient under the AI Act.

There are no specific rules regarding employment in Digital Platform Companies (see also section 13. AI in Employment).

Financial services institutions ("FSIs") are very focused on innovation, including AI. For example, a Polish bank has implemented a chat service that informs customers about the bank's services and analyses customers' transactional data. FSIs are also interested in services that automate internal activities, eg, for anti-money laundering and anti-fraud purposes.

There are no legal provisions specifically regarding AI in FSIs. However, financial services are highly regulated, and introducing new technologies is usually subject to complex obligations:

  • If AI is used by FSI to process banking, insurance or other statutory professional secrets, regulations regarding outsourcing (including potential non-EEA outsourcing) may be applicable. Outsourcing definitions vary across regulations in Poland's financial sector.
  • When AI services use cloud computing, the Communication from the UKNF (the Polish financial regulator) on information processing by supervised entities using public or hybrid cloud computing services will apply.
  • Automated decision-making by an FSI (which can be supported by AI) is subject to additional restrictions resulting from local legislation, especially where it is made to assess creditworthiness and analyse credit risk.
  • In the scope of using AI for robo-advisory, the FSI should take into account the FSA position on the provision of robo-advisory services – see section 3.3 Jurisdictional Directives.
  • EU-wide guidelines (eg, the EBA Discussion Paper on ML for IRB Models) may also be considered.
  • FSIs that implement AI should take the risks related specifically to AI into account, including the risk of bias (see 11.1 Algorithmic Bias) or opacity and non-transparent decision-making (see 11.5 Transparency). On top of that, the FSI should take into account standard risks applicable to its operations, including ensuring the confidentiality of professional secrets and business continuity.

New technologies may be highly beneficial for the healthcare sector. ML algorithms can analyse big datasets, bringing accuracy and speed to the activities of healthcare providers and health researchers. ML and AI can assist in diagnosis and disease management, the interpretation of medical images, drug discovery, and operations. New technology also enables new ways of collecting data and providing health assistance, eg, telemedicine/chats or wearable technology.

Regulations and main issues

There are no regulations relating specifically to the use of AI and connected ML in healthcare. The general regulations will apply, ie:

  • The GDPR, as supplemented by special Polish provisions related to processing data in the medical context.
  • Provisions regulating medical secrets.
  • AI may constitute a medical device and, as a consequence, be subject to applicable regulations (eg, Medical Devices Regulation (MDR) and In Vitro Diagnostic Medical Devices Regulation (IVDR)). If the AI were to become a part of another medical device, then such a device would have to meet the regulatory requirements. The burden would generally be placed on the manufacturer/importer of such a device.
  • Data sharing, including for better healthcare delivery and training AI, is covered by the Data Act and will be governed by the EU Regulation on European Health Data Space once applicable.

Risks

  • Bias in AI – Risks related to bias are described in 11.1 Algorithmic Bias. In healthcare, AI's decision-making can significantly affect patient life.
  • Overreliance and misuse – AI users may struggle to evaluate AI outputs, including their origins, hindering proper verification.
  • AI errors causing patient harm and liability – AI errors (eg, stemming from flawed data or mistakes in its design) can cause misdiagnosis or incorrect medical interventions, potentially resulting in physical harm during AI-assisted surgery (see 10. Liability for AI).
  • Transparency and privacy/security issues - see sections 8.3 Data Protection and Generative AI and 11.2 Data Protection and Privacy. Healthcare AI's main privacy risks include misuse of patient data, breaches exposing sensitive information, and harmful cyber attacks, especially with medical records and live health data devices.

Polish law generally prohibits the use of autonomous vehicles (AVs); however, automation features may be used under the supervision of a human driver. There is an exemption for approved research testing on public roads, but not every type (level) of AV can currently be tested on Polish roads – only vehicles at level 3 of automation may be tested in Poland. There are no AI-specific regulations. The Ministry of Infrastructure intends to facilitate the testing of autonomous vehicles at all levels of automation, and of their equipment, on public roads (starting in 2025).

Liability

The standard vehicle liability rules also apply to AVs (see section 10. Liability for AI). Motor vehicle users are required to hold mandatory insurance for civil liability.

Significant issues

The AI Policy acknowledges that AI-based systems used in autonomous transport can significantly lower the number of accidents and, as a result, the number of fatalities. However, the use of AVs is linked to certain issues and risks, including:

  • Privacy and safety: AVs may collect a vast amount of personal data, which may pose risks to privacy. Also, the systems used in AVs may be subject to cyber-attacks, threatening the safety of users.
  • Ethical concerns: AI in AVs may encounter scenarios requiring moral judgment, touching upon significant ethical concerns.
  • Responsibility: Assigning accountability for the decisions made by AI involves multiple parties.

There are no specific regulations on the use of AI in manufacturing: the rules on liability are described in section 10. Liability for AI, and the employment issues are described in section 13. AI in Employment. Data privacy rules (sections 8.3 Data Protection and Generative AI and 11.2 Data Protection and Privacy) apply. The rules on product safety are based on the Machinery Regulation, which generally aims to cover new technologies. AI may be beneficial in reducing workforce requirements; however, as AI automates certain activities, it may lead to layoffs, increased unemployment and a loss of competencies. Under the Data Act, manufacturers may become one of the key sources of data for training AI.

There are no specific regulations on the use of AI in professional services; thus, the liability rules described in section 10. Liability for AI will apply. The provision of some professional services is covered by sector-specific regulations and ethical rules (eg, legal services), and the use of AI must remain compliant with them. Most importantly:

  • Confidentiality: Providing professional services may be subject to specific professional secrecy rules (eg, lawyers). Professionals should ensure that they provide client data only to AI systems that ensure confidentiality (including the performance of legal obligations and compliance with engagement letters signed with clients) or refrain from providing such data in their prompts.
  • Compliance: Professionals may implement internal policies for their businesses to ensure proper AI usage.
  • Need for supervision: Professional services require knowledge of and insight into the specific situation of a client. While this can be supported by AI, outputs provided by AI should be verified by humans.

There are no decisions in Poland relating to whether AI technology can be an inventor or co-inventor for patent purposes or an author or co-author for copyright purposes.

The prevailing view is similar to that already presented in the USA and other EU countries, ie, that AI-created works or inventions cannot be copyrighted or protected by industrial property law.

Patent law

With regard to patent protection, only a human can be an inventor. This was confirmed in the DABUS case, in which the European Patent Office and, later, the Board of Appeal refused patent protection for an invention generated by an AI system.

Copyright law

In Poland, copyright protection arises by operation of law. There is no need to file an application, and there is no Copyright Office. Disputes relating to copyright infringements are decided by special IP departments of common courts.

See also section 8.2 IP and Generative AI.

Recently, the first case in the EU regarding copyright ownership of AI creations was decided, with a Prague court ruling that DALL-E creations are not copyrightable as they are not human-made.

Contractual provisions

Non-disclosure arrangements are the most common contractual instruments for protecting AI technologies and generated content. However, such protection is limited to the contracting parties and is not generally enforceable against third parties, unlike IP protection, which applies to all potential infringers. See section 8.2 IP and Generative AI.

Trade secret

AI technology, training data, input, or output may be protected as trade secrets, as they do not necessitate human authorship. As confidentiality is one condition of such protection, appropriate instruments must be in place (eg, non-disclosure contracts; see above and section 8.2 IP and Generative AI).

The prevailing view is that only a human can be the creator of copyright-protected works; however, video or sound output may be protected under derivative (neighbouring) rights. See sections 8.2 IP and Generative AI and 15.1 Intellectual Property.

Using OpenAI tools to create works and products is generally associated with the same risks as using other GAI systems.

When advising corporate boards of directors in identifying and mitigating the risks in the adoption of AI, the following key issues should be discussed:

  • clear understanding of contemplated use cases and potential risks and benefits (also for non-implementation);
  • competitors' and customers' approach to AI and applying AI to contemplated use;
  • internal governance.

Based on market practice, the following best practices are usually adopted in organisations implementing AI solutions:

  • Identification and understanding of use cases that involve the use of AI.
  • The verification of potential AI solutions and terms and conditions of contracts for the use of such AI (see section 15.2 Applicability of Trade Secrecy and Similar Protection).
  • Careful assessment of potential risks and the methods by which they may be mitigated.
  • Introduction of policies and rules for using AI in an organisation.
  • Implementing human supervision.
  • Ensuring transparency, eg, providing information on the use of AI in the case of chatbots or within an organisation.

Sołtysiński Kawecki & Szlęzak

Jasna 26 Street
00-054 Warsaw
Poland

+48 22 608 70 00

+48 22 608 70 01

office@skslegal.pl www.skslegal.pl

Trends and Developments


Authors



Sołtysiński Kawecki & Szlęzak (SK&S) is one of Poland's leading full-service law firms. With more than 180 attorneys, the firm provides the highest standard of legal services in all areas of business activity and is well-reputed for the quality of its work and innovative approach to complex legal problems. Since the 1990s, SK&S has been closely associated with the ever-changing technology sector, especially the dynamically developing IT industry. The firm provides high-quality legal services to both individuals and companies, covering the full scope of TMT issues. The team works alongside the firm's fintech, IP/IT, privacy and tax teams to provide an innovative interdisciplinary service and to help businesses use state-of-the-art technologies in a safe, cost- and time-effective manner. SK&S was the founding member of the New Technologies Association.

Background of implementing Artificial Intelligence (AI) in Poland

In Poland, companies are exploring the opportunities created by AI in two main ways: first, by developing and selling AI technology; and second, by using it internally to improve efficiency and gain a competitive advantage. Market participants are cautious (especially in highly regulated and sensitive sectors like healthcare or financial services) but are monitoring and assessing the potential of AI.

Government and parliament activity

Poland held parliamentary elections in October 2023, in which the opposition won control of the government. Subsequently, a new President of the Personal Data Protection Office (UODO) was appointed in January 2024.

The Polish Parliament established a Standing Subcommittee on AI and Transparency of Algorithms to discuss and assess the societal impact of AI, as well as the opportunities and risks associated with it. The Subcommittee began its activity in 2024 and has so far held five meetings, attended by various Polish leaders and experts from government, academia and industry involved in the advancement and regulation of digital technology, communications, consumer rights, personal data protection, labour relations, and trade.

The Subcommittee discussed, in particular:

  • The implications of the AI Act and its implementation in Poland; the discussions concentrated mainly on the designation of AI authorities in Poland.
  • The main issues arising from applying AI in the employment context. A proposal was made to amend the Trade Unions Act and oblige employers to provide trade unions, at their request, with information about "the parameters, rules and instructions on which algorithms or artificial intelligence systems that influence decision-making are based, and which may affect working conditions and pay, access to and retention in employment, including profiling". This project was first discussed in 2022, before the parliamentary elections, but did not proceed beyond the committee stage; reactivating the new Parliament's work on this bill was therefore discussed. The practical significance of this provision is rather limited, as it does not give the trade unions consultation powers before such tools are deployed. Also, it seems that providing general information, at the level of granularity of personal data processing notices (Arts. 13 and 14 of the GDPR), will be sufficient to meet the requirement (although, during the discussions in the Subcommittee, some participants noted that businesses did not provide information on algorithms, citing trade/business secrets). Thus, the trade unions may not receive more information than employees or candidates. The provision will also not apply to organisations without trade unions. It has been underlined that either existing provisions or the upcoming AI Act already cover automated decision-making, including profiling, and this additional restriction may be excessive or may become inconsistent with other regulations. However, as the work is at an early stage, Parliament may decide to substantially expand this information obligation and the powers of trade unions (though some business organisations state that the proposal is currently too general and unclear). The Subcommittee has referred the project to the Committee on Digitisation, Innovation and Modern Technologies, from which it may subsequently be submitted to Parliament.
  • The reimbursement system for the use of non-drug technologies (treatment options) in the Polish healthcare system. The discussions, which were also attended by representatives of healthcare technology companies and industry organisations, suggest that AI in healthcare may receive considerable attention in Poland. The Subcommittee underlined that AI can be used to improve the organisation of the healthcare system and public healthcare. It was pointed out that AI can be beneficial in the healthcare system, with some hospitals already using AI tools (eg, support in radiology), and that further implementation of such tools would be particularly important in light of the overloaded Polish healthcare system. Financial issues were also discussed; it was argued that companies developing AI solutions for healthcare should receive more financial support, especially given the high costs of preparing such solutions and their potential benefit to society as a whole.
  • The processing of medical data in AI systems, which is generally seen as an important issue that the Subcommittee aims to address in a comprehensive manner (for example, by enabling the development of medical AI tools while ensuring that individuals' rights are respected). A meeting with the President of the UODO and the Ministry of Health is planned to discuss this issue. The President of the UODO stated that a significant review of medical regulations may be expected, covering issues related to health data sharing (especially in light of the EU's European Health Data Space), and that the UODO will issue recommendations on the processing of medical data in AI systems.
  • The most recent of the Subcommittee's meetings focused on a presentation by the President of the UODO, in which the authority underlined that Poland will have to carefully implement new EU data legislation (eg, the Data Act and the Data Governance Act) in order to achieve the EU's strategic goals related to digitalisation and AI and to ensure the protection of individuals. The President of the UODO also discussed the relationship between the AI Act and the GDPR.

In addition, a separate meeting dedicated to the issue of personal data protection in the context of AI has already been announced.

Dedicated AI teams have also been created by the Ministry of Digital Affairs, which recently established PL/AI: a group of young entrepreneurs, developers and AI experts who will advise on improving specific areas of the State with AI (in the field of public services and administration) and help accelerate the country's technological transformation. The team's goal is to identify both opportunities and threats in five key areas (security, health, effective state, education and development) and to recommend the implementation of ten strategic projects, two in each area, so significant developments and opportunities in these fields can be expected.

Upcoming regulatory supervision

The AI Act requires Member States to designate national bodies to implement and enforce it. In Poland, the debate is whether to assign AI oversight to new institutions or to integrate it into existing ones. The Ministry of Digital Affairs conducted a pre-consultation on the implementation of the AI Act, including the question of which institution should perform the function of the notifying body and whether supervision should be entrusted to a single institution or divided among several, representing independent categories of stakeholders. In May 2024, the Ministry presented the results of the pre-consultation.

The vast majority of views expressed were in favour of establishing a new AI market regulator. Respondents in this group believe that the existing institutions do not have sufficient powers and resources, that a specialised body responsible for the safe and lawful development of AI would benefit both the public and private sectors, and that creating a new body would avoid potential conflicts of competence and priorities in the work of other institutions.

A minority of respondents believe that the competences of the supervisory authority should be entrusted to an existing institution. The following entities were mentioned as the most appropriate: the consumer protection authority (Urząd Ochrony Konkurencji i Konsumentów, OCCP), the data protection authority (UODO), the financial regulator (Komisja Nadzoru Finansowego, KNF), the broadcasting authority (Krajowa Rada Radiofonii i Telewizji, KRRiT), the regulator for telecommunications, postal activities and frequency resources management (Urząd Komunikacji Elektronicznej, UKE), and the Ministry of Digital Affairs (Ministerstwo Cyfryzacji, MC). Regardless of which entities will oversee artificial intelligence in Poland, recruiting staff with sufficient experience will be difficult.

The UODO has not issued any decisions or guidelines specifically on AI, but it has conducted some activities in this field; until now, the authority has shown its interest in AI mainly by organising conferences on new technologies, including AI. The new President of the UODO, Mirosław Wróblewski, was appointed in January 2024, and the Office is expected to be more active and engaged in issues related to new technologies. The President of the UODO stated that the Office is already preparing for work related to AI and will support legislative work connected to AI, underlining that the UODO plans to contribute its expertise to future legislative efforts in this domain. As mentioned above, the President of the UODO delivered a presentation on AI in Parliament, which demonstrates the UODO's interest and engagement in the topic.

He also commented on significant risks to fundamental rights due to AI advancements and stressed the importance of enforcing GDPR data protection principles, including training Data Protection Officers to handle technological challenges.

According to press releases, the UODO is currently reviewing a complaint against OpenAI's ChatGPT for alleged unlawful data processing and lack of transparency. In this case, the Deputy President of the UODO acknowledged the significance of AI technology and insisted that it be developed in line with the GDPR, emphasising the need to protect EU citizens from the potential adverse effects of data processing technologies.

Market discussions related to AI legislation

The Polish legal and business community is also focused on monitoring and discussing AI-specific legislation and standards, and is very active in ensuring that future legislation will enable the proper development of AI in Poland.

For example, the Government recently published a draft bill implementing the text and data mining exception into Polish copyright law, as required under Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market, amending Directives 96/9/EC and 2001/29/EC. The draft sparked a wide-ranging discussion in Poland, especially in the IT sector, because it prohibited the use of this exception to train general-purpose AI systems, even though the Directive itself does not provide for such a limitation. Many public organisations and developer associations criticised this proposal during the public consultations, and as a result the restriction was removed from the draft. In its current version, the general text and data mining exception may also be relied on, even for commercial purposes, to train general-purpose AI systems.

Risks related to AI are also taken into account when commenting on proposed changes to the law, even if AI is not the direct subject of such proposals. For example, the Ministry of Justice is working on changes to civil law and procedure to introduce an audiovisual will. Experts have underlined that new technologies such as AI, which allow for the easy and cheap generation of deepfakes, pose significant risks for such solutions. The legislator is considering these risks and the potential security measures to be provided in the law.

AI deployments by public entities

There are some interesting examples of public entities deploying AI systems in Poland. In February 2024, the National Information Processing Institute announced that a new AI tool had been added to the Single Anti-Plagiarism System, which universities in Poland use to detect plagiarism in students' theses and doctoral dissertations. The new tool is designed to detect content created by generative AI systems such as ChatGPT.

In April 2024, the President of the OCCP announced that the Office is using an AI system to analyse the terms and conditions used by various service providers on the Polish market and to detect so-called abusive clauses. Other regulators are expected to start using AI solutions soon (eg, to analyse vast amounts of data and documents during inspections).

Although the adoption of AI in the public sector will be slower than in business and, most likely, subject to additional risk assessments, its application can be expected to increase, especially where it may significantly speed up the processing of judicial and administrative cases.

AI in business

Poland has seen an increase in AI projects, with several start-ups and companies making significant contributions to the field. However, AI adoption is generally slower than in other European countries, especially those (such as the Nordic countries) that traditionally develop and implement new technologies first. Most AI projects currently implemented in Polish businesses focus on internal applications, such as improving efficiency (eg, in software development or analysis), reducing costs and using AI to analyse vast amounts of data. There are not yet many AI projects aimed at external parties (customers), although they are starting to appear on the market, and more advanced AI projects are expected to follow.

There is also an initiative to create a Polish Large Language Universal Model, run by a recently established consortium comprising the Wrocław University of Science and Technology (consortium leader), the National Research Institute NASK (PIB NASK), the National Information Processing Institute (OPI PIB), the Institute of Computer Science of the Polish Academy of Sciences (IPI PAN), the University of Łódź and the Institute of Slavic Studies of the Polish Academy of Sciences. The consortium's main goal is to create an open, free model, trained mostly on Polish-language content, and to develop an intelligent assistant based on this model.

Sołtysiński Kawecki & Szlęzak

Jasna 26 Street
00-054 Warsaw
Poland

+48 22 608 70 00

+48 22 608 70 01

office@skslegal.pl
www.skslegal.pl
