The dawn of the AI era means that a system's every operation and decision no longer has to be defined by explicit instructions, and adapting the system no longer requires code modifications, unlike conventional software built on sorting algorithms (eg, quicksort, mergesort) or search algorithms (eg, binary search).
The use of AI-based systems in the Polish healthcare system will become increasingly common.
Examples of AI systems in healthcare include:
The AI revolution in medicine, in practice, means the acceleration of medical procedures in physician diagnosis. AI primarily helps physicians analyse data by combining data from a specific medical procedure for a specific patient with a modelled approach to the data. AI can identify diseases in their early stages and with greater precision and speed than humans can, for example, in diagnostic imaging. AI is capable of personalising therapy through data analysis. This means that AI can develop individual treatment plans and eliminate unwanted side effects.
AI, therefore, transcends the barrier of diagnostic capabilities, as it not only can detect diseases but goes a step further: predicting a patient’s future condition by identifying patterns suggesting health problems, for example, using data from wearable devices such as watches or fitness trackers. Such systems fall within the scope of Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (the “AI Act”).
AI can also support clinical trials by identifying suitable candidates for participation in final-phase clinical evaluation programmes for new drugs or medical devices, and it shortens drug development time.
Challenges of AI
One challenge of AI in medicine is the risk of AI drift, also known as model drift, where the input data encountered after deployment differs from the training data. This can lead to, for example, incorrect diagnoses when input data (eg, X-rays) differ from those known to the model, or when a model trained primarily on older individuals over time begins to receive and diagnose data from younger patients (Article 67 of the AI Act).
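To make the drift risk concrete, the sketch below (illustrative only, with hypothetical data and thresholds) shows how a deployer’s monitoring script might compare the age distribution of incoming cases with the ages represented in the training set, using a two-sample Kolmogorov–Smirnov test, and escalate to human review when the populations diverge.

```python
"""Hypothetical sketch: flagging model/data drift on patient age.

Compares the age distribution of recent inputs against the ages seen
during training; a low p-value suggests the population has shifted and
the model may need revalidation. Data and threshold are illustrative.
"""
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_ages = rng.normal(loc=68, scale=9, size=5_000)   # model trained mostly on older patients
recent_ages = rng.normal(loc=41, scale=12, size=800)      # younger population now being diagnosed

statistic, p_value = ks_2samp(training_ages, recent_ages)

DRIFT_ALPHA = 0.01  # illustrative significance threshold
if p_value < DRIFT_ALPHA:
    print(f"Possible drift detected (KS={statistic:.3f}, p={p_value:.2e}): "
          "escalate for human review and revalidation.")
else:
    print("No significant drift detected in the monitored feature.")
```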
Another significant risk and barrier is AI hallucination, where AI creates information that is not based on actual data or a known source and is therefore substantively false. This is why the principle that the final decision should be made by a human is so important (Article 27 of the AI Act).
Also among the challenges facing AI is protecting patient data privacy and implementing appropriate control and security mechanisms. AI requires large sets of medical data, such as images (eg, X-rays, CT scans), laboratory test results and genomic records. However, these are particularly sensitive data. Processing them carries risks such as patient re-identification after pseudonymisation, unauthorised access or data leakage (Articles 75 and 76 of the AI Act).
There is also the ‘black box’ problem: the operation of an algorithm is difficult to understand and explain, and it is unclear exactly how the AI reached its conclusions (Article 72 of the AI Act).
The main market trends in healthcare AI in Poland are:
Healthcare AI systems should be explained starting from the basic definition of an AI system introduced by the AI Act. According to Article 3 of the AI Act, an AI system is “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
Importantly, AI systems are classified in terms of risk, which, according to Article 3 of the AI Act, is a combination of the probability of harm and its severity. The classification distinguishes four risk levels: unacceptable, high, limited and minimal. Any AI system deemed to pose a clear threat to human safety, livelihoods or rights is prohibited as an unacceptable risk.
Polish national legislation does not define AI in healthcare per se. The AI Act does not directly regulate AI systems in healthcare, but it does include them under its umbrella if they meet certain criteria, for example, the criteria to be classified as high-risk systems. If an AI system qualifies as a medical device, Regulation (EU) 2017/745 (Medical Devices Regulation, MDR) and Regulation (EU) 2017/746 (In Vitro Diagnostic Medical Devices Regulation, IVDR) also apply, which are subject to harmonisation with the AI Act.
In the context of data processing by such systems, data protection regulations are relevant, in particular Regulation (EU) 2016/679 (General Data Protection Regulation, GDPR), Regulation (EU) 2023/2854 (Data Act), Regulation (EU) 2022/868 (Data Governance Act) and the NIS Directives. The Clinical Trials Regulation (Regulation (EU) 536/2014) does not directly address AI, but it remains in effect in relation to clinical research.
The European Health Data Space (EHDS) Regulation (Regulation (EU) 2025/327) also plays an important role, facilitating the secondary use of health data for research and innovation.
From a regulatory perspective, the classification of AI systems in the healthcare sector depends primarily on their intended use and risk level under the MDR and the AI Act. Diagnostic and therapeutic systems, if they have a medical purpose, can be considered medical devices under the MDR. They are then classified into one of four risk classes (I, IIa, IIb, III, where I is the lowest risk) (Article 51(1) of the MDR), and only those deemed to belong to Class IIa or higher, requiring third-party conformity assessment, are considered high-risk AI systems under the AI Act.
The substantive regulatory core governing healthcare AI in the Polish jurisdiction is the AI Act, the bulk of which will apply in Poland from 2 August 2026, although some provisions, including the general ones establishing, for example, the statutory definition of “AI system”, have applied since 2 February 2025, while another portion, such as the classification of general-purpose AI models set out in Chapter V of the AI Act, has applied since 2 August 2025.
MDR
The MDR contains a definition of a medical device that confirms that software can also be a medical device. It also contains rules for the marketing of medical devices and identifies the authorities responsible for overseeing the compliance of medical devices with the MDR.
According to the MDR, a medical device may include software, provided that the manufacturer has intended the software for use with humans for at least one of the specific medical purposes indicated in the MDR.
In Poland, the regulation is supplemented by the Act of 7 April 2022 on medical devices.
The EHDS Regulation aims to establish a common framework for the use and exchange of electronic health data across the EU and sets forth rules for training, testing and evaluation of algorithms, including those used in medical devices.
GDPR
The GDPR regulates data processing, including the circumstances in which data can be used to train AI, and it contains a definition of sensitive data.
Work is currently under way on a draft Polish Act on Artificial Intelligence Systems, which is being developed by the Polish Ministry of Digital Affairs.
The 2025 amendment to the Polish Code of Medical Ethics regulates issues related to the use of AI in medicine, stipulating the need to ensure that the algorithms used are approved for use and certified. It is essential to inform the patient about the possibility of AI being used and to obtain their informed consent.
The Polish Act on Medical Activity of 15 April 2011 defines the legal framework for medical facilities and health service providers, including the principles underlying the use of information technology and AI to diagnose and treat patients.
Soft law, for example, the Health Care Artificial Intelligence Code of Conduct of the National Academy of Medicine, can provide an element of guidance for healthcare professionals and AI systems developers.
In the commercial healthcare market, developers of AI systems in healthcare will primarily need to adhere to the regulatory approval and certification requirements for medical device marketing procedures. These apply when an AI system is introduced into the Polish market with a declared intended use as a medical device.
Because an item of medical software can belong to various classes of medical device depending on its function and the risk it poses to the patient, the developer, as the responsible entity, will be subject to the relevant conformity assessment rules of the MDR, depending on the declared classification of the AI Software as a Medical Device (SaMD) into Class I, IIa, IIb or III. This will be the case when the AI system is intended for use in a medical device as a treatment or standard diagnostic tool.
In the case of in vitro diagnostics, the classification path indicated in the IVDR will apply.
In the context of requirements for evidence of safety and effectiveness, software, including an AI system, must obtain a CE mark before being placed on the market; the manufacturer affixes the mark either on its own or with the involvement of a notified body. Therefore, the speed of the route to market for certain AI applications will depend on whether a notified body must participate in the certification process.
The source of regulation for AI-based software depends on the declared intended use of the device:
In this respect, guidance documents such as those of the International Medical Device Regulators Forum (IMDRF) are important. In 2025, the IMDRF published an important document for AI-based medical devices and SaMD titled ‘Good Machine Learning Practice (GMLP) for Medical Devices’ (N88, 2025).
The two primary sources of regulations for data protection and privacy are the GDPR and the AI Act. They have a particular impact on the development and implementation of AI in healthcare because, within the conceptual framework of these acts, training AI systems in the healthcare system involves processing sensitive patient data, such as:
Importantly, data protection standards do not differentiate according to the source of the data, which may be, for example, a database populated by a doctor or other sources within the healthcare system, hospitals, medical devices or in vitro diagnostic tests.
By introducing a system and principles of accountability, the GDPR significantly impacts the collection, processing, storage and sharing of data for the training and operation of AI. The GDPR is complemented by official European Data Protection Board (EDPB) guidelines, most recently Guidelines 02/2024 on Article 48 of the GDPR, finalised on 4 June 2025, which, in the context of cross-border transfer, indicate additional safeguards for sensitive data and ensure an independent mechanism for redress and oversight.
AI systems in healthcare use techniques that involve training AI models with sensitive data and are considered high-risk under the AI Act. Therefore, the following standards and interoperability requirements apply to them:
The mandatory obligations of a supplier of high-risk AI systems also include the introduction of a quality management system.
Guidance documents also apply in this respect, such as ‘Good Machine Learning Practice (GMLP) for Medical Devices’ (N88, 2025), which recommends, for example, that clinical data be representative and free from bias, that AI models be trained on diverse datasets, and that AI models be monitored for performance degradation, errors and unexpected risks after deployment.
In relation to AI systems used within the medical data space and in the administrative procedures of healthcare services, the GMLP can also inform the interpretation and application of the EHDS Regulation.
In Poland, the regulatory body overseeing these aspects will likely be the planned Commission for the Development and Security of Artificial Intelligence, to be established under the proposed Polish Act on Artificial Intelligence Systems.
Under current Polish law, no dedicated institution has been established to oversee the use of AI in Poland. However, the Polish state intends to adopt legislation that will implement the AI Act by regulating more general issues related to the AI market itself. According to the draft Polish law on AI systems, the AI market oversight body will be an entirely new entity called the Commission for the Development and Security of Artificial Intelligence, which will also address healthcare AI.
A certain safety brake in the context of supervision is introduced by the Polish Code of Medical Ethics, which in Article 12 provides that a physician may use AI algorithms in diagnostic, therapeutic or preventive procedures, but, among other things, on condition of informing the patient that AI will be used in the diagnosis or therapeutic process and ensuring that the final diagnostic and therapeutic decision is always made by the physician.
The President of the Personal Data Protection Office will also be responsible for overseeing the AI market. Their responsibilities will include overseeing high-risk artificial intelligence systems (listed in Annex III to the AI Act), including in healthcare.
AI systems related to healthcare are classified as high-risk AI systems in the AI Act. This classification is important for manufacturers, importers and distributors of, and entities using, AI systems.
This triggers obligations under both the AI Act and the MDR:
The AI Act focuses on key obligations regarding AI systems, including in relation to:
Under the MDR, all medical device manufacturers are required to conduct post-market surveillance. This includes establishing a risk management system and a system for reporting incidents and field safety corrective actions.
AI systems used in healthcare, as high-risk systems, are subject to mandatory post-market surveillance (Article 72(2) of the AI Act). Active collection and analysis of real-world data is required, including – where relevant – the system’s interactions with other AI systems. Additionally, monitoring should ensure ongoing assessment of the system’s compliance with regulations. All activities should be implemented based on a formal post-market monitoring plan.
The AI Act does not explicitly regulate updates and changes to algorithms after their initial approval. However, the previously described principle regarding the post-market monitoring system (Article 72(2)) can be used for this purpose. The purpose of this process is to assess whether the system – including updates – continues to meet the requirements of the AI Act.
Under Article 73, providers of high-risk AI systems placed on the EU market are required to report serious incidents to the market surveillance authorities of the Member State where the incident occurred. In Poland, the authority responsible for oversight will likely be the Commission for the Development and Security of Artificial Intelligence.
Poland does not yet have specific national legislation on AI beyond the directly applicable EU AI Act. In Poland, the main authority responsible for the MDR and IVDR is the President of the Office for Registration of Medicinal Products, Medical Devices and Biocidal Products, who has statutory powers to inspect and classify products, impose penalties and suspend trade. The Chief Pharmaceutical Inspectorate also exercises supervisory authority.
Poland has not yet designated national market surveillance authorities under the AI Act. The authorities designated under these three regulations (the AI Act, MDR and IVDR) will be obligated to co-operate, and co-ordination activities at the EU level will typically be supported by the EU Product Compliance Network.
Under Article 99(1) of the AI Act, financial penalties may be imposed for non-compliance, depending on the type of violation. The penalties must be effective, proportionate and dissuasive. They must take into account the interests of SMEs, including start-ups, and their economic situation. For violations of the prohibitions on specific AI practices, in force since 2 February 2025, penalties apply from 2 August 2025 and amount to up to EUR35 million or 7% of global annual turnover, whichever is higher.
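For illustration only, the cap described above can be expressed as a simple calculation; the function below is a hypothetical sketch, not a legal determination of any actual fine, which the competent authority sets case by case.

```python
def max_fine_prohibited_practices(global_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for prohibited AI practices under Article 99
    of the AI Act: up to EUR 35 million or 7% of global annual turnover,
    whichever is higher (illustrative calculation only)."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# Example: an undertaking with EUR 900 million global annual turnover
print(f"Maximum fine: EUR {max_fine_prohibited_practices(900_000_000):,.0f}")  # EUR 63,000,000
```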
The European Commission withdrew the proposed Artificial Intelligence Liability Directive, which aimed to facilitate compensation claims by introducing a presumption of a causal link in specific situations involving high-risk AI systems. Hence, the national provisions of the Polish Civil Code will be applicable.
Under the proposed EU liability framework for AI, any natural or legal person who influences the operation of AI is liable for any damage caused by it. This framework uses the concept of an operator, which can be either:
Liability for medical errors, traditionally understood, concerns actions or omissions that contradict current medical knowledge and lead to patient harm. Depending on the circumstances, this liability may rest with a doctor, hospital or insurer. However, liability for AI errors is questionable from a legal perspective, as AI lacks legal personality.
The supply chain for AI products includes programmers, device manufacturers, component suppliers, distributors and, ultimately, users.
Liability for damage resulting from the provision of AI services depends on the cause of the damage. If the service provider is at fault, it is liable under general principles; if none of the entities involved in providing the service is at fault, the potential fault of other entities, such as sellers, may be examined.
Liability for a product deemed unsafe is based on strict liability. Liability for AI products used in medicine combines liability for medical errors in the traditional sense with liability for the actions of AI.
Some market actors propose creating special compensation funds or insurance systems that would cover damage caused by AI. Such a model could work similarly to third-party liability insurance for motor vehicles. For example, there could be third-party liability insurance for AI robots.
The issue of liability for AI errors in medicine was discussed in 4.1 Liability Framework, where it was explained that the main problem is attribution of liability for AI’s actions. AI errors in medical practice can affect, for example, diagnoses and test results, which can then result in a physician making inappropriate recommendations given the circumstances.
In such cases, a distinction can be made between liability based on fault, pursuant to Article 415 of the Polish Civil Code, where the act is unlawful, and liability based on the attribution of improper conduct to a specific person. Liability for damage caused by AI should therefore treat AI primarily as a tool in the hands of humans.
Strict liability may apply if the use of a given device can be deemed to involve an increased risk of harm. Outside that regime, the ability to seek compensation for damage caused by AI depends on establishing the culpable conduct of the owner, possessor or other person controlling these technologies and on demonstrating a causal link between their operation and the harm caused.
Equitable liability can also be distinguished; ie, the legal norm imposing the obligation to redress damages refers to the principles of social coexistence.
Traditional risk management addresses known and easily predictable risks, but the case of artificial intelligence is much more sophisticated. Artificial intelligence systems in medicine are classified by the AI Act as high-risk systems. Therefore, they require pre-market compliance assessments before implementation and undergo regular evaluations throughout their lifecycle.
Data quality is assessed, as risk mitigation requires high-quality data feeding the system. Clear information must be provided to users and regulators, human oversight of the AI system’s operation is essential, and systems must be accurate, robust, and safe in operation.
The European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)) even proposed mandatory liability insurance for end-users (operators) or producers for high-risk systems, to the extent that liability would not fall within, or go beyond, product liability regulations. The insurance market is adapting to these requirements, and policies proposed by insurance companies should cover civil liability for damage caused by AI systems, AI system failures, hacker attacks, data leaks, and cyber threats.
Possible limitations of liability or defences in healthcare AI include, but are not limited to:
In the context of medical procedures, it is important to properly manage the patient’s awareness of the potential risks resulting from the use of AI software in diagnostic procedures and in the therapeutic process.
Liability may also be limited by proper organisational supervision of the clinic and of the environment in which medical procedures are performed, and by obtaining the appropriate certificates confirming that the AI algorithms used are approved for medical use.
It is an open question how to determine the priority of responsibility in the multi-entity chain of providing a medical procedure and whether the doctor, as the organiser and main person responsible for a medical procedure using AI, is able to demonstrate that they acted in accordance with the state of the art.
Ethics within the legal framework for the use of AI will be comprehensively addressed this year for the first time. The General-Purpose AI Code of Practice is in the final stages of preparation. Prepared under the auspices of the AI Office and the European Artificial Intelligence Board, it is intended to serve as a compliance tool for providers across EU jurisdictions, although adherence to it is voluntary.
Ethical issues are explicitly addressed in the latest Polish Code of Medical Ethics in the context of patient consent (Article 12). As for other healthcare providers, certain ethical obligations can be inferred from general standards for improving qualifications to the latest technical knowledge. An example is the Code of Ethics for Laboratory Diagnosticians, who, according to Article 10, should strive to obtain reliable test results and interpret them in accordance with current scientific knowledge and technical standards.
The ethical standards that must be observed in accordance with the latest Polish Code of Medical Ethics include informing the patient about the use of AI algorithms, obtaining informed consent from the patient to the use of AI, using only such AI algorithms that are acceptable for medical use and, above all, always having the final decision made by a human.
Transparency and explainability are required for high-risk systems, such as those used in healthcare. Transparency is particularly important for combating the so-called ‘black box’ problem. To minimise risk, these requirements must be met before the system is commercially deployed.
The information given should include the characteristics, capabilities and limitations of the AI system’s effectiveness. However, the concept of transparency in AI research is fragmented and often limited to the transparency of the algorithm itself.
Many industry observers propose that AI transparency operates at three levels: algorithmic, interactional and social.
Patients must be informed about the use of AI in their case, if its use concerns their treatment or diagnosis, and about the potential consequences. Mere efficiencies gained through AI (eg, directing patients to their appointed doctors in waiting rooms or using AI to better navigate supplies in pharmacies) will not require such transparency.
Article 13(2)(f) of the GDPR also mandates the right to information about data processing, and the method and purpose of any profiling, as specified in Article 22. The GDPR primarily focuses on requirements for patient information and explanations regarding decisions made by AI. A patient may not be subject to a decision made solely by AI unless they expressly consent thereto.
There are four main forms of algorithmic bias:
The preamble to the AI Act mentions in Recital 75 that what should characterise high-risk AI systems (including those used in healthcare) is their technical robustness, which should also include providing “technical solutions to prevent or limit harmful or other undesirable behaviour”. An example is the existence of tools allowing the AI to be interrupted (the system enters a “fail-safe” state) in the event of errors or when predefined boundaries are exceeded.
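As an illustration of the fail-safe idea described in Recital 75, the sketch below shows a hypothetical wrapper around a diagnostic model that withholds the automated output and defers to a clinician when the input falls outside predefined boundaries or the model’s confidence is too low; the `model` object, field names and thresholds are assumptions, not requirements taken from the AI Act.

```python
"""Illustrative sketch of a 'fail-safe' wrapper around a diagnostic model.

If the input falls outside predefined boundaries or the model's confidence
is too low, the wrapper withholds the automated output and defers to a
clinician. All names, thresholds and the `model` object are hypothetical.
"""
from dataclasses import dataclass

@dataclass
class SafeResult:
    prediction: str | None
    deferred: bool
    reason: str = ""

CONFIDENCE_FLOOR = 0.90          # illustrative boundary
AGE_RANGE = (18, 90)             # illustrative boundary of validated use

def fail_safe_predict(model, features: dict) -> SafeResult:
    age = features.get("age")
    if age is None or not (AGE_RANGE[0] <= age <= AGE_RANGE[1]):
        return SafeResult(None, True, "input outside validated boundaries")
    label, confidence = model.predict(features)   # hypothetical model API
    if confidence < CONFIDENCE_FLOOR:
        return SafeResult(None, True, f"confidence {confidence:.2f} below floor")
    return SafeResult(label, False)
```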
An additional threat that arises is the possibility of third parties providing false data, to which high-risk AI systems should be resistant pursuant to Article 15(5) of the AI Act.
For testing high-risk AI systems, EU law introduces requirements regarding the real-world conditions under which a supplier or potential supplier should conduct such testing. The testing plan specified in Article 60 of the AI Act requires special attention to potential discrimination and good representativeness of data subjects at particular risk due to age or disability, which is a particularly important provision for high-risk AI systems related to health.
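Purely by way of illustration, a testing plan of this kind could include a simple representativeness check such as the hypothetical sketch below, which compares the share of groups at particular risk (here, age bands) in the test cohort against reference shares; the reference figures and tolerance are placeholders, not values taken from the AI Act.

```python
"""Illustrative check, for a real-world testing plan, of whether groups at
particular risk (eg, by age band) are adequately represented in the test
cohort. Reference shares and tolerance are hypothetical placeholders."""
from collections import Counter

reference_shares = {"65+": 0.30, "18-64": 0.60, "<18": 0.10}   # expected population mix
TOLERANCE = 0.05                                                # acceptable deviation

def underrepresented_groups(cohort_age_bands: list[str]) -> list[str]:
    counts = Counter(cohort_age_bands)
    total = len(cohort_age_bands)
    flagged = []
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if observed < expected - TOLERANCE:
            flagged.append(f"{group}: {observed:.0%} observed vs {expected:.0%} expected")
    return flagged

cohort = ["18-64"] * 70 + ["65+"] * 20 + ["<18"] * 10
print(underrepresented_groups(cohort))   # flags the 65+ group as underrepresented
```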
Recital 73 of the AI Act mandates that high-risk AI systems be created with appropriate oversight by individuals, and that such systems have specific mechanisms for providing guidance and information to those responsible for oversight to avoid potential errors.
The AI Act also establishes particularly enhanced oversight of AI systems that process unique biometric data, which is crucial in the healthcare sector. The entity using such a system should not be able to act on the basis of an identification made by the system until it has been verified by at least two individuals.
Human oversight is addressed in more detail in Article 14(2) of the AI Act, which lists among its purposes preventing or minimising risks to health, safety or fundamental rights.
Under the GDPR and the AI Act, data governance frameworks ensure that AI systems are trained on high-quality, unbiased data, leading to more accurate diagnoses and treatment plans.
In the health domain, the EHDS will facilitate non-discriminatory access to health data and the training of AI algorithms on these datasets in a secure, timely, transparent, reliable and privacy-friendly manner, with appropriate institutional governance.
The status of AI systems intended for use in the field of healthcare as high-risk AI systems is also uncontroversial in the context of Article 6 and Annex III of the AI Act.
As part of the EDPB’s activities, an Opinion was published in December 2024 on the technical aspects of proper data training, including in the health sector, which recommends training AI models on data collected directly from data subjects.
In practical terms, it is important to consider the coexistence of horizontal LLMs and vertical LLMs, the latter of which can be used to train horizontal LLMs and to create multi-agent models. It is at this point, when the model is fed with data, that transparency and reliability become essential.
From the perspective of the GDPR, the secondary use of medical data for AI training and development purposes is considered secondary data processing. The definition of processing in Article 4(2) of the GDPR is so broad that “processing” includes activities such as data sharing and anonymisation procedures carried out in order to process the anonymised data, including by outsourcing it.
The GDPR permits secondary use of data, but under certain conditions that must be met for it to be lawful.
For example, training an AI model for a diagnostic device that reads lung cancer X-rays requires feeding the model data, which should be at least partially anonymised. It is not essential for the AI system to know that the training data, ie, the tumour image and the patient’s actual condition, comes from a specific named person, but it is crucial for the system to properly read and process that the image comes from a man of a specific age, race and location, who has other additional conditions and addictions, and a family history of illness. This makes the requirement for pseudonymisation a very complex problem for training diagnostic devices in medicine.
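A minimal sketch of what such pseudonymisation might look like in practice is shown below: direct identifiers are replaced with a keyed pseudonym held separately by the controller, while the clinically relevant attributes the model actually needs are retained. All field names and the key-handling arrangement are hypothetical.

```python
"""Illustrative sketch of pseudonymising a training record (GDPR Article 4(5)).

Direct identifiers are replaced with a keyed pseudonym; the key is held
separately from the training set, while clinically relevant attributes
(age, sex, comorbidities, family history) are retained for the model.
Field names and the secret key are hypothetical.
"""
import hashlib
import hmac

PSEUDONYMISATION_KEY = b"kept-separately-by-the-controller"  # never stored with the dataset

def pseudonymise(record: dict) -> dict:
    pseudonym = hmac.new(PSEUDONYMISATION_KEY,
                         record["national_id"].encode(),
                         hashlib.sha256).hexdigest()[:16]
    return {
        "pseudonym": pseudonym,                # re-identification requires the separate key
        "age": record["age"],
        "sex": record["sex"],
        "smoker": record["smoker"],
        "family_history": record["family_history"],
        "image_path": record["image_path"],    # X-ray used for training
    }

raw = {"national_id": "85010112345", "name": "Jan Kowalski", "age": 67,
       "sex": "M", "smoker": True, "family_history": ["lung cancer"],
       "image_path": "scans/0001.png"}
print(pseudonymise(raw))   # the name and national ID do not enter the training set
```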
Under the GDPR, patient consent is required for secondary processing of personal data if the purpose of processing has changed, with the exception of research purposes. While the research exception is important from the perspective of technological progress, whether it applies depends on the individual case.
The EHDS Regulation requires the following measures to protect data when it is reused:
The legal landscape of data sharing, including medical data, is complex, encompassing the GDPR, the AI Act, the EHDS Regulation and the Data Act. In respect of the Data Act, the EDPB recently issued Statement 4/2025 (dated 14 July 2025) on the draft non-binding model contractual terms (MCTs) for data sharing under the Data Act, in which it highlighted areas needing clarification and improvement in the MCTs, particularly concerning user definitions, data distinctions and overall structure.
The EHDS Regulation provides, among other things, for the creation of digital infrastructures and organisational units that will facilitate access to and processing of data for both primary and secondary use, including cross-border. Chapter II of the EHDS Regulation lays out comprehensive provisions regarding patients’ rights related to the primary use of electronic health data, focusing on access, portability and control. The framework strongly emphasises security and privacy, ensuring that all data sharing takes place within secure processing environments.
Alongside encryption, pseudonymisation is one of the main technical and organisational measures for data processing under the GDPR. It is a general requirement under the GDPR that data be processed in such a way that it can no longer be attributed to a specific data subject without the use of additional information (so-called “pseudonymisation”, as defined in Article 4(5)). This standard, which hinders the development of AI in healthcare, depends on the degree of data identification and data quality, both of which are necessary for training the language models required for the proper functioning of AI systems in individual healthcare procedures.
EU law is very restrictive regarding anonymisation, setting very high standards: data are considered anonymised only if they can no longer be linked to an identified or identifiable natural person.
This issue is compounded by the requirement to permanently record all AI actions, arising from Article 26 of the AI Act. The anonymisation obligation therefore clearly affects the quality of the training data for a model running, for example, in a medical device when patient data are compared against a trained model, eg, a diagnostic model.
Data anonymisation law has been supplemented for many years with official interpretative documents. The Polish Ministry of Digital Affairs was among the first to issue such a document in the form of an annex in 2020 specifying data depersonalisation techniques. Guidelines No 4/2019 on Article 25 of the GDPR, issued by the EDPB, may also provide useful guidance.
From the perspective of an AI system as a component of a medical device, the definition of a medical device under the MDR and IVDR has a very broad scope which, in practice, in addition to such state-of-the-art devices, can also include simple elements used in medical procedures of varying degrees of complexity. Intellectual property protection is therefore a multi-layered issue, not confined to industrial property law (including patent protection), because software, even as a trained model, is subject to copyright.
Computer programs as such are not considered inventions. Yet the exclusion of a computer program from patentability applies only to the extent that the European patent application or European patent concerns the computer program as such. A computer program on its own is not technical in nature and therefore does not merit patenting. The so-called technical test can therefore be applied: patentability arises where the invention manifests itself in the use of the program.
This is so-called protection based on the principle of producing a further technical effect and applies to computer-implemented inventions. Therefore, each case should be approached individually, an example being the European patent for an innovative method of predicting the growth of microorganisms using AI.
Applications of healthcare AI systems rely on software and trained computational operations performed on data. Therefore, in addition to the traditional understanding of an invention as patentable subject matter, AI systems are protected as software, for which Polish copyright law contains special provisions in a dedicated chapter on the legal status of software. These provisions apply the same legal principles to software as to artistic works, so the legal structures designed to protect authors of artistic works also apply to software code. The problem is that machine learning and AI are not a closed system of code (AI is not a closed, complete binary system). Legislators and lawyers therefore assume that a dedicated IP protection law for AI will be created within a few years.
Protection can be granted to both the computer program itself and its source code and algorithms, provided they do not constitute merely an idea and are not excluded from protection based on exceptions.
The definition of a trade secret is provided in the Polish Act on Combating Unfair Competition and is understood to include technical, technological and organisational information about a company, or other information of economic value. Potentially, the outputs generated by an AI system could also be covered by trade secrets law, thus protecting something that otherwise could be copied or distributed as it is not covered by copyright under a separate regime.
The results of AI systems based on software and computational operations on data using algorithms to process them are assessed first from the level of copyright protection, and only then from the level of invention and patent protection as computer-implemented inventions in class C2 according to the European Patent Office.
Only a human can be considered a creator under copyright law. This means that the outputs generated by an AI system based on collected data, training on that data, and queries entered into the system are likely not subject to copyright protection because they are not the result of creative, intellectual human work, even though human work occurred in earlier stages and may even have been predominant.
A trained AI system used for a specific medical procedure, such as cancer diagnosis, is a medical device whose outputs form part of the execution of that procedure. The status of such outputs is therefore comparable to that of operational data belonging to a medical institution and covered by patient data protection regulations. It clearly differs, as data produced by the practical operation of a device within a medical institution and used for specific medical procedures, from the status of the AI system itself when it is made available for medical purposes. This creates the need for IP clauses in contracts establishing a mutual obligation relationship between the entity using the high-risk AI system and the entity marketing or commissioning such an AI system, for example, within the framework of a specific medical device and end-user licence.
Among the many licensing and commissioning models within the meaning of the AI Act – for example, in connection with the introduction of a specific medical device to the market – the focus should be on Software as a Service (SaaS), on-premise and open-source licence models. It can be concluded that, in practice, medical facilities are more likely to operate under licences based on on-premise software models than SaaS, and other healthcare entities may be more inclined to use cloud solutions.
AI-supported clinical decisions include the obligation to provide the system with patient health data; therefore, the EHDS Regulation applies. Article 3 stipulates that individuals have the right to access, at a minimum, their electronic personal health data processed through electronic data access services.
Users of health data must also use data based on and in accordance with appropriate authorisation and co-operate with authorities regarding access to health data (Article 61). The GDPR also applies. Articles 35 and 36 stipulate that if a type of data processing is likely to pose a risk to the rights and freedoms of natural persons, an impact assessment must be carried out. If such an assessment indicates a high risk to these rights, consultation with the relevant supervisory authority is required.
AI systems are subject to the following requirements:
Article 12 of the Polish Code of Medical Ethics addresses the use of AI systems in treatment. A physician may use such models in therapeutic, diagnostic or preventive procedures if four conditions are met:
Apart from the issues of using a product approved for marketing in Poland and of patient awareness, it should be noted that a medical device will be classified separately depending on whether it is used for conventional diagnosis or for in vitro diagnostic procedures, with the latter governed by the IVDR.
From a regulatory perspective, marketing such a device before notification first requires meeting the standard operational regulatory requirements applicable to the entity introducing the device. For the AI algorithm, confirmation of compliance with the relevant safety and performance requirements is crucial, and this assessment must cover the normal conditions of the device’s intended use as well as the assessment of adverse events.
The most challenging task for AI systems in healthcare is presenting, at the regulatory level, an acceptable relationship between the benefit of the medical procedure and the risk to the patient, including the acceptability of adverse events, within the meaning of Article 61 of the MDR. This is achieved through the complex procedure and documentation of the clinical evaluation of the device. In the case of such devices, the marketing authorisation holder should co-operate with a notified body, which prepares a clinical follow-up plan.
From a technological level, the entire treatment process (treatment planning or therapeutic decision-making) can be supported by AI to make the process more effective. However, AI systems as tools – for example, those that recommend dosages, treatment protocols or surgical approaches or that are used in treatment planning or therapeutic decision-making – pose a risk of non-compliance with privacy standards for sensitive data (not just that of patients) or even liability for damages regarding fundamental personal rights such as patient health.
Article 2(1) of the MDR, defining a medical device, specifies that software can be a medical device, provided it meets the other requirements set out there. It should be borne in mind when interpreting this provision that the therapeutic process encompasses activities and interactions, diagnoses and treatment phases, ie, therapies and progress evaluations. Therefore, it is not a process that takes place exclusively in a hospital or only at the stage following diagnosis and initial examination. A tool that aids in the interpretation of symptoms and diagnosis can serve as a medical application. This means that AI systems, as a tool or part of a tool used in a medical procedure, are subject to Article 5 and Article 10 of the MDR. Article 5 refers to Annex I (issues such as labelling). Article 10 introduces the requirement to monitor product quality. Annex VIII of the MDR, however, is a legal solution to a key issue for AI. It classifies diagnostic systems based on the potential consequences of their use. Essentially, a diagnostic software system is classified as Class IIa. If its use could result in death or irreversible damage, it is classified as Class III. In the event of significant deterioration or the need for surgical intervention, the device will be classified as Class IIb. If the software is not used for diagnostics, it belongs to Class I.
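The classification logic described above can be illustrated with a short, simplified sketch; it merely encodes this guide’s summary of Annex VIII and the earlier point that Class IIa and above triggers high-risk status under the AI Act, and it is not a substitute for a conformity assessment.

```python
"""Illustrative encoding of the software classification logic described above
(MDR Annex VIII as summarised in this guide) and its AI Act consequence.
Not a conformity-assessment tool; categories and names are simplified."""
from enum import Enum

class MdrClass(Enum):
    I = "I"
    IIa = "IIa"
    IIb = "IIb"
    III = "III"

def classify_software(used_for_diagnosis: bool,
                      may_cause_death_or_irreversible_harm: bool,
                      may_cause_serious_deterioration_or_surgery: bool) -> MdrClass:
    if not used_for_diagnosis:
        return MdrClass.I
    if may_cause_death_or_irreversible_harm:
        return MdrClass.III
    if may_cause_serious_deterioration_or_surgery:
        return MdrClass.IIb
    return MdrClass.IIa

def is_high_risk_under_ai_act(mdr_class: MdrClass) -> bool:
    # Class IIa and above requires third-party conformity assessment,
    # which makes the AI system high-risk under the AI Act (see above).
    return mdr_class in (MdrClass.IIa, MdrClass.IIb, MdrClass.III)

cls = classify_software(True, False, True)
print(cls.value, is_high_risk_under_ai_act(cls))   # "IIb True"
```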
Remote patient monitoring and telemedicine encompass the entire therapeutic process, including activities and interactions, diagnosis and the treatment phase, ie, therapy and progress evaluation. Therefore, since platform-based and software-based AI systems encompass all patient-physician relationships, access to medication and administrative supervision, it is important that, from a healthcare regulatory perspective, the legal term “therapeutic process” does not refer to a process that occurs solely in a hospital or clinical setting. Any remote action using a tool, even one that merely supports the interpretation of initial symptoms or diagnosis, conducted remotely or automatically, will also be subject to all healthcare regulatory requirements, in addition to software sector regulations such as the AI Act.
Since the status of a medical procedure is not related to the actual need for a hospital stay, all regulations regarding the legality of medical procedures and medical professional liability will apply to AI systems operating in a clinic or non-clinical setting, including very detailed and restrictive regulations regarding cybersecurity, the operation of IT platforms, and data processing.
From the perspective of the fundamental GDPR framework, any medical procedure conducted by a healthcare AI device constitutes data processing. For example, the general requirement that a human ultimately makes the final decision in the decision-making chain remains in force; hence, electronic platforms employ AI-based systems only for initial diagnostic procedures. Patient interviewing via chatbot does not eliminate the obligation of a registered clinician to connect in person (in practice, by phone) and act as if the medical service were taking place in the clinic.
Besides the consultation procedure and the use of the communication platform, additional issues include cybersecurity regulations and the regulatory framework of the EHDS, in particular connecting to the IT system for online prescription issuance. The processing of sensitive data in the cloud under the GDPR must also be considered.
The AI Act does not explicitly mention remote patient monitoring and telemedicine. It only includes a reference in the Preamble to AI systems intended for emergency reporting (Recital 58 of the AI Act). Annex III of the AI Act classifies such situations as high-risk AI systems. According to Article 6 of the AI Act, medical devices of Class IIa or higher (under the MDR) that use AI for the diagnosis, monitoring, treatment or mitigation of diseases are also considered high-risk systems. Simple bots will typically be considered low-risk AI systems, while supporting solutions will be considered high-risk AI systems.
It is important to focus on the most crucial aspect of the drug discovery process, namely its final phase: clinical trials. Polish regulations directly address the analysis of drug testing results in humans based on a positive decision to conduct a clinical trial. This concerns the functionality of the AI algorithm for analysing data from the first patients of the drug being tested, ie, clinical trial participants.
When using machine learning-based AI, as a subset of AI technology, in the context of drug evaluation, development or monitoring, the European Medicines Agency (EMA) recommends obtaining prior regulatory support, for example, through scientific advice or qualification of the innovative development method. These recommendations are presented, among others, in ‘Review of AI/ML applications in medicines lifecycle’, 9 July 2025 (EMA/571739/2024).
In Poland, the Act on Clinical Trials of Medicinal Products for Human Use is in force, implementing Regulation (EU) 536/2014. Although it does not directly address AI, its provisions apply to AI-based tools used in clinical trials. According to the regulations, the ethical review of the trial is performed by the Supreme Bioethics Committee (Chapter 5).
Furthermore, the use of Good Clinical Practice is recommended. In Poland, these principles are recommended not only for clinical trials of drugs but also for other medical experiments, particularly those involving human participants. These standards expand upon the provisions of the Medical Profession Act regarding medical experiments.
According to the European Parliament guidelines, any use of AI in clinical trials requires appropriate ethical oversight.
All medicinal products are subject to Good Manufacturing Practice in accordance with the Polish Pharmaceutical Law. According to the Polish Pharmaceutical Law, an application for a marketing authorisation for a medicinal product must include a description of the manufacturing of the medicinal product and a description of the control methods used in the manufacturing process, and for this purpose the regulatory authority will examine issues such as, among others:
and the method of applying AI in these processes.
In the EU, regulation of AI in drug discovery and development is based on co-operation between national regulatory authorities, the EMA and the European Commission. According to Recital 25 of the AI Act, AI systems used solely for scientific research and development purposes are exempt from the regulation (Article 2(6) of the AI Act), which aims to protect scientific freedom and foster innovation. Article 2(8) of the AI Act exempts pre-market AI research and development from regulation, except for tests in real-world conditions.
Legislative work is currently under way in Poland on the draft Act on Artificial Intelligence Systems, developed by the Polish Ministry of Digital Affairs. The draft aims to implement the provisions of the AI Act, in particular by creating a coherent national system for oversight of AI systems.
The proposal would establish a new institution to oversee and support the AI sector: the Commission for the Development and Security of Artificial Intelligence.
In June 2025, the European Commission received the final version of the General-Purpose AI Code of Practice, which is designed to help industry comply with the AI Act’s rules on general-purpose AI, including healthcare AI systems.
The Polish Act on Artificial Intelligence Systems should be adopted in 2025.
Poland has not yet introduced regulatory sandboxes specifically for the healthcare sector, but they are planned under the Act on Artificial Intelligence Systems. The Commission for the Development and Security of Artificial Intelligence is to take steps to establish them. They will allow manufacturers who qualify in a competition announced by the Chairman of the Commission to test AI systems in a controlled environment.
Poland’s AI development policy until 2030 envisages the creation of three AI sandboxes and a cross-border regulatory sandbox. To date, limited measures have been implemented to test innovative solutions, such as the Urban Tech Hub launched by the Polish Development Fund.
In 2024, the World Health Organization (WHO) presented new guidelines for national governments regarding AI in healthcare. In Poland, these recommendations have been implemented primarily through the Personal Data Protection Act and government bills aimed at implementing the provisions of the AI Act. These will likely include the establishment of the Commission for the Development and Security of Artificial Intelligence as a supervisory authority, with the President of the Personal Data Protection Office exercising oversight over high-risk AI systems. The guidelines also include provisions on the use of LLMs.
The GMLP published by IMDRF establishes ten guiding principles to ensure the safety and effectiveness of AI-based medical devices.
Additionally, SaMD WG/N81 addresses issues related to medical device software.
ISO/IEC 42001 also provides specific tools and methods to aid in, among other things, the implementation of the AI Act. It specifies requirements for data quality and ethical standards, and provides for the planning of development and educational activities for personnel. In Poland, these principles are also reflected in Article 12 of the Code of Medical Ethics.
Already in vertical LLMs, the creators, within the meaning of the AI Act, focus on fact-checking when training networks of AI models. This is especially true for horizontal models, ie, multi-agent models of AI systems, which hold promise for breakthroughs in medicine and science (superintelligence). In practice, the problem is that such AI networks are trained on so-called natural language, and the language models that search and process online resources treat these resources as a representation of reality.
Another challenge is the protection of personal data and privacy. The EHDS is introducing electronic health records, which can also be used for research and innovation. The risk of false data and results must also be considered. Large AI models process vast amounts of information, the reliability of which can be difficult to verify, which can lead to erroneous analyses and clinical decisions. There is a risk of manipulation, for example, training models on falsified data on drug effectiveness, which can be used for unfair competition and falsifying clinical trial results. However, AI can also be used to verify facts and detect false information.
To make the AI system reliable and reduce errors, an “adaptive model” is typically introduced, based on the model’s ability to analyse data in real time and make automatic corrections.
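Purely as an illustration of what such an adaptive mechanism might involve, the sketch below shows a hypothetical decision threshold that is recalibrated from clinician-confirmed outcomes, with every automatic correction logged so it can be reviewed as part of post-market monitoring; the calibration method and data are placeholders rather than a prescribed approach.

```python
"""Illustrative sketch of an 'adaptive model' loop: the deployed system
recalibrates its decision threshold from recent, clinician-confirmed
outcomes and logs every automatic correction so it can be reviewed in
post-market monitoring. All data, names and thresholds are hypothetical."""
from statistics import mean

class AdaptiveThreshold:
    def __init__(self, initial: float = 0.5):
        self.threshold = initial
        self.audit_log: list[str] = []

    def correct(self, recent_scores: list[float], recent_labels: list[bool]) -> None:
        # Move the threshold to the midpoint between the average scores of
        # confirmed positives and negatives; a real system would use a
        # validated calibration method and trigger revalidation when needed.
        positives = [s for s, y in zip(recent_scores, recent_labels) if y]
        negatives = [s for s, y in zip(recent_scores, recent_labels) if not y]
        if positives and negatives:
            new_threshold = (mean(positives) + mean(negatives)) / 2
            self.audit_log.append(f"threshold {self.threshold:.3f} -> {new_threshold:.3f}")
            self.threshold = new_threshold

model = AdaptiveThreshold()
model.correct([0.92, 0.81, 0.34, 0.22], [True, True, False, False])
print(model.threshold, model.audit_log)
```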
AI system developers and users are required to implement risk mitigation mechanisms, GDPR compliance, and often a post-market monitoring plan. Systems with medical applications must be appropriately risk-classified, which impacts regulatory obligations, and developers and users must continuously monitor implemented systems.
Establishing a risk management system proportionate to the risk is crucial. The system should be regularly reviewed, updated and properly documented to maximise the protection of fundamental rights and minimise health risks. High-quality training, validation and testing data is essential, especially when personal data is involved (Article 10 of the AI Act). A quality management mechanism is also essential. The required documentation must be kept available to national authorities for ten years (Article 18 of the AI Act).
In the context of AI, it is particularly important to precisely define the scope of the parties’ liability for system errors or defects. Contracts should specify who is responsible for the input data and whether the AI is intended to be supportive or decision-making.
Therefore, it is important to specify the scope of licences and intellectual property rights. A system user can become the creator of industrial property, the rights to which can be transferred to a contractor. Article 28 of the GDPR requires a contract or other legal instrument whenever the provider comes into contact with a patient’s personal data in any way; such a contract should also specify whether the provider may use the data for further model training.
It is worth updating contracts with employees using AI to include clauses on training and liability.
Clauses regarding damage coverage and conducting audits and inspections may be important.
A set of model contract clauses (“MCC-AI-High-Risk”) for public procurement is regularly updated. It includes, among other things, the necessary conditions and obligations in the context of high-risk AI systems, such as healthcare AI systems.
In the absence of a specific act addressing the issue of liability for AI (the EU has withdrawn the proposed AI Liability Directive), different views are held on which entity is responsible for damage caused by AI, particularly in the healthcare sector (with multiple players):
Article 25 of Regulation 2024/1689 only defines a high-risk AI provider and obliges persons providing such a system to provide information in order to use the system.
In the absence of a specific legal act dedicated to AI-related insurance issues, Directive 2009/138/EC on the taking-up and pursuit of the business of insurance and reinsurance (Solvency II) will apply, with the indication that the AI Act requires a risk assessment.
One of the most popular ways to implement AI is to embed AI-based algorithms within existing systems. Furthermore, AI systems can be widely used wherever information is collected, such as in databases, as they enable rapid information flow and easy access. Another solution could be the creation or designation of a special centre as an additional advisory and support entity. For example, the WHO recently designated the Digital Ethics Centre at Delft University of Technology as a centre for co-operation in the field of AI and healthcare management.
Currently, medical personnel are not required to use AI during their training. However, it is worth noting that, by law, physicians must possess professional qualifications (including a university degree) and specialisations; similar requirements apply to physiotherapists and laboratory diagnosticians. Using AI for educational purposes could be part of the acquisition of knowledge. Pursuant to Polish law, physicians undertake postgraduate internships to improve their theoretical knowledge and practical skills, and this could also be an important element.
AI content providers in medicine must meet numerous requirements imposed by EU-wide legislation (the AI Act, GDPR, MDR and IVDR). Additionally, national regulations may shape liability rules.
The following strategies are used to address this issue:
Combinations of strategies are possible to achieve this goal. The most important thing is to properly characterise the product.
Warsaw office: Twarda 18 Street,
00-105 Warszawa,
Poland
Krakow office: Sw. Anny 9,
31-008 Kraków,
Poland
+48 50 164 85 00
+48 12 426 47 43
office@kg-legal.pl
www.kg-legal.eu

Legal Tendencies
The current European legal framework for AI systems in healthcare has emerged from a tendency towards the creation of both comprehensive sectoral regulations, as exemplified by the latest European Health Data Space (EHDS) regulation, and cross-cutting regulations, which comprehensively regulate the introduction to the market, supervision and responsibilities of AI system manufacturers across all industry sectors, as exemplified by Regulation (EU) 2024/1689 (AI Act). As a result, AI healthcare systems in Europe operate in a multi-layered legal environment, and this trend is set to continue.
Training Algorithms in Healthcare AI
Working with AI algorithms begins with collecting and preparing the appropriate data to be used to train models. AI components are then used to analyse this data, recognise patterns and make decisions based on the information collected. Because of this, the transparency, representativeness and adequacy of the data used to train language models in healthcare AI systems become crucial.
Already in vertical large language models, the creators, within the meaning of the AI Act, are focusing on fact-checking in those language models. This is especially true for horizontal models, ie, multi-agent models of AI systems, which hold promise for breakthroughs in medicine and science (superintelligence).
In practice, the problem is that these AI networks are trained on so-called natural language, and the language models that search and process online resources treat these resources as a representation of reality. Unfortunately, this information is sometimes false, yet the language models treat it as true. Infecting a language model with erroneous sources is therefore also one of the major challenges in the healthcare sector, where AI systems provide informational and scientific functions in medicine.
For example, a language model can be "fed" false or falsified information about adverse drug effects, fabricated scientific papers, non-existent medical disciplines, or invented cases of medical errors involving a medical device or clinic. Feeding a model false information (for example, that a given drug works and has superior effects, or that it is the subject of an allegedly large number of medical malpractice lawsuits) can therefore pose a very significant challenge to the application of AI systems in the medical sector.
AI systems are also exposed to large-scale services for manipulating internet content, which defeat the fact-checking algorithms that verify information used for training and smuggle positive information about a product, or negative information about a service or institution, into the trained models. From the perspective of the fundamental functionality of AI based on a language model, special regulatory oversight is therefore necessary for the authorisation of AI systems, based on fair competition standards, data processing in the clinical evaluation process, and regulation of the introduction of tools and drugs to the market. The challenge is to create tools that allow the healthcare sector to identify itself online and to ensure that the data sources used for training come from real persons, given that a large share of social media content (reportedly around 60%) is created by bots for specific purposes.
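A minimal sketch of this kind of provenance filtering is given below; the source registry, trust labels, bot-likelihood score and threshold are purely hypothetical and are intended only to illustrate how documents from unverified or bot-generated sources might be excluded from a training corpus.

```python
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    source_domain: str
    author_verified: bool   # eg, a confirmed institutional or personal identity
    bot_likelihood: float   # score from a hypothetical bot-detection step, 0.0-1.0

# Hypothetical allowlist of domains treated as verifiable sources.
TRUSTED_DOMAINS = {"ema.europa.eu", "who.int", "pubmed.ncbi.nlm.nih.gov"}

def admit_to_corpus(doc: Document, bot_threshold: float = 0.5) -> bool:
    """Admit a document to the training corpus only if it comes from a trusted
    domain or a verified author, and is unlikely to be bot-generated."""
    trusted = doc.source_domain in TRUSTED_DOMAINS or doc.author_verified
    return trusted and doc.bot_likelihood < bot_threshold

# Example usage with made-up documents.
docs = [
    Document("Study on adverse effects of drug X", "pubmed.ncbi.nlm.nih.gov", True, 0.05),
    Document("Drug X cures everything, no side effects!", "random-forum.example", False, 0.9),
]
corpus = [d for d in docs if admit_to_corpus(d)]
print(len(corpus), "of", len(docs), "documents admitted to the training corpus")
```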
Because of the above, the sentiment analysis algorithms used in language models to strip out emotional overtones are insufficient to achieve the required level of objectivity, which is one reason AI systems in healthcare must be classified as high-risk AI systems. A further challenge legislators face when regulating AI is the issue of liability for errors, especially when AI systems operate fully autonomously.
Practical Challenges of Liability and Safety
Currently, there are two legal approaches to liability for AI: one that focuses on liability for a "defective product" and one that places liability on the user making the final decision. AI transparency is therefore crucial. Another challenge is the protection of personal data and privacy. The EHDS framework introduces a system of electronic medical records, which can also be used for research and innovation. The risk of false data and results must also be considered: large AI models process vast amounts of information whose reliability can be difficult to verify, which can lead to erroneous analyses and clinical decisions.
To ensure an AI system is reliable and makes fewer errors, an "adaptive model" is typically introduced, based on the model's ability to analyse data in real time and make automatic corrections. However, an AI system that updates itself based on new data falls outside the standard clinical evaluation framework of Regulation (EU) 2017/745 (Medical Devices Regulation, MDR), as any change may require revalidation. Moreover, the MDR and Regulation (EU) 2017/746 (In Vitro Diagnostic Medical Devices Regulation, IVDR) do not specify how to classify immersive technologies, and hybrid devices combining AI, robotics and augmented reality go beyond the current framework. Work is therefore under way to amend and adapt the MDR to incorporate rules for hybrid devices, including AI, such that ex-post inspections will be permitted instead of static certification.
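Purely as an illustrative sketch (the monitored feature, baseline statistics and threshold are assumptions of this example, not taken from the MDR or any notified body guidance), an "adaptive" system might compare incoming data against the statistics recorded at the time of the last clinical validation and suspend automatic updates, pending revalidation, when the deviation becomes too large.

```python
import statistics

def needs_revalidation(new_values, baseline_mean, baseline_stdev, z_threshold=2.0):
    """Flag the model for revalidation when the mean of newly observed input
    values drifts too far from the distribution seen at the last validation."""
    new_mean = statistics.mean(new_values)
    z = abs(new_mean - baseline_mean) / baseline_stdev
    return z > z_threshold

# Baseline recorded during the last clinical validation (made-up numbers),
# eg, the mean patient age in the validation dataset.
baseline_mean, baseline_stdev = 54.0, 6.0

incoming_ages = [31, 28, 35, 40, 29, 33, 37, 26]  # a markedly younger population
if needs_revalidation(incoming_ages, baseline_mean, baseline_stdev):
    print("Input distribution has drifted: pause automatic updates and revalidate.")
else:
    print("Input distribution within the validated range: continue monitoring.")
```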
In 2025, the International Medical Device Regulators Forum published two key documents regulating AI and Software as a Medical Device. These documents contain guidelines on safety, effectiveness and risk assessment, introducing, among other things, continuous market surveillance of the operation of such systems. Furthermore, the EU is working on regulations that will define liability standards.
AI-Based Drug Discovery and Development
Currently, under Polish law, the rules on AI-based drug discovery and development must be inferred primarily from the provisions of the Pharmaceutical Law (the Polish Act of 6 September 2001 - Pharmaceutical Law; consolidated text: Journal of Laws of 2025, item 750, as amended) and the Regulation of the Minister of Health of 9 November 2015 on the requirements of Good Manufacturing Practice. Before drug design begins, a therapeutic target must be identified – a specific enzyme, a mutated gene or a key signalling pathway. By analysing large biological datasets, including genomic and transcriptomic information derived from next-generation sequencing, AI helps identify the best therapeutic options. This creates regulatory challenges because, under Polish Pharmaceutical Law, an application for marketing authorisation for a medicinal product must include a description of the medicinal product's manufacturing process and of the control methods used in that process. To this end, the regulatory authority will examine issues such as the analysis of data from multiomics studies, machine learning and molecular dynamics, pharmacokinetic modelling, molecular docking, lead compound identification, and the application of AI in these processes. The regulations must therefore be interpreted in a way that accounts for the production of drugs using AI systems.
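The sketch below is only a schematic illustration of the lead compound identification step described above; the compound names, predicted binding scores and pharmacokinetic cut-offs are invented for the example and do not come from any real screening pipeline or regulatory submission.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    predicted_binding_affinity: float      # eg, from a docking or ML model; lower is better (kcal/mol)
    predicted_oral_bioavailability: float  # hypothetical pharmacokinetic prediction, 0.0-1.0

def shortlist(candidates, affinity_cutoff=-8.0, bioavailability_cutoff=0.3, top_n=3):
    """Keep candidates that meet the affinity and pharmacokinetic cut-offs and
    rank them by predicted binding affinity (strongest binders first)."""
    eligible = [
        c for c in candidates
        if c.predicted_binding_affinity <= affinity_cutoff
        and c.predicted_oral_bioavailability >= bioavailability_cutoff
    ]
    return sorted(eligible, key=lambda c: c.predicted_binding_affinity)[:top_n]

# Example usage with made-up candidates.
candidates = [
    Candidate("CMP-001", -9.2, 0.45),
    Candidate("CMP-002", -7.1, 0.80),
    Candidate("CMP-003", -10.4, 0.10),
    Candidate("CMP-004", -8.6, 0.55),
]
for c in shortlist(candidates):
    print(c.name, c.predicted_binding_affinity, c.predicted_oral_bioavailability)
```

From a regulatory perspective, it is precisely the criteria and models behind such a ranking that an authority may ask to see documented as part of the description of the manufacturing and control process.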
Healthcare AI Legislative Trends
In the field of AI regulation, legislative action is primarily undertaken at the EU level to harmonise regulations regarding emerging technologies. Standardising rules for AI systems requires a special approach, as poorly implemented AI-based solutions can pose significant risks to life, health and safety. In healthcare, it is therefore crucial both to regulate the technical and algorithmic components that directly impact diagnosis or treatment and to ensure a high level of personal data protection, especially for sensitive health data.
At the EU level, regulations already exist governing medical devices (such as the MDR and IVDR), data flow and protection (Regulation (EU) 2016/679 and Regulation (EU) 2023/2854), and clinical trials (Regulation (EU) 536/2014). However, until now, there has been no comprehensive legal act directly addressing the use of AI. The AI Act aims to fill this gap by establishing a single regulatory framework enabling the lawful and safe implementation of AI systems in the EU single market, while maintaining consistency with the existing EU acquis.
Although there is currently no specific regulation dedicated solely to AI applications in healthcare, the legal framework established by the AI Act provides the foundation for future implementations in this area. Poland, as part of its obligations arising from EU membership, is taking steps to adapt its institutional structures and supervisory mechanisms to the requirements of the AI Act to ensure the effective application of the new regulatory regime in the national healthcare system.
Creating Institutions for AI Technologies for Medicine and Supervising AI in Medicine Using Data
Digital health authorities already exist in most Member States and address electronic health records, interoperability, security and standardisation. In carrying out their tasks, digital health authorities should co-operate in particular with the supervisory authorities established under Regulation (EU) 2016/679 and with the supervisory authorities established under Regulation (EU) 910/2014. Digital health authorities may also co-operate with the European Artificial Intelligence Board established by the AI Act, the Medical Device Coordination Group established by the MDR and IVDR, the European Data Innovation Board established in accordance with Regulation (EU) 2022/868, and competent authorities under Regulation (EU) 2023/2854.
Member States should enable the participation of national entities in EU-level co-operation, so they can provide expertise and advice on the design of solutions necessary to achieve the objectives of the EHDS.
Therefore, at both the EU and national levels, Member States, including Poland, must adapt existing institutions or create new ones that will help implement the precepts arising from the above-mentioned legal acts. Article 65 of the AI Act established the European Artificial Intelligence Board, a key advisory body composed of representatives from each Member State, supported by the AI Office. The Board’s primary role is to ensure the effective implementation of the AI Act across the EU by co-ordinating the activities of national authorities, sharing technical and regulatory expertise, and advising on AI policy, innovation and international partnerships. Its goal is to foster a coherent and forward-looking AI policy framework that benefits all EU citizens while maintaining high standards of safety and ethics.
The Medical Device Coordination Group was introduced under the MDR and IVDR, and its purpose is to co-ordinate the implementation of these regulations. Furthermore, a European Data Innovation Board will be established to support the European Commission in implementing existing EU legislation, programmes and policies.
At the national level, the Ministry of Health is supported by the e-Health Centre, which manages the organisation and protection of healthcare and supports the management decisions of the Minister of Health on the basis of the analyses it conducts. The Centre is also responsible for monitoring the planning, construction and operation of IT systems at the central and regional levels.
Additionally, a Polish draft bill on AI systems is currently being developed, which aims to implement the AI Act in the national legal system. In Poland, the authority responsible for market surveillance will most likely be a Commission for the Development and Security of Artificial Intelligence. It is also proposed to designate the Minister of Digital Affairs as the notifying authority – that is, the authority responsible for developing and applying the procedures necessary for assessing, designating and notifying bodies that assess the conformity of AI systems, as well as for monitoring them. Matters related to accreditation will be entrusted to the Polish Centre for Accreditation based on the provisions of the Act of 13 April 2016 on conformity assessment and market surveillance systems.
AI Models in Healthcare
AI is one of the most important trends driving change in the healthcare sector. AI systems are presently used primarily in diagnostics, personalising treatment and organising healthcare. The largest percentage of healthcare providers say they use AI in diagnostics, for example, diagnostic imaging to detect pathological changes. Telemedicine solutions and chatbots are also gaining in popularity. Furthermore, AI systems are being used in surgical robotics, for example, da Vinci robots assist surgeons through advanced AI systems that provide automatic support and feedback. In the context of medical administration, AI supports the automation of processes for healthcare organisations. Data organisation through AI systems also facilitates personalised medicine, where genetic data and patients’ medical histories are subject to computer-aided analysis. AI is also finding its way into home use, for example, in watches, rings and smart endoscopic capsule mechanisms. Although these offer health monitoring functions, they are not considered medical devices in Poland.
In the EU, AI is seen as a tool that supports humans, not replaces them. The goal is to build a world of augmented human intelligence, where humans remain at the helm. The EU’s approach is based on supporting research and industrial development while protecting security and the rights guaranteed by the Charter of Fundamental Rights of the European Union.
However, AI users in the healthcare sector face several barriers that obstruct the use of new technologies. A key issue in regulatory law is the set of rules on data management and data processing; sound data governance facilitates compliance with Regulation (EU) 2016/679 (GDPR – namely personal data, including non-medical data) and Regulation (EU) 2023/2854 (Data Act – namely non-personal data, including medical data). Another prerequisite for the proper application of AI in healthcare is understanding the market and the business: AI is not an end in itself, but a tool for implementing strategies. In Poland, the acceptance of AI systems in the healthcare sector by users, especially physicians, also raises legal questions. AI models will be trained at medical universities, in collaboration with start-ups.
A key obstacle identified by Polish healthcare companies is legislation, particularly the AI Act. It significantly limits model training for business purposes. Obtaining multiple consents is becoming increasingly difficult, especially when they concern data. Recital 69 of the AI Act emphasises respect for privacy and personal data protection throughout the entire life cycle of an AI system, including the training phase. It requires the application of principles such as data minimisation, privacy by design, anonymisation, encryption, etc.
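As a minimal, purely illustrative sketch of the data minimisation and pseudonymisation principles mentioned above (the field names, salt handling and record structure are assumptions of this example, not requirements taken from the AI Act or the GDPR):

```python
import hashlib
import secrets

# A salt that, in practice, would be stored separately from the pseudonymised data
# under appropriate technical and organisational safeguards.
SALT = secrets.token_hex(16)

def pseudonymise(patient_id: str) -> str:
    """Replace a direct identifier with a salted hash so that records can be
    linked without revealing the underlying identity."""
    return hashlib.sha256((SALT + patient_id).encode()).hexdigest()[:16]

def minimise(record: dict, allowed_fields=("age", "diagnosis_code", "lab_result")) -> dict:
    """Keep only the fields actually needed for training (data minimisation)."""
    out = {k: v for k, v in record.items() if k in allowed_fields}
    out["pseudonym"] = pseudonymise(record["patient_id"])
    return out

# Example usage with a made-up record: the name and address never reach the training set.
raw = {"patient_id": "PL-123456", "name": "Jan Kowalski", "age": 57,
       "diagnosis_code": "I25.1", "lab_result": 4.8, "address": "Warszawa"}
print(minimise(raw))
```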
According to Article 10 of the AI Act, training AI models, especially high-risk ones, requires rigorous quality management and data compliance. The AI Act does not seek to hinder pure research and development activities, and therefore excludes AI systems developed and put into use solely for scientific purposes (Recital 25). When an AI model is also developed for business purposes, however, restrictions apply (Article 2(6)). Nevertheless, some companies expect deregulation because, in their expert view, this is the only way for the EU to remain competitive with global technological powers.
Prohibited Practices Regarding the Use of AI in Medicine
The AI Act prohibits AI systems that use techniques influencing people in such a way that they are unable to control their own behaviour and act in ways they otherwise would not. An example of this potential threat would be an AI system that identifies a patient's hesitancy over undertaking expensive treatment and, as a result, presents manipulative images or messages intended to decrease that hesitancy and influence them to change their mind.
Another category of prohibited practices involves influencing the behaviour of individuals who are particularly vulnerable to such influence due to age, disability or difficult financial circumstances. An example would be a manufacturer of AI-based care robots for the elderly installing a function that encourages the purchase of expensive but as yet unproven medications.
Another prohibited AI practice is so-called social scoring, ie, evaluating people based on their characteristics and behaviours. This could lead to unlawful discrimination and result in unjustified restrictions on citizens' rights or worse treatment of individuals in terms of access to healthcare. Using AI to assess medical risk is permissible as long as it does not result in denial of access to healthcare services or violation of patient rights.
A significant prohibition concerns the creation of facial recognition databases without the consent of the data subjects. A situation in which an AI system mass-downloads images of the symptoms and faces of sick people, even if intended to help eliminate a given condition, could violate fundamental rights to privacy and personal data protection.
AI systems are also prohibited from analysing emotions in specific contexts. However, in the healthcare sector, such systems may be permissible as long as their goal is to improve the patient’s health. For their comfort and safety, the patient should be informed about the system’s operation and consent to it, and the system itself should undergo appropriate assessment and verification beforehand.
The EU also prohibits the use of biometric systems to categorise individuals based on characteristics such as race, political views or religion. Automatic patient classification based on unverified data and the generation of false or misleading scenarios by AI systems can undermine trust in the doctor-patient relationship, which in the long run may negatively impact the effectiveness and quality of treatment.
Important Issues for Entities Using AI in Medicine
Entities implementing AI systems that impact the health and lives of patients have a primary obligation to ensure transparency in the operation of these systems. In medicine, transparency should involve informing physicians and patients about the use of AI systems in the treatment process. Furthermore, appropriate technical and organisational measures should be implemented to ensure the systems are fit for purpose.
The AI Act further introduces the requirement to establish an appropriate human oversight system, meaning that entities creating such a system should designate appropriately qualified individuals responsible for such oversight.
The user is also responsible for the appropriateness of input data, ie, for ensuring that the data feeding the system are relevant, current, accurate and fit for the intended purpose. A hospital using an X-ray image analysis algorithm will therefore be responsible for the high quality of the input images and for ensuring that key information is not omitted.
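A minimal sketch, assuming hypothetical metadata fields and thresholds, of how a deployer such as a hospital might check that an X-ray image and its metadata are complete and of sufficient quality before passing them to an analysis algorithm:

```python
REQUIRED_FIELDS = ("patient_pseudonym", "acquisition_date", "body_part", "pixel_spacing_mm")
MIN_RESOLUTION = (1024, 1024)  # hypothetical minimum acceptable image size

def validate_input(image_size, metadata: dict):
    """Return a list of problems; an empty list means the input may be
    submitted to the AI system."""
    problems = []
    if image_size[0] < MIN_RESOLUTION[0] or image_size[1] < MIN_RESOLUTION[1]:
        problems.append(f"resolution {image_size} below minimum {MIN_RESOLUTION}")
    for field in REQUIRED_FIELDS:
        if not metadata.get(field):
            problems.append(f"missing metadata field: {field}")
    return problems

# Example usage: an undersized image with incomplete metadata is rejected.
issues = validate_input((800, 600), {"patient_pseudonym": "a1b2", "body_part": "chest"})
if issues:
    print("Input rejected:", "; ".join(issues))
else:
    print("Input accepted for analysis.")
```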
It is also essential to continuously monitor the operation of the AI system throughout its use, so that if a fault is detected, the system’s creator can be immediately notified. Furthermore, the entity using the system is obligated to retain logs (event records) generated by the AI system for at least six months.
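Purely as an illustration of the six-month log retention obligation described above (the file layout, event fields and naming are assumptions of this sketch, not prescribed by the AI Act):

```python
import json
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=183)  # at least six months, per the obligation described above

def append_log(path: str, event: dict) -> None:
    """Append a timestamped event record generated by the AI system to a log file."""
    event["timestamp"] = datetime.now(timezone.utc).isoformat()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

def prune_logs(path: str) -> None:
    """Remove only those records that are older than the retention period."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    with open(path, encoding="utf-8") as f:
        lines = f.readlines()
    kept = [ln for ln in lines
            if datetime.fromisoformat(json.loads(ln)["timestamp"]) >= cutoff]
    with open(path, "w", encoding="utf-8") as f:
        f.writelines(kept)

# Example usage: record an inference event, then prune anything past the retention period.
append_log("ai_events.log", {"system": "xray-analyser", "event": "inference", "result": "ok"})
prune_logs("ai_events.log")
```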
Further, a requirement to register the system in a special database dedicated to high-risk AI systems is provided (Article 71 of the AI Act). This ensures that untested systems do not enter the market.
Digitalisation in Healthcare – Frequent Investments in AI
AI-based solutions are increasingly being considered for use in healthcare, particularly given staffing shortages. AI can play a significant role in both recording and automating bureaucratic processes and provide real support for doctors at every stage – from diagnosis, through treatment, to recovery. A growing percentage of Poland’s funds allocated to the digitalisation of the healthcare system is being directed towards AI-based projects. Among the innovative solutions streamlining the healthcare system, many focus on supporting diagnostics, health monitoring and preventive measures. An example is the use of AI-assisted chest CT scan analysis, which allows for the identification of lesions and the determination of their size, extent and location.
Other solutions include the use of pseudonymised medical data and AI algorithms to detect individuals potentially at risk of specific diseases – including rare and ultra-rare diseases – and to create lists of patients requiring further diagnosis. AI is also intended to support infertility treatment, including in vitro fertilisation.
Although questions may arise in the future about the impact of automation on physician autonomy and ethical issues, currently AI systems only play an auxiliary role – they support physicians, but do not replace them in making medical decisions.
NIL Innovation Network
An important “space on the map” of Polish healthcare innovations is the NIL Innovation Network (Physician Innovators Network at the Supreme Medical Chamber), an initiative aimed at supporting and promoting innovation in medicine. It also aims to integrate physicians from various fields interested in new technologies, treatments, and approaches to healthcare.
The NIL Innovation Network comprises several groups. The largest is the Working Group on Artificial Intelligence (WGAI). Its goal is to monitor, develop and summarise the implementation of AI-based technologies in the Polish healthcare sector. WGAI operates by collecting information on the performance of such technologies from providers, healthcare recipients and patients who have had experience with such solutions.
The NIL Innovation Network also has groups on innovation in hospitals, outpatient care, medical technology, health and well-being, medical data, and medical workflow and culture. Each of these areas may be expanded to include AI-based innovations in the future.
Warsaw office: Twarda 18 Street,
00-105 Warszawa,
Poland
Krakow office: Sw. Anny 9,
31-008 Kraków,
Poland
+48 50 164 85 00
+48 12 426 47 43
office@kg-legal.pl www.kg-legal.eu