In Austria, the use of AI in healthcare is already well advanced, with the technology employed in a variety of ways to improve patient care and safety. AI is used in the following areas.
Diagnostic Tools
AI is widely used in Austria for medical diagnostics, particularly in radiology, pathology, and dermatology. These systems support physicians in identifying diseases such as cancer, lung infections, and skin conditions through automated image analysis. Hospitals across the country have also integrated these tools into routine workflows, making diagnostics one of the most mature and widely adopted areas of AI in Austrian healthcare.
Treatment Planning and Clinical Decision Support
AI applications are increasingly used to support clinical decision-making and personalised treatment planning. These tools analyse patient data, including genetic and clinical indicators, to identify optimal therapies and predict outcomes. AI is particularly valuable in oncology and chronic disease management, where it helps in tailoring interventions to individual patients. While still in development in some areas, these systems are expanding in research hospitals and are emphasised in Austria’s national eHealth strategy.
Drug Discovery and Development
In Austria’s strong biotech and pharmaceutical sectors, AI is being applied to accelerate drug discovery and clinical research. Tools are used to identify potential drug targets, simulate molecule behaviour, and design more efficient clinical trials. Universities and life science companies frequently integrate AI into preclinical research and development, supported by national and EU-level funding initiatives.
Operational and Administrative Applications
AI is also being deployed in hospital administration to improve efficiency. Applications include automated patient scheduling, triage systems, documentation, billing, and fraud detection. Furthermore, chatbots are used for patient communication and basic inquiries. These tools help reduce administrative workload and streamline hospital operations. While adoption is not yet universal, many public hospitals are piloting or implementing these technologies.
Remote Monitoring and Telemedicine
AI-enabled remote monitoring is gaining momentum, particularly in the management of chronic diseases and elderly care. Wearables and smart devices track patients’ vital signs and behaviour in real time, triggering alerts when needed. Telemedicine services, enhanced by AI triage tools, allow for virtual consultations and follow-up care. These technologies gained prominence during the COVID-19 pandemic and are now a central focus of Austria’s long-term digital health strategy.
Several key benefits drive the use of AI in the Austrian healthcare sector. One of the most notable impacts is the improvement of patient outcomes through early diagnosis and personalised treatment, particularly in fields such as oncology, radiology, and chronic disease management. AI is also expected to improve the results of patient studies, since it can analyse vast amounts of data quickly and accurately. At the same time, AI increases system efficiency by automating administrative tasks (eg, scheduling, documentation and billing) and optimising clinical workflows, resulting in a reduced workload for healthcare professionals and improved resource utilisation. AI also enhances clinical decision-making by integrating vast amounts of medical data to support therapy selection and risk prediction. In the pharmaceutical sector, AI accelerates drug development by expediting processes such as target identification, molecule screening and clinical trial design. Finally, AI is driving the expansion of scalable digital health solutions, eg, telemedicine, remote monitoring and patient self-management tools.
Despite the growing potential of AI and digital technologies in healthcare, several key challenges and concerns remain. A significant challenge is the ongoing lack of data integration: although systems exist, interoperability between different healthcare sectors and digital platforms remains limited, hindering the seamless exchange of information. Another pressing challenge is the disparity in digital literacy and access to technology, as both patients and healthcare providers frequently struggle to use digital tools effectively. The integration of AI disrupts existing workflows, requiring healthcare professionals to adapt to new roles and develop additional skills. Regulatory and ethical considerations also pose significant concerns. As digital technologies become increasingly embedded in clinical settings, clear legal frameworks are necessary to address issues related to data protection, transparency, and accountability. Finally, the absence of suitable reimbursement models constitutes a substantial challenge: many digital health tools have yet to receive formal approval or secure funding pathways, which limits their integration into routine care and slows broader system-wide adoption.
Austria is experiencing steady growth in the application of artificial intelligence in healthcare, with key trends emerging in diagnostics, digital health services, and pharmaceutical research. AI is increasingly used in clinical decision support and diagnostic imaging – particularly in radiology, oncology, and pathology – where it enhances the accuracy and speed of disease detection and supports the shift toward personalised treatment approaches. In parallel, the expansion of AI-enabled telemedicine and remote monitoring is a national priority, supported by Austria’s eHealth Strategy 2024–2030, which emphasises a “digital before ambulant before inpatient” approach to healthcare delivery. AI is also being used in the analysis of real-world health data and predictive modelling for population health management. Additionally, the integration of AI into pharmaceutical research and clinical trial design is advancing, contributing to more efficient drug development and supporting Austria’s growing position as a centre for life science innovation.
The development and adoption of AI in Austrian healthcare are driven by a broad network of stakeholders, including healthcare providers, public institutions, research organisations, and private sector innovators. Hospitals and university medical centres are leading adopters, especially in applying AI for diagnostics, clinical decision-making, and operational optimisation. Public institutions such as the Federal Ministry of Health, the Austrian Research Promotion Agency (Österreichische Forschungsförderungsgesellschaft mbH – “FFG”), and the Austrian Business Service Agency (Austria Wirtschaftsservice Gesellschaft mbH – “aws”) play a crucial role by offering funding, regulatory guidance, and strategic coordination. Research institutions contribute through foundational AI and medical research, often in partnership with clinical facilities. Technology developers and digital health startups, many of which are supported by national innovation programs, provide AI solutions tailored to healthcare use cases. While insurance companies are not yet primary drivers, they are increasingly involved in discussions around reimbursement models and regulatory pathways for digital health tools and AI-supported care.
Austria has also established a strong foundation for collaboration between healthcare institutions and technology developers, particularly in the context of public–private partnerships and coordinated innovation platforms. The healthcare AI ecosystem benefits from national initiatives that connect academic research, clinical practice, and the development of digital technology.
For example, new interdisciplinary institutes focused on AI in biomedicine have been launched in collaboration with academic and public research bodies, reflecting a long-term commitment to integrating AI into personalised and predictive medicine. Coordination platforms, such as LISAvienna, and innovation hubs, like the Future Health Lab, promote active collaboration among hospitals, researchers, startups, and public agencies to co-develop and implement AI-driven healthcare solutions. These efforts are further supported by EU-level networks, such as EIT Health Austria, which facilitates cross-border knowledge exchange and access to innovation funding. Together, these collaborations demonstrate Austria’s strategic focus on building an integrated, innovation-friendly environment for healthcare AI.
In Austria, healthcare AI systems are regulated under the EU Medical Devices Regulation (“MDR”) and the Austrian Medical Devices Act (Medizinproduktegesetz – “MPG”). This is because the MDR (and, correspondingly, the MPG) classifies software as a medical device if it is intended by the manufacturer to be used, alone or in combination, for human beings for one or more of the following purposes: diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of disease; diagnosis, monitoring, treatment, alleviation of, or compensation for, an injury or disability; investigation, replacement or modification of the anatomy or of a physiological or pathological process or state; or providing information by means of in vitro examination of specimens derived from the human body, including organ, blood and tissue donations – provided that the software does not achieve its principal intended action by pharmacological, immunological or metabolic means, in or on the human body.
Therefore, AI systems intended for the above purposes are classified as software as a medical device (“SaMD”). According to Rule 11 of Annex VIII of the MDR, software intended to provide information that is used to make decisions for diagnostic or therapeutic purposes falls into Class IIa, unless those decisions may cause death or an irreversible deterioration of a person’s state of health (in which case it falls into Class III), or a serious deterioration of a person’s state of health or a surgical intervention (in which case it falls into Class IIb). Software intended to monitor physiological processes falls into Class IIa, except when it is intended for monitoring vital physiological parameters, where the nature of variations in those parameters is such that they could result in immediate danger to the patient; in such cases, it falls into Class IIb. All other software is classified as Class I. For illustration, this tiering can be expressed as a simple decision rule, as sketched below.
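Purely as a readability aid, the following Python sketch summarises the Rule 11 tiers described above. It is not taken from any regulatory text, and the function and parameter names are assumptions made for illustration; the actual classification always requires a legal assessment of the device’s intended purpose.

```python
from enum import Enum

class MdrClass(Enum):
    I = "Class I"
    IIA = "Class IIa"
    IIB = "Class IIb"
    III = "Class III"

def classify_rule_11(informs_decisions: bool,
                     worst_consequence: str = "other",
                     monitors_physiology: bool = False,
                     vital_params_immediate_danger: bool = False) -> MdrClass:
    """Simplified rendering of MDR Annex VIII, Rule 11 (illustrative only)."""
    if informs_decisions:
        if worst_consequence == "death_or_irreversible":
            return MdrClass.III   # decisions may cause death or irreversible deterioration
        if worst_consequence == "serious_or_surgery":
            return MdrClass.IIB   # serious deterioration or surgical intervention
        return MdrClass.IIA       # default tier for decision-informing software
    if monitors_physiology:
        # Class IIb where variations in vital parameters could pose immediate danger
        return MdrClass.IIB if vital_params_immediate_danger else MdrClass.IIA
    return MdrClass.I             # all other software

# Example: decision-support software whose erroneous output could lead to surgery
print(classify_rule_11(True, "serious_or_surgery").value)  # Class IIb
```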
With the adoption of the EU Artificial Intelligence Act (“AI Act”), AI systems already classified as medical devices under the MDR are now also considered “high-risk AI systems” under Article 6(1) AI Act and must therefore comply with the provisions of the AI Act.
In Austria, the governance of AI in healthcare is shaped by a combination of national laws and EU regulations. The most important regulatory frameworks are:
EU Regulations (Directly Applicable in Austria)
Austrian Regulations
In Austria, healthcare AI systems used for medical purposes are generally regulated as medical devices under the MDR and the MPG. Most AI systems in use fall into Class IIa or higher, requiring Notified Body involvement for certification. Under the MDR, developers must therefore complete a conformity assessment, prepare technical documentation, and conduct a clinical evaluation demonstrating the AI system’s safety and effectiveness. In addition to the information that manufacturers of standard medical devices must provide in the market approval procedure, developers of AI systems must also provide the following:
In Austria, AI-based software intended for diagnostic, therapeutic, or clinical decision-support purposes is regulated as SaMD under the MDR and the MPG. Accordingly, any software with a medical purpose falls within the scope of these regulations (see in detail 2.1 Regulatory Definition and Classification of Healthcare AI and 2.3 Approval and Certification Processes).
While the MDR assumes a fixed algorithmic structure, continuously learning or adaptive AI systems pose specific regulatory challenges. Current Austrian and EU practice requires that such systems be “locked” at the time of certification. Any future changes to algorithm performance or intended use typically require a new conformity assessment unless covered by a pre-defined update protocol.
The AI Act requires that high-risk AI systems be designed and developed in a way that ensures their operation is sufficiently transparent, enabling deployers to interpret the system’s output and use it appropriately. Accordingly, a quality and risk management system must be established under which any changes to the algorithm are documented. Furthermore, high-risk AI systems must have appropriate human-machine interface tools so that natural persons can effectively oversee them during the period in which those systems are in use.
AI in Austrian healthcare is subject to strict data protection rules under the EU General Data Protection Regulation (“GDPR”) and the Austrian Data Protection Act (Datenschutzgesetz – “DSG”). These laws apply to all stages of data handling in AI development and deployment (ie, data collection, storage, processing, and sharing).
Health data is classified as special category personal data under Article 9 GDPR, and its use requires:
Developers must conduct a Data Protection Impact Assessment (“DPIA”) under Article 35 of the GDPR for any AI system that is likely to pose high risks to individual rights, such as those involving automated decision-making or profiling. Patients must be informed, in clear and accessible terms, whenever AI systems are used in care provision – this is both an ethical and legal requirement under Articles 13 and 14 GDPR.
Data governance issues, such as training data quality, de-identification, and cross-border transfers, are addressed in detail in 6. Data Governance in Healthcare AI of this guide.
In Austria, interoperability and technical standards for healthcare AI systems are guided by a mix of EU regulations, international standards, and national health IT policy frameworks. While there are no AI-specific interoperability laws at the national level, the use of AI in regulated medical contexts must adhere to the technical standards required under the MDR and enforced by the MPG. Mandatory standards include:
The Austrian electronic health record system “ELGA” is based on structured data exchange protocols. The technical specifications are defined in the Ordinance on the Implementation and Further Development of ELGA (ELGA-Verordnung 2015 – “ELGA-VO 2015”).
Regulatory oversight of standards compliance is shared between the Austrian Federal Office for Safety in Health Care (Bundesamt für Sicherheit im Gesundheitswesen – “BASG”), which assesses technical documentation during certification, and institutional IT departments, which manage local integration and data security. The upcoming European Health Data Space Regulation (“EHDS”) will introduce mandatory EU-wide interoperability and transparency requirements for AI in healthcare, further shaping the practice in Austria.
In Austria, regulatory oversight of healthcare AI is shared between multiple authorities, depending on the function of the AI system, as outlined below.
Coordination between these authorities is informal but increasing, particularly on cross-cutting issues like data protection during conformity assessments or data sharing in research.
Under the AI Act, coordination is expected to be formalised further through the designation of national supervisory authorities and an EU-level AI Board.
Before placing a healthcare AI system on the Austrian market, developers must ensure conformity with the MDR and the MPG. If the AI system qualifies as a medical device (eg, diagnostic, therapeutic, or decision-support software), it must undergo a conformity assessment, involving a Notified Body, particularly for Class IIa or higher.
Pre-market requirements include:
While algorithmic transparency and bias testing are not currently mandatory under the MDR and MPG, they are increasingly expected by the regulatory authorities overseeing medical device regulations and are explicitly required under the AI Act. That legislation will also impose obligations for explainability, human oversight, and documentation of training data quality (see also 2.4 Software as a Medical Device (SaMD)).
Once an AI system is on the Austrian market, manufacturers must fulfil post-market surveillance obligations under the MDR and MPG. These include:
Any modifications to the AI algorithm, especially those affecting its intended use or safety, may require re-certification or, at the very least, documented change management under the MDR’s significant change rules. Additionally, the MPG requires healthcare institutions utilising AI to collaborate with the authority in post-market surveillance by reporting adverse events and ensuring clinician training and the traceability of AI-assisted decisions. The AI Act also stipulates structured documentation of significant changes, version control and continuous monitoring, especially for adaptive or continuously learning systems.
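As a minimal sketch of what documented change management and version control might look like in practice, the snippet below appends auditable entries for algorithm changes and flags those that could qualify as significant changes under the MDR. The field names and format are illustrative assumptions; neither the MDR nor the AI Act prescribes a specific structure.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_model_change(changelog: list, model_bytes: bytes,
                        version: str, description: str,
                        significant: bool) -> dict:
    """Append an auditable entry for an algorithm change.

    'significant' mirrors the MDR notion of a significant change,
    which may trigger a new conformity assessment.
    """
    entry = {
        "version": version,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_sha256": hashlib.sha256(model_bytes).hexdigest(),
        "description": description,
        "significant_change": significant,
    }
    changelog.append(entry)
    return entry

changelog: list = []
record_model_change(changelog, b"<model weights>", "1.1.0",
                    "Retrained on new imaging data; intended use unchanged",
                    significant=False)
print(json.dumps(changelog, indent=2))
```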
As of now, Austria has not seen high-profile enforcement actions or recalls involving AI-based medical devices specifically. However, the BASG has broad powers under the MPG to:
The Austrian Data Protection Authority (Datenschutzbehörde – “DSB”), in turn, has the authority under the GDPR to conduct investigations and audits and to impose fines of up to EUR20 million or 4% of the company’s global annual turnover, whichever is higher, for data breaches or unlawful processing, which could arise from improperly configured AI systems.
In practice, Austrian regulators have so far taken a cautious and cooperative approach to digital health enforcement, prioritising guidance over punishment. However, with the AI Act and growing use of AI in patient care, increased scrutiny and formal enforcement actions are expected, particularly regarding transparency, risk controls, and bias.
In Austria, the liability framework for healthcare AI systems is not governed by a standalone AI liability statute; instead, liability follows a combination of traditional tort, medical, and product liability principles, along with specific rules for medical devices and data protection.
Healthcare AI systems that qualify as medical devices under the MPG are subject to strict liability for defective products under the Austrian Product Liability Act (Produkthaftungsgesetz – “PHG”). Manufacturers and developers may be held liable if their AI software causes harm due to design flaws, manufacturing errors, or inadequate user instructions.
Healthcare providers and institutions, on the other hand, remain liable under traditional medical negligence rules as outlined in the Austrian Civil Code (Allgemeines Bürgerliches Gesetzbuch – “ABGB”). If a provider misuses an AI system or fails to exercise proper oversight, they may be held liable, even if the tool itself functions correctly.
Harm caused to patients through the use of AI in medical care is addressed under Austria’s medical malpractice laws, which are grounded in Sections 1299 seq. ABGB and Section 49 of the Austrian Physicians Act (Ärztegesetz – “ÄrzteG”). If a healthcare provider uses an AI system in a way that deviates from accepted medical practice and the patient suffers harm as a result, the provider may be liable for negligence.
Courts assess whether the standard of care – what a competent physician would have done under the same circumstances – was met. This includes evaluating whether the provider:
In AI-related malpractice cases, causation will be a key issue. Austrian courts will examine whether the AI tool materially influenced the clinical decision and whether the provider could have reasonably foreseen its error. Expert testimony is typically used to establish this. However, the “black box” nature of many AI tools – ie, the fact that it is often impossible to trace how the system arrived at its findings – makes causation very difficult to assess.
Healthcare AI systems are subject to extensive risk management obligations under both national law (eg, MPG) and EU law (eg, MDR, GDPR, and AI Act). For AI classified as a medical device, manufacturers must implement a risk management system as part of the conformity assessment procedure required for CE marking. This includes ongoing risk evaluation, mitigation strategies, and post-market surveillance. The MPG enforces these EU obligations in Austria and assigns enforcement authority to the BASG. Healthcare institutions that deploy AI systems also carry risk management responsibilities, including proper documentation, clinician training, and ensuring human oversight.
Additionally, when AI processes personal health data, the GDPR and the DSG require Data Protection Impact Assessments under Article 35 of the GDPR. This applies to most AI deployments in clinical settings due to the high risk to individual rights and freedoms.
Finally, the AI Act requires the establishment and maintenance of a quality and risk management system for all high-risk AI systems (see 2. Legal Framework for Healthcare AI for more information).
Although there is no legal requirement in Austria for specific insurance products for AI, institutions typically maintain professional liability insurance and may increasingly obtain cyber or AI-specific coverage. Such coverage is particularly relevant where AI tools are integrated into clinical decision-making processes.
In Austria, developers and users of healthcare AI systems may rely on several legal defences and liability limitations. For manufacturers, compliance with MDR and MPG, including CE certification and proper documentation, can serve as a regulatory compliance defence. This may help demonstrate that the product met applicable safety standards, potentially mitigating liability under the PHG.
For healthcare providers, one of the main defences is demonstrating adherence to established medical practice, including proper training on AI tools and critical assessment of AI outputs. If the provider used the system as intended and documented their decision-making process, liability may be reduced or avoided.
There are no specific safe harbour provisions in Austrian law regarding AI, but courts generally assess the reasonableness of conduct. For example, if an AI system provides a false recommendation due to internal algorithmic bias or flawed training data, a provider who uncritically relies on the system without verification may still be held responsible.
A unique challenge arises from the “black box” problem, where the internal functioning of AI is non-transparent. In such cases, Austrian courts may apply a burden-shifting approach, requiring the developer or vendor to prove that the system performed correctly. Transparency and explainability will play a growing role in liability assessments going forward.
Austria does not have a single binding ethical code specific to healthcare AI, but developers and institutions operate under a combination of EU-level ethical guidelines, national laws, and institutional codes of conduct. The EU High-Level Expert Group on AI has issued the Ethics Guidelines for Trustworthy AI, which Austria has adopted as a voluntary standard, especially in public research settings. These guidelines emphasise core principles such as human autonomy, prevention of harm, fairness, and explicability.
While not legally binding, these principles influence regulatory practice, and the AI Act will codify many of them into enforceable obligations for high-risk AI systems. In Austria, ethical considerations are also addressed at the institutional level (eg, by university hospitals and ethics committees), particularly when health data is processed or AI tools are deployed in clinical trials or patient care.
Transparency and explainability are essential components of healthcare AI compliance in Austria. Under Articles 12, 13, 14, and 15 of the GDPR, patients have the right to know how their data is used, including when automated decision-making or profiling is involved. Where AI contributes to patient care, healthcare providers must ensure patients are informed – especially if the AI influences diagnosis or treatment decisions.
The MDR further requires that medical devices, including AI software, be accompanied by instructions for use that enable clinicians to understand and properly apply the technology.
With the AI Act, transparency will become mandatory for high-risk AI systems. Developers will be required to provide documentation on system logic, capabilities, limitations, and risks, not only for regulators, but also for end users such as clinicians and, where relevant, patients.
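One widely used way of organising such documentation is a “model card”. The sketch below illustrates the kinds of fields the AI Act’s transparency obligations point to (system logic, capabilities, limitations, risks, human oversight); the entries are hypothetical examples and the format is not an official template.

```python
# Illustrative model card for a hypothetical decision-support tool
model_card = {
    "system": "Illustrative radiology decision-support tool",
    "intended_use": "Assist physicians in flagging suspicious lung lesions",
    "logic_summary": "Convolutional network trained on annotated CT scans",
    "capabilities": ["lesion detection", "size estimation"],
    "limitations": [
        "not validated for paediatric patients",
        "reduced sensitivity on low-dose scans",
    ],
    "risks": ["false negatives may delay diagnosis"],
    "human_oversight": "Outputs must be reviewed and confirmed by a physician",
}

for field, value in model_card.items():
    print(f"{field}: {value}")
```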
In clinical practice, Austrian institutions will have to implement disclosure procedures for AI-assisted diagnostics and decision support, though this is not yet standardised. Nonetheless, regulatory and ethical norms suggest a duty to inform and educate both professionals and patients about the role of AI in care decisions.
Although Austrian law contains no express obligation to test for or mitigate algorithmic bias, the issue is increasingly addressed under the GDPR, the MDR, and the AI Act.
In addition, Austrian ethics boards and research funders are increasingly expecting applicants to address equity, demographic fairness, and the protection of vulnerable groups.
Human oversight is a core principle in Austrian healthcare AI governance, anchored in both regulatory requirements and medical ethics. The AI Act requires all high-risk AI systems, including those in healthcare, to be designed and implemented with meaningful human oversight to prevent automation bias or over-reliance on algorithmic recommendations.
This aligns with the restriction that medical services may only be provided by physicians according to Section 3(4) ÄrzteG (“Arztvorbehalt”). As a result, AI tools cannot act autonomously in clinical care settings. Even highly sophisticated AI systems must serve in a decision-support role, with the final judgment always made by a medical professional.
Under MDR, manufacturers must include human-factor testing and clear instructions for use, ensuring that healthcare professionals can correctly interpret and override AI outputs. The level of oversight depends on the type of system: for example, triage-support AI may require passive monitoring, while diagnostic tools demand active physician validation.
In short, in Austria, AI may assist, but never replace, the healthcare professional.
Training data is central to the performance and reliability of healthcare AI systems. In Austria, the use of health-related datasets is governed primarily by the GDPR and the Austrian DSG. These laws require that personal data, particularly sensitive categories such as health data, be processed lawfully, fairly, and transparently.
For AI systems regulated as medical devices under the MDR, the training dataset forms part of the technical documentation and must be described in terms of representativeness, inclusion and exclusion criteria, and data quality. Developers are expected to use datasets that are complete, clinically relevant, and statistically representative of the patient population in which the system will be used. Particular attention must be paid to avoiding historical biases, which could lead to discriminatory outcomes.
The AI Act additionally requires that:
To mitigate bias, developers are required to:
The AI Act further reinforces this by requiring high-risk AI systems to undergo post-market monitoring to identify and correct emerging biases once they are deployed.
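As an illustration of what such bias monitoring can involve in practice, the sketch below compares a model’s sensitivity across demographic subgroups; a material gap between groups would prompt investigation and possible retraining. The data, group labels, and the idea of a pre-defined gap threshold are illustrative assumptions, not regulatory requirements.

```python
from collections import defaultdict

def sensitivity_by_group(records):
    """records: iterable of (group, true_label, predicted_label); label 1 = disease present."""
    tp = defaultdict(int)  # true positives per group
    fn = defaultdict(int)  # false negatives per group
    for group, truth, pred in records:
        if truth == 1:
            if pred == 1:
                tp[group] += 1
            else:
                fn[group] += 1
    return {g: tp[g] / (tp[g] + fn[g])
            for g in tp.keys() | fn.keys() if tp[g] + fn[g]}

# Illustrative data only
results = sensitivity_by_group([
    ("female", 1, 1), ("female", 1, 0), ("female", 1, 1),
    ("male", 1, 1), ("male", 1, 1), ("male", 1, 1),
])
gap = max(results.values()) - min(results.values())
print(results, "gap:", round(gap, 2))  # flag if gap exceeds a pre-defined threshold
```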
The secondary use of health data – ie, using data originally collected for clinical care for research or AI training – is subject to strict rules under Articles 5, 6, and 9 of the GDPR and the DSG. Such use requires explicit, informed consent from the data subject, particularly for identifiable data. Alternatively, processing may be permitted under the research exemption in Article 9(2)(j) GDPR, provided adequate safeguards are in place.
In Austria, ethics committee approval is generally required for research projects using personal health data. This includes AI development in academic settings or collaborations with healthcare providers. Projects must be registered with local ethics boards (eg, at university hospitals) and often require a Data Protection Impact Assessment due to the high-risk nature of health data processing.
If obtaining consent is not feasible, data must be anonymised or robustly pseudonymised, and its use must comply with both the GDPR and national research law, eg, the Austrian Research Organisation Act (Forschungsorganisationsgesetz – “FOG”).
In Austria, the sharing of personal health data – whether between hospitals, research institutions, or commercial AI developers – is tightly regulated by the GDPR, supplemented by Sections 7 and 8 of the DSG. Any data sharing must rest on a valid legal basis, such as patient consent, public interest, or research, and must adhere to the principles of purpose limitation and data minimisation.
Before sharing data, organisations must conclude Data Processing Agreements (“DPAs”) in accordance with Article 28 GDPR, clearly defining roles (controller v processor), security obligations, and permitted uses. The Austrian DSB provides guidance on the content of DPAs as well as on controller–processor relationships in healthcare.
Cross-border data transfers within the EU/EEA are permitted; however, transfers to third countries or international organisations are only permitted in accordance with Article 44 seqq. GDPR.
For the processing of personal data by AI systems, the GDPR rules apply. According to Recital 26 GDPR, anonymised data does not fall within the scope of data protection law. In contrast, pseudonymised data – ie, data in which identifiers have been replaced but remain reversible – is still considered personal data and remains fully regulated under the GDPR.
In Austria, there is no legal standard for anonymisation and/or pseudonymisation beyond the GDPR. Thus, the applicable standards are derived from Opinion 05/2014 of the Article 29 Data Protection Working Party on anonymisation techniques and from the guidelines of the European Data Protection Board. The sketch below illustrates the distinction.
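To make the distinction concrete: pseudonymisation replaces identifiers with tokens while a separately stored key table preserves the link back to the individual, so the data remain personal data under the GDPR; anonymisation would require irreversibly severing that link. The token scheme and field names below are illustrative assumptions.

```python
import secrets

key_table = {}  # stored separately under access control; its existence keeps the data "personal"

def pseudonymise(record: dict) -> dict:
    token = secrets.token_hex(8)
    key_table[token] = record["patient_id"]   # reversible link retained
    out = {k: v for k, v in record.items() if k != "patient_id"}
    out["token"] = token
    return out

def reidentify(token: str) -> str:
    return key_table[token]                   # possible only while the key table exists

rec = {"patient_id": "AT-12345", "diagnosis": "C34.1", "age": 57}
pseudo = pseudonymise(rec)
print(pseudo)                        # no direct identifier in the working dataset
print(reidentify(pseudo["token"]))  # still reversible => still personal data
# Anonymisation would additionally require deleting key_table and removing or
# aggregating indirect identifiers so re-identification is no longer reasonably possible.
```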
In Austria, healthcare AI innovations can be protected under the Austrian Patent Act (Patentgesetz – “PatentG”), as administered by the Austrian Patent Office (“Patentamt”). To qualify, inventions must meet the criteria of novelty, inventive step, and industrial applicability.
While mathematical methods and abstract algorithms are excluded under Section 1(3)(1) PatentG, AI systems may be patentable if the algorithm is applied in a technical context, such as signal processing for diagnostics or medical device control. This mirrors Article 52 of the European Patent Convention (“EPC”), to which Austria is a contracting state.
However, key challenges in patenting healthcare AI are:
Developers in Austria often file utility model (Gebrauchsmuster) applications under the Utility Model Act (Gebrauchsmustergesetz – “GMG”) for faster protection of AI-based health tech with shorter innovation cycles.
AI software, including source code and documentation, is protected in Austria under the Austrian Copyright Act (Urheberrechtsgesetz – “UrhG”). Software is classified as a literary work pursuant to Section 2(1) UrhG, and protection arises automatically upon creation. Under Section 40a UrhG, this covers various forms of software expression, including source code and machine code, as well as development materials.
However, this must be balanced with obligations under MDR and the AI Act, which require transparency on algorithmic functioning, datasets, and risk assessments. Austrian regulatory practice encourages partial disclosure via technical documentation that complies with legal standards without fully exposing proprietary content.
Under Austrian law, outputs generated by AI – such as diagnostic suggestions, image interpretations, or therapy plans – are generally not considered works protected by copyright unless they are sufficiently original. Therefore, such outputs typically do not create automatic IP rights.
Ownership is primarily determined contractually.
In academic collaborations, IP generated through joint research is often owned by the institution, but spin-offs or licensing agreements may assign usage rights to commercial partners.
In Austria, the licensing and commercialisation of healthcare AI technologies are governed by general contract principles under the ABGB and the Commercial Code (Unternehmensgesetzbuch – “UGB”). While there is no AI-specific licensing law, agreements must be drafted with sufficient clarity and precision to be enforceable.
Common licensing models include software-as-a-service (“SaaS”) arrangements. These licences typically define the scope of use, such as clinical, research, or non-commercial purposes, along with terms covering user access levels, system updates, maintenance, and technical support.
Compliance with applicable regulations is increasingly built into contractual frameworks. When AI systems are classified as high-risk under the MDR or AI Act, licensing agreements often include specific obligations regarding post-market monitoring, software version control, and audit readiness. Liability clauses, regulatory warranties, and data protection provisions are also routinely negotiated.
AI-based clinical decision support systems are regulated under the MDR as SaMD, typically classified as Class IIa or higher (see in detail 2.1 Regulatory Definition and Classification of Healthcare AI). Under the AI Act, they will also be considered high-risk systems, requiring risk management, human oversight, and documentation of transparency and data governance.
Validation requires clinical performance evidence, based on real-world or retrospective datasets. Implementation must ensure traceability, usability, and clinician training. Under Austrian law, the restriction that medical services may only be provided by physicians prohibits autonomous diagnostic or therapeutic decisions by non-physicians. Clinical decision support systems must therefore function as assistive tools, with physicians retaining full responsibility for their decisions.
Under the MDR, diagnostic AI tools are classified, based on their intended use and risk, as Class IIa or Class IIb SaMD. They require CE marking, clinical validation, and adherence to relevant ISO standards. The AI Act imposes additional obligations on transparency, traceability, and post-market monitoring. Beyond the MDR, the MPG and the AI Act, there are no speciality-specific frameworks for, eg, radiology, pathology, or other diagnostic specialities using AI.
Depending on their clinical impact, AI tools for treatment planning are classified as Class IIb or III SaMD. They must demonstrate safety, clinical benefit, and risk control. The AI Act will apply due to their high-risk nature, requiring explainability, human oversight, and documented training data. In Austria, the ÄrzteG strictly limits therapeutic decisions to physicians. Therefore, AI systems may support decision-making but cannot replace or automate clinical judgment.
AI systems used in remote monitoring and telemedicine are regulated under the MDR, the GDPR, and national digital health guidelines. Most systems qualify as Class IIa or IIb SaMD and must comply with data protection and cybersecurity standards. In home settings, clear consent, explainability, and physician involvement are essential. The ÄrzteG requires that diagnostic or treatment decisions, even if AI-assisted, are confirmed by a physician.
AI used in drug discovery is typically not subject to specific regulation in Austria. However, when applied to clinical trials or patient selection, oversight by the BASG or the European Medicines Agency (“EMA”) and compliance with Good Clinical Practice (“GCP”) apply. If used for individual-level recommendations, the AI Act could also apply. The GDPR governs all data processing. Austrian research institutions are actively engaged in AI-driven pharmaceutical R&D, and developers should closely monitor EU guidance as the regulatory environment evolves.
Austria is preparing for the enforcement of the AI Act, which will be the most significant legislative development affecting healthcare AI. The AI Act classifies most healthcare AI systems, particularly diagnostic and decision-support tools, as high-risk systems. This classification will impose strict obligations on developers and users, including:
Austria will enforce the AI Act through existing bodies such as the BASG (for medical devices) and the DSB (for data protection), which will cooperate with the EU-level AI Board. The AI Act introduces additional technical, ethical, and monitoring requirements, and developers in Austria must begin aligning their systems now to meet the forthcoming standards.
Austria participates in regulatory sandboxes and innovation funding programmes relevant to healthcare AI, although no AI-specific sandbox has yet been established nationally. Austrian stakeholders can access EU-wide initiatives such as the Digital Europe Programme and Horizon Europe, which support pilot testing, clinical validation, and regulatory engagement for AI solutions.
At the national level, the Austrian FFG provides funding for AI-related health technology projects, especially under the “AI for Green” and “Health Tech Hub” calls. These programmes offer developers opportunities to trial AI applications in controlled clinical environments through collaboration with academia, industry, and regulators.
Austria contributes to the international harmonisation of healthcare AI regulations primarily through its participation in EU policymaking and alignment with global standards. As an EU member state, Austria adheres to the MDR and the EU AI Act, both of which are developed in coordination with international regulatory frameworks and technical standards. Austria’s involvement in the EHDS and EMA data-sharing initiatives further positions it to benefit from harmonised regulation and secure, interoperable AI deployment across borders.
Austria also actively incorporates recommendations from bodies such as the World Health Organisation (“WHO”), the International Medical Device Regulators Forum (“IMDRF”), and ISO/IEC technical committees.
Nevertheless, cross-border regulatory challenges for Austrian AI developers persist, including differing interpretations of AI compliance under the MDR across EU countries, as well as the complexity of data transfer rules under the GDPR. To navigate these issues, developers often engage legal counsel in each target market, implement standard contractual clauses, and seek EU-wide certification to minimise fragmentation.
As healthcare AI technologies advance, Austria is facing several emerging legal and regulatory challenges. One key issue is the integration of adaptive or continuous learning systems, which conflict with the MDR’s requirement for fixed intended performance. Austrian regulators are still determining how to monitor and recertify algorithms that evolve post-deployment.
Another challenge involves autonomous AI systems, especially where AI plays a central role in diagnosis or treatment planning. The AI Act will require robust human oversight mechanisms, but enforcement strategies are still being developed. Questions around traceability, explainability, and accountability remain open, particularly for “black box” algorithms used in clinical contexts.
In addition, since the ÄrzteG reserves core diagnostic and therapeutic decisions to licensed physicians, AI systems may only assist, but not replace, medical decision-making in clinical settings. Any attempt to delegate or automate such functions without meaningful human involvement could therefore violate national professional and liability laws.
Finally, the growing convergence of AI with robotics, augmented and virtual reality, and digital therapeutics raises issues around joint regulatory classification, dual certification, and cross-sector liability. Austrian regulators are preparing by participating in EU expert groups, drafting national implementation plans for the AI Act, and encouraging interdisciplinary research into AI governance. As a result, developers are being advised to incorporate legal, ethical, and technical safeguards into their systems now to prepare for future scrutiny and potential legal risks.
Healthcare AI developers in Austria must establish robust compliance structures that align with the MDR, the GDPR, the AI Act, and Austria’s national healthcare laws. Recommended strategies include implementing a quality management system, conducting data protection impact assessments, and maintaining comprehensive documentation covering algorithm functionality, updates, and risk assessments. The AI Act will require risk management, data governance, human oversight design, and transparency documentation. Organisations should establish multidisciplinary AI compliance committees comprising experts from legal, clinical, IT, and data protection fields. Documentation must cover the training data used, bias mitigation efforts, system logic, and update controls.
To balance innovation and compliance, institutions should engage in regulatory sandboxes (eg, via the FFG or Horizon Europe), use phased clinical rollouts, and consult early with the BASG.
Contracts must now anticipate obligations under the AI Act, in addition to the MDR, GDPR and the national legislation.
Key provisions include a clear allocation of regulatory compliance responsibilities, as follows.
Developers are responsible for AI-specific risk management, data training quality, and conformity documentation, while healthcare institutions manage the clinical use and integration of AI. Indemnities must reflect this, covering algorithmic malfunction, data breaches, or non-compliance penalties.
Limitation of liability clauses remain common but may be adjusted upward due to potential fines under the AI Act (up to EUR35 million or 7% of global annual turnover for the most serious infringements). Contracts should specify responsibilities for post-market monitoring, incident reporting, and software updates, which are mandatory under both the MDR and the AI Act. Where adaptive or continuously learning systems are used, provisions must define who ensures re-certification and when. As transparency and explainability will be legally required, warranties may need to guarantee the availability of documentation for clinicians and patients.
Healthcare AI stakeholders in Austria should consider a combination of professional indemnity, product liability, and cybersecurity insurance. These cover clinical errors, software malfunctions, and data breaches, respectively. Insurers assess AI-related risk based on:
To date, no Austria-specific insurance policies exclusively for AI have been established. However, some EU-based insurers offer tailored products for digital health or AI-based tools. Healthcare providers using third-party AI should verify that vendors maintain adequate product liability coverage and that institutional policies extend to AI-related malpractice.
Risk premiums may be higher for systems that involve autonomous recommendations or have limited explainability. Insurers favour systems with human-in-the-loop oversight, proven performance metrics, and clear documentation.
Successful implementation of healthcare AI in Austrian institutions requires:
Change management strategies should address workflow integration, staff trust-building, and ethics training. Integration with existing IT infrastructure, including interoperability with electronic health records, is critical. Institutions should also allocate time for clinical pilots before full-scale deployment to refine the utility of AI and manage expectations.
Deploying healthcare AI across jurisdictions raises challenges related to MDR conformity, GDPR compliance, and national health laws. In Austria, cross-border deployment within the EU requires:
Diverging interpretations of GDPR or MDR in other jurisdictions may necessitate local legal representation, custom data governance frameworks, or adjustments in software functionalities. Multinational organisations should maintain a compliance matrix that tracks key regulatory differences and develop a modular approach to adapt deployments to each jurisdiction.
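Such a compliance matrix can be as simple as a structured table per jurisdiction, diffed against the baseline deployment. The sketch below is a minimal illustration; the fields and entries (eg, Germany’s DiGA fast-track for digital health apps) are simplified placeholders, not legal determinations.

```python
compliance_matrix = {
    "AT": {
        "ifu_language": "German",
        "reimbursement_pathway": "no general fast-track for digital health apps",
        "third_country_transfers": "Art 44 seqq. GDPR",
    },
    "DE": {
        "ifu_language": "German",
        "reimbursement_pathway": "DiGA fast-track available",
        "third_country_transfers": "Art 44 seqq. GDPR",
    },
}

def gaps(matrix: dict, baseline: str = "AT") -> dict:
    """List where each target jurisdiction differs from the baseline deployment."""
    base = matrix[baseline]
    return {
        country: {k: v for k, v in reqs.items() if base.get(k) != v}
        for country, reqs in matrix.items()
        if country != baseline
    }

print(gaps(compliance_matrix))
# {'DE': {'reimbursement_pathway': 'DiGA fast-track available'}}
```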
Participation in EU harmonisation projects, such as the EHDS, will facilitate cross-border compliance in the future.
Dominikanerbastei 11
1010 Vienna
Austria
+43 1 3860 700
vienna.office@kinstellar.com
www.kinstellar.com