AI Research and Development in Hungary
AI research and development in Hungary is marked by a strong foundation in academic and government support, and a thriving innovation ecosystem, particularly centred around Budapest and Szeged. Hungarian universities and research institutions, renowned for their prowess in mathematics, computer science, and engineering, are leading research in AI focusing on areas like machine learning, natural language processing, and computer vision.
The Hungarian Academy of Sciences and its network of research institutes, such as the Institute for Computer Science and Control (HUN-REN SzTAKI), are instrumental in advancing theoretical and applied AI research mainly in healthcare, computer vision and language processing. Collaboration is a cornerstone of Hungary’s AI R&D, with Hungarian researchers participating in several international projects within and outside of the EU and forming partnerships with global tech companies and European research initiatives. As Hungary has strong foundations in the automotive sector, many projects are also conducted in Hungary in the field of self-driving cars and connected cars. Publicly known AI research is done in the healthcare, fintech, and automotive sectors.
Government initiatives, most notably the Hungarian National Artificial Intelligence Strategy, aim to position Hungary for regional AI leadership by enhancing research funding, facilitating public–private partnerships, and encouraging the development of research hubs. Hungary’s AI R&D landscape is thus characterised by its academic foundations, collaborative projects, government backing, and a focus on innovation, placing it on a path towards significant contributions to both regional and global AI advancement.
One of the most well-known government initiatives is the Hungarian AI Coalition (MI Koalíció), established in 2019 as part of the Digital Wellbeing Programme. The AI Coalition aims to advance artificial intelligence (AI) in Hungary by fostering collaboration across the public sector, private sector, academia, and research institutions. Its objectives include promoting AI education and research, accelerating innovation and the application of AI technologies, and developing ethical and legal frameworks for AI usage. The Coalition supports the National AI Strategy to position Hungary as a leader in AI in Central and Eastern Europe, organising events and facilitating knowledge exchange. It engages with international partners to integrate Hungarian AI initiatives into the global landscape and attract investment. By prioritising education, ethical AI deployment, and collaborative innovation, the Hungarian AI Coalition plays a key role in advancing Hungary’s AI capabilities and ensuring its competitiveness on the global stage.
Legal framework and enforcement of AI-related laws in Hungary
The current legal framework in Hungary relating to the development, implementation and use of AI systems consists mainly of data protection laws, such as the GDPR, intellectual property laws and consumer protection laws. Hungary’s approach to regulating the development, implementation, and use of AI systems is shaped by national initiatives, pieces of EU legislation, and local, sector-specific laws.
Hungary’s National AI Strategy, launched in 2020, outlines ambitions to become a leading AI hub in Central and Eastern Europe by 2030, focusing on fostering innovation, ethical AI use, and competitiveness. Although not a legal document, this strategy guides Hungary’s policy and regulatory direction regarding AI.
As an EU member state, Hungary adheres to EU-wide regulations that impact AI, such as the GDPR, which, although not AI-specific, imposes important data protection requirements on AI systems. The Hungarian National Authority for Data Protection and Freedom of Information (the NAIH) ensures compliance with the GDPR and other data protection laws, essential for AI systems processing personal data. While the NAIH is not specifically designated to oversee and supervise the development, implementation and use of AI systems, it actively enforces data protection requirements related to AI systems in Hungary, in line with the enforcement priorities of the European Data Protection Board. The EU’s ongoing efforts to create a harmonised AI regulatory framework, including the proposed Artificial Intelligence Act, are expected to significantly influence Hungary’s legal landscape for AI once it enters into force.
Intellectual property laws in Hungary provide a framework for the protection of software, databases, and algorithms relevant to AI, although they do not specifically address AI technologies. Ethical guidelines and governance frameworks on a regulatory level are also being developed to ensure responsible AI development and use, emphasising transparency, accountability, and human rights respect.
The consumer protection legal framework in Hungary is significantly influenced by European Union directives that have been integrated into the country’s laws. This legal structure is designed to defend consumers from deceptive business practices, guarantee the safety of products and services, and secure the privacy and personal data of consumers. Although these regulations are not specifically tailored for AI, they are applicable to AI systems utilised in consumer-oriented applications such as mobile applications, pricing algorithms on online marketplaces, and medical devices. These regulations mandate that AI technologies should not deceive consumers, put them at risk, or improperly handle their data. The laws advocate for transparency and fairness in all commercial activities, including those that involve AI. Businesses that engage with consumers using AI, for instance, through targeted advertising or tailored recommendations, are required to ensure their interactions are honest and equitable. This obligation includes being upfront about the utilisation of AI to prevent misleading consumers regarding the essence or quality of the goods or services offered.
The integration of AI into the marketplace presents challenges related to competition fairness. The field is resource-intensive and innovation-driven, predominantly occupied by large corporations with substantial technological and financial resources. This scenario risks creating a monopolistic environment where a handful of companies could overshadow smaller competitors, thus impacting the competitive balance within digital marketplaces.
Additionally, the proliferation of AI technologies amplifies concerns around consumer vulnerability, particularly regarding data privacy and targeted advertising. AI enables more efficient data collection and utilisation, leading to practices such as deceptive design (dark patterns) and personalised ads. This raises alarms in contexts like chatbot interactions, where the reliability of information and the presence of biased, sponsored content may not be transparent to users. These issues underscore the necessity for a thoughtful approach to understanding AI’s wider effects on market dynamics and consumer safeguards.
The Hungarian Competition Authority (HCA), as the competent authority responsible for overseeing consumer protection regulations and unfair commercial practices, also actively enforces requirements against businesses offering AI products or incorporating AI tools into their products, such as online marketplaces using personalised pricing algorithms. In 2023, the HCA initiated investigations into Microsoft for potential lapses in informing users about the AI-integrated features of its search engine. The HCA is also analysing the market to assess AI’s implications on competition and consumer rights, highlighting the growing concern and regulatory scrutiny over AI’s impact in Hungary.
The authors expect that Hungary’s legal framework will undergo further developments, including new EU-level AI-specific regulations, updates to existing national laws, and alignment with EU standards, aiming to balance innovation with safety, privacy, and ethical considerations.
Continuing surge of AI use and implementation
AI systems can be categorised into several types, including generative AI (GenAI) for creating new content, predictive AI (PredAI) for forecasting, descriptive and diagnostic AI for analysis and problem-solving, prescriptive AI for decision-making guidance, reinforcement learning AI for optimised actions, Natural Language Processing (NLP) for language understanding, and computer vision for visual interpretation. The most well-known types are GenAIs, PredAIs, computer vision and NLP. These AI tools serve different use-cases and are used in many products. GenAI powers content creation, art design, music composition, and synthetic media. PredAI is used in financial forecasting, demand prediction, healthcare prognosis, and predictive maintenance. Computer vision supports autonomous vehicles, facial recognition, image classification, and agricultural monitoring. NLP AI enables chatbots, translation services, sentiment analysis, and text summarisation.
While the legal framework for developing and/or using various types of AI remains consistent in Hungary, the specific compliance requirements businesses must adhere to are influenced by different use-cases, including sector-specific laws in healthcare, finance, data protection, and intellectual property law that pertain to the training and operation of AI systems. Training and maintaining AI systems pose data protection risks, including privacy violations, data security threats, biases leading to discrimination, and challenges in ensuring transparency and accountability. Intellectual property law risks involve disputes over the ownership of training data and AI-generated content, potential infringement by AI outputs, and complexities in patenting AI technologies.
Further to this, businesses must secure compliance during the operational life-cycle phase of AI systems. In practice, businesses must in most cases retain training data, which is itself a challenge due to the data minimisation and storage limitation principles in data protection. Also, maintaining data accuracy in AI operations involves regular updates, data cleaning, robust governance, and feedback loops. Legal challenges include adhering to data protection laws, managing liability for inaccurate data, addressing biases to avoid discrimination, navigating intellectual property rights, and fulfilling transparency and accountability requirements.
Even though the technology is relatively recent, the process of sunsetting (decommissioning) AI systems also carries its own set of risks. These include legal challenges related to data protection and security (such as the need for secure data erasure and sanitising storage devices), possible breaches of service level agreements (SLAs) and contracts, intellectual property concerns, and failure to meet regulatory standards. Furthermore, organisations need to navigate the transition meticulously to prevent operational disruptions and liability for decisions made by the AI in the past. With careful planning and legal advice, these risks can be managed effectively, ensuring a seamless transition while preserving trust and compliance with stakeholders.
Widespread adoption of GenAI services in Hungary
In the public eye and in the press, AI is largely associated with Large Language Models (LLMs) and GenAI services. Business decision makers think that LLMs are powerful AI tools that are changing the way people work, making jobs easier and more efficient. These AI models are great at taking over boring or repetitive tasks, like writing emails or making reports, which lets people focus on more important work. This means workers can spend their time on creative thinking and solving complex problems. LLMs help us communicate better and faster. They can translate languages in real-time, summarise long documents quickly, and answer customer questions through chatbots. This cuts down on confusion and costs, saves time and increases efficiency.
When it comes to making decisions, LLMs are like super-fast researchers. They can look through tonnes of data to spot trends, make predictions, and suggest what to do next. This helps businesses make smart choices quickly. They are also great at helping with research, finding the right information fast without having to dig through piles of data. This means employees can use their time on actions that matter, instead of just looking for information. Lastly, by automating tasks, LLMs help reduce mistakes. They provide consistent and accurate results, improving the quality of work from writing to data analysis. In short, LLMs are making work easier and more efficient by handling routine tasks, improving communication, speeding up decision-making, personalising learning, encouraging creativity, making information more accessible, simplifying research, and reducing errors.
However, LLMs for the Hungarian language face unique challenges due to the language’s complex grammar, the scarcity of large, high-quality datasets, and the need for significant computational resources. The Hungarian language’s agglutinative nature, with numerous suffixes that alter meaning, demands sophisticated models for accurate understanding. The limited availability of diverse training materials hinders the development of models that grasp the nuances of Hungarian, including cultural and contextual subtleties. Furthermore, adapting to language evolution, handling code-switching (mixing languages), and ensuring the model’s fairness without perpetuating biases are additional hurdles. Overcoming these challenges requires more advanced AI techniques and collaborative efforts to generate and share resources that enhance the training of LLMs in the Hungarian language, making them more effective and representative of the language’s complexities.
Despite the above, the expectation is still that businesses in Hungary will see a continued rise in the adoption of GenAI services, with many companies incorporating AI solutions based on LLMs to streamline their daily operations. These AI services, provided by major tech companies, can be smoothly integrated into existing workflows. There is a trend where primarily multinational companies are adopting these services and promoting the use of such “off-the-shelf” tools within their organisations, including their affiliates in Hungary. The utilisation of GenAI mainly serves two purposes: enhancing internal operations and deploying readily available GenAI services. To mitigate the risk of unauthorised use by employees and the potential for breaches in confidentiality, firms are moving to restrict access to public AI platforms through both policy measures (such as the establishment of AI usage guidelines) and technical barriers (like firewall configurations to block public platforms).
Necessity of new organisational and technical measures for the safe use of AI
Utilising GenAI and LLMs presents risks such as bias, misinformation creation, skill displacement, security vulnerabilities and regulatory challenges. Mitigating these requires a comprehensive approach combining technical, ethical, and legal measures for responsibly deploying GenAI technologies. Maximising the benefits of LLMs demands that employees acquire new skills, prompting businesses to prioritise the training of “power users” adept in prompt engineering. This skill involves crafting inputs that steer LLMs towards generating the intended outcomes. It plays a pivotal role in boosting AI efficiency, streamlining human–AI interactions, and customising outputs to meet specific requirements. Well-crafted prompts serve as a bridge between human users and AI systems, opening up new possibilities for application and enhancing the overall experience by producing more precise and relevant answers. Mastering prompt engineering is critical for fully tapping into the potential of AI technologies, enabling accurate and appropriate interactions without the need for extensive technical knowledge. However, this approach may also present risks and challenges, and as mentioned earlier, the use of the Hungarian language could also serve as a limiting factor.
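As an illustration of the practice described above, the sketch below shows one common prompt-engineering pattern: stating the role, the task, and the constraints explicitly rather than asking an open-ended question. The function name and the wording of the template are hypothetical, invented for this example and not drawn from any particular product.

```python
# Hypothetical prompt-engineering sketch: a "power user" wraps a raw
# document in an explicit role, task, and set of constraints before
# sending it to an LLM, instead of asking a bare open-ended question.
def build_prompt(document: str) -> str:
    return (
        "You are an assistant at a Hungarian firm.\n"
        "Task: summarise the document below in plain English.\n"
        "Constraints: at most five bullet points; do not speculate; "
        "flag any personal data you encounter.\n"
        "Document:\n"
        + document
    )

# The structured prompt steers the model towards the intended output
# far more reliably than a bare "summarise this" instruction would.
print(build_prompt("A short sample contract..."))
```

A well-designed template like this can be distributed internally, so that employees without technical training still interact with the model within the boundaries the organisation has set.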
This will require businesses to implement further organisational and technical controls to safeguard against “undesired” prompts when using LLMs in the workplace. Organisational measures include developing clear usage policies that define acceptable interactions with LLMs, conducting training and awareness programmes to enhance employees’ understanding of ethical and lawful LLM use, and establishing monitoring systems to ensure adherence to company policies. Additionally, setting up a feedback mechanism for reporting problematic LLM interactions and creating an ethics committee to oversee AI use may be important for integrating ethical considerations into company practices.
On the technical side, implementing pre-processing and post-processing prompt filters can help scan and adjust prompts and responses to avoid inappropriate content. Custom fine-tuning LLMs with company-specific data, employing AI content moderation tools, restricting LLM access to authorised personnel, and maintaining audit trails of LLM interactions are effective measures to control content quality and maintain accountability. Together, these organisational and technical measures create a safer and more responsible environment for leveraging LLMs in the workplace. The use of these tools will trigger requirements under employment law, intellectual property law, and data protection law that must be addressed. This includes providing transparent information about the data processing activities conducted by the AI system and by the prompt filtering and monitoring tools, complying with the employment-law rules applicable to such processing, and safeguarding intellectual property rights in both inputs and outputs.
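The pre-processing and post-processing filters mentioned above can, at their simplest, be pattern matching over prompts and responses. The sketch below is a minimal illustration assuming a keyword blocklist; the patterns and function names are invented for this example, and a production system would typically rely on policy-driven rules or a dedicated moderation service rather than a hard-coded list.

```python
import re

# Illustrative blocklist only; a real deployment would maintain these
# rules under the organisation's AI usage policy.
BLOCKED_PATTERNS = [
    r"\bconfidential\b",
    r"\bclient\s+data\b",
]

def pre_filter(prompt: str) -> tuple[bool, str]:
    """Scan a prompt before it reaches the LLM; reject it if it
    appears to contain restricted material."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, prompt, flags=re.IGNORECASE):
            return False, "Prompt blocked by usage policy."
    return True, prompt

def post_filter(response: str) -> str:
    """Redact restricted terms from the model's response before it
    is shown to the user."""
    for pattern in BLOCKED_PATTERNS:
        response = re.sub(pattern, "[REDACTED]", response,
                          flags=re.IGNORECASE)
    return response

allowed, result = pre_filter("Summarise this client data for me")
# allowed is False: the prompt matched a blocked pattern
```

Logging each blocked or redacted interaction would also feed the audit trails and feedback mechanisms described above, though any such monitoring of employees must itself comply with the data protection and employment-law requirements noted in this section.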
H-1053 Budapest
Károlyi street 9
Central Palace 5th Floor
Hungary
+36 70 605 1000
info@provaris.hu
www.provaris.hu