The Current Utilisation and Potential Risks of Artificial Intelligence in Indonesia
Outlook
Indonesia, the world’s fourth most populous nation, is experiencing a digital revolution fuelled by innovation and a growing tech-savvy population. At the forefront of this transformation is artificial intelligence (AI), a technology rapidly changing the landscape of various industries. The impact of AI is felt across numerous sectors in Indonesia, from boosting internal operational efficiency to unlocking new possibilities in providing products and services to customers. A study by Advisia Group titled “Generative AI: Shaping Indonesia’s Business Ecosystem Tomorrow with Ethical AI”, released in March 2024, even projected that Indonesia could lead South-East Asia in terms of AI’s contribution to national gross domestic product, with a predicted economic output of USD366 billion by 2030.
Indonesia’s increasing adoption of AI technology is evident in various sectors, creating a dynamic and exciting ecosystem. Some prominent examples of the increasing adoption of AI technology by Indonesian society include the following.
AI-generated images
Gone are the days of static visuals dominating advertising, government publications and even political campaigns. Today, AI is generating visuals that are not only eye-catching but also tailored to specific audiences. The 2024 general election campaigns saw generative AI used to promote Indonesian presidential and legislative candidates alike. AI-generated images are also widely used in advertising for consumer products across all industries and in public service announcements by government agencies, and have become a common sight on the billboards of Indonesian city streets.
Chatbots
Customer service is also undergoing a significant transformation with the integration of AI chatbots. These intelligent programs can answer frequently asked questions, provide product recommendations and resolve basic issues without any human intervention. Chatbots allow companies to allocate their “human” customer service representatives to more complex enquiries, leading to increased efficiency and improved customer satisfaction. The use of chatbots has grown not only throughout the private sector (particularly the e-commerce industry) but also in the public sector, where citizens’ queries are mostly handled by chatbots and escalated to human representatives only when actual human intervention is required.
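The triage pattern described above – answer common questions automatically, escalate everything else to a human – can be sketched in a few lines. The questions, answers and similarity threshold below are illustrative assumptions, not taken from any actual Indonesian deployment.

```python
# Minimal sketch of the FAQ-then-escalate pattern. All questions,
# answers and the similarity threshold are illustrative assumptions.
from difflib import SequenceMatcher

FAQ = {
    "what are your opening hours": "We are open 09:00-17:00, Monday to Friday.",
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
    "where is my order": "Check the tracking link in your confirmation email.",
}

def answer(query: str, threshold: float = 0.6) -> tuple[str, bool]:
    """Return (reply, escalated); escalate when no FAQ entry is close enough."""
    query = query.lower().strip("?! .")
    best_q, best_score = None, 0.0
    for q in FAQ:
        score = SequenceMatcher(None, query, q).ratio()
        if score > best_score:
            best_q, best_score = q, score
    if best_score >= threshold:
        return FAQ[best_q], False  # handled entirely by the bot
    return "Connecting you to a human representative...", True  # escalation
```

Real deployments would use language models or intent classifiers rather than string similarity, but the escalation logic – a confidence score gating the hand-over to a human – is the same shape.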
Smart agriculture
Indonesia’s agricultural sector is embracing AI-powered solutions to improve efficiency and yields. AI algorithms can analyse soil conditions, weather patterns and historical data to predict optimal planting times, fertiliser application rates and irrigation needs. This data-driven approach can significantly improve agricultural productivity and resource management, leading to a more sustainable food system.
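At its simplest, the data-driven decision-making described above combines sensor readings with forecasts to produce a recommendation. The sketch below is a deliberately simplified illustration; the thresholds and field readings are invented assumptions, not an actual agronomic model.

```python
# Illustrative sketch of a data-driven irrigation decision. The
# thresholds and readings are invented assumptions for illustration.

def irrigation_needed(soil_moisture_pct: float, rain_forecast_mm: float) -> bool:
    """Recommend irrigation when soil is dry and little rain is expected."""
    DRY_THRESHOLD = 30.0  # % volumetric soil moisture (assumed)
    RAIN_OFFSET = 5.0     # mm of forecast rain treated as sufficient (assumed)
    return soil_moisture_pct < DRY_THRESHOLD and rain_forecast_mm < RAIN_OFFSET

print(irrigation_needed(22.0, 1.5))   # dry soil, little rain expected -> True
print(irrigation_needed(22.0, 12.0))  # dry soil, but rain is coming -> False
```

Production systems would learn such thresholds from historical yield and weather data rather than hard-coding them, which is precisely where the machine-learning techniques discussed later come in.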
Financial industry
The financial industry has seen increasing adoption of AI to assist in the provision of financial services to consumers. In general, the use of AI by financial service companies covers end-to-end business processes, from the back office to the front office. For example, in activities such as asset management and lending, AI is used for know-your-customer (KYC) processes. AI is also used to provide automated customer service via chatbots, and to compile personalised recommendations on the appropriate financial products that customers can purchase.
The above are just a few instances of how AI is transforming various industries in Indonesia. As AI technology continues to evolve, even more innovative applications can be expected to emerge, affecting every facet of Indonesian life.
Dark side of the moon: understanding the risks of AI development and utilisation
Despite the exciting possibilities of AI, its development comes with inherent risks, particularly as regards the training datasets of AI models. At its core, AI is not about replicating human consciousness but is about mimicking human cognitive functions such as learning and problem-solving. This ability is achieved through a specific technique called machine learning. Machine-learning algorithms are essentially computer programs designed to improve their performance on a specific task by analysing vast amounts of data, which acts as the training ground for the algorithm.
Imagine a child learning to identify different animals: by looking at pictures and being told what each animal is called, the child gradually develops the ability to recognise them independently. Similarly, AI algorithms analyse datasets containing examples of what they need to learn. For instance, an image-recognition algorithm might be trained on millions of pictures labelled with the objects they depict. The more data the algorithm processes, the better it becomes at recognising patterns and making predictions. This ability to learn from data is the foundation of an AI’s power and versatility.
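The learning-from-labelled-examples analogy above can be made concrete with a toy classifier that "trains" on labelled measurements and then recognises a new example on its own. The fruit measurements and labels below are invented purely for illustration.

```python
# Toy illustration of learning from labelled data: a nearest-centroid
# classifier "trained" on (weight_g, diameter_cm) examples. The numbers
# and labels are invented purely for illustration.

def train(examples):
    """Compute the average feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            s[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [x / counts[lbl] for x in s] for lbl, s in sums.items()}

def predict(model, features):
    """Return the label whose centroid is closest (squared Euclidean distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda lbl: dist(model[lbl]))

training_data = [
    ([150, 7.0], "apple"), ([170, 7.5], "apple"),
    ([120, 6.0], "orange"), ([130, 6.5], "orange"),
]
model = train(training_data)
print(predict(model, [160, 7.2]))  # closest to the "apple" centroid
```

The point of the sketch is the structure, not the algorithm: the program's behaviour comes from the labelled data it was given, and feeding it more (or different) examples changes what it predicts – which is exactly why the provenance of training data carries the legal risks discussed below.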
As previously mentioned, AI algorithms require vast amounts of data to function effectively. This data can come from various sources, including social media posts, search engine queries and customer records. In the authors’ view, the key legal risks associated with the collection and use of training data that are relevant for Indonesian stakeholders include the high probability of intellectual property (IP) and privacy infringement arising from the collection of data for AI model training datasets.
The training data used in AI algorithms can inadvertently incorporate IP-protected materials created by both individual and corporate authors. The New York Times’ recent lawsuit against OpenAI and Microsoft has been widely discussed in Indonesia in relation to IP concerns in the training datasets of generative AI models. The central issue was that the millions of articles published by The New York Times and used to train ChatGPT’s large language model (without The New York Times’ permission) might allow ChatGPT to compete with The New York Times as a reliable news source, thereby threatening the commercial gains that The New York Times might otherwise derive from its core product. In light of this issue, many stakeholders in Indonesia have underscored the importance of ensuring that the training datasets of AI models respect IP rights, and that safeguards are in place to prevent the unauthorised use of copyrighted materials to train commercial AI models.
However, perhaps the most significant legal risk associated with AI lies in the vast quantities of personal data needed for training algorithms. For AI algorithms to function effectively, training datasets often require personal data, such as names and dates of birth – more advanced models might collect facial recognition information, location data and browsing habits. Although Indonesia has had a data protection framework in place since October 2022, in the form of the Personal Data Protection Law, it is nevertheless too general in scope to address the privacy and data protection issues arising in the development of AI models.
The authors believe that the lack of robust regulations on AI development could lead to the jeopardisation of stakeholder trust – both of individuals and corporations alike – as this potentially precipitates violations of proprietary and privacy rights by way of IP infringement, identity theft and social engineering attacks (among other risks that may come with the unregulated development of AI models).
Indonesia’s National AI Strategy 2020–2045
The Indonesian government has already recognised the potential impact that AI might have on Indonesian society. As a consequence, the Indonesian government, alongside academic and private sector partners, drafted a White Paper titled “Indonesia’s National Strategy for Artificial Intelligence 2020–2045” (Strategi Nasional Kecerdasan Artifisial Indonesia 2020–2045). A main focus of the White Paper is on those sectors prioritised for the development and utilisation of AI, which include:
Health services
The Indonesian government intends to implement the “4P” approach to health services – that is, predictive, preventative, personal and participative. The 4P approach aims to predict the signs and symptoms of diseases and to determine what lifestyle changes a person needs to make to improve their overall health. AI will be utilised to process the big data collected from individual medical records, which will be shared with various healthcare providers. The healthcare providers will then deliver the expected outcome of the 4P approach, analysing and determining the most appropriate medical treatment for each individual.
Bureaucratic information
AI is intended to automate certain repetitive activities conducted by government agencies. The Indonesian government is currently developing chatbot software intended to provide citizens with basic information regarding government services. There are also plans to process administrative documents and forms through automation bots, reducing human involvement in manual and repetitive administrative tasks. Another planned breakthrough involves developing an AI system that supports the government in supervising governmental budgets, by examining and identifying incongruities in budget proposals.
Education and research
AI will be used to develop an adaptive assessment and intelligent student classification system, which evaluates an individual student’s academic proficiency and preferences to provide a more personalised learning environment, with the hope of moving the Indonesian education system away from the one-size-fits-all approach it currently implements. From a research perspective, the Indonesian government intends to focus on training and improving the capabilities of AI systems by feeding them Indonesia’s diverse cultural products – eg, regional languages, writing systems and performing arts.
Food security
AI is envisioned as assisting government agencies in identifying the areas or regions most in need of aid, through approaches such as expanding agricultural land in specific regions and identifying new skills that can be taught to improve the welfare of communities in need. Other uses of AI include assisting in determining appropriate channels for distributing aid to poor regions and to regions whose economies are adversely affected by epidemics, pandemics and natural disasters.
Mobility and smart city
One notable plan for the utilisation of AI by the Indonesian government is to provide a smart traffic management solution by processing data collected through CCTV and other sensors, in order to provide real-time traffic information to road users, reduce traffic queues by optimising traffic light configurations and evaluate road usage. Another interesting use of AI envisaged by the Indonesian government is the management of disaster risks, where AI systems will be trained to predict potential earthquakes, forecast floods by observing precipitation and flood simulations, and predict volcanic eruptions by learning from seismic data and other relevant geological information.
Current Regulatory Framework for AI in Indonesia
Despite the elaborate ideas that Indonesia’s technocrats have planned for in the National Strategy White Paper, several weaknesses were recognised as potentially bottlenecking a smooth and effective adoption of AI by the Indonesian public. Among these is the lack of national regulations governing the ethics and policies of responsible AI development and utilisation in Indonesia.
Currently, Indonesia still lacks a clear regulatory framework for the development and use of AI by the public and private sectors. Indonesia instead relies on existing legislation of general application – such as the Personal Data Protection Law discussed above – to address questions related to the development and utilisation of newly emerging AI models. These regulations are, unfortunately, very general in their scope and do not directly address AI-related issues that may arise from the development and use of AI systems in Indonesia.
Acknowledging Indonesia’s lack of AI regulatory frameworks, on 19 December 2023 the Ministry of Communications and Informatics (MCI) issued MCI Circular Letter Number 9 of 2023 regarding the Ethics of Artificial Intelligence (the “Circular Letter”). The Circular Letter provides an initial outline of the general definition of AI, as well as general guidelines on the values, ethics and control of AI-based consultation, analysis and programming activities undertaken by businesses and electronic system providers (ESPs).
Does this unit have a soul?
How the MCI defines AI
The Circular Letter’s definition of AI is – in the authors’ opinion and for lack of a better word – rather simplistic in comparison with the elaborate ideas and plans regarding AI that the Indonesian government envisioned in the National Strategy White Paper. AI is defined as “a form of programming on a computer device to process and/or tabulate data in a careful manner”. As a comparison, the definitions of AI in the EU AI Act and the Singapore Model AI Governance Framework are set out below.
The EU AI Act
“AI system” means a machine-based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment, and which, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.
The Singapore Model Framework
AI refers to a set of technologies that seek to simulate human traits such as knowledge, reasoning, problem-solving, perception, learning and planning, and, depending on the AI model, to produce an output or decision (such as a prediction, recommendation and/or classification). AI technologies rely on AI algorithms to generate models. The most appropriate model(s) is/are selected and deployed in a production system.
The EU AI Act and the Singapore Model Framework, while taking generally distinct approaches to defining AI, both outline two points that distinguish AI from regular computer programs: a degree of autonomy in inferring outputs from the input received, and the ability to learn and adapt after deployment. Both definitions allow AI to be clearly distinguished from regular programming, which is heavily reliant on predetermined instructions that are programmed into its code by its developers. In contrast, AI systems produce outputs much as a human child would: they learn and adapt through the materials and teachings provided by their “parents”, and can improve their outputs after absorbing a greater amount of diverse information made available or provided to them.
The authors believe that the Circular Letter’s definition of AI does not adequately address how AI can be distinguished from regular computer programs. Would a “dumb” non-AI-based spellchecker that can carefully process data (in the form of misspelled writing) through millions of lines of code written by a software developer also be considered an AI system? Would it have to conform to the same ethical standards as contemporary AI-based grammar-checker software?
In this case, the authors believe that there is room for the MCI and Indonesian lawmakers to regroup on how they could define AI in future binding regulations, to ensure that AI is distinguished from other “dumb” computer programs that exhibit a similar ability to carefully process and tabulate data.
Ethical values, implementation and responsibility in the provision of AI technology
The Circular Letter’s main focus, however, is the ethical values that businesses and ESPs have to observe in providing AI. As a side note, the provision of AI is defined as any activities related to the research, product development, marketing and utilisation of AI.
Nine ethical values were devised to ensure that businesses and ESPs can provide AI in a manner that conforms with Indonesia’s national values.
The Circular Letter also sets forth certain principles that businesses and ESPs are required to observe in implementing the provision of AI technology.
Businesses and ESPs providing AI technology are also responsible for ensuring that the technology is provided in an appropriate manner.
Potential future implementation of AI regulations
Unlike the EU AI Act and the Singapore Model Framework, the Circular Letter does not provide an elaborate and technical blueprint for the regulation of AI technology. Rather, it may be thought of as a set of guiding principles that regulators can use to draft future regulations governing AI technology, and as a reference for AI technology providers in establishing their internal AI governance policies.
The National AI Strategy envisions that national regulations on cybersecurity and AI supervision are to be made available by 2024, which may address the concerns that various Indonesian stakeholders might have regarding the provision of AI technology for commercial and non-commercial use. In reality, the authors have not found any public indications – ie, draft regulations or policy White Papers – that the Indonesian government has begun implementing the National AI Strategy in earnest.
Notwithstanding the authors’ slight critique of the seeming inactivity of Indonesian lawmakers concerning the development of AI regulations, the Circular Letter is a sufficient starting point for AI technology developers and providers to self-regulate AI development practices. The general and rather ambiguous wording of the Circular Letter might allow developers and providers to design bespoke policies that reflect the inherent features of their own AI systems, but it could give rise to adaptation problems when future regulations homogenise AI technology standards and governance frameworks.
Conclusion
The lack of progress on the development of national AI regulations might worry stakeholders who are concerned about the potential adverse effects of AI technology, particularly on individual proprietary rights. However, the authors believe that there are opportunities for AI technology developers and users to provide their views and input to the Indonesian government on how national AI regulation should be shaped to suit the needs and behaviours of local users and developers.
With or without clearly defined AI regulations, Indonesia will certainly see an increase in the use of AI technology in the near future, and it is up to players in the field to set a precedent in ensuring the ethical and humane development and use of AI in Indonesia.
KMO Building 5th Floor Suite 502
Jl Kyai Maja No 1
Jakarta
Indonesia 12120
+62 21 2902 3331
office@kk-advocates.com
www.kk-advocates.com