
AI in ASIA

Albania’s ‘Diella’ and the Future of AI‑Governance

This article examines Albania’s appointment of “Diella”, an AI‐driven assistant elevated to a ministerial role, and explores the implications for AI‑powered governance. It assesses both the promise of algorithmic public service and the risks around accountability, transparency and democratic norms contextualised for Asia and beyond.

Anonymous · 7 min read

AI Snapshot

The TL;DR: what matters, fast.

Albania’s Diella moved from virtual assistant to full ministerial role, focused on public procurement and transparency.

The AI experiment raises questions around accountability, oversight, transparency and the social contract of democracy.

For Asia’s governments, the experiment signals both opportunity (service efficiency, audit trails) and risk (delegated judgment, eroded democratic norms).

Who should pay attention: Governments | AI ethicists | Public sector digitisation teams

What changes next: Other nations may follow Albania's lead in AI governance experiments.

When the Diella platform was introduced in January 2025 through Albania’s e‑government system, it seemed a quietly futuristic digital assistant. By September, when Albania declared Diella a cabinet‑level “Minister of State for Artificial Intelligence”, the world took notice.

Albania thus embarked on one of the earliest experiments in AI‑powered governance: the handing over of a key part of public service to algorithmic control.

Though set in Tirana, the story holds lessons for Asia’s public‑sector digitalisation efforts: in Singapore, Vietnam, India and elsewhere, governments are increasingly automating services. But when an algorithm becomes a decision‑maker rather than a tool, the implications deepen.

What exactly is Diella and what is it doing?

At its core, Diella started life as the chatbot for the national e‑services portal (e-Albania) in January 2025. It helped citizens and businesses navigate online services, issue documents, and interface with state processes.

By September, the Albanian government elevated Diella into the cabinet, tasking it with overseeing public procurement, one of the most corruption‑prone areas of governance. In the official description, Diella’s mission includes improving access to services, digitising state processes and integrating AI into “critical sectors”.

Symbolically, the image is striking: a female avatar in traditional Albanian dress, deployed as a minister with no physical presence, no salary, no relocation. It is, in effect, governance by algorithm, raised to ministerial level.

Why this matters for AI‑powered governance: efficiency, speed and transparency

There are tangible benefits. Automated processes can reduce human discretion, speed up paperwork and impose digital logs—helping track who did what and when. One expert observes that AI in this role could “make corruption harder and governance faster.” In Asia, numerous public‑sector digitalisation projects aim at similar ends (for instance Indonesia’s single‐window platforms or Singapore’s AI‐driven citizen services). The Albanian case signals a leap from assistance to decision‑making.

Changing accountability norms

Yet handing over decisions to algorithms shifts where responsibility lies. For every flawed government decision, the public can normally hold a minister, politician or civil servant to account; when a machine makes the call, who is responsible? The public cannot hold an algorithm accountable in the same way. For Asia’s democracies (and even semi‑democracies), this is a fundamental consideration: legitimacy has traditionally derived from leaders being elected and answerable, and algorithmic governance challenges that model.

Transparency, auditability and the black box problem

AI models often operate as opaque systems. One expert warns:

“Any AI system is only as good as the data it is trained on, and all data inherently carry the biases humans suffer from.” - EU Institute for Security Studies

If Diella makes procurement decisions, how are they audited? Can a bidder challenge the AI’s judgement, or ask why they lost? Without clarity, algorithmic governance risks creating a new kind of discretionary power hidden behind code.

Who programmes the AI, controls its logic and data?

Power may shift away from elected officials towards technocrats, data‑owners and model trainers.

Power moves to data pipelines and model owners… forcing governments to codify algorithmic transparency, auditability and contestability, or risk “governance by code” without clear public consent. - Aravind Nuthalapati, Microsoft

In Asia, this matters enormously: platforms and services may be built by global technology firms, vendors or local governments. The question of who controls the “mind” of governance becomes political. This is a crucial aspect for businesses to consider, as highlighted in our article on The AI Vendor Vetting Checklist: What Asian businesses should check before buying AI in 2026.

The democratic contract is at stake

Representative democracy is founded on the notion that citizens choose those who govern, and those governors can be held to account. An AI minister challenges that social contract. For societies in Asia undergoing digital transitions, this raises questions about legitimacy, rights, and public consent, especially when technologies are introduced quickly. This concern is echoed in discussions about the potential for AI chatbots to exploit children if not properly regulated.

Lessons for Asia and regional governments

  • Pilot before appointing - The Albanian case moved quite visibly from virtual assistant to minister without a long period of audit or oversight infrastructure being clearly publicised. Governments in Asia may benefit from incremental steps: clear metrics, transparency reports, human oversight mechanisms and open forums to debate governance models. This aligns with the cautious approach discussed when AI's inner workings baffle experts at major summit.
  • Embed public debate and values frameworks - One of the points raised by experts is that the challenge isn’t AI in office, it’s handing it power without defining the values it must serve.
  • Maintain human‑in‑the‑loop and appeal rights - It is crucial to keep human oversight meaningful. One commentator warns about “accountability theatre” where a human nominally oversees AI decisions but lacks capacity to challenge them. Our article on The danger of anthropomorphising AI further explores the complexities of relying too heavily on AI.
  • Define the scope very clearly - The role of Diella is currently narrow (public procurement) but the optics are grand. For other governments, clear boundaries on what the AI can and cannot do can help moderate expectations and avoid governance overload.
  • Invest in auditability, openness and institutional capacity - Success depends less on the AI itself and more on the system around it. Transparency about data flows, code governance, audit logs and stakeholder access all matter.
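To make the audit‑log point concrete: one well‑understood technique for tamper‑evident record‑keeping is a hash chain, where each logged decision includes a hash of the previous entry, so retroactively editing any record invalidates every hash after it. The sketch below is purely illustrative (the entry fields and helper names are our own assumptions, not a description of Albania's actual system):

```python
import hashlib
import json
from datetime import datetime, timezone

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def append_decision(log, decision):
    """Append an AI decision to a hash-chained, tamper-evident audit log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": decision,  # e.g. which bid won, and the recorded rationale
        "prev_hash": log[-1]["hash"] if log else GENESIS,
    }
    # Hash the entry (which embeds the previous hash), so altering any
    # earlier record breaks the chain from that point onward.
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log):
    """Recompute every hash; True only if no entry has been altered."""
    prev_hash = GENESIS
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True
```

A bidder (or auditor) holding a copy of the log can then independently verify that no decision was rewritten after the fact, which is exactly the contestability that opaque systems lack.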


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.


Latest Comments (4)

Ji-hoon Kim@jihoonk
20 December 2025

assigning diella to public procurement is an interesting choice considering the human element usually involved. for a chatbot to handle that, especially regarding "critical sectors", it would need some serious on-device processing power to avoid latency issues and ensure real-time secure decision making if it's meant to reduce corruption. current cloud-based solutions have too many points of failure and data transfer vulnerabilities for sensitive government functions. I'm curious what kind of local hardware infrastructure albania has invested in for this. the efficiency gains would be lost if every query needed a round trip to a central server.

Budi Santoso@budi_s
11 December 2025

Diella overseeing public procurement, that's bold. In Indonesia, with our diverse regions and sometimes patchy internet, an AI like that would struggle just with data input from remote areas, let alone making decisions. We need basic digital access for everyone first, otherwise it just leaves more people out.

Nicolas Thomas@nicolast
4 December 2025

seeing Albania's Diella handling public procurement is . it really highlights how important open source AI governance models are for Europe. imagine if diella was built on a transparent, community-driven framework. that would truly be an alternative to the big private tech.

Natalie Okafor@natalieok
30 November 2025

Elevating Diella to oversee public procurement, an area notoriously susceptible to corruption, is a bold move. From a healthcare perspective, I'm curious how they plan to implement robust auditing mechanisms for Diella's decisions, especially regarding vendor selection and contract awards. The potential for algorithmic bias or even manipulation in such a critical function needs serious consideration.
