When the Diella platform was introduced in January 2025 through Albania’s e‑government system, it seemed a quietly futuristic digital assistant. By September, when Albania declared Diella a cabinet‑level “Minister of State for Artificial Intelligence”, the world took notice.
Albania thus embarked on one of the earliest experiments in AI-powered governance: handing a key part of public service over to algorithmic control.
Though the story unfolds in Tirana, it holds lessons for Asia’s public-sector digitalisation efforts: in Singapore, Vietnam, India and elsewhere, governments are increasingly automating services. But when an algorithm becomes a decision-maker rather than a tool, the implications deepen.
What exactly is Diella and what is it doing?
At its core, Diella started life as the chatbot for the national e‑services portal (e-Albania) in January 2025. It helped citizens and businesses navigate online services, issue documents, and interface with state processes.
By September, the Albanian government elevated Diella into the cabinet, tasking it with overseeing public procurement, one of the most corruption‑prone areas of governance. In the official description, Diella’s mission includes improving access to services, digitising state processes and integrating AI into “critical sectors”.
Symbolically, the image is striking: a female avatar in traditional Albanian dress, deployed as a minister with no physical presence, no salary, no relocation. It is, in effect, governance by algorithm, raised to ministerial level.
Why this matters for AI‑powered governance: efficiency, speed and transparency
There are tangible benefits. Automated processes can reduce human discretion, speed up paperwork and impose digital logs—helping track who did what and when. One expert observes that AI in this role could “make corruption harder and governance faster.” In Asia, numerous public‑sector digitalisation projects aim at similar ends (for instance Indonesia’s single‐window platforms or Singapore’s AI‐driven citizen services). The Albanian case signals a leap from assistance to decision‑making.
Changing accountability norms
Yet handing decisions to algorithms shifts where responsibility lies. For a flawed government decision, the public can normally hold a minister, politician or civil servant to account; when a machine makes the call, who is responsible? The public cannot hold an algorithm accountable in the same way. For Asia’s democracies (and even semi-democracies), this is a fundamental consideration: legitimacy has traditionally derived from leaders being elected and answerable; algorithmic governance challenges that model.
Transparency, auditability and the black box problem
AI models often operate as opaque systems. One expert warns:
“Any AI system is only as good as the data it is trained on, and all data inherently carry the biases humans suffer from.” – EU Institute for Security Studies
If Diella makes procurement decisions, how are they audited? Can a bidder challenge the AI’s judgement, ask why they lost? Without clarity, algorithmic governance risks creating a new kind of discretionary power hidden behind code.
Who programmes the AI, controls its logic and data?
Power may shift away from elected officials towards technocrats, data‑owners and model trainers.
“Power moves to data pipelines and model owners… forcing governments to codify algorithmic transparency, auditability and contestability, or risk ‘governance by code’ without clear public consent.” – Aravind Nuthalapati, Microsoft
In Asia, this matters enormously: platforms and services may be built by global technology firms, vendors or local governments. The question of who controls the “mind” of governance becomes political.
The democratic contract is at stake
Representative democracy is founded on the notion that citizens choose those who govern, and that those governors can be held to account. An AI minister challenges that social contract. For societies in Asia undergoing digital transitions, this raises questions about legitimacy, rights and public consent, especially when technologies are introduced quickly.
Lessons for Asia and regional governments
- Pilot before appointing - The Albanian case moved very visibly from virtual assistant to minister without a long period of audit or oversight infrastructure being clearly publicised. Governments in Asia may benefit from incremental steps: clear metrics, transparency reports, human oversight mechanisms and open forums to debate governance models.
- Embed public debate and values frameworks - One of the points raised by experts is that the challenge isn’t AI in office, it’s handing it power without defining the values it must serve.
- Maintain human-in-the-loop and appeal rights - It is crucial to keep human oversight meaningful. One commentator warns of “accountability theatre”, where a human nominally oversees AI decisions but lacks the capacity to challenge them.
- Define the scope very clearly - Diella’s role is currently narrow (public procurement) but the optics are grand. For other governments, clear boundaries on what the AI can and cannot do can help moderate expectations and avoid governance overload.
- Invest in auditability, openness and institutional capacity - Success depends less on the AI itself and more on the system around it. Transparency about data flows, code governance, audit logs and stakeholder access all matter.