

    Revolutionising Critical Infrastructure: How AI is Becoming More Reliable and Transparent

A new four-stage method pairs anomaly detection with Explainable AI, human oversight, and a scoring system to curb AI hallucinations in critical national infrastructure.

    Anonymous
4 min read · 30 September 2024
    AI in critical infrastructure

    AI Snapshot

    The TL;DR: what matters, fast.

    AI is essential for managing critical infrastructure but can produce inaccurate or unclear results known as "hallucinations."

Researchers developed a four-stage method to make AI more reliable and minimise hallucinations, focusing on critical national infrastructure.

    This method involves deploying anomaly detection systems, combining them with Explainable AI tools, human oversight, and a scoring system for AI explanations.

    Who should pay attention: Critical infrastructure operators | AI developers | Regulators

    What changes next: Further research will likely focus on real-world applications and broader adoption of this method.

    Artificial Intelligence (AI) has become an essential tool for managing critical infrastructure like power stations, gas pipelines, and dams. However, AI systems can sometimes produce inaccurate or unclear results, known as 'hallucinations'. These errors can lead to significant problems. To tackle this issue, researchers have developed a new method to make AI more reliable and understandable.

    The Problem with AI Hallucinations

    AI hallucinations often occur due to poor-quality or biased training data. They can also happen when user prompts lack context. Some AI algorithms don't involve humans in the decision-making process, making it hard to understand how the AI made its predictions. This lack of transparency can lead to wrong decisions, especially in critical infrastructure management.

    The Black Box AI Algorithms

    Some anomaly detection systems use 'black box' AI algorithms. These systems are difficult to understand because their decision-making processes are not clear. This makes it challenging for operators to determine why an AI system identified an anomaly.

    A Multi-Stage Approach to Improve AI

    Researchers have proposed a four-stage method to make AI more reliable and minimise hallucinations. They focused on AI used for critical national infrastructure (CNI), such as water treatment.

    Stage 1: Deploying Anomaly Detection Systems

    The researchers used two anomaly detection systems:

Empirical Cumulative Distribution-based Outlier Detection (ECOD)
Deep Support Vector Data Description (DeepSVDD)

Both systems were efficient and detected various attack scenarios. However, ECOD achieved a slightly higher recall and F1 score than DeepSVDD. The F1 score balances precision (how many flagged anomalies were genuine) with recall (how many genuine anomalies were caught).
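As a rough illustration (not the study's own code), here is how precision, recall, and the F1 score are computed from a detector's flagged anomalies; the sample indices below are hypothetical:

```python
def precision_recall_f1(predicted, actual):
    """Compute precision, recall, and F1 from sets of flagged
    and true anomaly indices."""
    predicted, actual = set(predicted), set(actual)
    tp = len(predicted & actual)  # true positives: real anomalies that were flagged
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical detector output: 3 samples flagged, 4 true anomalies
p, r, f1 = precision_recall_f1(predicted={2, 5, 7}, actual={2, 5, 7, 9})
print(p, r, f1)  # perfect precision (1.0), recall 0.75, F1 ≈ 0.857
```

A detector can score well on one measure and poorly on the other, which is why the comparison above reports both recall and F1 rather than accuracy alone.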


    Stage 2: Combining with Explainable AI (XAI)

The researchers combined these detectors with Explainable AI (XAI) tools. These tools help humans understand AI results better. For instance, Shapley Additive Explanations (SHAP) allows users to see how different features of a machine learning model contribute to its predictions. This improves human decision-making.
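To give a flavour of the idea behind SHAP (a toy sketch, not the SHAP library itself), Shapley values can be computed exactly for a model with a handful of features by averaging each feature's marginal contribution over every coalition of the other features; the linear "anomaly score" model below is invented for illustration:

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values for a small feature set. Features outside
    a coalition are replaced by their baseline values."""
    n = len(x)

    def value(coalition):
        # Evaluate the model with coalition features from x, the rest from baseline
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return model(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for subset in combinations(others, k):
                s = set(subset)
                # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (value(s | {i}) - value(s))
    return phi

# Toy anomaly score: a weighted sum of three sensor readings
model = lambda z: 2 * z[0] + 3 * z[1] + z[2]
phi = shapley_values(model, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
print(phi)  # for a linear model, each value is just weight * (x - baseline)
```

The attributions always sum to the difference between the model's output on the instance and on the baseline, which is what lets an operator trace an anomaly score back to individual sensors.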

    Stage 3: Human Oversight and Accountability

Humans can question AI algorithms when given clear explanations of AI-based recommendations. This allows them to make more informed decisions about CNI.

    Stage 4: Scoring System for AI Explanations

    A scoring system measures the accuracy of AI explanations. This gives human operators more confidence in AI-based insights. Sarad Venugopalan, co-author of the study, explained that this system depends on the AI model, the application use-case, and the correctness of the input values.
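The article doesn't give the study's exact scoring formula, but one common way to score an explanation is fidelity: check how much the model's output actually drops when the features the explanation ranked most important are ablated. A hedged sketch, with an invented two-feature model:

```python
def fidelity_score(model, x, baseline, ranked_features, top_k=1):
    """Score an explanation by the share of the model's output change
    that disappears when its top-ranked features are replaced by
    baseline values. A higher score means the explanation pointed at
    features the model genuinely relied on."""
    full = model(x)
    base = model(baseline)
    # Ablate the k features the explanation claims matter most
    ablated = list(x)
    for i in ranked_features[:top_k]:
        ablated[i] = baseline[i]
    drop = full - model(ablated)
    total = full - base
    return drop / total if total else 0.0

# Toy model where feature 1 dominates the anomaly score
model = lambda z: 2 * z[0] + 5 * z[1]
x, baseline = [1.0, 1.0], [0.0, 0.0]
score = fidelity_score(model, x, baseline, ranked_features=[1, 0])
print(score)  # explanation correctly names the dominant feature
```

An explanation that ranked the weaker feature first would score lower, which is the kind of signal that could give operators confidence in one AI explanation over another.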

    Improving AI Transparency

Sarad Venugopalan highlighted that this method allows plant operators to check if AI recommendations are correct. He said, "This is done via message notifications to the operator and includes the reasons why it was sent. It allows the operator to verify its correctness using the information provided by the AI, and resources available to them."

    Rajvardhan Oak, an applied scientist at Microsoft, praised the research. He said, “With explanations attached to AI model findings, it is easier for subject matter experts to understand the anomaly, and for senior leadership to confidently make critical decisions. For example, knowing exactly why certain web traffic is anomalous makes it easier to justify blocking or penalizing it."

    Eerke Boiten, a cybersecurity professor at De Montfort University, also sees the benefits. He said, “This research is not about reducing hallucinations, but about responsibly using other AI approaches that do not cause them.” For a deeper dive into the technical aspects of AI reliability, you might find the NIST's work on Explainable Artificial Intelligence (XAI) to be a valuable resource.

    The Future of AI in Critical Infrastructure

    This research shows a promising future for AI in critical infrastructure management. By making AI more transparent and reliable, we can ensure that human operators make better decisions. This will help keep our critical infrastructure safe and efficient.

    Comment and Share:

What do you think about the future of AI in critical infrastructure management? How can we make AI even more transparent and reliable? Share your thoughts and experiences in the comments below.
