
Revolutionising Critical Infrastructure: How AI is Becoming More Reliable and Transparent


TL;DR:

  • AI ‘hallucinations’ can lead to serious errors in critical infrastructure management.
  • Researchers propose a four-stage method to improve AI accuracy and transparency.
  • Explainable AI (XAI) tools enhance human understanding and decision-making.

Artificial Intelligence (AI) has become an essential tool for managing critical infrastructure like power stations, gas pipelines, and dams. However, AI systems can sometimes produce inaccurate or unclear results, known as ‘hallucinations’. These errors can lead to significant problems. To tackle this issue, researchers have developed a new method to make AI more reliable and understandable.

The Problem with AI Hallucinations

AI hallucinations often occur due to poor-quality or biased training data. They can also happen when user prompts lack context. Some AI algorithms don’t involve humans in the decision-making process, making it hard to understand how the AI made its predictions. This lack of transparency can lead to wrong decisions, especially in critical infrastructure management.

The Black Box AI Algorithms

Some anomaly detection systems use ‘black box’ AI algorithms. These systems are difficult to understand because their decision-making processes are not clear. This makes it challenging for operators to determine why an AI system identified an anomaly.

A Multi-Stage Approach to Improve AI

Researchers have proposed a four-stage method to make AI more reliable and minimise hallucinations. They focused on AI used for critical national infrastructure (CNI), such as water treatment.

Stage 1: Deploying Anomaly Detection Systems

The researchers used two anomaly detection systems:

  • Empirical Cumulative Distribution-based Outlier Detection (ECOD)
  • Deep Support Vector Data Description (DeepSVDD)

Both systems ran efficiently and detected a range of attack scenarios, although ECOD achieved slightly higher recall and F1 scores than DeepSVDD. The F1 score is the harmonic mean of precision (how many flagged readings are genuine anomalies) and recall (how many genuine anomalies are actually flagged).
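As an illustration only, and not the paper's exact pipeline, the sketch below shows how an ECOD detector might be fitted and scored, assuming the open-source PyOD library and synthetic sensor data; DeepSVDD in PyOD exposes the same fit/labels interface.

```python
# Illustrative only: fit an ECOD anomaly detector on synthetic "sensor" data
# and report precision, recall, and F1. Assumes the PyOD and scikit-learn
# libraries; the data and contamination rate are made up for the example.
import numpy as np
from pyod.models.ecod import ECOD
from sklearn.metrics import precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(950, 5))     # routine readings
attacks = rng.normal(4.0, 1.0, size=(50, 5))     # injected attack-like outliers
X = np.vstack([normal, attacks])
y_true = np.array([0] * 950 + [1] * 50)          # 1 = anomaly

detector = ECOD(contamination=0.05)              # DeepSVDD offers the same fit/labels_ API in PyOD
detector.fit(X)
y_pred = detector.labels_                        # 0 = inlier, 1 = outlier

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("F1:       ", f1_score(y_true, y_pred))
```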

Stage 2: Combining with Explainable AI (XAI)

The researchers combined these detectors with Explainable AI (XAI) tools. These tools help humans understand AI results better. For instance, Shapley Additive Explanations (SHAP) allows users to see how different features of a machine learning model contribute to its predictions. This improves human decision-making.
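As a hedged sketch, again not the authors' exact setup, SHAP's model-agnostic KernelExplainer can be pointed at a detector's anomaly score so an operator can see which sensor features pushed a reading towards 'anomalous'. The feature names below are hypothetical, and the detector and data carry over from the previous sketch.

```python
# Illustrative only: attribute one flagged reading's anomaly score to individual
# features with SHAP's KernelExplainer. Feature names are hypothetical; `detector`
# and `X` come from the previous sketch.
import shap

feature_names = ["flow_rate", "pressure", "ph", "turbidity", "tank_level"]

background = shap.sample(X, 100)                          # small background sample
explainer = shap.KernelExplainer(detector.decision_function, background)

suspect = X[-1:]                                          # one reading flagged as anomalous
shap_values = explainer.shap_values(suspect)

for name, value in zip(feature_names, shap_values[0]):
    print(f"{name}: {value:+.3f}")                        # positive = pushes the score towards anomalous
```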

Stage 3: Human Oversight and Accountability

When operators are given clear explanations of AI-based recommendations, they can question the algorithm's output rather than accept it blindly, and make more informed decisions about CNI.

Stage 4: Scoring System for AI Explanations

A scoring system measures the accuracy of AI explanations. This gives human operators more confidence in AI-based insights. Sarad Venugopalan, co-author of the study, explained that this system depends on the AI model, the application use-case, and the correctness of the input values.
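The paper's scoring scheme is not reproduced here; purely as an illustration of the idea, one common way to sanity-check an explanation is a deletion-style fidelity test: mask the features the explanation ranks as most influential and see how far the anomaly score falls. The helper below is hypothetical and reuses the detector, suspect reading, and SHAP values from the earlier sketches.

```python
# Purely illustrative, not the study's method: a deletion-based fidelity score.
# If masking the top-ranked features sharply lowers the anomaly score, the
# explanation is treated as more trustworthy.
import numpy as np

def deletion_fidelity(detector, x, attributions, baseline, k=2):
    """Drop in anomaly score after replacing the k most influential features with baseline values."""
    original = detector.decision_function(x)[0]
    top = np.argsort(-np.abs(attributions))[:k]          # indices of the k largest attributions
    masked = x.copy()
    masked[0, top] = baseline[top]                       # e.g. column means of normal readings
    return original - detector.decision_function(masked)[0]

baseline = X[:950].mean(axis=0)                          # means of the normal data
print("fidelity:", deletion_fidelity(detector, suspect, shap_values[0], baseline))
```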

Improving AI Transparency

Sarad Venugopalan highlighted that this method allows plant operators to check if AI recommendations are correct. He said, “This is done via message notifications to the operator and includes the reasons why it was sent. It allows the operator to verify its correctness using the information provided by the AI, and resources available to them.”

Rajvardhan Oak, an applied scientist at Microsoft, praised the research. He said, “With explanations attached to AI model findings, it is easier for subject matter experts to understand the anomaly, and for senior leadership to confidently make critical decisions. For example, knowing exactly why certain web traffic is anomalous makes it easier to justify blocking or penalizing it.”


Eerke Boiten, a cybersecurity professor at De Montfort University, also sees the benefits. He said, “This research is not about reducing hallucinations, but about responsibly using other AI approaches that do not cause them.”

The Future of AI in Critical Infrastructure

This research shows a promising future for AI in critical infrastructure management. By making AI more transparent and reliable, we can ensure that human operators make better decisions. This will help keep our critical infrastructure safe and efficient.

Comment and Share:

What do you think about the future of AI in critical infrastructure management? How can we make AI even more transparent and reliable? Share your thoughts and experiences in the comments below. Don’t forget to subscribe for updates on AI and AGI developments.
