Anthropic CEO Dario Amodei says AI's decision-making is still largely a mystery — even to the people building it. His new goal? Create an “MRI for AI” to decode what’s going on inside these models.
The admission marks a rare moment of transparency from a major AI lab about the risks of unchecked progress. It echoes broader concerns about the ethical implications of AI, a topic frequently raised in discussions of AI governance, from proposals for dedicated ethics boards to frameworks for prosocial AI. Understanding how these models make decisions is crucial for building trustworthy systems.
Does Anyone Really Know How AI Works?
“This lack of understanding,” he writes, “is essentially unprecedented in the history of technology.” A recent paper from Google DeepMind on the challenges of building safe AI systems highlights the same complexity Amodei refers to. This difficulty in fully grasping AI's internal workings also feeds the ongoing debate over the many competing definitions of artificial general intelligence.