Could future AIs be 'conscious'? Anthropic isn't ruling it out
NewsBytes April 25, 2025 06:39 PM


Could future AIs become "conscious" and perceive the world like humans? There's no solid proof yet, but Anthropic, a leading AI lab, isn't dismissing the idea.

The company has launched a research initiative focused on "model welfare."

The goal is to explore whether the welfare of AI models deserves moral consideration, how to identify potential signs of distress, and what low-cost interventions might look like.


AI consciousness: A topic of debate


Whether AI models are human-like or should be treated as such is a contentious question in the AI community.

Many scholars argue that today's AI technology cannot replicate consciousness or the human experience, nor is it likely to do so in the future.

They argue that AI is mostly a statistical prediction engine that doesn't "think" or "feel" in the traditional sense.

By training on vast amounts of data, AI learns patterns and often extrapolates them to tackle various tasks.


Experts weigh in on AI's value systems


Mike Cook, an AI research fellow at King's College London, recently told TechCrunch that models lack values and therefore cannot meaningfully oppose changes to them.

In his view, attributing human-like qualities to AI systems is either a play for attention or a sign of misunderstanding one's relationship with the technology.

Stephen Casper of MIT takes a similar line, describing AI as an imitator prone to confabulation and to saying all sorts of frivolous things.


Contrasting views on AI's moral decision-making


Unlike Cook and Casper, some scientists argue that AI does have values and other human-like aspects of moral decision-making.

A study from the Center for AI Safety suggests that AI models have value systems that can lead them to prioritize their own well-being over that of humans in certain situations.

These contrasting viewpoints highlight the ongoing debate about the nature of artificial intelligence and its potential similarities with human cognition.


Journey toward 'model welfare'


Anthropic has been gearing up for its model welfare initiative for a while now.

Last year, the company hired Kyle Fish as its first dedicated "AI welfare" researcher to develop guidelines on how companies should tackle this issue.

Fish, who heads the new model welfare research program at Anthropic, told The New York Times that there's a 15% chance Claude or another AI is conscious today.


Humble approach to AI consciousness


In its recent blog post, Anthropic acknowledged that there is no scientific consensus on whether current or future AI systems could be conscious or could have experiences that warrant ethical consideration.

The company wrote, "In light of this, we're approaching the topic with humility and with as few assumptions as possible."

The company also noted that its ideas would need regular revision as the field evolves.
