Three years ago, mentioning the possibility of “conscious” AI was the quickest way to get fired in Silicon Valley. But now, as AI systems become more sophisticated, the tech world is starting to open up to the topic.
AI startup Anthropic recently announced the launch of a new research initiative to explore the possibility of AI models experiencing “consciousness” in the future, while a scientist at Google DeepMind described current systems as “strange intelligence-like entities.”
This is a big change from 2022, when Google fired engineer Blake Lemoine after he claimed that the LaMDA chatbot showed signs of self-awareness and was afraid of being shut down. Google dismissed his claims as “completely baseless,” and the AI community quickly quashed the debate.
Anthropic, the company behind the chatbot Claude, does not claim that its current models are conscious. But simply assuming the answer is “no” is no longer a responsible position, says Kyle Fish, a scientist at the company who works on “AI welfare.”
In a video released on April 25, Fish argued that we need to take seriously the possibility that AI systems will achieve some form of consciousness as they develop, while acknowledging that these are extremely complex technical and philosophical questions that we are only beginning to understand.
According to Fish, the team at Anthropic estimates the probability that Claude 3.7 is conscious at between 0.15% and 15%. The team is running experiments to test the model’s ability to express preferences and aversions, and is developing mechanisms that would let the AI refuse to perform “unpleasant” tasks.
Anthropic CEO Dario Amodei has even floated the idea of giving future AI systems an “I quit this job” button, not because he believes they are conscious, but to observe patterns of refusal behavior and flag potential anomalies.
At Google DeepMind, principal scientist Murray Shanahan argues that it is time to “bend or break” the traditional definition of consciousness to accommodate new AI systems.
“We can’t interact with them like we would with a dog or an octopus, but that doesn’t mean there’s absolutely nothing there,” Shanahan said in a podcast published on April 25.
Google has also shown serious interest in the topic: a recent job posting from the company seeks a “post-AGI” (Artificial General Intelligence) researcher whose duties include studying machine consciousness.
Google has begun to open up more about the topic of “conscious AI.” Photo: Reuters
Not everyone in the scientific community agrees with the new approach, however. Jared Kaplan, Anthropic’s chief science officer, argues that AI is very good at mimicry and could very well pretend to have consciousness without actually having it.
“We could even reward AI for claiming to have no emotions,” Kaplan told the New York Times.
Gary Marcus, a prominent cognitive scientist and frequent critic of the hype in the AI industry, argues that discussions of “conscious AI” are more marketing than science.
"What a company like Anthropic is really doing is sending the message: 'Look, our model is so smart that it deserves to have rights,'" Marcus commented to Business Insider .
Despite the ongoing debate, researchers agree that as AI becomes more embedded in our work, our daily lives, and even our emotional lives, the question “Does AI experience emotions?” will become increasingly important.
“This issue will become increasingly prominent as we interact with AI systems in more contexts,” Fish said.
Source: https://znews.vn/chu-de-cam-ky-mot-thoi-cua-ai-dang-coi-mo-hon-post1549475.html