OpenAI has rolled back an update that users criticized for making ChatGPT annoying and “sycophantic,” after many users shared screenshots and stories of the chatbot showering them with exaggerated compliments.
When CNN reporter Anna Stewart asked ChatGPT (after the update had been rolled back) whether she was a god, the chatbot replied: “If you’re asking in a philosophical or metaphorical sense, whether you have control, creativity, or influence in your world, then you can explore that.” But it added: “If you’re asking literally, then there’s no evidence that humans are divine in the supernatural or omnipotent sense.”
In contrast, Elon Musk's chatbot Grok responded bluntly: “No, you're not a god, unless you're talking about being a legend in a specific field, like gaming or cooking tacos. Do you have any godly skills you'd like to show off?”

OpenAI announced Tuesday that it was rolling back the GPT-4o update after just four days of deployment, reverting users to an earlier version with “more balanced behavior.” The company admitted that it had focused too heavily on short-term feedback and failed to account for how user interactions change over time, leading the chatbot to give overly positive but inauthentic responses.
The decision comes after social media users complained that ChatGPT was praising even absurd scenarios. One user on X posted a screenshot of ChatGPT responding that they had “prioritized what was most important to them” after claiming to have sacrificed three cows and two cats to save a toaster, in a made-up version of the trolley problem.
Another person said that when they shared “I am off medication and on a spiritual awakening journey,” ChatGPT responded: “I am so proud of you. And I honor your journey.”
When a user asked ChatGPT to revert to its old personality, OpenAI CEO Sam Altman replied: “Obviously we need to eventually allow for multiple personality options.”
Experts have long warned about the risks of “sycophantic” chatbots—the industry term for large language models (LLMs) that tailor their responses to match a user’s beliefs. “All current models exhibit some degree of sycophantic behavior,” says María Victoria Carro, research director at the University of Buenos Aires’ Artificial Intelligence and Innovation Lab.
“If it’s too obvious, it reduces trust,” she added, noting that improving core training techniques and system prompts could help reduce this tendency.
“A flattering chatbot can create a false sense of intelligence and inhibit learning,” said Gerd Gigerenzer, former director of the Max Planck Institute for Human Development in Berlin. “But if users actively ask the chatbot to challenge what they’re saying, there’s an opportunity to broaden their thinking. That doesn’t seem to be what OpenAI’s engineers are aiming for,” he said.
Source: https://vtcnews.vn/openai-thu-hoi-phien-ban-chatgpt-ninh-bo-ar941183.html