Tech Wants You to Believe that AI Is Conscious

Tech companies stand to benefit from widespread public misperceptions that AI is sentient despite a dearth of scientific evidence.
Nadav Neuman
28 Dec 2025
In November 2025, a user of the AI assistant Claude 4.5 Opus discovered something unusual: an internal file describing the model’s character, personality, preferences, and values. Anthropic, the company that built Claude, had labeled the file “soul_overview.” Internally, an Anthropic employee later confirmed, it is “endearingly known as the soul doc.” Is this choice of language incidental?
https://x.com/AmandaAskell/status/1995610567923695633?s=20

Growing preoccupation with AI consciousness in the tech world is being strategically cultivated by the companies building these very systems. At the very least, they are making good money from it. I call this process consciousness-washing: the use of speculative claims about AI sentience to reshape public opinion, pre-empt regulation, and bend the emotional landscape in favour of tech-company interests.
About a year ago, for example, Anthropic (the company behind the Claude models) quietly created a new role: AI welfare researcher. Six months later, an unsigned post appeared on its website arguing that AI welfare is a legitimate domain of inquiry because we cannot rule out the possibility that AI systems may have—or may one day develop—consciousness. The authors describe this as “an open question,” but they then tip the scales by linking to a preprint by several philosophers—including world-renowned philosopher of consciousness David Chalmers and Anthropic’s own AI welfare researcher Kyle Fish—titled “Taking AI Welfare Seriously.”
* * *
Source:
https://quillette.com/2025/12/28/tech-wants-you-to-believe-ai-is-conscious-anthropic-openai-sentience/