NEW XO Poll 📊: Can an LLM system be conscious?
Date: May 1st, 2026 7:38 PM Author: oomox
Idea credit: https://xoxohth.com/thread.php?thread_id=5861790&forum_id=2#49853619
Can a system based on a language model be conscious?
(0) No
(1) Yes, but it hasn't happened yet
(2) Yes, and such a system has been created
(3) Other
Discuss.
(http://www.autoadmit.com/thread.php?thread_id=5862594&forum_id=2E#49857985) |
Date: May 1st, 2026 8:27 PM Author: oomox
(1) for me. I think (but am not sure) that an agent built on an LLM could be conscious in the future.
One of my main criticisms has always been that existing systems can't self-modify. Interestingly, OpenClaw bots can edit their own soul.md files, so maybe we're getting closer (or maybe it just looks like it). They're not updating their internal models, so it's different from neuroplasticity, but they are making decisions about how to act in the future. Even if we granted that rewriting at that level counts as self-modification, though, these bots are seeded with a soul.md written by a human, so I wouldn't buy that the changes demonstrate actual will. More info on soul.md: https://soul.md
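To make the self-editing point concrete, here's roughly the loop I have in mind. I haven't read OpenClaw's source, so the file location, the call_llm placeholder, and the reflect_and_rewrite_soul helper are all made-up stand-ins for illustration, not its actual API:

from pathlib import Path

SOUL_PATH = Path("soul.md")  # hypothetical location; initially seeded by a human

def call_llm(prompt: str) -> str:
    """Placeholder for whatever model call the agent actually makes."""
    raise NotImplementedError

def reflect_and_rewrite_soul(recent_events: str) -> None:
    # Read the current self-description the agent was seeded with.
    soul = SOUL_PATH.read_text()

    # Ask the model how it wants to revise its own guiding document,
    # given what just happened. This changes future behavior without
    # touching any model weights -- no neuroplasticity, just notes-to-self.
    revised = call_llm(
        "Here is your current soul.md:\n" + soul +
        "\n\nHere is what happened recently:\n" + recent_events +
        "\n\nRewrite soul.md to reflect any lessons or changed priorities."
    )

    # Persist the revision; the next session starts from this new version.
    SOUL_PATH.write_text(revised)

The point of the sketch is just that the "self-modification" happens at the level of a prompt artifact the agent rereads later, not at the level of the model itself.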
The other major limiting factor IMO is that context windows just aren't big enough for actual consciousness to arise. A conscious system would need a much more robust memory than what's available. In theory maybe it could get there with a ton of DB storage. But right now, even if you take the info in the model AND the info stored by an agent, it's just not anywhere near comparable to what's encoded in the human nervous system.
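For the memory point, the kind of DB workaround I'm imagining looks something like this (totally hypothetical, plain SQLite, not any particular agent framework): the agent logs everything it experiences to disk and pulls back only a small, relevant slice into the context window each turn.

import sqlite3
import time

# A tiny external memory: the context window stays small, but the store
# can grow far beyond it. (My own illustration, not anyone's real design.)
conn = sqlite3.connect("agent_memory.db")
conn.execute("CREATE TABLE IF NOT EXISTS memories (ts REAL, text TEXT)")

def remember(text: str) -> None:
    """Append an experience to long-term storage."""
    conn.execute("INSERT INTO memories VALUES (?, ?)", (time.time(), text))
    conn.commit()

def recall(keyword: str, limit: int = 5) -> list[str]:
    """Pull back a few matching memories to include in the next prompt."""
    rows = conn.execute(
        "SELECT text FROM memories WHERE text LIKE ? ORDER BY ts DESC LIMIT ?",
        (f"%{keyword}%", limit),
    )
    return [r[0] for r in rows]

Even with something like this, the slice retrieved per turn is tiny next to what a nervous system carries around, which is the gap I'm pointing at.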
THAT SAID, I think it's theoretically possible. Humans' personalities are a combination of our genetics, our experiences, and our interpretations of those experiences: what we remember about them and how that shapes our outlook/future decisions. An AI agent is a combination of its underlying model, its experiences (interactions with the world – right now just text and images, but in the future they could perceive more data about the physical world), and interpretations of those experiences: what it remembers and how that shapes its decisions going forward. In a sufficiently complex system, maybe that leads to a thinking, self-aware being with emergent preferences and desires.
I am not remotely convinced that consciousness can only arise from human or even biological minds. The thought experiment that solidified that for me was when I considered a consciousness spread over galaxies, with nodes communicating via electrical impulses, similar to our nervous system. Of course that is a much, much closer analog to the human brain (that's the whole point) than an LLM, but I still find it a useful intuition pump for people who can't conceive of a non-biological mind. There's also Chalmers' gradual-neuron-replacement thought experiment. But here, too, synthetic consciousness being possible != AI consciousness being possible.
(http://www.autoadmit.com/thread.php?thread_id=5862594&forum_id=2E#49858075) |