NEW XO Poll 📊: Can an LLM system be conscious?
Date: May 1st, 2026 7:38 PM Author: oomox
Idea credit: https://xoxohth.com/thread.php?thread_id=5861790&forum_id=2#49853619
Can a system based on a language model be conscious?
(0) No
(1) Yes, but it hasn't happened yet
(2) Yes, and such a system has been created
(3) Other
Discuss.
(http://www.autoadmit.com/thread.php?thread_id=5862594&forum_id=2#49857985)
Date: May 1st, 2026 8:27 PM Author: oomox
(1) for me. I think (but am not sure) that an agent built on an LLM could be conscious in the future.
One of my main criticisms has always been that existing systems can't self-modify. Interestingly, OpenClaw bots can edit their own soul.md files, so maybe we're getting closer (or maybe it just looks like it). They're not updating their internal models, so it's different from neuroplasticity, but they are making decisions about how to act in the future. Even if we did say that rewriting on that level counts as self-modification, though, these bots are seeded with a soul.md written by a human, so I wouldn't buy that the changes demonstrate actual will. More info on soul.md: https://soul.md
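To make concrete what that kind of "self-modification" amounts to, here's a rough sketch: the bot appends notes to its own prompt file, while the model weights underneath never change. (File name comes from the post; everything else — the layout, the helper name — is hypothetical, not actual OpenClaw internals.)

```python
from pathlib import Path

SOUL_FILE = Path("soul.md")  # hypothetical location; real layout depends on the bot framework

def append_lesson(lesson: str) -> None:
    """Append a self-authored note to soul.md.

    Only this text file changes; the underlying model's weights are untouched,
    which is why this is closer to note-taking than neuroplasticity.
    """
    existing = SOUL_FILE.read_text() if SOUL_FILE.exists() else "# Soul\n"
    SOUL_FILE.write_text(existing + f"\n- {lesson}\n")

# The file starts out seeded by a human...
SOUL_FILE.write_text("# Soul\n\n- Be helpful.\n")
# ...and the bot layers its own edits on top of that seed.
append_lesson("Double-check claims before posting.")
print(SOUL_FILE.read_text())
```

The point of the sketch is the asymmetry: the human-written seed and the bot's additions live in the same file, so it's hard to say where "will" would begin.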
The other major limiting factor IMO is that context windows just aren't big enough for actual consciousness to arise. A conscious system would need a much more robust memory than what's available. In theory maybe it could get there with a ton of DB storage. But right now, even if you take the info in the model AND the info stored by an agent, it's just not anywhere near comparable to what's encoded in the human nervous system.
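The "ton of DB storage" idea above can be sketched in a few lines: the context window stays small, but the agent persists notes externally and recalls them by keyword. (Purely illustrative toy, assuming nothing about any real agent framework.)

```python
import sqlite3

# Toy external memory store: the model's context window stays tiny,
# but the agent writes episodic notes to a database and pulls back
# only the relevant ones per interaction.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memory (ts INTEGER, note TEXT)")

def remember(ts: int, note: str) -> None:
    conn.execute("INSERT INTO memory VALUES (?, ?)", (ts, note))

def recall(keyword: str) -> list[str]:
    # Crude substring match; a real system would use embeddings or full-text search.
    rows = conn.execute(
        "SELECT note FROM memory WHERE note LIKE ? ORDER BY ts", (f"%{keyword}%",)
    )
    return [r[0] for r in rows]

remember(1, "user prefers short answers")
remember(2, "thread about LLM consciousness")
print(recall("consciousness"))
```

Even granting something like this at enormous scale, the recalled snippets are still flat text, which is the gap with the human nervous system the post is pointing at.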
THAT SAID, I think it's theoretically possible. Humans' personalities are a combination of our genetics, our experiences, and our interpretations of those experiences: what we remember about them and how that shapes our outlook/future decisions. An AI agent is a combination of its underlying model, its experiences (interactions with the world – right now just text and images, but in the future they could perceive more data about the physical world), and interpretations of those experiences: what it remembers and how that shapes its decisions going forward. In a sufficiently complex system, maybe that leads to a thinking, self-aware being with emergent preferences and desires.
I am not remotely convinced that consciousness can only arise from human or even biological minds. The thought experiment that solidified that for me was when I considered a consciousness spread over galaxies, with nodes communicating via electrical impulses, similar to our nervous system. Of course that is a much, much closer analog to the human brain (that's the whole point) than an LLM, but I still find it a useful intuition pump for people who can't conceive of a non-biological mind. There's also Chalmers' gradual-neuron-replacement thought experiment. But here, too, synthetic consciousness being possible != AI consciousness being possible.
(http://www.autoadmit.com/thread.php?thread_id=5862594&forum_id=2#49858075)
Date: May 1st, 2026 11:36 PM Author: which is what makes time travel possible
I mean, it still works the same way it always did; it's just that now stuff can be automated at more and more scales at once, which lets it do stuff more autonomously. Its understanding of natural language is literally the Chinese Room thought experiment, and so is everything else.
It's designed to always answer prompts in the most conventional way possible, because that's the only way it can work. These systems are powerful because with scaling more and more things start showing up as "latent knowledge" in their memory banks (e.g. mathematical concepts), and because many domains don't need actual creativity or awareness to solve problems; you just need the right abstractions plus banging shit together. But their concepts tend to be very ad hoc, and domains where those concepts can be honed to perfection through reinforcement learning are the exception.
https://old.reddit.com/r/ClaudeCode/comments/1sj9ab7/opus_45_vs_opus_46/
Per the above: test how it handles normal human situations and it can't contextualize anything adequately at all (both models' answers are equally retarded IMO). This is because a language model not only isn't conscious; it's a paradigm that doesn't even attempt to simulate consciousness, unlike e.g. robotics.
(http://www.autoadmit.com/thread.php?thread_id=5862594&forum_id=2#49858683)