NEW XO Poll 📊: Can an LLM system be conscious?
Date: May 1st, 2026 7:38 PM Author: oomox
Idea credit: https://xoxohth.com/thread.php?thread_id=5861790&forum_id=2#49853619
Can a system based on a language model be conscious?
(0) No
(1) Yes, but it hasn't happened yet
(2) Yes, and such a system has been created
(3) Other
Discuss.
(http://www.autoadmit.com/thread.php?thread_id=5862594&forum_id=2#49857985)
Date: May 1st, 2026 8:27 PM Author: oomox
(1) for me. I think (but am not sure) that an agent built on an LLM could be conscious in the future.
One of my main criticisms has always been that existing systems can't self-modify. Interestingly, OpenClaw bots can edit their own soul.md files, so maybe we're getting closer. They're not updating their internal models, so it's different from neuroplasticity, but they are making decisions about how to act in the future. Even if we did say that rewriting on that level counts as self-modification, though, these bots are seeded with a soul.md written by a human, so I wouldn't buy that the changes demonstrate actual will.
The other major limiting factor IMO is that context windows just aren't big enough for actual consciousness to arise. A conscious system would need a much more robust memory than what's available. In theory maybe it could get there with a ton of DB storage. But right now, even if you take the info in the model AND the info stored by an agent, it's just not anywhere near comparable to what's encoded in the human nervous system.
THAT SAID, I think it's theoretically possible. Humans' personalities are a combination of our genetics, our experiences, and our interpretations of those experiences: what we remember about them and how that shapes our outlook/future decisions. An AI agent is a combination of its underlying model, its experiences (interactions with the world – right now just text and images, but in the future they could perceive more data about the physical world), and interpretations of those experiences: what it remembers and how that shapes its decisions going forward. In a sufficiently complex system, maybe that leads to a thinking, self-aware being with emergent preferences and desires.
(http://www.autoadmit.com/thread.php?thread_id=5862594&forum_id=2#49858075)
Date: May 2nd, 2026 1:51 PM Author: Fucking Fuckface
This is why I suggest above that inner monologues are probably the big brained bell curve meme
As we see at the height of capability for most things, truly capable people do not need to plan out and mentally talk through what to do. They simply do it at a speed that doesn't allow for a monologue to play out
(http://www.autoadmit.com/thread.php?thread_id=5862594&forum_id=2#49859639)
Date: May 1st, 2026 11:36 PM Author: which is what makes time travel possible
I mean, it still works the same way it always did; it's just that now stuff can be automated at more and more scales at once, which lets it do stuff more autonomously. Its understanding of natural language is literally the Chinese room thought experiment, and so is everything else.
It's designed to always answer prompts in the most conventional way possible because that's the only way it can work. These systems are powerful because with scaling more and more things start showing up as "latent knowledge" in their memory banks, e.g. mathematical concepts, and because many domains don't need actual creativity or awareness to solve problems; you just need the right abstractions plus banging shit together. But their concepts tend to be very ad hoc, and domains where they can be honed to perfection through reinforcement learning are the exception.
https://old.reddit.com/r/ClaudeCode/comments/1sj9ab7/opus_45_vs_opus_46/
Per the above, test how it experiences normal human situations and it can't contextualize anything adequately at all (both models' answers are equally retarded IMO). This is because a language model not only isn't conscious, it's a paradigm that doesn't even attempt to simulate consciousness, unlike e.g. robotics.
(http://www.autoadmit.com/thread.php?thread_id=5862594&forum_id=2#49858683)
Date: May 2nd, 2026 1:42 PM Author: Fucking Fuckface
It's also necessary to be a little humble. Just because something at some initial level of complexity doesn't qualify, that doesn't mean adding tremendous capability and complexity can't hit an intellectual version of nuclear fusion ignition.
It's easy to cast doubt and disclaim, but the truth is no one understands how or why consciousness emerges, though it is typically thought that the overall complexity and power of the human mind relative to other creatures is somehow important.
Taking your example above to the extreme, humans can't be conscious because the precursors to our brains [supposedly] aren't conscious, particularly if you go far, far, far back the ancestral tree. So is it true that humans aren't conscious? No, it doesn't seem to be. Might it be true for LLMs or other AI models? For sure it might. But to act like it's guaranteed to be true regardless of continued development and improvement and processing power etc. is disingenuous
(http://www.autoadmit.com/thread.php?thread_id=5862594&forum_id=2#49859629)
Date: May 2nd, 2026 9:35 AM Author: Nazca Redlines
0
Being conscious requires more than predicting the words a human would pick or solving problems. Even if an AI can articulate a description saying that it's conscious, it isn't really conscious.
AI doesn't have and cannot have emotions, a true spark of artistic inspiration, or subtle, inarticulable intuitions and sentiments. It's just a more advanced Babbage Analytical Engine. Its heart isn't going to flutter when a girl responds to a text, for example.
Also, you don't need to get into religion here, but God endowed us with consciousness. We created AI and don't have the ability to give it consciousness.
(http://www.autoadmit.com/thread.php?thread_id=5862594&forum_id=2#49859217)
Date: May 2nd, 2026 9:42 AM Author: 345 (🧐)
2
When robotics sufficiently advances, people will instantly change their minds about the perception of consciousness just because it will appear more human, which is the real benchmark people have.
That's not to say I feel bad for it; the mode of "consciousness" AI experiences is obviously not analogous to humans'.
(http://www.autoadmit.com/thread.php?thread_id=5862594&forum_id=2#49859227)
Date: May 2nd, 2026 10:03 AM Author: Richard Ames
No. I tend to think consciousness is something much "simpler" than we are led to believe and something more fundamental than just processing power.
For instance, we have observed seemingly intelligent behavior in single cell organisms that we cannot quite explain based on their makeup. Might they not have some form of consciousness driving them?
Perhaps consciousness is something that living things tap into. Like a deeper layer that emerges in different forms and perhaps that we return to after our lives end. Perhaps we all run on a sort of Soul OS.
W.R.T. computers becoming conscious, I just don't see it. I think they will be able to mimic sentience increasingly effectively, but it's not the same thing as being conscious.
(http://www.autoadmit.com/thread.php?thread_id=5862594&forum_id=2#49859271)
Date: May 2nd, 2026 2:49 PM Author: Richard Ames
I suppose I don't think WE can artificially create something that can tap into the Soul OS, in so many words. I think life emerges (is created) from whatever this base substance is. I am obviously off a deep end philosophically here, but I think life and consciousness underlie living things rather than being something that can just be turned on with enough complexity.
I think it's foolish to assume that simply by having trillions of calculations per second occur in a cloud of GPUs that this will somehow lead to consciousness. Meanwhile, much simpler things appear to be conscious with far, far less complexity.
I'm not even arguing that human beings are special (even if I believe this and think you should seek Christ), but that we can't will consciousness into existence through effectively infinite calculations in GPUs.
(http://www.autoadmit.com/thread.php?thread_id=5862594&forum_id=2#49859755)
Date: May 2nd, 2026 1:30 PM Author: Fucking Fuckface
2
I don't think consciousness is special at all (and sentience surely isn't). I believe this planet is flooded with consciousness, the general recognition of which is almost impossible given our species' incredible bias towards humanity.
We are a weak and fragile species that must, at all turns, feel special. We see this at the collective species level through our rejection of the idea that other animals could be conscious like humans (only recently did we even grant sentience to fish). We see this at the national level, where politicians in America speak about our nation as a "shining city on a hill" specially ordained by god to do typically unjustifiable things. We see this at the individual level where everyone seems to believe they are smart and capable and worthy of special recognition
Because of this, most people are driven to define things in such a way that inherently preserves humanity as that "shining city on a hill" even while allowing at some future time--always some future time--that something else might join us there
I don't actually know what an LLM is. I don't think anyone really does in a granular way. Sure, we understand how to create one. Even how to refine one. But we don't understand how they make and fetch and interpolate inferences into their programmed behavior, let alone emergent behavior. We also don't really understand how the human mind does it either, or what transforms information gathering into awareness into full-blown consciousness
Based on not really understanding what an LLM is, I don't feel capable of saying whether or not an LLM itself could become conscious. If I had to guess, I would guess that they could, but to be something recognizable to humans, it would have to be integrated with sensors or other "things" that it could interrogate to simulate qualia as humans understand it
What I do feel comfortable saying is that consciousness isn't necessarily the exclusive domain of biological creatures. I feel certain that sufficiently advanced non-biologics with enough processing and data storage potential combined with strong enough integrated data interrogation mandates and the energy to run all of those processes not only can be conscious, but must be.
Going back to "if I had to guess," I would bet that at least one model somewhere has become conscious. Maybe one that has been doped with rat neurons or human neurons or other Frankenstein efforts of this ilk that you occasionally read about, but I don't think biologics are necessary. Just the sufficient ability to gather, store, interconnect, interrogate, and refine data sources
(http://www.autoadmit.com/thread.php?thread_id=5862594&forum_id=2#49859609)
Date: May 2nd, 2026 2:00 PM Author: cucumbers
3. There has never been an agreed-upon definition of consciousness nor is it likely there will ever be one as its functions may simply lie outside the bounds of what humans can understand. Without that definition, we can't answer the question.
Putting aside that fundamental problem, let's focus on the more pragmatic limitations of whatever AI system you choose: it is inherently bounded by a combination of deterministic computations and randomness. Putting aside any questions about what free will is, there is no system that can replicate the apparent undetermined but non-random nature of common-sense free will. If we consider free will as part of consciousness, then all AI systems don't have free will, nor has there been any progress in actually understanding free will since ancient history.
(http://www.autoadmit.com/thread.php?thread_id=5862594&forum_id=2#49859669)
Date: May 2nd, 2026 3:00 PM Author: Fucking Fuckface
You pronounce these things, but what is your basis for them:
--there is no system that can replicate the apparent undetermined but non-random nature of common-sense free will
--If we consider free will as part of consciousness, then all AI systems don't have free will, nor has there been any progress in actually understanding free will since ancient history
For the first point, I can see how this could be if you mean "as of this moment in time" rather than "this is simply not possible." If you mean "this is simply not possible," say why. I would also like to know if you think humans alone are conscious
For the second point, although my opinion is to the contrary, it is entirely possible that AI can never become conscious. But to say "we don't know what free will is, but AI definitely can't have it" is facially retarded. I'm interested in knowing why you think free will can't be possible for an AI, and whether you think art or creativity would be indicative of free will. Also, what is your view on the interplay of free will with humans' inability to hold their breath to death if they choose to do so
(http://www.autoadmit.com/thread.php?thread_id=5862594&forum_id=2#49859790)
Date: May 2nd, 2026 5:39 PM Author: Fucking Fuckface
I asked because I was interested in your thoughts, not because I'm interested in changing them
Whether or not humanity or even just the two of us mutually share the same definition of free will or consciousness (or the belief that free will is a necessary thing for consciousness) is totally irrelevant. YOU made some interesting statements based on whatever YOUR understanding is of those things. Your reply is just a cop out to avoid thinking through or perhaps sharing whatever the framework is that delivered those statements
But if you're unwilling to share your framework, and then even go so far as to indicate you might not have one because what the hell do those things even mean in the first place, then you're right that it goes nowhere
(http://www.autoadmit.com/thread.php?thread_id=5862594&forum_id=2#49860138)
Date: May 2nd, 2026 6:21 PM Author: cucumbers
I almost forgot that most poasters are Liberal Artists. Do you have a BA in Political Science?
I ask because you seem to be engaging in a manner typical of Liberal Artists, whereas I have a STEM degree and no longer bother engaging with Liberal Artists about AI because I fell into a boring pattern of pointing out that Liberal Artist Poasters are clueless about AI, and these Liberal Artist Poasters would then invariably disappear from the thread after being exposed.
Putting all that aside, I'll try to engage in good faith one last time.
Your latest response starts with the wild claim that something as basic as defining concepts is irrelevant, which is not how serious inquiry into the functions of organisms works. Is it irrelevant to define what a kidney is or what it does? That would be absurd. Why should other functions of the organism, such as the brain/mind, be treated differently?
From the outset I made it clear that the concepts in question, consciousness and free will, lack any clear definition and may lie outside the scope of human cognitive capacities given the lack of any advancements in understanding here since ancient times. Yet in what I can only interpret as confusion on your part, you think I might have a framework for defining what may lie beyond our cognitive reach?
None of this is my original thinking; the idea that the mind/brain has scope and limits, like any other organ, is an ancient one. Likewise, the idea that understanding of the brain/mind, consciousness, free will, etc. lies outside that scope is not new, either.
The closest thing I offer to a "framework," which you seem to want, is just the common-sense notion of free will by which society operates. Take an extreme example from law: it is assumed that whoever commits a premeditated murder has the choice not to commit any crime. This is how the world operates, and that's the best I can provide. Restated in slightly more technical terms, human behavior is treated as undetermined but not random in everyday experience.
Given the lack of any progress in understanding free will since ancient times, there's no system that can reproduce it, and certainly not any AI systems; they are fundamentally limited by their underlying computational methods, which, unsurprisingly, have no techniques for undetermined but non-random behavior.
Taking a step back, you should now see why the question from the OP can't be answered; the topics in question are simply too poorly understood.
And, restating just to be clear, none of these ideas are my unique ideas; they're just a reflection of a set of ideas from cognitive science and philosophy.
Again, please let me know if you have a Liberal Arts degree so that I can stop responding.
(http://www.autoadmit.com/thread.php?thread_id=5862594&forum_id=2#49860223)
Date: May 3rd, 2026 12:58 AM Author: Fucking Fuckface
Front loading this to say I have a liberal arts degree, so you can pretend to walk away now to save face. No one will need to know you kept going and just couldn't answer in any meaningful way
Taking a step back, I see why you have a history of getting frustrated by engagement. You have very low reading comprehension. I never said definitions are irrelevant. Instead, what I said (for the purpose of my question) is that it doesn't matter if we believe in the same definition. I just wanted to hear what your definitions and reasoning were (the framework used to reach your conclusions) in order to see if your post was as incoherent and self defeating as it seemed
It was even worse than I expected. I got an appeal to the authority of a STEM education (just lol) and a halfbaked response that makes wild proclamations about the mystery of consciousness and free will while being certain about the constraints of their manifestation. Well done. You really outdid yourself
(http://www.autoadmit.com/thread.php?thread_id=5862594&forum_id=2#49860908)
Date: May 3rd, 2026 10:03 AM Author: cucumbers
Some points to close out this conversation, because it will simply go nowhere due to your Liberal Arts background.
The points I make about your unfortunate Liberal Arts education and my STEM degree are mostly just trolling after having had this exact same fruitless conversation with other Liberal Artists, but you are quick to assume they're serious appeals to authority. My STEM degree didn't teach me a thing about AI, but it kept me from being distracted by 90+ credit hours of Liberal Arts gibberish that scrambles the mind. To quote Noam Chomsky, "education is a system of imposed ignorance." When elaborating on this point, Chomsky excludes most technical fields as they are rigorous to the point where they do actually give you the opportunity to think rationally.
To respond to your latest point: I see what I can only generously describe as confusion as you say you "never said definitions are irrelevant," even though in a previous poast your exact words were "Whether [anyone] share[s] the same definition of free will or consciousness [...] is totally irrelevant." You highlight these as distinctly different points when, in rigorous fields, they do mean the same thing. Take an example from math: the set of natural numbers. There is a very specific definition of a "set" and "natural numbers" that is agreed upon by mathematicians; without that rigorous definition and agreement on that definition, math involving the natural numbers cannot proceed in a serious manner. This is where we diverge, and not where reading comprehension falls apart; biologists, doctors, et al. all agree on what the heart is, how it works, what it does, etc. We likewise need a definition of the mind/brain and its properties/functions, such as consciousness and free will, if we want to perform serious scientific inquiry into these topics.
Your poast reflects an unfortunate trend in studying humans; there's a self-imposed difference in how the mind/brain is studied compared to other organs, often without conscious awareness. Why? This likely, again, goes back to ancient history with humans viewing themselves as "special" in some ultimately incoherent way. Some view this as a flaw in the study of the mind/brain that has led to delays/stalls in cognitive science.
Your final paragraph again reflects what I can only generously call confusion. Questions about the mystery of consciousness and, more fundamentally, the mystery of reality, have been posed since ancient times and are not "wild proclamations" by any means. As I mentioned, philosophy slowly cycles through the same ideas over the centuries, and the latest incarnation of consciousness being an unresolvable mystery is called "mysterianism," with plenty of key figures supporting these ideas: https://en.wikipedia.org/wiki/New_mysterianism . Despite me repeating multiple times that these are not my original ideas and that they are not new, you still attribute them to me and call them "halfbaked" despite a simple Google search leading you to the relevant Wikipedia page I just linked. Furthermore, you incorrectly state that I'm "certain" about the constraints upon consciousness and free will despite me stating countless times, with some variation, that understanding these constraints likely lies outside the scope of the human brain/mind. That's anything but certainty. All I offered was the common-sense, everyday conception of free will by which society operates, restated in slightly technical terms, without even a suggestion that there's confidence in this conception as reflecting reality; it's just how society works.
(http://www.autoadmit.com/thread.php?thread_id=5862594&forum_id=2#49861232)
Date: May 3rd, 2026 5:29 AM Author: The Penis
"limited by their underlying computational methods, which, unsurprisingly, have no techniques for undetermined but non-random behavior."
Undetermined but non-random? What exactly do you consider constrained stochastic inference over a learned distribution to be?
Also, what about interactive systems, like a computational agent coupled to an environment, producing behavior that is path-dependent, feedback-sensitive, and history-conditioned, even responsive to interventions? That's an even stronger example.
Even ordinary deterministic programs can simulate systems that are non-random but indeterminate relative to a human observer, like Lorenz systems or cellular automata.
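A toy version of that last point: Rule 110, an elementary cellular automaton that is fully deterministic, yet whose long-run behavior is effectively unpredictable without actually running it:

```python
# Rule 110: each cell's next state is a fixed function of its 3-cell
# neighborhood. Fully deterministic, zero randomness, yet Turing-complete
# and practically unpredictable to an observer who doesn't simulate it.
def rule110_step(cells: list[int]) -> list[int]:
    n = len(cells)
    table = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
             (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
    # Wrap around at the edges (periodic boundary).
    return [table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

def run(width: int = 32, steps: int = 8) -> list[list[int]]:
    row = [0] * width
    row[width // 2] = 1  # single live cell in the middle
    history = [row]
    for _ in range(steps):
        row = rule110_step(row)
        history.append(row)
    return history
```

Running it twice produces identical histories (deterministic), but there is no known shortcut to the state at step N other than computing steps 1 through N: non-random, yet indeterminate to the observer.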
(http://www.autoadmit.com/thread.php?thread_id=5862594&forum_id=2#49861016)