How o3 and Grok 4 Accidentally Vindicated Neurosymbolic AI
Date: July 13th, 2025 5:13 PM
Author: .,,,.,.,.,.,.,,,,..,,..,.,.,.,
He is always eager to declare victory from a few weak, recent datapoints in a fast-moving field. There is now a long history of him excitedly pointing out problems with current models that are resolved by the next generation of connectionist models. I predict the mechanics of these tools are learnable, such that anything currently “neurosymbolic” can be learned end to end by a connectionist model.
(http://www.autoadmit.com/thread.php?thread_id=5749924&forum_id=2...id#49098313) |
 |
Date: July 13th, 2025 5:40 PM Author: the world series of poasting
I have never understood why symbolic reasoning supposedly cannot be derived from/by connectionist models
This is almost certainly what is happening in the human brain, since symbolic reasoning is not something that humans naturally have 'out-of-the-box'; it has to be learned. Until very recently in our evolutionary history we were not capable of it at all, and probably ~50% of Homo sapiens sapiens "humans" living in the world today are still not capable of it
I think he's right about the over-enthusiasm for LLMs in isolation and about throwing money at scaling, though. But I don't see it as a setback or a point of serious contention. The hype has still sped things up by dumping an enormous amount of money, resources, attention, and talent into AI, which would not have been dedicated to AI otherwise. It has been unequivocally a good thing.
(http://www.autoadmit.com/thread.php?thread_id=5749924&forum_id=2...id#49098373) |
 |
Date: July 13th, 2025 5:55 PM
Author: .,,,.,.,.,.,.,,,,..,,..,.,.,.,
this is exactly right. symbolic reasoning is being learned by connectionist principles in the brain. it's not like you have synapses that can call a formal logic system to do other types of computations when necessary. it's all just learned connections.
i agree with you about the standard scaling view. i do not see scaling of the generic transformer architecture as likely to produce broad, human-level capabilities. connectionism could be implemented in many different ways though. i agree with him about the generalization problems with current models, but he makes the leap straight to "therefore we need symbolic methods." i think we just need a different form of connectionism - that's what the biology says.
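For what it's worth, the "symbolic functions are learnable by pure connectionism" claim can be illustrated with the classic toy case: XOR is a purely Boolean, "symbolic" function, yet a tiny multilayer perceptron learns it end to end from examples, with nothing but weighted connections and gradient descent. This is my own minimal sketch (network size, learning rate, and epoch count are arbitrary illustrative choices, not anything from the thread):

```python
import math
import random

random.seed(0)
H = 4          # hidden units (illustrative choice)
LR = 0.5       # learning rate
EPOCHS = 20000

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# weights: input(2) -> hidden(H) -> output(1), randomly initialized
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b_h = [random.uniform(-1, 1) for _ in range(H)]
w_o = [random.uniform(-1, 1) for _ in range(H)]
b_o = random.uniform(-1, 1)

# the XOR truth table: a "symbolic" Boolean function given only as data
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    h = [sigmoid(sum(w_h[j][i] * x[i] for i in range(2)) + b_h[j])
         for j in range(H)]
    o = sigmoid(sum(w_o[j] * h[j] for j in range(H)) + b_o)
    return h, o

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

err_before = total_error()

# plain stochastic gradient descent with hand-written backprop
for _ in range(EPOCHS):
    for x, t in data:
        h, o = forward(x)
        d_o = (o - t) * o * (1 - o)                      # output delta
        d_h = [d_o * w_o[j] * h[j] * (1 - h[j]) for j in range(H)]
        for j in range(H):
            w_o[j] -= LR * d_o * h[j]
            b_h[j] -= LR * d_h[j]
            for i in range(2):
                w_h[j][i] -= LR * d_h[j] * x[i]
        b_o -= LR * d_o

err_after = total_error()
preds = [round(forward(x)[1]) for x, _ in data]
print(err_before, "->", err_after)
print(preds)  # typically recovers the XOR truth table [0, 1, 1, 0]
```

Of course, a four-input toy says nothing about whether this scales to the kind of systematic generalization being argued over above; it only shows that "symbolic" and "learnable by connections" are not mutually exclusive categories.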
(http://www.autoadmit.com/thread.php?thread_id=5749924&forum_id=2...id#49098447) |
 |
Date: July 13th, 2025 11:09 PM
Author: .,,,.,.,.,.,.,,,,..,,..,.,.,.,
(http://www.autoadmit.com/thread.php?thread_id=5749924&forum_id=2...id#49099221) |
Date: July 13th, 2025 5:31 PM Author: the world series of poasting
Lol fuckin Gary Marcus
He's a great case study on why it's important not to be a fucking clown. He has been right about some things, and he is imo right in his criticisms of LLMs, but nobody takes him seriously because he's an obnoxious clown who is obviously on a personal crusade against people he believes have wronged him personally
You can't cry wolf over and over again and expect to make a positive difference in the world or in your chosen field
(http://www.autoadmit.com/thread.php?thread_id=5749924&forum_id=2...id#49098342) |