  The most prestigious law school admissions discussion board in the world.

Leading academic mathematicians now worried about AI putting them out of work


Date: May 10th, 2026 12:58 PM
Author: ,.,....,..,.,.,,,,..,..,.,..,.,.,.,...


https://x.com/wtgowers/status/2052830948685676605?s=46

AI can now generate results in a couple of hours that are basically equivalent to a chapter in a math PhD thesis. Pretty 180. Hopefully these people can learn to do something actually productive for humanity instead of getting paid to solve puzzles only they care about. SWE and math being wrecked by AI should free up a lot of human capital.

(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49878547)




Date: May 10th, 2026 12:58 PM
Author: Giant Naked Hitler (hitler did nothing wrong)

xo's own Terry Tao didn't seem so worried when he appeared on Dwarkesh

(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49878549)




Date: May 10th, 2026 1:27 PM
Author: The Penis

Yeah because he's already part of the priesthood that needs to be kept around to "interpret" ai results

(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49878571)




Date: May 10th, 2026 1:28 PM
Author: Heterosexual White Male Hierarch Gamer

underrated

(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49878572)




Date: May 10th, 2026 1:33 PM
Author: Post nut horror



(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49878578)




Date: May 10th, 2026 1:38 PM
Author: ,.,....,..,.,.,,,,..,..,.,..,.,.,.,...


and he also thinks that will always be the case, when 4 years ago these models couldn't reliably solve grade school level problems.

(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49878587)




Date: May 10th, 2026 1:50 PM
Author: Giant Naked Hitler (hitler did nothing wrong)

Hallucinating is part of the reason that LLMs work at all; they have to be non-deterministic, is my basic understanding. Therefore there will almost certainly need to be a human in the loop for truly important problems. With that being said, Erdős problems appear not to be that meaningful for real-world applications, so maybe he will get automated by clankers.

(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49878606)




Date: May 10th, 2026 1:51 PM
Author: Heterosexual White Male Hierarch Gamer

cr

(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49878609)




Date: May 10th, 2026 2:50 PM
Author: The Penis

Humans confabulate shit all the time too and it often gets reified as knowledge for long periods of time. Look at literally the entire history of human knowledge.

(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49878670)




Date: May 11th, 2026 7:47 PM
Author: oomox



(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49880821)




Date: May 10th, 2026 4:01 PM
Author: ,.,....,..,.,.,,,,..,..,.,..,.,.,.,...


there's nothing inherent about the model architectures that means they will always bullshit. the sampling aspect is beside the point. they are being trained to minimize prediction error on arbitrary human text. at the limit, this means no hallucinations. this isn't practical in reality because the models can't memorize everything, but different types of post-training will allow the models to recognize potentially incorrect tokens and self-correct. they can already do this reasonably well, which is why people can now vibe code reasonably large SWE projects. model reliability in verifiable domains is growing quite rapidly.
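
the sampling point is easy to demonstrate. a minimal sketch (plain Python with made-up logits, not any real model's decoding stack): at temperature 0 decoding is a deterministic argmax, so randomness in outputs is a decoding-time choice layered on top of the model, not something the architecture forces.

```python
import math
import random

def sample(logits, temperature):
    """Sample a token index from raw logits at the given temperature."""
    if temperature == 0:
        # greedy decoding: always the highest-scoring token, fully deterministic
        return max(range(len(logits)), key=lambda i: logits[i])
    # temperature scales logits before softmax; higher T flattens the distribution
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    # inverse-CDF sampling from the categorical distribution
    r = random.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

logits = [2.0, 0.5, -1.0]  # hypothetical next-token scores
print(sample(logits, 0))                          # always index 0 (greedy)
print({sample(logits, 1.5) for _ in range(200)})  # stochastic at higher temperature
```

greedy decoding returns the same token for the same logits every time; only the temperature > 0 path is stochastic.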

(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49878796)




Date: May 10th, 2026 4:08 PM
Author: Heterosexual White Male Hierarch Gamer

LLMs have no empirical model of the world. They cannot check hypotheses. All they can do is create them. They need a human to check them

The reason why they are able to self-check hypotheses in math and coding is because you can literally run checks of those within digital engines, on a computer

You can't do that irl. LLMs will always be confined to digital computer environments
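
The math/coding caveat is easy to make concrete: a hypothesis expressed in code can be checked by simply running it. A toy sketch (pure Python, a classic number-theory example, nothing to do with any particular AI system): Euler's polynomial n² + n + 41 yields primes for n = 0..39, and a mechanical check finds the counterexample at n = 40.

```python
def is_prime(n):
    """Trial-division primality check; fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def first_counterexample(limit=100):
    """Check the hypothesis 'n^2 + n + 41 is prime' for n = 0..limit-1."""
    for n in range(limit):
        if not is_prime(n * n + n + 41):
            return n  # hypothesis falsified mechanically
    return None  # survived the check (evidence, not a proof)

print(first_counterexample())  # 40, since 40*40 + 40 + 41 = 1681 = 41**2
```

The check itself is the "digital engine": no human judgment is needed to reject the hypothesis once the code runs.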

(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49878810)




Date: May 10th, 2026 4:48 PM
Author: The Penis

Math isn't empirical though

edit: I guess you actually already addressed this to an extent in your poast re: your "math and coding" example, but this thread is about its use in math, which is purely abstract: the thing humans traditionally sucked at except for the smartest ones, and also the perfect use case for AI. In math and theoretical physics AI can traverse vast theoretical spaces in short periods of time, find new structure, and rule out other structures, which vastly speeds up discovery in those areas. We need humans to prompt and curate, sure, but AI is going to be hugely important to these fields. I think it's a mistake to think the only use for AI is completely decommissioning meat machines, although it likely ultimately will retire many of them.

(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49878839)




Date: May 11th, 2026 3:53 PM
Author: .,.,...,..,.,.,:,,:,...,:::,...,:,.,.:..:.


multimodal LLMs clearly learn predictive models of the world. There is a reason why generative models can create realistic lighting and scattering effects in images and videos. Predicting or generating the next token requires the model to construct a way to synthesize reality. As the models create ever more realistic output, they can do RL inside these models and create thought patterns that work increasingly well to deal with real life problems. Think of something like MuZero but for arbitrary domains.

(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49880358)




Date: May 11th, 2026 4:00 PM
Author: Heterosexual White Male Hierarch Gamer

No, they are only capable of predictions within their training data. They cannot generalize, period. LLMs are incapable of generalization as a basic property of how they work

I can't tell if people who claim they can generalize have become so entranced by LLMs that they genuinely believe it, or they're just lying to hype up the technology, or a blend of both

Legit general AI is coming. People will build virtual RL environments with real-world physics and throw massive amounts of money and compute at them. They will find clever ways to impart some base level of virtual-environment RL training to robot brains, so that the robots have enough training scaffolding to gather their own data irl and learn effectively from it, slowly but surely

But LLMs cannot generalize. LLM technology is limited to the world of language: coding, writing, anything digital, bc everything digital is code-based. Nothing beyond that irl

(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49880386)




Date: May 11th, 2026 6:41 PM
Author: .,.,...,..,.,.,:,,:,...,:::,...,:,.,.:..:.


What precisely do you mean when you say they can't generalize? Every time you give them a novel coding problem they are reusing learned patterns and algorithms to generalize. There has to be generalization involved or they wouldn't be usable for anything and could only repeat exactly what they saw in the training data. I assume you mean something like out-of-distribution generalization. They can't do that well, although in practice interpolation in a high-dimensional space necessarily involves extrapolation, so there is no clean break there. All you can say is the generalization is imperfect (although rapidly getting better, and conceivably solved by models that can use more inference compute at run time)
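
A minimal illustration of the in-distribution vs out-of-distribution point (plain Python, a least-squares line standing in for any trained model; the numbers are illustrative only): fit on a narrow range of a curved function, then query inside and far outside that range.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# "training distribution": x in [0, 3], target function y = x^2
xs = [0.0, 1.0, 2.0, 3.0]
ys = [x * x for x in xs]
a, b = fit_line(xs, ys)

def predict(x):
    return a * x + b

in_range_err = abs(predict(1.5) - 1.5 ** 2)     # interpolation: modest error
out_range_err = abs(predict(10.0) - 10.0 ** 2)  # extrapolation: error blows up
print(in_range_err, out_range_err)
```

Interpolated queries stay near the training error; the farther a query sits outside the training range, the faster the error grows.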

(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49880685)




Date: May 11th, 2026 7:24 PM
Author: Heterosexual White Male Hierarch Gamer

i meant out of distribution generalization, yes, sorry

the issue is that language does not capture anything close to Reality (the physical world). it seems to me like people are getting way too caught up in thinking there actually is some kind of overarching Platonic World Of Forms that sits above Physical Reality that LLMs are tapping into and operating on. but this isn't true. language is just language. it's not the real world. LLMs are only ever going to be able to "model" and extrapolate within this limited Language-World, because that is what they are trained on. they have no mechanism to interact directly with Reality

like i said above, this only applies to LLMs. other forms of AI that *can* interact directly with Reality will be developed. but this distinction matters imo because LLMs are being dishonestly hyped up as the end-all be-all of AI, and purported to be able to do things they cannot actually do (and never will be able to do). this dishonesty can potentially become very dangerous if people in power decide to cynically (or even just ignorantly) appeal to LLM outputs for their decision-making and moral judgments

(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49880777)




Date: May 11th, 2026 7:29 PM
Author: The Penis

“language does not capture anything close to Reality.”

That is way too strong. Language is not reality, but it is not arbitrary noise either. It is a high-bandwidth social compression layer over reality. Models in theoretical physics (general relativity, quantum theory) aren't reality per se either, but they are still immensely predictive representations, however lossy; in fact they're the most accurate we have. If representations are nothing, then I guess we might as well throw out all of human knowledge.

(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49880787)




Date: May 11th, 2026 7:36 PM
Author: Heterosexual White Male Hierarch Gamer

"it's like, a compression layer over reality, more or less, you know?" isn't anything close to Reality though. "kinda sorta in the ballpark" is not even close to Good Enough

the trajectory for LLMs seems obvious. they will keep getting better and better (at increasingly inefficient compute ratios) at Language-World tasks. but they're never doing anything outside of that. we need AI that can train directly on Reality for general intelligence (which depends on world-modeling of Reality)

(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49880790)




Date: May 11th, 2026 7:25 PM
Author: The Penis

This is wildly false. Not defensible. Any model that can handle a sentence, code snippet, analogy, instruction, or combination of concepts it has never seen verbatim is exhibiting some form of generalization. LLMs plainly do this.

(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49880779)




Date: May 10th, 2026 1:25 PM
Author: Post nut horror

Lol fuck each and every one of them.

(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49878567)




Date: May 10th, 2026 1:31 PM
Author: ...,.,.,.,....,..,.,.,,.,.,,..,.


lol @ ugly nerds seeing their one asset (being 'smart') completely devalued overnight. being smart is worthless now. they have nothing. now they're just ugly *and* useless. (this applies to xo, inasmuch as anyone was ever smart here).

(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49878577)




Date: May 10th, 2026 1:35 PM
Author: .,.)

and it appears good looking hot people can now be good looking, hot AND smart if they so desire

(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49878582)




Date: May 10th, 2026 1:47 PM
Author: harold lauder (https://www.youtube.com/watch?v=hPMyxMyJr-0)

And soon ugly people will become hot thanks to radical new biology unlocked by AI

(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49878599)




Date: May 10th, 2026 3:44 PM
Author: CriminalConversation

Or, in the alternative, turned into Soylent Green.

(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49878761)




Date: May 11th, 2026 7:29 PM
Author: ...,,..;...,,..,..,...,,,;..,


good thing law relies on artificial bullshit not actual smartness

(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49880786)




Date: May 10th, 2026 1:41 PM
Author: ...,.,.,.,....,..,.,.,,.,.,,..,.




(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49878590)




Date: May 11th, 2026 3:47 PM
Author: .,.,...,..,.,.,:,,:,...,:::,...,:,.,.:..:.




(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49880334)




Date: May 11th, 2026 7:22 PM
Author: UN peacekeeper



(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49880771)




Date: May 11th, 2026 7:42 PM
Author: Jared Baumeister

Good luck learning statistics from an AI

(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49880801)




Date: May 11th, 2026 7:45 PM
Author: ,.,..,.,..,.,.,.,..,.,.,,..,..,.,,..,.,,.


"mathematics departments, who owe a duty of care to their students"

LOL, is this rube not familiar with the modern university system?

(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2...id#49880809)