Leading academic mathematicians now worried about AI putting them out of work
Date: May 10th, 2026 12:58 PM
Author: ,.,....,..,.,.,,,,..,..,.,..,.,.,.,...
https://x.com/wtgowers/status/2052830948685676605?s=46
AI can now generate results in a couple of hours that are basically equivalent to a chapter of a math PhD thesis. Pretty 180. Hopefully these people can learn to do something actually productive for humanity instead of getting paid to solve puzzles only they care about. SWE and math being wrecked by AI should free up a lot of human capital.
(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2#49878547) |
 |
Date: May 10th, 2026 1:38 PM
Author: ,.,....,..,.,.,,,,..,..,.,..,.,.,.,...
and he also thinks that will always be the case, when 4 years ago these models couldn't reliably solve grade-school-level problems.
(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2#49878587) |
 |
Date: May 10th, 2026 4:01 PM
Author: ,.,....,..,.,.,,,,..,..,.,..,.,.,.,...
there's nothing inherent about the model architectures that means they will always bullshit. the sampling aspect is beside the point. they are being trained to minimize prediction error on arbitrary human text. at the limit, this means no hallucinations. this isn't practical in reality because the models can't memorize everything, but different types of post-training will allow the models to recognize potentially incorrect tokens and self-correct. they can already do this reasonably well, which is why people can now vibe code reasonably large SWE projects. model reliability in verifiable domains is growing quite rapidly.
(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2#49878796) |
 |
Date: May 10th, 2026 4:08 PM Author: Ray Poast
LLMs have no empirical model of the world. They cannot check hypotheses. All they can do is create them. They need a human to check them
The reason why they are able to self-check hypotheses in math and coding is because you can literally run checks of those within digital engines, on a computer
You can't do that for irl. LLMs will always be confined to digital computer environments
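The claim that math hypotheses can be checked "within digital engines" can be made concrete with a brute-force check of Goldbach's conjecture over a finite range. This only illustrates machine checkability: passing the check is evidence for that range, not a proof of the conjecture.

```python
# Machine-checking a math hypothesis: verify that every even number
# from 4 up to a limit is the sum of two primes (Goldbach's conjecture,
# restricted to a finite range).
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_holds(limit: int) -> bool:
    # Every even n in [4, limit] should split as prime + prime.
    for n in range(4, limit + 1, 2):
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n)):
            return False
    return True

print(goldbach_holds(1000))  # → True
```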
(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2#49878810) |
 |
Date: May 10th, 2026 1:35 PM Author: .,.,.,.,.,,,.,.,.,.,.,
and it appears good looking hot people can now be good looking, hot AND smart if they so desire
(http://www.autoadmit.com/thread.php?thread_id=5865573&forum_id=2#49878582) |