  The most prestigious law school admissions discussion board in the world.

8/15/25 AI thread


Date: August 15th, 2025 11:20 AM
Author: zyn

gpt-5 beats pokemon gen 1 3x as fast as o3

https://www.reddit.com/r/singularity/comments/1mq2irv/gpt5_just_finished_pokemon_red/

google releases a 270m-parameter gemma 3 local model. people claim it is fast and produces good results on a phone (quick local-run sketch after the two links below)

https://x.com/fchollet/status/1956059444523286870

https://x.com/code_star/status/1956033343465906379
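
A minimal sketch of kicking the tires on a model this small locally, using the standard Hugging Face transformers pipeline. The model id below is my assumption based on the announcement, not confirmed from the links, so check the actual Hub listing before running:

```python
# Hedged sketch: run a tiny instruction-tuned model locally via transformers.
# "google/gemma-3-270m-it" is an assumed model id -- verify on the Hub.

from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-3-270m-it",  # assumed id for the 270m instruct variant
    device_map="auto",               # CPU is plausible at this parameter count
)

out = generator(
    "Summarize in one sentence: why do people want small local models?",
    max_new_tokens=64,
)
print(out[0]["generated_text"])
```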

gpt-5 reasoning gets perfect scores on some medical test. hard to know what this means, if anything

https://x.com/omarsar0/status/1956003145349521780

apparently closed-source models are much, much more efficient with token output than open-source models (since they're being engineered to be that way). pretty interesting. it means open-source models will eventually get much more efficient too, which is what people running local models want anyway

https://x.com/NousResearch/status/1956090990005248341

free instruction on building deep research agent architectures

https://x.com/LangChainAI/status/1956027411302375631

https://x.com/hwchase17/status/1956036358709108979

"agentic RAG" system that is claimed to be able to reason and decision-tree its way through prompts requesting local data. it "learns" from patterns in your queries and only chunks data at query time. this is actually really interesting. tsinah you should check this out

https://x.com/philipvollet/status/1955945448860008655
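
A rough toy of the query-time-chunking idea (mine, not the linked project's API): nothing is indexed up front, a trivial "router" picks a source from surface features of the query, and chunks are built and scored only when the question arrives. Every name here is made up for illustration:

```python
# Toy "agentic RAG" sketch: route -> chunk lazily at query time -> score.
# A real system would use an LLM router and embeddings; this is the skeleton.

from collections import Counter

DOCS = {
    "notes.txt": "LoRA adds low-rank adapters on top of frozen base weights.",
    "todo.txt": "Ship the agent demo. Benchmark the tiny Gemma on a phone.",
}

def chunk(text: str, size: int = 8) -> list[str]:
    """Chunk at query time: fixed-size word windows, nothing precomputed."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def route(query: str) -> str:
    """Toy decision tree: pick a source from patterns in the query."""
    q = query.lower()
    return "todo.txt" if any(w in q for w in ("todo", "task", "ship")) else "notes.txt"

def score(query: str, chunk_text: str) -> int:
    """Crude lexical overlap standing in for embedding similarity."""
    return sum((Counter(query.lower().split()) & Counter(chunk_text.lower().split())).values())

def answer(query: str) -> str:
    doc = route(query)                              # step 1: choose a source
    best = max(chunk(DOCS[doc]), key=lambda c: score(query, c))  # step 2: chunk + score now
    return f"[{doc}] {best}"                        # step 3: ground the reply

print(answer("what does lora add to frozen base weights?"))
```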

(http://www.autoadmit.com/thread.php?thread_id=5762709&forum_id=2#49187503)




Date: August 15th, 2025 3:24 PM
Author: zyn

these people claim that you can recover the pre-LoRA fine-tuning weights of models

pretty crazy if true

https://x.com/jxmnop/status/1956382800707240297
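
Not the linked work's method, just the structural fact the claim leans on: a LoRA fine-tune perturbs each weight matrix by a low-rank update BA, so the difference between tuned and base weights has rank at most r, which is exactly the kind of footprint you could hope to pull back out. A minimal numpy demo of that structure:

```python
# Demo of the low-rank LoRA footprint (illustration only, not the paper's
# recovery technique): the tuned-minus-base difference has rank <= r, so its
# singular values collapse to ~zero after the first r.

import numpy as np

d, r = 512, 8
rng = np.random.default_rng(0)

W_base = rng.standard_normal((d, d))       # frozen base weights
B = rng.standard_normal((d, r)) * 0.01     # LoRA projection matrices
A = rng.standard_normal((r, d))
W_tuned = W_base + B @ A                   # merged fine-tuned weights

s = np.linalg.svd(W_tuned - W_base, compute_uv=False)
print(s[:r].round(3))    # r sizable singular values: the entire update
print(s[r:r + 4])        # everything past rank r is numerically zero
```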

(http://www.autoadmit.com/thread.php?thread_id=5762709&forum_id=2#49188249)




Date: August 15th, 2025 3:25 PM
Author: zyn

BREAKING: Sam Altman just revealed who GPT-5 is really built for:

> India is our 2nd largest market

> It may become our LARGEST

> We’ve taken a lot of feedback from users in india what they’d like from us…

> more affordable access

> and we’ve put that in GPT-5

Now it all makes sense...

https://x.com/ns123abc/status/1956404043783270853

It's Over

(http://www.autoadmit.com/thread.php?thread_id=5762709&forum_id=2#49188256)




Date: August 15th, 2025 3:26 PM
Author: cock of michael obama

congrats tommy!

(http://www.autoadmit.com/thread.php?thread_id=5762709&forum_id=2#49188259)




Date: August 16th, 2025 11:10 AM
Author: zyn

These are two excellent poasts on forms of continual learning and the possible ways to use RL to continue to scale models. I agree with almost everything this guy says. I am starting to change my mind about the impact of large frontier model training/inference infrastructure vs many smaller local models. If it does turn out to be cr to train models with RL for different specialized agentic tasks/fields, then having the compute to generate vast amounts of synthetic data is going to be really important, because the data will be so large and complex and the time needed to train on these tasks will be so long (toy sketch of the synthetic-data point after these two links).

https://www.interconnects.ai/p/contra-dwarkesh-on-continual-learning

https://www.interconnects.ai/p/what-comes-next-with-reinforcement
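
A toy illustration (my sketch, not from the articles) of why compute for synthetic data matters here: with a verifiable task you can mint unlimited problem instances, check answers mechanically, and keep only the verified traces for the next round of training. Simple rejection sampling stands in for a real RL loop:

```python
# Synthetic data + verifier sketch: mint problems, let a "policy" attempt
# them, keep only mechanically verified traces as training data.

import random

def make_problem() -> tuple[str, int]:
    """Mint a verifiable task instance (trivial arithmetic as a stand-in)."""
    a, b = random.randint(2, 99), random.randint(2, 99)
    return f"{a}+{b}", a + b

def model_attempt(prompt: str) -> int:
    """Stand-in for a policy model; wrong roughly a quarter of the time."""
    a, b = map(int, prompt.split("+"))
    return a + b + random.choice([0, 0, 0, 1])

dataset = []
for _ in range(1000):
    prompt, truth = make_problem()
    guess = model_attempt(prompt)
    if guess == truth:                 # verifier: keep only correct traces
        dataset.append((prompt, guess))

print(f"kept {len(dataset)}/1000 verified traces for the next training round")
```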

This guy is mfcr btw:

https://x.com/natolambert/status/1956394775705239726

Good article with various cr and interesting thoughts about what exactly LLMs are missing and how we might get around their limitations

https://secondthoughts.ai/p/thoughts-about-agi-and-gpt-5?triedRedirect=true

OpenAI now making GPT-5 "friendlier" because Redditors had a lot of tantrums. Lol

https://x.com/OpenAI/status/1956461718097494196

Anthropic giving its models the ability to cut worthless problem/mentally ill users off. They're giving the excuse of "model welfare" and people are actually buying it. Lmao

https://x.com/AnthropicAI/status/1956441209964310583

(http://www.autoadmit.com/thread.php?thread_id=5762709&forum_id=2#49189889)