The most prestigious law school admissions discussion board in the world.

i'm Yann LeCun pilled on AI now

Date: June 5th, 2025 11:05 PM
Author: covert schizoid chad incel slayer w hunter eyes

LLMs are pretty much maxed out and we need to train AI on real-world physical experience now

current AI companies are mostly going to pivot to really dystopian commercial use cases to try to get some of their money back. romantic chat bots, targeted ads, surveillance/data selling, etc

(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48991123)




Date: June 5th, 2025 11:08 PM
Author: blow off some steam

it's always been that bad

(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48991130)




Date: June 6th, 2025 11:47 AM
Author: covert schizoid chad incel slayer w hunter eyes

https://www.youtube.com/watch?v=4__gg83s_Do

(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48992140)




Date: June 6th, 2025 12:00 PM
Author: Peter Andreas Thiel (🧐)

Anthropic tripled their revenue just over the last couple of months and for now they almost exclusively focus on being good at coding workflows

even if the limit of current AI is automating much of white collar knowledge work (programming, corporate law, finance, etc.) that's still a very large and fundamental shift. there's also stuff like having an interactive personal tutor which can adapt to your exact skill level/knowledge available on demand. multimodal models being the "missing piece" to make general purpose robots feasible, even if their use is very limited at first. etc.

meta has also been an absolute fucking disaster in the AI race given the amount of resources they've put into it

(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48992167)




Date: June 6th, 2025 12:03 PM
Author: covert schizoid chad incel slayer w hunter eyes

yeah coding/some other white-collar replacement is the other commercial application that isn't dystopian. that's why all these guys are switching gears to try to optimize models for being good at coding

everyone smart has figured out that "AGI" is not coming from LLMs. makes that AI 2027 report really funny in retrospect. maybe people should stop listening to scott alexander and miscellaneous indians? also ljl at yudkowsky et al shamelessly continuing their doomerism grift

(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48992175)




Date: June 6th, 2025 12:06 PM
Author: covert schizoid chad incel slayer w hunter eyes

LeCun implicitly says in this video that meta is going to try to leverage their existing userbase into getting customers to use personal assistant AI as their commercial application. it's unclear to me how much money you're going to be able to get out of 74 IQ dirt poor filipinos and africans in exchange for a daily-use chatbot. my guess is: not enough

(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48992180)




Date: June 6th, 2025 12:11 PM
Author: Peter Andreas Thiel (🧐)

brother this kind of thinking is what led xo to claim that facebook was "massively overvalued" in 2010 or whatever. their goal isn't revenue driven by chatbot subscriptions, it's keeping you on their screenslop for as long as possible to sell ads and promote shit. keeping users drooling over screens talking to their AI Buddy all day is a money printer for whoever pulls it off

(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48992187)




Date: June 6th, 2025 12:16 PM
Author: covert schizoid chad incel slayer w hunter eyes

i agree in principle but it costs real, significant money to run inference compute compared to search engines, social media, etc. i don't know if that's a good value proposition for third world users, who are much of meta's userbase at this point compared to other frontier models

for first world users, yeah, targeted ads will be very profitable, and all of the frontier model companies will do them

(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48992197)




Date: June 6th, 2025 1:41 PM
Author: Edmonton Oilers



(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48992421)




Date: June 6th, 2025 12:26 PM
Author: covert schizoid chad incel slayer w hunter eyes

https://www.youtube.com/watch?v=qvNCVYkHKfg

this whole interview is quite good tbh

(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48992216)




Date: June 6th, 2025 12:29 PM
Author: .,,.,,.,,,....

Yann LeCun has said AGI will take several years, maybe a decade. That isn't much longer than the LLM proponents' timelines.

(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48992221)




Date: June 6th, 2025 12:33 PM
Author: Ass Sunstein

lmao, people said the same thing in the 60s. We won't have AGI for at least 100 years

(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48992229)




Date: June 6th, 2025 12:34 PM
Author: Peter Andreas Thiel (🧐)

what would count as AGI to you

(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48992232)




Date: June 6th, 2025 12:37 PM
Author: Ass Sunstein

Embodied AI robot moving around freely in the physical world, interacting with humans safely, accomplishing tasks and plans on the level of a human.

(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48992238)




Date: June 6th, 2025 3:50 PM
Author: martin heidegger

But why would a robot do any of that? Why would it take a stand on its own being?

(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48992797)




Date: June 6th, 2025 3:51 PM
Author: Ass Sunstein

Why do any of us do anything, brother?

(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48992801)




Date: June 6th, 2025 4:29 PM
Author: martin heidegger

Because God gave us the divine spark.

(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48992893)




Date: June 6th, 2025 12:42 PM
Author: .,,.,,.,,,....

that isn't an interesting argument. some people were wrong in the past, but that doesn't imply different people are wrong now about a completely different paradigm. the level of generality with the current wave of progress is clearly much, much greater than in the 60s and there was nothing backing up their predictions other than overconfidence in the idea that the mind is built out of symbolic logic. transformers successfully being used for game playing, language generation, audio generation, image generation and understanding, video generative models, etc is the sort of thing you would only expect to happen in worlds where connectionism is true and notions of AI being built out of complicated, hand engineered modules are wrong.

(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48992253)




Date: June 6th, 2025 12:44 PM
Author: Ass Sunstein

Ok, we'll see what unfolds.

(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48992264)




Date: June 6th, 2025 12:41 PM
Author: covert schizoid chad incel slayer w hunter eyes

"AI 2027" was published in april (2 months ago) and had almost everyone buzzing about how accurate and credible its predictions were (the world ending in a few years from AGI taking over)

i believe that privately, LLM proponents know that this is complete nonsense, that we're nowhere even remotely near AGI, and that it's not coming from LLMs. but publicly, people still take these delusions seriously, because it's to their advantage to keep the hype going

(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48992250)




Date: June 6th, 2025 12:42 PM
Author: Ass Sunstein



(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48992252)




Date: June 6th, 2025 12:53 PM
Author: .,,.,,.,,,....

i think what gave them credibility is that they were right for the past several years. if you looked at GPT-2 and GPT-3 and then extrapolated from there based on likely increases in training compute, what has happened in 2022-2025 was predictable. even after GPT-3, many people were surprised by the advances in benchmark performance when they shouldn't have been. if you do the same exercise with the likely increases in training compute in 2026 and 2027 and inference scaling, it's not too hard to imagine highly competent AI agents that could substantially automate AI research.

it's interesting to note that while certain things with easily verifiable rewards are still advancing rapidly (such as programming and math benchmark performance), other things seem to be stalling. the Claude 4 benchmarks don't seem too promising for the LLM maximalist point of view, but it's hard to know and we only have limited data points. the other issue is that we have very little idea what the labs are working on internally. transformers with chain of thought might soon hit a wall, but how can we be confident that someone doesn't have something else cooking that could address the remaining problems? LLMs are not necessarily synonymous with transformers trained with stochastic gradient descent.

(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48992303)




Date: June 6th, 2025 1:35 PM
Author: covert schizoid chad incel slayer w hunter eyes

"it's not too hard to imagine highly competent AI agents that could substantially automate AI research."

it's pretty hard, man. that's a huge leap from what we have right now. the recent models released across the board have been very small improvements, and it's very clear that progress has massively slowed down and we're not going to see the same returns from added compute going forward

(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48992410)




Date: June 6th, 2025 2:03 PM
Author: .,,.,,.,,,....

They have slowed down in most ways, but SWE bench/Aider type benchmarks are still rapidly improving. These are the benchmarks most closely approximating autonomous software engineer work. The models are capable of handling larger context windows (now up to 1 million tokens with 2.5 pro), can more effectively use tools and interact with large code bases, and are significantly less error prone and capable of correcting their own errors when they do occur. I think it’s very hard to predict how much 2027 (or even 2026) models can speed AI research. Part of the issue is that labs are still compute constrained, so even if the models can do useful AI research, they might not be able to leverage that effectively. I think what makes me more open to the idea this might be possible is that deep learning has been fundamentally an empirical field and not theory driven. If you totally embrace that idea, autonomous, not super smart SWE AIs trying a lot of crap and training on the results is a plausible way to significantly boost AI performance.

(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48992456)




Date: June 6th, 2025 3:41 PM
Author: covert schizoid chad incel slayer w hunter eyes

thoughts on JEPA? see poast below

(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48992759)




Date: June 6th, 2025 6:43 PM
Author: .,,.,,.,,,....

interesting but i'm not sure it's relevant to LLMs (which already have the advantage of learning a predictive model in an abstract space of human concepts). seems like the sort of thing that would be useful for learning unsupervised models from video that could then be rapidly adapted to RL tasks. it seems clearly true that people don't learn strong generative models (something like veo 3) and learning abstractions is likely to make things much more computationally tractable.

(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48993120)




Date: June 6th, 2025 3:41 PM
Author: covert schizoid chad incel slayer w hunter eyes

JEPA, or Joint Embedding Predictive Architecture, is a self-supervised learning framework designed to encourage models to form internal “world models” by predicting abstract representations of future (or missing) data rather than reconstructing raw inputs. Below is an overview of how JEPA works and why it is particularly well-suited for letting AI systems learn their own latent understanding of the world.

1. Core Idea: Predicting in Latent Space

Traditional self-supervised approaches—like autoencoders or generative masked modeling—often try to reconstruct pixels or raw tokens, which forces the model to spend capacity on both relevant and irrelevant details (e.g., exact pixel colors). JEPA sidesteps this by having two networks:

A context encoder that processes observed parts of the input (e.g., an image with masked regions or a video clip missing certain frames) and produces a “context embedding.”

A target encoder that separately encodes the actual data (or future frames) into “target embeddings.”

The training objective is to align the context embedding with the correct target embedding (or to distinguish it from incorrect ones) in latent space, rather than to reconstruct raw pixels or tokens. By comparing embeddings directly, the model can discard unpredictable noise (e.g., lighting variations, background clutter) and focus on stable, high-level features that are useful for prediction and planning (turingpost.com, arxiv.org).
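A minimal numpy sketch of this latent-space prediction setup. This is purely illustrative: the "encoders" are random linear maps standing in for deep networks, and all names, shapes, and the feature-masking scheme are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for deep encoders: random linear maps.
D_in, D_emb = 16, 8
W_ctx = rng.normal(size=(D_in, D_emb))    # context encoder weights
W_tgt = rng.normal(size=(D_in, D_emb))    # target encoder weights
W_pred = rng.normal(size=(D_emb, D_emb))  # predictor head

x = rng.normal(size=(4, D_in))            # batch of 4 "inputs"
mask = np.ones(D_in)
mask[8:] = 0.0                            # hide half the features from the context

ctx_emb = (x * mask) @ W_ctx              # encode only the visible part
tgt_emb = x @ W_tgt                       # encode the full input separately
pred = ctx_emb @ W_pred                   # predict the target embedding

# JEPA-style objective: compare in latent space, never reconstruct raw input.
loss = np.mean((pred - tgt_emb) ** 2)
```

The key point the sketch isolates: the loss lives entirely in the (here 8-dimensional) embedding space, so nothing forces the model to account for input details the target encoder throws away.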

2. Architecture Variants (I-JEPA, V-JEPA, etc.)

I-JEPA (Image JEPA): Given a single image, a large “context crop” (covering a broad spatial area) is encoded, and the model predicts embeddings of several “target crops” from that image. Target crops are often chosen at scales large enough to require understanding semantics (e.g., object identities), not trivial low-level details (ai.meta.com).

V-JEPA (Video JEPA): Extends I-JEPA to video by having the context encoder ingest previous frames (and possibly actions), then predicting the embedding of future frames. Because it only needs to predict abstract representations, the model can choose which features of the future are predictable (e.g., object positions) and ignore the unpredictable (e.g., exact pixel noise) (linkedin.com, ai.meta.com).

By operating in this “embedding space” rather than pixel space, JEPA-based models learn world-model representations: latent features that capture how a scene or environment evolves over time (e.g., object motion, physical interactions) without being burdened by pixel-level reconstruction (arxiv.org, medium.com).
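The I-JEPA-style context/target crop sampling can be sketched on a toy patch grid. Everything here is an assumption for illustration: the grid size, the block sizes, and the `sample_block` helper are hypothetical, not Meta's actual sampling procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 16x16 grid of patch indices standing in for a tokenized image.
H = W = 16
grid = np.arange(H * W).reshape(H, W)

def sample_block(h, w):
    """Return the patch indices of a random h-by-w block."""
    top = int(rng.integers(0, H - h + 1))
    left = int(rng.integers(0, W - w + 1))
    return grid[top:top + h, left:left + w].ravel()

context = set(sample_block(12, 12))                # one large context crop
targets = [sample_block(4, 4) for _ in range(4)]   # several small target crops

# Target patches are removed from the context so the prediction task
# can't be solved by copying visible patches.
for t in targets:
    context -= set(t)
```

The encoder then sees only the `context` patches and must predict the embeddings of each target block from position information alone.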

3. Loss Function and Training Dynamics

JEPA typically uses a contrastive or predictive loss at the embedding level. A common choice is InfoNCE: the context embedding must be close (in representation space) to the true target embedding and far from negative samples (embeddings of unrelated patches or frames). In some variants, an exponential moving average is used to stabilize the target encoder, ensuring that the targets change more slowly than the context encoder (similar to BYOL or MoCo strategies) (arxiv.org).
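A minimal numpy sketch of an InfoNCE-style embedding loss plus an EMA target-weight update. This operates on toy pre-computed embeddings rather than a real encoder, and the dimensions and temperature are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def info_nce(ctx, tgt, temperature=0.1):
    """Each context embedding should match its own target embedding
    (the diagonal of the similarity matrix) and repel the others."""
    ctx = ctx / np.linalg.norm(ctx, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    logits = ctx @ tgt.T / temperature           # (B, B) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positives sit on the diagonal

def ema_update(w_target, w_context, momentum=0.99):
    """Slow-moving target encoder weights, as in BYOL/MoCo-style training."""
    return momentum * w_target + (1 - momentum) * w_context

ctx = rng.normal(size=(8, 4))                    # 8 toy context embeddings
loss_aligned = info_nce(ctx, ctx)                # targets identical to contexts
loss_random = info_nce(ctx, rng.normal(size=(8, 4)))
```

With perfectly aligned pairs the loss sits well below the uniform-guessing baseline of log(batch size); mismatched targets push it toward or above that baseline.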

Because the model is encouraged to predict only abstracted features, it effectively learns which aspects of the environment are predictable and worth modeling. For instance, in V-JEPA, predicting where a car will be next frame is feasible, whereas predicting the precise noise pattern on its surface is not. By focusing capacity on the predictable latent variables, JEPA induces a more robust internal “world model” that can be reused for downstream tasks (classification, reinforcement learning, planning) with far fewer labeled samples (linkedin.com, arxiv.org).

4. Why JEPA Enables Self-Formed World-Models

Abstract Prediction vs. Generative Modeling: Generative models (e.g., diffusion, autoregressive transformers) must allocate capacity to model every detail, including inherently unpredictable factors. JEPA’s abstraction means that if some aspect of the future cannot be predicted from the context (e.g., random background flicker), the model can “discard” it and focus on the stable dynamics (e.g., object trajectories) (ai.meta.com, arxiv.org).

Efficiency & Generalization: Empirically, JEPA variants (I-JEPA, V-JEPA) show 1.5×–6× gains in sample efficiency compared to pixel-based generative pre-training, because they aren’t forced to learn noise patterns or outliers. This leads to embeddings that capture “common sense” world dynamics (gravity, object permanence, etc.), encouraging the model to form its own latent simulation or predictive engine that generalizes to new tasks with minimal adaptation (linkedin.com, medium.com).

Scalability & Modularity: The separation between context encoder and target encoder (or predictor) means that JEPA can be stacked hierarchically. A higher-level JEPA might predict scene-level embeddings (e.g., “a red car turns right”), while a lower-level JEPA predicts pixel embeddings or optical flow. This hierarchy mirrors how humans build world models: first conceptualizing objects and actions, then filling in details (rohitbandaru.github.io, medium.com).

5. Practical Outcomes & Extensions

Recent work has shown that JEPA-trained backbones (e.g., ViT with I-JEPA) outperform standard self-supervised baselines on tasks like object detection, depth estimation, and policy learning when used as initializations (arxiv.org). Furthermore, extensions like seq-JEPA incorporate sequences of views plus “action embeddings,” allowing the model to learn representations that are both invariant (for classification) and equivariant (for tasks requiring precise spatial dynamics), effectively learning a richer world model in a single architecture (arxiv.org).

In summary, JEPA’s strength lies in its ability to force the model to abstract away unpredictable noise and extract only the predictable, semantically meaningful features of its inputs. By learning to align context embeddings with the embeddings of masked or future data, the model inherently constructs an internal world model—a latent simulation of its environment—that can be leveraged for downstream reasoning, planning, and decision-making with high efficiency.

(http://www.autoadmit.com/thread.php?thread_id=5734039&forum_id=2...id.#48992758)