  The most prestigious law school admissions discussion board in the world.

Nvidia ASHAMED of its current GPU lineup, cancels all reviews



Date: May 11th, 2025 12:10 PM
Author: Curious Flesh Principal's Office Bbw

https://youtu.be/4n_J7jclifM?si=Ndp-VEWuRXlOJIe_

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48921540)




Date: May 12th, 2025 4:33 PM
Author: cyan rough-skinned address

the days of NVIDIA caring about the consumer gaming market are over. it's all about AI data centers for them now, anything else is scraps.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48924790)




Date: May 12th, 2025 4:36 PM
Author: Curious Flesh Principal's Office Bbw

Google is doing fine without Nvidiashit. Apple too. I wonder how long it will take others to notice.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48924793)




Date: May 12th, 2025 4:37 PM
Author: cyan rough-skinned address

Google is running NVIDIA GB200 NVL72s champ.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48924796)




Date: May 12th, 2025 4:39 PM
Author: Curious Flesh Principal's Office Bbw

I got things mixed up. Apple did it without Nvidia, using Google chips:

https://www.reuters.com/technology/apple-says-it-uses-no-nvidia-gpus-train-its-ai-models-2024-07-29/

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48924801)




Date: May 12th, 2025 4:43 PM
Author: cyan rough-skinned address

Apple is behind in "AI" and Google only sells processing units through their GCP cloud platform. Not remarkable - Apple is a hardware company first and foremost.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48924808)




Date: May 12th, 2025 4:44 PM
Author: Curious Flesh Principal's Office Bbw

Apple doesn't seem "behind" in any meaningful way. What opportunities do you think they're missing here?

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48924813)




Date: May 12th, 2025 4:45 PM
Author: cyan rough-skinned address

Apple Intelligence is a Joke https://cybernews.com/tech/iphone-users-disable-apple-intelligence/

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48924815)




Date: May 12th, 2025 4:51 PM
Author: Curious Flesh Principal's Office Bbw

Oh wow an AI gives inaccurate responses? Shoulda used Nvidia I guess.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48924835)




Date: May 12th, 2025 4:47 PM
Author: sickened cuck

It seems that TPUs provide most of their training and inference capacity, and that will only become more true as time goes on. Companies like Microsoft going in a similar direction is not a positive sign for Nvidia.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48924818)




Date: May 12th, 2025 4:48 PM
Author: cyan rough-skinned address

?

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48924820)




Date: May 12th, 2025 4:50 PM
Author: sickened cuck

Their Nvidia GPUs are primarily for their cloud services. Major AI companies using their own chips cuts Nvidia out of the market as time goes on. They will not retain the market share they have now.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48924832)




Date: May 12th, 2025 4:52 PM
Author: Curious Flesh Principal's Office Bbw

cr, their edge was never in hardware but in developer support. They could dump more money into developer support than anyone else, but it was always only a matter of time until developers started doing their own thing.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48924836)




Date: May 12th, 2025 4:58 PM
Author: sickened cuck

Right. The advances in AI coding will also cause this to happen sooner than it would otherwise. Even some of the local LLM market will likely migrate to AMD or Intel GPUs as the software improves.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48924854)




Date: May 12th, 2025 5:03 PM
Author: cyan rough-skinned address

now is the time for Intel to really flood the GPU market if they want to escape ignominy. I don't know much about their Arc shit but I hear it's ok.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48924866)




Date: May 12th, 2025 5:06 PM
Author: Curious Flesh Principal's Office Bbw

They are working on dedicated AI chips and aren't pushing GPUs for that purpose, although they let you run local LLMs on them now using their own proprietary software. It's not bad.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48924877)




Date: May 12th, 2025 5:08 PM
Author: cyan rough-skinned address

are you talking about NPUs? they already have those in their latest chips afaik.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48924880)




Date: May 12th, 2025 5:10 PM
Author: Curious Flesh Principal's Office Bbw

Yeah they are going that way. No one thinks Intel chips will be used to train models, but running local inference should be easy. I think the cloudshit is going to become less relevant as performance of local LLMs improves. You can already get models that are <1gb that run great.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48924885)




Date: May 12th, 2025 6:41 PM
Author: sickened cuck

Yes, and that's still using traditional models where all the weights are in the GPU. For many specialized tasks, the model likely only needs a small amount of additional information to predict a particular token well. The neural networks that play particular games, for example, are quite small and could be quickly transferred from an NVMe solid-state drive into GPU memory if it were necessary for predicting the next token. I can imagine mixture-of-experts-type models that retrieve from a database of thousands of expert modules based on the current context. No need for hardware with tons of memory next to the tensor cores.
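The retrieve-an-expert-on-demand idea can be sketched as a toy. This is a minimal illustration, not any real MoE implementation: the single linear gate, the `forward` helper, and the on-disk `.npy` files are all made up for the example; real systems would route per layer and prefetch asynchronously.

```python
import numpy as np
import tempfile, os

rng = np.random.default_rng(0)
d = 8          # hidden size
n_experts = 4  # number of specialist modules kept on disk

# Persist each expert's weight matrix separately, standing in for a
# database of expert modules sitting on an NVMe drive.
expert_dir = tempfile.mkdtemp()
for i in range(n_experts):
    np.save(os.path.join(expert_dir, f"expert_{i}.npy"), rng.normal(size=(d, d)))

# A tiny router: a linear gate scores the experts from the current hidden state.
gate = rng.normal(size=(d, n_experts))

def forward(h):
    """Route to one expert, load only its weights, apply it."""
    chosen = int(np.argmax(h @ gate))                 # pick the expert for this context
    w = np.load(os.path.join(expert_dir, f"expert_{chosen}.npy"),
                mmap_mode="r")                        # page the weights in on demand
    return chosen, h @ w

h = rng.normal(size=d)
chosen, out = forward(h)
print(f"routed to expert {chosen}, output shape {out.shape}")
```

The point of the sketch is that only one of the `n_experts` weight files is ever resident per step, so the memory next to the compute only needs to hold the active expert, not the whole model.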

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48925085)




Date: May 12th, 2025 6:46 PM
Author: Curious Flesh Principal's Office Bbw

tyft

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48925095)




Date: May 12th, 2025 6:51 PM
Author: sickened cuck

I am blown away by even current local LLMs. Gemma 3 is great.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48925110)




Date: May 12th, 2025 6:52 PM
Author: Curious Flesh Principal's Office Bbw

Gemma is the only one I mess with anymore. I might do some coding at some point but I have no urge to use LLMs for that rn.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48925113)




Date: May 12th, 2025 5:03 PM
Author: Curious Flesh Principal's Office Bbw

The two people I know who work in the biz (one at Meta) run local LLMs on MacBooks with unified RAM. There's nothing you can buy for $5k that gives you better performance than an M4 Ultra, or even an M2 Ultra with 64-128GB of unified RAM.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48924868)




Date: May 12th, 2025 6:53 PM
Author: Brindle exhilarant karate

explain why it needs 64 gig of RAM wtf? there are only 1 billion parameters

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48925119)




Date: May 12th, 2025 6:55 PM
Author: Curious Flesh Principal's Office Bbw

You don't, 32 is fine. However some people think it's nbd to drop $5k on a MacBook, and the high-memory configurations are in short supply, so people grab them as they become available. Mac Minis are really the best value right now if you want to run M4 Max or M2 Ultra. You need at least the Max to run LLMs, and if you're getting more than 32GB you really want the Ultra.

EDIT oh shit there's an M3 Ultra now. 180
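The RAM question is just arithmetic: weight memory is roughly parameters times bytes per parameter. A rough sketch (ignoring KV cache and runtime overhead, which add more on top) shows a 1B model at fp16 is only ~2GB, so it's the bigger models and headroom that justify 64GB, not the 1B ones:

```python
# Back-of-envelope weight memory for local LLMs at common precisions.
# memory ≈ parameters × bits-per-parameter / 8; KV cache and runtime
# overhead are deliberately ignored here.
def weight_gb(params_billion, bits_per_param):
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

for params in (1, 12, 27):          # e.g. small / medium / large local models
    for bits in (16, 8, 4):         # fp16, int8, 4-bit quantization
        print(f"{params}B @ {bits}-bit: {weight_gb(params, bits):5.1f} GB")
```

By this math a 27B model at fp16 is ~54GB of weights alone, which is where 64GB machines start to make sense.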

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48925124)




Date: May 12th, 2025 7:05 PM
Author: cyan rough-skinned address

I just ordered a Core Ultra 9 285K and a 5070 Ti. Rate me.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48925151)




Date: May 12th, 2025 7:06 PM
Author: Curious Flesh Principal's Office Bbw

If you got a Z890 chipset you won the gen.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48925155)




Date: May 12th, 2025 7:07 PM
Author: cyan rough-skinned address

I did but it's one of the mid-range ones, not the crazy ASUS "gamerz" one or whatever

https://pcpartpicker.com/product/xPtLrH/msi-mpg-z890-carbon-wifi-atx-lga1851-motherboard-mpg-z890-carbon-wifi

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48925159)




Date: May 12th, 2025 7:10 PM
Author: Curious Flesh Principal's Office Bbw

That looks fine.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2:#48925167)