  The most prestigious law school admissions discussion board in the world.

I can do shit with Qwen3.5 that not even Chat-GPT can do


Date: February 26th, 2026 11:48 PM
Author: wine foreskin

I checked with Claude and asked if this was possible

https://i.imgur.com/yAOtRAk.png





Date: February 27th, 2026 12:01 AM
Author: ruby excitant fortuitous meteor stain

None of those GPT models exist anymore





Date: February 27th, 2026 12:11 AM
Author: wine foreskin

Whatever man, I'm pretty sure OpenAI won't just spot you 28 GB of VRAM on demand





Date: February 27th, 2026 12:39 AM
Author: ruby excitant fortuitous meteor stain

Show me Qwen3.5 doing reasoning as well as GPT-4.5/o3 and I'll be impressed.

Anybody can buy higher context limits with a local LLM. It's a cost function, not a major technical hurdle like higher-level reasoning.
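The "cost function" point can be made concrete: for a local model, the main price of a longer context is KV-cache VRAM, which grows linearly with context length. A rough sketch; the layer count, head counts, head size, and precision below are illustrative assumptions, not any real model's config:

```python
# Rough KV-cache VRAM estimate for a local transformer LLM.
# All model dimensions here are illustrative assumptions, not any
# particular model's actual configuration.

def kv_cache_gb(context_len, layers=48, kv_heads=8, head_dim=128,
                bytes_per_value=2):  # fp16/bf16 = 2 bytes per value
    # Per token, each layer caches one key and one value vector,
    # each of size kv_heads * head_dim.
    per_token = 2 * layers * kv_heads * head_dim * bytes_per_value
    return context_len * per_token / 1024**3

for ctx in (8_192, 32_768, 131_072):
    print(f"{ctx:>7} tokens -> ~{kv_cache_gb(ctx):.1f} GB of KV cache")
```

Doubling the context doubles the figure, which is why a bigger window on a local setup is a spending decision, not a research problem.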





Date: February 27th, 2026 1:16 AM
Author: wine foreskin

I'm not saying Qwen wins at everything. And even if it's a question of resource constraints, it's still true that ChatGPT won't give me the same context window size. I don't even know what the upper limit is on Qwen. Pretty sure I can make it 3-4x what it is now without losing any quality. DeepSeek 4 is supposed to have a 1M-token context window
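The "3-4x without losing quality" trick usually means RoPE position interpolation: rescale position indices so a longer sequence maps back into the position range the model was trained on. A minimal sketch of the idea; the dimension and base frequency are generic RoPE defaults, not a specific model's values:

```python
def rope_angles(position, dim=8, base=10000.0, scale=1.0):
    """Rotary-embedding angles for one position.

    scale > 1 implements position interpolation: dividing the
    position by scale squeezes an extended context back into the
    position range the model saw during training.
    """
    return [(position / scale) / base ** (2 * i / dim)
            for i in range(dim // 2)]

# With scale=4, position 8000 in a 4x-extended context produces the
# same angles the model originally saw at position 2000.
assert rope_angles(8000, scale=4.0) == rope_angles(2000)
```

In practice inference stacks expose this as a RoPE scaling factor in the model config; the sketch only shows why the rescaled positions look familiar to the model.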





Date: February 27th, 2026 12:07 AM
Author: awkward chapel

You're assuming tokens are equal across models. They aren't.





Date: February 27th, 2026 12:09 AM
Author: wine foreskin

I didn't say shit about tokens, Claude did.
