Like 85% of my ChatGPT cites have been wholesale hallucinations lately.
Date: October 22nd, 2025 5:02 AM Author: the walter white of this generation (walt jr.)
It's dropped off a cliff in like the past week. Worse than it's ever been before. Not just fake quotes, but not a single goddamn citation takes you to the case name GPT gives.
Anyone else getting this?
(http://www.autoadmit.com/thread.php?thread_id=5788578&forum_id=2#49365648)
Date: October 22nd, 2025 6:59 AM Author: ;;......,.,.,.;.,.,.,.,., ( )
Go to settings and turn "Anticipation" to off
(http://www.autoadmit.com/thread.php?thread_id=5788578&forum_id=2#49365679)
Date: October 22nd, 2025 2:21 PM Author: ;;......,.,.,.;.,.,.,.,., ( )
It's in settings, and it anticipates sources that will exist rather than merely sources that do exist. It's good for some things, but bad for legal research.
(http://www.autoadmit.com/thread.php?thread_id=5788578&forum_id=2#49366456)
Date: October 22nd, 2025 11:05 AM Author: elefantastisch
Lately I mostly haven't been able to get even the pro version to catch issues that a 2nd year associate should be able to identify. It's good at giving deep dives into existing law and quickly putting together a pretty good memo that competently covers the basics and intermediates (which is still very helpful for me).
For example, there's an exception to the economic loss rule in tort cases where breach of a public safety statute (in my jdx) allows recovery of economic losses. It took 5-6 deep research queries before it identified that, and I was asking in a way that should have surfaced it much earlier.
I haven’t experienced hallucination on deep research mode though.
(http://www.autoadmit.com/thread.php?thread_id=5788578&forum_id=2#49366003)
Date: October 22nd, 2025 11:03 AM Author: woke dog
Agreed, op, it's gotten noticeably worse lately. I think they are having inference compute problems. They've gotten so popular that they don't have enough compute for all their users, so they're starting to skimp.
I canceled my subscription and I'm gonna try out Gemini and see how it goes
(http://www.autoadmit.com/thread.php?thread_id=5788578&forum_id=2#49366001)