"But it can't THINK" Said the software engilawyer in year 6 of unemployment
Date: June 8th, 2025 8:07 AM Author: cucumbers
i will admit that i'm looking for a management job with no coding because i want to get away from coding. and i'll also admit that there are differing opinions on how AI will go.
but unlike liberal artists who fellate AI, i have the following credentials:
- STEM degree from a "prestigious" STEM school and ~15 years of professional coding experience
- past experience at an AI company where i was a principal engineer (these are harder to land than most management jobs) where i led actual AI projects
- spent a lot of time reading up on AI and experimenting with it outside of my AI job
- currently forcing my direct reports to use AI for coding due to senior management pushing for it
so, unlike the liberal artists here who fellate AI, i have an informed perspective. it can be a useful tool for a narrow subset of problems that humans can't perform (e.g., computationally-complex things that exploit the compute speed and memory of computers, which has been done under different names long before AI became a thing) and a few semi-gimmicky additions to that, but in the end, after years of working with it, my professional opinion is that it will never be able to replace humans for virtually all tasks that are challenging.
there are those who disagree with me, but who are they? companies whose revenue depends on AI development and startup founders running AI companies, both known for relying heavily on exaggeration, puffery, and often out-right fraud to misrepresent what AI can and will do in order to make more money. this is the source of the hype behind AI and its current bubble.
there's also the subset of engineers who disagree with me, but my best guess is that they lack the historical background behind AI outside modern AI research and/or have fallen for the hype.
btw which field did you study for your liberal arts degree?
(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2...id.#48996310) |
Date: June 8th, 2025 8:18 AM Author: Peter Andreas Thiel (🧐)
"there's also the subset of engineers who disagree with me, but my best guess is that they lack the historical background behind AI outside modern AI research and/or have fallen for the hype."
You realize it's far more than "jealous liberal artists" who see some future in which most of us no longer have a job in 5-10 years? Given your entire schtick a couple weeks ago was "I tried the company-mandated copilot and it wasn't very helpful, AI is a bust," I seriously doubt how much you've actually bothered to explore what current tooling is capable of beyond what you've been mandated to care about.
All of the arguments in this line hinge on some nebulous definition of "challenging" that never seems to actually get defined. Sure, AI won't replace John Carmack anytime soon (who himself seems to use it quite a bit). But I'm not John Carmack, and I'm pretty sure you're not John Carmack, so who cares if it's not literally an unstoppable super intelligence if the net outcome is in 5-10 years you need 2% of the current number of developers to achieve the current amount of work?
I have a CS degree and it means fuck all in this argument. What have you actually tried in "experimenting"? I was able to spin up a full tutorial, walkthrough, and sample project which showed me (a backend drone whose brain has turned to mush over the last decade) how to develop a device driver for a microphone, something I have zero knowledge in seeing as what little operating systems knowledge I had evaporated years ago. It was pretty fun but also obvious that we're not too far away from the tooling being good enough to automate away everyone except a very few. The only way out is massively increased demand, but the potential productivity gains are so incredibly high I don't see this being realistic (and given that there's a real limit on just how much software alone can do, meatspace engineers will fare better for a good while)
You're just defensive about your self image and are hilariously credentials checking in a sperg out, something that techmos are famously not supposed to really give a shit about.
(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2...id.#48996320) |
Date: June 8th, 2025 8:49 AM Author: cucumbers
thanks for misrepresenting what i just wrote. i made it very clear that it's far more than "jealous liberal artists" who disagree with me.
my "schtick" was very simple when i started discussing this because i didn't realize i'd see a new liberal artist fellate AI every single day here.
you mock my "credential checking," yet you seem to ignore that i literally led AI projects at an AI company (admittedly, i left several years ago). i wouldn't mention that if it wasn't relevant to the discussion.
moving more broadly to the topic of credentials and experience, they do actually matter when discussing technical topics, unlike nearly all non-technical topics. an outsider can write a coherent, compelling, and novel political critique, historical interpretation, etc. without any relevant credentials. but when it comes to technical topics, they're so far removed from ordinary human experience and so highly-dependent on technical training that credentials and/or relevant experience do matter. what happens when outsiders contact someone from the physics community about how they've figured out quantum gravity? they're all dismissed as cranks.
i have defined what i mean by "challenging" in other poasts, but neglected to in this subthread. as an example, for a sufficiently complex software project, AI falls apart quickly. as i no longer work on developing AI software but am instead being forced to use it for coding, it's right maybe 10% of the time at best.
in fact, i can think of simpler examples within the scope of software. it doesn't sufficiently understand best practices, often fails to identify flawed prompts, etc. you can see this even without complex software.
you made quite the jump from being able to develop a driver for old technology to being able to automate away nearly everything. have you ever worked on a complex software system built over decades, using multiple languages, dead/unmaintained frameworks, multiple infrastructures, etc.? that's what i do today, and AI is much more of a hindrance and distraction than a helpful tool at this level.
it's a small sample and just anecdotal, but all my direct reports have mentioned that AI is just slowing them down, and that it spits out garbage most of the time.
also, the credential-checking is largely a tongue-in-cheek response to the credential-dodging i see whenever i accuse some pro-AI poaster of having a liberal degree.
(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2...id.#48996355) |
Date: June 8th, 2025 9:25 AM Author: Peter Andreas Thiel (🧐)
"have you ever worked on a complex software system built over decades, using multiple languages, dead/unmaintained frameworks, multiple infrastructures, etc.?"
I think pretty much anybody aside from the lucky few who spend their entire career riding the $tartup slush money carousel does in some manner. As a direct example, the current codebase I work in is relatively new (~12 years), so the backend stuff is all relatively easy to follow (I fled my old job specifically to stop giving a shit about 25 year old Perl code), but still old enough that there's a fair amount of gruntwork to be done in migrating frontends from AngularJS (much of which is piles of excessively verbose shitty code that was written by contractors at the time, so you can only imagine how bad the coding standards of Indians 12 years ago were) to Vue3. I know fuck all about modern frontend, and don't really care to become an expert in it seeing as it seems like a fairly miserable ecosystem, but nonetheless migrating this is something that needs to be done and everyone is expected to contribute given it's not exactly rocket science.
We're expected to all spend at least X% of our time migrating the frontend codebase, and I get this shit done noticeably faster (3-4x) than everyone else while at the same time being praised for "paying attention to standards" and all that gay shit (ie, people reviewing the code are very happy with the output). I didn't suddenly become 3-4x the programmer everyone else is; it's because I've given Claude Code a fairly organized system of best practices, testing patterns, our internal idioms, and component organization to follow.
Given a rigorous set of requirements, one of the things AI is *best* at is translating Old->New. Of course most cases aren't this simple because if you're e.g. migrating some ancient COBOL banking system the cost of a mistake is infinitely higher, and it takes 20x the effort to understand what a given chunk of code is trying to do, but at the same time it is not hard at all to envision a world where such a migration goes from being an unrealistically long undertaking (maybe 100 devs spending their entire career or something similarly absurd) to being something you can tackle on a reasonable human/project timescale.
As another example, many modern devs are basically saddled with a sea of generally poorly documented microservices which are talking to each other in decidedly not obvious ways. When I first started where I am three years ago I spent weeks combing through a bunch of terraform and painstakingly taking notes and making diagrams of how all the pieces fit together. I wanted to blow my fucking brains out. AI is now capable of doing 90% of that work in 5% of the time with a little oversight (specifically instructing it on how to break down the analysis since it's far too much data to fit in a single context window). Back then I wouldn't have even dreamed about this being possible, and it's not hard to envision a scenario where it scales to the next order of magnitude even if the underlying models stay fixed at their current capabilities. Complexity is not a moat, it is completely solvable given enough resources and some modest initial guidance.
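The "instructing it on how to break down the analysis" part is basically map-reduce summarization: summarize piece by piece, then summarize the summaries. A rough sketch of the idea (everything here is hypothetical; `summarize` is a trivial stub standing in for whatever model call you'd actually use):

```python
# Map-reduce over a codebase too big for one context window:
# summarize each file (chunking oversized ones), then merge the summaries.

MAX_CHARS = 12_000  # crude per-call budget standing in for a context window

def summarize(text: str) -> str:
    # Placeholder for an actual LLM call; returns a fake "summary".
    head = text.strip().splitlines()[0] if text.strip() else "(empty)"
    return f"summary[{len(text)} chars; starts: {head[:40]}]"

def chunks(text: str, limit: int = MAX_CHARS):
    # Split an oversized file into context-window-sized pieces.
    for i in range(0, len(text), limit):
        yield text[i:i + limit]

def map_reduce_summary(files: dict) -> str:
    # Map: one summary per file. Reduce: merge per-file summaries
    # into a single overview of how the pieces fit together.
    per_file = []
    for name, body in files.items():
        parts = [summarize(c) for c in chunks(body)]
        per_file.append(f"{name}: " + " | ".join(parts))
    return summarize("\n".join(per_file))
```

The only real design decision is the chunking/merging strategy; the model just fills in the `summarize` step.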
The only way out is with radically increased demand for software engineers. That may end up happening, but I'm skeptical given that there is a much more real limitation of how few meatspace engineers there are to help build products which drive software demand. This isn't a phenomenon unique to software engineering even though it gets the most press, but really applies to basically any desk-oriented "knowledge profession" job. But unlike lawyers we have no cartel to stand in the way of capitalism mowing down our overpaid masses over the next 10 years.
(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2...id.#48996434) |
Date: June 8th, 2025 11:05 AM Author: cucumbers
even Vue seems dated from my perspective. NextJS seems to be the next "big" frontend framework, but i'm admittedly also more of a backend engineer, so i can't say any of this with certainty
you mention the need to give Claude "a fairly organized system of best practices, testing patterns, our internal idioms, and component organization to follow." to me, this is an obvious example of the limitations of AI as it requires significant input, context, and oversight that only an experienced engineer like you can provide
you mocked my tongue-in-cheek credential-checking, but it's relevant here: back when i was leading AI projects, i literally built an MVP of an LLM tool to translate COBOL to Java. it had a 60% compile success rate before i had to abandon it for other priorities. even with more modern tools, i wouldn't trust that kind of translation tool without extensive, exhaustive testing. maybe it'll get better, but my opinion at the time was that it was merely a computational aid that could save time but did not come close to replacing engineers
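to make the 60% figure concrete: a compile-success-rate harness is basically just running javac over every translated file and counting. a rough sketch (the translator itself and the file names are hypothetical; the javac checker assumes a JDK on PATH and is injectable so the arithmetic can be sanity-checked without one):

```python
import subprocess
from pathlib import Path

def compiles(java_file: Path) -> bool:
    """True if javac accepts the file (assumes a JDK on PATH)."""
    result = subprocess.run(
        ["javac", "-d", "build", str(java_file)],
        capture_output=True,
    )
    return result.returncode == 0

def compile_success_rate(java_files, check=compiles) -> float:
    """Fraction of translated files that compile cleanly.

    `check` is injectable so the metric can be exercised without a JDK.
    """
    files = list(java_files)
    if not files:
        return 0.0
    return sum(1 for f in files if check(f)) / len(files)
```

note that "compiles" is a very weak bar: it says nothing about whether the translated Java actually behaves like the original COBOL, which is why the extensive testing caveat matters.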
your example of poor documentation of what sounds like spaghettified microservices is typically a result of poor management, but it does demonstrate AI's value. i never claimed it's not a valuable tool, but if we take a step back, let's say AI was used to build the microservices. it would require significant oversight and likely end up reflecting the mess that comes out of managerial demands. AI can spit out garbage code very easily
IMO it's too broad to say "complexity is completely solvable." i can think of a bunch of counterexamples
(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2...id.#48996598) |
Date: June 8th, 2025 7:42 AM
Author: .,.,.,.,.;;.,.,..,.,.,.,..,., ( )
(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2...id.#48996290) |
Date: June 8th, 2025 7:47 AM
Author: ..,.,.,,,,.,.,..,.,,,.,..,,.,.,,,
The funny thing to me about this criticism of AI is that it presupposes that human beings typically reason from first principles.
Have they ever read a normal person's writing? It much more closely resembles ChatGPT word vomit than syllogistic A to B to C. Most people think this way - pattern recognition, assemble the stuff they want to say, get it in close proximity to similar stuff, and hit send. 95 percent of the time it gets the job done. It has limits, but the world already runs on it.
(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2...id.#48996295) |
Date: June 8th, 2025 8:44 AM
Author: ..,.,.,,,,.,.,..,.,,,.,..,,.,.,,,
I can’t speak to how it works on complex technical projects. My point is just that if “reasoning” is something that AI struggles with, it’s also something that most humans struggle with or just ignore. So the benchmark of reasoning isn’t really an accurate place to look when asking whether AIs can or will replace human workers.
(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2...id.#48996344) |
Date: June 8th, 2025 9:13 AM
Author: ..,.,.,,,,.,.,..,.,,,.,..,,.,.,,,
Yeah, but humans fuck up and need oversight too. My question is which tasks can it perform to a human worker standard of accuracy? Supervising law associates, and working with ChatGPT quite a bit, my takeaway is that it performs at least as well and exponentially more quickly than the associate on most tasks already. I think you'd find a similar take from professionals in medicine, banking, etc.
The goalposts keep getting moved in these insane ways, in part because there is a whole economy of AI wrapper grifters who overpromise on everything. But much of the criticism stems from a very false assumption that human knowledge workers themselves perform to an unrealistically high degree of accuracy. They don’t; to the contrary, I’m kind of amazed that society functions as well as it does given how endemic mistakes are in the work of highly trained and credentialed people. At some point soon, we will have validated models that equal or exceed human accuracy and reliability in many core disciplines. Perhaps not in your narrow application, but 99 percent of knowledge workers aren’t maintaining super complicated code bases or whatever you do.
(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2...id.#48996404) |
Date: June 8th, 2025 9:31 AM Author: cucumbers
in some fields, particularly software engineering, it leads to more oversight. right now i manage four engineers, so i'm there for oversight. but now that they're being forced to use AI at work, they themselves have to oversee what AI spits out because it's simply not good enough to trust.
i'm obviously not a lawyer, but your law example neglects to factor in the business impact of AI in the field. let's assume AI can indeed perform better and more quickly than associates, and your firm can reduce the number of associates. do the partners want that? absolutely not. faster work and fewer associates means fewer hours billed and less money made.
i'm too lazy to respond to the rest of your poast sry. maybe later
(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2...id.#48996441) |
Date: June 8th, 2025 11:35 AM
Author: ..,.,.,,,,.,.,..,.,,,.,..,,.,.,,,
Law firms will race to the bottom with AI. And they have already started.
(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2...id.#48996657) |
Date: June 8th, 2025 9:17 AM Author: cucumbers
i would say that the "human alignment" bias, as i interpret it, is helpful for the future of AI rather than a crippling factor. if AI ever reaches the level of being able to recreate the "creative" aspect of human thought ("creative" as defined in the technical sense seen in research on human thought, not the broader, ordinary definition), it will be a huge milestone in human achievement.
the fact that AI can "transform across thousands of natural languages, mathematical and logical systems and computer languages" is not unique to AI, was predicted around 100 years ago, and was demonstrated in the mid 20th century when computers could work beyond the computational and memory limitations of the human mind. first sign you've fallen for the AI hype.
AI often fails even in cases simpler than the "perfectly compiled code" example you give; it fails to understand best practices in coding, conventions unique to some languages/frameworks, etc.
i don't have an "irrational desire to prove the machine isn't as good as some people say." i'm doing this because i'm bored as fuck and happen to know AI very well. those with vested interests in AI have hyped it up, just as people with interests in other things will hype them up. sometimes it reaches the level of fraud. i'm trying to demonstrate that.
as mentioned in another subthread in this thread, i used to be a principal engineer on AI projects at an AI company, so i would not be surprised if i'm among the most-informed poasters when it comes to AI.
(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2...id.#48996414) |
Date: June 8th, 2025 8:10 AM
Author: .,.,.,.,.;;.,.,..,.,.,.,..,., ( )
*is a 'liberal artist'*
*emails the 'office hacker' when he needs a vlookup*
"Lol, but AI can't reason, it just guesses the next character."
(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2...id.#48996313) |
Date: June 8th, 2025 10:00 AM
Author: .,.,.,.,.;;.,.,..,.,.,.,..,., ( )
Are you trying this on some commercial service or a local LLM? If it's the former, it isn't a technical limitation, they just eventually cut off what you can do with one prompt to contain how much you run up their power bills. Writing one book that's at least as coherent as the churnslop you pick up at the airport bookstore is entirely feasible. It might have inconsistent characterization, plot holes, and no real point but that's true of human written books too.
(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2...id.#48996485) |