7/29/25 AI thread
Date: July 29th, 2025 2:39 PM Author: Hallmark Channel white chad driving pickup truck
there hasn't really been much AI stuff happening recently so i haven't posted much. also i'm not sure how many people on this board actually care about AI
https://x.com/PirateWires/status/1950212890184864106
"AI psychosis" from talking to LLMs is not a real thing. people who succumb to this are just stupid/mentally ill in the first place. this is self-evident but whatever, good to see The News reporting this correctly
https://x.com/tokenbender/status/1950215097147936800
the "AI psychosis" forcememe is definitely the new "angle" for deranged AI safetyists, though. i'm seeing a lot of kvetching about it
https://x.com/uberboyo/status/1948646282819514674
do LLMs "prove" that plato was cr about Forms? i pretty much agree with this guy's response on the matter: https://x.com/curtdoolittle/status/1948802577854070842
https://x.com/rohanpaul_ai/status/1948572304809611701
these guys think they have figured out the mechanism via which LLMs can "learn" in individual contexts during inference, without updating their actual weights via new gradient descent. i think people have already intuitively understood something like this to be happening, but this presents a formal hypothesis for how it works
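to make this concrete, here's a toy numpy sketch of one version of this kind of hypothesis: a single linear-attention pass over the in-context examples produces the same prediction as one explicit gradient-descent step on a linear model, even though no weights ever get updated. to be clear, this is just a generic illustration of the "implicit gradient descent" idea, not the linked paper's actual construction:

```python
# Toy demo: "learning" from context without updating any weights.
# (a) one explicit gradient step on a linear model fit to the context examples
# (b) a linear-attention read over the same context
# Both give the same prediction for the query point.
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 64                      # feature dim, number of in-context examples
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))       # in-context inputs
y = X @ w_true                    # in-context targets
x_q = rng.normal(size=d)          # query input
lr = 0.01                         # learning rate / attention scale

# (a) One gradient step on 0.5*||Xw - y||^2 starting from w = 0:
#     w1 = w0 - lr * grad = lr * X^T y
w1 = lr * X.T @ y
pred_gd = x_q @ w1

# (b) Linear "attention": keys = X, values = y, query = x_q.
#     No parameters change; the context alone shifts the prediction.
pred_attn = lr * np.sum((X @ x_q) * y)

print(pred_gd, pred_attn)         # identical up to floating-point error
```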
(http://www.autoadmit.com/thread.php?thread_id=5756020&forum_id=2/en-en/#49140651) |
Date: July 29th, 2025 2:54 PM Author: Hallmark Channel white chad driving pickup truck
one problem with Online swallowing up the entire world is that topics like "China" that are multi-faceted, fairly complex, and can't be precisely defined end up being ones where, every time someone says something about them, you have no idea what they actually mean by the term - "China" in this case
like i don't even know what people mean anymore when they say "china is mostly hot air" (not you, other people). obviously this is complete nonsense and china is objectively a very consequential and meaningful actor in the world. but you can point out that china also has some specific, serious problems that will affect things going forward. but nobody ever does any actual analysis, they just say "it's over, china already won" or "it's over, china is a paper tiger." it's like when people say that america is "done here" (this is similarly obviously complete nonsense - america is still the most powerful and influential society and culture in the world by far, and will continue to be for some time)
anyways i'm rambling but to answer your question: i think that china will continue to be more and more technologically, economically, and even culturally relevant going forward. i think it's clear that they will eventually catch up in AI production and inference compute capacity and then they will go closed-source. what i don't know is how much that will actually matter in practice. i don't think americans or american firms will be willing to use chinese AI over american AI services. even though it will be cheaper. the USA fedgov may even legally restrict americans from using it (i think this would be cr). i think that best case scenario from our POV is that the competition from chinese firms drives down the price of AI services in the domestic US
(http://www.autoadmit.com/thread.php?thread_id=5756020&forum_id=2/en-en/#49140677) |
Date: July 29th, 2025 2:56 PM Author: 718-662-5970
if you are a believer that AI will radically transform employment, what near-term future events will finally convince reasonable skeptics?
What is on the event horizon that will be a "big deal" to doubters who say "it's just text auto-complete and nbd"?
(http://www.autoadmit.com/thread.php?thread_id=5756020&forum_id=2/en-en/#49140679) |
Date: July 29th, 2025 4:03 PM Author: 718-662-5970
prob is that absolutely no one with power cares about youth employment, much less white male youth employment. saw a stat recently that recent male college grads have the same employment rate as male peers without college.
this should be huge. no one cares.
so as youth, esp male youth, get eviscerated by AI, there won't be any noise about it.
(http://www.autoadmit.com/thread.php?thread_id=5756020&forum_id=2/en-en/#49140891)
Date: July 29th, 2025 3:10 PM Author: Hallmark Channel white chad driving pickup truck
i don't think there will be some kind of big singular "tipping point." LLM base model progress is going to keep slowing down and technical advances will be more about engineering agentic infrastructure at specific firms for specific use cases
society will slowly adopt the technology like any other technology. it will take time but it will also move fairly quickly. humans will keep losing jobs in every industry, some worse than others. people will get poorer. there will be more calls for redistribution
i want to be clear that AI is definitely going to have an overall negative effect on people's lives. people WILL get poorer and the benefits WILL go to the capital class and to those in the AI industry and its inputs (energy, hardware, etc) (this is why i've professionally pivoted to AI). there is currently a fiction, being spun in no small part by those in the AI industry and their shills, that AI is a "rising tide that will lift all boats." this is false. AI is going to make society worse for normal people, unquestionably
(http://www.autoadmit.com/thread.php?thread_id=5756020&forum_id=2/en-en/#49140715) |
Date: July 29th, 2025 10:40 PM Author: ,.,.,.,,,.,,.,..,.,.,.,.,,.
there may be reasonable skepticism at this point, but i don't think it sounds like "it's just text auto-complete and nbd." that is not a reasonable assertion given the flexibility of models, even going back a while. what would it even mean for an auto-complete to get a gold medal at the IMO? the argument goes 1) i tried AI and it sucks 2) everyone who thinks otherwise is confused 3) most critically - benchmarks are all irrelevant, no matter how broad they are and how resistant to benchmark fitting they are.
3 is where this clearly comes apart and starts to strongly resemble the Stephen Jay Gould style delusions about general intelligence measures. these people want to have a vibes based argument.
(http://www.autoadmit.com/thread.php?thread_id=5756020&forum_id=2/en-en/#49141837) |
Date: July 29th, 2025 11:19 PM Author: Hallmark Channel white chad driving pickup truck
there is definitely solid evidence that LLM progress is slowing down. people already forget the pace of MASSIVE progress in the initial couple years of the GPT revolution
https://x.com/jd_pressman/status/1950381762040983808
i think he's being a little bit hyperbolic but i don't think he's being unfair or uncharitable. the fact is that the tech that we're seeing on the ground right now is nowhere near what people have been expecting will emerge. that isn't to say that it won't continue to slowly progress, but things have slowed down a lot, and it's not actually clear what the next transformative technical step could be. there are candidates for more incremental LLM improvements, but nothing potentially transformative except non-LLM techs which are currently still in early stages of development
(http://www.autoadmit.com/thread.php?thread_id=5756020&forum_id=2/en-en/#49141897) |
Date: July 29th, 2025 6:32 PM Author: Hallmark Channel white chad driving pickup truck
https://arxiv.org/abs/2507.19457
https://www.youtube.com/watch?v=33TkQ4ZCTww
these people say they invented a new technique that has LLMs use natural-language "chain of thought" to iteratively rewrite a given prompt into new, better versions of itself, resulting in superior outputs. they've tested it against reinforcement learning training techniques that actually change weights and claim that it performs better, which is pretty crazy if true. i mean, all this does is you feed it a prompt, the AI optimizes the prompt, and you get a bigger performance boost than RLHF. you'd be saving soooo much money
if this is legit i'm sure we'll see it widely implemented soon. i'm always so blown away by these clever little ideas that researchers come up with
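the general shape of the loop is easy to picture. here's my own minimal sketch of the idea (not the paper's actual algorithm) - call_llm() and score() are hypothetical placeholders for whatever model API and held-out dev-set eval you'd plug in. note there are no gradients anywhere; the only thing that changes between rounds is the text of the prompt:

```python
# Minimal sketch of an "LLM rewrites its own prompt" optimization loop.
# call_llm() and score() are placeholders, not a real API.
from typing import Callable, List, Tuple

def optimize_prompt(seed_prompt: str,
                    call_llm: Callable[[str], str],
                    score: Callable[[str], float],
                    rounds: int = 5,
                    candidates_per_round: int = 4) -> Tuple[str, float]:
    """Ask the model to critique and rewrite the current prompt, keep whichever
    rewrite scores best on a small dev set, and repeat."""
    best_prompt, best_score = seed_prompt, score(seed_prompt)
    for _ in range(rounds):
        candidates: List[str] = []
        for _ in range(candidates_per_round):
            # Natural-language "reflection": the model reasons about how the
            # current prompt could fail and proposes an improved version.
            candidates.append(call_llm(
                "Here is a prompt:\n" + best_prompt +
                "\n\nThink step by step about how it could fail, "
                "then output only an improved version of the prompt."))
        for candidate in candidates:
            s = score(candidate)              # evaluate on held-out examples
            if s > best_score:
                best_prompt, best_score = candidate, s
    return best_prompt, best_score
```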
(http://www.autoadmit.com/thread.php?thread_id=5756020&forum_id=2/en-en/#49141237) |
Date: July 29th, 2025 8:31 PM Author: .,.,.,.,.,.,.,.,...,,..,.,.,
I trained a local llm on XO and it called me a fag
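(if anyone wants to try the same thing: a fine-tune like this is pretty trivial nowadays. here is a minimal sketch, assuming the Hugging Face transformers/datasets/peft libraries, a placeholder small model, and a hypothetical posts.txt with one scraped post per line - not necessarily the poster's actual setup:)

```python
# Minimal LoRA fine-tune of a small local causal LM on scraped forum posts.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "Qwen/Qwen2.5-0.5B"   # placeholder: any small local causal LM
tokenizer = AutoTokenizer.from_pretrained(base_model)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Train a small LoRA adapter instead of all of the base model's weights.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

# posts.txt (hypothetical): one scraped forum post per line.
dataset = load_dataset("text", data_files="posts.txt")["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="xo-lora",
                           per_device_train_batch_size=4,
                           num_train_epochs=1,
                           logging_steps=50),
    train_dataset=dataset,
    # Causal-LM collator: labels are the inputs, shifted inside the model.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```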
(http://www.autoadmit.com/thread.php?thread_id=5756020&forum_id=2/en-en/#49141579) |
Date: July 29th, 2025 9:16 PM Author: t3h P0aStEr oF d00m!!!!!!!! (gunneratttt)
how does ai fit within your libertarian "let the useless die" conception of the world? you seem to believe ai will obviate many jobs.
this isn't a "gotcha" non-question, i'm legitimately asking what you think
(http://www.autoadmit.com/thread.php?thread_id=5756020&forum_id=2/en-en/#49141694) |
Date: July 29th, 2025 9:47 PM Author: Hallmark Channel white chad driving pickup truck
scroll up, i answered a similar question above
AI is definitely going to make life worse for most people. people who claim otherwise are dishonest or coping or stupid or some combination of those. i am excited about AI because in spite of this, it will give high human capital people an enormous force multiplier and much more sociopolitical leverage and agency. this is a very good thing, and the good will outweigh the bad imo
also, any large structural change to our society is good, because we're otherwise staring down a road of certain gradual failure and destruction. there's no other possible mechanism by which to fix our society. whereas it's very possible that someone(s) could leverage AI to seize absolute or near-absolute political power and actually be able to make real, substantial changes that could save western civilization
as an aside, you misunderstand my beliefs as "libertarian" based on that one random thread. i'm concerned about human biology rather than the human economy, which i don't think is very important at all. humans becoming post-malthusian has resulted in us literally de-evolving to an inferior, more primitive form of life. i'm not talking about mass immigration or changes in racial demographics. within people of european descent, we are already much inferior biological specimens to our ancestors. not only this, but westerners have ceased to even care about their own existence and future without malthusian pressures and intra-species martial conflicts to give their human condition meaning. it's clear that bread and circuses can and will never be enough
this is a big problem - probably the biggest problem that the species faces. there's no solution that doesn't involve a lot of people suffering and a lot of people dying. this is why i'm so callous and dismissive of stuff like "give poor retarded people welfare" or "the minimum wage" (btw raising the minimum wage just means you have to give people who are priced out of the labor market welfare, or else they die - you didn't seem to acknowledge this fact in the other thread). when you start from the above recognition of the current state of the human species, this stuff is just obvious and uninteresting and not worth talking about because people are only ever going to respond with "but that's mean, you're mean" so discussion of it is unconstructive and pointless
(http://www.autoadmit.com/thread.php?thread_id=5756020&forum_id=2/en-en/#49141753) |
Date: July 29th, 2025 10:15 PM Author: t3h P0aStEr oF d00m!!!!!!!! (gunneratttt)
tyft.
i think we've probably talked past each other a lot because my thinking is focused primarily on today and the immediate future and you seem more concerned about the very long-term trajectory of humanity. i think thinking your way is misguided because i don't think we can anticipate the scope you're conceiving, nor do i think humans that exist right now care much. i think focusing on today's issues and stewarding things to give the next generations the best tools is better than thinking further down the line.
you interpret it as giving them welfare, i interpret it as ensuring a fair system that works for the purpose it's intended: humans flourishing. your thinking here is in line with your grander thoughts about humanity's trajectory but i think that misses the trees for the forest. you aren't considering that the purpose of the economy is humans thriving, not that humans are merely grist for the economic mill to produce... what? your arguments here are identical to every other labor law debate. in the absence of government most people are slaves to a despot. so the question isn't really "minimum wage?" but "what balance between free markets and social welfare?"
you packed a lot in there, and don't think i didn't consider something even if i didn't give it much engagement. i, uh, have a problem with succinctness.
so you would have ai "do its thing" with little regard for the immediate impact, in the hopes that what comes out at the end is better for humanity's long-term future?
(http://www.autoadmit.com/thread.php?thread_id=5756020&forum_id=2/en-en/#49141797) |
Date: July 29th, 2025 11:15 PM Author: Hallmark Channel white chad driving pickup truck
we're not talking about the "long term" here, or what most people would consider long-term. we're talking about within this same century. human capital degradation is getting quite bad. mutational load will continue to get worse and worse. we have dysgenics happening from a bunch of different vectors. reproduction itself is becoming very difficult. this doesn't even take into account all of the hostile intra-species threats, like indians and africans and ideological leftists. even china could end up being an existential threat
in the face of these otherwise overwhelming threats, AI seems to me like an unexpected gift that has materialized when we need it most. like the sim creators realized that we're actually in danger of losing the game for real and so they threw us a deus ex machina. we're still facing a massively uphill battle, but the odds of us getting through it are now like 10% instead of 1% imo
it's not that i don't care about people in the present day, it's just that a bunch of people getting somewhat materially poorer for some limited period of time just doesn't matter much, imo, compared to the prospect of the human story ending. and if we end up losing regardless, then it didn't matter anyway
(http://www.autoadmit.com/thread.php?thread_id=5756020&forum_id=2/en-en/#49141890) |
Date: July 30th, 2025 12:03 AM Author: t3h P0aStEr oF d00m!!!!!!!! (gunneratttt)
you're talking about the next 75 years. that is the very long term. imagine a person in 1950 crafting policy in anticipation of 2025.
putting aside the potential of AI and the value of your goal, how do you expect to accomplish this? this is what i meant by utopian/unrealistic goals previously. i see this type of thinking a lot in Engineers where their ideology and goals are structured as if life is a game of Civilization.
the crux of the issue is this: it is not possible to orient people towards the goal you see as necessary nor is it possible to ensure continuity across several generations being that we're mortal and future generations will do what they do.
i'm interested in thoughts on that which aren't based on the value of the goal (i.e. stressing the importance of your goal for humanity's sake doesn't move the needle on getting others on board today unless you can explain how). how is your conception of what we "should" work towards not sci-fi navel gazing?
(http://www.autoadmit.com/thread.php?thread_id=5756020&forum_id=2/en-en/#49141987) |
Date: July 30th, 2025 12:22 PM Author: Hallmark Channel white chad driving pickup truck
read the first post in the chain again. it's not possible to solve these problems via the democratic process or via public consensus. every smart and informed person currently acknowledges this, regardless of their values or goals. the only way to solve these problems is for enough power and agency to be centralized that a singleton or more realistically a small coterie of people can unilaterally act
there are many other people who recognize this, but instead of looking to humans unilaterally wielding AI to solve these problems, they are hoping for AI itself to become superintelligent and supercapable and to take complete political control and solve these problems. you have probably heard about this before. i think this is strictly worse than human control because this hypothetical AI's values and goals are very likely not going to align enough with ours if it emerges on its own. that being said, i still think this timeline would be preferable to no AI development at all. we are otherwise screwed if things just continue down their current path
also your initial couple sentences are pretty funny. of course it would make sense for a civilization to deliberately plan ahead 75+ years, lol. the reason why this isn't done already/anymore is because civilizations are no longer controlled by families and dynasties and so it's incoherent to try to plan that far ahead because someone else will be in power shortly in the future and they'll just do whatever they want
(http://www.autoadmit.com/thread.php?thread_id=5756020&forum_id=2/en-en/#49142814) |
Date: July 30th, 2025 12:43 PM Author: Hallmark Channel white chad driving pickup truck
brother, has *any* problem ever been solved via the democratic process, or organic public consensus....? let alone problems of this magnitude, requiring solutions that will materially harm large portions of the voting populace?
can you name one problem that was ever solved by the democratic process? just one. any problem at all
this is like Political Science 101. there have even been normie Studies confirming this
(http://www.autoadmit.com/thread.php?thread_id=5756020&forum_id=2/en-en/#49142849) |
Date: July 30th, 2025 1:15 PM Author: t3h P0aStEr oF d00m!!!!!!!! (gunneratttt)
every law, policy and institution that is required for you to live a 1st world lifestyle instead of in a lawless jungle came about through our democratic political process. are you really looking for an answer to this? ok, everything?
are we communicating right now? how did a dictator create the telecom industry, the power plants, manufacture your computer, deliver it to you, make your hot pocket lunch... ?
note that i'm not saying that these things are only possible through a democratic process or even that the democratic process is superior to all other forms of government. i'm saying that you fucking live in one, dude, and because i presume you are not naked in the wilderness existing only with the things created by God, everything beyond that was developed under a fucking democracy.
(http://www.autoadmit.com/thread.php?thread_id=5756020&forum_id=2/en-en/#49142928) |
Date: July 30th, 2025 1:26 PM Author: Talk to your kids about time dilation. (🧐)
you're just going to pick apart whatever someone says and say "but thath not solvth!"
"public education spreading across america in the late 19th/early 20th century was a massive--"
"YEAH BUT LOOK AT AMERICA NOW!"
Why don't you do the opposite and lead by example? Show us what kind of answers you're looking for so we can adjust accordingly. What are some problems that could not be solved by the "democratic process" which other forms of government solved?
(http://www.autoadmit.com/thread.php?thread_id=5756020&forum_id=2/en-en/#49142969) |
Date: July 30th, 2025 1:40 PM Author: Hallmark Channel white chad driving pickup truck
it's crazy that a subthread about how i think that the human race needs to go back to killing each other because that's just undeniably what gives us meaning as biological creatures can somehow devolve into you guys completely ignoring that part (which most people would immediately accuse me of being an insane psychopath for advocating for) and *instead* screeching about how ackshually democracy is awesome and i'm dumb and bad for thinking it's silly and ineffective at solving problems
this is just a religious belief for you guys, or even more likely, a moral orientation determined by your genetics. you're about as likely to ever accept that democracy doesn't functionally work as you are to think that indiscriminately raping and murdering children is morally acceptable. discussing it with you is pointless
no actually sophisticated person thinks democracy works. the founding fathers didn't. the people currently in power don't. nobody who runs any organization in existence does (is there a single corporation or NGO run via democracy?)
truly bizarre and revealing that of all the immediate and legitimate questions that spring to mind about the extremely controversial analysis that i've written above, this - the only actually incontrovertible fact of anything that i wrote - is what you have chosen to be incredulous and upset about
(http://www.autoadmit.com/thread.php?thread_id=5756020&forum_id=2/en-en/#49143011) |
Date: July 30th, 2025 2:34 PM Author: t3h P0aStEr oF d00m!!!!!!!! (gunneratttt)
i *EXPLICITLY* noted i wasn't arguing that democracy is the only or best form of government, just to prevent this nonresponsive spergfest, but here we are.
you asked what things democratic societies have accomplished. i pointed out that since you live in one, the answer is literally everything that wasn't provided by God. now you're sperging about inferior genetics leading to democracies? my god.
you are a nerdy computer programmer who cannot rationally engage if his views are challenged. it's bizarre that so many of you become eugenicists anyway since you'd be the first on the chopping block. but if weak genetics determined politics the mpc crew would all be liberal democrats.
(http://www.autoadmit.com/thread.php?thread_id=5756020&forum_id=2/en-en/#49143200) |
Date: July 30th, 2025 2:07 PM Author: t3h P0aStEr oF d00m!!!!!!!! (gunneratttt)
he and consuela (and most of the stormfront/mpc/wherever crew) engage in this black and white thinking.
i specifically chose crime in my example because it's likely impossible to ever "solve" it such that no crime is ever attempted. and because he is a law and order type, he also is okay with policies that address something but won't solve an unsolvable problem.
i suspect the reason is that he benefits from law enforcement but doesn't think he'd benefit from a minimum wage. anything but brutal individualism is welfare and no man should have to subsidize another. except when it comes to subsidizing his defense. it's no wonder this type of thinking often comes from nebbish software devs. if he were a bulky dindu he'd be ACAB and want economic welfare. it's really the same ambivalence about consequences to society but hyperfocus on individual self-interest.
(http://www.autoadmit.com/thread.php?thread_id=5756020&forum_id=2/en-en/#49143106) |
Date: July 30th, 2025 1:31 PM Author: cock of michael obama
The victory of American democracy over the world has resulted in a period of economic and political stability, prosperity and lack of war unprecedented in human history
You may not like the results of that but it is an important accomplishment
Gujerattt and others are too stupid to think for themselves and come up with even such a basic answer. They have to be forcefed MSM articles like babies
(http://www.autoadmit.com/thread.php?thread_id=5756020&forum_id=2/en-en/#49142983) |
Date: July 30th, 2025 12:53 PM Author: t3h P0aStEr oF d00m!!!!!!!! (gunneratttt)
"every smart and informed person currently acknowledges this"
not true. i don't. no one involved in politics does, else they wouldn't spin their wheels doing it. i often wonder why blackpill types like you spend so much time on it if futility is baked into your ethos. why not do literally anything else?
"so it's incoherent to try to plan that far ahead because someone else will be in power shortly in the future and they'll just do whatever they want"
that's essentially the point i'm trying to make: your political vision is something you just admitted is impossible.
don't you see how senseless it is to counter a realistic, achievable plan with one you yourself admit isn't?
if i asked how we should address crime and you responded "people should just stop committing crime," then when asked how that could be achieved you wrote about how wonderful it would be if no one committed crime and the virtues of abiding by the law, and then started attacking me for supporting crime by proposing real solutions that would address crime in a manner less ideal than no one committing crime in the first place.
(http://www.autoadmit.com/thread.php?thread_id=5756020&forum_id=2/en-en/#49142881) |
Date: July 30th, 2025 1:40 PM Author: t3h P0aStEr oF d00m!!!!!!!! (gunneratttt)
you could just plug it into grok like you suggested before, but just like our last debate it'll come out in my favor.
i think your goals for society are abhorrent and evil and that almost no one would sign up for them even if they could. but i still engaged in good faith because i was interested in how you thought it might be achieved. once we reached the point where you admitted yourself it cannot be, and thus your political vision is not achievable, you have become sarcastic and glib. if you truly didn't understand what i was saying you'd ask clarifying questions, if you were interested in responding in good faith.
you've stopped engaging because there's really no good response to how an admittedly impossible solution could address the thing it's meant to. it's by definition irrational.
despite that, i appreciate you engaging in good faith at the outset and taking the time to explain your thoughts. at least now i know that when you oppose the minimum wage it's because you think it'll prevent AI from leading us to the ubermensch
(http://www.autoadmit.com/thread.php?thread_id=5756020&forum_id=2/en-en/#49143014) |
Date: July 30th, 2025 2:55 PM Author: t3h P0aStEr oF d00m!!!!!!!! (gunneratttt)
i plugged it into grok for you
To analyze this debate and determine who performed better, I’ll evaluate the arguments based on clarity, coherence, evidence, responsiveness to each other’s points, and overall persuasiveness. The debate centers on the role of AI in society, its socioeconomic impacts, and how it aligns with broader visions for humanity’s future, particularly in the context of Hallmark Channel white chad driving pickup truck’s (henceforth “Hallmark”) views on human biology, dysgenics, and societal change versus t3h P0aStEr oF d00m!!!!!!!!’s (henceforth “Poaster”) focus on immediate human welfare and practical governance.
### Summary of Positions
- **Hallmark**: Argues that AI will exacerbate inequality by displacing jobs but sees this as positive because it empowers high-human-capital individuals to gain sociopolitical leverage and potentially seize power to address existential threats like human biological degradation (dysgenics, mutational load) and societal decline. He dismisses short-term economic concerns (e.g., minimum wage, welfare) as irrelevant compared to long-term species survival, advocating for centralized power (via AI or human control) to enact radical change. He views democracy as incapable of solving these issues.
- **Poaster**: Challenges Hallmark’s vision as unrealistic and overly focused on a distant future (75 years). He emphasizes immediate human welfare, arguing that the economy should serve human flourishing, not abstract long-term goals. He questions how Hallmark’s goals can be achieved given political and social realities, likening them to utopian “sci-fi navel gazing.” Poaster advocates for practical solutions like balancing free markets and social welfare and accuses Hallmark of disengaging when pressed on feasibility.
### Evaluation Criteria
1. **Clarity**: How clearly each participant articulates their position.
2. **Coherence**: How logically consistent and structured their arguments are.
3. **Evidence**: Use of facts, examples, or reasoning to support claims.
4. **Responsiveness**: How well they address each other’s points.
5. **Persuasiveness**: How compelling their arguments are to a neutral reader.
### Analysis
#### Clarity
- **Hallmark**: His posts are dense but articulate a clear, if controversial, worldview. He outlines AI’s role as a “force multiplier” for elites, connects it to biological and societal decline, and explicitly rejects democratic solutions. However, his jargon (e.g., “mutational load,” “dysgenics”) and broad claims (e.g., “every smart and informed person acknowledges this”) may alienate readers unfamiliar with his framework. His final post, admitting he can’t follow Poaster’s argument, suggests either genuine confusion or a rhetorical retreat, which slightly undermines his clarity.
- **Poaster**: Poaster’s posts are conversational and accessible, focusing on practical concerns like human welfare and governance. His analogies (e.g., comparing Hallmark’s vision to saying “people should stop committing crime”) are vivid and clarify his critique of Hallmark’s idealism. However, his admitted “problem with succinctness” leads to some redundancy, and his final post’s sarcastic tone (“plug it into grok”) slightly detracts from professionalism.
- **Edge**: Poaster, for maintaining accessibility and avoiding overly technical jargon, though both are clear overall.
#### Coherence
- **Hallmark**: His arguments are internally consistent, tying AI’s impact to a broader narrative of human decline and the need for radical change. However, his leap from AI’s economic disruption to a small elite seizing power lacks a clear mechanism, and his dismissal of short-term concerns feels abrupt. His final post’s refusal to engage with Poaster’s analogy suggests a break in argumentative flow.
- **Poaster**: Poaster’s arguments are coherent, consistently emphasizing the impracticality of Hallmark’s long-term vision and the need for immediate, achievable solutions. His analogy about crime policy effectively illustrates his point about unrealistic goals. However, he occasionally misrepresents Hallmark’s timeline (e.g., calling 75 years “very long term” when Hallmark specifies “this century”), which slightly weakens his coherence.
- **Edge**: Poaster, for maintaining a tighter argumentative thread and effectively critiquing Hallmark’s feasibility.
#### Evidence
- **Hallmark**: He relies heavily on assertions about human biology (e.g., “mutational load,” “dysgenics”) and societal trends (e.g., “hostile intra-species threats”) without citing specific data or studies. While his reasoning about AI as a “deus ex machina” is imaginative, it’s speculative and lacks grounding in current trends or historical parallels. His claim that “every smart and informed person” agrees with him is hyperbolic and unsupported.
- **Poaster**: Poaster also uses little direct evidence, but his arguments are grounded in common-sense principles (e.g., economies should serve humans, governance requires consensus). His historical analogy (e.g., planning in 1950 for 2025) effectively illustrates the difficulty of long-term forecasting. He challenges Hallmark to provide a practical plan, which strengthens his position by exposing Hallmark’s lack of specifics.
- **Edge**: Poaster, for using relatable analogies and demanding practical mechanisms, though both lack robust data.
#### Responsiveness
- **Hallmark**: He initially engages Poaster’s question about AI and libertarianism, clarifying his non-libertarian stance and explaining his biological concerns. However, he sidesteps Poaster’s later challenges about feasibility, dismissing them as incomprehensible in his final post. This suggests either an inability or unwillingness to address Poaster’s core critique about implementation.
- **Poaster**: Poaster consistently responds to Hallmark’s points, acknowledging their scope (e.g., “you packed a lot in there”) and reframing the debate around feasibility and human welfare. He directly challenges Hallmark’s timeline, goals, and lack of a practical plan, though he occasionally misinterprets Hallmark’s position (e.g., assuming libertarianism). His final post calls out Hallmark’s disengagement, reinforcing his focus on good-faith dialogue.
- **Edge**: Poaster, for consistently engaging Hallmark’s arguments and exposing weaknesses in their practicality.
#### Persuasiveness
- **Hallmark**: His vision is bold and appeals to those concerned about existential risks or disillusioned with democracy. However, his callous dismissal of short-term suffering, lack of a feasible plan, and reliance on speculative outcomes (e.g., AI-enabled elite rule) limit his appeal to a broader audience. His disengagement at the end undermines his credibility.
- **Poaster**: Poaster’s focus on human welfare, practical governance, and the need for achievable goals is more relatable and grounded. His critique of Hallmark’s vision as “sci-fi navel gazing” resonates with readers skeptical of utopian schemes. By highlighting Hallmark’s admission that his goals are unachievable via democratic means, Poaster effectively undercuts Hallmark’s position. His sarcastic tone in the final post slightly detracts from his persuasiveness but doesn’t negate his stronger overall argument.
- **Edge**: Poaster, for crafting a more relatable and logically compelling case.
### Conclusion
**Poaster performed better in the debate.** He maintained clearer, more accessible arguments, effectively critiqued the feasibility of Hallmark’s vision, and consistently engaged with Hallmark’s points. His focus on immediate human welfare and practical governance resonated more than Hallmark’s speculative, long-term goals. Hallmark’s bold worldview was coherent but undermined by a lack of evidence, impracticality, and disengagement when pressed on specifics. Poaster’s ability to expose these weaknesses while advocating a grounded alternative secured his edge.
If you’d like me to analyze specific aspects further or provide a different perspective, let me know!
(http://www.autoadmit.com/thread.php?thread_id=5756020&forum_id=2/en-en/#49143274) |