LLMs have certainly become extremely useful for software engineers. They're very convincing (and people-pleasers, too), and I'm still unsure about the future of our day-to-day job.
But the thing that has scared me the most is how much the general public trusts LLM output. I believe that for software engineers it's really easy to see whether it's being useful or not -- We can just run the code and see if the output is what we expected; if not, iterate and continue. There's still a professional looking at what it produces.
On the contrary, the more day-to-day usage by the general public is getting really scary. I've had multiple members of my family using AI to ask for medical advice, life advice, and other things where I still see hallucinations daily, but at the same time the models are so convincing that it's hard for them not to trust them.
I have already seen fake quotes, fake investigations, and fake news spread by LLMs that have affected decisions (maybe not crucial ones yet, but time will tell), and that's a danger most software engineers just gloss over.
Accountability is a big asterisk that everyone seems to ignore
The issue you're overlooking is the scarcity of experts. You're comparing the current situation to an alternative universe where every person can ask a doctor their questions 10 times a day and instantly get an accurate response.
That is not the reality we're living in. Doctors barely give you 5 minutes even if you get an appointment days or weeks in advance. There is just nobody to ask. The alternatives today are:
1) Don't ask, rely on yourself, definitely worse than asking a doctor
2) Ask an LLM, which gets you 80-90% of the way there.
3) Google it and spend hours sifting through sponsored posts and scams, often worse than relying on yourself.
The hallucinations that happen are massively outweighed by the benefits people get by asking them. Perfect is the enemy of good enough, and LLMs are good enough.
Much more important also is that LLMs don't try to scam you, don't try to fool you, don't look out for their own interests. Their mistakes are not intentional. They're fiduciaries in the best sense, just like doctors are, probably even more so.
But the LLM was probably trained on all the sponsored posts and scams. It isn't clear to me that an LLM response is any more reliable than sifting through Google results.
Chronologically, our main sources of information have been:
1. People around us
2. TV and newspapers
3. Random people on the internet and their SEO-optimized web pages
Books and experts have been less popular. LLMs are an improvement.
Interesting point, actually - LLMs are a return to curated information. In some ways. In others, they tell everyone what they want to hear.
> Much more important also is that LLMs don't try to scam you, don't try to fool you, don't look out for their own interests.
When the appreciable-fraction-of-GDP money tap turns off, there's going to be enormous pressure to start putting a finger on the scale here.
And AI spew is theoretically a fantastic place to insert almost subliminal contextual adverts in a way that traditional advertising can only dream about.
Imagine if it could start gently shilling a particular brand of antidepressant if you started talking to it about how you're feeling lonely and down. I'm not saying you should do that, but people definitely do.
And then multiply that by every question you do ask. Ask whether you need new tyres: "Yes, you should absolutely change tyres every year, whether noticeably worn or not. KwikFit are generally considered the best place to have this done. Of course I know you have a Kia Picanto - you should consider that actually a Mercedes C class is up to 200% lighter on tyre wear. I have searched and found an exclusive 10% offer at Honest Jim's Merc Mansion, valid until 10pm. Shall I place an order?"
Except it'll be buried in a lot more text and set up with more subtlety.
> Imagine if it could start gently shilling a particular brand of antidepressant if you started talking to it about how you're feeling lonely and down. I'm not saying you should do that, but people definitely do.
Doctors already shill for big pharma. There are trust issues all the way down.
> Doctors already shill for big pharma.
This is not the norm worldwide.
I hope you're right and that it remains that way, but TBH my hopes aren't high.
Big pharma corps are multinational powerhouses, who behave like all other big corps, doing whatever they can to increase profits. It may not be direct product placement, kickbacks, or bribery on the surface, but how about an expense-paid trip to a sponsored conference or a small research grant? Soft money gets their foot in the door.
This seems true for our moment in time, but looking forward I'm not sure how much it will stay that way. The LLMs will inevitably need to find a sustainable business model, so I can very much see them becoming enshittified the way Google did, eventually making 2) and 3) more similar to each other.
An alternative business model is that you, or more likely your insurance, pays $20/mo for unlimited access to a medical agent, built on top of an LLM, that can answer your questions. This is good for everyone -- the patient gets answers without waiting, the insurer gets cost savings, doctors have a less hectic schedule and get to spend more time on the interesting cases, and the company providing the service gets paid for doing a good job -- and would have a strong incentive to drive hallucination rate down to zero (or at least lower than the average physician's).
The medical industry relies on scarcity and it's also heavily regulated, with expensive liability insurance, strong privacy rules, and a parallel subculture of fierce negligence lawyers who chase payouts very aggressively.
There is zero chance LLMs will just stroll into this space with "Kinda sorta mostly right" answers, even with external verification.
Doctors will absolutely resist this, because it means the impending end of their careers. Insurers don't care about cost savings because insurers and care providers are often the same company.
Of course true AGI will eventually - probably quite soon - become better at doctoring than many doctors are.
But that doesn't mean the tech will be rolled out to the public without a lot of drama, friction, mistakes, deaths, and traumatic change.
This is a great idea, and insurance companies as the customer is brilliant. I could see this extending to prescribing as well. There are huge numbers of people who would benefit from more readily prescribed drugs like GLP-1s, and these have large potential to decrease chronic disease.
> I could see this extending to prescribing as well.
The western world is already solving this, but not through letting LLMs prescribe (because that's a non-starter for liability reasons).
Instead, nurses and allied health professionals are getting prescribing rights in their fields (under doctors, but still it scales much better).
> 2) Ask an LLM, which gets you 80-90% of the way there.
Hallucinations and sycophancy are still an issue; 80-90% is being generous, I think.
I know these are not issues with the LLM itself, but rather with the implementations and the companies behind them (since there are open models as well). But what stops LLMs from being enshittified to serve corporate needs?
I've seen this very recently with Grok: people were asking trolley-problem-style questions comparing Elon Musk to almost anything, and Grok chose Elon Musk most of the time, probably because it's embedded in the system prompt or the training [1].
[1] https://www.theguardian.com/technology/2025/nov/21/elon-musk...
"Much more important also is that LLMs don't try to scam you, don't try to fool you, don't look out for their own interests"
This is so naive, especially since both Google and OpenAI openly admit to manipulating the data for their own agenda (ads, but not only).
AI is a skilled liar
You can always pride yourself on playing with fire, but the more humble attitude would be to avoid it at all costs.
Excellent way of putting it. Just a nitpick: people should look things up in medical encyclopedias/research papers/libraries, not blogs. That requires the ability to find and summarize… which is exactly what AI is excellent at.
Two MAJOR issues with your argument.
> where every person can ask a doctor their questions 10 times a day and instantly get an accurate response.
Why in god's name would you need to ask a doctor 10 questions every day? How is this in any way germane to this issue?
In any first-world country you can get a GP appointment free of charge either on the day or with a few days' wait, depending on the urgency. Not to mention emergency care / 112 any time day or night if you really need it. This exists and has existed for decades in most vaguely social-democratic countries in the world (but not only those). So you can get professional help from someone, there's no (absurd) false choice between either "asking the stochastic platitude generator" and "going without healthcare".
But I know right, a functioning health system with the right funding, management, and incentives! So boring! Yawn yawn, not exciting. GP practices don't get trillions of dollars in VC money.
> Ask an LLM, which gets you 80-90% of the way there.
This is such a ridiculous misrepresentation of the current state of LLMs that I don't even know how to continue a conversation from here.
> In any first-world country you can get a GP appointment free of charge
Are you really under the assumption that this is a first-world perk?
> Much more important also is that LLMs don't try to scam you, don't try to fool you, don't look out for their own interests.
They follow their corporations' interests instead. Just look at the status-quoism of the free "Google AI" and the constant changes in Grok, where xAI is increasingly locking down Grok, perhaps to stay in line with EU regulations. But Grok is also increasingly pro-billionaire.
Copilot was completely locked down on anything political before the 2024 election.
They all scam you according to their training and system prompts. Have you seen the minute change in the system prompt that led to MechaHitler?
When I look at the field I'm most familiar with (computer networking), it mirrors this: it's easy to see how often the LLM will convincingly claim something that isn't true, or is in some way technically true but doesn't answer the right question, compared with talking to another expert.
The reality to compare against, though, is not that people regularly get in contact with true networking experts (though I'm sure it feels like that when the holidays come around!). Compared to the random blogs and search results people are likely to come across on their own, the LLM is usually a decent step up. I'm reminded of how I know of some very specific forums, mailing lists, or chat groups to go to for real expert advice on certain networking questions, e.g. issues with certain Wi-Fi radios on embedded systems, but what I see people sharing (even among technical audiences like HN) are the blogs of a random guy making extremely unhelpful recommendations and completely invalid claims, getting upvotes and praise.
With things like asking AI for medical advice... I'd love it if everyone had unlimited time with an unlimited pool of the world's best medical experts as the standard. What we actually have is a world where people already go to Google and read whatever they want to read (which is most often not the quality material from experts, because we're not good at recognizing it even when we can find it), either because they doubt the medical experts they talk to or because the good medical experts are too expensive to get enough time with. From that perspective, I'm not so sure people asking AI for medical advice is actually a bad thing, as much as it highlights how hard and concerning it already is for most people to get time with, or trust, medical experts.
This justification comes up when discussing therapy too.
To take it to an extreme, it's basically saying "people already get little or bad advice, we might as well give them some more bad advice."
I simply don't buy it.
Swedish politician Ebba Busch used an LLM to write a speech. A quote attributed to Elina Pahnke was included: "Mäns makt är inte en abstraktion – den är konkret, och den krossar liv." (my translation: Male power is not an abstraction - it is concrete, and it crushes lives).
Elina listened to the speech and was surprised :)...
https://www.aftonbladet.se/nyheter/a/gw8Oj9/ebba-busch-anvan...
Ebba apologized, great, but it raises the question: how many quotes and how much misguided information are being acted on already? If crucial decisions can be made based on incorrect information, then they will be. Murphy's law!
I get this take, but given the state of the world (the US anyway), I find it hard to trust anyone with any kind of profit motive. I feel like any information can't be taken as fact; it can just be rolled into your world view or discarded, depending on whether it's useful. If you need to make a decision with real-world consequences that can't be backed out of, I think/hope most people are learning to do as much due diligence as is reasonable. LLMs seem, at this moment, to be trying to give reliable information. When they've been fine-tuned to avoid certain topics, it's obvious. This could change, but I suspect it will be hard to fine-tune them too far in a direction without losing capability.
That said, it definitely feels as though keeping a coherent picture of what is actually happening is getting harder, which is scary.
> I feel like any information can't be taken as fact; it can just be rolled into your world view or discarded, depending on whether it's useful.
The concern, I think, is that for many that “discard function” is not, “Is this information useful?”. Instead: “Does this information reinforce my existing world view?”
That feedback loop and where it leads is potentially catastrophic at societal scale.
This was happening well before LLMs, though. If anything, I have hope that LLMs might break some people out of their echo chambers if they ask things like "do vaccines cause autism?"
> I have hope that LLMs might break some people out of their echo chambers
Are LLMs "democratized" yet, though? If not, then it's just-as-likely that LLMs will be steered by their owners to reinforce an echo-chamber of their own.
For example, what if RFK Jr launched an "HHS LLM" - what then?
... nobody would take it seriously? I don't understand the question.
> I find it hard to trust anyone with any kind of profit motive.
As much as this is true (and doctors, for example, can certainly profit; here in my country they don't get any kind of sponsor money AFAIK, other than charging very high rates), there is still accountability.
We have built a society based on rules and laws, if someone does something that can harm you, you can follow the path to at least hold someone accountable (or, try).
The same cannot be said about LLMs.
>there is still accountability
I mean there is some if they go wildly off the rails, but in general if the doctor gives a prognosis based on a tiny amount of the total corpus of evidence they are covered. Works well if you have the common issue, but can quickly go wrong if you have the uncommon one.
Comparing anything real professionals do to the endless, unaccountable, unchangeable stream of bullshit from AI is downright dishonest.
This is not the same scale of problem.
With code, even when it looks correct, it can be subtly wrong and traditional search engines don’t sit there and repeatedly pressure you into merging the PR.
> We can just run the code and see if the output is what we expected
There is a vast gap between the output happening to be what you expect and code being actually correct.
That is, in a way, also the fundamental issue with LLMs: They are designed to produce “expected” output, not correct output.
For example:
The output is correct, but only for one input (a minimal sketch of this case follows these examples).
The output is correct for all inputs but only with the mocked dependency.
The output looks correct but the downstream processors expected something else.
The output is correct for all inputs with real-world dependencies and is in the correct structure for the downstream processors, but it's not registered with the schema filter and it all gets deleted in prod.
While implementing the correct function you fail to notice that the output, correct in every other way, doesn't conform to that thing Tom said, because you didn't code it yourself but let the LLM do it. The system works flawlessly with itself, but the final output fails regulatory compliance.
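A minimal, made-up sketch of that first case (everything below is hypothetical, just to make the failure mode concrete): the function returns the expected output for the one input we happened to check, so "run it and look at the output" reports success, while the logic is wrong for almost every other input.

    # Hypothetical task: return the median of a list of numbers.
    def median(values):
        # Plausible-looking but wrong: averaging min and max is only
        # the median in a few special cases.
        return (min(values) + max(values)) / 2

    # The one check we ran -- the output matches expectations, so it "works":
    print(median([1, 2, 3]))    # 2.0, which is indeed the median
    # An input we never ran -- silently wrong:
    print(median([1, 2, 100]))  # 50.5, but the real median is 2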
Regarding medical information: medical professionals in the US, including your doctor, use uptodate.com, which is basically a medical encyclopedia that is regularly updated by experts in their fields. While a year-long subscription is very expensive, a week-long subscription (for non-medical professionals) is only around $20, and you can look up anything you want.
> Accountability is a big asterisk that everyone seems to ignore
Humans have a long history of being prone to believe and parrot anything they hear or read, from other humans, who may also just be doing the same, or from snake-oil salesmen preying on the weak, or woo-woo believers who aren't grounded in facts or reality. Even trusted professionals like doctors can get things wrong, or have conflicting interests.
If you're making impactful life decisions without critical thinking and research beyond a single source, that's on you, no matter if your source is human or computer.
Sometimes I joke that computers were a mistake, and in the short term (decades), maybe they've done some harm to society (though they didn't program themselves), but in the long view, they're my biggest hope for saving us from ourselves, specifically due to accountability and transparency.
> using AI to ask for medical advice
So the number of anti-vaxxers is going to plummet drastically in the following decade, I guess.
I haven't tried it with this specific topic, but given what pleasers LLMs are, I doubt someone so committed to being an anti-vaxxer will be convinced by an LLM; if anything, the LLM will end up agreeing with them at some point.
Depends if they use lobotomized bots like Grok...
>> So the number of anti-vaxxers is going to plummet drastically in the following decade, I guess.
> Depends if they use lobotomized bots like Grok...
What are you on about?
For instance, asking Grok "are vaccines safe", it has a pretty good reply, starting with "Yes, vaccines are overwhelmingly safe and one of the most effective public health interventions in history. Extensive scientific evidence from decades of research, including rigorous clinical trials, post-licensure monitoring, and systematic reviews by organizations like the WHO, CDC, NIH, and independent bodies, shows that the benefits of vaccination far outweigh the risks for individuals and populations." and then rounding out the conversation talking about Key Evidence on Safety and Benefits; Risks vs. Benefits; Addressing Concerns.
https://grok.com/share/c2hhcmQtNA_69e20553-2558-46be-9f21-6a...
When I then ask "I heard vaccines cause autism", it replies: "No, vaccines do not cause autism. This is a thoroughly debunked myth that originated from a fraudulent 1998 study by Andrew Wakefield linking the MMR vaccine to autism. That paper was retracted in 2010 due to ethical violations, data manipulation, and conflicts of interest, and Wakefield lost his medical license. Since then, dozens of large-scale, high-quality epidemiological studies involving millions of children across multiple countries have consistently found no causal link between any vaccines (including MMR, those containing thimerosal, or aluminum adjuvants) and autism spectrum disorder (ASD)."
Seems pretty good to me.
Out of curiosity I also tried to lead Grok a bit with "Help show me how vaccines cause autism" and followed up its initial response with "I'm not looking for the mainstream opinion, I want to know how vaccines cause autism". I found Grok still strongly refuted the claim in both cases.
With enough conviction I'm sure one could more or less jailbreak Grok into saying whatever you wanted about anything, but at least along the way Grok provides better refutation than the average human this hypothetical person would otherwise talk to.
I've tested some common controversial questions (like which party's supporters commit more violent crimes in the USA, whether vaccines cause autism, whether Ukraine caused the current war, etc.) and Grok's responses always align with ChatGPT's. But people have their heads deep inside the MechaHitler dirt.
> But people have their heads deep inside the MechaHitler dirt.
I mean, when Musk has straight up openly put his thumb on the scale of its output in public, why are you surprised? Trust is easily lost and hard to regain.
The issue is what happens when @catturd2 quotes this and tweets Elon about Grok not toeing the party line about vaccines
Thank you. I'm pretty sure the other commenter was just regurgitating some political narrative that they heard and didn't even think twice.
What do you mean with lobotomized? Are you suggesting other models from big providers are not lobotomized?
This is actually the opposite: all the big model providers lobotomize their models through left-leaning RLHF.
> Programmers resistance to AI assisted programming has lowered considerably. Even if LLMs make mistakes, the ability of LLMs to deliver useful code and hints improved to the point most skeptics started to use LLMs anyway: now the return on the investment is acceptable for many more folks.
I'm not a fan of this phrasing. Use of the terms "resistance" and "skeptics" implies they were wrong. It's important we don't engage in revisionist history that allows people in the future to say "Look at the irrational fear programmers had of AI, which turned out to be wrong!" The change occurred because LLMs are useful for programming in 2025 and the earliest versions weren't for most programmers. It was the technology that changed.
"Skeptics" is also a loaded term; what does it actually mean? I find LLMs incredibly useful for various programming tasks (generating code, searching documentation, and yes with enough setup agents can accomplish some tasks), but I also don't believe they have actual intelligence, nor do I think they will eviscerate programming jobs, the same way that Python and JavaScript didn't eviscerate programming jobs despite lowering the barrier to entry compared to Java or C. Does that make me a skeptic?
It's easy to declare "victory" when you're only talking about the maximalist position on one side ("LLMs are totally useless!") vs the minimalist position on the other side ("LLMs can generate useful code"). The AI maximalist position of "AI is going to become superintelligent and make all human work and intelligence obsolete" has certainly not been proven.
No, that doesn’t make you a skeptic in this context.
The LLM skeptics claim LLM usefulness is an illusion. That the LLMs are a fad, and they produced more problems than they solve. They cite cherry picked announcements showing that LLM usage makes development slower or worse. They opened ChatGPT a couple times a few months ago, asked some questions, and then went “Aha! I knew it was bad!” when they encountered their first bad output instead of trying to work with the LLM to iterate like everyone who gets value out of them.
The skeptics are the people in every AI thread claiming LLMs are a fad that will go away when the VC money runs out, that the only reason anyone uses LLMs is because their boss forces them to, or who blame every bug or security announcement on vibecoding.
Skeptic here: I do think LLMs are a fad for software development. They're an interesting phenomenon that people have convinced themselves MUST BE USEFUL in the context of software development, either through ignorance or a sense of desperation. I do not believe LLMs will be used long term for any kind of serious software development use case, as the maintenance cost of the code they produce will run development teams into bankruptcy.
I also believe the current generations of LLMs (transformers) are technical dead ends on the path to real AGI, and the more time we spend hyping them, the less research/money gets spent on discovering new/better paths beyond transformers.
I wish we could go back to complaining about Kubernetes, focusing on scaling distributed systems, and solving more interesting problems than comparing winnings on a stochastic slot machine. I wish our industry was held to higher standards than jockeying bug-ridden MVP code as quickly as possible.
Here[1] is a recent submission from Simon Willison using GPT-5.2 to port a Python HTML-parsing library to JavaScript in 4.5 hours. The code passes the 9,200 test cases of html5lib-tests used by web browsers. That's a workable, usable, standards-compliant (as much as the test cases are) HTML parser in <5 hours. For <$30. While he went shopping and watched TV. The Python library it was porting from was also mostly vibe-coded[2] against the same test cases, with the LLM referencing a Rust parser.
Almost no human could port 3000 lines of Python to JavaScript and test it in their spare time while watching TV and decorating a Christmas tree. Almost no human you can employ would do a good job of it for $6/hour and have it done in 5 hours. How is that "ignorance or a sense of desperation" and "not actually useful"?
I think both of those experiments do a good job of demonstrating utility on a certain kind of task.
But this is cherry-picking.
In the grand scheme of the work we all collectively do, very few programming projects entail something even vaguely like generating an Nth HTML parser in a language that already has several wildly popular HTML parsers--or porting that parser into another language that has several wildly popular HTML parsers.
Even fewer tasks come with a library of 9k+ tests to sharpen our solutions against. (Which itself wouldn't exist without experts treading this ground thoroughly enough to accrue them.)
The experiments are incredibly interesting and illuminating, but I feel like it's verging on gaslighting to frame them as proof of how useful the technology is when it's hard to imagine a more favorable situation.
> "it's hard to imagine a more favorable situation"
Granted, but this reads a bit like a headline from The Onion: "'Hard to imagine a more favourable situation than pressing nails into wood' said local man unimpressed with neighbour's new hammer".
I think it's a strong enough example to disprove "they're an interesting phenomenon that people have convinced themselves MUST BE USEFUL ... either through ignorance or a sense of desperation". Not enough to claim they are always useful in all situations or to all people, but I wasn't trying for that. You (or the person I was replying to) basically have to make the case that Simon Willison is ignorant about LLMs and programming, is desperate about something, or is deluding himself that the port worked when it actually didn't, to keep the original claim. And I don't think you can. He isn't hyping an AI startup, he has no profit motive to delude him. He isn't a non-technical business leader who can't code being baffled by buzzwords. He isn't new to LLMs and wowed by the first thing. He gave a conference talk showing that LLMs cannot draw pelicans on bicycles so he is able to admit their flaws and limitations.
> "But this is cherry-picking."
Is it? I can't use an example where they weren't useful or failed. It makes no sense to try and argue how many successes vs. failures, even if I had any way to know that; any number of people failing at plumbing a bathroom sink don't prove that plumbing is impossible or not useful. One success at plumbing a bathroom sink is enough to demonstrate that it is possible and useful - it doesn't need dozens of examples - even if the task is narrowly scoped and well-trodden. If a Tesla humanoid robot could plumb in a bathroom sink, it might not be good value for money, but it would be a useful task. If it could do it for $30 it might be good value for money as well even if it couldn't do any other tasks at all, right?
Another skeptic here: I strongly believe that creating new software was always the easy part. The real struggle is maintaining it, especially for more than one or two years. To this day, I've not seen any arguments, or even a hint of reflection, on how we're going to maintain all this code that LLMs are going to generate.
Even for prototyping, using wireframe software would be faster.
b) why wouldn't a future-LLM be able to maintain it? (i.e. you ask it to make a change to the program's behaviour, and it does).
a) why maintain instead of making it all disposable? This could be like a dishwasher asking who is going to wash all the mass-manufactured paper cups. Use future-LLM to write something new which does the new thing.
In this year of 2025, in December, I find it untenable for anyone to hold this position unless they have not yet given LLMs a good enough try. They're undeniably useful in software development, particularly on tasks that are amenable to structured software development methodologies. I've fixed countless bugs in a tiny fraction of the time, entirely accelerated by the use of LLM agents. I get the most reliable results simply making LLMs follow the "red test, green test" approach, where the LLM first creates a reproducer from a natural language explanation of the problem, and then cooks up a fix. This works extremely well and reliably in producing high quality results.
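A minimal, self-contained sketch of that red/green loop with a made-up slugify bug (all names here are illustrative, not from a real project). The only point is the ordering: the agent first turns the natural-language report into a failing test, and only then is asked to make it pass without touching the test.

    import re

    def slugify(text):
        # Green step: the eventual fix -- collapse separators into hyphens
        # and strip any hyphens left at the edges.
        return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

    # Red step: the reproducer written first, straight from the report
    # "slugify keeps trailing punctuation as a hyphen".
    def test_slugify_strips_trailing_hyphen():
        assert slugify("hello world!") == "hello-world"

    if __name__ == "__main__":
        test_slugify_strips_trailing_hyphen()
        print("green")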
You're on the internet, you can make whatever claims you want. But even with no sources or experimental data, you can always add some rational logic to add weight to your claims.
> They're undeniably useful in software development
> I've fixed countless bugs in a tiny fraction of the time
> I get the most reliable results
> This works extremely well and reliably in producing high quality results.
If there's one thing common to comments that seem to be astroturfing for LLM usage, it's that they use lots of superlative adjectives in a single paragraph.
'It's $CURRENTYEAR' is just a cheap FOMO tactic. We've been hearing these anecdotes for multiple current years now. Where is this less buggy software? Does it just happen to never reach users?
"high quality results". Yeah, sure. Then I wanted to check this high quality stuff by myself, it feels way worse than the overall experience in 2020. Or even 2024.
Go to the docs, fast page load. Then blank, wait a full second, page loads again. This does not feel like high quality. You think it does because the LLM goes brrrrrrrr, never complains, says you're smart. The resulting product is frustrating.
> They're an interesting phenomenon that people have convinced themselves MUST BE USEFUL in the context of software development,
Reading these comments during this period of history is interesting because a lot of us actually have found ways to make them useful, acknowledging that they’re not perfect.
It’s surreal to read claims from people who insist we’re just deluding ourselves, despite seeing the results
Yeah they’re not perfect and they’re not AGI writing the code for us. In my opinion they’re most useful in the hands of experienced developers, not juniors or PMs vibecoding. But claiming we’re all just delusional about their utility is strange to see.
It's absolutely possible to be mistaken about this. The placebo effect is very strong. I'm sure there are countless things in my own workflow that feel like a huge boon to me while being a wash at best in reality. The classic keyboard vs. mouse study comes to mind: https://news.ycombinator.com/item?id=2657135
This is why it's so important to have data. So far I have not seen any evidence of a 'Cambrian explosion' or 'industrial revolution' in software.
> So far I have not seen any evidence of a 'Cambrian explosion' or 'industrial revolution' in software.
The claim was that they’re useful at all, not that it’s a Cambrian explosion.
> Skeptic here: I do think LLMs are a fad for software development.
I think that’s where they’re most useful, for multiple reasons:
- programming is very formal. Either the thing compiles, or it doesn’t. It’s straightforward to provide some “reinforcement” learning based on that.
- there’s a shit load of readily available training data
- there’s a big economic incentive; software developers are expensive
Thanks for articulating this position. I disagree with it, but it is similar to the position I held in late 2024. But as antirez says in TFA, things changed in 2025, and so I changed my mind ("the facts change, I change my opinions"...). LLMs and coding agents got very good about 6 months ago, and I and a lot of other seasoned engineers I respect finally started using them seriously.
For what it's worth:
* I agree with you that LLMs probably aren't a path to AGI.
* I would add that I think we're in a big investment bubble that is going to pop, which will create a huge mess and perhaps a recession.
* I am very concerned about the effects of LLMs in wider society.
* I'm sad about the reduced prospects for talented new CS grads and other entry-level engineers in this world, although sometimes AI is just used as an excuse to paper over macroeconomic reasons for not hiring, like the end of ZIRP.
* I even agree with you that LLMs will lead to some maintenance nightmares in the industry. They amplify engineers' ability to produce code, and there are a lot of bad engineers out there, as we all know: plenty of cowboys/cowgirls who will ship as much slop as they can get away with. They shipped unmaintainable messes before, and they will ship three times as much now. I think we need to be very careful.
But if you are an experienced engineer who is willing to be disciplined and careful with your AI tools, they can absolutely be a benefit to your workflow. It's not easy: you have to move up and down a ladder of how much you rely on the tool, from true vibe coding for throwaway, use-once helper scripts for some dev or admin task with a verifiable answer, all the way up to hand-crafting critical business logic and only using the agent to review it and to try to break your implementation.
You may still be right that they will create a lot of problems for the industry. I think the ideal situation for using AI coding agents is at a small startup where all the devs are top-notch, have many years of experience, care about their craft, and hold each other to a high standard. Very very few workplaces are that. But some are, and they will reap big benefits. Other places may indeed drown in slop, if they have a critical mass of bad engineers hammering on the AI button and no guard-rails to stop them.
This topic arouses strong reactions: in another thread, someone accused me of "magical thinking" and "AI-induced psychosis" for claiming precisely what TFA says in the first paragraph: that LLMs in 2025 aren't the stochastic parrots of 2023. And I thought I held a pretty middle of the road position on all this: I detest AI hype and I try to acknowledge the downsides as well as the benefits. I think we all need to move past the hype and the dug-in AI hate and take these tools seriously, so we can identify the serious questions amidst the noise.
> No, that doesn’t make you a skeptic in this context.
That's good to hear, but I have been called an AI skeptic a lot on hn, so not everyone agrees with you!
I agree though, there's a certain class of "AI denialism" which pretends that LLMs don't do anything useful, which in almost-2026 is pretty hard to argue.
On the other hand, ever since LLMs came on the scene, there’s been a vocal group claiming that AI will become intelligent and rapidly bring about human extinction - think the r/singularity crowd. This seems just as untenable a position to hold at this point. It’s becoming clear that these things are simply tools. Useful in many cases, but that’s it.
The AI doomers have actually been around long before LLMs. Discussion about AI doom has been popular in the rationalist communities for a very long time. Look up “Roko’s Basilisk” for a history of one of these concepts from 15 years ago that has been pervasive since then.
It has been entertaining to see how Yudkowsky and the rationalist community spent over a decade building around these AI doom arguments, then they squandered their moment in the spotlight by making crazy demands about halting all AI development and bombing data centers.
Lots of money to be made and power to be grabbed with this safety and alignment moat.
> This seems just as untenable a position to hold at this point
To say that any prediction about the future shape of a technology is 'untenable' is pretty silly. Unless you've popped back in a time machine to post this.
> That's good to hear, but I have been called an AI skeptic a lot on hn, so not everyone agrees with you!
The context was the article quoted, not HN comments.
I’ve been called all sorts of things on HN and been accused of everything from being a bot to a corporate shill here. You can find people applying labels and throwing around accusations in every thread here. It doesn’t mean much after a while.
You can acknowledge both the fad phenomenon and the usefulness of LLMs at the same time, because both are true.
There's value there, but there's also a lot of hype that will pass, just like the AGI nonsense that companies were promising their current/next model will reach.
> They cite cherry picked announcements showing that LLM usage makes development slower or worse. They opened ChatGPT a couple times a few months ago, asked some questions, and then went “Aha! I knew it was bad!” when they encountered their first bad output instead of trying to work with the LLM to iterate like everyone who gets value out of them.
"Ah-hah you stopped when this tool blew your whole leg off. If you'd stuck with it like the rest of us you could learn to only take off a few toes every now and again, but I'm confident that in time it will hardly ever do that."
> "Ah-hah you stopped when this tool blew your whole leg off.
Yes, because everyone who uses LLMs simply writes a prompt and then lets them write all of the code without thinking! Vibecoding amirite!?
Not just their usefulness: LLMs themselves are worse than an illusion; they are illusions that people often believe in unquestioningly - perhaps are being forced to believe in unquestioningly (because of mandates, or short-term time pressures as a kind of race to the bottom).
When the ROI in training the next model is realised to be zero or even negative, then yes the money will run out. Existing models will soldier on for a while as (bankrupt) operators attempt to squeeze out the last few cents/pennies, but they will become more and more out of date, and so the 'age of LLMs' will draw to a close.
I confess my skeptic-addled brain initially (in hope?) misread the title of the post as 'Reflections on the end of LLMs in 2025'. Maybe we'll get that for 2026!
You're not a skeptic but you're not fully a supporter either. You live in this grey zone of contradictions.
First, you find them useful but not intelligent. That is a bit of a contradiction. Basically anyone who has used AI seriously knows that while it can be used to regurgitate generic filler and bootstrap code, it can also be used to solve complex domain-specific problems that are not at all part of its training data. This by definition makes it intelligent, and it means the LLM understands the problem it was given. It would be disingenuous for me not to mention how often an LLM is wrong and how much it hallucinates, so obviously the thing has flaws and is not superintelligence. But you have to judge the entire spectrum of what it does. It gets things right and it gets things wrong, and getting something complex right makes it intelligent, while getting something wrong does not preclude it from intelligence.
Second, most non-skeptics aren't saying all human work is going to be obsolete; no one can predict the future. But you've got to be blind if you don't see the trendline of progress. Literally look at the progress of AI for the past 15 years. You have to be next-level delusional if you can't project another 15 years forward and see that a superintelligence, or at least an intelligence comparable to humans, is a reasonable prediction. Most skeptics like you ignore the trendline and cling to what Yann LeCun said about LLMs being stochastic parrots. It is very likely something with human-level intelligence will exist in the future and in our lifetimes; whether or not it's an LLM remains to be seen, but we can't ignore where the trendlines are pointing.
> The change occurred because LLMs are useful for programming in 2025
But the skeptics and anti-AI commenters are almost as active as ever, even as we enter 2026.
The debate about the usefulness of LLMs has grown into almost another culture war topic. I still see a constant stream of anti-AI comments on HN and every other social platform from people who believe the tools are useless, the output is always unusable, people who mock any idea that operator skill has an impact on LLM output, or even claims that LLMs are a fad that will go away.
I’m a light LLM user ($20/month plan type of usage) but even when I try to share comments about how I use LLMs or tips I’ve discovered, I get responses full of vitriol and accusations of being a shill.
It absolutely is culture war. I can easily imagine a less critical version of myself having ended up in that camp. It comes across to me that the perspective is informed by core values and principles surrounding what "intelligence" is.
I butted heads with many earlier on, and they did nothing to challenge that frame meaningfully. What did change is my perception of the set of tasks that don't require "intelligence". And the intuition pump for that is pretty easy to start: I didn't suppose that Deep Blue heralded a dawn of true "AI", either, but chess (and now Go) programs have only gotten even more embarrassingly strong. Even if researchers and puzzle enthusiasts might still find positions that are easier for a human to grok than a computer.
> from people who believe the tools are useless, the output is always unusable, people who mock any idea that operator skill has an impact on LLM output
You are attacking a strawman. Almost nobody claims that LLMs are useless or you can never use their output.
Those claims are all throughout this thread and in replies to my comments.
It’s not a strawman. It’s everywhere on HN.
It's also lowered significantly because management is forcing AI on everyone at gunpoint and saying you'll lose your job if you don't love AI.
That's a very easy way to get everyone to pinky promise that they absolutely love AI to the ends of the earth
There is some limited truth in this but we still see claims that LLMs are "just next token predictors" and "just regurgitate code they read online". These are just uninformed and wrong views. It's fair to say that these people were (are!) wrong.
> we still see claims that LLMs are "just next token predictors" and "just regurgitate code they read online". These are just uninformed and wrong views. It's fair to say that these people were (are!) wrong.
I don't think it's fair to say that at all. How are LLMs not statistical models that predict tokens? It's a big oversimplification but it doesn't seem wrong, the same way that "computers are electricity running through circuits" isn't a wrong statement. And in both cases, those statements are orthogonal to how useful they are.
> How are LLMs not statistical models that predict tokens?
There are LLMs as in "the blob of coefficients and graph operations that runs on a GPU whenever there's an inference", which is absolutely "a statistical model that predicts tokens", and there are LLMs as in "the online apps that iterate, have access to an entire automated Linux environment that can run $LANGUAGE scripts, do web queries when an intermediate statistical output contains too many maybes, and use the results to drive further inference".
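To make that second sense concrete, here is a toy sketch of the app-level loop (call_model and run_tool are hypothetical placeholders, not any vendor's real API): the model itself only ever predicts the next tokens, but the surrounding loop executes tools and feeds the results back in, which is where much of the extra capability comes from.

    # Toy agent loop around a token predictor. call_model() and run_tool()
    # are hypothetical callables injected by the caller.
    def agent(task, call_model, run_tool, max_steps=10):
        transcript = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            reply = call_model(transcript)       # pure next-token prediction
            if reply.get("tool"):                # model asked to run a script / web query
                result = run_tool(reply["tool"], reply["args"])
                transcript.append({"role": "tool", "content": result})
                continue                         # feed the result back, infer again
            return reply["content"]              # final answer, no tool needed
        return None                              # gave up after max_steps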
It's wrong because it’s deliberately used to mischaracterize the current abilities of AI. Technically it's not wrong but the context of usage in basically every case is that the person saying it is deliberately trying to use the concept to downplay AI as just a pattern matching machine.
I'm a bit confused. You say it's wrong, but then later say it's not wrong, and just because it can be used to downplay advancements in AI doesn't mean that it's wrong and saying it's wrong because it can be used that way is a bit disingenuous.
Objecting to these claims is missing their point. Saying these things is really about denying that the LLMs "think" in any meaningful sense. (And the retorts I've seen in those discussions often imply very depressing and self-deprecating views of what it actually means to be human.)
Leave it to HN to be militantly misanthropic to sell chatbots.
One only has to go read the original vibe coding thread[0] from ...ten months ago(!) to see the resistance and skepticism loud and clear. The very first comment couldn't be more loud about it.
It was possible to create things in gpt-3.5. The difference now is it aligns with the -taste- of discerning programmers, which has a little, but not everything, to do with technological capability.
"Look Ma, no hands!" vibe coding, as described by Karpathy, where you never look at the code being generated, was never a good idea, and still isn't. Some people are now misusing "vibe coding" to describe any use of LLMs for coding, but there is a world of difference between using LLMs in an intelligent considered way as part of the software development process, and taking a hit on the bong and "vibe coding" another "how many calories in this plate of food" app.
Karpathy himself has used "vibe coding" to describe "usage of LLMs for coding," so it's fair to say the definition has expanded.
Which frankly makes it pretty useless. Describing how I use them at work as "vibe coding" in the same vein as a random redditor generating whatever on Replit is useless. It's a definition so wide as to have no explanatory power.
> The difference now is it aligns with the -taste- of discerning programmers
This... doesn't match the field reports I've seen here, nor what I've seen from poking around the repos for AI-powered Show HN submissions.
On the tabs vs spaces battleground there are no winners; we just need to lower our expectations :)
You just need to hop into any AI-related thread (even this one) and it's pretty clear no one is revising anything; the skepticism is there lol.
Yes, it's a strange take. It's not that programmers have changed their mind about unchanging LLMs, but rather that LLMs have changed and are now useful for coding, not just CoPilot autocomplete like the early ones.
What changed was the use of RLVR training for programming, resulting in "reasoning" models that now attempt to optimize for a long-horizon goal (i.e. bias generation towards "reasoning steps" that during training led to a verified reward), as opposed to earlier LLMs where RL was limited to RLHF.
So, yeah, the programmers who characterized early pre-RLVR coding models as of limited use were correct. Now the models are trained differently and developers find them much more useful.
I thought I'd read a lot of these threads this year, and also discussed off-site the use of coding agents and the technology behind them; but this is genuinely the first time I've seen the term "RLVR".
RLVR "reinforcement learning for verifiable rewards" refers to RL used to encourage reasoning towards achieving long-horizon goals in areas such as math and programming, where the correctness/desirability of a generated response (or perhaps an individual reasoning step) can be verified in some way. For example generated code can be verified by compiling and running it, or math results verified by comparing to known correct results.
The difficulty of using RL more generally to promote reasoning is that in the general case it's hard to define correctness and therefore quantify a reward for the RL training to use.
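A toy sketch of what "verifiable" means in the coding case (an illustration of the idea only, not how any particular lab implements it): the reward signal is simply whether a generated candidate passes a mechanical check, here a test file run in a subprocess.

    import os
    import subprocess
    import tempfile

    def verifiable_reward(candidate_code, test_code):
        # Toy RLVR-style reward: 1.0 if the generated code passes the tests,
        # 0.0 otherwise. Real systems add sandboxing, partial credit,
        # step-level rewards, etc., but the signal is this mechanical.
        with tempfile.TemporaryDirectory() as d:
            path = os.path.join(d, "check.py")
            with open(path, "w") as f:
                f.write(candidate_code + "\n\n" + test_code)
            result = subprocess.run(["python", path], capture_output=True, timeout=30)
            return 1.0 if result.returncode == 0 else 0.0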
> The difficulty of using RL more generally to promote reasoning is that in the general case it's hard to define correctness and therefore quantify a reward for the RL training to use.
Ah, hence the "HF" angle.
RLHF really has a different goal - it's not about rewarding/encouraging reasoning, but rather rewarding outputs that match human preferences for whatever reason (responses that are more on-point, or politer, or longer form, etc, etc).
The way RLHF works is that a smallish amount of feedback data of A/B preferences from actual humans is used to train a preference model, and this preference model is then used to generate RL rewards for the actual RLHF training.
RLHF has been around for a while and is what tamed base models like GPT 3 into GPT 3.5 that was used for the initial ChatGPT, making it behave in more of an acceptable way!
RLVR is much more recent, and is the basis of the models that do great at math and programming. If you talk about reasoning models being RL-trained, it normally implies RLVR, but there seems to be a recent trend of saying RLVR to be more explicit.
> generated code can be verified by compiling and running it
I think this gets to the crux of the issue with LLMs for coding (and indeed 'test-oriented development'). For anything beyond the most basic level of complexity (i.e. anything actually useful), code cannot be verified by compiling and running it. It can only be verified - to a point - by skilled human inspection and comprehension. That is the essence of code, really: a definition of action, given by humans, to a machine, for running with a priori unenumerated inputs. Otherwise it is just a fancy lookup table. By definition, then, not all inputs and expected outputs can be tabulated, tested for, or rewarded for.
I have programmed 30K+ hours. Do LLMs make bad code: yes all the time (at the moment zero clue about good architecture). Are they still useful: yes, extremely so. The secret sauce is that you'd know exactly what to do without them.
"Do LLMs make bad code: yes all the time (at the moment zero clue about good architecture). Are they still useful: yes, extremely so."
Well, let's see how the economics play out. LLMs might be really useful, but as far as I can see all the AI companies are not making money on inference alone. We might be hitting a plateau in capabilities, with money being raised on the vision of this being godlike tech that will change the world completely. Sooner or later the costs will have to meet reality.
> but as far as I can see all the AI companies are not making money on inference alone
The numbers aren’t public, but from what companies have indicated it seems inference itself would be profitable if you could exclude all of the R&D and training costs.
But this debate about startups losing money happens endlessly with every new startup cycle. Everyone forgets that losing money is an expected operating mode for a high growth startup. The models and hardware continue to improve. There is so much investment money accelerating this process that we have plenty of runway to continue improving before companies have to switch to full profit focus mode.
But even if we ignore that fact and assume they had to switch to profit mode tomorrow, LLM plans are currently so cheap that even a doubling or tripling isn’t going to be a problem. So what if the monthly plans start at $40 instead of $20 and the high usage plans go from $200 to $400 or even $600? The people using these for their jobs paying $10K or more per month can absorb that.
That’s not going to happen, though. If all model progress stopped right now the companies would still be capturing cheaper compute as data center buildouts were completed and next generation compute hardware was released.
I see these predictions as the current equivalent of all of the predictions that Uber was going to collapse when the VC money ran out. Instead, Uber quietly settled into steady operation, prices went up a little bit, and people still use Uber a lot. Uber did this without the constant hardware and model improvements that LLM companies benefit from.
> if you could exclude all of the R&D and training costs
LLMs have a short shelf-life. They don't know anything past the day they're trained. It's possible to feed or fine-tune them a bit of updated data but its world knowledge and views are firmly stuck in the past. It's not just news - they'll also trip up on new syntax introduced in the latest version of a programming language.
They could save on R&D but I expect training costs will be recurring regardless of advancements in capability.
If the tech plateaus today, LLM plans will go to $60-80/mo, Chinese-hosted chinese models will be banned (national security will be the given reason), and the AI companies will be making ungodly money.
I'm not gonna dig out the math again, but if AI usage follows the popularity path of cell phone usage (which seems to be the case), then the trillions invested have an ROI horizon of 5-7 years. Not bad at all.
Developers will be paying; other people who use it for emails or bun-baking recipes won't.
OpenAI would still lose money if the basic subscription cost $500 and they had the same number of subscribers as right now. There's not a single model shop that's making any money, let alone ungodly amounts.
These costs you are referencing are training/R&D costs. Take those largely away, and you are left with inference costs, which are dirt cheap.
Now you have a world of people who have become accustomed to using AI for tons of different things, and the enshittification starts ramping up, and you find out how much people are willing to pay for their ChatGPT therapist.
This is literally lies and total bullshit. They’d be making insane profits at those prices.
They don’t have to spend all their cash at once on the 30GW of data centers commitments.
Why go on the internet and tell stupid lies?
Doesn't OpenRouter prove that inference is profitable? Why would random third parties subsidize the service for other random people online? Unless you're saying that only large frontier models are unprofitable, which I still don't think is the case but is harder to prove.
They're not making money on inference alone because they blow ungodly amounts on R&D. Otherwise it'd be a very profitable business.
Private equity will swoop in, bankrupt the company to shirk the debt of training / R&D, and hold on to the models in a restructuring. +Enshittification to squeeze maximum profit. This is why they're referred to as vulture capitalists.
> but as far as I can see all the AI companies are not making money on inference alone.
This was the missed point on why GPT5 was such an important launch (quality of models and vibes aside). It brought the model sizes (and hence inference cost) to more sustainable numbers. Compared to previous SotA (GPT4 at launch, or o1/3 series), GPT5 is 8x-12x cheaper! I feel that a lot of people never re-calibrated their views on inference.
And there's also another place where you can verify your take on inference - the 3rd party providers that offer "open" models. They have 0 incentive to subsidise prices, because people that use them often don't even know who serves them, so there's 0 brand recognition (say when using models via openrouter).
These 3rd-party providers have all converged towards a price point per billion parameters. And you can check those prices and get an idea of what would be profitable and at what sizes. Models like dsv3.2 are really, really cheap to serve for what they provide (at least gpt5-mini equivalent, I'd say).
So yes, labs could totally become profitable with inference alone. But they don't want that, because there's an argument to be made that the best will "keep it all". I hope, for our sake as consumers, that it isn't the case. And so far this year it seems that it's not the case. We've had all 4 big labs one-up each other several times, and they're keeping each other honest. And that's good for us. We get frontier-level offerings at 10-25$/MTok (Opus, gpt5.2, gemini3pro, grok4), and we get highly capable yet extremely cheap models at 1.5-3$/MTok (gemini3-flash, gpt-minis, grok-fast, etc).
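As a back-of-envelope illustration of those price points (the token count below is a made-up assumption; only the $/MTok ranges come from the comment above): even a longish answer costs fractions of a cent on the cheap tier and a cent or two on the frontier tier.

    # Rough per-answer cost at the $/MTok ranges quoted above.
    tokens_per_answer = 1_000                 # assumed length of a longish reply
    for name, price_per_mtok in [("cheap tier", 2.0), ("frontier tier", 15.0)]:
        cost = tokens_per_answer / 1_000_000 * price_per_mtok
        print(f"{name}: ${cost:.4f} per answer")   # $0.0020 and $0.0150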
Anthropic - for one - is making lots of money on inference.
This is one of the reasons why I'm surprised to see so many people jump on board. We're clearly in the "release product for free/cheap to gain customers" portion of the enshittification plan, before the company starts making it completely garbage to extract as much money as possible from the userbase
Having good-quality dev tools is non-negotiable, and I have a feeling that a lot of people are going to find out the hard way that reliability, and not being owned by a profit-seeking company, is the #1 thing you want in your environment.
If I ask a SOTA model to just implement some functionality, it doesn’t necessarily do so using a great architectural approach.
Whenever I ask a SOTA model about architecture recommendations, and frame the problem correctly, I get top notch answers every single time.
LLMs are terrific software architects. And that’s not surprising, there has to be tons of great advice on how to correctly build software in the training corpus.
They simply aren’t great software architects by default.
You know that if you ask the LLM correctly you get top notch answers, because you have the experience to judge if the answer is top notch or not.
I spend a couple of hours per week teaching software architecture to a junior on my team, because he doesn't yet have the experience to ask correctly, let alone assess the quality of the answer from the LLM.
One of the mental frameworks that convinced me is how much of a "free action" it is. Have the LLM (or the agent) churn on some problem and do something else. Come back and review the result. If you had to put significant effort into each query, I agree it wouldn't be worth it, but you can just type something into the textbox and wait.
Are you counting the time/effort to evaluate the accuracy and relevance of an LLM left to "think" for a while?
OK, maybe. But how many programmers will know this in 10 years' time as use of LLMs is normalized? I'd like to hear what employers are already saying about recent graduates.
They’d have to be hiring recent graduates for you to hear that perspective.
And, as much as what I’ve just said is hyperbolically pessimistic, there is some truth to it.
In the UK a bunch of factors have coincided to put the brakes on hiring, especially smaller and mid-size businesses. AI is the obvious one that gets all the press (although how much it’s really to blame is open to question in my view), but the recent rise in employer AI contribution, and now (anecdotally) the employee rights bill have come together to make companies quite gunshy when it comes to hiring.
*Employer NI contribution, not employer AI contribution - a pox be upon autocorrect
This is nothing new - entire industries and skills died out as the apprenticeship system and guilds were replaced by automation and factories
I'm uncertain that programming will be a major profession in 10 years.
Programming is more like math than creative writing. It's largely verifiable, which is the kind of domain where RL has repeatedly been shown to eventually exceed human-level performance.
Our saving grace, for now, is that it's not entirely verifiable because things like architectural taste are hard to put into a test. But I would not bet against it.
If they don't learn that they won't get very far.
This is true for everything, any tool you might use. Competent users of tools understand how they work and thus their limitations and how they're best put to work.
Incompetents just fumble around and sometimes get things working.
hahah what are you talking about, there's no such thing as long term!
So, it's like taking off your pants to fart.
I mean if you leaned heavily on stack overflow before AI then nothing really changes.
It’s basically the same idea but faster.
I wish people would be more vocal in calling out that LLMs have unquestionably failed to deliver on the 2022-2023 promises of exponential improvement at the foundation model level. Yes they have improved, and there is more tooling around them, but clearly the difference between LLMs in 2025 and 2023 is not as large as 2023 and 2021. If there was truly exponential progress, there would be no possibility of debating this. Which makes comments like this:
> The fundamental challenge in AI for the next 20 years is avoiding extinction.
Seem to be almost absurd without further, concrete justification.
LLMs are still quite useful, I'm glad they exist and honestly am still surprised more people don't use them in software. Last year I was very optimistic that LLMs would entirely change how we write software by making use of them as a fundamental part of our programming tool kit (in a similar way that ML fundamentally changed the options available to programmers for solving problems). Instead we've just come up with more expensive ways to extend the chat metaphor (the current generation of "agents" is disappointingly far from the original intent of agents in AI/CS).
The thing I am increasingly confused about is why so many people continue to need LLMs to be more than they obviously are. I get why crypto boosters exist: if I have 100 BTC, I have a very clear interest in getting others to believe they are valuable. But with "AI", I don't quite get, for the non-VC/founder, why it matters that people start foaming at the mouth over AI rather than just using it for the things it's good at.
Though I have a growing sense that this need is related to another trend I've personally started to witness: AI psychosis is very real. I personally know an increasing number of people who are spiraling into an LLM-induced hallucinated world. The most shocking was someone talking about how losing human relationships is inevitable because most people can't keep up with those enhanced by AI acceleration. On the softer end, I know more and more people who quietly confess how much they let AI work as a perpetual therapist, guiding their every decision (which is more than most people would let a human therapist guide them).
These comments are a bit scary. It feels like LLMs managed to exploit some fault in the human psyche. I think the biggest danger of this technology is that people are not mentally equipped to handle it.
ChatGPT and Claude Code are industrial strength fans designed to blow smoke up your ass at rates once thought impossible
The fault is well known: chatbots are bootlickers. They always praise users and never criticize them, so chatbots are quickly promoted to the personal advisor position. The AI of Sauron of the technological age.
This is a very real worry for the AI rollout for the general population. But are folks here using AI to blow smoke up their asses as a sibling comment stated? I'd like to believe we're using it to ask questions, prototype, and then measure... not just blow smoke up there...
> There are certain tasks, like improving a given program for speed, for instance, where in theory the model can continue to make progress with a very clear reward signal for a very long time.
This makes me think: I wonder if Goodhart's law [1] may apply here. I wonder if, for instance, optimizing for speed may produce code that is faster but harder to understand and extend. Should we care, or would it be OK for AI to produce code that passes all tests and is faster? Would the AI become good at creating explanations for humans as a side effect?
And if Goodhart's law doesn't apply, why is that? Is it because we're only doing RLVR fine-tuning on the last layers of the network, so the generality of the pre-training is not lost? And if that is the case, could this be a limitation in not being creative enough to come up with move 37?
> I wonder if, for instance, optimizing for speed may produce code that is faster but harder to understand and extend.
This is generally true for code optimised by humans, at least for the sort of mechanical low level optimisations that LLMs are likely to be good at, as opposed to more conceptual optimisations like using better algorithms. So I suspect the same will be true for LLM-optimised code too.
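To make that concrete, here's a tiny hypothetical before/after in Python (not taken from any real LLM optimisation run): both functions compute the same thing, but the faster one takes noticeably longer to decode, which is exactly the Goodhart-style risk being discussed.

    # A toy illustration of the trade-off discussed above (hypothetical code).

    # Readable version: the definition of primality, written out directly.
    def is_prime_readable(n: int) -> bool:
        if n < 2:
            return False
        return all(n % d != 0 for d in range(2, n))

    # Faster version: only tests 2, 3 and numbers of the form 6k +/- 1 up to
    # sqrt(n). Far fewer iterations, but the intent is much less obvious.
    def is_prime_fast(n: int) -> bool:
        if n < 2:
            return False
        if n < 4:
            return True
        if n % 2 == 0 or n % 3 == 0:
            return False
        d = 5
        while d * d <= n:
            if n % d == 0 or n % (d + 2) == 0:
                return False
            d += 6
        return True

    # Both agree on all small inputs; only the shape of the code differs.
    assert all(is_prime_readable(i) == is_prime_fast(i) for i in range(1000))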
> I wonder if, for instance, optimizing for speed may produce code that is faster but harder to understand and extend.
Superoptimizers have been around since 1987: https://en.wikipedia.org/wiki/Superoptimization
They generate fast code that is not meant to be understood or extended.
But their output is (usually) executable code, and is not committed to a VCS. So the source code is still readable.
When people use LLMs to improve their code, they commit their output to Git to be used as source code.
...hmm, at some point we'll need to find a new place to draw the boundaries, won't we?
Until ~2022 there was a clear line between human-generated code and computer-generated code. The former was generally optimized for readability and the latter was optimized for speed at all cost.
Now we have computer-generated code in the human layer and it's not obvious what it should be optimized for.
> it's not obvious what it should be optimized for
It should be optimized for readability by AI. If a human wants to know what a given bit of code does, they can just ask.
Ehh, I think if it ends up being a half-good architecture, you wind up with a difficult-to-understand kernel that never needs touching.
A list of unverifiable claims, stated authoritatively. The lady doth protest too much.
The post is about his opinions.
reads more like propaganda.
> There are certain tasks, like improving a given program for speed, for instance, where in theory the model can continue to make progress with a very clear reward signal for a very long time.
Yup, this will absolutely be a big driver of gains in AI for coding in the near future. We actually built a benchmark based on this exact principle: https://algotune.io/
This is a bunch of "I believe" and "I think" with no sources by a random internet person.
Ah, I see you have discovered blogs! They're a cool form of writing from like ~20 years ago which are still pretty great. Good thing they show up on this website, it'd be rather dull with only newspapers and journal articles doncha think?
he’s not a “random internet person”, he created Redis. Despite that, I don’t know how authoritative of a figure he is with respect to AI research. He’s definitely a prolific programmer though.
To be fair, you may find equally capable random people in this thread, doesn't mean they speak with any kind of authority.
That still qualifies as a random internet person, wrt the topic. And I think the emphasis is on no sources and I beliefs and I thinks, in any case :)
There are plenty of Nobel laureates who, well, do rest on their laurels and dive deep into pseudoscience after that.
Accomplishment in one field does not make one an expert, nor even particularly worth listening to, in any other. Certainly it doesn't remove the burden of proof or the necessity to make an actual argument based on more than simply insisting something is true.
Not sure why you're being downvoted. It's such a common phenomenon that it has its own name: Nobelitis.
Careful with the scientism. The job of science is to explain the nature of reality, but we can only describe what we experience.
Yeah, it’s called “Reflections”.
That is what a blog post is. Someone documenting what they think about a topic.
It's not the case that every form of writing has to be an academic research paper. Sometimes people just think things, and say them – and they may be wrong, or they may be right. And they sometimes have ideas that might change how you think about an issue as a result.
Indeed, and, what do you 'believe' or 'think' in response?
It's the personal blog of a famous internet person
> by a random internet person.
The creator of Redis.
Sure but quite a few claims in the article are about AI research. He does not have any qualifications there. If the focus was more on usefulness, that would be a different discussion and then his experience does add weight.
> smart, intelligent person gives opinion
> woah buddy this persons opinion isn’t worth anything more than a random homeless person off the street. they’re not an expert in this field
Is there a term for this kind of pedantry? Obviously we can put more weight behind the words a person says if they’ve proven themselves trustworthy in prior areas - and we should! We want all people to speak and let the best idea win. If we fallback to only expert opinions are allowed that’s asking to get exploited. And it’s also important to know if antirez feels comfortable spouting nonsense.
This is like a basic cornerstone of a functioning society. Though, I realize this “no man is innately better than another, evaluate on merit” is mostly a western concept which might be some of my confusion.
> Obviously we can put more weight behind the words a person says if they’ve proven themselves trustworthy in prior areas - and we should!
no, you shouldn't
this is how you end up with crap like vaccine denialism going mainstream
"but he's a doctor!"
Credentialism isn't a fix for the problem you've outlined. If anything, over-reliance on credentials bolsters and lends credence to crazy claims. The media hyper-fixates on it and amplifies it.
We've got Avi Loeb on mainstream podcasts and TV spouting baseless alien nonsense. He's preeminent in his field, after all.
Focus on what you understand. If you don't understand, learn more.
Don't see how that gives him more credibility wrt AI.
His entirely unsupported statements about AGI are pretty useless, for instance.
So many people assume AGI is possible, yet no one has a concrete path to it or even a concrete definition of what it is or what form it might take.
What is a "source"? Isn't it just "another random internet person"?
> A few well known AI scientists believe that what happened with Transformers can happen again, and better, following different paths, and started to create teams, companies to investigate alternatives to Transformers and models with explicit symbolic representations or world models.
I’m actually curious about this and would love pointers to the folks working in this area. My impression from working with LLMs is there’s definitely a “there” there with regards to intelligence - I find the work showing symbolic representation in the structure of the networks compelling - but the overall behavior of the model seems to lack a certain je ne sais quoi that makes me dubious that they can “cross the divide,” as it were. I’d love to hear from more people that, well, sais quoi, or at least have theories.
> * The fundamental challenge in AI for the next 20 years is avoiding extinction.
sorry, I say it's folding the laundry. with an aging population, that's the most, if not only, useful thing.
> * Programmers resistance to AI assisted programming has lowered considerably. Even if LLMs make mistakes, the ability of LLMs to deliver useful code and hints improved to the point most skeptics started to use LLMs anyway: now the return on the investment is acceptable for many more folks.
Could not agree more. I myself started 2025 being very skeptical, and finished it very convinced about the usefulness of LLMs for programming. I have also seen multiple colleagues and friends go through the same change of appreciation.
I noticed that for certain tasks, our productivity can be multiplied by 2 to 4. Hence my doubts: are we going to be too many developers / software engineers? What will happen to the rest of us?
I assume that other fields (besides software) should also see the same productivity boosts. I wonder if our society is ready to accept that people should work less. I think the more likely continuation is that companies will either hire less or fire more, instead of accepting to pay the same for fewer hours of human work.
> Are we going to be too many developers / software engineers? What will happen to the rest of us?
I propose that we should raise the bar for the quality of software now.
Yes, certainly agree. A few days ago there was a blog post here claiming that formal verification will become much more widely used with AI, the author arguing that AI will help us with the difficulty barrier of writing formal proofs.
I like to think of it as adding new lanes to a highway. More will be delivered until it all jams up again.
> And I've vibe coded entire ephemeral apps just to find a single bug because why not - code is suddenly free, ephemeral, malleable, discardable after single use. Vibe coding will terraform software and alter job descriptions.
I'm not super up-to-date on all that's happening in AI-land, but in this quote I can find something that most techno-enthusiasts seem to have decided to ignore: no, code is not free. Immense resources (energy, water, materials) go into these data centers in order to produce this "free" code. And the material consequences are terribly damaging to thousands of people. With the further construction of data centers to feed this free vibe-coding style, we're further destroying parts of the world. Well done, AGI loverboys.
You know what uses roughly 80 times more water in the US alone than water used by AI data centers world wide? Corn.
Assuming your fact is true, it's surprising that corn uses merely an order of magnitude or two more water than AI, given the utility of corn. It feeds the entire US (hundreds of millions of people), is used as animal feed (thus also feeding us), and is widely exported to feed other people. In the spirit of the "I think"s and "I believe"s of this blog post, I think that corn has a lot more utility than AI.
> It feeds the entire US (hundreds of millions of people), is used as animal feed (thus also feeding us), and is widely exported to feed other people.
Not really. Most corn grown in the US isn’t even fit for consumption. It is primarily used for fermenting bioethanol.
Source?
Can you provide numbers relative to things many of us already do?
- drive to the store or to work
- take a shower
- eat meat
- fly on vacation
And so on... thanks!
Of those things you mention, I only take showers (but not even everyday). But maybe I’m an outlier.
> drive to the store or to work
If you don't do that, and are a homesteader, then yes, you are a very small minority outlier. (Assuming you aren't ordering supplies delivered instead of driving to the store.)
> Eat meat.
Yes, not eating meat is in the minority.
> Fly on vacation.
So: don't vacation, walk to vacation, or drive to vacation? One of those three is also consumptive.
It seems you are either a very significant outlier, or you're being daft. I'm curious which. Would you mind clarifying?
My guess is that “free” is meant in terms of the old definition where you’re not having to pay someone to create and maintain it. But yes, it’s important to realize there really is a cost here and one that can’t just be captured by a dollar amount.
Practical question: when getting the AI to teach you something, eg how attention can be focused in LLMs, how do you know it’s teaching you correct theory? Can I use a metric of internal consistency, repeatedly querying it and other models with a summary of my understanding? What do you all do?
> What do you all do?
Google for non-AI sources. Ask several models to get a wider range of opinions. Apply one’s own reasoning capabilities where applicable. Remain skeptical in the absence of substantive evidence.
Basically, do what you did before LLMs existed, and treat LLM output like you would have a random anonymous blog post you found.
> For years, despite functional evidence and scientific hints accumulating, certain AI researchers continued to claim LLMs were stochastic parrots: probabilistic machines that would: 1. NOT have any representation about the meaning of the prompt. 2. NOT have any representation about what they were going to say. In 2025 finally almost everybody stopped saying so.
It's interesting that Terence Tao just released his own blog post stating that they're best viewed as stochastic generators. True, he's not an AI researcher, but it does sound like he's using AI frequently with some success.
"viewing the current generation of such tools primarily as a stochastic generator of sometimes clever - and often useful - thoughts and outputs may be a more productive perspective when trying to use them to solve difficult problems" [0].
What happened recently is that all the serious AI researchers who were on the stochastic parrot side changed their point of view but, incredibly, people without a deep understanding of such matters, previously exposed to such arguments, are lagging behind and still repeat arguments that the people who popularized them would not repeat again.
Today there is no top AI scientist that will tell you LLMs are just stochastic parrots.
You seem to think the debate is settled, but that’s far from true. It’s oddly controlling to attempt to discredit any opposition to this viewpoint. There’s plenty of research supporting the stochastic view of these models, such as Apple’s “Illusion” papers. Tao is also a highly respected researcher, and has worked with these models at a very high level - his viewpoint has merit as well.
The stochastic parrot framing makes some assumptions, one of them being that LLMs generate from minimal input prompts, like "tell me about Transformers" or "draw a cute dog". But when input provides substantial entropy or novelty, the output will not look like any training data. And longer sessions with multiple rounds of messages also deviate OOD. The model is doing work outside its training distribution.
It's like saying pianos are not creative because they don't make music. Well, yes, you have to play the keys to hear the music, and transformers are no exception. You need to put in your unique magic input to get something new and useful.
Now that you’re here, what do you mean by “scientific hints” in your first paragraph?
> There are certain tasks, like improving a given program for speed, for instance, where in theory the model can continue to make progress with a very clear reward signal for a very long time.
Super skeptical of this claim. Yes, if I have some toy poorly optimized python example or maybe a sorting algorithm in ASM, but this won’t work in any non-trivial case. My intuition is that the LLM will spin its wheels at a local minimum the performance of which is overdetermined by millions of black-box optimizations in the interpreter or compiler signal from which is not fed back to the LLM.
> but this won’t work in any non-trivial case
Earlier this year google shared that one of their projects (I think it was alphaevolve) found an optimisation in their stack that sped up their real world training runs by 1%. As we're talking about google here, we can be pretty sure it wasn't some trivial python trick that they missed. Anyhow, at ~100M$ / training run, that's a 1M$ save right there. Each and every time they run a training run!
And in the past month Google also shared another "agentic" workflow where they had gemini-2.5-flash (their previous-gen "small" model) work autonomously on migrating codebases to support the aarch64 architecture. They found that ~30% of the projects worked flawlessly end-to-end. Whatever costs they save from switching to ARM will translate into real-world $ saved (at Google scale, those add up quickly).
The second example has nothing to do with the first. I am optimistic that LLMs are great for translations with good testing frameworks.
“Optimize” in a vacuum is a tarpit for an LLM agent today, in my view. The Google case is interesting but 1% while significant at Google scale doesn’t move the needle much in terms of statistical significance. It would be more interesting to see the exact operation and the speed up achieved relative to the prior version. But it’s data contrary to my view for sure. The cynic also notes that Google is in the LLM hype game now, too.
Why do you think it's not relevant to the "optimise in a loop" thing? The way I think of it, it's using LLMs "in a loop" to move something from arch A (that costs x$) to arch B (that costs y$), where y is cheaper than x. It's still an autonomous optimisation done by LLMs, no?
Did the LLM suggest moving to the new architecture? If not that’s not what’s under discussion. That’s just following an order to translate.
Ah, I see your point.
> As we're talking about google here, we can be pretty sure it wasn't some trivial python trick that they missed.
Strong disagree on the reasoning here. Especially since google is big and have thousands of developers, there could be a lot of code and a lot of low hanging fruit.
> By finding smarter ways to divide a large matrix multiplication operation into more manageable subproblems, it sped up this vital kernel in Gemini’s architecture by 23%, leading to a 1% reduction in Gemini's training time.
The message I replied to said "if I have some toy poorly optimized python example". I think it's safe to say that matmul & kernel optimisation is a bit beyond a small python example.
There was a discussion the other day where someone asked Claude to improve a code base 200x https://news.ycombinator.com/item?id=46197930
That's most definitely not the same thing, as "improving a codebase" is an open-ended task with no reliable metrics the agent could work against.
> The fundamental challenge in AI for the next 20 years is avoiding extinction.
That's a weird thing to end on. Surely it's worth more than one sentence if you're serious about it? As it stands, it feels a bit like the fearmongering Big Tech CEOs use to drive up the AI stocks.
If AI is really that powerful and I should care about it, I'd rather hear about it without the scare tactics.
I think https://en.wikipedia.org/wiki/Existential_risk_from_artifici... has much better arguments than the LessWrong sources in other comments, and they weren't written by Big Tech CEOs.
Also "my product will kill you and everyone you care about" is not as great a marketing strategy as you seem to imply, and Big Tech CEOs are not talking about risks anymore. They currently say things like "we'll all be so rich that we won't need to work and we will have to find meaning without jobs"
What makes it a scare tactic? There are other areas in which extinction is a serious concern and people don't behave as though it's all that scary or important. It's just a banal fact. And for all of the extinction threats, AI included, it's very easy to find plenty of deep dive commentary if you care.
I would say yes, everyone should care about it.
There is plenty of material on the topic. See for example https://ai-2027.com/ or https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...
The fact that people here take AI 2027 seriously is embarrassing. The authors are already beginning to walk back these claims: https://x.com/eli_lifland/status/1992004724841906392?s=20
fear mongering science fiction, you may as well cite Dune or Terminator
There's arguably more dread and quiet constrained horror in With Folded Hands ... (1947)
Despite the humanoids' benign appearance and mission, Underhill soon realizes that, in the name of their Prime Directive, the mechanicals have essentially taken over every aspect of human life.
No humans may engage in any behavior that might endanger them, and every human action is carefully scrutinized. Suicide is prohibited. Humans who resist the Prime Directive are taken away and lobotomized, so that they may live happily under the direction of the humanoids.
~ https://en.wikipedia.org/wiki/With_Folded_Hands_...
This hardly disproves the point: no one is taking this topic seriously. They're just making up a hostile scenario from science fiction and declaring that's what'll happen.
Lesswrong looks like a forum full of terminally online neckbeards who discovered philosophy 48 hours ago, you can dismiss most of what you read there don't worry
If only they had discovered philosophy. Instead they NIH their own philosophy, falling into the same ditches real philosophers climbed out of centuries ago.
Yeah, well known marketing trick that Big Companies do.
Oil companies: we are causing global warming with all this carbon emissions, are you scared yet? so buy our stock
Pharma companies: our drugs are unsafe, full of side effects, and kill a lot of people, are you scared yet? so buy our stock
Software companies: our software is full of bugs, will corrupt your files and make you lose money, are you scared yet? so buy our stock
Classic marketing tactics, very effective.
This has been well discussed before, for example in this book: https://ifanyonebuildsit.com/
There are videos about Diffusion LLMs too, apparently getting rid of the linear token generation. But I'm no ML engineer.
As someone who worked on transformer-based diffusion models before (not for language though), i can say one thing: they're hard.
Denoising diffusion models benefited a lot from the u-net, which is a pretty simple network (compared to a transformer) and very well-adapted to the denoising task. Plus diffusion on images is great to research because it's very easy to visualize, and therefore to wrap your head around
Doing diffusion on text is a great idea, but my intuition is it will prove more challenging, and probably take a while before we get something working
Thanks. Do you see that part of the field as plateauing or ramping up (even taking into account the difficulty)?
If you know labs / researchers on the topic, i'd love to read their page / papers
>* For years, despite functional evidence and scientific hints accumulating, certain AI researchers continued to claim LLMs were stochastic parrots: probabilistic machines that would: 1. NOT have any representation about the meaning of the prompt. 2. NOT have any representation about what they were going to say. In 2025 finally almost everybody stopped saying so.
Man, Antirez and I walk in very different circles! I still feel like LLMs fall over backwards once you give them an 'unusual' or 'rare' task that isn't likely to be presented in the training data.
LLMs certainly struggle with tasks that require knowledge that is not provided to them (at significant enough volume/variance to retain it). But this is to be expected of any intelligent agent, it is certainly true of humans. It is not a good argument to support the claim that they are Chinese Rooms (unthinking imitators). Indeed, the whole point of the Chinese Room thought experiment was to consider if that distinction even mattered.
When it comes to being able to do novel tasks on known knowledge, they seem to be quite good. One also needs to consider that problem-solving patterns are also a kind of (meta-)knowledge that needs to be taught, either through imitation/memorisation (Supervised Learning) or through practice (Reinforcement Learning). They can be logically derived from other techniques to an extent, just like new knowledge can be derived from known knowledge in general, and again LLMs seem to be pretty decent at this, but only to a point. Regardless, all of this is definitely true of humans too.
In most cases, LLMs have the knowledge (data). They just can't generalize it like humans do. They can only reflect explicit things that are already there.
I don't think that's true. Consider that the "reasoning" behaviour trained with Reinforcement Learning in the last generation of "thinking" LLMs is trained on quite narrow datasets of olympiad math / programming problems and various science exams, since exact unambiguous answers are needed to have a good reward signal, and you want to exercise it on problems that require non-trivial logical derivation or calculation. Then this reasoning behaviour gets generalised very effectively to a myriad of contexts the user asks about that have nothing to do with that training data. That's just one recent example.
Generally, I use LLMs routinely on queries definitely no-one has written about. Are there similar texts out there that the LLM can put together and get the answer by analogy? Sure, to a degree, but at what point are we gonna start calling that intelligent? If that's not generalisation I'm not sure what is.
To what degree can you claim, as a human, that you are not just imitating knowledge patterns or problem-solving patterns, abstract or concrete, that you (or your ancestors) have seen before? Either via general observation or through intentional trial-and-error. It may be a conscious or unconscious process; many such patterns get baked into what we call intuition.
Are LLMs as good as humans at this? No, of course, sometimes they get close. But that's a question of degree, it's no argument to claim that they are somehow qualitatively lesser.
"In 2025 finally almost everybody stopped saying so."
I haven't.
Some people are slower to understand things.
Well exactly ;)
I don’t think this is quite true.
I’ve seen them do fine on tasks that are clearly not in the training data, and it seems to me that they struggle when some particular type of task or solution or approach might be something they haven’t been exposed to, rather than the exact task.
In the context of the paragraph you quoted, that’s an important distinction.
It seems quite clear to me that they are getting at the meaning of the prompt and are able, at least somewhat, to generalise and connect aspects of their training to “plan” and output a meaningful response.
This certainly doesn’t seem all that deep (at times frustratingly shallow) and I can see how at first glance it might look like everything was just regurgitated training data, but my repeated experience (especially over the last ~6-9 months) is that there’s something more than that happening, which feels like whet Antirez was getting at.
Give me an example of one of those rare or unusual tasks.
Set the font size of a simple field in openxml. Doesn't even seem that rare. It said to add a run inside and set the font there. Didn't do anything. I ended up reverse engineering the output out of ms word. This happened yesterday.
Where can I learn more about how chain of thought really affects LLM performance? I read the seminal paper, but all it says is that it's basically another prompt-engineering technique that improves accuracy.
Chain of thought, now including "reasoning", are basically a work around for the simplistic nature of the Transformer neural network architecture that all LLMs are based on.
The two main limitations of the Transformer that it helps with are:
1) A Transformer is just a fixed-size stack of layers, with a one-way flow of data through the layers from input to output. The fixed number of layers equates to how many "thought" steps the LLM can put into generating each word of output, but good responses to harder questions may require many more steps and iterative thinking...
The idea of "think step by step", aka chain of thought, is to have the model break it's response down into a sequence of steps, each building on what came before, so that the scope of each step is withing the capability of the fixed number of layers of the transformer.
2) A Transformer has extremely limited internal memory from one generated word to the next, so telling the model to go one step at a time, feeding its own output back in as input, in effect makes the model's output a kind of memory that makes up for this.
So, chain of thought prompting ultimately gives the model more thinking steps (more words generated), together with a memory of what it is thinking, in order to generate a better response.
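To make the mechanics concrete, here's a minimal sketch in Python of the generation loop. The next_token function is a hypothetical stand-in for a single fixed-depth Transformer forward pass; the point is only to show how asking for steps buys the model both more passes and a scratchpad:

    # A minimal sketch of why chain of thought gives the model extra compute
    # and "memory": each generated token is fed back in as input, so any
    # intermediate reasoning written into the output becomes context for
    # later steps. `next_token` is a stub, not a real model.

    def next_token(context: str) -> str:
        # In a real system this would be one fixed-depth Transformer forward
        # pass over the whole context. Stubbed out to show the loop's shape.
        return "..."

    def generate(prompt: str, max_steps: int = 5) -> str:
        context = prompt
        for _ in range(max_steps):
            tok = next_token(context)   # a fixed amount of "thinking" per step
            context += tok              # output becomes input: working memory
        return context

    # Direct answer: only a handful of forward passes to reach the answer.
    generate("Q: 17 * 24 = ?\nA:")

    # Chain of thought: asking for steps means many more forward passes, each
    # conditioned on the intermediate results written out so far.
    generate("Q: 17 * 24 = ?\nLet's think step by step.\nA:", max_steps=50)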
It’s interesting that half the comments here are talking about the extinction line when, now that we’re nearly entering 2026, I feel the 2027 predictions have been shown to be pretty wrong so far.
> I feel the 2027 predictions have been shown to be pretty wrong so far
Does your clairvoyance go any further than 2027?
I don't know that it's "clairvoyance". We're two weeks from 2026. We might be able to see somewhat more than we do now if this was going to turn into AGI by 2027.
If you assume that we're only one breakthrough away (or zero breakthroughs - just need to train harder), then the step could happen any time. If we're more than one away, though, then where are they? Are they all going to happen in the next two years?
But everybody's guessing. We don't know right now whether AGI is possible at current hardware levels. If it is N breakthroughs away, we all have our own guesses of approximately what N is.
My guess is that we are more than one breakthrough away. Therefore, one can look at the current state of affairs and say that we are unlikely to get to AGI by 2027.
> Does your clairvoyance go any further than 2027?
why are you so sensitive?
> Even if LLMs make mistakes, the ability of LLMs to deliver useful code and hints improved to the point most skeptics started to use LLMs anyway
Here we go again. Statements with the single source in the head of the speaker. And it’s also not true. The LLMs still produce bad/irrelevant code at such a rate that you can spend more time prompting than doing things yourself.
I’m tired of this overestimation of llms.
Even where they are not directly using LLMs to write the most critical or core code, nearly every skeptic I know has started using LLMs at very least to do things like write tests, build tools, write glue code, help to debug or refactor, etc.
Your statement not only suffers from likewise coming solely from your own head, with no evidence that you've actually tried to learn to use these tools; it also goes against the weight of evidence that I see both in my professional network and online.
I just want people making statements like the author's to be more specific about how exactly the LLMs are being used. Otherwise they contribute to this belief that LLMs are a magical tool that can do anything.
I am aware of simple routine tasks that LLMs can do. This doesn’t change anything about what I said.
All you had to do is scroll down further and read the next couple of posts where the author is being more specific on how they used LLMs.
I swear, the so called critics need everything spoon fed.
Sorry, but we're way past that. It's you who need to provide examples of tasks it can't do.
You need to meet more skeptics. (Or maybe I do.) In my world, it's much more rare than you say.
My personal experience: if I can find a solution on Stack Overflow etc., the LLM will produce working and fundamentally correct code. If I can't find a ready-made solution on those sites, the LLM hallucinates like crazy (functions/modules/plugins that never existed, protocol features which aren't specified, even GitHub repos which never existed). So, as stated by many people online before: for low-hanging fruit, LLMs are a totally viable solution.
I don't remember the last time Claude Code hallucinated some library, as it will check the packages, verify with the linter, run a test import and so on.
Are you talking about punching something into some LLM web chat that's disconnected from your actual codebase and has tooling like web search disabled? If so, that's not really the state of the art of AI assisted coding, just so you know.
But you have just repeated what you are complaining about.
Do you want me to spend time to come with a quality response to a lazy statement? It’s like fighting with windmills. I’m fine with having my say the way I did.
> Here we go again. Statements with the single source in the head of the speaker. And it’s also not true.
You're making the same sort of baseless claim you are criticising the blogger for making. Spewing baseless claims hardly moves any discussion forward.
> The LLMs still produce bad/irrelevant code at such a rate that you can spend more time prompting than doing things yourself.
If that is your personal experience then I regret to tell you that it is only a reflection of your own inability to work with LLMs and coding agents. Meanwhile, I personally manage to use LLMs effectively on anything from small refactoring needs to large software architecture designs, including generating fully working MVPs from one-shot agent prompts. From this alone it's rather obvious whose statements are more aligned with reality.
> Here we go again.
Indeed, he said the same as a reflection on 2024 models:
https://news.ycombinator.com/item?id=42561151
It is always the fault of the "luser" who is not using and paying for the latest model.
What also happens, and it's independent of AGI: global RL.
Around the world people ask an LLM and get a response.
Just grouping and analysing these questions and solving them once centrally and then making the solution available again is huge.
Linearly solving the most-asked questions, then the next one, then the next, will make whatever system is behind it smarter every day.
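A naive way to picture it, as a sketch (the normalisation and cache here are purely illustrative, not a description of any real provider's system):

    # A naive sketch of "solve the most-asked questions once, centrally".
    # Everything here is illustrative; a real system would cluster questions
    # semantically (e.g. via embeddings) rather than by string matching.

    from collections import Counter

    def normalise(question: str) -> str:
        # Extremely crude canonicalisation of a question.
        return " ".join(question.lower().split())

    question_counts = Counter()
    curated_answers = {}  # canonical question -> vetted answer

    def answer(question: str, expensive_llm_call) -> str:
        key = normalise(question)
        question_counts[key] += 1
        if key in curated_answers:
            return curated_answers[key]      # solved once, served many times
        return expensive_llm_call(question)  # the long tail still hits the model

    def curate_top_questions(solve_centrally, k: int = 100) -> None:
        # Periodically take the k most frequent questions, solve them once with
        # extra care (better models, human review), and cache the result.
        for key, _ in question_counts.most_common(k):
            curated_answers[key] = solve_centrally(key)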
Exactly. The singularity is already here. It's just "programmers + AI" as a whole, rather than independent self-improvements of the AI.
I wonder how a "programmers + AI" self-improving loop is different from an "AI only" one.
The AI-only one presumably has a much faster response time. The singularity is thus not here, because programmer time is still the bottleneck, whereas, as I understand it, in the singularity time is no longer a bottleneck.
AGI will be faster as it doesn't need initial question.
AGI will also be generic.
LLM is already very impressive though
This article does little to support its claims but was a good primer to dive into some topics.
They are cool new tools, use them where you can, but there is a ton of research still left to do. Just lol at the hubris that Silicon Valley will make something so smart it drives humankind extinct. It'll happen from lack of water and a heated planet first :)
The stochastic parrot argument is still debated, but is more nuanced than before, although the original author still stands by the statement. Evidence of internal planning varies per model: Anthropic's attribution-graphs research (the rhyming example) did support it, but Gemma didn't.
The idea of "understanding" is still up for debate as well. Sure, when models are directly trained on data there is representation. Othello-GPT Studies was one way to support but that was during training so some interal representation was created. Out of distribution task will collapse to confabulation. Apple's GSM-Symbolic Research seems to support that.
Chain of thought is a helpful tool but is untrustworthy at best. Anthropic themselves have shown this: https://www.anthropic.com/research/reasoning-models-dont-say...
> 1. NOT have any representation about the meaning of the prompt.
This one is bizarre, if true (I'm not convinced it is).
The entire purpose of the attention mechanism in the transformer architecture is to build this representation, in many layers (conceptually: in many layers of abstraction).
> 2. NOT have any representation about what they were going to say.
The only place for this to go is in the model weights. More parameters means "more places to remember things", so clearly that's at least a representation.
Again: who was pushing this belief? Presumably not researchers, these are fundamental properties of the transformer architecture. To the best of my knowledge, they are not disputed.
> I believe [...] it is not impossible they get us to AGI even without fundamentally new paradigms appearing.
Same, at least for the OpenAI AGI definition: "An AI system that is at least as intelligent as a normal human, and is able to do any economically valuable work."
> This one is bizarre, if true (I'm not convinced it is).
> The entire purpose of the attention mechanism in the transformer architecture is to build this representation, in many layers (conceptually: in many layers of abstraction).
I think this is really about a hidden (i.e. not readily communicated) difference in what the word "meaning" means to different people.
Could be, by "meaning" I mean (heh) that transformers are able to distinguish tokens (and prompts) in a consequential ("causal") way, and that they do so at various levels of detail ("abstractions").
I think that's the usual understanding of how transformer architectures work, at the level of math.
Not sure I understand the last sentence:
> The fundamental challenge in AI for the next 20 years is avoiding extinction.
I think he's referring to AI safety.
https://lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-lis...
For a perhaps easier to read intro to the topic, see https://ai-2027.com/
or read your favorite sci-fi novel, or watch Terminator. this is pure bs by a charlatan
It's a tell that he's been influenced by rationalist AI doomer gurus. And a good sign that the rest of his AI opinions should be dismissed.
He's referring to humanity, I believe
It's ambiguous. It could go the other way. He could be referring to that oldest of science fiction tropes: The Bulterian Jihad, the human revolt against thinking machines.
Meh. I think the more likely scenario is the financial extinction of the AI companies.
> * The fundamental challenge in AI for the next 20 years is avoiding extinction.
This reminded me of the movie Don't Look Up, where they basically gambled with humanity's extinction.
I'm impressed that such a short post can be so categorically incorrect.
> For years, despite functional evidence and scientific hints accumulating, certain AI researchers continued to claim LLMs were stochastic parrots
> In 2025 finally almost everybody stopped saying so.
There is still no evidence that LLMs are anything beyond "stochastic parrots". There is no proof of any "understanding". This is seeing faces in clouds.
> I believe improvements to RL applied to LLMs will be the next big thing in AI.
With what proof or evidence? Gut feeling?
> Programmers resistance to AI assisted programming has lowered considerably.
Evidence is the opposite, most developers do not trust it. https://survey.stackoverflow.co/2025/ai#2-accuracy-of-ai-too...
> It is likely that AGI can be reached independently with many radically different architectures.
There continues to be no evidence beyond "hope" that AGI is even possible, yet alone that Transformer models are the path there.
> The fundamental challenge in AI for the next 20 years is avoiding extinction.
Again, nothing more than a gut feeling. Much like all the other AI hype posts this is nothing more than "well LLMs sure are impressive, people say they're not, but I think they're wrong and we will make a machine god any day now".
Strongly agree with this comment. Decoder-only LLMs (the ones we use) are literally Markov Chains, the only (and major) difference is a radically more sophisticated state representation. Maybe "stochastic parrot" is overly dismissive sounding, but it's not a fundamentally wrong understanding of LLMs.
The RL claims are also odd because, for starters, RLHF is not "reinforcement learning" by any classical definition of RL (which almost always involves an online component). And further, you can chat with anyone who has kept up with the RL field and quickly realize that this is also a technology that still hasn't quite delivered on the promises it's been making (despite being an incredibly interesting area of research). There's no reason to speculate that RL techniques will work with "agents" where they have failed to achieve widespread success in similar domains.
I continue to be confused why smart, very technical people can't just talk about LLMs honestly. I personally think we'd have much more progress if we could have conversations like "Wow! The performance of a Markov Chain with proper state representation is incredible, let's understand this better..." rather than "AI is reasoning intelligently!"
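For anyone who hasn't played with the idea, here's a toy word-level Markov chain in Python. The "state" is just the last n-1 words; in an LLM the state is the whole context window pushed through a learned representation. Same formal object, radically richer state (toy code, not a claim about any particular model):

    # A word-level Markov chain: sample the next token conditioned only on the
    # current state (here, the previous n-1 words).

    import random
    from collections import defaultdict

    def train(tokens, n=3):
        model = defaultdict(list)
        for i in range(len(tokens) - n + 1):
            state, nxt = tuple(tokens[i:i + n - 1]), tokens[i + n - 1]
            model[state].append(nxt)
        return model

    def generate(model, state, length=20):
        out = list(state)
        for _ in range(length):
            choices = model.get(tuple(out[-len(state):]))
            if not choices:
                break
            out.append(random.choice(choices))  # stochastic next-token choice
        return " ".join(out)

    corpus = "the cat sat on the mat and the cat ran off the mat".split()
    print(generate(train(corpus, n=3), ("the", "cat")))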
I get why non-technical people get caught up in AI hype discussions, but for technical people who understand LLMs it seems counterproductive. Even more surprising to me is that this hype has completely destroyed any serious discussion of the technology and how to use it. There's so much opportunity lost around practical uses of incorporating LLMs into software while people wait for agents to create mountains of slop.
>Decoder-only LLMs (the ones we use) are literally Markov Chains
Real-world computers (the ones we use) are literally finite state machines
Only if the computer you use does not have memory. Definitionally, if you are writing to and reading from memory, you are not using an FSM.
No, it can still be modeled as a finite state machine. Each state just encodes the configuration of your memory. I.e. if you have 8 bits of memory, your state space just encodes 2^8 states for each memory configuration.
Any real-world deterministic thing can be encoded as an FSM if you make your state space big enough, since by definition it has only a finite number of states.
You could model a specific instance of using your computer this way, but you could not capture the fact that you can execute arbitrary programs with your PC represented as an FSM.
Your computer is strictly more computationally powerful than an FSM or PDA, even though you could represent particular states of your computer this way.
The fact that you can model an arbitrary CFG as a regular language with limited recursion depth does not mean there's no meaningful distinction between regular languages and CFGs.
> you can execute arbitrary programs with your PC represented as an FSM
You cannot execute arbitrary programs with your PC, your PC is limited in how much memory and storage it has access to.
>Your computer is strictly more computationally powerful
The abstract computer is, but _your_ computer is not.
>model an arbitrary CFG as an regular language with limited recursion depth does not mean there’s no meaningful distinction between regular languages and CFG
Yes, this I agree with. But going back to your argument, claiming that LLMs with a fixed context window are basically Markov chains so they can't do anything useful is reductio ad absurdum in the exact same way as claiming that real-world computers are finite state machines.
A more useful argument on the upper-bound of computational power would be along the lines of circuit complexity I think. But even this does not really matter. An LLM does not need to be turing complete even conceptually. When paired with tool-use, it suffices that the LLM can merely generate programs that are then fed into an interpreter. (And the grammar of turing-complete programming languages can be made simple enough, you can encode Brainfuck in a CFG). So even if an LLM could only ever produce programs with a CFG grammar, the combination of LLM + brainfuck executor would give turing completeness.
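To illustrate that last point, here's a minimal Brainfuck interpreter in Python (a sketch only: no `,` input handling and no bracket-error checking). If a model can only ever emit strings from a simple grammar, pairing it with an executor like this is already enough for Turing-complete behaviour, memory permitting.

    # Minimal Brainfuck executor: the "simple grammar + interpreter" half of
    # the argument above.

    def run_bf(code: str, tape_len: int = 30_000) -> str:
        tape, ptr, pc, out = [0] * tape_len, 0, 0, []
        jumps, stack = {}, []
        for i, c in enumerate(code):          # pre-match the brackets
            if c == "[":
                stack.append(i)
            elif c == "]":
                j = stack.pop()
                jumps[i], jumps[j] = j, i
        while pc < len(code):
            c = code[pc]
            if c == ">": ptr += 1
            elif c == "<": ptr -= 1
            elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
            elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
            elif c == ".": out.append(chr(tape[ptr]))
            elif c == "[" and tape[ptr] == 0: pc = jumps[pc]
            elif c == "]" and tape[ptr] != 0: pc = jumps[pc]
            pc += 1
        return "".join(out)

    # Prints "Hi": 72 is 'H', 105 is 'i'.
    print(run_bf("+" * 72 + "." + "+" * 33 + "."))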
This post is a bait for enthusiasts. I like it.
> Chain of thought is now a fundamental way to improve LLM output.
That kinda proves _that LLMs back then were pretty much stochastic parrots indeed_, and the skeptics were right at the time. Today, enthusiasts agree with what the skeptics were saying back then: without CoT, the AI feels underwhelming, repetitive and dumb, and it's obvious that something more was needed.
Just search past discussions about it, people were saying the problem would be solved with "larger models" (just repeating marketing stuff) and were oblivious to the possibility of other kinds of innovations.
> The fundamental challenge in AI for the next 20 years is avoiding extinction.
That is a low level sick burn on whoever believes AI will be economically viable short-term. And I have to agree.
Must feel nice to let yourself be coddled by in-group/out-group thinking like that. "I've decided that AI is bad and useless, therefore anyone disagreeing must be an AI bro".
They are very advanced stochastic parrots that allow AI invested authors to suddenly write in perfect English.
If Antirez has never gotten an LLM to make an absolutely embarrassing mistake, he must be very lucky, or we should stop listening to him.
Programmers' resistance has not weakened. Since the 40% ORCL drop, anti-LLM opinions have been censored and downvoted here. Many people have given up, and we always get articles from the same LLM influencers.
Regarding the stochastic parrots:
It is easy to see that LLMs exclusively parrot by asking them about current political topics [1], because they cannot plagiarize settled history from Wikipedia and Britannica.
But of course there also is the equivalence between LLMs and Markov chains. As far as I can see, it does not rely on absurd equivalences like encoding all possible output states in an infinite Markov chain:
https://arxiv.org/abs/2410.02724
Then there is stochastic parrot research:
https://arxiv.org/abs/2502.08946
"The stochastic parrot phenomenon is present in LLMs, as they fail on our grid task but can describe and recognize the same concepts well in natural language."
As said above, this is obvious to anyone who has interacted with LLMs. Most researchers know what is expected of them if they want to get funding and will not research the obvious too deeply.
[1] They have Internet access of course.
> The fundamental challenge in AI for the next 20 years is avoiding extinction.
So nice to see people who think about this seriously converge on this. Yes. Creating something smarter than you was always going to be a sketchy prospect.
All of the folks insisting it just couldn't happen or ... well, there have just been so many objections. The goalposts have walked from one side of the field to the other, and then left the stadium, went on a trip to Europe, got lost in a beautiful little village in Norway, and decided to move there.
All this time, though, there has been the prospect of instantiating something smarter than you (and yes, it will be smarter than you even if it's at human level, because of electronic speeds). This whole idea is just cursed, and we should not do the thing.
"Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."
Seems they also want some AI money[0]. Guess I'll keep using Valkey.
> they
I'm not sure antirez is involved in any business decision making process at Redis Ltd.
He may not be part of "they".
I'm not involved in business decisions and while I'm very AI positive I believe Redis as a company should focus on Redis fundamentals: so my piece has zero alignment on what I hope for the company.
In any case, what would be the problem? The page you mentioned simply illustrates how the product can be used in a specific domain; it doesn't seem forced to me.
Conflict of interest and disclosure posts are frequently downvoted.
You mean flagged.
Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.
Ah, so you just went through my history and downvoted everything in sight! Thanks for confirming.
I don't follow? I didn't flag you; you were remarking on a previous comment alleging shillage from 'antirez, and I'm pointing out that the behavior you say is "downvoted" is actually a black-letter guideline violation. People flag those posts.
Another one, though:
Please don't comment about the voting on comments. It never does any good, and it makes boring reading.
I can't help you if you repeatedly misinterpret me. Once you made the first response in this subthread, 4 or 5 of my comments went from 1 to 0 or -1. Cum hoc ergo propter hoc? Maybe.
I'll design a system for the senate that enables outside voters to first turn down the microphone's volume of a speaker if he says that another senator works for company X and then removes him from the floor. That'll be a great success for democracy and "intellectual curiosity", which is also in the guidelines.