The most useful reframe I've found: codegen changes the cost structure of writing code, not the cost structure of knowing what to write.
Before, if you had a vague spec you'd write a small prototype to clarify your thinking. Now you can have a complete implementation in minutes — but you still have an unclear spec. You've just moved the uncertainty forward in the process, where it's more expensive to catch.
The teams I've seen use LLMs well treat the output as a rough draft that requires real review, not a finished product. The teams that get into trouble treat generation speed as the goal. Both groups produce the same lines of code. Very different results.
> codegen changes the cost structure of writing code, not the cost structure of knowing what to write.
Yes, and knowing what to write has always been the more important challenge, long before AI. One thing I’ve noticed, though, is that in some cases LLMs help me try out and iterate on more concepts and design ideas than I did before. I can build the thing I thought was going to work, see the downsides I didn’t anticipate, and then fix it, or tear it down and try something else. That was always possible, but with LLMs the cycle feels much easier and faster, and I go through more rough-draft iterations than I used to. I’m trying more ideas than I would have otherwise, and in many cases it feels like that leads to a stronger foundation on which to take the draft through review to production.
It’s far more reviewing and testing than before, but in short: there may be an important way in which the speed of writing code feeds back into figuring out what to write. Yes, we should absolutely focus on priorities, requirements, and quality, but we also shouldn’t underestimate the impact that iteration speed can have on those goals.
Yes. I'll go down a wrong path in 20 minutes that'd have taken me half a day to go down by hand, and I keep having to remind myself that code is cheap now (and the robot doesn't get tired) so it's best to throw it away and spend 10 more minutes and get it right.
It would be interesting to see how good LLMs are at interactive system-design work. I find them way too positive when I need them to shut me down or redirect my ideas entirely.
We need a comparison between an LLM and an experienced engineer reviewing a junior's system design for some problem. I imagine the LLM will be way too enthusiastic about whatever design is presented and will help force poor designs into working shape.
I’ve found them to be pretty good if you tell them to be more critical and to operate as a sophisticated rubber duck. They are actually pretty decent at asking questions that I can answer to help move things forwards. But yeah by default they really like to tell me I’m a fucking genius. Such insight. Wow.
The most exciting thing for me is it has changed the cost structure of studying code for refinement.
I’d never done half as much code profiling & experimenting before. Now that generating one-shot code is cheap, I can send the agent off on a mission to find slow code and attempt to speed it up. This way, only once it has shown speedup is there and reasonably attainable do I need to think about how to speed the code up “properly”. The expected value was too low when the experimenting was expensive.
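The kind of cheap experiment meant here can be as small as a micro-benchmark the agent writes and runs before anyone commits to optimizing "properly". A minimal sketch of that shape (the functions, sizes, and scenario are made up for illustration, not from any real codebase):

```python
import timeit

# Illustrative experiment: before rewriting the "real" code, measure
# whether a set-based membership check actually beats the list-based one.
def contains_all_list(haystack, needles):
    # O(len(haystack)) scan per needle
    return all(n in haystack for n in needles)

def contains_all_set(haystack, needles):
    hs = set(haystack)  # one O(n) pass, then O(1) lookups per needle
    return all(n in hs for n in needles)

haystack = list(range(10_000))
needles = list(range(0, 10_000, 7))

t_list = timeit.timeit(lambda: contains_all_list(haystack, needles), number=5)
t_set = timeit.timeit(lambda: contains_all_set(haystack, needles), number=5)
print(f"list: {t_list:.4f}s  set: {t_set:.4f}s")
```

Only once a throwaway harness like this shows a real, attainable speedup is it worth thinking about how to land the optimization cleanly.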
I have to be honest. I’ve written a lot of pro-AI / dark-software articles and I think I’m due an update, ’cause it worked great, till it didn’t.
I could write a lot about what I’ve tried and learnt, but so far this article is a very based view and matches my experience.
I definitely suffered under the unnecessary complexity, and there were moments I wished I’d never used AI. Even with Opus 4.6 I could feel how confused it was and how it couldn’t really understand the business objectives. It became way faster to jump into the code, clean it up, and fix it myself. I’m not sure yet where the line is, or where it will end up.
I recently started using AI for personal projects, and I find it works really well for 'spike' type tasks, where what you're trying to do is grow your knowledge about a particular domain. It's less good at discovering the correct way of doing things once you've decided on a path forward, but still more useful than combing through API docs and manpages yourself.
It might not actually deliver working things all that much faster than I could, but I don't feel mentally drained by the process either. I used to spend a lot of time reading architecture docs in order to understand available solutions, now I can usually get a sense for what I need to know just from asking ChatGPT how certain things might be done using X tool.
In the last few days, I've stood up syncthing, tailscale with a headscale control plane, and started making working indicators and strategies in PineScript, TradingView's scripting language. Things I had no energy for, or that would have been week-long projects, take hours or a day or so. AI's strengths synergize really well with how humans want to think.
I just paste an error message in, and ChatGPT figures out what I'm trying to do from context, then gives me not just a possible resolution, but also why the error is happening. The latter is just as useful as the former. It's wrong a lot, but it's easy to suss out.
I continue to jump into these discussions because I feel like these upvoted posts completely miss what’s happening…
- guardrails are required to generate useful results from GenAI. This should include clear instructions on design patterns, testing depth, and iterative assessments.
- architecture decision records are one useful way to prevent GenAI from being overly positive.
- very large portions of code can be completely regenerated quickly when scope and requirements change. (skip debugging - just regenerate the whole thing with updated criteria)
- GenAI can write thorough functional and behavioral unit tests. This is no longer a weakness.
- You must suffer the questions and approvals. At no point can you let agents run unattended for extended periods on progressive sets of work. You must watch what is generated. One thing that concerns me about the new 1M-token context in Claude Code is that many will double down on agent freedom. You can’t. You must watch the results and examine functionality regularly.
- No one should care about actual code ever again. It’s ephemeral. The role of software engineering is now molding features and requirements into functional results. Choosing Rust, C#, Java, or Typescript might matter depending on the domain, but then you stop caring and focus on measuring success.
My experience is rolled up in https://devarch.ai/ and I know I get productive and testable results using it everyday on multiple projects.
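To make the "behavioral unit tests" point concrete, here is a hypothetical example of the kind of test meant: it pins down observable behavior (what a cart charges) rather than implementation details. The class, threshold, and discount rule are all invented for the sake of the sketch:

```python
# Hypothetical domain object for the example.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, price, qty=1):
        self.items.append((price, qty))

    def total(self):
        subtotal = sum(p * q for p, q in self.items)
        # Assumed business rule for this example: 10% off orders over 100.
        return subtotal * 0.9 if subtotal > 100 else subtotal

def test_discount_applies_over_threshold():
    cart = Cart()
    cart.add(60, 2)   # subtotal 120, over the threshold
    assert cart.total() == 108

def test_no_discount_at_or_below_threshold():
    cart = Cart()
    cart.add(50, 2)   # subtotal exactly 100, no discount
    assert cart.total() == 100
```

Tests like these survive a full regeneration of the implementation, which is exactly why they matter if you treat the code itself as ephemeral.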
> No one should care about actual code ever again. It’s ephemeral.
Caveat: it still works best in a codebase that is already good. So while any one line of code is ephemeral, how is the overall codebase trending? Towards a bramble, or towards a bonsai?
If the software is small and not mission critical, it doesn’t matter if it becomes a bramble, but not all software is like that.
I think it works great in codebases that are good, but I think it will degrade the quality of the codebase compared to what it was before.
A good codebase depends on the business context, but in my case it’s an agile one that can react to discovered business cases. I’ve written great typed helpers that practically give me typed Mongo operators for most cases. They make all operations really smooth. AI keeps finding creative ways to avoid my implementations, and over time there are more edge cases, thin wrappers, lint-ignore comments, and other funny exceptions, while I’m losing the guarantees I built...
The post is about using LOC as a metric when making any sort of point about AI. Nowhere do I suggest someone shouldn't use it, nor that they should expect negative results if they opt to.
No one I’ve ever worked with in 40 years has ever seriously used LOC as a measurement of progress or success. I honestly don’t know where this comes from.
That’s the odd psychosis here. Everyone knew LOC was a terrible measure. But perhaps the instinctual pull was always there, and now that you can generate tens of thousands of halfway-coherent LOC in hours, our sensibilities are overwhelmed.
Yes, but it comes up in conversations of LLMs a lot. Thus, the rant in question. I think we are in agreement, or at least we lack disagreement, because that is the only stance I endeavored to take in the post.
> No one should care about actual code ever again. It’s ephemeral.
> very large portions of code can be completely regenerated quickly when scope and requirements change.
This is complete and utter nonsense, coming from someone who isn't sticking around to maintain a product long enough to see the end result of working this way.
All of this advice sounds like it comes from experience instead of theoretical underpinning or reasoning from first principles. But this type of coding is barely a year old, so there's no way you could have enough experience to make these proclamations.
Based on what I can talk about from decades of experience and study:
No natural language specification or test suite is complete enough to allow you to regenerate very large swaths of code without changing thousands of observable behaviors that will be surfaced to users as churn, jank, and broken workflows. The code is the spec. Any spec detailed enough to allow 2 different teams (or 2 different models or prompts) to produce semantically equivalent output is going to be functionally equivalent to code. We as an industry have learned this lesson multiple times.
I'd bet $1,000 that there is no non-trivial commercial software in existence where you could randomly change 5% of the implementation while still keeping to the spec and it wouldn't result in a flood of bug reports.
The advantage of prompting in a natural language is that the AI fills in the gaps for you. It does this by making thousands of small decisions when implementing your prompt. That's fine for one-offs, and it's fine if you take the time to understand what those decisions are. You can't just let the LLM change all of those decisions on a whim, which is the natural result of generating large swaths of code, ignoring it, and pretending it's ephemeral.
> - No one should care about actual code ever again. It’s ephemeral. The role of software engineering is now molding features and requirements into functional results. Choosing Rust, C#, Java, or Typescript might matter depending on the domain, but then you stop caring and focus on measuring success.
I think this has always been the case. "Bad programmers worry about the code. Good programmers worry about data structures and their relationships." Perhaps you mean that they shouldn't worry about structures & relationships either, but I think that is a fool's errand. Although, to be fair, neither of those needs to be codified in the code itself; ignore them at your own peril...
Data structures are still conversational items. I come from the DDD community and adamantly push back on data first architectures. Modules or Bounded Contexts reveal their relationships and data over time.
Perhaps you can think of the modules or "Bounded Contexts" as a type of data structure and the relationships between them. Idk. I don't have a particularly great view of DDD fwiw.
A well considered article, despite the author categorizing it as a rant. I appreciate the appendix quotations, as well as the acknowledgement that they are appeals to authority.
Whilst the author clearly holds a belief that comes down on one side of the debate, I hope folks can engage with the "Should we abandon everything we know?" question, which I think is the crux of things. Evidence that AI-driven development is a valuable paradigm shift is thin on the ground, and we've done paradigm shifts before that did not really work out, despite massive support for them at the time. (Object-Oriented-Everything, Scrum, etc.)
I didn't set out to teach you anything, change your behavior, or give you practical takeaways, so it's a rant (: Emotions can be expressed with citations.
I am fully on board with gen AI representing a paradigm shift in software development. I tried to be careful not to take a stance on other debates in the larger conversation. I just saw too many people talking about how much code they're generating as proof statements when discussing LLMs. I think that, specifically---i.e., using LOC generated as the basis of any meaningful argument about effectiveness or productivity---is a silly thing to do. There are plenty of other things we should discuss besides LOC.
I guess I over-diagnosed your stance, apologies.
I wonder if you have a take on measuring productivity in light of the potential difficulty of achieving good outcomes across the general population?
You mention in the second appendix (which I skipped on my first read) that you are a rather experienced LLM user, with experience in all the harnesses and context management touted as "best practice" nowadays. Given the effort this seems to take, do you think we're vulnerable to mis-measuring?
My mind is always thrown to arguments about Agile, or even Communism. "True Communism has never been tried" or "Agile works great when you do it right", which are still thrown about in the face of evidence that these things seem impossible, or at least very difficult, to actually implement successfully across the general population. How would we know if AI-driven-development had a theoretical higher maximum "productivity" (substitute with "value", "virtue", "the general good", whatever you want here) than non AI-driven-development, but still a lower actual productivity due to problems in adoption of the overall paradigm?
Measuring productivity in software development is a hard problem, beyond the typical categorizations used in computer science. Unfortunately, I think my best answer is to go read the book I linked in the conclusion: https://link.springer.com/chapter/10.1007/978-1-4842-4221-6_...
That is an unsatisfying answer. I can point to anecdotes that suggest AI is hurting productivity or improving it, but those don't make an argument. And the extremes on either side make it very difficult to consider. How do you weigh "An LLM deleted my production database" against "I built a business on the back of AI-assisted software"?
I think we have to wait and see. And we should revisit questions of cost and value continuously, not just about LLMs, but generally in life. Most of my motivation (though not an overwhelming majority) around using LLMs right now is a mix of curiosity and wanting to avoid the fate of the steam shovel.
I feel there is a set of codebases in which LLMs aren't showing the 2-10x lift in productivity.
There is also a set of codebases in which LLMs are one-shotting the most correct code and even finding edge cases that would've been hard to find in human reviews.
At a surface level, it seems obvious that legacy codebases tend to fall in the first category and more greenfield work falls in the second category.
Perhaps, this signals an area of study where we make codebases more LLM-friendly. It needs more research and a catchy name.
Also, certain things we worry about as software artisans, like abstractions, reducing repeated code, naming conventions, argument ordering, and so on, are not a concern for LLMs, as long as the LLMs are consistent in how they write code.
For example: we were taught that it is bad to have multiple "foo()" implementations. In the LLM world, it isn't _that_ bad. You can instruct the LLM to "add feature x and fix all the affected tests" (or even better, "add feature x to all foo()"), and if feature x relies on "foo()", it fixes every foo() method. This is a big deal.
It's a bold claim that writing code was never the bottleneck. It may not be the only bottleneck, but we conveniently move the goalposts now that there is a more convenient mechanism and our profession is under threat.
> our profession is under threat.
It is. But I don't think it's AI that threatens it. It's susceptibility to hype among people who, unfortunately, have power over other people's jobs: C-level management who don't know any better than to parrot what others in the industry are saying. How is that "all engineers will be replaced in 6 months" going? We've seen something similar in past decades: outsourcing. And it worked out completely differently from how it was envisioned, at least in the field of software development. So let's wait and see what happens. Some kind of backlash has already started in recent months.
Don't forget the threat from incompetent developers who have spent years copy-pasting things from Stack Overflow without understanding what they're doing.
This is a case of "depends on the project".
For very small projects, code may be the main bottleneck. Just to write the code is what takes most of the time. Adding code faster can accelerate development.
For larger projects, design, integration, testing, feature discovery, architecture, bug fixing, etc. takes most of the time. Adding code faster may slow down development and create conflicts between teams.
Discussing without a common context makes no sense in this situation.
So, depending on your industry and the size of the projects that you have worked on one thing or the other may be true.
There's plenty of evidence of this line of thinking even from before the turn of the Millennium. Mythical Man Month, No Silver Bullet, Code Complete, they all gesture at this point.
Writing the code can definitely feel like the bottleneck when it's a single-person project and you're doing most of the other hard parts in your head while staring at the code.
I'd actually argue the claim is not that bold; in fact, it has been common in our industry for most of the short time it has been around. I included a few articles and studies with time breakdowns of developer activity that I think help to illustrate this.
If an activity (getting code into source files) used to take up <50% of the time of programmers, then removing that bottleneck cannot even double the throughput of the process. This is not taking into account non-programmer roles involved in software development. This is akin to Amdahl's law when we talk about the benefits of parallelism.
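The bound above is just Amdahl's law. Plugging in an illustrative split (the 40% figure is made up for the example; the point holds for any share under 50%):

```python
def max_speedup(p: float, s: float) -> float:
    """Amdahl's law: overall speedup when a fraction p of total
    effort is accelerated by a factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# Even if codegen made the typing itself effectively instantaneous
# (s -> infinity), a 40% "writing code" share caps total throughput:
print(round(max_speedup(0.40, 1e9), 2))  # -> 1.67
print(round(max_speedup(0.45, 1e9), 2))  # -> 1.82; any p < 0.5 stays below 2x
```

So "infinitely fast typing" can't even double throughput if typing was less than half the job to begin with.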
I made no argument with regard to threat to the profession, and I make none here.
Writing good code might be a bottleneck and the same can't be said about code in general.
This article describes the body of knowledge I was taught when I joined the industry, and it parallels my experience with, and thoughts about, AI.
I have come to the realization that most people in the industry don't know this body of knowledge, or even that it exists.
I'm now seeing the same people trying to solve their ineffectiveness with AI.
I don't know what to think about this situation. My intuition hints at it not being good.
Hey, author here. Never thought I'd see my pokey little blog on HN and all that.
Happy to discuss further.
Hey, I like your writing. You got an rss feed or anything?
Thanks!
I think there's some goldilocks speed limit for using these tools relative to your skillset. When you're building, you forget that you're also learning - which is why I actually favour some AI code editors that aren't as powerful because it gets me to stop and think.
There is a saying that you need to write an essay three times. The first time it's puked out, the second is decent, and the third is good.
It's quite similar with code, and with code, less is more for tries 1 and 2.
Unfortunately, this post was published at the puked out phase (;
(author here)
It's really well written
> Humans and LLMs both share a fundamental limitation. Humans have a working memory, and LLMs have a context limit.
But there’s a more important difference: I can’t spin up 20 decent human programmers from my terminal.
The argument that "code was never the bottleneck" is genuinely appealing, but it hasn’t matched my experience at all. I’m getting through dramatically more work now. This is true for my colleagues too.
My non-technical niece recently built a pretty solid niche app with AI tools. That would have been inconceivable a few years ago.
Would you entertain the idea that "work was never the bottleneck", or even "building products was never the bottleneck"?
We need to address Jevons' Paradox somehow.
I love Jevons’ paradox too, but if we apply it here don’t we still end up with more software?
Definitely would entertain -- I do agree with your framing. I just think the article undersells the impact of fast+cheap codegen.
Lowering the cost of implementation will expose (and already has exposed) new bottlenecks elsewhere. But IMHO many of those bottlenecks probably weren’t worth serious investment in solving before. The codegen change will shift that.
I think that's where a heck of a lot of the frustration on this topic is coming from. Some engineers claim to have solved the code generation issue well enough that it hasn't been the bottleneck in their local environment, and have been trying to pivot to widening the new bottlenecks for a while now, but have been confounded by organisational dynamics.
Seeing the other bottlenecks finally being taken seriously, but (if I'm to be petulant) all the "credit" for solving the code bottleneck going to LLM systems, is painful, especially when you are in a local domain where the codegen bottleneck doesn't matter very much and hasn't for a long time.
I suspect engineers that managed to solve the code generation bottlenecks are compulsive problem solvers, which exacerbates the issue.
That isn't to say there aren't domains where it still does matter, although I'm dubious that LLM codegen is the best solve there. But I am not dubious that it is at least a solve.
I guess what people debate on here is what "decent" means. From my experience, these LLMs spit out dog-shit code, so 20 agents equal 20x more dog shit.
The collaboration aspect is what many AI enthusiasts miss. As humans, our success is dependent on our ability to collaborate with others. You may believe that AI could replace many individual software engineers, but if it does so at the expense of harming collaboration, it’s a massive loss. AI tools are simply not good at collaborating. When you add many humans to a project, the result becomes greater than the sum of its parts. When you add many AI tools to a project, it quickly becomes a muddled mess.
I look at it backwards: a few humans improve a project. But once you get to sufficient size, principal-agent problems dominate. What is good for a division and what is good for the company disagree. What is good for a developer who needs a big project for their promotion package is not what the company needs. A company with a headcount of 700 is more limber and better aligned than one with 3,000 or 30,000. It's amazing how little alignment there ever is when you get to the 300k range.
AI, if anything, is amazing at collaborating. It's not perfectly aligned, but you sure can get it to tell you when your idea is unsound, all while having lessened principal-agent issues. Anything we can do to minimize the number of people that need to align towards a goal, the more effectively we can build, precisely due to the difficulties of marshalling large numbers of people. If a team of 4 can do the same as a team of 10, you should always pick the team of 4, even if they are more expensive put together than the 10.
Yes, which is why every successful company has exactly 4 people and no more. Collaboration goes beyond your immediate team members - if you work for an organization, you’re supported by it in ways you may take for granted. Replace this structure with AI models, and the whole thing would fall apart.
> AI tools are simply not good at collaborating
My primary use of LLM tools is as a collaborator.
I agree that if you try to use the LLM as a wholesale outsourcing of your thought process the results don’t scale. That’s not the only way to use them, though.
> When you add many humans to a project, the result becomes greater than the sum of its parts. When you add many AI tools to a project, it quickly becomes a muddled mess.
I have absolutely been on projects where there were too many cooks in the kitchen, and adding more people to the team only led to additional chaos, confusion, and complexity. Ever been in a meeting where a designer, head of marketing, and the CTO are all giving feedback on what size font a button should be? I certainly have, and it's absurd.
One of my worst experiences arose from a completely incompetent PM. Absolutely no technical knowledge; couldn't figure out how to copy and paste a URL if his life depended on it. He eventually had to be removed from a major project I was on, and I was asked to take over PM duties while also doing my dev work. I was actually happy to do so, because I was already spending hours babysitting him; now I could just get the same tasks done without the political BS.
Could adding many AI tools to a project become problematic? Maybe. But let's not pretend throwing more humans at a project is going to lead to some synergistic promised land.
AI will allow us to collaborate on higher level decisions and not on whether we should use for loops or functional interfaces.
This attitude is acceptable for principal engineers, but anyone lower in the chain with this perspective typically presides over a product that is constantly breaking due to hundreds of unhandled edge cases, AI or not.
Speak for yourself, I have never thrown away code at this rate in my entire career. I couldn't keep up this pace without AI codegen.
Did you read the article? I don’t think that refutes anything the author said even a little bit.
XD
In practical terms, "productivity" is any metric that people with power can manipulate (gaming the numbers, changing narratives, etc.) to steer the behavior of others toward their own interests.
ALL OF IT is meaningless. It's a pointless discussion.
I'd recommend you read the book referenced in the conclusion: https://link.springer.com/chapter/10.1007/978-1-4842-4221-6_...
The full PDF is available for download. It's mostly a series of essays, so you can pick and choose and read nonlinearly. It's worth thinking about beyond nihilistic takes.
honestly the thing that trips me up is when codegen makes me feel productive but I haven't actually validated anything. like I'll have claude write a whole data pipeline in 20 minutes and then spend 2 hours debugging edge cases it didn't think about because it doesn't know our data
the speed is real but it mostly just moves where I spend my time. less typing, more reading and testing. which is... fine? but it's not the 10x thing people keep claiming
Would getting to the same edge-case-free outcome have taken you less than 2h20min if you didn't have AI?
I think it would typically have taken you longer.
> I think it would typically have taken you longer.
That's actually highly doubtful to me.
Tons of studies and writing about how reading and debugging code is wildly more time consuming than writing it. That time goes up even more when you're not the one that wrote the code in the first place. It's why we've spent decades on how to write readable/maintainable code.
So either all this shit about reading/maintaining code being difficult was lies and we've spent decades wasting our time or AIs can only improve productivity if you stop verifying/debugging code.
So I find it very unlikely that it would have taken more than a couple hours to just write it the first time.
It's so difficult to quantify productivity over an entire field, especially when it's so vast. Chris Lattner recently concluded this about LLM tooling [0]:
> AI systems can internalize the textbook knowledge of a field and apply it coherently at scale. AI can now reliably operate within established engineering practice. This is a genuine milestone that removes much of the drudgery of repetition and allows engineers to start closer to the state of the art.
This matches my experience. There is a lot of code that we probably should not need to write and rewrite anymore, but we still do, because this field has largely failed at deriving complete and reusable solutions to trivial problems. There is a massive coordination problem that has fragmented software across the stack, and LLMs provide one way of addressing it: generating some of the glue and other trivial but expensive, unproductive interop code.
But the thing about productivity is that it's not one thing and cannot be reduced to an anecdote about a side-project, or a story about how a single company is introducing (or mandating) AI tooling, or any single thing. Being able to generate a bunch of code of varying quality and reliability is undeniably useful, but there are simply too many factors involved to make broad sweeping claims about an entire industry based on a tool that is essentially autocomplete on crack. Thus it's not surprising that recent studies have not validated the current hype cycle.
[0] https://www.modular.com/blog/the-claude-c-compiler-what-it-r...
I went to look at some of the authors other posts and found this:
https://www.antifound.com/posts/advent-of-code-2022/
So much of our industry has spent the last two decades honing itself into a temple built around the idea of "leet code". From the interview to things like advent of code.
Solving brain teasers and knowing your algorithms cold in an interview was always a terrible idea. The sort of engineers it invited to the table and the kinds of thinking it propagated were bad for our industry as a whole.
LLMs make this sort of knowledge moot.
The complaints about LLMs that lack any information about the domains being worked in, the means of integration (deep in your IDE vs. cut and paste into vim), and what you're asking it to do (in a very literal sense) leave out the critical factors that remain unaired in these sorts of laments.
It's just hubris. The question not being asked is: "Why are you getting better results than me? Am I doing something wrong?"
> The complaints about LLM's that lack any information about the domains being worked in, the means of integration (deep in your IDE vs cut and paste into vim) and what your asking it to do (in a very literal sense) are the critical factors that remain "un aired" in these sorts of laments.
I'm not sure if this is a direct response to the article or a general point. The article includes an appendix about my use of LLMs and the domains I have used them in.
Not GP, but your appendix about LLM usage matches exactly how I use it too: mainly for rubber ducking and research. The codegen it's useful for (that I've found) is generating unit tests. Code coverage tools and a quick skim are more than sufficient for quality checks since unit tests are mostly boilerplate and you want to make sure that different branches are being covered.
I've had a large project recently which has biased my view on unit testing from LLMs. It includes a lot of parsing and other workflows that require character-specific placement in strings for a lot of tests. Due to how tokenization works, that is a gnarly use case for LLMs. I am trying not to form too many strong opinions about LLM-driven-TDD based on it. My forays into other domains show better results for unit tests, but the weight of my experience is in this particular low point lately.
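For a flavor of what "character-specific placement" means here (a made-up miniature, not the actual project): tests like the one below hinge on exact column indices, and models often land off by one because tokenization groups characters in chunks rather than one at a time.

```python
def caret_line(source: str, col: int) -> str:
    """Render a diagnostic caret under 0-based column `col`."""
    return " " * col + "^"

# Hypothetical parser diagnostic: the caret must land exactly under
# the first character of the offending token.
source = "let x = 1 ++ 2"
col = source.index("++")          # the bad token starts at column 10
print(source)
print(caret_line(source, col))    # "^" sits directly below the first "+"
```

Writing dozens of these by hand is tedious but mechanical; getting an LLM to produce them with every offset correct is, in my experience, much less reliable.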
My career predates the leetcode phenomenon, and I always found it mystifying. My hot take is that it’s what happens when you’re hiring what are essentially human compilers: they can spit out boilerplate solutions at high speed, and that’s what leetcode is testing for.
For someone like that, LLMs are much closer to literally replacing what they do, which seems to explain a lot of the complaints. They’re also not used to working at a higher level, so effective LLM use doesn’t come naturally to them.
For me it's simple:
1. Assume you're to work on product/feature X.
2. If God were to descend and give you a very good, reality-tested spec:
3. Would you be done faster? Of course, because as every AI doomer says, writing code was never the bottleneck!!1!
4. So the only bottleneck is getting to the spec.
5. Guess what AI can help you with as well, because you can iterate out multiple versions with little mental effort and no emotional sunk cost investment?
ergo coding is a solved problem
Man, if this were true we'd see a crazy, massive explosion of quality products being written and launched. While we see some use, I don't perceive an acceleration. In fact, I see a lot of trivial bugs being deployed to prod.
And then it turns out God wrote the spec in code because that’s what any spec sufficient to produce the same program from 2 different teams/LLMs would be.
4 doesn't follow from 3
rules of thumb for when to take blog posts about AI coding seriously:
- must be using the latest state of the art model from the big US labs
- must be on a three digit USD per month plan
- must be using the latest version of a full major harness like codex, opencode, pi
- agent must have access to linting, compilation tools and IDE feedback
- user must instruct agent to use test driven development and write tests for everything and only consider something done if tests pass
- user must give agent access to relevant documentation, ie by cloning relevant repositories etc
- user must use plan mode and iterate until happy before handing off to agent
- (list is growing every month)
---
if the author of a blog post about AI coding doesn't respect all of these, reading his blog posts is a waste of time because he doesn't follow best practices
As stated in the article, I have unlimited access to multiple frontier models and I use Claude Code, among other harnesses. The rest of your list is not directly addressed in the post, because it is irrelevant to the point being made, but I do all of those things and more. You will note that in the appendix on LLM usage, some of the things I constantly have to correct in LLM-generated code are testing mistakes. And if you care to ask, yes I have context files to address these mistakes, and I iterate them to try to improve the experience.
I would honestly appreciate constructive feedback on LLM usage, because, as I stated, I am constantly having to rework code that LLMs generate for me. The value I get from LLMs is not in code generation.
You're missing the point, and also demonstrating it. This blog isn't about personal experience, and it makes no claims about LLM capability at all. It is simply about whether code, in either volume or quality, should be used as a proof claim.
> LLMs entice us with code too quickly. We are easily led.
Arguably _is_ your argument. That people aren't doing the above and it's causing problems. You probably agree that just spinning up Claude code on the regular plan without doing the above can still generate a fuck-ton of code but that shouldn't be used as evidence either for or against AI effectiveness.
> All my comments are written by AI. Quite meta, isn't it, knowing you came here after I triggered you with my "guys, this is AI generated slop" comment?
Maybe knock it off since the rules changed to not allow AI comments.
You can write cope like this all you want but it doesn't change the fact I can ship a feature in few days that previously would have taken me a few weeks.
> I can
You can because, I guess, your project may have a small scope, few people working on it, no dependencies etc.
I cannot, because each line that I change has an effect in millions of other lines and hundreds of other people, and millions of users.
Different situation, different needs.
Wise words. Exactly what I would expect from a millennium-old elf :P
> each line that I change has an effect in millions of other lines
That sounds like an architectural problem.
It probably is, but show me a large AI assisted or vibe coded code base where this won’t be the case.
Shipping the happy path faster is easy. The part codegen hides is the month after, when the weird edge cases start colliding and you're stuck reading a pile of code no one would have written on purpose.
That trade can still make sense for a throwaway MVP. Most people underestimate how fast the maintenance bill adds up once the first non-obvious bug report lands.
I read that as: you have never debugged a production issue at 3am while losing data and/or revenue.
I don't. I did those things before AI so what's new?
Before, a feature wasn't 10,000 LOC written by an AI that no one except the AI with the context understands. If you review all of that to understand it fully, your productivity gain diminishes. The gain might not vanish entirely, but it is much smaller when you can be woken up at 3am by an incident and have to start reviewing everything.
If I'm the responder it makes no difference to me if an AI wrote it or another worker; it's alien either way.
The Googling you do to get an understanding of something you've never seen before can be done in a fraction of the time by AI.
AI helps in both, so not sure what your point is?
It's like saying "don't write code because we will have to debug it later".
Well, you aren't writing the code; the AI is, and you're letting the AI debug the thing it created in the first place, and it doesn't learn from the experience the same way you would. Hopefully you understand the problem well enough to spec it away in the next iteration. I'm seeing that issue now: people just forget to learn what the issue actually is and keep repeating mistakes that are regurgitated from the training material.
Can you provide some examples?
THIS