I think "AI software engineer" is the wrong target for the current generation of models, though it's good for generating buzz.
I'm working on an open-source, terminal-based tool that uses agents to build complex software (https://github.com/plandex-ai/plandex). I've found that results are generally much better when you target 80-90% of a task and finish the rest manually, rather than burning lots of time and tokens trying to get the LLM to do it all end-to-end.
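The "target 80-90%, then hand off" idea can be expressed as a tiny control loop. This is a hypothetical sketch, not Plandex's actual implementation: `draft_step` is a stub standing in for a real LLM call, and the task list, names, and threshold are invented for illustration.

```python
# Hypothetical sketch of the 80-90% workflow: iterate the agent only until
# a target fraction of subtasks is done, then return the rest for a human.

def draft_step(tasks):
    """Stub for an LLM call: completes the first unfinished subtask."""
    for t in tasks:
        if not t["done"]:
            t["done"] = True
            return t["name"]
    return None

def run_agent(tasks, target=0.8, max_steps=20):
    """Run the agent until `target` fraction of subtasks are done (or the
    step budget runs out), then hand the remainder back for manual work."""
    for _ in range(max_steps):
        done = sum(t["done"] for t in tasks) / len(tasks)
        if done >= target:
            break
        draft_step(tasks)
    return [t["name"] for t in tasks if not t["done"]]

tasks = [{"name": f"subtask-{i}", "done": False} for i in range(10)]
leftover = run_agent(tasks, target=0.8)
print(leftover)  # the last ~20% is left for the human to finish
```

The point of the explicit `target` parameter is that stopping early is a deliberate budget decision, not a failure mode: the loop never spends steps chasing the hardest final subtasks.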
But is it a 10x engineer? Seriously though, we're already seeing the role of the software engineer change. How will an AI engineer compare to a senior+ engineer who uses AI?
The credulousness of “engineers” is embarrassing. The concept of a “fully autonomous” AI software engineer is incoherent in and of itself. That means it is self-debunking.
It’s just a tool that does mysterious things. It cannot take responsibility for itself or its own work. The software engineer who uses it must be able to supervise and test what it does. Yet modern software engineers have systematically kneecapped the culture of career testers, in favor of cutesy output checking that isn’t going to help much.
You need solid skeptical critical thinking. Good luck.