Cloudflare claimed they implemented Matrix on Cloudflare workers. They didn't
by JadedBlueEyes
My charitable read on this is that an individual vibe-coded both the post and repository and was able to publish to the Cloudflare blog without it actually being reviewed or vetted. They also are not an engineer and when the agent hallucinated “I have built and tested this and it is production grade,” they took it at face value.
You can tell since the code is in a public repository and not Cloudflare's, which IMO is the big giveaway. This is a lesson for Cloudflare in having appropriate review processes for public comms, and for the individual in not making claims they cannot substantiate or verify independently.
Technical blogs from infrastructure companies used to serve two purposes: demonstrate expertise and build trust. When the posts start overpromising, you lose both.
I don't know enough about this specific implementation to say whether "implemented Matrix" is accurate or marketing stretch. But the pattern of "we did X" blog posts that turn out to be "we did a demo of part of X" is getting tiresome across the industry.
The fix is boring: just be precise about what you built. "We prototyped a Matrix homeserver on Workers with these limitations" is less exciting but doesn't erode trust.
To be fair, the technical posts from Cloudflare are usually very insightful.
Usually. Previously.
I raised this point on a previous Cloudflare blog post - they've turned quite vapid these days. If you pay attention, they're stuffed to the brim with generated text that is sloppy and gives little thought to who the audience for the writing even is in the first place.
Yeah normally the CF blog ranks as one of the best in the world in my book, so a post of lower quality and potentially AI slop really stands out here.
That said, I think a full Matrix server running entirely on CF infrastructure/services is an awesome concept for a CF blog post.
Honestly I wish CF would simply unpublish/retract this blog post, put another engineer on it to help the PM, and spend another couple of weeks polishing the post/code to republish the same blog post.
Even acknowledging the blunder, and the loss of trust that could follow from such sloppy work, would be a minimum.
I am quite shocked by such lack of care, and it does tarnish the reputation of Cloudflare in my eyes :/
That's demonstrating expertise
I'd love to see a root cause analysis post by Cloudflare for this one. The ones they do after outages are always interesting to read. How did this make it into the blog? What is the review process for these posts and what failed this time? What measures will be taken to restore Cloudflare blog's reputation?
I found the source code Jade was referring to, and it looks like the author just noticed this thread: https://github.com/nkuntz1934/matrix-workers/commit/0823b47c...
New damage control commit just came in, removing "production grade" from README, mentioning AI assistance, and fixing the misaligned ASCII diagram. https://github.com/nkuntz1934/matrix-workers/commit/fd412f41...
Should have just nuked the whole thing to be honest, the blog post and the repo.
Agreed. And the diagrams still lack substance, IMO.
Previously someone might sketch out a purposeful one in Monodraw or something (https://monodraw.helftone.com). But only when necessary.
Now Claude shits out this vacuous nonsense by the bucketload, but it's some interconnected boxes in a code block in a README.md, so it must be good.
Your commit is orphaned now; it seems he amended the log to a vague "Clean up code comments" to try to make the purpose less obvious: https://github.com/nkuntz1934/matrix-workers/commit/2d3969dd...
I wouldn't judge if he were to come clean and admit his AI slop. Instead he just makes it worse.
That honestly makes everything so much worse.
Days after the fake story about Cursor building a web browser from scratch with GPT-5.2 was debunked. Disbelief should be the default reaction to stories like this.
Btw, after I wrote that initial article ("Cursor's latest "browser experiment" implied success without evidence"), I gave it my own try to write a browser from scratch with just one agent, using no 3rd party crates, only commonly available system libraries, and just made a Show HN about it: https://news.ycombinator.com/item?id=46779522
The end result: Me and one agent (codex) managed to build something more or less the same as Cursor's "hundreds of agents" running for weeks and producing millions of lines of code, in just 20K LOC (this includes X11, macOS and Windows support). Has --headless, --screenshot, handles scaling, link clicking and scrolling, and can render basic websites mostly fine (like HN) and most others not so fine. Also included CI builds and automatic releases because why not.
The repository itself is here and should run out of the box on most modern OSes, downloads can be found at the Releases page: https://github.com/embedding-shapes/one-agent-one-browser
This project is awesome - it really does render HTML+CSS effectively using 20,000 lines of dependency-free Rust (albeit using system libraries for image rendering and fonts).
Here's a screenshot I took with it: https://bsky.app/profile/simonwillison.net/post/3mdg2oo6bms2...
1 MB binary? That IS very impressive
Releases are here: https://github.com/embedding-shapes/one-agent-one-browser/re...
one-agent-one-browser-Linux-X64 1.14 MB
one-agent-one-browser-macOS-ARM64 1.02 MB
one-agent-one-browser-Windows-X64.exe 847 KB
I wonder whether a Wayland version would be bigger or smaller; right now it's X11 only (so via XWayland on Wayland).
Yes, this is what AI-assisted coding is good at.
A PoC that would usually take a team of engineers weeks to build, because of a lack of cross-disciplinary skills, can now be done by one, at the cost of long-term tech debt from that same lack of cross-disciplinary knowledge.
> Yes, this is what AI-assisted coding is good at.
This is where I wish we spent more energy, figuring out better ways to work with the AI, rather than trying replace some parts wholesale with AI. Wrote a bunch more specifically about that, while I was watching the agent work on the browser itself, here: https://emsh.cat/good-taste/ (it's like a companion-piece I guess)
Would be interested to know what people think of the locking implementation for the net worker pool.
I’m no expert but it seems like a strange choice to me - using a mutex around an MPSC receiver, so whoever locks first gets to block until they get a message.
Is that not introducing unnecessary contention? It wouldn’t be that hard to just retain a sender for each worker and just round robin them
I haven’t looked at the code, but what you’re describing doesn’t sound that bad. If the queue is empty then it doesn’t matter whether a worker is waiting on the lock or waiting on the receiver itself. If the queue is non-empty then whoever has the lock will soon complete the receive and release the lock. It would be better to just use an actual MPMC channel, but if the traffic on the queue isn’t too high then it probably doesn’t make a significant difference. With round robin in contrast, the sender would risk sending a job to a worker that was already busy, unless it took additional measures to avoid that.
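For anyone who hasn't read the code either, here is a minimal sketch of the pattern being described above (not the project's actual implementation; names are made up): each worker locks a shared mutex around the single MPSC receiver just long enough to wait for and pull the next job.

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

type Job = Box<dyn FnOnce() + Send + 'static>;

// Hypothetical worker pool sharing one MPSC receiver behind a mutex.
fn spawn_pool(workers: usize) -> mpsc::Sender<Job> {
    let (tx, rx) = mpsc::channel::<Job>();
    let rx = Arc::new(Mutex::new(rx));
    for _ in 0..workers {
        let rx = Arc::clone(&rx);
        thread::spawn(move || loop {
            // The guard is a temporary dropped at the end of this statement:
            // the lock is held while blocking in recv(), but released before
            // the job actually runs.
            let job = rx.lock().unwrap().recv();
            match job {
                Ok(job) => job(),
                Err(_) => break, // all senders gone; shut this worker down
            }
        });
    }
    tx
}
```

An MPMC channel (e.g. crossbeam-channel, whose receiver is Clone) would remove the mutex entirely, but as noted, whether that matters depends on how hot the queue is.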
That's fairly impressive.
The outrageous part of this is that nowhere in the blog post or the repository does it indicate it's vibe-coded garbage (hopefully I didn't miss it?). You expect some level of bullshit in AI companies' latest vibe-coding announcements. This can be mistaken for a classical blog post.
Although the tell is obvious if you spend one second looking at https://github.com/nkuntz1934/matrix-workers. That misaligned ASCII diagram, damn.
Why is Cloudflare paying this guy again, just to vibe a bunch of garbage without even checking above the fold content in the README?
> Why is Cloudflare paying this guy again
Perhaps usage of AI is a performance target he's being judged against, like at many tech companies today.
> A production-grade Matrix homeserver implementation
It's getting outright frustrating to deal with this.
Fine, random hype-men get hyped about stuff and tweet about it; that doesn't bother me too much.
Huge companies that used to have a lot of goodwill putting out stuff like this, seemingly with absolutely zero review before hitting publish? What are they doing? Has everyone decided to just give up and give in to the slop? We need "engineering" to make a comeback.
We found that reviewing AI code is a bottleneck for performance, so we stopped reviewing it
https://github.com/matrix-org/matrix-rust-sdk/blob/main/CONT... is an example of engineering trying to make a comeback, on the Matrix side at least :)
As long as you take ownership, test your stuff and ensure it actually does what you claim it does, I don't mind if you use LLMs, a book or your dog.
I'm mostly concerned that something we used to see as a part of basic "software engineering" (verify that what you build is actually doing what you think it is) has suddenly made a very quick exit from the scene, in chase of outputting more LOC which is completely backwards.
I review every line of code I generate, and make sure I know enough that I can manually reproduce everything I commit if you take away the LLM assistant tomorrow.
This is also what I ask our engineers to do, but it's getting hard to enforce.
That's the only way, but even doing that I fear I lose some competency.
If you take ownership of the code you submit, then it does not matter if it was inspired by AI: you are responsible from now on, you will be criticized, and possibly you will be expected to maintain it as well.
Vibing is incompatible with engineering and this practice is disgusting and NOT acceptable.
Comment was deleted :(
I get vibe coding a feature or news story or whatnot but how do you go about not even checking if the thing actually works, or fact checking the blog post?
Army brain.
Optics is the only thing that matters; there are people genuinely pushing for vibe coding on production systems. Actually, all of the big companies are doing this and claiming it is MORE safe because it reduces human error.
I'm starting to believe they are all right, actually. Maybe frontier models surpassed most humans, but the bar we should have for humans is really, really low. I genuinely believe most people cannot distinguish LLMs' capabilities from their own, and they are not wrong from the perspective they have.
How could you perceive, out in the wild, an essence that escapes you?
[flagged]
Why?
Vice signaling
Because I am normal.
Are you sure that's "normal"?
Coming to the comments to brag about ignoring something you clearly didn't ignore (given that you're here in the comments) is actually pretty abnormal behavior.
Normal people don't jerk themselves off about being edgy in public. Hope this helps!
no, it's not normal to have that reaction to a TLD. you're a bigot and a freak
it seems as if literally everyone associated with "AI" is a grifter, shill (sorry, "Independent Researcher"), temporarily embarrassed billionaire, or just a flat out scammer
I have yet to see a counter-example
I have a feeling that AI psychosis is more prevalent than we realize, especially in software.
I would not rule out that sometimes they are just incompetent and believe their own story, because they simply don't know any better. Seems this is what's called a "bad apple"?
Everyone (not really, but basically yes) associated with $current_thing is a rent seeking scammer.
Even if Blockchain has tremendous impact, even if transformers are incredible (really) technology, even if NFTs could solve real world problems...you could basically say the same thing and be right, rounding up, 100% of the time, about anything technology related (and everything else as well). This truly is a clown world, but it is illegal to challenge it (or considered bad faith around here)
They did build a browser; it may not be a very compliant or complete browser, or even a useful one, but neither was IE6!
It didn't even compile, which makes me consider whether your comment is just ignorant or outright maliciously misleading
The version that was live on GitHub the day they published their blog post was missing compilation instructions, didn't cleanly compile and didn't pass GitHub Actions CI.
The project itself did compile most of the time it was being developed - the coding agents had been compiling it the whole time they were running on it.
Shortly after the blog post they updated the GitHub repo with compilation instructions and it worked. I took this screenshot with it: https://static.simonwillison.net/static/2026/cursor-simonwil...
The "it didn't even compile" criticism is valid in pointing out that they messed up the initial release, but if you think "it never compiled" you have an incorrect mental model.
Also, didn't it use Servo crates? I don't think you can say 'from scratch' if 60% of the actual work is from an external lib.
If I install an Arch Linux, I don't say I 'installed Linux from scratch'.
It used cssparser and html5ever from the Servo project, and it used the Taffy library for flexbox and CSS grid layout algorithms which isn't officially part of Servo but is used by Servo.
I'd estimate that's a lot less than 60% of the "actual work" though.
My bad, I was misinformed, thanks for correcting me; I thought it used the renderer, not just the parser. That's honestly way better than what I thought.
I believe it was basically a broken, non-functioning wrapper around Servo internals. That’s what I’d expect from a high schooler who says “i wrote a web browser”, but not what I’d expect from a multi-billion dollar corporation.
They aren't really a multi-billion dollar corporation. A lot of it is them just pumping up their valuation. Stuff like this proves that in a lot of ways.
They are running > 300 DC's...
Talking about Cursor not Cloudflare.
My understanding is that it doesn't even compile if you clone the repo.
It does now. It didn't on initial announcement day.
It didn't and it had some pretty weird commit history and emails. Overall not a super great sign...
They didn't build a browser from scratch.
It is worrying to see a major vendor release code that does not actually work just to sell a new product. When companies pretend that complex engineering is easy it makes it very hard for the rest of us to explain why building safe software takes time. This kind of behavior erodes the trust that we place in their platform.
The real concern is that we've been doing this race to the bottom for so long that it's becoming almost trivial to explain why they are wrong. This oversimplification existed before AI coding, and it's the dream that AI coding took advantage of. But this market for lemons got too greedy.
Bloody hell that's embarrassing, for both Cloudflare and the blog author. Did he not have anyone review it before publishing?
So many failures coming out of Cloudflare these days, feels like they peaked a while ago and are slowly declining into incompetence.
> So many failures coming out of Cloudflare these days
I wonder if there's a particular new fad that could be causing this
Hubris?
That the original post to HN linked in the blog was done on a throwaway kind of implies a level of awareness (on the part of the dev) that the code/claims were rubbish :)
Not to mention they commented on their own post, pretending to ask a question..
Embarrassing, coming from a company like Cloudflare
Ahh, so that is what "shipping at the speed of inference" means
Honestly I like Cloudflare's CDN and DNS but beyond that I don't really trust much else from them. In the past though their blog has been one of the best in the space and the information has been pretty useful, almost being a gold standard for postmortems, but this seems especially bad. Definitely out of line compared to the rest of their posts. And with the recent Cursor debacle this doesn't help. I also don't really get their current obsession with porting every piece of software on Earth to Workers recently...
>I also don't really get their current obsession with porting every piece of software on Earth to Workers recently...
Because their CDN/DNS is excellent software, but it's not a massive moat. Workers, on the other hand, is.
It's like the difference between running something on Kubernetes vs Lambdas. With one you can somewhat pivot between vendors; the other requires massive rewrites, which means most executives won't transition away from it due to the high potential for failure.
Yeah, I like that I can just upload static HTML and host it there for free, but anything more, I dunno. It's all about vendor lock-in with their products.
I essentially just use them for this and domain DNS/Registrar as their pricing is pretty good for that.
I guess it depends on the author. Seems like it is the first post for this author, and given the reception, maybe the last one...
“This architecture shifts the paradigm for self-hosting. It turns "running a server" from a chore into a utility. You get the sovereignty of owning your data without the burden of owning the infrastructure”
Yeah, this is just shameful. Obviously written by an LLM with zero oversight. If this engineer doesn't get fired I'll lose all trust in Cloudflare.
I've never thought someone should be fired based on a blog post but man, this comes real close.
This appears to be the author's first blog post for Cloudflare, Cloudflare being the author's first post-military employer. For his sake and Cloudflare's, this deserves an AAR that I hope becomes a teachable moment for both.
Did they really vibe code a partial implementation and blog about it?
That's one way to destroy the CF blog credibility!
In 2026, you should be implementing MLS instead of Matrix.
It’s not a working or complete implementation, but…
But according to the README, it is production grade! Presumably "production" in this case is an isolated proof of concept?
Well that is an interesting idea and proof of concept. I agree that the post is not the best I have seen from Cloudflare, and it shouldn't suggest that the code is production ready, but it is an interesting use-case.
The developer just "cleaned up the code comments", i.e. they removed all TODOs from the code: https://github.com/nkuntz1934/matrix-workers/commit/2d3969dd...
Professionalism at its finest!
LLMs made them twice as efficient: with just one release, they're burning tokens and their reputation.
It's kinda mindblowing. What even is the purpose of this? It's not like this is some post on the vibecoding subreddit, this is fricken Cloudflare. Like... What the hell is going on in there?
Oh wow I'm at a loss for words.
To the author: see my comment at https://news.ycombinator.com/item?id=46782174, please also clean up that misaligned ASCII diagram at the top of the README, it's a dead tell.
Yeah deleting the TODOs like that is honestly a worse look.
I also use this as a simple heuristic:
https://github.com/nkuntz1934/matrix-workers/commits/main/
There exist only two commits. I've never seen a "real" project that looks like this.
I think that's a reasonable heuristic, but I have projects where I primarily commit to an internal Gitea instance, and then sometimes commit to a public GitHub repo. I don't want people to see me stumbling around in my own code until I think it's somewhat clean.
I have a similar process. Internal repo where work gets done. External repo that only gets each release.
To be honest sometimes on my hobby project I don’t commit anything in the beginning (I know not great strategy) and then just dump everything in one large commit.
I’ve also been guilty of plugging at something, and squashing it all before publishing for the first time because I look at the log and I go “no way I can release this, or untangle it into any sort of usefulness”.
The repository is less than one week old though; having only the initial commit wouldn't shock me right away.
That is totally fine... as long as you don't call it 'production grade'. I wouldn't call anything production grade that hasn't actually spent time (more than a week!) in actual production.
But if the initial commit contains the finished project then that suggests that either it was developed without version control, or that the history has deliberately been hidden.
It was/is quite common for corporate projects that become open-source to be born as part of an internal repository/monorepo, and when the decision is made to make them open-source, the initial open source commit is just a dump of the files in a snapshotted public-ready state, rather than tracking the internal-repo history (which, even with tooling to rebase partial history, would be immensely harder to audit that internal information wasn't improperly released).
So I wouldn't use the single-commit as a signal indicating AI-generated code. In this case, there are plenty of other signals that this was AI-generated code :)
Incoming force push to rewrite the history. Git doesn't lie!
I wouldn't put it past them...
I wouldn't put it in past tense...
Here's the post on LinkedIn
https://www.linkedin.com/posts/nick-kuntz-61551869_building-...
https://www.linkedin.com/in/nick-kuntz-61551869/
DevSecOps Engineer United States Army Special Operations Command · Full-time
Jun 2022 - Jul 2025 · 3 yrs 2 mos
Honestly, it is a little scary to see someone with a serious DevSecOps background ship an AI project that looks this sloppy and unreviewed. It makes you question how much rigor and code quality made it into their earlier "mission critical" engineering work.
Tbf, there is no one with a ‘serious DevSecOps background’. It’s an incredibly strong hint that the person is largely a goof.
Maybe, but the group of people they are/were working with are Extremely Serious, and Not Goofs.
This person was in communications of the 160th Special Operations Aviation Regiment, the group that just flew helicopters into Venezuela. ... And it looks like a very unusual connection to Delta Force.
I don't know what's more embarrassing: the deed itself, not recognizing the bullshit produced, or the hasty attempt at a cover-up. Not a good look for Cloudflare. Does nobody read the content they put out? You can just pretend to have done something and they will release it on their blog, yikes.
Wow this is definitely not a software engineer. Hmm I wonder if Git stores history...
Reminds me of Cloudflare's OAuth library for Workers.
>Claude's output was thoroughly reviewed by Cloudflare engineers with careful attention paid to security
>To emphasize, this is not "vibe coded".
>Every line was thoroughly reviewed and cross-referenced with relevant RFCs, by security experts with previous experience with those RFCs.
...Some time later...
What is the learning here? There were humans involved in every step.
Things built with security in mind are not invulnerable, human written or otherwise.
Taking a best-faith approach here, I think it's indicative of a broader issue, which is that code reviewers can easily get "tunnel vision" where the focus shifts to reviewing each line of code, rather than necessarily cross-referencing against both small details and highly-salient "gotchas" of the specification/story/RFC, and ensuring that those details are not missing from the code.
This applies whether the code is written is by a human or AI, and also whether the code is reviewed by a human or AI.
Is a Github Copilot auto-reviewer going to click two levels deep into the Slack links that are provided as a motivating reference in the user story that led to the PR that's being reviewed? Or read relevant RFCs? (And does it even have permission to do all this?)
And would you even do this, as the code reviewer? Or will you just make sure the code makes sense, is maintainable, and doesn't break the architecture?
This all leads to a conclusion that software engineering isn't getting replaced by AI any time soon. Someone needs to be there to figure out what context is relevant when things go wrong, because they inevitably will.
This is especially true if the marketing team claims that humans were validating every step, but the actual humans did not exist or did no such thing.
If a marketer claims something, it is safe to assume the claim is at best 'technically true'. Only if an actual engineer backs the claim it can start to mean something.
the problem with "AI" is that by the very way it was trained: it produces plausible looking code
so the "reviewing" process will be looking for the needles in the haystack
when you have no understanding, or mental model of how it works, because there isn't one
it's a recipe for disaster for anything other than trivial projects
The learning is "they lied". After all, apart from marketing materials making a claim, where is the evidence?
Wait, we think they’re lying because an advisory was eventually found? We think that should be impossible with people involved?
Reading the necessary RFC is table stakes. Instead we got this:
>"NOOOOOOOO!!!! You can't just use an LLM to write an auth library!"
>"haha gpus go brrr"
(Those lines remain in the readme, even now: https://github.com/cloudflare/workers-oauth-provider?tab=rea...)
If you're asking in good faith,
> Every line was thoroughly reviewed and cross-referenced with relevant RFCs
The issue in the CVE comes from direct contradiction of the RFC. The RFC says you MUST check redirect uris (and, as anyone who's ever worked with oauth knows, all the functionality around redirect uris is a staple of how oauth works in the first place -- this isn't some obscure edge case). They didn't make a mistake, they simply did not implement this part of the spec.
When they said every line was "thoroughly reviewed" and "cross referenced", yes, they lied.
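For reference, the check the RFC mandates is not subtle. A minimal sketch of exact-match redirect_uri validation per RFC 6749 §3.1.2.3 (illustrative only; this is not the library's code, which is TypeScript):

```rust
// Illustrative sketch: the authorization server must compare the redirect_uri
// from the request against the client's registered URIs using simple string
// comparison, and reject the request if none match.
fn redirect_uri_allowed(registered: &[String], requested: &str) -> bool {
    registered.iter().any(|uri| uri.as_str() == requested)
}

fn main() {
    let registered = vec!["https://app.example.com/callback".to_string()];
    assert!(redirect_uri_allowed(&registered, "https://app.example.com/callback"));
    assert!(!redirect_uri_allowed(&registered, "https://evil.example.net/cb"));
}
```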
I mean, you can't review or cross reference something that isn't there... So interpreting in good faith, technically, maybe they just forgot to also check for completeness? /s
To me it's likely, given the extremely rudimentary nature of that issue.
No more vulnerabilities then I guess!
Hilarious. Judging by the username, it's the same person who wrote the slop blog post, too.
they should have at least rebased it and removed it from the git history
Um, what's up with companies trying to recreate really big projects using vibe coding?
Like okay, I am an indie dev; if I create a vibe-coded project, I create it for fun (I burn other people's VC money doing so, tho, but I would consider that actually positive).
But what's up with large companies, who can actually freaking sponsor a human to do the work, having AI agents vibe code it?
First it was Cursor, who spent almost $3-5 million (just came here after watching a good YT video about it), and now Cloudflare.
Like, large corpos, if you are so interested in burning money, at least burn it on something new (perhaps that's a fair critique of the browser thing by Cursor, but yeah).
I am recently in touch with a guy from the UK (who sadly became disabled due to an accident when he was young) who is a VPS provider, got really impacted by WHMCS's bill increase, and migrated to HostBill at 1200 euros. Show him some HN love (https://xhosts.uk/)
I had vibe-coded a Golang alternative. Currently running it in the background to make it better for his use cases, and probably gonna open source it.
The thing with WHMCS alternatives is that I made one using gVisor+tmate, but most should/have to build on top of KVM/QEMU directly. I do feel that WHMCS is definitely one of the most rent-seeking projects, and actually writing a Golang alternative to it makes sense (at least to me).
Can there not be an AI agent which can freaking detect what people are being charged for (unfairly) online, so these large companies who want to build things can create open-source alternatives to it?
I mean, I am not saying that it stops being slop, but it just feels like a better use of this tech than creating complete spaghetti slop nobody wants. I mean, maybe it was an experiment, but now it has failed (Cursor and this).
A bit ironic, because I contacted the xhosts.uk provider since I wanted to create a Cloudflare Tunnels alternative, after seeing 12% of the internet casually going through CF and seeing myself very heavily reliant on it for my projects; I wasn't really happy about my reliance on CF Tunnels, I guess.
nkuntz1934 Senior Engineering TPM @ Cloudflare
Of course, this is done by a manager. Classic corporate mindset: I can do what these smelly nerds do every day, hold my beer.
He doesn't even know how git works, huh?
What a clown.
A TPM isn't a manager. It's basically a PM, but they're (supposedly) technical.
My guess: a program manager high up in the engineering org, not a people manager. But suggesting that a high-up program manager doesn't direct people is also wrong. TPMs "make the wheels go 'round" in engineering. They very much control the fate of other individuals, and often whole teams, so their integrity and capability both matter considerably, which means they should not be passing themselves off as coders or their individual code projects as production-ready.
Does TPM not mean Technical Program Manager or Technical Product Manager?
Product Managers are generally not "Senior Engineering," though I suppose it is possible. IMO, it's a whole lot more likely a program manager than a product manager.
Probably, but that isn't a management role, they're not a manager, even if the job title includes the word manager.
Comment was deleted :(
[dead]
[flagged]
everybody is vibing everything now, code, messages, reviews, everything
Kind reminder that the author of that post is a human who will be affected by all the hate. Is it really worth it?
I agree that the post is wanting, but the idea itself is interesting: running a Matrix homeserver on workerd.
I also can't help but feel bad for the author. However, when the first line of the README is
> A production-grade Matrix homeserver
this is engineering malpractice. It is also unethical to present the work of an LLM as your own.
> Is it really worth it?
Unequivocally yes.
Fraud is fraud, and if your first instinct is to defend it in this manner, check yourself in the mirror.
Comment was deleted :(
[dead]
[flagged]
This is a bit more than overselling a proof of concept. He made claims that were not correct, and presented LLM-generated code as a point of pride. And not on his own blog, but on a company's website.
He's emblematic of the era we now live in. Vibe coded projects that the "developer" didn't learn anything from, posted using LLMs. People have zero shame, zero curiosity, zero desire in learning and understanding what they're working on.
Also it doesn't make sense to escalate an interaction by swearing at a person and simultaneously asking them to calm down.
I’m plenty calm. There’s just nothing to debate here: the blog post and repo are a conscious, deliberate, and egregious misrepresentation of fact.
I would absolutely say exactly the same things to the author’s face as I’m saying right now. I would never work for a company that condones this in a million years, as a matter of principle.
And I didn't say that you cannot criticise it (or did you think I was talking to you personally?).
I just see a lot of comments from people who just seem happy to see that they can contribute to ruining someone else's day (or more).
> or did you think I was talking to you personally?
You wrote,
> May I kindly ask you to calm the fuck down?
So yes, a reasonable person would conclude that you were talking to them.
> I just see a lot of comments from people who just seem happy to see that they can contribute to ruining someone else's day (or more).
Which comments do you see doing that? Exactly?
In a real "engineering" role, this person would be stripped of their license for stamping "production grade" on a bunch of AI slop.
That doesn't exist in our trade, so yeah, public shaming is the next best thing. I sincerely hope links to this incident will haunt him every time someone googles his name forevermore.
There was a piece a little while back, most probably from Cory Doctorow, about how some humans have already become Reverse Centaurs:
Controlled by a machine and only there to put their names and reputations on the line when the machine messes up.
Maybe this applies more to a writer having to generate 20 articles per hour in some journalism sweatshop, pressured to push out anything that will catch the winds of SEO augmented news, but I would not discount the level of pressure that the author of the blog post was put under to produce something, anything...
Based on the published profile, I strongly suspect that this person is not paid that well at all. You are not looking at a FAANG kind of deal here, most certainly.
So maybe spare one second of thought for that future where many many folks are just there to be burnt up in some cancellation machine whilst profit gets accumulated elsewhere...
As you say, it's pretty hard to say that the average quality of software engineering makes it deserve the word "engineering" at all. Most software is bad across the board, and developers on average get pretty good salaries for... whatever they bring to the world.
Still I don't think that some random employee deserves to be harassed and publicly shamed for a bad blog post.
In other industries this would be a gross ethical issue and potentially a legal one.
In this industry, public criticism for public fraudulence is "harassment", I guess? C'mon, man.
I think it's a pretty big deal for a major company to put out a blog post about something that is "production grade" and pushing customers to use it without actually making it production grade.
> They start by saying they "wanted to see if it was possible"
That's a generous read. From the actual article:
> We wanted to see if we could eliminate that tax entirely. Spoiler: We could.
Sure it's a bad post. But the guy did not make a nazi salute at a meeting...
Comment was deleted :(
We are getting tired of being lied to.
The person who wrote the article probably does not benefit from lying, I don't think it was the intent. It is a bad post, don't get me wrong, but maybe there is no need to insult the author just for that.
When called out, they deleted the TODOs. They didn't implement them, they didn't fix the security problems, they just tried to cover it up. So no, at this point the dishonesty is deliberate.