hckrnws
Ask HN: Is it time to fork HN into AI/LLM and "Everything else/other?"
by bookofjoe
I would very much like to enjoy HN the way I did years ago, as a place where I'd discover things that I never otherwise would have come across.
The increasing AI/LLM domination of the site has made it much less appealing to me.
I have seen this question asked on subreddits, not about AI but about other topics that some people dislike.
They always seem to take the form of "Should we divide this group into A and B? A stays here, B goes over there, and that way everybody is happy."
Invariably the person who proposes this wants to remain in group A and will not be a participant in group B.
To me this seems like the subtext is "Those people are not welcome here, they are not like us. It's not like we have anything against them, we just don't want them ramming it down our throats"
Anyone is free to make a website with whatever content they want; they can invite people to it and grow their own community. Directing a community to divide in order to remove an element you dislike is an attempt to appropriate the established community.
It must have been somewhat the same when chess engines started to beat human players. The chess community must have been fairly divided about the usefulness of such a tool. After a while things settled down, and now all players use the tool in some way or another.
Some chess players benefited more from the tool than others. I always analyze my games carefully with an engine afterwards. In less than 10 years I managed to get from zero to almost master level, and I attribute that to the extensive engine analysis I did on my games.
The user needs to know how to use the engine, whether LLM or chess engine: when it makes sense to use it, what the tool's shortcomings are, and so on.
LLMs are game changers, and AI's ability to distinguish signal from noise is marvelous. Whether it will be a game changer the way it is now for chess, a very narrow game compared to everything else, remains to be seen.
Hacker News, like everything in tech, is susceptible to hype. Today it's AI; a few years ago it was Bitcoin.
I do think it's worthwhile to occasionally have a discussion about what content we want to see, and if a particular topic is getting too much attention.
It's also totally reasonable for a group of people to not want their agenda hijacked.
So, IMO, let the discussion continue. Let's see what comes out of it.
I should add: Many years ago I used to read a news site that was modeled after slashdot. One day the person running it decided to switch it to be community-moderated.
Every day it was the same discussion over again, from someone who didn't bother to do a Google search or look at what was posted the day prior. After a week or so of seeing the same discussion over and over again, I stopped reading the news site.
Needless to say, it's important to occasionally have discussions like this. I also think we under-appreciate the amount of moderation that goes on here. Sometimes I look at the "new" feed and it is just loaded with lots and lots of nonsense, so I get that someone has to put their finger on the scale to keep the quality up.
To begin with, this would be a non-issue if HN just introduced something like user-provided tags that users can vote for/against (to circumvent abuse).
Then the people wanting to filter "x" could just do it via simple Greasemonkey scripts, or natively if HN supported it.
Sure, it wouldn't be perfect, but it doesn't have to be.
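The Greasemonkey-script route mentioned above is only a few lines. A hedged sketch, assuming HN's current markup (`tr.athing` rows with a `.titleline` span) and a made-up default keyword list:

```javascript
// ==UserScript==
// @name     HN topic filter (sketch)
// @match    https://news.ycombinator.com/*
// ==/UserScript==

// Pure helper: does a story title match any excluded term?
// Terms match as whole words, so "ai" doesn't hide "Maintainers".
function matchesExcluded(title, terms) {
  return terms.some((t) =>
    new RegExp(`\\b${t.replace(/[.*+?^${}()|[\]\\]/g, "\\$&")}\\b`, "i").test(title)
  );
}

// DOM part (runs only in the browser): hide matching story rows
// plus the following "subtext" row with points/comments.
if (typeof document !== "undefined" && document.querySelectorAll) {
  const excluded = ["llm", "ai", "agentic"]; // hypothetical defaults
  for (const row of document.querySelectorAll("tr.athing")) {
    const title = row.querySelector(".titleline")?.innerText ?? "";
    if (matchesExcluded(title, excluded)) {
      row.style.display = "none";
      row.nextElementSibling?.style.setProperty("display", "none");
    }
  }
}
```

This is the "not perfect" version the comment anticipates: it only sees titles, so an AI story with no AI buzzword in its headline sails straight through.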
Most platforms don't grow this feature because they can benefit from redirecting user energy into places that the platform is choosing. Or some vocal minority of the user base benefits from redirecting the platform to a place of their choosing.
Similar to nest usurpation in eusocial insects, this is by definition parasitism when the energy redirection is unwanted or unavoidable.
In the specific case of AI it's way worse than the usual suspects, where everyone is affected and so everyone has to have some opinion (looking at you, politics). Even a rant about how much you hate AI is directly feeding it in at least 3 ways: first there's the raw data, then there's the free-QA aspect, then there's the free-advertisement aspect when others speak up to disagree with your rant. So yeah, even people who like some of the content can quickly start to feel hijacked.
No. HN is good as it is and I find it disrespectful when newcomers are demanding changes like this. There's a good reason the forum has stayed the same for almost 20 years.
Topic tags wouldn't kill HN.
I highly doubt this is going to happen anyway.
HN has many VCs and startups, and HN itself is backed by a VC firm, so I highly doubt AI news is going anywhere: it benefits the many YC startups currently leveraging this hype, as well as the VCs and other investors shoving cash into it.
Additionally, there are also a ton of vibe coders and OpenAI / Meta / Google folks here interested in the topic.
I'm afraid the only solution is for people to ride the cycle until it either fizzles out or morphs into something else.
> I have seen this question asked on subreddits, "Should we divide this group into A and B, A stays here and B goes over there and that way everybody is happy" To me this seems like the subtext is "Those people are not welcome here"
I don't disagree with this observation about Reddit. However, I feel HN readers are more topic-oriented. Folks really do come to HN to read the articles and then maybe get drawn into a discussion.
I grant there are some topics here that tend to be more engagement driven but on balance I think the above holds.
> Folks really do come to HN to read the articles and then maybe get drawn into a discussion.
based on the number of comments i see that are oblivious to the actual content of the articles, i'm pretty sure the user flow is "Folks come to HN to read headlines and have a conversation, and then maybe get drawn into reading an article"
Those comments can't reflect the people who are drawn in by the article but don't comment. Upvote counts hint that that's a significant number.
Past that, I don't see non-reading commenters being a dominant presence. Some topics draw a few more than normal but that's the worst of it.
Really? I feel like P(didn't read the article | wrote a comment) is quite high, personally. Any thread with > 100 comments seems to be full of these posters.
It is also possible to appropriate an established community by bringing in new members over time with views opposing the founding principles. This is much easier if the leadership preaches tolerance.
This is one of those things that is kind of hard to say without people getting triggered because of negative stereotypes but sometimes you have to stand up for principles and kick people out of social groups to keep a good thing going.
For myself, I often want to be able to just "shift views" within an existing community, rather than wholesale move somewhere else that fits better.
I find I can do that with granular enough subreddits, or with the (maybe old) Twitter feature where you could group the people you follow into lists and see multiple "homepages".
For me this has solved the issue without dividing the community, which, at a practical level at least, can be tricky.
I've been exploring how to achieve this effect "on top" of HN lately, rather than by controlling who I follow, by popping a very simple AI filter on it that re-ranks things for me. I've found it quite satisfying, but I'm not sure what the ultimate value/use case might be.
I don't think the poster has the power to split HN in twain.
I don't think the poster believes some kind of democracy could bring this about.
I do believe that by entertaining the idea, the subsequent discussion will be useful for the moderators to get a feel for what their userbase thinks of the current state of things.
From my understanding, the soul of HN, what makes it what it is, is the moderation; having discussions on issues like this is an efficient way to signal to them.
> To me this seems like the subtext is "Those people are not welcome here, they are not like us. It's not like we have anything against them, we just don't want them ramming it down our throats"
I am truly tired of AI being rammed down my throat, not just via the tech news, but in article content (slop), in un-asked-for tech product features, and at my own tech job. The solution is not to divide the community and make people unwelcome, but to provide at least some minimal set of filters and ways to opt out of the hype frenzy. I don't want people to feel unwelcome, but I do wish there was a way to turn the AI firehose off.
Comment was deleted :(
Who’s forcing you to read the AI articles?
Some people come to HN for interesting articles; many lists exist if you want to know what I mean.
If, say, a third to two thirds of the articles on any given front page, for months to years at a time, don't fit that description, can you see how one's ability to find what they are looking for gets hampered?
Like yes, you can grow nice flowers on the beautiful fertile soil there; it just sucks that we first need to get rid of the protected grasslands harboring endangered species on top of it.
Who are you to say whether an article is interesting to anyone else but yourself?
> To me this seems like the subtext is "Those people are not welcome here, they are not like us. It's not like we have anything against them, we just don't want them ramming it down our throats"
It could just as easily be "I don't feel like there is a place here for me anymore and I wish I had another place to go"
In my experience that is not what people mean.
People with that sentiment ask about what alternative places exist, some of them make their own places.
My post above mentioned something I notice on Reddit. I hardly ever visit Reddit these days. It doesn't really feel like the place for me now. I am not posting this comment on Reddit.
> People with that sentiment ask about what alternative places exist, some of them make their own places
I don't think that's overall very true
Most of those people are just lonely and isolated, and that's a big part of why we are living in what people are calling a "loneliness epidemic"
It's easier than ever to make a new niche area. It's more difficult than ever to get your niche area discovered by others, because you are drowned out by the noise
It feels quite hopeless for many people in my experience
This honestly reminds me of the crypto days from 2017-2021ish.
Literally 80% of the posts were about crypto and how we were going to experience some ground shattering revolution. There were so many posts about how all the topics are about crypto and how it is annoying.
Ultimately, all that noise and the billions of dollars poured into that turned into a meme if we're honest. Most people just buy/flip crypto or hold BTC that they'll sell when they double it after a year.
AI in LLM form is at least useful in many ways and in front of millions of people, without any rugpulls and other shit, but due to LLMs' inherent limitations (doesn't matter how much Python it writes and executes, half the time or more it's wrong for any actual/meaningful work) I think the hype will settle in the next couple of years.
> Anyone is free to make a website with whatever content they want, they can invite people to it and grow your own community.
This is very hard to do. But hey, I'll give it a try.
Starting now a new community for AI-assisted coding: https://kraa.io/vibecoding
> product building
> vibecoding
These should not be deemed equivalent.
It's about a topic not the people.
So say the people who say "Hate the sin not the sinner" when they talk about homosexuality.
So say the "bigots" who, for example, want sports news separated from regular news because they don't find football so interesting.
Yes, it’s always about the people. Adding a “small inconvenience” to people with a different perspective is ok, right? Just visit two sites if you want your AI news.
How does this sound? It’s about a religion not the people.
I built you this: https://tools.simonwillison.net/hacker-news-filtered
It shows you the Hacker News page with ai and llm stories filtered out.
You can change the exclusion terms and save your changes in localStorage.
o3 knocked it out for me in a couple of minutes: https://chatgpt.com/share/68766f42-1ec8-8006-8187-406ef452e0...
Initial prompt was:
Build a web tool that displays the Hacker News homepage (fetched from the Algolia API) but filters out specific search terms, default to "llm, ai" in a box at the top but the user can change that list, it is stored in localstorage. Don't use React.
Then four follow-ups:
1. Rename to "Hacker News, filtered" and add a clear label that shows that the terms will be excluded
2. Turn the username into a link to https://news.ycombinator.com/user?id=xxx - include the comment count, which is in the num_comments key
3. The text "392 comments" should be the link, do not have a separate thread link
4. Add a tooltip to "1 day ago" that shows the full value from created_at
I updated it to fetch 200 stories instead of 30, so even after filtering you still get hopefully 140+ things to read.
https://github.com/simonw/tools/commit/ccde4586a1d95ce9f5615...
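For anyone who wants to replicate this without the hosted page, the same Algolia-backed fetch-and-filter fits in a few lines. A sketch: the endpoint is the real Algolia HN API, but `filterStories` and its defaults are my own illustration, and the naive substring matching here is deliberately the weak point the rest of the thread pokes at:

```javascript
// Keep only stories whose titles avoid every excluded term
// (naive case-insensitive substring matching).
function filterStories(stories, excludedTerms) {
  const terms = excludedTerms.map((t) => t.trim().toLowerCase()).filter(Boolean);
  return stories.filter(
    (s) => !terms.some((t) => (s.title ?? "").toLowerCase().includes(t))
  );
}

// Fetch the current front page (200 stories, like the updated tool)
// and filter it. Requires fetch (browsers, Node 18+).
async function loadFilteredFrontPage(excluded = ["llm", "ai"]) {
  const url =
    "https://hn.algolia.com/api/v1/search?tags=front_page&hitsPerPage=200";
  const res = await fetch(url);
  const { hits } = await res.json();
  return filterStories(hits, excluded);
}
```

Note the trade-off: substring matching on "ai" also hides titles like "Debian maintainers meet", while whole-word matching misses stories with no buzzword in the headline at all; that gap is why several commenters suggest semantic filtering instead.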
I’ve built a site that does the same sort of exclusion filtering, with a lot more bells and whistles. Very much in the spirit of “what if HN stayed HN, but had actual, very useful features.”
Here’s an “anti-ai” timeline filter:
https://hcker.news/?filter=top30&exclude=llm%2C+vibe%2C+open...
I’m not using the Algolia API, I ingest the hn fire hose on my own server so the filtering is very fast.
Top story: Kiro: new agentic IDE
Just add “agent” to the search box. It’s saved in local storage.
I just added "agent" to the default exclusion list.
Still seeing `Kiro: A new agentic IDE` BTW.
If the filters UI at the top shows "llm, ai" instead of "llm, ai, agent" then you probably have that previous search saved in localStorage.
Huge respect for all your articles and work on llms, but this example should have been using AI to create a tool that uses AI to intelligently filter hacker news :)
"Onedrive is slow on Linux but fast with a “Windows” user-agent"
"Agents raid home of fired Florida data scientist who built Covid-19 dashboard"
"Confessions of an ex-TSA agent"
"Terrible real estate agent photographs"
etc etc
See comment here: https://news.ycombinator.com/item?id=44571740#44572312
I'm not sure what I'm supposed to see there. From my point of view, this is a low-effort, vibe-coded app that doesn't solve the problem the OP had; it solves a different one. You'd need to at least train a small classifier based on something like BERT to actually address the issue, and what I showed in my comment demonstrates why.
An interesting example of both LLMs' strengths and weaknesses. It is strong because you wrote a useful tool in a few minutes. It is weak because this tool is strongly coupled to the problem: filtering HN. It's an example of the more general problem of people wanting to control what they see. This has existed at least since the classic usenet "killfiles", but is an area that, I believe, has been ripe for a comprehensive local solution for some time.
OTOH, narrow solutions validate the broader solution, especially if there are a lot of them. Although in that case you invite a ton of "momentum" issues with ingrained user bases (and heated advocacy), hopelessly incompatible data models and/or UX models, and so on. It's an interesting world (in the Chinese curse sense) where such tools can be trivially created. It's not clear to me that fitness selection will work to clean up the landscape once it's made.
Not sure what a local solution would look like when what you see is on websites; maybe a browser extension? We just made a similar reskin as a website, and it works great, but it's ultimately another site you have to go to. It's another narrow solution with some variation (we use AI to do the ranking rather than keyword filtering), but I'm interested in the form factors that might give maximal control to a user.
It is strong because you believed it created something of value. Did it work? Maybe. But regardless of whether it worked, you still believed in the value, and that is the "power" of AIs right now: humans believe that they create value.
Probably would work better as a userscript, so you don't have to rely on a random personal website never going down just to use HN. I don't have a ChatGPT account but I am curious as to if it could do that automatically too.
Interesting idea; we could consider that as an alternative implementation of https://www.hackernews.coffee/. While we are planning on making it open-source, a userscript would be an even more robust solution, although it would need a personal API key to one of the services.
This is neat, but with the given filters you autoselected (just the phrases "llm" and "ai"), of the 14 stories I see when I visit the page, 4 of them (more than 25%!) are still stories about AI. (At least one of them can't be identified by this kind of filtering because it doesn't use any AI-related words in its headline, arguably maybe two.)
people have said it elsewhere, but I think you might have to fight fire with fire if you want semantic filtering.
> of the 14 stories I see when I visit the page, 4 of them (more than 25%!)
Llm maths? ;)
25% of 14 is 3.5. 4 is more than 3.5. Ask Grok if you still don't get it.
There's a special kind of irony to use AI to help out the people who hate AI.
It's not hypocrisy or anything negative like that, but I do find it amusing for some reason.
> to help out the people who hate AI.
Was it? I feel like it was clearly meant to be smug and inflammatory rather than useful in any meaningful way.
I was going for smug, inflammatory and useful at the same time.
There is an even more special kind of irony to see it failing as the top ranked story now is "Kiro: A new agentic IDE"
I know but the irony stands. We will get used to people getting embarrassed by AI results.
This seems like exactly the type of problem human-written filtering systems fall into as well.
human-written filtering systems don't brag about having solved a problem in 2 minutes and then fail.
This sounds more like a complaint about the human author, than the system itself.
Not at all, simonw's work is fantastic. But it was a funny #fail.
[dead]
I also built https://lessnews.dev (HN filtered by webdev links)
One decision I had to make was whether the site should update in real time or be curated only. Eventually, I chose the latter because my personal goal is not to read every new link, but to read a few and understand them well.
I think there is a fundamental disconnect in this response. What the user is asking for is a procedural and cultural change. What you’ve come up with is a technical solution that only mimics a cultural change.
I don’t think it’s wrong, but I also don’t think we can really “AI generate” our way into better communities.
Simonw’s response is the right response. You should not bend the community to your will simply because you do not like the majority of the posts. Obviously many people do like those posts, as evidenced by them making the front page. Instead, find ways to avoid the topics you do not desire to read without forcing your will on people who are happy with the current state.
Let me stop folks early, don’t make comparisons to politics or any bullshit like that. We’re talking only about hacker news here.
feature request for OP: sort by "LLM Agentic AI" embedding cosine distance desc
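Mechanically, that re-rank is just a cosine computation over embedding vectors. A minimal sketch, assuming each story has already been embedded (the vectors below are toy 3-d stand-ins; real embedding models produce hundreds of dimensions):

```javascript
// Cosine distance = 1 - cosine similarity between two vectors.
function cosineDistance(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Sort stories by distance from the query embedding, descending,
// i.e. the least "LLM Agentic AI"-flavored stories come first.
function rankByDistance(stories, queryEmbedding) {
  return [...stories].sort(
    (x, y) =>
      cosineDistance(y.embedding, queryEmbedding) -
      cosineDistance(x.embedding, queryEmbedding)
  );
}
```

The names `cosineDistance` and `rankByDistance` are made up for this sketch; the embedding step itself (turning "LLM Agentic AI" and each title into vectors) would come from whatever embedding API you plug in.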
AI solving the too-much-AI complaint is heart-warming. We're at the point where we will start demanding organic and free-range software, not this sweatshop LLM one-shot vibery.
Love it. :D
simon how do you get so much done? It’s incredible. Would love to see the day in the life TikTok :P
You can even make it live with SSE/EventSource.
Have you no sense of embarrassment that you displayed exactly the kind of bullshit that AI skeptics talk about all the time? You put in almost no effort to spit out some garbage that doesn't solve the issue even in a technical sense.
Not at all. I think you misunderstood the point I was making here.
I think the idea of splitting Hacker News into AI and not AI is honestly a little absurd.
If you don't want to engage with LLM and AI content on Hacker News, don't engage with it! Scroll right on past. Don't click it.
If you're not going to do that, then since we are hackers here and we solve things by building tools, building a tool that filters out the stuff you don't want to see is trivial. So trivial I could get o3 to solve it for me in literally minutes.
(This is a pointed knock at the "AI isn't even useful crowd", just in case any of them are still holding out.)
There's a solid meta-joke here too, which is that filtering on keywords is a bad way to solve this problem. The better way to solve this problem... is to use AI!
So yeah, I'm not embarrassed at all. I think my joke and meta joke were pretty great.
It only shows 13 stories? And no pagination.
Comment was deleted :(
Comment was deleted :(
[flagged]
Please don't cross into personal attack.
In a thread devoted to filtering out AI, I gave them a way of filtering out AI.
(The fact that I wrote it using AI doesn't really matter, but I personally found it amusing so I included the prompts.)
> The fact that I wrote it using AI doesn't really matter
Given that it is a poorly implemented solution that doesn't really do what the OP asked, yes it is.
It really isn't incumbent on you to feed the trolls here.
[flagged]
Now that's impressive. I've worked with and managed many humans and almost never do I get what I want back from one prompt.
Even tasks with detailed specs the human agreed to don't come back exactly as written.
I think it's a bifurcation between 0-1 prompts (self-driven) and a 1,000 prompts :)
tf humans do you work with?
That's at least 5 JIRA tickets.
Also a lot of cursing that I’ve been told to cut down on by HR. (/s, kinda)
Great example of the power of vibe coding. The first item is literally "Kiro: A new agentic IDE".
There is literally an input box to put terms you want to exclude...
The prompt asks for "filters out specific search terms", not "intelligently filter out any AI-related keywords." So yes, a good example of the power of vibe coding: the LLM built a tool according to the prompt.
> The prompt asks for "filters out specific search terms"
So if I want a front page free of LLM "agents" but also want to view stories about secret agents it will do that, right?
See comment here: https://news.ycombinator.com/item?id=44571740#44572312
The prompt was to exclude llm and ai by default though
the prompt was "default to "llm, ai"", which is exactly what it did. Nothing in the prompt about defaulting to other related terms
And that title didn't contain either of those words...what is the complaint again?
if all you want is word filtering in the title, you can simply write an adblock rule.
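For reference, such a rule in uBlock Origin's cosmetic-filter syntax might look like the line below. The selector assumes HN's current markup (`.athing` rows containing a `.titleline` span), and `:has-text` is a uBlock-specific procedural extension, so treat this as a sketch rather than a tested filter:

```
news.ycombinator.com##.athing:has(.titleline:has-text(/\b(AI|LLM)\b/))
```

Like the other keyword approaches in this thread, it hides rows by title match only, and leaves the points/comments row behind unless you add a companion rule for it.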
But how are you supposed to hype AI by using old tech like that?
Have AI write the rule and an article about having AI write the rule.
because the point is literally to filter based on vibes not precise keywords
That is not what the prompt I saw above asked for. It took him a few min. Write your own with a semantic based filter instead of a keyword based filter if that's what you want.
So I have to stay up to date on AI stories just to know what buzzwords I should filter so I don't see AI stories?
Sounds to me like you want a deeper version of this that uses AI instead of keywords to help filter out AI stories.
At a certain point it’s ironic
I think we're well past that stage. Using AI to escape AI. Does that count?
I think there’s another step here: Using AI to build tools that use AI to escape AI.
Eventually: using AI to build tools that use AI to escape AI using tools that use AI.
> using AI to build tools that use AI to escape AI using tools that use AI
Few illustrations are so absurd yet feasible enough to depict as horrendous a reality as this.
Clearly the US needs a constitutional amendment to preserve the right to keep and bear AI tools. Then we can arm the victims of AI tools with their own AI tools, for self-defense. If we're lucky, AI will send its AI thoughts and AI prayers in carefully calculated quantities.
Better yet, such expressions would be categorized as tokens of condolence at no expense to the public. Subsidized by the arms manufacturers.
Comment was deleted :(
Lol, yup. See azath92 comment - https://www.hackernews.coffee/
Add the buzzword when you see a story you don't like. Or settle with it filtering 90% of the AI content and just don't click on whatever remains, I doubt you expect the top story to be interesting to you 100% of the time.
Our brain decodes info based on context and extrapolation
This submission we're commenting on could be about filtering out any content, not just AI stuff: politics, crypto, AI, etc. Or more granular, like "Trump", "fracking", "bitcoin", etc.
In any of these scenarios, with a tool designed to filter out content based on limited context, when would you ever be perfectly satisfied?
would you like AI to help you build the perfect context-filter model?
And certainly in our anti-politics filter we’d want to include the filtering of stories that promote the extreme political position that tech is somehow detached from politics! (Especially Silicon Valley startup tech that owes so much to the local politics and economy of California).
Which is to say, filtering politics out is absurd, one person’s extreme politics is another’s default view of the universe.
In a similar vein, I’ve had people assert (in all seriousness), their English had no discernible accent because they were American.
It’s a similar kind of mindset.
Isn't it enough to bury yourself under the rock? - you want the fact of your having done so concealed from you also? But what about the fact of wanting that?
That's how any filtering service works
...Yes? This is how this tool is coded. Machines do what one codes them to do, not what one wants them to do. If you're interested in making a more intelligent tool you can do it. This tool does exactly what @simonw says it does.
sounds like you need an AI to sort out and predict what you won't want to see ;)
How about a version with LLM integration that detects "AI" related stories in a more clever way? /s
A tool was offered that can accomplish what you want, with a very small amount of added effort on your part.
No, you do not have to "stay up to date on AI stories"—if you see one, add the keyword to the list and move on. There are not as many buzzwords as you seem to be implying, anyways.
If you are dissatisfied, you are welcome to build your own intelligent version (but I am not sure this will be straightforward without the use of AI).
Just write “there is an input box …”.
Stop saying “literally”.
If you're unable to discern that the word serves a purpose (emphasis) in that sentence, I literally don't know what to say to you.
Of course I can discern that. I think it sounds stupid and childish, and makes someone appear less intelligent. Overused and misused word. But this is now derailing the thread.
I’m with you here - it’s a completely superfluous word that the young have adopted as some form of belonging ritual. It has no purpose, adds no emphasis and is just poor English masquerading as a statement.
It used to be that literally had a meaningful definition - quite literally. Now it doesn’t (see #2) [https://www.merriam-webster.com/dictionary/literally]
Not everyone has caught up.
Superfluous words serve no purpose, though your use of one here emphasizes your lack of maturity. If that’s your goal, good writing.
It's bad enough to expect other people to change the way they communicate to make you feel better.
It's another thing entirely when the way they're communicating is accurate and correct.
But there literally is an input box.
I like this because things can stay permanently filtered. Just not across devices. But that wasn't one of the original requirements.
Also a great example of how software can be perfectly to spec and also completely broken.
llm, ai, cuda, agent, gpt.
Wish it returned more unfiltered items tho.
Isn’t knocking out CUDA going to take out a significant chunk of GPGPU stuff with it? I can see wanting to avoid AI stuff, for sure, but I can’t imagine not wanting to hear anything about the high-bandwidth half of your computer…
[flagged]
Please don't cross into personal attack.
Perhaps you should add a privacy policy or just release the source rather than assume people will trust your site. Why do you do these demos if you aren't upfront about all the things the LLMs didn't do?
I released the source: https://github.com/simonw/tools/blob/main/hacker-news-filter... (Apache 2 licensed) and a commit history listing the prompts I used. https://github.com/simonw/tools/commits/main/hacker-news-fil... - also displayed on the site here: https://tools.simonwillison.net/colophon#hacker-news-filtere...
I don't think I need a privacy policy since the app is designed so that nothing gets logged anywhere - it works by hitting the Algolia API directly from your browser, but the filtering happens locally and is stored in localStorage so nobody on earth has the ability to see what you filtered.
The API it uses is https://hn.algolia.com/api/v1/search?tags=front_page - which is presumably logged somewhere (covered by Algolia's privacy policy) but doesn't serve any cookies.
> Why do you do these demos if you aren't upfront about all the things the LLMs didn't do?
What do you mean by that?
You should try to get other people to make your demos is all I'm saying. I don't know why you keep inserting yourself either. Why didn't someone else post the thing you made? Were they waiting for you to do it or do you think people aren't smart enough to do it? I'm just trying to understand why every damned LLM story has to feature you. In what ways could you avoid such a filter of your posts?
Comment was deleted :(
The site does not request any personal information, therefore no privacy policy is required.
It has no server-side logs? How do I know that if there is no policy?
This post is turning up at least every other day. The last few times my reply was "AI is 4/30 or 5/30 of the front page, it's not such a big deal", but today it is 9/30.
I am wondering what the ratio is for VC and angel dealflow in the valley right now.
Hanging out on the "new" page and upvoting quality non-AI articles is an effective method of resistance.
> Hanging out on the "new" page and upvoting quality non-AI articles is an effective method of resistance.
Fully agree, and in fact I find more stories I'm interested in that way than by looking at the front page. For whatever reason, I'm increasingly getting out of sync (interests-wise) with broader HN. So many stories I think are great HN material (and would have been a few years ago) languish with almost no activity.
So there are two reasons, IMHO, to browse new: surface better stories to the front page for engagement, and find better stories for yourself.
As you age your interests and curiosity change, in ways you often don't see until later.
Very common in computer science contexts. Young undergraduates are always the first to pick up the new tech and make something that seems alien and wrong. It's not even the master's students.
Possibly the same Kiro - Agentic IDE post would have been as interesting to you as the launch of Atom or something related to VS Code, etc.
>Hanging out on the "new" page and upvoting quality non-AI articles is an effective method of resistance.
I hang out in /ask and /asknew for my part.
PS: Hey, Paul... When are you going to close my 2021 issue[0], you already merged the pull request[1] :D
Come on, man!
I will take a look at it.
Will "circle back" in a few years.
Buy my AI/LLM RAG Agentic bot to handle pull-requests and follow-ups based on HN conversations.
>This post is turning up at least every other day.
Res ipsa loquitur
> The last few times my reply was "AI is 4/30 or 5/30 of the front page, it's not such a big deal", but today it is 9/30.
A bigger impact for me has been the number of mentions of AI in the comments. It's not just that a large part of the front page is dominated by LLM hype posts, it's that every single post has at least one guy near the top somehow bringing AI into the discussion. I don't even care if it's "AI will fix this" or "haha, AI sucks at this too". I just don't want to hear anything about AI ever again.
> I just don't want to hear anything about AI ever again.
Genuinely curious: Why?
Don’t get me wrong, I upvoted this post, and would love to see AI separated out, or at least tagged (like a root comment suggests) so that I can filter them out if I want.
But I can’t say I’d never want to hear anything about AI ever again (though I’m headed in that direction).
What field are you in, and what are your interests, such that you’d want to visit HN without ever seeing mentions of AI?
Not your parent, and not anti-AI, but I’ve seen similar things to this thread in smaller spaces I’m in.
There are some people who are having genuine crises over this stuff, some of it existential, and some of it “wow I thought my friends had some basic agreements about the world that we actually don’t,” and seeing this stuff on the regular just fans these sorts of issues.
Also, in a simpler sense, there are a limited number of homepage spots, and if you don’t want to see a topic, it effectively shrinks your homepage. If HN only showed five stories to me it would be less useful than it is now.
> Also, in a simpler sense, there are a limited number of homepage spots, and if you don’t want to see a topic, it effectively shrinks your homepage. If HN only showed five stories to me it would be less useful than it is now.
Yes, I feel like all these shallow "[Someone] vibe-coded [thing] with AI using [Claude whatever]" articles are hitting the front page and muscling out other, more interesting ones. Just like the "[Common Unix utility] rewritten in Rust!" articles of years past.
I wrote this
https://ontology2.com/essays/HackerNewsForHackers/
years ago but I don't stand by that article because I don't feel that way anymore. I do stand by the sequel
https://ontology2.com/essays/ClassifyingHackerNewsArticles/
because that's the operating principle of YOShInOn, which applies something a little more sophisticated to RSS feeds and productizes it.
That is a wonderful question, but it's very hard to answer without essentially knowing me, and that may be a little bit ambitious for a comment.
I'm a software engineer. I consider this some of the most important work of our generation. The hardware we've made today has unlocked a degree of control over the world that was impossible until now. We don't have to mechanically devise a way to make a clock that tracks the stars; we can just program it into a microchip, and it'll just do it. We don't have to manage untold thousands of people to calculate our taxes; we can write it into a computer and it can just do it, forever and perfectly. We're just not applying it.
I've reached the point of despair. It's not an AI-doom kind of despair, where I believe that AI is going rogue or whatever. It's a much more pedestrian kind of despair. We have tremendous problems ahead of us, both with the climate and with just doing the things that society always has to do, and AI doesn't offer anything to any of the actual problems of society.
While people are dying of Ebola in Africa and Americans are dying because they can't pay for healthcare, we are talking about automating software development for ad-tech companies. It's embarrassing. This is my field, these are my people, and this is the best we have to offer.
I try to hold off that despair by just not engaging with it. Either AI will happen and we'll take it from there, or it won't, and then we'll have wasted a lot of effort and will hopefully never have any credibility as an industry again. I can't make a difference in either of those outcomes, so I just want it to go away.
Let me make it clear though. I too love the math behind recent AI. I even love the engineering behind how we do fast GEMM on GPUs. The challenges are really fun technically. That just can't be what decides our direction.
I hope that answers it a little. It's hard to fit such a large topic, rooted so deeply in me, into a comment. Thinking about the future in relation to these billion-dollar companies and what they make does actually make me emotional.
> automating software development for ad-tech companies
A lot of technology advancements are automation of human tasks. It's been going on for decades and does eliminate a lot of jobs (or move them to a different continent). There aren't telephone switch operators anymore, or cashiers adding up items manually at a store checkout, or countless other jobs that are now done with software.
One way to look at it is that if you're an employee at a company, you don't really have any say in what projects/products you work on. Whether those projects are eliminating jobs or creating jobs or saving lives or whatever, your only real choice is whether to work for that employer.
Thank you, I feel similarly. We've become gods of computing through global-scale invention, production, supply-chains, and finance, and we're apparently going to use that power to "improve productivity" which in the best case means cheaper apps that make people more money. I've not heard a single actual use-case for the modern AI/LLM that helps us with our actual national and global problems.
I mean, there are plenty of people using AI/LLMs to help with the actual problems, but at about the same rate as people in general work on those problems (vs. against them, or mostly just indifferent). So most AI/LLM use is not in those areas, sadly.
Those problems aren't being fixed not because they are technically unfixable, but because as a society we would need to agree on what 'fixed' means.
And that triggers the culture war, because Urban/Rural and other major factions have wildly different experiences, incentives, and goals on these fronts. And anyone trying to tackle those real problems who is noticed by one side or the other will inevitably get attacked.
And rather than sit down and really consider what we (as a nation!) want overall, make compromises, and agree to work together, we’d rather sit in our comfortable air conditioned places and stab each other in the back over the internet - or just check out into a comfortable bubble.
And unfortunately that means that the real problems are escalating.
Not the OP, but I'm sick to death of hearing about AI
The hype around it is ridiculous. I don't personally find it nearly as useful as people are saying, so everything feels like people are trying to gaslight me
Don't get me wrong, it's cool tech. Amazing stuff. I just personally don't have much interest in it until it's much more reliable for the things I want to use it for
And I'm really exhausted, tired of hearing about how this is going to replace people like me any minute now
> And I'm really exhausted, tired of hearing about how this is going to replace people like me any minute now
I'm kind of exhausted in general (year after year) of frankly unimaginative engineers who should know better, latching on to whatever is the latest soup of the month, and touting it here as the greatest human achievement since fire.
I do wish that there was a good place to talk about interesting stuff with people online that was a bit more resistant to both the extreme hype and the extreme pessimism
HN threads very often feel like whichever side posted first winds up dominating the thread, it's bizarre
I see the comments on some articles that are massively pro AI all upvoted to the top, then the comments on another article and it's all the negative AI comments upvoted
It's weirdly echo-chambery on a post by post basis
Yeah this has been the worst development in HN culture as HN has grown in my experience. To make things worse the extremity of the hype and pessimism just creates a huge flamewar where each side is dedicated to using the worst strawmen to undermine the other side.
Personally I just started treating this site as a sophisticated shitposting place and started actually talking about tech in group chats with friends who work in the industry. Increasingly I see folks refer to the content here in the same breath as Reddit so I don't think I'm the only one.
It's probably just a scale problem. When a website becomes big enough it becomes dominated by the folks with the most time to post and the most passionate opinions.
> Personally I just started treating this site as a sophisticated shitposting place and started actually talking about tech in group chats with friends who work in the industry
Yeah... I think this is probably smart. I wish I had that, I don't have a lot of friends in the industry
Something to work on.
I noted that too, to the point where I'm suspecting that "that guy" (obviously not just one user) is being paid to do so.
I've started downvoting them, the same way I always downvote "I fed this to an LLM and here's what it spat out".
I have had that same thought when I see them as one of the first comments on a post. I can't do anything with that suspicion, how would I prove it, but it's definitely there.
[flagged]
People who are a little late to the site may not know there was a time on HN when Erlang had even more front-page submissions than the best of AI/LLM.
Ruby on Rails, Postgres, SQLite, Rust, etc. have all had their moments, and I don't think LLM content right now is as overwhelming as any of those other hyped moments. Certainly not Erlang.
That was very different. Somehow the entire front page was Erlang, but it was only for a day or 2. AI is different from that. It's like a good 40-50% of the posts for at least a year or more, and I don't see it going away anytime soon. It's also different from web3/etc. as those were at most 10% of the posts and most of us can see it's just hype.
I'm not fighting for a split/fork, just stating the fact that it's nothing compared to Erlang.
IIRC that was a deliberate campaign to make the site unattractive to a spate of non-technical folks who had apparently all simultaneously discovered it.
https://news.ycombinator.com/item?id=512145
"We've had a huge spike in traffic lately, from roughly 24k daily uniques to 33k. This is a result of being mentioned on more mainstream sites [...] You can help the spike subside by making HN look extra boring. For the next couple days it would be better to have posts about the innards of Erlang [...]"
"Ok, ok, enough Erlang submissions. You guys are like the crowdsourced version of one of those troublesome overliteral genies. I meant more that it would be better not to submit and upvote the fluffier type of link. Without those we'll be fine."
Also some fun comments here: https://news.ycombinator.com/item?id=512178
This is a great little piece of anthropology work from you. Thanks for taking the time to dig this up!
Here's what the frontpage looked like on the day of Erlang:
lol ask a community of autists for more posts about the innards of Erlang, don't be surprised when you get posts about the innards of Erlang.
It all depends if you care about the tech side of HN or the startup side of HN. I love the tech articles above all else and could easily do without the general trend fluff.
With that said, I don’t find the AI posts nearly as bad as the Blockchain era.
I don’t remember blockchain ever being as big as AI here. More annoying? Yes.
As annoyed as I am with the constant deluge of uninteresting AI/LLM articles, I would much rather see a split between tech and startup news. I think that's a lasting and useful distinction.
Aren't topical subcommunities just Reddit?
The userbase overlap is probably > 50%. To many people HN is just another subreddit.
Personally, I'm more interested in the think-pieces than the actual news.
(And I could very much do without the content that revolves around US politics. Even if it draws me in sometimes.)
Lobste.rs provides that distinction.
A socially awkward speakeasy with deranged moderation?
Where do I sign up?
A forum that is exclusionary-by-design has already failed.
And it's invite-only, so it's hardly an alternative.
well, it takes around 20 minutes to get an invite, so hardly a problem if you prefer only tech articles
How exactly does it take 20 minutes to get an invite? I have not tried it, but I can't see how that would be easy.
I got mine in about that time: joined their IRC and asked for an invite, someone DM'd me, asked me a couple of questions, and sent me the invite. This was about 7 years ago when there were a lot fewer people, so I imagine it should be easier now.
No idea. Read only is good enough.
I wouldn't mind the Erlang-dominated front page coming back :)
Seconded :)
Erlang is kind of a special case, since there was that period when the community's preferred response to "too much politics" was to spam submissions about Erlang. Agreed though, it doesn't seem to have taken over more than (say) Bitcoin or Rust have at times.
I miss the days of daily Haskell posts.
I can imagine you would, with that username :)
I would definitely follow a HN fork with posts of such amusing spirit.
"tell me you like Haskell without telling me you like Haskell" moment
I mean, he basically said directly that he likes Haskell lol
I've been here a while, and this one is certainly bigger and more prolonged, with no end in sight, compared to most other hype cycles we've experienced.
It's also exceedingly generic, such that AI isn't really a topic; it's an entire classification, or maybe a domain, to steal from the animal-kingdom hierarchy.
To be fair, AI is replacing all computers, so talking about e.g. languages is believed to soon be obsolete.
I would like to see more nuanced and interesting articles about AI though. Right now it's all about VCs measuring the size of their investments and the politics of alleged superstar programmers.
The best one was the 2048 era: https://hn.algolia.com/?q=2048
I hope I'm not the only one here who never heard of Erlang until I read your comment (I arrived in 2016).
Oh my god, the years of the JS frameworks. Millions traumatized for life
One might argue that those technologies are sunsetting now
Microservices had a micro moment not much longer than xml.
If it follows the name, it's gonna be terrible since we're dealing with large language models
Comment was deleted :(
Yes, but unlike AI & Crypto, Erlang came with little grift, slop and Show HN spam.
The atmosphere on the site was very different then. There was plenty of Erlang vaporware and lots of "how to grow your startup" growth spam which wasn't called growth spam yet. The community was a lot less cynical then (though obviously the middlebrow dismissal [1] tendency of the site is quite old.)
Weird that people are floating the idea of kicking out of a tech forum the most important tech development of the last 10 years.
Not sure what that means about the community, but must mean something.
The problem is the quality, not the topic. Understanding serious papers about AI development requires fairly specialist knowledge; there are plenty of people around (like myself) who have been programming for decades and can write really nice code in a bunch of different programming languages, but have very little if any mental model of "transformers" or whatever.
So in practice, "AI" content ends up revolving around people bandying about opinions about whether or not we're all doomed, or whether or not we're all on the edge of a utopia, or how much productivity programmers (and which ones) have lost or gained, or what kinds of tasks the LLMs are or are not currently or still good at, or whether anyone still cares about the fact that the term "AI" is supposed to mean something broader than LLMs + tool use.
The emergence of the "vibe coding" concept has made things worse because people will just share their blog posts about personal experiences with trying to write code that way, or flood the Show HN section with things that are basically just "I personally found this specific thing to be 'the boring stuff' that's actually relevant to me, so now I'm automating it" with a few dozen lines of AI-generated code that perhaps invokes some API to ask another AI to do something useful.
Interesting take.
To me it feels like golden age of hackers in the 60s-80s (which was before my time but I heard stories about) where everybody is doing their own home grown research to the best of their abilities and sharing insights of varying quality.
But somehow these days if it's not all polished, HN "hackers" aren't interested.
I think both things can be true:
1. This is a great time to get your hands dirty with LLM tech and explore workflows and tooling that bring you joy.
2. The writing around this exploration is often low quality insights or low quality engagement bait that leads to flamewars. Engagement bait that often takes one of two forms. One being a novella on how surely this time the human race is doomed due to singularity/capture by the rich/fascism/etc. The other being how we're one cm away from utopia because automation/flourishing of creativity/etc.
I am enjoying playing around with the tech a lot but the presence of 2 is just annoying. I do think that's an HN problem and not a problem with tech writing as a whole. There's subreddits that, while they have their own problems, are a lot less flamey when discussing these topics.
> But somehow these days if it's not all polished, HN "hackers" aren't interested.
The fun part is that these days, typically the READMEs (especially) and licensing and documentation and maybe even the packaging setup are "polished"; the actual code (and perhaps the tests), not so much. It's quite backwards from what you expect from humans writing new code based on personal intrinsic motivation.
A lot of people on the internet have turned hating AI into a personality.
Why kick it out? In the past, when similar annoyances dominated the front page, they created the Show link and the Ask link. For people interested in those, they still exist, just away from the front page.
It means there are grumpy curmudgeons in every community.
It is reducing my desire to read this site. I don't have anything against the subject matter necessarily, and sometimes it can be interesting, but in large parts it is attracting very low quality discussions and content about vibe coding X product.
Can you link to some specific examples of low-quality discussions?
As with any Major Ongoing Topic on HN (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...), the goal is to reserve frontpage space for the higher-quality stories and try to downweight the follow-ups and low-quality ones. We can't do this perfectly, of course, but we try.
I feel it's too generic in application to be interesting to a broad audience like HN. Some things I like because I have an interest in the problem space and am interested in how they applied AI to it. But for most things I'm not even interested in the problem space, and so couldn't care less how they applied AI to it.
Luckily there are many other places you can spend your time.
Honestly, I’ve always appreciated how much of 2007 Hacker News is still intact. It remains one of the few places on the internet where discovery still happens organically without trending algorithms or clickbait optimization. It's just manual submissions, one by one.
There’s only one other community I’ve encountered like it, run by a small liberal arts college.
From a signals perspective, HN is incredibly valuable. You get to watch in real time what’s capturing the minds of technically inclined readers. Sure, that means lots of lurkers and a few dominant topics (right now: AI). But that’s also kind of the point. HN works as a reflection of where the collective attention is, whether we like it or not.
Anyways...just two cents.
No. HN is like this. It skews heavy towards startups and right now if you have one of those and you aren't putting AI in your investor propaganda, you're not going to get many investors.
Besides, it's already starting to slow as people realize AI isn't as great as the influencers want you to believe.
I haven't noticed any slowing; if anything it's accelerating as people try vibe coding and realize they can build an MVP and get some suckers to sponsor them on GitHub. Just look how many end up with donate links. I suspect a large portion of people releasing open-source vibe-coded projects don't care about the project but see it as a low-effort way to make a few bucks.
If you think HN AI/LLM content is bad, try LinkedIn or X!
HN is probably the best source of informed, critical takes on AI/LLM content and that is super valuable to me. I don't think it should fork; I want the same audience to keep doing its work and having the debates :P.
Feature request: HN could support a tag or label for categorizing a post. This would allow for filtering trivially, and creating views based on participant interest.
Wouldn't be hard to train an LLM to do it!
Indeed! Previous:
https://news.ycombinator.com/item?id=44261825
I suppose an extension is the answer, classifying and customizing the user’s view accordingly with a pluggable LLM config.
We've just explored a HN site reskin as a quick way to validate this, and I now use it for my browsing every day. It's a pretty transparent "profile" that an LLM applies to rank your HN front page, but it would be trivial to shift that to a filter.
An extension could be a powerful way to apply it without having to leave HN, but I wonder if that (and our website prototype) is a short-term solution. I can imagine having an extension per news/content site, or an "alt site" for each that takes your preferences into account, but it feels clunky.
OTOH having a generic llm in browser that does this for all sites feels quite far off, so maybe the narrow solutions where you really care about it are the way to go?
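For what it's worth, the pluggable-LLM tagging idea mentioned upthread could be sketched roughly like this. A hypothetical Python sketch, assuming some OpenAI-compatible chat endpoint does the actual labeling; the label set, prompt shape, and function names are all made up for illustration:

```python
import json

# Illustrative label set; a real config would let the user define these.
LABELS = ["ai-llm", "programming", "science", "business", "other"]

def build_prompt(titles):
    """Ask the model to tag each title with exactly one label, as a JSON list."""
    numbered = "\n".join(f"{i + 1}. {t}" for i, t in enumerate(titles))
    return (
        f"Tag each Hacker News title with one label from {LABELS}. "
        "Reply with only a JSON list of labels, one per title.\n" + numbered
    )

def parse_labels(reply, n):
    """Validate the model's reply; fall back to 'other' on any mismatch."""
    try:
        labels = json.loads(reply)
        if isinstance(labels, list) and len(labels) == n and all(l in LABELS for l in labels):
            return labels
    except json.JSONDecodeError:
        pass
    return ["other"] * n
```

The network call is deliberately left out; the fragile parts worth getting right are the prompt and the defensive parsing, since models do not always return clean JSON.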
Maybe you'll want to try Techne: https://techne.app.
Keyword filters on the user side would avoid adding extra steps to submitting and moderation. Categories are an extra complication.
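A user-side keyword filter like this is simple enough to sketch. A minimal, hypothetical version in Python (the `BLOCKED` set and function names are illustrative, not from any real extension; titles could come from the public Algolia HN API or be scraped from the page):

```python
import re

# Illustrative blocklist; a real user would maintain their own.
BLOCKED = {"ai", "llm", "gpt", "agentic"}

def is_blocked(title, blocked=BLOCKED):
    """True if any blocked keyword appears as a whole word in the title."""
    tokens = re.findall(r"[a-z0-9]+", title.lower())
    return any(tok in blocked for tok in tokens)

def filter_titles(titles):
    """Keep only stories whose titles match no blocked keyword."""
    return [t for t in titles if not is_blocked(t)]
```

Matching whole tokens rather than substrings avoids false positives like "air" or "paid" tripping an "ai" filter.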
Someone has probably developed a personalized browser for HN.
https://github.com/plibither8/refined-hacker-news is perhaps relevant.
If someone wants to add LLM pluggable support (API endpoint target) and it’ll work on Firefox, I’m willing to kick in some fiat. “HN Copilot.”
Not a browser but a reskin website: https://news.ycombinator.com/item?id=44454305
That is a perennial proposal as far as I remember.
I second that emotion.
I don't know if you're allowed to promote alternatives to HN, but Lobste.rs has tags which you can follow or completely block. Plus I've found the quality of the discussion is higher than HN, at the cost of much lower quantity.
> at the cost of much lower quantity.
Which they could solve by having a less dumb invite system. They can very easily confirm I am not a bot nor a spammer based on any number of objective metrics I can provide to them. But instead the answer is "idle in IRC, hope for the best" and thus they end up with the audience who is willing to jump through those hoops
No, it will go away just like all the crypto stuff (remember that time?) went away.
Very few problems were solved by crypto, so it naturally disappeared.
On the contrary, LLMs based AIs create a lot of new problems.
Comment was deleted :(
Unlike crypto though LLMs are actually useful.
"A broken clock is right twice a day". I guess that's useful too, in a similar kind of way.
Reading this from a user named "leptons" made me chuckle.
Comment was deleted :(
But but the Rust stuff is still there ! (To my pleasure I must confess)
Could we have a fork where people talk about Rust somewhere else ;)
A new crypto LLM AI on the blockchain-l - in Rust!
You joke--but an article exactly like that has probably been posted here.
No for three reasons.
One, let's be honest, HN won't do it; part of their secret sauce is that they don't change, and they know that.
Two, fragmenting the community would just reduce engagement and risk making both halves feel like ghost towns.
Three, LLMs are (one of) the forefronts of our industry. State of the art is advancing fast. It has properties that no one knows the best practises for. And it has implications that are wide ranging. To try and bury this because it has a lot of new developments goes against why most of us are on this site.
I believe in the meritocracy of the upvote button.
I haven't been on here forever, but I vaguely remember other trends taking over for certain periods. I could be misremembering, but crypto drove a lot of interaction for a while. I'm indifferent towards LLMs, probably because I'm not a developer in my day job, and I don't mind seeing LLM posts. It is annoying that every single LLM thread devolves into the same tired arguments between LLM zealots and detractors.
As someone who quite likes AI, couldn't the AI dislikers just ignore the ~15% of stories that are about AI? Or does their mere presence offend?
The mere presence does seem to offend. HN seems to have, on average, a more negative appraisal of AI than the industry (based on where new software jobs/funding are going), but some of the comments I read seem to imply that this page is swamped by endless AI hype posts devoid of substance.
My first instinct reading these comments is "are we on the same website?", but I realize their perspective is possibly skewed by a strong visceral dislike of AI as an affront to many fundamental things they like about software and tech. It's become a tribal conflict, based on the ethos of "whose side are you on" rather than a sober appraisal of the facts, benefits, and legitimate hazards.
> I would very much like to enjoy HN the way I did years ago, as a place where I'd discover things that I never otherwise would have come across.
I've had the exact same feeling a lot over the past couple years or so, and especially the last 6 months. I used to hit the front page and find 5 to 10 stories I was interested in. Exhausting those to read the second or third page wasn't common. Now I find maybe one story I want, and I routinely will scan through 4 or 5 pages (down to 120 to 160) and only find a handful (4 or 5) that I want to read.
I've long found myself wishing for mini-HNs on different broad topics that interest me. Sadly this was the whole point/idea behind reddit. For example, besides the actual and venerable and loved real HN, I'd love an HN for:
1. Politics: Where disagreements are encouraged and any claims are challenged, but only with factual arguments/counterarguments, and any emotional arguments are moderated (basically how we encourage HN comments to be). There have been some reddit communities over the years doing this, but IME they frequently devolve into echo chambers. It almost always comes down to bad moderators.
2. General News: Where stuff that is of broad interest (and not really tech-related) can be posted and commented on in thoughtful ways. Particularly local news would be fun
3. <placeholder>: Had an idea and forgot it as I was making the list. Will edit and insert when I remember!
I've kind of accepted that my dream just can't work (at least, looking at Reddit as the great experimentation of that). People on the internet are just (generally speaking) incapable of consistently humanizing the user(s) on the other end, and proceed to treat others very poorly. Pride and inability to be wrong strongly exacerbate that tendency.
> There have been some reddit communities over the years doing this, but IME they frequently devolve into echo chambers. It almost always comes down to bad moderators.
In my experience:
Most of them are basically designed to be echo chambers from the start — opposition is only let in to the extent that it provides easy targets to knock down. Most people just aren't that good at explaining why they believe what they believe, let alone making a convincing argument for it; so all you need to do is set up an environment where one side's position is the default.
There have been a few attempts at explicitly avoiding that problem. They do eventually collapse. But I don't think it's due to bad moderation. It's more that certain factions simply refuse to engage civilly and unemotionally with each other. They will see statements as inherently provocative that the other side genuinely consider matter-of-fact.
I was a moderator for a place like that once. It was remarkable to me how, on the "hot topics" that were polarizing and led to a lot of bans and suspensions, on one side people who were suspended would argue and whine and complain basically as long as we'd listen to them, maybe even the entire duration of the suspension, and they would never get it into their head what our standards were for respectful discourse; and they would even suggest that having such standards was inherently oppressive; and when they got back they would immediately go back to their old ways. And on the other side, people would basically say "LOL, see you on <suspension end date>" and disappear, and come back as promised, and behave themselves for a while.
And while there were a very few people who simply couldn't kick the habit of using slurs or other disparaging terms to refer to identifiable groups of people, there were far more — almost all on the opposite side — who simply couldn't kick the habit of openly insulting the people they were directly responding to. Or at insinuating negative character traits and hidden motivations not in evidence, or other such "dark hinting" as we call it. Or even just of using obnoxious, brutal sarcasm all the time when we expected people to speak plainly.
[dead]
I have a similar process, but usually scan down to 60 (as I did today). I found eight stories including this one and have tabbed them to read. I don't like rust-y koolaid myself, but would never complain that it is here, nor would I complain about seeing the word 'typescript' as it is really far from my interests. To my interest -- excellent AI related white papers, AI agent paradigms and code, model announcements etc. are regularly posted here. Of the eight I picked today, half are AI related. 4 out of 60 isn't bad if I was trying to be an artificial intelligence ostrich.
I for one am happy with this site’s own little fads. Who knows, either AI stays with us and I’ll be glad to have got my helping here via osmosis. Or it goes away and I can reflect on this fun little quirk our community once had.
No, it is not time to fork HN into AI/LLM and "Everything else/other".
I enjoy the website as-is, and simply use search when I want to get to the topics that interest me.
No.
There's always a flavor of the month. Go back 3-5 years and every third post was crypto or NFT related. AI/LLM too will pass.
I've never really understood this desire of people to effectively hide content that doesn't interest them. Just... ignore it. Like there are enough people on HN who really care about academia and research. I don't. But that's fine. Let them be.
But here's the interesting part: so many on HN rail against the newsfeed concept. You will hear a significant number of HNers say they just want everything in chronological order. Well, except for the subjects that don't interest them.
If HN submissions were tagged and a recommendation algorithm decided what to show you, you'd get exactly what you want: fewer AI/LLM posts if that doesn't interest you. But somehow newsfeeds are still bad?
The desire is relatively simply explained. Some people used to find HN interesting, but the modern set of things being upvoted doesn't match their interests anymore. They already ignore the content they don't like; the problem is that the content they do like isn't there anymore. The assumption, I expect, is that if there were less LLM content the site would have more of the "older style" content they used to enjoy. I don't necessarily think that would happen, but that's my interpretation of the sentiment.
Exactly.
It's not supposed to be zero-sum — posting volume isn't limited, or at least I assume we're nowhere near what the servers can technically handle — but attention span is limited. Seeing a front page full of things you aren't interested in makes it harder to find the things you are interested in, and feels discouraging if you want to post one of those things (an unfortunate feedback loop).
Maybe try lurking in "Past".
Think about it. You can go into whichever pre-AI booming period you desire.
Today I think I'm gonna check out what was hot in May 2009.
https://news.ycombinator.com/front?day=2009-05-14
"Obama proposes no capital gains tax on qualified small business stock"
Sounds steamy.
https://news.ycombinator.com/item?id=608202
See you there!
It's a hype cycle that will eventually die down. People here are usually pretty excited for new hype cycles they think they can make equity windfalls from. It's looking less likely that individuals will be making windfalls even as of this week with the new style of top-talent-acquihire acquisitions that seem to be increasing in number, so there's hope that HN goes back to generally technical :)
One difficulty: AI makes it a lot easier to generate mountains of hype articles and drown out other content, self-reinforcing the hype.
That's different from prior hype cycles.
Frankly, this one seems to be dying out more from everyone just flat out refusing to pay attention to online stuff or things on their phone long enough to starve the beast. If that is even possible.
From my perspective HN has always had certain themes I find overly repetitive and boring.
I just whack “hide” on those and never think of them again.
When a quarter or more of the front page is stories about those themes, you'll be doing a lot of whacking.
Nope. While the blockchain craze was less meaningful and slightly less annoying, it died down. This will too (even though there's more actual value hiding in corners).
There are certainly periods where one concept is "viral" and appears quite often; that's normal.
No, I have zero interest in LLM and ignoring the posts has worked fine for me. The political posts are causing more damage IMO. There is also a "hide" button if you keep seeing a lingering high activity post.
But finding out that LLMs are used to push the political posts in such obvious ways yet people are still falling for it is a somewhat interesting topic. Regardless whether that just pushes your bias towards LLMs == bad or tweaks you to spin up your own LLM to combat by searching/detecting/posting the slop. Burying your head in the sand and ignoring all of it is not good either.
> LLMs are used to push the political posts in such obvious ways
Most of "the political posts" seem to happen because someone shares a news article and everyone else uses it as an excuse to discuss the general topic (or at least something that gets general agreement as being the topic). I'm not really clear on how LLMs get involved there.
I would absolutely click on an article about the detection of AI and how to get rid of fake accounts. Or how AI is being used to scam people. But I couldn't care less about turdF1nger.6i and its 3.4B parameters.
We just built https://www.hackernews.coffee/ to rerank your frontpage based on a quick survey of your preferences, all local-storage based.
In general we're thinking about how you can have a transparent profile that stands in place of an opaque algo, or in this case the dominance of a community by something you're not so into. It allows you to still engage with HN, but through the lens of a profile you have control over.
Ironically it is built with AI, but it's pretty straightforward, no magic stuff. Keen to hear if it is useful, or could be; we're at a really early stage of exploring where to go with it.
I feel like 10 answers is too few to generate a good profile. I tried it and it marked a lot of stories as 'skip' that were pretty interesting to me, presumably because they didn't fit into the 3 or 4 categories it determined I was interested in. That said, it was useful to filter out AI stuff, so that's good I guess.
My experience with HackerNews would be significantly improved if I could exclude the LLM-related stuff... it's overwhelming.
I say this about Reddit all the time. If you’re on Reddit (or HN) to just consume, then you’re doing it wrong.
Threads that are “my feed isn’t what I want” are exhausting. Sure, cool, but unless someone is breaking some rule, you’re looking for an algorithm to feed you content, which is all well and good, but it’s a different type of site.
Reddit (and HN) are designed exactly so that you can share something interesting you found.
You can easily clone HN using other forum tools (Discourse, Discord, HN+) and start your own. Or use HN's APIs, which are quite powerful.
I'm interested in news about current and emergent technologies. I wouldn't mind if those who are not interested made their own site and left the curious people alone. Please do.
That’s like saying HN but without the web stuff.
AI is the largest technology advancement of the last 2 decades…it’s going to show up.
HN is the way I keep up with what’s going on. AI is very much the topic of the moment. I’m fine with it the way it is.
No. This is a trend. HN is a tech and startups website so it will show trends. At one point it was VR, eventually it was Web 3.0. Right now it's LLMs but this too will pass and something else will come along.
I concur, we need to start our "No Homers Club"
'Domination' in what sense? I could see a couple ways you might mean this, and as we are HNers of similar "tenure" but as far as I recall more or less otherwise strangers to one another, I could see some interest perhaps in comparing our views of what's changed and how. (Hence being vague here to try to avoid putting too strong a stamp on initial conditions...)
HN is just a reflection of the community using it. And there's always some area that's hot and trending, common challenge on any platform with a popularity-based curation.
But still better than a highly-personalised algo that you don't get to control?
In an ideal world, you'd be able to tag a post (or a comment) with arbitrary tags, each with an optional real number to turn the tag set into a vector. This would make it possible to rank comments by their suspected level of AI-generatedness, for example, without having to disturb other things.
The UI for said system, on the other hand, is something I can't even imagine.
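The data side is easier to picture than the UI. A minimal sketch, assuming such weighted tags existed (every name, tag, and weight below is invented for illustration):

```python
# Hypothetical sketch: posts carry arbitrary tags, each weighted by users
# with a real number (e.g. a crowd-sourced "AI-generatedness" score).
# All field names and values here are invented, not an HN API.

def rank_by_tag(posts, tag, descending=False):
    """Sort posts by the average weight users assigned to `tag`."""
    def score(post):
        weights = post.get("tags", {}).get(tag, [])
        return sum(weights) / len(weights) if weights else 0.0
    return sorted(posts, key=score, reverse=descending)

posts = [
    {"title": "Show HN: my static site generator",
     "tags": {"ai-generatedness": [0.1, 0.2]}},
    {"title": "10 ways agents will change everything",
     "tags": {"ai-generatedness": [0.9, 0.8, 0.95]}},
]

# Surface the least AI-sounding post first.
ranked = rank_by_tag(posts, "ai-generatedness")
print(ranked[0]["title"])  # Show HN: my static site generator
```

The point is that one tag vocabulary could serve many different personal rankings without any global moderation decision.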
Comment was deleted :(
As much as I love LLMs and crypto, I'm really tired of subpar LLM/AI news populating the vast majority of the feed.
But I have no idea how to separate topics on HN. Is it even possible to do so while keeping the community intact.
ublock origin filter example, removes post, post actions/stats, and spacer:
news.ycombinator.com##tr.submission:has(:has-text(/LLM|agentic/)) + tr + tr
news.ycombinator.com##tr.submission:has(:has-text(/LLM|agentic/)) + tr
news.ycombinator.com##tr.submission:has(*:has-text(/LLM|agentic/))
Vibe code a browser extension that uses a cheap LLM to filter out content that you don't want to see.
Or just spend 60 seconds writing a ublock filter with the ten most common phrases you don't want.
This happened with X/Twitter. What it resulted in was a sycophantic hug-box on Bluesky and amplified social-darwinist amoral techno-capitalism on the former site. I believe splitting the rare congregations of diversely oriented smart people leads to worse outcomes for everyone, as better ideas/conversations emerge from opposing sides rubbing up against each other. Bifurcating HN would probably lead to a hype-driven, noisy AI side and a myopic, increasingly anachronistic non-AI side.
Maybe we could fork it into technical discussions and complaints about technical discussions.
No. Ignore the threads you do not like. Conversely start your own message board if you desire and have your own rules.
Not sure if this would be practical - AI seems to be part of the startup ecosystem now.
I guess people still use HN to discover things that they never otherwise would have come across, just that it now also includes AI, for better or worse.
We had it the way we had,
Because of the way it was.
And, because of the way it is,
We have it the way we have.
And so it is.
I've gradually come around to reading and engaging more with AI-related topics, but I'd still appreciate this. The balance of the content is way off.
I don't think it's the right idea long-term.
If this change went through I would not be opposed, though I would read both.
Comment was deleted :(
Curious: couldn't AI/LLM-related content also have interesting new information? I'm not referring to AI-generated sloppy articles, more so the deep tech behind it. Personally I'm super interested in machine learning and love it when I come across such links here.
From time to time, you find something AI/LLM related that is interesting. Like the people trying to save money by speeding up their audio before submitting for transcription. Not only did it save money, but it improved accuracy. It's those kinds of finds that are interesting in the hacker sense. Increasing the number of tokens at a certain point has a negative effect. Okay, someone is taking that hacker ethos to see what happens when the knobs are twisted. Showing me yet another image manipulation tool or new chat bot? Yawn. Next.
A cynical take is if Joe learned a bit about LLMs, they could build an extension that filters AI stuff into a separate tab or something along the lines of what simonw coded up.
This too shall pass, Joe.
No it isn't.
no
Prevention of bots and other kinds of automation takes precedence over any thematic changes.
If that is done first, we might not need to separate subjects.
HN lacks even the most basic aspects of human verification.
It’s not just Hacker News. It’s everywhere else too. I want some sort of web extension that allows me to just hide items from various social media websites when the topic at hand is in my blocklist.
> I would very much like to enjoy HN the way I did years ago, as a place where I'd discover things that I never otherwise would have come across.
So what does this mean exactly? Nothing LLM/AI related on hacker news is new to you, or you would easily have come across it without HN? Really? Where exactly are you finding your AI/LLM news?
In the last ycombinator batch, how many companies do not use AI in the core of their business?
This is why https://lobste.rs/ and reddit have tags/subreddits, in a nutshell. They get too big and people want separation.
vibecode yourself a filter ;-)
Comment was deleted :(
More generally, I wish the site had tagging support. It would help solve a number of other problems, too.
Comment was deleted :(
Unfortunately, this is simply a by product of the fact that this is both news and the state of the world.
Like most it too will come to pass (as it is further adopted in the mainstream and becomes commonplace).
You can perform a poor guy filter via <https://hn.algolia.com/?dateRange=last24h&page=0&prefix=fals...>
If you like intelligent, tech-focused discussion as seen on HN but have less of an appetite for other aspects of the HN community, you might find you really like lobste.rs
I'm seeing quite a few plugs for lobste.rs ITT. My recollection of my impression from many years ago is that they had different politics from HN (I did read it every now and then, even if I only actually joined months ago) of the time, but that many posters were still very much engaged in ideological battle, and furthermore flagging and downvoting was used more to suppress one side of the argument than it was to keep the site tech-focused.
I can’t speak to that aspect specifically, but I have flip-flopped in the last five years or so. I hardly go there anymore, and conversely I find myself using HN a lot more. My decline in usage coincided with some policy changes they made years ago. I still think there are people who will really enjoy lobste.rs over HN. It’s not for everyone (and as of late, not for me).
If that happens, I would like the "non AI" side to also not allow AI-generated content. (But how would that be enforced? I don't know.)
More generally: You could think about creating "sub HNs" for AI, politics, functional programming, startups, and several other categories. You could think about having something in your settings which specified which sub-HNs would put stories on your front page, with the default being "all".
You mean, blockchain and everything else?
Oh, sorry, wrong hype cycle.
Currently, for me, 10 of the 30 front-page stories are AI/LLM related. That means 20 of 30 are not about AI/LLM. One of them is blockchain, btw.
Typical HN, 1/3 hype, 1/3 less hype tech, 1/3 other. AI is the current hype.
Comment was deleted :(
This is one thing lobste.rs does really well. Every submission needs a tag and you can exclude tags that you are not interested in.
A snippet you can use in a GreaseMonkey/TamperMonkey script:
[...document.querySelectorAll('.titleline > a')]
  .filter(link => link.innerText.split(' ')
    .some(word => ['llm', 'ai'].includes(word.toLowerCase())))
  .forEach(el => {
    const sub = el.closest('.submission'); // the story row
    sub.nextElementSibling.remove();       // the points/comments row
    sub.remove();
  });
I wrote this in 2 minutes so I'm sure someone is going to reply with something better.
Comment was deleted :(
Fuck no. AI/LLM is a tool like any other and we need to keep on top of it.
If anything it needs less politics, I have other sites for that bs.
The first fork should be one for socialists and one for capitalists… but the latter, Bookface, already exists.
The fact is, the vast majority of people on HN have drunk the AI kool-aid and have no desire to be critical of it or avoid it.
Install firefox
then install violentmonkey
then install https://salamisushi.go-here.nl
browse around as usual and it will collect all discoverable feeds.
then export the feeds as opml
then install a robust RSS aggregator
then load the opml into the aggregator
then sort the news items by pubDate
then remove the obnoxious subscriptions
this is the way
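The sort-by-pubDate step above is a couple of lines of standard-library Python, assuming you've already pulled items out of your aggregated feeds (the sample items here are made up):

```python
# Sketch of the "sort the news items by pubDate" step for items gathered
# from multiple RSS feeds. RSS pubDate uses RFC 822 dates, which the
# standard library's email.utils can parse directly.
from email.utils import parsedate_to_datetime

items = [
    {"title": "Old post", "pubDate": "Thu, 14 May 2009 09:00:00 GMT"},
    {"title": "New post", "pubDate": "Tue, 05 Aug 2025 14:30:00 GMT"},
]

# Newest first, regardless of which feed an item came from.
items.sort(key=lambda i: parsedate_to_datetime(i["pubDate"]), reverse=True)
print([i["title"] for i in items])  # ['New post', 'Old post']
```

Most aggregators do this for you, but it's handy if you end up scripting your own merge.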
Containerization (either the docker stuff or the literal 40 foot steel boxes) was a huge revolution in their respective industries.
There was a ton of work and howling and news about them for years, decades.
Now they’re so boring and standard that they’re just table stakes. Nobody cares about them enough to get into long discussions about them.
The same in a best case will happen with LLMs - the things they can do will become boring and assumed, and people will eventually stop trying to make them do things they can’t.
You're seeing more of that content because it is relevant. HN should show relevant topics in tech; the AI/LLM domination of tech as a whole is what you're seeing on HN. There is lobste.rs, which might be what you're looking for.
Comment was deleted :(
Yeah, definitely an unhealthy amount of guerrilla advertising. How many veiled pro-[insert code assistant] posts can one R&D budget write?
Nobody is stopping you from creating your own HN clone with whatever rules and guidelines you want. I'd say go for it, and good luck! I'll stay here though.
You don't even have to create it. https://lobste.rs exists and is opensource https://github.com/lobsters/lobsters It even has tags built-in.
I mean, just don't click on an AI story.
Maybe someone could make an AI service to separate them out.
~simonw's demo of a quickie customized HN front-end is great.
But ultimately, your browser should have a local, open-source, user-loyal LLM that's able to accept human-language descriptions of how you'd like your view of some or all sites to change, and just like old Greasemonkey scripts or special-purpose extensions, it'd just do it, in the DOM.
Then instead of needing to raise this issue via an "Ask HN", you'd just tell your browser: "when I visit HN, hide all the AI/LLM posts".
It's pretty easy to do the user-loyal bit, with a bit of prompting to give an LLM your preferences/profile. Not ideologically loyal, but I mean acting in accordance with your interests.
The tricky part is having that act across all sites in a light and seamless way. I've been working on an HN reskin, and it is only fast/transparent/cheap enough because HN has an API (no scraping needed), and the titles are descriptive enough that you can filter based on them, as simonw's demo does. But it's still HN-specific.
I don't know if LLMs are fast enough at the moment to do this on the fly for arbitrary sites, but steps in that direction are interesting!
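For the HN-specific case, the title-filtering step can be sketched in a few lines against the public Algolia HN search API; the blocklist keywords here are purely illustrative, and a real "user-loyal" agent would derive them from your profile instead:

```python
# Minimal sketch: fetch front-page titles from the public Algolia HN API
# and drop ones matching a crude keyword blocklist. The keyword list is
# illustrative only; it is not how any particular reskin actually works.
import json
import urllib.request

BLOCKLIST = ("llm", " ai ", "ai-", "gpt", "agentic")

def wanted(title: str) -> bool:
    """True if the title matches none of the blocklist keywords."""
    padded = f" {title.lower()} "  # padding lets " ai " match whole words
    return not any(kw in padded for kw in BLOCKLIST)

def frontpage_without_ai():
    url = "https://hn.algolia.com/api/v1/search?tags=front_page"
    with urllib.request.urlopen(url) as resp:
        hits = json.load(resp)["hits"]
    return [h["title"] for h in hits if wanted(h["title"])]

# The pure filter can be tried without hitting the network:
print(wanted("Ask HN: favorite keyboards?"))          # True
print(wanted("Agentic LLM pipelines in production"))  # False
```

Naive substring matching misfires on titles like "Taiwan", which is exactly why handing the classification to a small LLM is tempting.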
I'd expect a noticeable delay with current local LLMs, especially when visiting a site for the first time. But then they could potentially memoize their heuristics for certain designs, including recognizing when server-side redesigns newly require some "deeper thought".
But of course local GPU processing power, and optimizations for LLM-like tools, are all advancing rapidly. And these local agents could potentially even outsource tough decisions to heavier-weight remote services. Essentially, they'd maintain/reauthor your "custom extension" themselves, using other models as necessary.
And forward-thinking sites might try to make that process easier, with special APIs/docs/recipe-interchanges for all users' agents to share their progress on popular needs.
Now that is a browser I'd pay for. A genuine user agent, rather than just a browser.
It would also need to be able to "Recognize tasteless, ad-ridden, or other difficult-to-read pages, silently dismiss cookie popups and signup solicitations, undo any attempts to reinvent scrolling, remove all ads except for those on topics X, Y, and Z, and present the page using something like Firefox's reader mode."
Other requirements would include "Fill in these fields that are marked as autocomplete=off," "Use this financial site to display exactly the charts and tables that I want, in this order," "Clean up broken, irrelevant and repetitive search listings on Amazon and eBay," and so on.
For extra credit: "Maintain this persona on Facebook, this one on Bluesky, this one on Slashdot, and this one on HN. Synthesize documents needed to establish proof of age and other aspects of personal identity."
AI/LLM has become a core part of IT. If you don't want AI then it seems like you want a retro-computing news aggregator, or just HN minus personal annoyances. I get it, sometimes I want the simpler days, but as long as the AI/LLM posts are not dumbed-down mainstream content I'm interested in them, and most visitors probably are too. I wouldn't have discovered most of the articles from other places. The posts match the site's on-topic criteria, from https://news.ycombinator.com/newsguidelines.html
On-Topic: Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity.
Off-Topic: Most stories about politics, or crime, or sports, or celebrities, unless they're evidence of some interesting new phenomenon. Videos of pratfalls or disasters, or cute animal pictures. If they'd cover it on TV news, it's probably off-topic.
> as long as the AI/LLM posts are not dumbed down mainstream content
The problem is, most of it really is (as it boils down to "I am / $COMPANY is using an LLM to do something; here's how you can do it too, and / or some pundit's opinion of the implications for the industry"). And the stuff that wouldn't be (like how they work, or statistics and benchmarks), often requires relatively specific domain knowledge to really appreciate.