> For example, there is really no way to tell aggregators that you changed an article's path.
Of course there is, just change the URL. Both Atom and RSS have the concept of an "entry ID" which should never change, but anything else in the entry can just be updated (other than the published date I guess).
However, this is easy to misunderstand: in RSS the ID is optional, and in Atom people frequently use the URL as the ID and then accidentally change that URL, which according to the protocol creates a new set of entries unrelated to the old ones.
In fact, in the author's case it is very likely that if they re-added items to their feed with the old ID and new URL, Feedly would update them. There is no way to undo duplicates, though, and you shouldn't re-add an entry that was removed from the feed (it will appear new to subscribers that didn't know about it previously).
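To make the ID discussion concrete, here is a minimal sketch of an Atom entry before and after a path change; the tag URI and paths are invented. The `<id>` stays fixed while the `<link>` and `<updated>` change:

```xml
<!-- Before the move: the <id> is a permanent identifier, the <link> is just metadata -->
<entry>
  <id>tag:example.com,2024:post-1234</id>
  <title>My Article</title>
  <link rel="alternate" href="https://example.com/old-path/my-article"/>
  <updated>2024-01-10T12:00:00Z</updated>
</entry>

<!-- After the move: same <id>, new <link>, bumped <updated>.
     A conforming aggregator treats this as an update, not a new entry. -->
<entry>
  <id>tag:example.com,2024:post-1234</id>
  <title>My Article</title>
  <link rel="alternate" href="https://example.com/new-path/my-article"/>
  <updated>2024-02-01T09:00:00Z</updated>
</entry>
```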
Never change the article path without a redirect.
If I honor you with a well-deserved link to your article in a section that requires the context, insights, and perspective you provide, it should not become a mistake to be avoided next time. I should not have to do the work for you, because I won't.
You’re preaching about the wrong issue. Arranging an HTTP redirect on your server won’t deduplicate records with mismanaged IDs in the aggregator.
That's why when I write something that I think will be read for years to come I use archive links to link to other pages.
I keep thinking I should archive everything but only link to the archive if the target goes missing. I could check every time a link is clicked, or periodically scan for dead links.
For a while I put magnet links under YouTube videos. I got a mail from someone who had been looking for a video for a long time. He found many pages with the deleted video on them, including mine.
> Finally, and one of the biggest issues with RSS, is that it's not very intuitive to non-tech savvy individuals.
Back when Google Reader was a thing this wasn't true. Your browser auto-detected feeds and offered subscribe options for your configured reader. It's a true shame that Firefox gave up supporting RSS out of the box to match Chrome's stance on the subject.
07-01-2013, never forget.
2013-07-01 ISO
07/01/2013 US
01.07.2013 Rest
Pick one, but not the abomination you posted, thanks. There are so many Google dates to not forget that I've started forgetting them all.
There's always a handy reminder here [0].
RSS can do that too!
> never forget
Honestly I'm pretty much over this now and ready to move on with my life :p
There are plenty of good rss readers out there - many of them created because Google left the space. I think that this diversity and decentralisation is a net positive for the world.
You’re right on that last point: Reader was nice for the community scale but most of the feed readers I’ve used have more and better features – Google never treated Reader better than the proverbial red-headed stepchild and it was on life support for years before the end.
I would reframe that “never forget” advice to something along the lines of “never forget that corporations do not care about you” – Google had a reputation for being different and a lot of us believed that to some extent; sacrificing Reader trying to make people use Google+ was a watershed moment when a lot of people had to accept that their loyalty wasn’t reciprocated.
Do you remember FeedBurner? It allowed you to subscribe to an RSS feed using many popular services - screenshot: https://www.theedublogger.com/files/2010/01/feedburner49.jpg.
There are still many RSS extensions for all web browsers, so RSS is not dead.
That's my feeling as well... When browsers had good RSS integration it was pretty great. It really destroyed my reading habits when they killed Reader and removed browser integration.
For anyone not using rss and wondering whether to get into it and how, this is what worked for me over the past year.
I used to have 4-6 sites I would check regularly. A couple occasionally had a news update article I liked reading. A couple were "firehose" sites for a hobby which produced 4-20 articles per day. Hard to keep up with on the web without feeling overwhelmed.
I got NetNewsWire which has a good sync system across machines. Then I put the occasional news articles in one folder and the firehose feeds in another.
(If a site has no obvious RSS feed, paste in the homepage; NetNewsWire seems to figure it out.)
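That trick usually relies on standard HTML feed autodiscovery: sites advertise their feed with a `<link rel="alternate">` tag in the page head. A minimal sketch of how a reader can find it, using only Python's standard library (the sample markup is invented):

```python
from html.parser import HTMLParser

class FeedFinder(HTMLParser):
    """Collects <link rel="alternate"> feed URLs from a page's markup."""
    FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        a = dict(attrs)
        if a.get("rel") == "alternate" and a.get("type") in self.FEED_TYPES:
            self.feeds.append(a.get("href"))

# Hypothetical homepage markup; a real reader fetches this over HTTP first.
html = """
<html><head>
  <link rel="alternate" type="application/rss+xml" href="/index.xml">
  <link rel="stylesheet" href="/style.css">
</head><body>...</body></html>
"""

finder = FeedFinder()
finder.feed(html)
print(finder.feeds)  # ['/index.xml']
```

(A production version would also handle multi-valued `rel` attributes and resolve the href against the page URL.)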
It now takes 2-3 min per day to look over the firehoses. I can scan, read the ones I want, and mark the rest as done.
For the news/essay sites, no longer any need to check a site periodically to see “is there an update?” It all just comes in when published.
In a world where habit transformation is hard, this one stuck. Consistently has led me to read more of the stuff I wanted to keep up with, while spending less time and energy around actually finding it.
NetNewsWire is excellent. Clean, responsive, blends into the desktop, and doesn't have memory leak issues like so many macOS RSS readers do. I wish all software could be like it.
It just turned 21, time flies!
Appreciate the mini-guide here, inspired me to give nnw and rss in general a place in my day again rather than just checking the sites manually.
Glad to help! I should have added I started light, then added in some more stuff once I had a working system.
I had tried RSS years ago, and my feed then was unwieldy enough that I never checked it. This attempt is smaller and it stuck.
I was curious what would happen if you add many. Turns out that around 1000-3000 feeds the chaos starts to self-organize. I have 30 000+ subscriptions now and it's just wonderful.
Beyond the sweet spot you have no choice but to filter out topics covered by everyone. You are aware Tucker interviewed Vladimir, there is genocide in Gaza, Biden has nothing interesting to say anymore. You just can't have tens of thousands of articles about David Bowie dying. You know that already.
And then.... when you've silenced the echo what remains is all the other things that happened in the world, the small, the unique, the rare, the interesting... the real www?
The funniest thing was Business Insider: they echo a lot, but their titles are so descriptive that they hit pretty much every filter I've created. What remains is actually interesting, something like 0.01%! The next website, with its "You will be angry when you find out about this!" articles, would have to be extremely fascinating 100% of the time (this never happened) to avoid getting blacklisted.
I do quests like finding feeds for press releases for all Fortune 500 companies.
The best one thus far was to take a list of all countries and find 1-5 English news feeds for each. It turns out many have an English version of the national news that absolutely no one reads. Who is to say that if a country has a few million people, nothing interesting happens there? Their official stance on events on the other side of the world is often so free from bias it is almost boring. A war becomes just a war, without good or bad guys.
Whatever government websites publish is usually worth reading the headline of.
HN is a good website, but the curated list of headlines on the front page requires a topic to appeal to the audience. That audience's interests aren't as narrow as on other websites, but it's still a serious limitation compared to simply following what you find interesting.
If HN one day decided to force every new book about eating bugs onto the front page, there is probably one guy out there excited about it. Then comes breeding bugs, and then growing plants to feed to your bugs... Our guy would be thrilled! Sadly for him, it can't happen. It's unthinkable.
Facebook is actively filtering out the things I want to read. lol?
Twitter is the mega echo chamber.
Reddit echoes and filters and is littered with low-effort commentary. Subs are dominated by the majority perspective.
Wikipedia is a trench war.
But what were we thinking? Why should other people be held responsible for what you read and write?
There is also the angle of programmers trying not to be political or activist. Over the years, the sum of those little crumbs of avoided politics and activism adds up to enshittification.
When Musk fired so many people at Twitter, people talked about how difficult it is to get posts onto everyone's feed.
Meanwhile my crappy laptop and my crappy internet over my crappy wifi can deal with hundreds of thousands of feeds, millions.
I can read Nazis, Maoists, Israelis, religious fanatics, hackers, and eugenicists without them making an effort to make their thing more palatable to the masses. None of them are making thousands of accounts to promote their views. There are no likes, views, or upvotes to be purchased.
When a new website of theirs finds its way into my aggregator (how?), I can get rid of it in a single click.
Sounds like you use Gnus. A wildebeest to set up, but, once done, nothing quite like it filters the filth.
How much time do you spend reading the news?
At this point, I would probably invest some time into automating it even more. Download all articles and save them as PDFs. Create full-text search over them. Use an LLM for sentiment analysis...
We did something like that during the bitcoin boom.
I spend 3-10 minutes scrolling over at least a third of 5000 headlines. Today I opened 12 in tabs doing that; I've looked at 3 of those.
I got rid of most things; I only keep [pubDate, url, title], and only if the title doesn't contain any badwords and the pubDate isn't older than the oldest item.
Each item comes with an "hours and minutes ago" timestamp, a translate-to-English button, the domain name, and a blacklist button.
While it does accept OPML, most of my feed lists are just txt files with one URL per line.
It loads one of those files, then pulls x feeds per second and parses y per second, where x and y are adjusted according to the pending requests and pending parse jobs.
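A minimal sketch of the filtering step described above, using only the standard library; the badword list, sample feed, and field choices are invented for illustration:

```python
import xml.etree.ElementTree as ET

BADWORDS = {"bowie", "gaza"}  # hypothetical blacklist of title keywords

def extract_items(feed_xml):
    """Keep only (pubDate, url, title) triples whose title passes the badword filter."""
    items = []
    for item in ET.fromstring(feed_xml).iter("item"):
        title = item.findtext("title", "")
        url = item.findtext("link", "")
        pub = item.findtext("pubDate", "")
        if any(word in title.lower() for word in BADWORDS):
            continue  # title hit a filter, drop the item
        items.append((pub, url, title))
    return items

# Invented sample feed: the second item should be filtered out.
sample = """<rss><channel>
  <item><title>Local bridge reopens</title><link>http://ex.com/1</link>
        <pubDate>Mon, 05 Feb 2024 08:00:00 GMT</pubDate></item>
  <item><title>Yet another David Bowie retrospective</title><link>http://ex.com/2</link>
        <pubDate>Mon, 05 Feb 2024 09:00:00 GMT</pubDate></item>
</channel></rss>"""

print(extract_items(sample))  # only the bridge story survives
```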
While you could certainly do many interesting things, most of my experiments didn't survive my desire to keep it simple. For example, I used to dump the results on a web page, but I couldn't convince myself it was useful to the goal.
I bought an old used server with 24 CPU cores and way too much RAM. I use it to self-host a number of things (torrent box, NAS, UniFi controller, DNS server). I recently added FreshRSS.
I love it. It halved my social media use, and my "newspaper" now has feeds for the subjects that interest me, meaning most of my internet usage can be found in one place.
This was my life back with Reader. So strange what my process is like now, so inefficient.
I recently started using the app and I share a lot of the same impressions.
I went ahead and checked their GitHub after reading about the sync feature here, but sadly they currently only support macOS and iOS.
Personally, I have yet to find an app with sync features that doesn't require self-hosting or freeware account creation. Regardless, as long as you read on one device it is still possible to have a decent experience.
I remember the thread where people shared their blogs on HN.
Then, someone aggregated all of the feeds they could find in an OPML file, from which I built a site that shows an aggregation of some of the latest blog posts: https://hn-blogs.kronis.dev/
I think RSS is pretty cool! Not many people know this, but even Thunderbird has a built in RSS feed reader, so I don't even need a separate program: https://blog.thunderbird.net/2022/05/thunderbird-rss-feeds-g...
Do you mean this thread: https://news.ycombinator.com/item?id=36575081
this was really cool. there should be a monthly "which blogs got updated this month?" post on HN like the hiring/wants to be hired posts.
Yep, that's the thread the data came from, would be cool to get more up to date data some time. I'm not sure whether enough people start new blogs for it to be a monthly thing, but once or twice per year would be neat!
I love RSS. I take a bit of an unconventional approach and use Discord as my RSS reader. I run a self hosted instance of MonitoRSS (https://github.com/synzen/MonitoRSS). I have a server with just me and my bot instance and I tend to group my feeds into categories and channels (effectively creating a tab system per subscription or group of subscriptions). I have Discord installed on my laptop, phone, and desktop so this means that I can easily look at all my subscribed feeds wherever it's convenient for me. When I'm not set to "do not disturb", I even get push notifications on my devices when content is posted to feeds that go to channels I haven't muted. I think the only real downside of the setup is some days I am very busy and don't check the server that often, so I'll come back to a large backlog of things to read and I'll end up missing or under-appreciating some gems.
Yay RSS :) A habit (useful one, i might add) i'll never get rid of :)
Quick PSA here: if you ever want (or need) to filter an RSS feed to get rid of adverts, or of random rants on a certain topic you don't want to read about - there's RSS Bridge [1] which lets you do that.
It also provides some bridges that let you subscribe to sites that don't even have an RSS feed via scraping, or YouTube channels or ....
It's pretty much a swiss army knife for RSS (no affiliation, just a happy, self-hosting, user!)
Just a heads up that YouTube channels do generally have RSS feeds; e.g.: https://www.youtube.com/feeds/videos.xml?user=NFL or https://www.youtube.com/feeds/videos.xml?channel_id=UC8WAaaW...
But RSS Bridge does look very neat!
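Since those feed URLs follow a fixed pattern, they can be generated mechanically. A small helper, assuming only the query parameters shown above plus the analogous playlist_id parameter:

```python
def youtube_feed_url(channel_id=None, user=None, playlist_id=None):
    """Build a YouTube RSS feed URL from one of the supported identifiers."""
    base = "https://www.youtube.com/feeds/videos.xml"
    if channel_id:
        return f"{base}?channel_id={channel_id}"
    if user:
        return f"{base}?user={user}"
    if playlist_id:
        return f"{base}?playlist_id={playlist_id}"
    raise ValueError("need a channel_id, user, or playlist_id")

print(youtube_feed_url(user="NFL"))
# https://www.youtube.com/feeds/videos.xml?user=NFL
```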
Oh good, they're back! There was concern a little while ago that Google had killed off the YouTube RSS feeds
https://news.ycombinator.com/item?id=39179446
Click saver: 2 weeks ago YouTube channel RSS feeds were returning a 404 response (It looks like they actually came back while that thread was still active)
It's a fair point - Google cannot be relied upon to continue providing this kind of open access indefinitely. I noticed my RSS reader hitting those 404s at the time, and I promptly started looking into the Piped / Invidious infrastructure, thinking Google is pulling up the drawbridge (a la Reddit, Twitter, etc). Then YT RSS came back and I stopped, but I really should switch my feeds away from Google anyway.
Just paste the channel URL into your reader and it should find the feed automatically. YouTube has the proper discovery metadata set up.
Related, from Molly White's excellent review of Chris Dixon's Read Write Own [1]:
> RSS is dead, he repeats endlessly throughout the book.[...] It's profoundly weird to read RSS's obituary as a person who checks her very-much-still-alive feed reader several times a day to get everything from cryptocurrency news to dinner ideas, and who rarely encounters a website that doesn't provide a functional feed.
[1] https://www.citationneeded.news/review-read-write-own-by-chr...
This little footnote made the obituary of RSS even more hilarious:
> As it happens, Dixon's very own website has a functioning RSS feed. He may not even realize this, as RSS is so ubiquitous that many website and blog software products either build it in by default, or make it easy to add with simple plugins.
I think that’s especially relevant given the context of that review. Chris Dixon is regurgitating a bunch of blockchain sales points because he has major investments in the space–a fact he’s not quick to share with readers–and needs to talk them up before he can exit. It’s worth asking whether he gives RSS the opposite treatment because he never figured out a way to get VC-scale returns out of a technology which isn’t amenable to gatekeeping.
No matter how much people want to proclaim RSS is "dead" there is actually no way to kill it. It is decentralized. No one owns it. If we keep publishing RSS feeds, it still lives.
Its death can come from sites deciding it's no longer worth supporting when they do a redesign. I can see this happening if a bunch of young front-end devs are put in charge of a full redesign.
I’m currently on a project where the front end has all the control and none of them seem to know anything about computers or tech in general. It seems like they all went to an 8 week boot camp and got a job. It has been very frustrating.
On behalf of front-end devs, I'm sorry. I'm not an amazing FE dev, cut my teeth on Rails projects and weird PHP, but the lack of simple thinking in the frontend world hurts my soul. I need to switch...
I’m sure there are FE devs who aren’t like this, it’s just been my experience over the last 3 years. I don’t mean to be disparaging, I’m just frustrated.
To add to the issues, the FE team has a dedicated QA team, we don’t have any QA for the backend, we just need to do it ourselves. But the QA team doesn’t know what they are even looking at (they’ve never spoken to us). They make sure the UI does UI things, but don’t understand the goal of what it should accomplish, so we (the backend devs) end up needing to do a significant amount of the FE QA, as what they are looking to release is just bad.
No, don't apologize, you're right. Frontend should be a junior's intro to backend, and I will advocate for this forever. Senior FE devs should be the competent ones who really like FE.
> the FE team has a dedicated QA team
What a headache. Should be unified.
Comment was deleted :(
These days too much information is published as newsletters, which requires tools to republish it as RSS. RSS is not dead, but halfway there, sadly.
Woohoo, I love RSS. I feel this is as good a time as any to share Bubo RSS [0] by George Mandis, as well as my personal fork [1].
In essence, all it really is is a build script that reads your RSS feeds from a JSON file and builds a static site as one HTML file and one CSS file. You can then run the script at whatever interval you want (basically however often you want to update your "feed"). I do this using GitHub Actions, publishing to GitHub Pages [2]. Anyway, it's awesome and I've been doing this for a few years now.
[0] https://github.com/georgemandis/bubo-rss
I could not be happier with my self hosted FreshRSS.
It's a major way I stay in touch without having to doomscroll.
Why not provide a URL for your recommendation?
This is what I am using, with the FeedMe app on Android so I can read wherever I am. Works well.
What does FeedMe offer that's better than the freshrss mobile web view?
RSS is great, and I can't imagine what my life would be like if I had to scan the news on each individual site included in my RSS feed. However, the text-rendering feature is rather useless for resources that are greedy for views, clicks, and other analytics, as they simply truncate the body of the article, leaving you no choice but to open the original URL to read the rest. For my use case in particular a full title is enough, but I was rather bummed initially that the promised "read all articles rendered in the same style with no distractions that existed on the original website" never worked out.
> as they simply truncate the body of the article, leaving you no choice but to open the original url to read the rest of it
Inoreader has a 'Load Full Content' button above every (shortened) feed item. It retrieves the original in full and displays it as clean text (no ads and little HTML formatting).
I discovered the button by accidentally clicking it - it's an unintuitive steaming coffee cup.
BazQux is similar! They have a depiction of a couch prominently in the toolbar, zero descriptive text, and no documentation. I needed to open the site on desktop to get a hint about what it does (as hover text).
I like the iOS client Unread for this. You can set it feed by feed to either display the feed content as-is, or have it parse each article reader-mode/Instapaper style and show that instead of the feed text.
I imagine others do this too, keep an eye out for it as a feature! Very handy :)
>leaving you no choice but to open the original url
This is auto-unsubscribe territory for me.
Some RSS readers have the ability to scrape the article in that case.
I use RSS for YouTube and it's been great, but lately a lot of "Shorts" content has been junking up the feeds.
If you're using https://newsboat.org, you can add a filter (killfile) to remedy this:
ignore-article "*" "title =~ \"#shorts\""
ignore-article "*" "content =~ \"#shorts\""
Not perfect though, since it only ignores videos with the hashtag '#shorts' in the title or description, and not all Shorts include that hashtag. If you're using Feedbin, I've been successfully using Feedbin's "Actions" to select for Shorts with the search field media_duration and then mark items under 60 seconds as read. That feature and the search syntax are a little bit hidden, but are worth it:
Does this still work? This post from a couple weeks ago was saying they were down. Was this a temporary outage, not a removal?
https://news.ycombinator.com/item?id=39179446
I like the idea of moving some of the channels I actually like to RSS.
Yes, it still works, I use it daily. They have had a couple of instances of downtime in the last year, a lot more than YouTube itself, but still very reliable.
They removed the OPML export from the subscription manager unfortunately.
I'm subscribed to a YouTube playlist RSS feed and it last updated on Saturday.
As best as I can tell, there is no difference between Shorts and full-length videos when coming from the RSS feed. It even loads in the full player on the website when following the URL.
The best I've come up with is to use youtube-dl to get the duration, and if it's shorter than 60s, drop the item.
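The duration rule itself is simple to sketch; here the duration lookup is left as a pluggable callable, since a real version would wrap youtube-dl (or yt-dlp) and a network call. Everything below is a hypothetical illustration:

```python
def is_probable_short(duration_seconds):
    """YouTube Shorts are capped at 60 seconds; that is the whole heuristic."""
    return duration_seconds is not None and duration_seconds <= 60

def filter_feed_items(items, get_duration):
    """Drop feed items whose video duration marks them as Shorts.

    `get_duration` maps a video URL to a length in seconds; in practice it
    would wrap something like yt-dlp's metadata extraction, stubbed here.
    """
    return [it for it in items if not is_probable_short(get_duration(it["url"]))]

# Usage with a stub lookup instead of a real youtube-dl call:
items = [{"url": "v1"}, {"url": "v2"}]
durations = {"v1": 45, "v2": 600}
print(filter_feed_items(items, durations.get))  # [{'url': 'v2'}]
```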
I generally just use the thumbnail to quickly manually filter shorts. If the thumbnail is vertical it is 99% a short.
A few channels I subscribe to make interesting shorts, so I keep them. But for everyone else, the delete key is very fast to press.
I have 50 or so RSS feeds; a few went stale and I thought the URL had changed. Turns out they are just not publishing. It's pretty rare to find a site I want without being able to find an RSS feed for it.
I use it almost every day, mostly to follow new content.
For a while I used social media (Facebook, Twitter, etc.), but that stopped working well for me when algorithms started deciding what I was supposed to see. On top of this, the "drama" I was seeing and wasting my time on, the ads, the privacy concerns, etc., made me leave social media and go back to RSS. For this use case, it's much better than social media, even when sites decide to only include a paragraph or two to force me to visit the website.
RSS is still pretty great indeed. It's just that big tech and social media services want you to believe it's "dead", to keep you as hooked and addicted to their systems as possible.
I’ve used multiple rss readers over the years. I would read them for a few months and then stop because news reading would become a habit. I’ve ended up using bookmark folders in Firefox and visiting the sites I like once every few months when I can focus on them.
Where rss still gets the most use from me is podcasting. It’s open and human readable and it’s great to dump an opml file into a new player or catcher and have episodes download and start listening instantly.
I actually find that for most feeds, I dislike chronological sorting. It is a good default, but I prefer to have rules that promote certain authors and keywords. For example, on my Arxiv feeds, I promote certain keywords, and "autoread" everything else. On news feeds, I promote articles about my specific area of the city.
I use Gnus to create my own recommendation engine to do this.
As someone who is not a techie, I find it really difficult to navigate and use RSS.
Just toss in a site you like, it'll (try to) find the RSS feed, and you're done.
I have about 50 sites added in the old reader, and it makes following them easy.
Waiting for a self-hosted option with Feedly Pro features:
Web scraping (limited) for sites that don't have RSS.
Filtering out dupes.
AI priority filters for specific topics/subjects.
Twitter addresses a bunch of the downsides of RSS that were listed. The author's criticism of Twitter seems to be that it is too easy for people to use, that it is too helpful in finding you things to read, and that it offers a way to reply to posts. These seem like a net benefit to me, and if you don't want to see replies then don't click into a tweet.
But Twitter doesn't address many of the downsides of social media that were listed, or many of the upsides of RSS that were listed.
> But Twitter doesn't address many of the downsides of social media that were listed, or many of the upsides of RSS that were listed.
Seriously. You even have to log in just to view or search stuff.
Logging in makes it easy to sync your feed between devices. Making an account is quick and easy.
But Twitter is just one website. Must we have an account at _every_ website?
Gathering accurate publication dates is the only problem I’ve faced with RSS.
Some of these channels flat out don’t give them to you at all. Instead, I have to either resort to polling methods or archive data.
I’m open to better solutions
Just opened this via Miniflux
The main problem with RSS is the potentially large amount of (useless) traffic produced by subscribers to a personal website that is updated infrequently.
There were some services to fix this problem, but most of them have been (deliberately) shut down by now.
> there might be a large quantity of (useless) traffic
RSS has update hints in the form of skipHours, skipDays, and TTL to let clients know whether you expect to publish regularly. Also, most RSS feeds are just links with a tiny amount of text; a large YouTube channel produces a feed file in the 30KB range, for example. It's unlikely you'll ever exhaust your transfer quota at any provider with that amount of data.
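For reference, a sketch of those channel-level hints in RSS 2.0; the values are illustrative, and `<ttl>` is expressed in minutes:

```xml
<channel>
  <title>Example Blog</title>
  <!-- Ask well-behaved clients to cache for 720 minutes (12 hours) -->
  <ttl>720</ttl>
  <!-- Hours (GMT, 0-23) during which the feed is never updated -->
  <skipHours><hour>0</hour><hour>1</hour><hour>2</hour></skipHours>
  <!-- Days on which the feed is never updated -->
  <skipDays><day>Saturday</day><day>Sunday</day></skipDays>
</channel>
```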
Quite easy to CDN that file as well.
Some RSS clients just ignore the hints.
Does Atom support this? It looks like it might be RSS-only?
Technically yes, although it may not be respected.
> Elements from other namespaces may be included pretty much anywhere. This means that most RSS 1.0 and RSS 2.0 modules may be used in Atom.
https://validator.w3.org/feed/docs/atom.html#extensibility
I believe this wasn't included because you can just set the Expires header on the web server to control cache times.
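Expires aside, most feed servers also honor HTTP conditional GET, where the client replays the validators it saved from the previous fetch and gets a cheap 304 if nothing changed. A small sketch of building those request headers (the stored values are hypothetical):

```python
def conditional_headers(etag=None, last_modified=None):
    """Headers for an HTTP conditional GET; the server answers 304 Not Modified
    (with an empty body) if the feed is unchanged since the last fetch."""
    headers = {}
    if etag:
        headers["If-None-Match"] = etag
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    return headers

# Values as they might have been saved from a previous response's
# ETag and Last-Modified headers:
print(conditional_headers(etag='"abc123"',
                          last_modified="Mon, 05 Feb 2024 08:00:00 GMT"))
```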