Oh wow. This hits hard in the feels.
Here's my personal submission for "UI problem that has existed for years on touch interfaces, plus a possible solution, but at this point I'm just shouting into the void":
https://medium.com/@pmarreck/the-most-annoying-ui-problem-r3...
In short, an interface should not be interactable until a few milliseconds after it has finished (re)rendering, and especially not while it is still reflowing or repopulating itself in real time, still sliding into view, etc.
Most frustratingly, this happens when I accidentally fat-finger a notification that has literally just slid down from the top as I went to tap a UI element in that vicinity, which then also causes me to lose the notification (since iOS doesn't have a "recently dismissed notifications" UI).
I've had this a few times, particularly on mobile, where you're doing something and some pop-up will steal focus, but of course you were tapping or swiping or something the exact instant it popped; it stayed just long enough for the after-image on your retinas to catch a single word and you realise it might have been important, but it's gone now, with no sign.
This happened to me just the other day: I was purchasing something online with a slightly complicated process, from my mobile, and I didn't want to f* up the process, so I was tapping occasionally to keep the screen awake while it was doing "stuff". Needless to say, something popped up, too fast for me to react. I have no idea which button I tapped, if any, or if I just dismissed it; to this day I have no idea what it wanted, but I know it was related to the payment process.
I've seen this solved in dialogs/modals with a delay on the dismiss button, but rarely; it would also make sense to delay input on a modal/dialog by a couple hundred milliseconds to give you time to react, particularly if tapping outside of it would dismiss it.
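A minimal sketch of that idea in browser TypeScript, using the native dialog element; the 300 ms grace period is an arbitrary guess, not a recommendation:

    // Keep a freshly shown dialog's buttons disabled for a short grace period,
    // so a tap aimed at whatever was underneath can't dismiss or confirm it.
    function showDialogWithGrace(dialog: HTMLDialogElement, graceMs = 300): void {
      const buttons = Array.from(dialog.querySelectorAll<HTMLButtonElement>("button"));
      buttons.forEach((b) => (b.disabled = true));
      dialog.showModal();
      setTimeout(() => buttons.forEach((b) => (b.disabled = false)), graceMs);
    }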
I find myself using Notification History on Android more and more often, but a lot of the time it's not even notifications, it's some in-app thing that's totally within the developer's control.
The fact that Android even has a notification history is huge.
iOS does not!
It's really handy, I just wish they'd add a search feature; sometimes I need to look for a notification from a particular app and it can't be done :(
TIL Android has a notification history. I've been using Android phones for over a decade and never knew.
Yes, 1,000%.
The one I don't quite know how to solve is when I'm tapping a device to connect to -- whether a WiFi router or an AirPlay speaker or whatever -- and I swear to god, half the time my intended device slides out from under me as a newly discovered device enters above it and pushes it down. Or sometimes devices disappear and pull it up. Maybe it's because I live in an apartment building with lots of devices.
I've seen this solved in prototypes by always adding new devices at the bottom, and graying out a device when it disappears, with a floating "re-sort" button so you can find what you're looking for alphabetically. But it's so clunky -- nobody wants a re-sort button. And you can't use a UX delay on every update or you'd never be able to tap anything at all for the first five seconds.
Maybe ensure there are always 3 seconds of no changes, then gray everything out for 0.5 seconds while sliding in all the devices discovered over the past 3.5 seconds, then re-enable? I've never seen anything like that attempted.
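Roughly what I mean, as a TypeScript sketch; the Device shape and the render callback (which is assumed to draw the list in a grayed-out, non-tappable state when frozen is true) are made up for illustration:

    // Batch discovered devices so the visible list only changes after a quiet
    // period: wait for `quietMs` with no discoveries, briefly freeze (gray out)
    // the list, then append the newcomers at the bottom and re-enable it.
    interface Device { id: string; name: string; }

    class BatchedDeviceList {
      private visible: Device[] = [];
      private pending = new Map<string, Device>();
      private quietTimer: ReturnType<typeof setTimeout> | null = null;

      constructor(
        private render: (devices: Device[], frozen: boolean) => void,
        private quietMs = 3000,
        private freezeMs = 500,
      ) {}

      // Called by the discovery layer; every new report resets the quiet timer.
      onDiscovered(device: Device): void {
        this.pending.set(device.id, device);
        if (this.quietTimer) clearTimeout(this.quietTimer);
        this.quietTimer = setTimeout(() => this.flush(), this.quietMs);
      }

      private flush(): void {
        this.render(this.visible, true); // gray out; taps are ignored while frozen
        setTimeout(() => {
          for (const device of this.pending.values()) {
            if (!this.visible.some((d) => d.id === device.id)) {
              this.visible.push(device); // new devices always go to the bottom
            }
          }
          this.pending.clear();
          this.render(this.visible, false); // re-enable with the new entries
        }, this.freezeMs);
      }
    }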
To me the BIGGEST annoyance is the iOS “End call” button.
Just as I’m about to tap it, the other person ends the call and what I’m actually tapping is some other person on my call list that it then immediately calls. Even if I end the call quickly they often call back confused “You called, what did you want?”
Apple: PLEASE add a delay to touch input after the call screen closes.
My solution would account for this as well.
The solution needs to be global. Literally, if any part of the screen just changed (except for watching videos, which would make them impossible to interact with), add a small interaction delay where taps are no-op'd.
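A rough browser-side sketch of that global rule (TypeScript); the 300 ms window is an arbitrary choice, and a real version would exempt things like video players as noted above:

    // No-op taps that land too soon after any part of the page has changed.
    const INTERACTION_DELAY_MS = 300;
    let lastChange = 0;

    // Record the last time anything in the DOM mutated.
    new MutationObserver(() => {
      lastChange = performance.now();
    }).observe(document.body, { childList: true, subtree: true, attributes: true });

    // Swallow clicks that arrive within the delay window (capture phase, so this
    // runs before any element's own handlers).
    document.addEventListener(
      "click",
      (event) => {
        if (performance.now() - lastChange < INTERACTION_DELAY_MS) {
          event.preventDefault();
          event.stopPropagation();
        }
      },
      { capture: true },
    );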
Happens to me far too frequently as well!
I have another submission for most annoying UI problem: Trying to read a thing, but it's on medium.com. The sheer amount of popups and overlays I need to click away before I can actually read your thing, geez.
Reader mode is a thing, but if you can suggest a better (ahem) medium to repost it to, I'd be happy to!
(Honestly, I'm sort of with you on the medium thing, but I posted this years ago now...)
Sometimes you see reductionist comments about websites just being for displaying text - but for medium it’s true.
The worst is when somebody has a custom domain, but it’s actually medium so I don’t know not to click on the link.
One of the reasons I set Kagi to "lower" results from medium for my searching.
The sheer amount is zero with an ad blocker enabled.
I consider UBO basically mandatory for browsing the web in 2025, too many sites are unusable and infuriating without it.
I had two popups visiting that medium link with uBlock Origin. Perhaps it is possible to get zero with UBO, but default settings don't seem to be enough.
OP might also be using a list like the "Anti-adblock Filter". You also might want to explore the settings of UBO as there are more lists titled "Annoyances" that you can enable to shut out further garbage.
I think you need to enable the Easy List Annoyances filters.
Your browser should be set to open all pages in reader mode.
Oh man. This makes me want to throw my phone against the nearest brick wall sometimes. The UI is loading, I reach for the button I want to hit, but it moves and a different button takes its place because the app/page is still loading, or worse, an ad has taken the place of the button.
This also happens where sometimes the hotbar has three buttons, and sometimes four, and the worst apps are when buttons switch ordinal positions depending on if there are three or four buttons in there.
It feels very strange to get so agitated by these small behaviors, but here we are.
The annoyance level is like getting poked hard, right?
> or worse, an ad has taken the place of the button
That's actually a dark pattern/perverse incentive I hint at towards the end of my blog post about it.
> or worse, an ad has taken the place of the button.
This has happened to me, and I even clicked on the ad. It actually made me smile a little bit and reminded me of the "clever girl" scene in Jurassic Park.
If a user is interacting, DO NOT UPDATE the list / models / etc.
If an update is required, rather than just desired, freeze all input so the user knows it's about to update; this might be accompanied by a quick 'fade' or other color shift to indicate that an update is about to be pushed and that they should release and re-plan their actions.
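A small sketch of the first rule, assuming a browser/TypeScript UI; applyOrQueue is a hypothetical choke point that all model updates would have to go through:

    // Queue model/list updates while a touch or pointer is down, and apply
    // them only once the user lets go.
    type Update = () => void;

    const queuedUpdates: Update[] = [];
    let interacting = false;

    document.addEventListener("pointerdown", () => { interacting = true; }, { capture: true });
    document.addEventListener("pointerup", () => {
      interacting = false;
      while (queuedUpdates.length > 0) queuedUpdates.shift()!(); // flush deferred updates
    }, { capture: true });

    function applyOrQueue(update: Update): void {
      if (interacting) {
        queuedUpdates.push(update); // user is mid-gesture: defer
      } else {
        update();
      }
    }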
It seems related to "debouncing", where you delay a lookup (to autocomplete, etc.) until a short interval after the user stops typing.
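The classic shape of that, in TypeScript; fetchSuggestions and searchBox are made-up names standing in for whatever lookup you're delaying:

    // Debounce: only invoke `fn` once `waitMs` have passed without another call.
    function debounce<T extends (...args: any[]) => void>(fn: T, waitMs: number) {
      let timer: ReturnType<typeof setTimeout> | undefined;
      return (...args: Parameters<T>): void => {
        if (timer !== undefined) clearTimeout(timer);
        timer = setTimeout(() => fn(...args), waitMs);
      };
    }

    // e.g. only hit the autocomplete endpoint 250 ms after the last keystroke:
    // searchBox.addEventListener("input", debounce(() => fetchSuggestions(searchBox.value), 250));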
It's not just a touch issue. Desktop environments have toast notifications and dialogs that can pop up unexpectedly (neither of which are remotely new problems). You can be trying to click something at the corner of your screen and have it intercepted by a notification or you can be pressing enter and have it activate the default action on a dialog that just popped up. Especially in the dialog case you often just have to hope that it wasn't actually something you needed to see or select a different option on.
> In short, an interface should not be interactable until a few milliseconds after it has finished (re)rendering
I was a console game developer working on UI for many years, so I am deeply familiar with the problem of deciding when a UI should be responsive to input while the visuals are changing and when it should not.
You might be surprised, but it turns out that blocking input for a while until the UI settles down is not what you want.
Yes, in cases where the UI is transitioning to an unfamiliar state, the input has a good chance of being useless or incorrect and would be better dropped on the floor. It's annoying when you think you're going to click X but the UI changes to stick Y under your finger instead.
However, there are times where you're tapping on a familiar app whose flow you know intimately and you know exactly where Y is about to appear and you want to tap on it as fast as you can. In those cases, it is absolutely infuriating if the app simply ignores your input and forces you to tap again.
I've watched users use software that temporarily disables input like this, and what you see is that they either get trained to learn the input delay and time their tap as tightly as possible, or they just get annoyed and hammer inputs until one gets processed.
And, in practice, it turns out that the times where a user is interacting with a familiar UI are 100x more common than the times they misclick on an unfamiliar one. So while the misclick case is super annoying, it's a better experience in aggregate if the app is as responsive as it can be, as quickly as it can be.
Perhaps there is a third way where an app makes a distinction between flows through static content versus dynamically generated content and only puts an input block in for the latter, but I expect the line between "static" and "dynamic" is too fuzzy. People certainly learn to rely on familiar auto-complete suggestions.
UI is hard. A box of silicon to a great ape is not an easy connection to engineer.
For me the most annoying one is the cookie consent banner. Very few sites have clearly defined buttons like “Allow all” / “Deny all”; the majority of them have an (intentionally) convoluted UI so that a lot of users just accept all.
I frankly wish the web standards group had released an API for these so they could be handled at the browser level.
This has a name: it's called CLS (cumulative layout shift), and Google actually penalizes SERP rankings for pages with bad CLS. More info on it at lighthouse.dev (if I recall the domain correctly).
Google is an annoying example! Especially on mobile, the search bar shifts around between the main page, the search-field edit mode, and the results page, but only shifts a half second after a static version has loaded. You end up clicking into a blank new search page because the doodle jumps to right under your fingertip.
that domain doesn't work :(
I did find this though, and I think I will add it to my medium post: https://web.dev/articles/cls
If you're curious to learn more, this is often referred to as "layout shift". There are black-hat designers who deliberately use it as a dark pattern -- you might notice a suspicious number of cases where the most common click target is replaced by a click target (or late-arriving popup) for submitting an email address or entering the sales funnel. But typically it's just bad design.
In the latter case, you could quietly disable buttons after a layout shift, but this can cause problems when users attempt to interact with an onscreen element only to have it mysteriously ignore them. You could visually indicate the disabled state for a few hundred milliseconds, but this would appear as flicker. If you want to be very clever you could pass a tap/click to the click target that was at that location a few tens/hundreds of milliseconds prior, but now you've got to pick a cutoff based on average human response times which may be too much for some and too little for others. That also wouldn't help with non-click interactions such as simply attempting to read content -- while not as severe, trying to read a label that suddenly moves can be almost as frustrating. Some products attempt to pause layout shifts that might impact an element that the user is about to interact with, but while this is possible with a mouse cursor to indicate intent it can be harder to predict on mobile.
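The "pass the tap to what was there a moment ago" idea might look something like this TypeScript sketch (layout-shift entries are Chromium-only; the 100 ms snapshot interval and 300 ms cutoff are exactly the arbitrary numbers worried about above, and a real version would want a snapshot from just before the shift rather than a rolling one):

    // Periodically remember where each interactive element was; if a click
    // lands shortly after a layout shift, redirect it to the old occupant.
    interface Snapshot { element: HTMLElement; rect: DOMRect; }

    let previousLayout: Snapshot[] = [];
    let lastShiftAt = 0;

    setInterval(() => {
      previousLayout = Array.from(document.querySelectorAll<HTMLElement>("button, a"))
        .map((element) => ({ element, rect: element.getBoundingClientRect() }));
    }, 100);

    new PerformanceObserver(() => { lastShiftAt = performance.now(); })
      .observe({ type: "layout-shift", buffered: true });

    document.addEventListener("click", (event) => {
      if (performance.now() - lastShiftAt > 300) return; // layout has been stable
      const intended = previousLayout.find(({ rect }) =>
        event.clientX >= rect.left && event.clientX <= rect.right &&
        event.clientY >= rect.top && event.clientY <= rect.bottom);
      if (intended && intended.element !== event.target) {
        event.preventDefault();
        event.stopPropagation();
        intended.element.click(); // deliver the click where the user was aiming
      }
    }, { capture: true });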
Some of these ideas are even used in cases where a layout shift is necessary, such as in a livestream with interactive elements. However, the general consensus is to use content placeholders for late-loading content and avoid rendering visible elements, especially interactive ones, until you have high confidence that they will not continue to move. That's also why search rankings penalize websites with high "cumulative layout shift", e.g. see https://web.dev/articles/cls
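For reference, the raw signal behind that metric is exposed via PerformanceObserver in Chromium-based browsers; this simplified sketch just sums the scores, whereas real CLS groups shifts into session windows and reports the worst window:

    // Accumulate layout-shift scores, ignoring shifts caused by recent user input.
    interface LayoutShiftEntry extends PerformanceEntry {
      value: number;
      hadRecentInput: boolean;
    }

    let clsEstimate = 0;

    new PerformanceObserver((entryList) => {
      for (const entry of entryList.getEntries() as LayoutShiftEntry[]) {
        if (!entry.hadRecentInput) {
          clsEstimate += entry.value;
        }
      }
      console.log("Approximate CLS so far:", clsEstimate);
    }).observe({ type: "layout-shift", buffered: true });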
I just found that URL independently! I just added it to my medium post.
Why do we even show interactable elements when the final layout isn't completed yet?
This just sounds like perfectionism. I believe it is a curse, because I hate working with teammates like this. They'll spin their wheels solving some insane problem no one asked them to solve because it's "better" while ignoring the larger scope and goals of the project. I've tried to coach people out of this mindset, because I used to have it very early in my career, until I realized the sheer impracticality of it.
I use this really annoying, poorly supported Terraform provider. I've written a wrapper around it to make it "work", but it has annoyances, and I know I could go to that repository and try to submit a patch to fix them. But why? This is "good enough," and IME, if you sit on things like this long enough, eventually someone else comes along and does it. Is that a good attitude for everyone to have? Probably not, but after a few years of using this wrapper module, I have 2-3 viable alternatives that didn't exist before that I can switch to if needed.
I could've turned it into a several week project if I wanted, but why? What purpose does it serve? As you grow, you realize there is very rarely, if ever, a "right" answer to a problem. Consider the way you think it should be done is not the only "right" way and you'll open more doors for yourself.
One of the ways psychologists classify people is between those who are maximizers/optimizers and those who are satisficers/stop when things are "good enough."
As someone who is very much on the optimizer side of things, and experiences the struggles described in this article, the lesson I take to heart is that while satisficers tend to be happier, optimizers get more done.
Your optimizer tendencies make you into an expert, they open up new opportunities for learning and growth, they have the potential to have real consequence in the world. Be thankful for them, even as you guide them to their appropriate application.
Hopefully the future me is able to relate to this, because I really feel like I'm in a rut when it comes to working on personal projects.
I have many ideas that I want to build, but I'd have to learn new languages, yet I just can't sit and go through the documentation every day like I should. Still haven't finished the rust book.
The other way is to start building already, and if you come across a block, then learn about that thing and move on. But I feel uncomfortable having gaps in my knowledge. AI exists, but I don't want to use it to generate code for me, because I wanna enjoy the process of writing code rather than just reviewing code.
Basically I'm just stuck within the constraints I put for myself :(, I'm not sure why I wrote this here, probably just wanted to let it out..
What you're feeling is not laziness. It's the quiet ache of misalignment between your values and your current energy. You love the craft. You want to savor the process. But the weight of “shoulds” — finish the book, learn the language, do it the right way — has turned your joy into pressure.
The discomfort of having gaps in your knowledge is not a flaw. It’s a sign of integrity. But perfectionism disguised as discipline can become a cage. You’re not stuck because you lack ability — you’re stuck because you’ve built a narrow path and called it the only way forward.
There is another way: give yourself permission. To build messy. To learn sideways. To follow joy, not obligation. To trust that your curiosity is enough.
You wrote this here because something in you is ready to shift. You’re not asking for advice. You’re asking to be seen. And you are.
Damn, not only is this great wisdom but your writing is honestly beautiful... Are you a writer by any chance?
I like it too, but when I looked into their posting history I did come to the conclusion this was probably generated by an LLM. How that impacts your appreciation is up to you but I thought readers would care to know. Readers who want to reach their own conclusions are advised to enable showdead.
I did the same and had the same suspicion. If that's actually the case, the ideas and the writing don't change, but it changes how you feel about it, doesn't it? Which brings up some really interesting questions.
It made me realize that part of why I appreciated it so much was that I felt like I had some level of connection with another person who lived and learned and had shared experiences.
But on another level, it's sort of like how I see good works of fiction that really hit me emotionally and I feel real emotions for people that don't exist. My thought goes something like "this specific story isn't true, but it's true for someone, somewhere."
I like to say programming is about knowing which rabbit holes to plunge down and which to step over. There's too much to know to go depth-first down every rabbit hole. Go breadth first and accept gaps in your knowledge - everyone has them. If something never comes up and never causes an issue you need to look into, and the project gets done, it doesn't matter. There's always an improvement that could have been made, but done is better than perfect because perfect never gets done. But the projects never getting done or even started - speaking for myself, that is corrosive to my motivation.
I've written a lot of Rust. I've read less than half of the Rust book. Your competence in Rust is a function of how many lines of Rust you've written; getting to the point you can start working with it is more important than completing the book. Jon Gjengset's videos were really critical for me there, seeing how he worked in Rust made it possible for me to develop a workflow. (I broke down what I learned in more detail at one point [1].)
Rust is an example I've honed in on because you mentioned it and I related to it, but this is broadly applicable. Dare I say, more broadly than just programming, even.
(Also, note that I'm a giant hypocrite who shaves yaks and struggles with perfectionism constantly. I learned Rust 5 years ago to start a project, and I've written 0 lines of code for it. If I sound critical, that's my self criticism leaking through.)
Thank you for your comment, especially for this
> I've written a lot of Rust. I've read less than half of the Rust book.
Just knowing that there's someone out there who has worked like this or has been in the same situation gives me enough confidence to go through with it! (the "just write code" part)
I've gone through so many resources (including the book) and I never managed to finish any of them. But I think now I need to get comfortable with having gaps and just start writing code, and not be afraid of writing non-idiomatic Rust code, at least for now.
I've -literally- been writing Swift, every day, seven days a week, 52.4 weeks a year, since June 2, 2014 (the day it was announced), yet, I still have huge gaps in my knowledge of the language.
I speak it without an accent, but not at Ph.D level.
As to home projects, that's pretty much all I do, these days, ever since I "retired"*, in 2017.
I'm quite good at what I do, and generally achieve every goal that I set, but, since I'm forced to work alone, the scope needs to be kept humble. I used to work as part of a worldwide team, doing some pretty interesting stuff, on a much larger scale.
But what's important to me, is that I do a good job on whatever I do. Everything I write, I ship, support, and document, even if it isn't that impressive. The bringing a project to completion, is a big part of the joy that I get from the work.
* Was basically forced into it
By the way, I fear I'm harping on "AI tools can be really useful" here, but I really find that learning new things is my favorite way to use these tools.
You said that you don't want to use them to generate code and just be a reviewer. I definitely feel that! But you can instead use them like a tutor helping you learn to write the code yourself. "I'm trying to do xyz in Rust, can you show me a few techniques for that?" Then you can conversationally ask more questions about what's going on. Maybe eventually you can go read relevant sections in the book, but with the concepts better motivated.
I do this all the time when learning new things. It's not a canonical source of information, but it can be a useful guide.
Another approach that may help you, that worked for me. I was not familiar with rust so I wrote an initial proof of concept in another language (Go in my case). Then I asked Claude AI to translate it to Rust. It compiled on the first try, the only bugs being problems in the source file I gave it. Then I iterated a bunch of times by saying "please make this more rustacean style".
I only tend to use AI for assistance, but for me at least it's easier to get started this way than to start with an empty source file.
Yes! No tool has ever helped me more with the blank page problem.
That's awesome, best of luck to you.
> I like to say programming is about knowing which rabbit holes to plunge down and which to step over.
I like this a lot. I told someone once I avoid documentation like the plague and it just didn't have the same poetic ring as this line.
Sometimes you need to dive in; other times you need to cobble together something to step over.
I think another important view is to consider how much you've already covered. As a young developer, I recommend spreading a bit wide. Try many technologies. Play with a new language every year. Focus on things you haven't done before, like, don't go from Python to Ruby, go from Python to C# or C++ or something.
But as you get older you want to shift from exploration to exploitation. It is hard to make progress on anything, both professionally and personally, if it first comes with another couple of person-weeks of learning something new, let alone person-months. Even though I find learning new things easier than ever because of the breadth of things I have covered, I find myself having to be ever more skeptical of what I will invest in in that way, because unlike a fresh developer with no skills who has little better to do than learn their toolset, I have skills that can be exploited to good effect. As a mature developer, I need to trade off not so much "what might this be useful for in the future versus the effort of learning now" but "what could I be doing right now with the skills I have rather than learning something new".
Particularly when the "something new" is a variant of a skill I've already picked up. It'd be great if I never again had to learn a devops deployment system. I've already had to learn three. Unfortunately I doubt I'm going to get away with that. It'd be great if I didn't have to learn another scripting language to do something, but I doubt I'll get away with that either. Your mileage will absolutely vary but it'll be something.
I know there's a memeset of the "old fogey who doesn't want to learn", but I really do see the learning costs now as the opportunity cost of using that time to exploit the ones I already have, rather than just grumbling about learning in general. At the moment the things I can't escape even if I try have been plenty to keep my learning skills well-honed.
So bear in mind that as you round out your skills, as you cover "scripting" and "static language" and "database" and "HTML" and "devops deploy" and "shell scripting" and "functional programming" and all the other categories you may pick up over time, it is natural and desirable to pivot to exploitation being more of your time than learning.
After all... what is all this learning for, if not to exploit the skills to do something, not just learn the next skill, then the next, then the next?
Hey! I relate to this! And thank you for sharing.
This happened to me when I was going through a similar transition as the OP is highlighting. At first, creating software was difficult and novel. Then after getting over that first learning hump, I spent a pretty long time feeling drunk on the power of being able to get computers to do exactly what I want. But familiarity breeds contempt, and eventually it felt more like "this is it?" and the pure act of creation for creation's sake lost a lot of its appeal.
I think this is a pretty common transition! For me, the path out of the doldrums is two fold: 1. I have a lot more going on in my life now that has nothing to do with computing (mostly family, but also other interests), and 2. I'm a lot more focused on what I'm creating and why it's useful than in the act of creation itself.
This is almost certainly not what you want to hear, but this is why the quickly developing gen AI tools are increasingly exciting to me. I believe they open up the world of what can be created within a given time constraint. They also definitely (at least for me) make the act of creation itself less enjoyable, and I lament that. I'll probably always feel nostalgia for how I felt about the craft of programming a decade or two ago. But my perspective has just shifted from the "how" to the "what".
Maybe try using AI to jumpstart your process and get the basics up and running
For me, it’s brought back the joy of coding and building things: I feel like I was in a rut for years before that
Also, finding people to share the stuff with helps a lot too. Even if they are personal projects, it’s nice having others to show it to, appreciate it and give feedback
Don't be hard on yourself. We're in the same boat.
There are two things I validated from reading Barbara Oakley and Kató Lomb: a) it's okay to be a slow learner, and b) it's okay to learn differently.
Just do your thing.
> Hopefully the future me is able to relate to this, because I really feel like I'm in a rut when it comes to working on personal projects.
I've been there for a decade or more. It is part of my recent burn-out…
The trick is to prioritise and not care too much about too many things, to avoid the choice paralysis in choosing what to do next. Unfortunately I've not mastered that trick yet, or even come close. In fact I'm increasingly of the opinion that dropping tech projects completely, accepting that it is no longer a hobby and no longer something that will ever bring me joy again in future, is the prioritisation I need to perform, so I can instead have more mental capacity for other hobbies (and, of course, commitments in life).
You are far from alone in this trap!
Very well said, to enjoy the process of writing the code.
I don't understand why so many people suddenly started to insist on taking all of this away, in total seriousness proposing that we become janitors of the hallucinated output of some overhyped tool. That's the most frustrating thing one can imagine about programming, yet people insist on it.
And even if it is correct output from said overhyped tool, it still detracts from the enjoyment of building/fixing stuff. I used to love going over the code I wrote to fix a specific issue; now not so much, as half of it was written by AI, so half of the satisfaction has gone too.
I don't want an AI to write my code. Coding is one of the only things I enjoy about my job and I barely get to spend any time doing it right now.
> I have many ideas that I want to build, but I'd have to learn new languages,
Why? Why, specifically, do you "have to learn new languages"?
So, sure, I can see that, for some product, you might need to learn a new tech (say ... some specific AWS/GCP/Azure service), or perhaps a new configuration language (YAML, TOML, whatever).
And, sure, for some ideas (for example a mobile phone app) you're forced into that specific ecosystem.
Other than the mobile exception above, why do you need to learn a new language to build your idea? There is nothing stopping you from implementing your idea in (for example) Python. Or Javascript. Or Java, C#, C++, etc.
A programming-language barrier absolutely does not stop you from building your idea.
You gotta make the call - are you interested in building $IDEA, or are you interested in learning $NEWLANG?
> There is nothing stopping you from implementing your idea in (for example) Python. Or Javascript. Or Java, C#, C++, etc
Except there is, my brain :), that's one of the constraints I'm talking about, I'm a frontend web dev and I only know JS/TS, and like some frontend web devs, I'm enamored by Rust because it seems so different. I already use JS/TS at work so I want to use something else for my personal projects. So I definitely would have to learn something new.
> You gotta make the call - are you interested in building $IDEA, or are you interested in learning $NEWLANG?
If I was only interested in building my idea, I'd have just used what I know and used AI to accelerate the process. However for me the journey is also important, I want to enjoy thinking and writing code (and this project is something only I'd use, so there's no hurry to release a prototype). The problem is I want to start writing code right away, but that has the issue that I've mentioned above (gaps in knowledge).
Nobody is at fault, other than me for setting these constraints for myself. I know the solution is to just suck it up and go through the Rust book, read a chapter daily, and eventually I'd have all the concepts in my head so I could then just focus on writing the code. But whenever I go about doing this, my mind always persuades me that there must be a better way, or it finds some flaws in my current approach, and so on. So I need to learn how to not listen to my mind in such cases, and stick to the goal that I set.
Edit - After reading a reply to my comment, I've decided to just start writing the code and not worry about having gaps, anytime I start having doubts again, I'd go through this comment thread
> I know the solution is to just suck up and go through the rust book
No. The solution is to skip Rust and choose Java, C#, or Go. Rust has a steep learning curve, and if your project can tolerate a GC, there is next to no return for using Rust.
Instead of spending the next 6 months (for most people it's longer) to learn Rust, spend the next week getting to grips with C# (or Go, or Java) instead.
I personally try to follow the method of "make it work, then make it nice". You build something that works, it does what you need it to do. After that, you probably already know where the code rubs you the wrong way, so you know where to look to improve and learn.
I sometimes do that too - but sometimes this leads to another trap: I have something which works, but not in the best way. And I won't fix it, since the plan is to now do it "right". So I end up staying with a PoC forever..
Good luck in your adventure! By the way, I think many people here on HN (myself included) would love to help you out if you get stuck / are struggling to get started / want a code review / want to chat about programming. Just shoot any of us an email ;)
I guess it depends. If they have been developing mobile apps and now want to develop a web app, then they definitely need to learn something like PHP, or Go or Python or Java. On the other hand, if they have been doing web development and now want to develop a native app, they must learn Java/Kotlin/Swift. Same for databases (perhaps you never worked with one, then you must learn SQL). Even HTML+CSS must be learned if you never used them before.
There’s react native and cordova, if all you want is to build the usual mobile apps and you only know JS. And Java, Kotlin, and Swift are used for web development if you’re going the other way.
To me this doesn’t sound like you find programming very fun - but as a chore to get to something else.
That’s not a bad thing - just find out which part you actually want to do
> I feel uncomfortable having gaps in my knowledge
Understanding why I feel this, when I have, has always proven enlightening. I find it never has to do with the gap or what would fill it.
I very much relate to this feeling (and I haven’t finished the rust book either!). In my case, I do use AI a lot (especially o3 and Sonnet 3.7), not to write code but to help me understand things that would’ve taken me a week, in a matter of hours (the conversational aspect is a game changer for me).
My advice would be to build with what you know.
Even if you need to really shoehorn a component of the system in, just make a note about it and keep building. When you're done, you can go back and focus on replacing that one piece.
My view is that you learn a lot through the process of building this way.
This is why I tend to build a simple version of a complex tech just to feel what I am getting into - https://github.com/aperoc/toolkami (a minimal AI agent) as an example!
You are not alone. I feel the same way.
Just for info, you can use AI to teach you the code fundamentals you're lacking, not just to write the code for you.
Say you have to use a new IDE and don't know how to use it: ask the LLM the steps to perform whatever action you want to take.
The worst you can do is nothing at all.
You have "epistemic integrity'.
I heard someone say "epistemic humility" the other day to mean fallibilism [0] and the conversation got interesting when we moved on to the subject of "what one can and should reasonably claim to know". For example: should cops know the law?
Not every programmer needs to be a computer science PhD with deep knowledge about obscure data-structures... but when you encounter them it's a decision whether to find out more.
Integrity is discomfort with "hand-waving and magical" explanations of things that we gloss over. Sure, it's sometimes expedient to just accept face-value and get the job done. Other times it's kinda psychologically impossible to move forward without satisfying that need to know more.
Frighteningly, the world/society puts ever more pressure on us to just nod along to get along, and to accept magic. This is where so much goes wrong with correctness and security imho.
Don't code; validate your ideas first (with your first 1,000 paying customers, if monetization motivates you). 99% of ideas are not even worth starting; life is just too short for that.
With AI there's nothing to be ashamed of, as it is now "what you can dream of, you can get today". There's not much left to programming in most projects (which are just repeated code and output over and over) after AI; the tools are just too powerful today.
How do you validate an idea with 1000 paying customers if you don't have a product?
landing page / waitlist / speaking with customers
you shouldn't write code until you know someone is willing to buy
i'd say somewhere between 20 < n < 100 for B2B makes sense, rather than 1000
> you shouldn't write code until you know someone is willing to buy
This is kind of a weird line to see in a thread where people are talking about coding for the joy of the craft. Also makes me think about where we would be if everyone who contributed to OSS projects over the years thought this way. And to be clear, I'm not shunning or criticizing, having this mindset is totally fine and I'm sure it does well for you personally.
Yeah it's just a totally different thing than what the thread starter was talking about.
This may be good advice for bootstrapping a business (though personally I feel like people who do this are being pretty hostile to their customers by pretending something exists when it doesn't at all, which is not to say it isn't effective) but it is just irrelevant to someone wanting to build something for themselves.
So that's where all the fake "coming soon" products that don't actually exist are coming from...
Try vibe coding, I'm serious. Codex and Claude Code this past week have allowed me to:
- Build a local storage web app that can track my responses to the Sino-Nasal Outcome Test over time to journal my ongoing issues with chronic sinusitis.
- Build a web app that grabs Northern Colorado houses for sale, presents them on a map, and lets you search and browse them, with everything being cached for use offline in local storage. The existing site, coloproperty.com, has severe issues if you are out looking at houses and have spotty Internet connectivity; it's effectively useless.
I've been developing software for 40 years, but I'm not really a frontend guy. The first one Claude Code was able to basically one-shot, and then I asked for 3-4 refinements. The second one took me probably 40 back-and-forths to get going, but eventually was a fully working prototype using Codex.
It's the difference between using hand saws and chisels and planes, and using power tools. Hand tool woodworking is an amazing craft, but the right power tools can let you build nice things quickly.
Software engineers, I love yall. To see the light at the end of the tunnel. To see some glorious perfect paradigm that maybe could be. I envy you.
I grew up in a datacenter. Leaky air conditioners and diesel generators. Open the big doors if it gets too hot.
Now let’s go back. Back to when we didn’t know better.
Software doesn’t stay solved. Every solution you write starts to rot the moment it exists.
Everything, everywhere, is greasy and lousy and half broken. Sysadmins, we accept that everything is shit from the very beginning.

This is quite hard to grapple with fresh out of school, where you worked mainly on pristine implementations of eternal algorithms, proven to be optimal in multiple ways, elegantly crafting each loop and feeling accomplished by expressing the algo with even higher clarity, reading and idealizing Knuth etc.
Then you see the real world and you think it must be because people are stupid, the bosses are pointy-haired, they don't understand, they don't value elegance, they are greedy, they etc. etc. But once you spend enough time on your own projects, and they evolve and change over the years, they turn more and more into a mess, you rewrite, you prioritize, you abandon, you revive, and you notice that it goes much deeper than simply laziness. Real solutions depend on context, one caching algo is good when one medium is x times faster than another, but not when it's only y times faster. It makes sense to show a progress bar when downloading a file if the usual internet speed is X but not when it's Y. Over years and decades the context can shift, and even those things can change that were only silent assumptions of yours when you made the "perfect" program as a young'un, looking down on all the existing "messy" solutions that do a bunch of unnecessary cruft. It's an endless cycle...
> Software doesn’t stay solved. Every solution you write starts to rot the moment it exists.
I don't really agree with this. Yes, it gets outdated quickly and breaks often if you build it in such a way that it relies on many external services.
Stuff like relying on "number-is-odd" NPM package instead of copy-pasting the code or implementing it yourself. The more dependencies you have, the more likely it will break.
If your software works locally, without requiring an internet connection, it will work almost forever.
Now, if you want to keep developing the software and build it over a long period, the secret is to always keep all dependencies up-to-date. Is there an ExternalLibrary V2 just released? Instead of postponing the migration, update your code and migrate ASAP. The later you do it, the harder the migration will be.
That sorta supports the point the article was making though:
> ExternalLibrary V2 just released? Instead of postponing the migration, update your code and migrate ASAP. The later you do it, the harder the migration will be.
Is, to me, almost the same sentence as
> Every solution you write starts to rot the moment it exists
I mentioned updating is only necessary if you plan to keep developing the software.
If you build it once, and the existing functionality is enough (no plans to add extra features ever again), then you can remove all external dependencies and make it self-contained, in which case it will be very unlikely to break in any way.
As for the security aspects of not updating, with the proper setup, firewall rules and data sanitization, it should be as secure 10 years later as any recently developed software.
> Every solution you write starts to rot the moment it exists
Not if you work on temple os.
I remember when Turbo Pascal 7.0 programs started failing with a Division by 0 error, because the Turbo Pascal runtime does a calibration loop at the start to calculate how many no-ops to sleep for 1 millisecond.
So - your HelloWorld written 10 years ago suddenly stopped working after the CPU you ran it on got too fast.
Yes, if you change hardware, the software can break (a lot less often now than before though, as hardware changes are negligible and vendors usually think about backwards compatibility).
I'm not sure if it's more or less often now, but over decades almost everything breaks one way or another.
Even if your code, OS and hardware had no bugs and was designed perfectly and you keep the same hardware to run you code forever - there's layers under the hardware - the reality outside the computer.
You have written perfectly secure website. Then quantum computers happen.
Countries are created and fall apart. People switch writing systems and currencies. Calendars get updated.
Your code might technically work after 100 years, but with almost 100% probability it won't be useful for anything.
That's the most probable outcome, but there are still examples of hardware/devices created many years ago that function exactly the same.
If you take a Tamagotchi device from 30 years ago, it will likely still work as well as it did when it was released.
I have said many times to teammates: the only code that is perfect is the one that hasn't left our minds. The moment it's written down it becomes flawed and imperfect.
This doesn't mean we shouldn't try to make it as good as we can, but rather that we must accept that the outcome will be flawed and that, despite our best intentions, it will show its sharp edges the next time we come to work on it.
I'm sure some mathematicians would disagree.
Math can be greasy and messy. Definitions can be clumsy in a way that makes stating theorems cumbersome, the axioms may be unintuitive, proofs can be ugly, they can even contain bugs in published form. There can be annoying inconsistencies like optional constant factors in Fourier, or JPL quaternions.
Yes, prototypical school stuff like Pythagoras are "eternal" but a lot of math is designed, and can be ergonomic or not. Better notation can suggest solutions to unsolved problems. Clumsy axioms can hide elegant structure.
I think applied mathematicians started to encounter this reality of the impure world the first time someone taped a dead moth into the logbook of the Harvard Mark II.
your comment made me think of a passage in Italo Calvino's Invisible Cities:
Kublai Khan does not necessarily believe everything Marco Polo says when he describes the cities visited on his expeditions, but the emperor of the Tartars does continue listening to the young Venetian with greater attention and curiosity than he shows any other messenger or explorer of his. In the lives of emperors there is a moment which follows pride in the boundless extension of the territories we have conquered, and the melancholy and relief of knowing we shall soon give up any thought of knowing and understanding them. There is a sense of emptiness that comes over us at evening, with the odor of the elephants after the rain and the sandalwood ashes growing cold in the braziers, a dizziness that makes rivers and mountains tremble on the fallow curves of the planispheres where they are portrayed, and rolls up, one after the other, the despatches announcing to us the collapse of the last enemy troops, from defeat to defeat, and flakes the wax of the seals of obscure kings who beseech our armies' protection, offering in exchange annual tributes of precious metals, tanned hides, and tortoise shell. It is the desperate moment when we discover that this empire, which had seemed to us the sum of all wonders, is an endless, formless ruin, that corruption's gangrene has spread too far to be healed by our scepter, that the triumph over enemy sovereigns has made us the heirs of their long undoing. Only in Marco Polo's accounts was Kublai Khan able to discern, through the walls and towers destined to crumble, the tracery of a pattern so subtle it could escape the termites' gnawing.
I don't feel the "moral weight" the author mentions.
For one, there are many, many directions you could take at any given moment, but you have to choose only one. You have no choice but to triage. That's not a moral failing, just the nature of agency and existence.
I do have some perfectionistic tendencies, which might be behind some of this. But a long time ago I graduated to a deeper perfectionism...
The problem with simple perfectionism is that you can only achieve a level of perfection in a simple and superficial way, often to the neglect of more interesting goals... after you "perfect" something, you look deeper and inevitably see more problems. You can pursue those, but you then just look deeper again and repeat. At some point you'll realize you're spending a lot of time on something that is only meaningful to an arbitrary standard that exists only in your own head (that you only recently invented).
So I moved on to "perfecting" the balance across the relevant competing concerns and constraints. Since there's rarely a perfect balance, no closed-form answer, and since your attention is certainly one of the factors to balance, real perfection requires that you can find something "good enough" given the circumstance to move on to something else.
Put another way, if you can't find satisfaction of your perfectionist impulse in finding something good enough, you could be doing "perfection" better, and should probably work on that.
A good name for that could be "pragmatic perfectionism". Has a nice little alliteration and everything
I too suffer from this, but as I learned, Nature built an elegant solution to this. Have a family and kids. Your choice when you have time off of work will be reduced to hacking or playing with your child that you have been neglecting due to a crunch at work. You’re welcome.
Parent here, can confirm Nature solved this elegantly.
However, like every other solution built by Nature, this one also works through pain, suffering and death. Nature doesn't care if you're happy, nor does it care if you're suffering. And it especially doesn't care if your suffering is a low-burn, long-term pain in the depth of your heart.
So yeah, having kids will force you to make choices and abandon frivolities, in the same way setting your house on fire will free you from obsessing over choices for unnecessary expenses :).
Nature has another elegant solution, often applying both in conjunction: aging. As I age (and, yes, take care of my kids), I find myself more and more on the side of exploit in the exploration–exploitation dilemma. This will most likely endure after the kids have left.
This is a great comment, as it hits far too close to my heart. I'm currently trying to get my team to rethink how they are building the APIs for certain services in our product, and to focus really on design and craftsmanship. To the point where I'm ready to start breaking it apart myself and coding up the solution in my off hours.
But then I look at my son and say "screw it, they couldn't pay me enough to care out of hours and give up play time".
Not everybody who is a great programmer is a great parent. :-(
I, for example, would perhaps not be a bad parent, but very likely at least one who does not obey the social expectations of how to raise a child.
Same. Also I have absolutely no interest in having them.
Or just have another hobby not involving programming. I got into this because it was my main hobby as a kid, the passion thing I did when free time was available; I learnt a lot (enough to build it into a career), I had lots of fun, but that time is gone.
My free time is to be spent on other things. I get paid to fix issues and that pays my bills; I don't want nor need to be thinking about these issues outside of paid hours. You know too much, to the point where you know how much effort it will take to fix something that might look innocuous and innocent but definitely has deep tendrils of other related issues to tackle. It's not worth it, not if I'm not being paid for it or it isn't part of a personal project I'm really passionate about.
So I learnt to not care much. I help my colleagues, deliver what I say I will deliver, and free up space in my mind to pursue other stuff that's more interesting to me.
> Or just have another hobby not involving programming.
This can actually make things (much) worse:
Since you now have another topic you are insanely passionate about, you see a lot of additional things in the world that are broken and need fixing (though of course typically not via programming).
Thus, while having a very different additional hobby (not or barely involving programming) clearly broadens your horizon a lot, it also very likely doubles the curse/pain/problem that the original article discusses.
Indeed. Marriage alone led to a complete reevaluation of my priorities in life. I still want to make cool stuff but my hobbies are so far down my list of priorities right now I would have to be actually getting paid in order to justify spending time on stuff.
You don't need kids, just a partner who has a "normal" job and likes to do stuff on the evenings and weekends is enough. If you have a partner who also does thought work and tends towards the workaholic then things might be more difficult...
I've learned through the years that I can't stop for every car with its hood open.
Sometimes you just need to use it as a reminder to maintain your own vehicle.
> Burnout does not just come from overwork. It comes from overresponsibility.
I don't _think_ it is accurate. I think burnout comes from putting energy into things that don't have meaning. Case in point, this article: as you realize that fixing everything is a never-ending game with marginal ROI, you end up burning out.
If overresponsibility alone caused burnout, I think that every parent out there would be impacted. And yes, parental burnout is a _very_ real thing, yet some of us may dodge that bullet, probably by the sheer luck of having just the right balance between effort and reward.
Throw this tradeoff off balance, and most parents just burn out in weeks.
I suspect that if you dig deeper, the underlying cause of burnout is being forced to spend a lot of effort over time while not being able to feel that you are living up to your values in return. You are running a marathon but never reach the finish line of the satisfaction of living according to your own moral code.
* That can come from overresponsibility if you have a value that says you should fix things that you see are broken.
* It can come from meaningless bullshit jobs if you have a value (which almost everyone does) that says your effort is meaningful.
* It can come from isolation if you have a value that it's important to be connected to others.
It can probably arise from any other value you might hold as long as you're forced to strive and yet can never reach it.
Honestly, I feel like values are deeply underconsidered in our current culture's thinking around psychology.
> I think burnout comes from putting energy into things that don't have meaning.
That'd mean that people who are burned out all did so because they did stuff that didn't have meaning? Ultimately, I think you can get burned out regardless of how meaningful it is or isn't. People working at hospitals (just as one example) have probably some of the most meaningful jobs, yet burn out frequently regardless.
More likely that both different people burn out because of different things, and it's a mix of reasons, not just one "core" reason we can point at and say "That's why burnout happen, not the other things".
I'd argue it actually makes things worse. When you have a higher-purpose job (an ICU or ER nurse who is saving patients' lives every day) and you're spending most of your energy on administrative overhead, the effect is just magnified.
Meaning is a subjective thing. That's why some people thrive in some environments and some may burn out. If you put your average IRS auditor in a hospital, they might actually find more meaning in filling forms than exchanging with patients.
> parental burnout is a _very_ real thing
It doesn't often come from a lack of meaning though, does it? Or maybe the meaning is more micro in this instance, and you wonder what the point is of telling them to pick up their dirty socks for the... 327th time.
The meaning of `overresponsibility` in this case, IMO, is treating these matters as something that we take responsibility for. That way of thinking itself (taking on the responsibility) is what burdens the OP's mental health. Being ignorant, or able to let go, relieves the burden, thus preventing burnout.
I don't know why but this almost made me cry.
Every day I keep looking at everything that's broken and thinking I should find a way to fix it. Then I finish my work day and have zero energy to do anything useful and it all becomes guilt over the inaction, the feeling of not being productive at all times.
Very well written my friend.
> > You have power over your mind—not outside events. Realize this, and you will find strength.
> But programming lures us into believing we can control the outside events. That is where the suffering begins.
Isn't it the opposite? We're not surprised if someone who grew up amidst criminals also does crime. I'm not sure that I can choose whether to feel attracted to men instead of women. I can't control my mind, but I can choose what people to stay with and what media to be exposed to. I adjust everything outside of my mind (my hands write code, my feet take me elsewhere, I can tune into different media, I can choose to not speak agitatedly when a service rep. is following corporate policy) but not how my mind moulds or reacts in response to those inputs (I'll still feel irritated by that corporate policy and be influenced by advertising)
If anything, I could see an argument for that you control neither, because obviously your control of your hands is coming from a combination of your mind's outputs and the physical laws of our environment
The mind is an elephant and “I” is an illusory rider on top of it, with barely any influence.
As I'm getting older, I want things to be as standardized as possible, and to just not worry about the details. I have learned from my mistakes:
The script I made for deployment, because existing solutions didn't "feel" right, required a lot of work and maintenance when we later had to add features and bug fixes.
Another script I made for building and deploying Java applications, because I didn't like Maven and Puppet.
The micro service I rewrote because I wanted to use my own favourite programming language. I introduced a lot of bugs that were already fixed, missing features and another language for my co-workers to learn when they inherited the application when I left.
Totally agree, standardisation makes everything so much more legible, even if there are problems with the standard.
I also think there is a profoundly non-linear relationship (I don't want to say negative-exponential, but it could be), between:
- The number of lines of code, or distinct configuration changes, you make to the defaults of an off-the-shelf tool
- The cognitive and practical load maintaining that personalized setup
I'm convinced that the opportunity cost of not using default configurations is significantly higher than we estimate, especially when that environment has to be shared across multiple people.
(It's unavoidable or even desirable in many cases of course, but so often we just reinvent hexagonal wheels.)
While I have experienced both sides of the equation here, I find it much more pleasant to have things specialized instead of standardized. Yes, you spend a bit of time maintaining the functionality, but all that functionality (and maintenance) is there in support of your goal.
Using standardized software often leads to spending half a day just trying to find a way to work around the limitation you face. The next level there is that you realize you can just fix it, spend half a day crafting the perfect PR, and then submit it into the void, leaving it hanging for half a year before someone gets to it.
I hope many people read this and take it to heart.
It is a rare and wise insight which only becomes crystal clear with age. Choose your battles very carefully.
This is a golden nugget, up there with "time flies". I never understood that as a kid, but it really hits hard come your mid-life crisis.
Listen carefully little grasshoppers.
So... it's not just me who sees the world as a collection of broken tech/tools that I know how to fix, while knowing I'll never have the capacity to fix it all myself, and who finds that frustrating...! I really liked framing it as "Technical Capability as a Moral Weight"
> The Illusion of Finality
One of the most important skills that I've learned, through writing software that ships, is "Knowing what 'Done' looks like."
There's a point where, even though there's still stuff I have to do, I need to declare the project "done," wrap it up, slap a bow on it, and push it out the door.
There's always a 2.0.
Writing in an iterative manner (where I "discover" a design as I develop the software) makes this more difficult. One thing about "hard" requirements is that there's no question about what "Done" looks like.
This post resonates a lot with me. As somebody in the twilight of their coding career - I totally understand what the author means by "over-responsibility"
After 25 years - I'm over it. But the flip side of that is that I can't un-see the fact that so much of the tech shit in my life is broken and it has practically become a second job trying to manage it and / or fix it. Thankfully I don't do it by trying to code replacements like the author seems to. Instead I try to come up with workarounds or find replacements.
Nevertheless the weight of the curse is about the same I figure.
I used to never finish personal projects. Then I realized that the biggest thing preventing me from finishing them was (perversely) my sense of duty to get them finished. Once I decided that I was under no obligation to anyone to work on them, not even myself, I had no trouble finding the motivation to get them done. So now side projects are a strictly "just for fun" affair. I work on them when I feel like it, no deadlines, and once they're "in production" I maintain them because I like tinkering.
The only problem with this approach is I've gone from hating the thought of programming after work to coming up with side projects at work.
A good part of that is a disguised sense of superiority.
Life becomes way lighter when you realize other people are also smart and what you’re “fixing” can very likely be:
- something so unimportant that no one felt it was worth working on
- something that was supposed to work like that, and you simply don’t agree and want to make it your way
> A good part of that is a disguised sense of superiority.
Or not. It can be the struggle of having higher than average standards.
Sadly, your list is incomplete without:
- something someone didn't bother to put any effort or thought into at all.
You are right. My comment wasn’t meant to completely invalidate the point of the article or to provide an alternative exhaustive list of causes, but more to bring this other aspect that I felt wasn’t surfaced yet.
Like the accessibility ramp that has a sign post right in the middle of it.
All of programming comes from a sense of superiority.
Programming is the closest humanity has ever gotten to godhood. We're creating entire structured universes out of unstructured bits. The system reflects the understanding of its creator.
We're all pretending to be gods, warring over the system's design.
Amazing piece of text. Honestly, I saw myself in everything you wrote. The struggle, the attempt to write a single line of code to make it perfect, durable, etc. It never works on the first try... but man, the joy of having control over fixing things... And I also relate to the personal chaos that maybe we tend to fix by fixing our software... I see a lot of this "unfinished" behaviour with other things in my life as well... Cleaning the shed, finishing videogames... Sometimes even the feeling of having the challenge is enough... I don't even try, because I know it will take time to make it right
What would it mean to be finished with life? Where are you going in such an all-fired hurry?
Trying to get to the world that all the people unlike us live: a world where a brain can power down for a bit because everything is fine for now.
You think that's where "people unlike us" live? My God, have you met any?
> I can fix something, but not everything.
“Calvin: Know what I pray for?
Hobbes: What?
Calvin: The strength to change what I can, the inability to accept what I can't, and the incapacity to tell the difference.”
—Bill Watterson (1988)
A parody of the Serenity Prayer
There is the curse of knowing how, and maybe more importantly there is the curse of other people knowing you know how.
Family members asking you to fix their printers.
Family is the F-word I can't stand, much like Louis Rossmann. Whenever someone pulls "but I'm your family" or "but I'm your friend" into an ask, there's a 95% likelihood you're being guilt-tripped. Whether you wish to help the manipulators is your choice, but it's important to understand what's happening.
> Family members asking you to fix their printers.
My honest, brutal answer is: "Your problem is very likely self-inflicted. Buy a decent Brother laser printer."
This attitude absolves you of nearly all such requests. :-)
I don't think you even need to qualify that? Any Brother laser printer is a decent one. They don't have all that many models.
It never ends.
And yet, that doesn't take away their responsibility to carry their load in projects. One person is never responsible for the entire output in any organization. Even if some would like to see it that way (whether they imagine they ought to be the person, or someone else).
> We write a new tool because we are overwhelmed. Refactor it, not because the code is messy, but your life is. We chase the perfect system because it gives us something to hold onto when everything else is spinning.
This really got to me because I've been doing this without realizing it for as long as I can remember.
Same.
Low tolerance for frustration. Starting something is easy. 20% of the work gets 80% of the results. It's beautiful to see. Then you gotta do the last 20% and you see the 80% of the job ahead of you. Seeing it through quickly gets frustrating. It turns into a job. How to get away from this? Just start a new project...
Yeah, I feel called out by that line
> The trials are never complete.
This is not true - one day you will be dead. Hopefully that day is a long way away but it will eventually come around.
It is good to keep this in mind and spend some time coming to terms with this. If you do, the problem this article talks about will naturally fall away.
Realise and acknowledge the limitations to your ability to act. Then consciously make a choice as to what you spend your limited time on and don’t worry about the rest.
> This is not true - one day you will be dead.
Bold of you to assume that being dead also means the trials are complete. I imagine it as the beginning of the next set of trials.
I don’t assume that. :) Just that the “me” facing them will no longer have any care for those the “me” here is facing.
I like to take the “programming as theory building” approach. Every program is an experiment. When it stops working, you have more information about the world. At that point, you can revise the program, or you can let it go. Either way, you’ve refined your theory of the domain, and that’s much less likely to tumble back down the hill than some source code that at best represents an imperfect application of the theory.
Related, it’s more fun to engage in your own theory building by programming, right?
One thing that has changed in the industry, I think due to a combination of labor force expansion, DevOps, and greater reuse, is software engineers have increasingly become users of software. It’s… less fun. Where have all the sysadmins gone? Oh wait, we’re the sysadmins now. :-/
Link is down. Archive link: https://web.archive.org/web/20250506062631/https://notashelf...
It is not down, we are probably overwhelming the server. It works, intermittently.
Truth.
I believe that after a certain age the most valuable skill is knowing how to remove things from your life instead of piling them up. Things that other people believe to be so crucial that your act of removing them from your life is going to offend them.
> I believe sometimes building things is how we self-soothe.
> I have written entire applications just to avoid thinking about why I was unhappy.
I think this is true too. The prefrontal cortex is inhibitory to the amygdala/limbic system; always having a project you can think about or work on as an unconscious learned adaptation to self-calm in persistent emotionally-stressful situations is very plausible.
I wonder how many of us became very good at programming through this - difficult emotional circumstances driving an intense focus on endless logical reasoning problems in every spare moment for a very long time. I wonder if you could measure a degree of HPA-axis dysregulation compared to the general population.
And whether it's net good or net harmful. To the extent that it distracts you from actually solving or changing a situation that is making you emotionally unhappy, probably not great. But being born into a time in which the side effects make you rich was pretty cool.
This was something I really enjoyed reading, having suffered greatly from the same conditions. It made me realize I’m not alone, and for that I hugely thank you, and everyone else in this thread who commented.
Thank you for sharing your personal journey. I too have (too) many ideas floating around, getting ready to fix everything I see broken, ignoring what's really important. So I whole-heartedly agree with "You learn how to program. You learn how to fix things. But the hardest thing you’ll ever learn is when to leave them broken."
This is precisely why RaspberryPi/Raspbian broke every _fricken_ tutorial on the internet by moving /boot to /bootfs
ok... I mean great.
There's also a chance it's probably just fine. Leave it alone if it's not causing problems.
Fractal complexity.
It's not only that solutions decay, though that's true, but also that the search for improvement itself becomes recursive.
When you identify something can be improved, and fix it, your own fix and every individual step of it now become the new thing that can be improved. After the most fleeting of pauses to step back and appreciate your work, this becomes your new default.
Indeed I often look back at my own solutions and judge them more harshly than I would ever judge anyone else's. Why? Because as the article says, I know much more about how it works. Cracks increase surface area by a lot. I know about a lot of the cracks.
I wrote a blog post [0] about this mindset getting in the way of what you care about in product development which you might enjoy, if you enjoyed this article.
"Just build the product" this was a good read, and a trap I get sucked into. Always focusing on amassing ever better tools, and forgetting that the practice of using those tools is what really matters.
I think about this from the perpective of change management. Every defect I hope to fix entails a change, which has a certain probability of creating another irksome deficiency or incompatibility. When building complex systems I try to design them with the end state, that you describe very well, in mind.
Each time you set about making a single change, ask what the probability (p) is that this change results in another change, or track this probability empirically; then compute 1/(1-p). This will tell you how much change you should "expect" to make to realize your desired improvement. If you have n interacting modules, compute 1/(1-np). This will quantify whether or not to embark on the refactor. (The values computed are the sum of the geometric series in the probability, which represents the expectation value.)
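A quick numerical sketch of that expectation value (the probabilities and module counts below are purely illustrative):

    # Expected total changes per intended improvement, per the 1/(1-p) and 1/(1-np) formulas above.
    # p: probability that one change triggers a follow-up change; n: number of interacting modules.
    def expected_changes(p: float, n: int = 1) -> float:
        ratio = n * p                     # expected follow-up changes spawned by a single change
        if ratio >= 1:                    # runaway case: the series 1 + np + (np)^2 + ... diverges
            return float("inf")
        return 1 / (1 - ratio)            # sum of the geometric series

    print(expected_changes(0.2))          # 1.25 changes to land one improvement
    print(expected_changes(0.1, n=5))     # 2.0 once five modules interact
    print(expected_changes(0.1, n=11))    # inf: the refactor never converges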
So this is about how we manage change in a complex system in order to align its functionality with a changing environment. I suggest that we can do so by considering the smallest, seemingly innocuous change that you could make and how that change propagates through to the end product.
In the end, a solution may be to make systems that are easy and painless to change, then you can change them often for the better without the long tail effects that drag you down.
Does this help explain why any simple household activity has a frustrating ~50% chance of turning into a string of dependencies and dependents that make you spend 10x the time you expected on it all?
E.g. you figure it'll take a minute to take the trash out and wash your hands. But on the way you discover you've run out of trash bags, and while washing your hands you run out of soap, then as you pick the refill bottle from storage some light items fall out, and you need to put them back into a stable configuration, then you spilled a bit of soap during refilling so you need to clean up, but you've just run out of paper towels, and...
A visual depiction: https://youtu.be/AbSehcT19u0
When reality doesn't match your expectations, why do you blame reality instead of your expectations? Especially for thing like running out of soap, trash bags or towels that should be at least 90% in your own control.
What you've described is called "yak shaving", which is a series of never-ending tasks that must be completed in order to complete the original task. It's aptly named, since shaving a hairy yak is a never-ending task.
Right. I know the name (though it took me longer than I want to admit to connect the term with situations outside programming!) - what I now seek to know is, how to minimize it in personal life.
Letting go is probably most people's answer - nothing bad will happen if I do all the dependent tasks (cleanup, restocking things that just run out) later in the day - but I have difficulty getting them out of my head, they keep distracting me until they're completed.
> how to minimize it in personal life.
Structure, order, habits.
Arrange things so they are easy to change.
It is remarkable how small p has to be for change to be cost effective.
The N systems case equation has to be wrong. I think you need either 1/(1-p)^n or 1/(1-p^n)
I think it says 1/(1-np).
That's what it says in the original post. It is also non-sensical. If the probability is 10% (p=0.1), and the number of systems is 11 (n=11), then you get 1/(1 - 11*0.1) which is -10.
I tried working this out, but I think the original was correct.
The cases where the answer is negative correspond to a 'runaway scenario' where every change is expected to cause more than 1 extra change. So the answer is 'nonsensical' (because that is indeed where the formula for geometric series no longer works) but the true answer is infinity.
It is easy to see this by forming the geometric series and rearranging terms, the standard Euler trick.
As a non-techie (retired M.D.) who wouldn't know source code from Morse code, reading these comments is extremely absorbing.
Do you perceive this as a tech-specific thing though? I'm a software engineer but this resonates with me more within other aspects of life in general than my professional background specifically.
Not sure what "this" refers to....
edit. OK, I just went back and reread the parent post and now I understand what you're asking. I do agree with you, this is generalizable across the board: file under the old saw "Perfect is the enemy of good."
edit #2. Also applicable, from Alfred Korzybski's classic 1933 book "Science and Sanity": "When in doubt, read on." I've always generalized this statement (which I first encountered around 1968 while an undergraduate at UCLA, reading his book not for a class but because I had gone down a wonderful rabbit hole after learning about Korzybski and his huge influence during his lifetime) to all things that puzzle me or cause me to stop moving toward whatever it is I'm aiming at.
edit #3: PDF for Korzybski's book:
https://archive.org/details/alfred-korzybksi-science-and-san...
> Knowing which problems are worth your energy.
This, but with proper balance. TBH, you can live a happy life if you just stop caring about every technical problem, but that would make you unimaginative and passive. Just make sure you pick a hill (or two) you're gonna die on.
Prioritization is a brilliantly simple solution to this and other resource constraint problems.
If you can't do it all, you have to choose. How to choose? Come up with a way to assign value to each, and do the most valuable first.
The value metrics depend on the system / outcome desired.
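A minimal sketch of that kind of value-based pick, assuming you can put rough numbers on value and effort (the task names and figures here are made up):

    # Rank candidate fixes by value per hour of effort; everything below your cut-off simply waits.
    tasks = [
        {"name": "fix flaky deploy script", "value": 8, "effort_hours": 3},
        {"name": "rewrite the wiki theme",  "value": 2, "effort_hours": 10},
        {"name": "add search to notes",     "value": 5, "effort_hours": 4},
    ]

    for t in sorted(tasks, key=lambda t: t["value"] / t["effort_hours"], reverse=True):
        print(f'{t["name"]}: {t["value"] / t["effort_hours"]:.2f} value/hour')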
The site owner has misconfigured nginx to death with contradictory options. Here is an archive: https://web.archive.org/web/20250506062631/https://notashelf...
I've known how to fix things for a loonnggg time, however I've never felt the need to fix things that aren't absolutely blocking my current goal. I'm not sure the two are that deeply linked, except that the former potentially enables the latter.
This happens on an organizational scale as well.
Once a company grows beyond a certain point it no longer needs to keep a razor focus on the core business, and a disproportionate amount of effort gets redirected to creating sometimes useful but often completely pointless and gratuitous internal tools. That way every middle manager gets to justify an ever growing number of reports and make his mark in the organization.
Fantastic post, definitely could relate to quite a bit of that.
Welcome to your thirties!
That's a bit early, IMO.
Welcome to your late thirties!
Letting go means no longer caring. That's a slippery slope to coasting.
> Letting go means no longer caring.
This reasoning (which I can easily identify with) is a slippery slope towards OCD anxiety and depression when you refuse to acknowledge that you can't fix everything.
You need to be realistic, set your priorities within a limited, defined context, take decisions and actions based on those, and forget about the stuff that didn't make your priority list.
That's not not-caring. That's focusing on what really needs your care.
Wish I was better at it though.
I think the Buddhists would disagree with you on the moral valence of this.
There's plenty of Western philosophy as well that sees "not caring" (at least not about things outside your own actions) as very desirable. Not to mention the Epicureans, for whom freedom from worries ("ataraxia") was the highest attainable state.
OP's fear of (being seen to be) "coasting" would be entirely foreign to them.
Letting go of everything means stopping to care about anything, sure. But the point of the article is to try and stop caring about everything and focus on the things that are really important to you, focus on the things you can change.
I've been leaving things broken from the get go! I'm a frigggin master of this!
Having kids is an excellent solution to this feeling. Besides occupying any time you used to have for unnecessary work, they have an uncanny ability to remind you just how little you actually control. However you get to the end of the OCD tunnel, the journey is often very worthwhile.
I am a father of two, and I could not have penned that any better.
Modern software is like a sand castle: you can go there every day to maintain it, but one day it will be completely washed away. Everything that is beautiful will start to rot. After something has reached perfection, it will change or die. It doesn't have to be that way, though: you can put the OS in a virtual machine, so every time you work on your beautiful software it will feel like you are carving rune stones.
If you started coding in the 70s like most Xers, once you hit your mid fifties you stop caring. You realize it is easier just to use the tools, as clumsy as they are, to make your own happiness. As Bob Vila said: only an amateur blames his tools.
Anyone else unable to access the site? I get a connection refused.
HN hug of death
It's like you went inside my brain and heart and divulged my feelings. Well done! Now, back to feeling worthless.
Wow, great piece.
We are playing software factorio and the winning move is not to play.
Don't start, I've started a playthrough again with the new expansion and just landed on the first planet after overbuilding my home base with grids and trains and stuff.
It's more like society factorio, but unlike factorio it's not fun and you're forced to play against people with much more resources and talent than you.
Also if you stumble and fall behind, you don't get to hit pause and plan your next steps carefully - that'll only make you fall behind even further.
I'm seeing myself in the text, but one aspect that doesn't seem to be covered enough is the ego.
I find joy in receiving praise from my colleagues when they have good experiences with my tools.
Playing devil's advocate here, but is that simply validation that the stuff you've written is actually useful? That could be seen as a proxy for self-worth which, when something you made proves useful, helps stave off that nagging anxiety in the back of your head. The question here is: is that a useful driver to build new things, or do we let things go? That's kinda similar to the article's premise.
(takes one to know one here)
re: Technical Work as Emotional Regulation
This is a lovely point, and probably why a lot of technical people like to take on tactile hobbies outside of work. Follow these woodworking steps and at the end you have a properly built cabinet. Or why doing the dishes can sometimes be soothing, etc.
This is well-written. I think LLMs can help a bit here too - the other day I was watching a rare, old film with my wife and the subtitles kept drifting out of sync, i.e. the frame rate was slightly off and there was a fixed offset on top.
So I explained it to Claude and made it write a Python script where I could manually set a few fixed times and it would adjust the SRT file, and it worked perfectly.
I literally paused the film and did that in under 5 minutes. It was amazing.
So fixing a lot of small things has become easier at least.
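For anyone curious, a minimal sketch of that kind of script (file names and anchor times are made up; it assumes a plain SRT file and two manually chosen reference points):

    import re
    from datetime import timedelta

    # Two manually picked anchors: (timestamp as it appears in the SRT, timestamp it should be).
    ANCHORS = [
        (timedelta(minutes=1, seconds=30), timedelta(minutes=1, seconds=32)),
        (timedelta(hours=1, minutes=20), timedelta(hours=1, minutes=24, seconds=10)),
    ]

    TIME_RE = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

    def parse(match):
        h, m, s, ms = map(int, match.groups())
        return timedelta(hours=h, minutes=m, seconds=s, milliseconds=ms)

    def fmt(td):
        total_ms = int(round(td.total_seconds() * 1000))
        h, rest = divmod(total_ms, 3_600_000)
        m, rest = divmod(rest, 60_000)
        s, ms = divmod(rest, 1_000)
        return f"{h:02}:{m:02}:{s:02},{ms:03}"

    # Fit t_new = a*t_old + b, which covers both the frame-rate drift and the fixed offset.
    (x1, y1), (x2, y2) = ANCHORS
    a = (y2 - y1) / (x2 - x1)
    b = y1.total_seconds() - a * x1.total_seconds()

    def shift(match):
        return fmt(timedelta(seconds=a * parse(match).total_seconds() + b))

    text = open("film.srt", encoding="utf-8").read()
    open("film.fixed.srt", "w", encoding="utf-8").write(TIME_RE.sub(shift, text))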
Whaaaaaat…
This same thing happened to me this past weekend, and I used Vercel v0 to fix it. https://v0-captions-adjuster.vercel.app/
Not shilling my solution; this is nowhere near actually good yet, but it was "good enough" to fix the problem for me too. Only posting it as proof that I had the same thing happen to me, and maybe it can help others too.
My takeaway from this post is not to find better ways to do, but to find a way not to.
Yeah, but the main thing is I didn't publish the script, or try to develop it into a more generic tool with more features (automatic synchronisation, etc.) - that is where things become overwhelming.
Like I have published a FOSS tool from some scripts I had for managing VPNs, and there I get constant issues around new providers / updates and it not working in people's specific environments (which I can't test).
The LLMs make it viable to write quick throw-away scripts with almost no time investment, and as such you feel no pressure to share or improve them.
I really can relate with the article. Also, it doesn't just apply to software but to lots of other things. It's the same mentality as being into cars and always tuning or restoring old ones. Being into woodworking or craftsmanship and fixing and building stuff around the house, etc.
I'm compulsively drafting ideas along these same lines, but now I don't need to fix that. Wonderful, brilliant text, thank you raf.
This was beautifully written. Thank you for sharing
Serious software development is rarely an individual endeavor; most issues should be resolved through collaboration. In other words, they should be addressed through management. What the author needs to overcome, in my view, is essentially a form of extreme individualism.
I guess that's a problem for a lot of us who learned to program as teenagers. A big reason programming felt good for me is precisely because it let me do interesting and impressive stuff myself, and not collaboratively. There were very few kids who had even remotely similar interests to me back then - and that's true in adulthood too, outside of work contexts. While on the job, my projects may be collaborative in nature, but after hours, there's very few people among my family and friends who'd even be interested in doing a small software project, much less capable of doing so.
I have spent some percentage of my life attempting to rewrite all software from first principles up.
Software is so spectacularly broken. Applications that don’t let me adjust the position of a little button for my work habits. Why is that impossible!?! A global software and commerce system, where you can buy candy or transfer $ billions, both with cute warnings like “Please, oh, please, sir! Please don’t hit the back button!”
I can sum up the results of my quest quite simply: “The rewrites continue…”
Is this chasing windmills? The case for that seems solid on the surface, but…
It is true that every rewrite of a specific set of features, or a platform for enabling better support for efficiently and correctly commingling an open class of features, inevitably runs into trouble. Some early design choice is now evidently crippling. Some aspect can now be seen to have two incompatible implementations colliding and setting off an unnecessary complexity explosion. Etc.
But on the other hand, virtually every major rewrite points to a genuinely much improved sequel. Whose dikes keeping out unnecessary complexity hold up longer, with fewer finger holes to plug, for a better return. Before its collapse.
Since there must be a simplest way to do things, at least in any scoped area, we have Lyapunov conditions:
Continual improvement with a guaranteed destination. A casual proof there is a solution.
It’s a dangerous phantom to pursue!
——
It would be interesting to compile a list from the heady 90’s, when corporations created boondoggles like Pink and Cyberdog, and had higher aspirations for things like “Object Linking and Embedding”.
You just don’t see as many romantic technological catastrophes like those anymore. I miss them!
> Is this chasing windmills?
Yes. Well, "tilting at," jousting specifically. The figure relates to the comical pointlessness of such an act; the windmill sail will in every case of course simply remove the lance from the rider and the rider from the saddle, and turn on heedlessly, as only a purblind or romantic fool could omit trivially to predict.
> You just don’t see as many romantic technological catastrophes like those anymore.
The 90s were a period of unparalleled economic surplus in the United States. There was more stupid money than at any other time and place in history, and stupid money always goes somewhere. Once that was tulips. This time it was this.
> I miss them!
I miss the innocence of the time, however amply undeserved. But I was young myself then.
> I miss the innocence of the time, however amply undeserved. But I was young myself then.
I see things slightly differently.
Big failures whose practical and theoretical lessons and new wisdoms are then put to use, more carefully, ambitions unabated, teach things, and take technology to unexpected places.
But big failures, institutionalized as big failures, become devastating craters of resources, warding off further attempts for years or decades … but only after the fact. That didn’t need to be their legacy.
If you're looking at it really from first principles, it's hard to beat Forth systems. You can type in PlanckForth (https://github.com/nineties/planckforth) in a hex editor, i.e. it can be built from zero software, by effectively morse-coding it into bare memory.
In terms of accessibility though, I'd recommend Forthkit (https://github.com/tehologist/forthkit), Miniforth (https://compilercrim.es/bootstrap/), Sectorforth (https://github.com/cesarblum/sectorforth), Sectorlisp (https://justine.lol/sectorlisp2/) Freeforth (https://github.com/dan4thewin/FreeForth2 contains an inlining cross-compiler for MSP430)
The problem with Forths is that they don't seem as scalable as, say, Lisp, from a social perspective. At a larger level, Project Oberon (https://projectoberon.net/) builds from the base CPU on FPGA, and A2 (https://en.wikipedia.org/wiki/A2_(operating_system)) shows what can be done to scale up.
Steps (https://github.com/robertpfeiffer/cola/tree/master/function/...) also was supposed to do this, but the available code is rather disjointed and not really easy to follow.
Forth has been a great inspiration. A demonstration that great flexibility, and low level control, can be had with very low overhead or complexity.
As you note too, Forth is also useful as a counter demonstration of how important abstractions are. Without powerful abstractions (or simple abstractions that can be composed into powerful abstractions), Forth fails to scale, most especially across a team or teams, and for any expectation of general reuse, beyond basic operations.
The first version of Forth I used I wrote myself, which is probably a common event as you point out. Forth language documentation is virtually its own design doc.
Lisp is the other language I began using after buying a book and writing my own.
Thanks greatly for the links! I will be following up on those. Any insight from anywhere.
I don't know if this is post-hoc justification, but I see myself as somebody who wants to know what everything is and how everything works - so to me, re-implementing (and always failing to take it to completion) is the means to an end. I think I spent the first 25 years of my life studying, so learning has become the goal itself. Work is there to provide funds to support me while I learn. Re-implementing the basics of something is a terrific tool for learning how it works.
A fellow autodidact!
Yes, implementing things, even those that others have already done, reveals depths that no study of others’ artifacts or solutions ever could.
I was in the middle of rewriting a library and adding sensible defaults for the CLI. I feel like I wrote this for myself
I expected this to be about the curse of competence. Still an interesting write-up; I appreciate you sharing it.
I find the key is to have a purpose that would be less pursued if time is spent poorly. Combined with getting exceedingly honest with your estimates and internalizing https://xkcd.com/1205/ you can avoid throwing the baby out with the bathwater.
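In the spirit of that chart, a back-of-the-envelope check (all numbers illustrative):

    # Is an automation worth building over a 5-year horizon, xkcd-1205 style?
    def worth_it(minutes_saved_per_use, uses_per_week, hours_to_build, horizon_years=5):
        hours_saved = minutes_saved_per_use * uses_per_week * 52 * horizon_years / 60
        return hours_saved, hours_saved > hours_to_build

    print(worth_it(2, 10, hours_to_build=16))   # (~86.7 hours saved, True): a weekend of scripting pays off
    print(worth_it(2, 10, hours_to_build=160))  # (~86.7 hours saved, False): a month of it does not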
There's a... sad contrast between this article about over-responsibility and https://www.benkuhn.net/blub/, which is about the value of learning the stack you use (the more you know about the workings of bugs, sometimes the more they gnaw at you).
Link is broken, maybe someone has a backup archive.
It's good to remember it's a marathon, not a sprint and that everything rots over time.
This resonates a lot with a thought that I've had for a long time.. A computer is not a tool, it never was, it never will be.. It's a workshop.
So true. I have tried building from scratch so many times, for specific use-cases and with my own opinionated experience, only to recreate the bloat over time. That's actually the good outcome. The alternative was building something based on a momentary spark of creativity that no one, not even me, ends up using.
Slow down, you crazy child
You're so ambitious for a juvenile
But then if you're so smart
Tell me why are you still so afraid? Mm
Where's the fire, what's the hurry about?
You'd better cool it off before you burn it out
You've got so much to do
And only so many hours in a day, hey
But you know that when the truth is told
That you can get what you want or you can just get old
You're gonna kick off before you even get halfway through, ooh
When will you realize Vienna waits for you?
Slow down, you're doing fine
You can't be everything you wanna be before your time
Although it's so romantic on the borderline tonight, tonight
Too bad, but it's the life you lead
You're so ahead of yourself, that you forgot what you need
Though you can see when you're wrong
You know you can't always see when you're right
You're right
You've got your passion, you've got your pride
But don't you know that only fools are satisfied?
Dream on, but don't imagine they'll all come true, ooh
When will you realize Vienna waits for you?
Slow down, you crazy child
And take the phone off the hook and disappear for a while
It's alright, you can afford to lose a day or two, ooh
When will you realize Vienna waits for you?
And you know that when the truth is told
That you can get what you want or you could just get old
You're gonna kick off before you even get halfway through, ooh
Why don't you realize Vienna waits for you?
When will you realize Vienna waits for you?
"Vienna" by Billy Joel
https://www.youtube.com/watch?v=3jL4S4X97sQ
raf: from grins to smirks, thank you for repeatedly putting the smiles on my face. You know me so well considering we’ve never met. Ever so salient; thanks for (hopefully) course correcting any of my remaining years :)
These days, knowing that instead of spending hours artfully crafting a solution to something, GPT could code up a far-less-elegant-but-still-working solution in about 5-10 minutes of prompting has all but solved this.
I went down this road and it doesn't free up time, you just get to fix many many more problems.
I should clarify - I don’t mean I use GPT to write these solutions, I leave them unsolved knowing that they’re solvable in a very inelegant way.
That makes me feel even more guilty for not solving them, now that I realize the solution is one or two orders of magnitude easier to do.
Not joking with orders of magnitude. At this point, I regularly encounter a situation in which asking ChatGPT/Claude to hack me a little browser tool to do ${random stuff} feels easier and faster than searching for existing software, or even existing artifacts. Like, the other day I made myself a generator for pre-writing line tracing exercise sheets for my kids, because it was easier than finding enough of those sheets on-line, and the latter is basically just Google/Kagi Images search.
Yeah but if you let go of your years of coding standards / best practices and just hack something together yourself, it won't be much slower than chatgpt.
For some value of "working".
My most useful hobby project is the one I just "hack and find out".
I made a matrix LED clock that can sync time over the network, using Arduino and an ESP32. Due to time constraints, the coding standards are horrible (magic numbers, dynamic allocation, no abstraction between modules, etc.), but hey, it works, and has for 7 years now. The code took me 3 days to finish, and I would never write such code in production FW.
There is a bug that sometimes makes it unable to connect to the network, but it can be fixed by turning it off and on again; I never bothered to debug or patch it.
Perfect is the enemy of good.
I'm getting a 404.
I see a similar issue in 2025, but with vibe coding.
It all boils down to the 3 key factors: speed, quality and cost. And you can't have it all
Know your trade-offs.
How do you achieve speed and quality at the same time?
To quote one movie character: "Bro, when you can't fix a problem with money, you fix it with a lot of money."
Jokes aside, you find the crème de la crème of engineering, and pay as much as they ask for.
Speed + Quality = $$$$$
On the other hand,
Speed + Cheap = Crap Quality
Cheap + Quality = Slow
With vibe coding, you have none of those. The initial speed advantage will disappear once you need to maintain the mess.
Thanks for the comment. Discussing tools (in this case, vibe coding) naturally raises the issue of what skills are actually needed to use them effectively, such as prompting, which in turn leads to a broader question about the role and relevance of traditional software engineering skills. It's a whole other topic...
You can create value with vibe coding. As I said, know your trade-off, your context.
You wouldn't use a hammer to fix a watch, would you?
Glad I'm low enough on the bell curve to not have this problem. If a tool is broken I shrug. Maybe someone will fix it
It's funny how it's returning a 503 :(
Great article. I know it's not the main subject, but I really liked this part:
> Technical Work as Emotional Regulation
Men are taught to do that in most societies. You are unhappy - don't bother talking about it (men don't cry), do something for society - you'll receive praise in return and your pain will go away for a while. Even if nobody praises you, you'll think better of yourself. It's the same thing that makes our fathers obsessively fix every minor inconvenience around the house instead of going to the doctor with their big health problem.
Men often laugh at women talking for hours instead of fixing the damn problem (and it is frustrating to observe). But we often do not fix THE damn problem either - we fix other unrelated problems to feel better about the one we fear thinking about.
What's more tech-specific IMO is the degree to which our egos are propped up by our code. Code is the one thing many programmers had going for them when they grew up. It's what made them special. It's what was supposed to make up for all the bullying in school. It's what paid their bills and made them respected. It's very hard not to make code your main source of value.
People praise "ego-less" programming, and most programmers adhere to the rules (don't get overly defensive, take criticism, allow others to change "your" code, etc.) But that's not actually ego-less programming, it's just hidding your ego in the closet and suffering in silence.
If you procrastinate when programming - it's because you feel your code reflects on your worth as a human being. It's all ego. Changing what you do won't change that. You need to change what you think.
To expand: code is a means. The core "habits" that programmers develop (at least speaking for myself) are twofold. First, abstracting - trying to reduce things to simple stereotypes, codifying / decision-tree-ing interactions, things like that. Second, problem solving - when someone mentions an issue, the gut reaction is to try and fix it. And it takes a lot of internet wisdom and/or therapy to learn that you are allowed to let people stew in their problems, or that unless you're asked for a solution you don't need to offer one.
Problem solving is easier than listening and empathy.
Some truth in there, but I have to admit that most simple static generators I have written I wrote out of fun and curiosity and not because I thought all existing ones had flaws (even if they did).
It is okay to do things and abandon them later, that is how we learn. We programmers are multipliers, which gives us special responsibility. If we create a shit tool with a shit workflow that wastes time, we waste time multiplied by the number of users of our software. If we save time or bring joy, the same is true. That can be beautiful and devastating.
But every software needs to be maintained somehow and maintainability is a technological choice as well. I have an embedded project running for well over a decade without a single maintenance step of either the hard- or the software. I could have built that project also with more dependencies to the outside world, or in more sophisticated ways with more moving parts, but I wanted to not deal with the consequences of that. This choice isn't always easy, but it is a choice. Ask your sysadmin which things worked for the past decades without having to touch them and investigate. Typically it is boring tech with boring choices.
Another aspect the article does not tackle is that if you know to repair or build many things, people will naturally also come to you asking you to do precisely that. But that again produces maintenance work and responsibility. This is why I like working in education, you can show people how to do it and then it is their project.
This is very amusing, and evocative of what creative makers go through in many other fields... and as a do-it-myself absolutist, it got me mired in attempts to pick up programming as another side gig/hustle in a life that is wildly oversubscribed. So reluctantly I am letting go of a bunch of stuff that lingers, presque vu skills, always almost, and never enough time to push up and over. The bonus is that I have focused on things that I did level up on, and can now do with ease, and a bit of flair... and perhaps more importantly give others a bit of joy to watch and see, "hey! that doesn't look so hard now!" One of the things I picked up along the way was audio engineering, live and studio sound, where when it kicks in, it's pure misery: every mistake and buzz ruins all music listening, fingers twitching for sliders that are not there, griping like a back-seat driver... the better the stereo, the worse the experience, especially with heavily produced studio work... at least with live recordings, all the mistakes are honest and often the recovery is where the magic happens.
The next step is burn out, then keeping everything default and just living with it. I know of no steps after this.
Then one day you die and get a standard-issue funeral. I guess.
We won't fix things by adding, but by deleting.
Yes, but deleting is more difficult and often more expensive than adding. It becomes another burden if you are not bold and determined enough.
The less, the better
This resonates
Can't disagree with any of that
Why do I have 244 GB of environments and beat myself up when I delete something to play a game?
This deep desire to affect change in a controllable way...
This infinite desire for self value defined by external validation.
It's not sustainable. Perfection can only be obtained by observation of perfection of combined self through self and other.
It's okay to discard parts of yourself to balance yourself with your counterpart. A willing violation is no longer a violation.
Not observation of one or the other on a pedestal, but accepting that both are vital parts to the system and observing the perfection that comes from co-iteration.
Essentially turning a binary system quantum.
The site is down
/.ed :(
Well, with OpenBSD:
- Updates often don't break things
- Remind for notes
- Gopher as my main site
- Multimarkdown+git for a wiki
- A web/blog without RSS is not worth your time
- Nvi can be good enough against vim, entr+make do magic
- mbsync/msmtp/slrnpull and the like work in batch mode; fire up mutt and forget
I don't hack my tools any more. I don't distrohop. CWM, XTerm, MuPDF, GV and friends like Bitlbee have done everything well for years. Now I'm focusing on Forth, because in the near future low-power microcontrollers and laptops will have a thing or two to say.
This resonates so much. It's always nice to see I'm not alone.
I believe computer programming is the closest humanity has ever come to godhood. We're creating entire universes out of unstructured bits. It's addicting. I feel this deep need to remake everything in my own image, to have the entire system reflect my own understanding.
I often feel like I'm insane for thinking there's a better way. Surely someone much smarter than me would have thought of it, right? I must be stupid and missing some crucial fact that proves me wrong. Then I do it and it actually fucking works. What a rush.
I only regret the fact I'm a mere mortal with just one lifetime and whose days have just 24 hours which must be carefully allocated. Real gods have infinite time and are capable of infinite effort. Just look at the universe. It's a deep religious realization.
> Your once-perfect tool breaks silently because libfoo.so is now libfoo.so.2.
... Solution: get rid of libfoo and do it yourself. Now when it breaks you only have yourself to blame.
Yeah, I know... At some point it becomes pathological. It can still be an immensely fun activity if you're curious and have way too much free time on your hands.
> Sometimes, it’s OK to just use the thing.
Also okay to just complain. No, you don't actually need to send in the damn pull request. It's alright.
nice
"SSL error"...