I maintain a few JavaScript libraries that I manually verify compatibility against IE6 (and have lints to catch violations). I manually polyfill a few necessities and quality-of-life improvements up top in the script. Out of curiosity, I removed my polyfills and tried both swc and babel, followed by an eslint pass, and the results were absolutely atrocious. Everything gets polyfilled, even stuff that has been supported in every IE version ever. The usage-based detection is completely borked: it's based purely on string-searching property and function names. Using toString() anywhere pulls in the Date-to-string, RegExp-to-string, and Object-to-string polyfills. Using a regex anywhere pulls in a bunch of regex polyfills. It was a nightmare, and the size of my library increased by orders of magnitude!
(I tried opening an swc issue about optionally using typescript ast info (via a plugin, not in swc core) to have more correct usage-based polyfill detection, but that was closed as unlikely to be acted upon.)
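For anyone curious what that kind of setup looks like, here's a minimal sketch of the config involved (the exact core-js module names that get injected depend on the core-js version, so treat the comment as illustrative):

```js
// babel.config.js — usage-based polyfill injection via @babel/preset-env
module.exports = {
  presets: [
    ["@babel/preset-env", {
      targets: { ie: "6" },  // ancient browser target
      useBuiltIns: "usage",  // inject polyfills based on what the code appears to use
      corejs: 3,
    }],
  ],
};

// With this config, something as innocent as `value.toString()` is matched
// by name against core-js entries, so the build pulls in the Object, Date
// and RegExp toString polyfills even when `value` is none of those types.
```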
Author here.
That mirrors my experience working on various projects too. Automatic polyfilling is such a good thing in theory, but the reality isn't as rosy, and many more polyfills than necessary get included.
Thanks for writing your article and sharing! One thing I do on my other libraries is also to polyfill a few heavier things asynchronously instead of making everyone pay the price upfront, so for example I detect if a JSON polyfill is required before asynchronously loading that polyfill. I think the expectation for most things is that the browser will support them by default, and it’s ok if all/any polyfills are loaded separately and asynchronously.
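As a rough sketch of that pattern (the polyfill URL below is a made-up placeholder), written in old-school ES5 so it runs even in the browsers it's meant to help:

```js
// Load a script tag and invoke the callback once it has executed.
function loadScript(src, onLoad) {
  var s = document.createElement("script");
  s.src = src;
  s.onload = s.onreadystatechange = function () {
    if (!s.readyState || /loaded|complete/.test(s.readyState)) {
      s.onload = s.onreadystatechange = null;
      onLoad();
    }
  };
  document.getElementsByTagName("head")[0].appendChild(s);
}

// Only fetch the JSON polyfill when the runtime actually lacks native JSON,
// so modern browsers pay nothing up front.
function withJson(callback) {
  if (window.JSON && typeof window.JSON.parse === "function") {
    callback();
  } else {
    loadScript("/polyfills/json2.js", callback); // placeholder URL
  }
}

withJson(function () {
  var data = JSON.parse('{"ok":true}');
  if (data.ok) {
    // ... continue with code that needs JSON ...
  }
});
```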
> I maintain a few JavaScript libraries that I manually verify compatibility against IE6
Genuinely curious why anyone would target IE6 in 2023. Is it a personal goal to have massive coverage for your library?
It was used in the checkout flow on our website and the marginal cost of supporting IE6 wasn’t worth a lost sale. That was many years ago, and since then I don’t think we have anyone on less than IE8 due to TLS version issues with CloudFront, but the code already supported IE6 and there aren’t many polyfills extra compared to IE10, so :shrug:
Why are you supporting IE6?
I replied to a sister comment to yours but it’s an old library that started off with IE6 support maybe 10 years ago and just kinda never lost it.
Thank you for supporting IE6. I also maintain some JS and ensure compatibility with older browsers. Someone out there still wants to use it, whether it's an old machine with nostalgic value, the only machine they happen to have available, or just retro-computing enthusiasts, and I take pride in accommodating them. Good on you.
Since IE6 depends on the OS's Schannel implementation, it should support neither TLS 1.2 nor TLS 1.3. This means it simply can't access the modern web anymore. Maybe there are still webservers out there that support outdated SSL/TLS versions and were never patched, though. *shudder*
For accessibility reasons, I make my sites available using HTTP.
It allows users to bypass using TLS in situations where it breaks being able to access the website, such as this one.
(There are many scenarios when TLS can break accessibility, and in many of them access to the information is more important than the integrity of the connection.)
I guess one could also set up a proxy server that terminates TLS and rewrites https links to http. That would considerably improve confidentiality (TLS is not about integrity. Integrity has to be added at the application layer!)
I agree, and that is a great idea.
It would not solve all the problems, since not every user can set this up for themselves, but it would help a lot in the retro-computing world.
There is some interesting [drama][1] with this, since this article noticeably doesn't mention any PRs they opened to remove some of these older polyfills.
The reason those PRs were never opened/merged is the maintainer of many of those libraries [has a strong stance on "breaking" changes][2] in software:
> I have developed an intense avoidance for breaking changes in all of my packages, because I don't want to inflict hundreds of millions of dollars of person-hour cost on the entire industry unnecessarily.
IMO this argument ignores the opposite cost: people then spend a ton of time (and money) trying to make old tech work with newer tech, since not everyone maintains to the same standards of backwards compatibility.
But regardless, no one is required to stick to a particular way of creating open source software, so the one benefit here is that you are free to [fork the library][3] (assuming its license allows for that) to remove some backwards compatibility that isn't relevant to you.
[1]: https://twitter.com/ljharb/status/1704912065486618915
[2]: https://github.com/import-js/eslint-plugin-import/pull/2447#...
[3]: https://www.npmjs.com/package/react-outside-click-handler-li...
I try to focus on the issues rather than individuals, but the root of the problems in the listed eslint plugin libraries points to ljharb.
If you do some simple digging into these libraries, you will find that these types of commits are quite common within them.
https://github.com/jsx-eslint/eslint-plugin-react/commit/e1d...
https://github.com/jsx-eslint/jsx-ast-utils/commit/bad51d062...
https://github.com/jsx-eslint/eslint-plugin-jsx-a11y/commit/...
He would rather see the download count of these polyfill libraries (https://github.com/ljharb/ljharb#projects-i-maintain) increase than assess the health of the JavaScript ecosystem.
ljharb is an extremely interesting person. There’s no doubting the positive impact he’s had on the OSS community and the work he’s done.
However, there are some things he does that are incomprehensible.
For example, Enzyme. Over three years ago this issue was opened for Enzyme on React 17: https://github.com/enzymejs/enzyme/issues/2429
Nothing moved for a while, and I think he said something along the lines of “if you want React 17 support, stop complaining and help”. So the community got involved. There are multiple PRs adding React 17 support. Many unofficial React 17 adapters. A lot of people have put a lot of work into this, ensuring compatibility, coverage etc. Yet to this day, none of them have been merged. Eg https://github.com/enzymejs/enzyme/pull/2564
Given the amount of time that has passed, and the work the community has put in, something is amiss. It feels like he’s now intentionally avoiding React 17+ support. But why? I don’t understand why someone would ask for help then ignore the help when it comes in. That isn’t much better than the swathe of rude/entitled comments he was getting on the issue before he locked it.
I ended up migrating to RTL, but this made many of my tests more complicated (especially compared to shallow rendering).
I'd guess the issue with that project is that it's mostly a single-person effort, and if that person is busy with other projects/repos/real life, they might not have the bandwidth to keep up with review, might lose interest, or might simply not notice the PR update in the stream of incoming requests.
I guess the community (if there is any) can request to have more maintainers added to the repository to share the burden, else you still have the option to fork the repo and publish it under a different name.
I get being busy. But it’s been three years, and he keeps saying the project isn’t dead, so he hasn’t lost interest. Also, he asked for help then largely ignored the help and/or started bike shedding the PRs. He’s 100% across everything that happens in the repo, investing a lot of time replying to every comment immediately.
I’m pretty sure people offered to become maintainers and he declined. Forking has its own set of problems, but yeah you’re right, this is at least a possibility.
RTL is more flexible and arguably puts developers in a situation where they have to write better, more user-centric tests.
The complexity for me is around events, which I imagine is somewhat common. That said, I think it's far better and more durable than Enzyme.
Ljharb is harmful to the ECMAScript community
He's frequently on a powertrip and confidently wrong about so many things. For instance he's one of the people who perpetuate the "Javascript is fast and you don't need to optimize anything" falsehood. No wonder he's bringing in 100 polyfills into everyone's project. He thinks JS is C.
I'm thankful for his work, but I do agree, this polyfill madness has to stop.
I am _appreciative_ that people like him exist, because -- personal feelings aside -- we do need people who have his level of dedication in the open source space.
My only interactions with ljharb have been in TC39. More often than not when looking at issues he's active in, I find myself wanting a level of maturity and thoughtfulness that just isn't there.
His contributions are neither novel nor very compelling. He's more of an open-source bureaucrat than an engineer or a developer, yet he is currently steering several major TC39 proposals. I've seen him shut down or sidetrack valid input and criticisms in all of them. The vibe I get from his behavior is that of a member of "the cool kid club" -- wanting to maintain that ingroup/outgroup boundary very tightly.
He really should just get out of the standards/committee space entirely for the time being, and get some coaching on leadership skills before reentering. I realize this is super negative. Still on the fence about whether this feedback belongs in the public space.
Insanity. In projects where I have full technical control I don’t use eslint anyway, it creates a massive load of work for negligible (if any) gain. Typescript finds the important issues, prettier fixes formatting; linters just create giant swathes of useless busywork.
I think your description of linting can be true, but not necessarily for every eslint configuration. I feel that I've landed on configs that feel useful to myself and the rest of the devs on the team. Even with a rule set that contains many rules that can look nitpicky at first glance, no one complains, since they are auto-fixable and fixing is part of a commit hook.
There are a handful of rules that are nice to have in a TypeScript project to make sure devs don't do things that break type safety, plus some that keep mistakes from slipping through (even though the code is reviewed).
One thing I've found super useful is to have @typescript-eslint/ban-ts-comment enabled, but configured so that you can still use @ts-expect-error and the others as long as you provide a comment when doing so. This is so nice when doing code reviews, either someone has provided a very good reason for the exception, or it is clear that the same result could have been achieved with a better approach. Same goes for disabling eslint rules inline in a file, also allowed if commented. I feel that this is a very good compromise between being strict with linting, but also not letting linting get in the way.
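For reference, a sketch of that configuration (the rule options are as documented for @typescript-eslint/ban-ts-comment; tweak to taste):

```js
// .eslintrc.js (excerpt)
module.exports = {
  rules: {
    "@typescript-eslint/ban-ts-comment": [
      "error",
      {
        // @ts-expect-error is allowed, but only with an explanation
        "ts-expect-error": "allow-with-description",
        "ts-ignore": true,   // still banned outright
        "ts-nocheck": true,
        minimumDescriptionLength: 10,
      },
    ],
  },
};
```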
Yeah there is like one rule that I often try to fix, the “no non-null assertions” rule (some.property!), because it often leads to bugs. I kind of wish you could just disable non-null assertions at the compiler level really. Maybe strict mode should do that.
I can’t think of any other lint rules I find valuable. For the giant mass of node_modules you need to run eslint, I think its cost-benefit is pretty darn terrible.
> massive load of work for negligible (if any) gain
If you coded alone for 10 years and then add a strict linting config you're going to have a really bad time.
If you actually follow the advice a linter gives you, you come out a 10x better developer.
Of course not all lint rules are created equal, but some are arguably essential.
I’ve done that, and I’ve coded on projects with big teams, with and without linters, for way longer than that, with multiple languages. Linters have never contributed anything close to what a half decent type system does to a project.
Types, tests, lints are all complementary. Your comparison is meaningless. The fact that you'd even compare the two shows how much your experience with linters is worth.
"I don't wear a helmet because I can just drive slowly"
"I can eat this 8-patty burger because I'm going for a run later"
You know, it's heresy to say, but if there was some kind of analytics or reporting done by the plugin we could know for sure that no node v4 users are out there on it and upgrade the deps and be done with the whole argument.
> But regardless, no one is required to stick to a particular way of creating open source software, so the one benefit here is that you are free to fork the library (assuming its license allows for that) to remove some backwards compatibility that isn't relevant to you.
I think that's the most important takeaway here. The library author decided not to make breaking changes and to keep compatibility wherever he can. I don't think that's a requirement many people have (not to this level, anyway), but it's not unreasonable either.
No project is required to use or accept his code. People want qs, resolve, and nvm.sh, and this one person is willing to provide it to everyone for free.
I don't care if he refuses suggestions because of "breaking changes" or because "they don't spark joy". It's his project, you can disagree with him all you want, but you can't complain that the free work he's doing isn't to your liking.
I think it's telling that a lot of people are willing to argue with the maintainer but very few people are willing to step up to provide and maintain a better fork.
Is there not a config for minimum supported JS version? That would appease everyone, while maintaining backwards compatibility.
Docs should show the recommended version (modern) and show what options are available to go deeper.
Obviously adding those settings for every polyfill is non-trivial, but burdening everyone with every polyfill ever is also suboptimal. If anything, this would make cleanup easier going forward since it would all be classified.
There is an "engine"!entry in package.json, and a package manager should be smart enough to not upgrade past what it can. But people tend to not use that sort of dependency flexibility and use lock files…
The reason libraries call polyfills directly is because it's impolite for a library to change global scope underneath you.
Usually it's the top-level application's author who chooses and configures polyfills.
Now one may reasonably ask, why doesn't the library just call Object.defineProperties directly, and tell the user to install the appropriate polyfill?
I'm going to guess that a library that Just Works after an npm install will see much better adoption than one that requires each user to configure their babel/swc/etc. correctly, especially since the library can be a dependency of another library.
There's currently no standardized mechanism in the npm ecosystem to do the equivalent of "Install this library, and also configure your environment to pull in all required polyfills" so that the required functionality is available in global scope. One reason is because the transpilers that automatically polyfill into global scope are third-party tools.
Maybe a standard mechanism like this should exist, but it doesn't today, hence the quite reasonable choice of library authors to directly use polyfills because doing so:
1. Avoids polluting the global namespace, since the polyfill is never applied globally
2. Works as a dependency without additional configuration by the user
3. Preserves backwards compatibility
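To make the distinction concrete, here's an illustrative sketch (using Object.assign as a stand-in, not the actual code of any of the packages discussed): a polyfill patches the global object, while the direct-call ("ponyfill") style exports the implementation and leaves globals alone.

```js
// Shared fallback implementation (simplified).
function assignFallback(target) {
  for (var i = 1; i < arguments.length; i++) {
    var source = arguments[i];
    if (source != null) {
      for (var key in source) {
        if (Object.prototype.hasOwnProperty.call(source, key)) {
          target[key] = source[key];
        }
      }
    }
  }
  return target;
}

// polyfill.js — mutates the environment for every consumer
if (typeof Object.assign !== "function") {
  Object.assign = assignFallback;
}

// ponyfill.js — callers import and use it directly; globals stay untouched
module.exports =
  typeof Object.assign === "function" ? Object.assign : assignFallback;
```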
A somewhat cheap fix to at least reduce duplication of polyfills would be for libraries that need polyfills to accept a wide version range. That would give the package manager room to pick a version that's compatible across call sites.
But why are we polyfilling a function that exists in every version of node? When did this code not just work after installation that it required a polyfill?
My best guess is that the code was used in non-node environments.
It wasn’t and isn’t uncommon to pull down a dependency from npm and expect it to work in multiple runtimes.
Modifying global scope is the whole point of a polyfill though. And polyfills check themselves whether they were already applied or not.
Maybe a step to a more sane situation would be reducing redundancies between polyfill libraries to ensure they don't step on each other's toes.
You'll never get NPM apologists to acknowledge this. One of their only skills is making non-specific appeals to the necessity of it all (as essential infrastructure) and vague arguments that boil down to "you need to trust the wisdom of the crowds" (and e.g. the fact that it exists and everyone else is using it means that anyone who disputes its value just doesn't understand it—bonus points for them if they manage to work in a slight that's designed to paint you, implicitly or explicitly, as a junior), despite not being able to attest to any firsthand knowledge of the real why of anything they're defending.
> The new dependencies were all polyfills for JavaScript functions that have long been supported everywhere. The Object.defineProperties method for example was shipped as part of the very first public Node 0.10.0 release dating back to 2013. Heck, even Internet Explorer 9 supported that. And yet there were numerous packages that depended on a polyfill for it.
Back in 2011 or so, when npm was just getting started and Ruby on Rails was all the rage, "Don't repeat yourself" was seen as the gold standard for programming. The end result has been a mess, particularly for npm. There were FAR too many articles talking about how "there's no such thing as too small a dependency," and talks given about how much of a virtue it was to create "is-odd" or "is-even": "look, you saved 3 lines of code that you don't have to test!"
Unfortunately, that, compounded with browsers of the era (Internet Explorer...) having basically zero support for modern JavaScript, led to a proliferation of dependencies, polyfills, etc. that are nearly impossible to remove from the ecosystem.
I've not seen a lot of node apologists who are fine with the current ecosystem. The problem is that righting the ship is going to be terribly hard. Either existing frameworks/libraries need to go through the effort of saying "OK, do I really need is-even? Let's remove it," or we need new frameworks/libraries to abandon those tools and that ecosystem in favor of fatter and fewer dependencies.
I think the issue all stems from the fact that before 2010ish, there was one library and one framework, jquery (Ok, there were others... but were there really?) and that added a good 1mb to any webpage. The notion was we do more with less if we had a bunch of smaller deps that didn't need to be brought in.
> "Do not repeat yourself" was seen as a gold standard for programming.
I remember this era. I've been using Node.js since before npm existed, and so many silly things have happened in that time.
I think the core problem the JS ecosystem has always had is that most JS developers are relatively inexperienced. (JS is very beginner friendly and this is the price we pay). I still vividly remember being at nodecamp in 2012 or something listening to someone tell me how great it would be if the entire OS was written in javascript. It didn’t matter how much I poked and prodded him, he couldn’t hear that it might not be an amazing idea. I think he thought it would be easier to reimplement an OS kernel in JS than it would be to just learn C. And there were lots of people around with a sparkle in their eye and those sort of wacky ideas - good or bad. It was fun and in hindsight a very silly time.
So yeah, of course some idiot in JS made is-even. And is-odd (which depends on is-even). I see all of this as the characteristic mistake of youth - that we go looking for overly simple rules about what is good and bad (JS good! C bad!) and then we make a mess. When we’re young we lack discernment about subtle questions. When is it better to pull in a library vs writing it yourself inline? When is JS a good or a bad idea? When should you add comments, or tests? And when do you leave them out?
Most of the best engineers I know made these sort of stupid philosophical mistakes when they were young. I certainly did. The JS ecosystem just suffers disproportionately from this kind of thing because so many packages in npm are written by relatively new developers.
I think that’s a good thing for our industry as a whole. But I also get it when Bryan Cantrill describes JS as the failed state of programming languages.
> The JS ecosystem just suffers disproportionately from this kind of thing because so many packages in npm are written by relatively new developers.
I think it also suffers because it grew in the age of Internet and Open Source, which made the problem compounding. Programmers write a lot of stupid code when learning, it's part of the process - but it used to be that the stupidity was constrained to your machine and maybe a few poor souls who ended up reading or using your code. In JS ecosystem, all that stupidity gets published, and ends up worming its way, through dependency chains, into everything.
> I still vividly remember being at nodecamp in 2012 or something listening to someone tell me how great it would be if the entire OS was written in javascript.
That reminds me of Gary Bernhardt's "The Birth & Death of JavaScript" talk[0], which is one of the best comedy programming talks. Despite the comedy, it's also a half-decent idea.
[0] https://www.destroyallsoftware.com/talks/the-birth-and-death...
I thought is-even/is-odd were satirical? Granted, satire echoing these same points... but, you know, just for-the-record.
Although it's unclear: every time someone provides a one-liner there are staunch, not-satirical arguments about "covering edge cases you didn't think of," and I disagree. Look at the actual code: https://github.com/i-voted-for-trump/is-odd/
There's a "tested edge case" where you can check isOdd('1') and it will work - to me the code is already wrong, isOdd(STRING) should throw. The correct solution is to use Typescript and assert the input is a number.
The Number.isSafeInteger check is an interesting one. Yes, it's not "safe" to work with a number outside this but no doubt something in the function would be broken before an is-odd check if it was relied upon.
Often the edge case someone else catered to is something counter to my expectation. I would rather write two lines of code myself and be aware of the behaviour.
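The inline version the comment alludes to is short enough to own yourself; a sketch (throwing on non-integers, as suggested above):

```js
function isOdd(n) {
  if (!Number.isInteger(n)) throw new TypeError("isOdd expects an integer");
  return Math.abs(n % 2) === 1;
}

isOdd(3);    // true
isOdd(-2);   // false
// isOdd("1") now throws instead of silently coercing the string
```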
It's hard to tell.
The author of both packages has since moved the github repos to an organisation named "i-voted-for-trump" and labelled the packages with the "troll-bait" tag.
But the author also claims those packages were created when they were learning to program, and they seem to have stuck with the idea of small libraries (just not quite that small). The changes seem to be due to the grief the programming community has given him over those packages, as some of the worst examples of the pattern.
I suspect the author was being slightly satirical by choosing to do the simplest possible NPM library. But at the same time they probably believed it was a useful package, allowing new programmers (like himself) to calculate the evenness/oddness of numbers without needing to know or google the mod-2 trick. And if you are going to learn how to create npm packages, might as well start small.
If they were aiming to be slightly satirical, they were not expecting that level of outrage.
It's worth pointing out that the libraries do slightly more than just mod-2. It checks the argument passed in was actually an integer, and throws descriptive error messages if the argument is not a number or not an integer.
There were packages for every color code in npm at one time.
Those may be satirical, but `leftpad` is real.
https://qz.com/646467/how-one-programmer-broke-the-internet-...
`left-pad` is the perfect amount of complexity and "I'm sure I can do it" ability by everyone who looks at it. I want you to write that logic and not have some stupid mistake that was fixed in that module 15 years ago.
Now we have String.prototype.padStart thanks to that debacle.
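For anyone who hasn't switched over, the native method covers the old use case directly:

```js
// String.prototype.padStart (ES2017) replaces the left-pad dependency
"7".padStart(3, "0");  // "007"
"42".padStart(5);      // "   42" — pads with spaces by default
```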
Then copy-paste it from SO^H^H your LLM copilot. It's clearly not something that benefits from being an external dependency.
Most people fine with the problems of the ecosystem have financial incentives.
Stating that you maintain 800 NPM libraries brings more clout and money than maintaining a foundational one.
Even with foundational packages things tend to go wrong. Why add features to an existing package if I can write several plugins? Or even worse in some cases: why use the existing configuration file if I can instead just ask users to install dozens of dummy packages that only exist to trigger a feature in my Core package?
There was a closely related movement at the time calling for languages to have minimal "bloat"; combined with importing only what you need, down to single functions, it was viewed as the future of programming.
I was sympathetic to that idea then; it sounded good in theory, but in practice it was horrible.
Today I enjoy coding in "bloated" languages with only a few external dependencies.
I wonder if there would be any interest from folks for an alternative npm registry that would automatically cleanup dependency chains like this (among other things, possibly), remove polyfills etc.
I've always thought about something like this; with on-the-fly manipulation of packages via SWC it would be pretty fast, I think.
I think a better approach would be automated tooling to detect opportunities for cleanup, and then submitting PRs to libraries.
That only works if the maintainers of the packages are interested in the cleanup -- the maintainer behind some of the packages mentioned in this thread is very much not. They prefer the bloat to incrementing the major version number.
esm.sh kind of does this for livecoding environments (e.g. Codepen/Colab) where you're using <script type=importmap> instead of a bundler.
jQuery is the one that’s outlasted them all, but yes - there were absolutely other frameworks in use. It also discounts the detour into the Backbone era before Angular and React took off.
I've interacted with devs who still bring in prototype.js
Within the last year even
MooTools springs to mind
MooTools, Prototype, Dojo, YUI... what an era.
It sometimes feels like part of what we deal with now in the JS world is due to half the crowd not knowing what came before.
(Edit: to be clear, I think the other half is responsible for the advancements... so it probably evens out)
One of the systems I currently work on has all five of those. Which one was in use was developer's choice for pretty much any given page or feature.
Is that 1MB figure a joke? Just in case it's not: jQuery never was that big, not even uncompressed.
And one could download stripped-down versions with just the features needed. Important especially with jQuery UI since it had quite some heavy components.
> You'll never get NPM apologists to acknowledge this.
How should NPM prevent archaic dependencies, or the "even more bizarre" (author's words) problem of developers calling polyfills directly instead of the function that the polyfill fills?
The parent is not criticising NPM the tool/registry, but rather the ecosystem and culture.
I'm happy to criticize NPM the tool. The whole thing is designed as a second, crummier version control system that lives in disharmony with and on top of your base-level version control system (so it can subvert it). It's a terrible design.
There's basically at most one reasonable use for npm: as a glorified download manager, i.e. to quickly fetch a module by name (right before you check it in to version control with Git). This differs wildly, of course, from how it's actually used, which is as a drug that sweeps mountains of unaudited code under the rug so people can trick themselves and others into thinking that none of it's really there on the basis that it's not visible when anyone first clones the repo.
To answer the other commenter's question, "How should NPM prevent archaic dependencies": it shouldn't; it's okay for programmers to be responsible for their work.
How is npm a version control system?
npm is a fairly standard package manager, much like many others, pretty good even.
I don't know of anyone who says that package managers and source control solve the same problems. They both happen to use the word and concept of "version" but to mean different things. Yes, some projects vendor in their dependencies into their source control system, but they must either manually verify package version compatibility or use a package manager like npm to help them do it. And vendoring doesn't work for actual packages published to the package repo. If they vendored dependencies then every dependency would be duplicated always, defeating the very purpose of a package manager!
> npm is a fairly standard package manager, much like many others
Yes.
> They both happen to use the word and concept of "version" but to mean different things.
Right. I think I covered that adequately.
> And vendoring doesn't work for actual packages published to the package repo.
What?
> If they vendored dependencies then every dependency would be duplicated always, defeating the very purpose of a package manager!
Yes. Alternatively:
Please clearly articulate the purpose of a package manager (in the sense of the term when it's used to describe npm and others). See if you can work it out so that you can state it in the form of a testable hypothesis (i.e. ideally in quantitative terms like MBs/GBs of disk space used, or network transit, or time to fetch—or anything that you think accurately reflects what you consider to be the value proposition that npm fulfills and which we can use to evaluate it in scenarios where it would or would not be a good fit).
Man, this isn't hard.
The purpose of a package manager is to allow me to describe the packages and versions that my own package depends on, and to download compatible versions of those dependencies and their transitive dependencies in such a way that dependencies are shared and my runtime can use them.
npm does that, as does Cargo, Pub, Gems, pip, etc.
> Man, this isn't hard.
Well, you ducked the question—and what you did say isn't particularly illuminating to anyone who is already familiar with npm and cargo—so it's not at all clear at this point that you're right about that. (Your overall comment actually reads a lot like the sort of thing that I mentioned at the top of this thread: "non-specific appeals to the necessity of it all [...] vague arguments [...] they manage to work in a slight that's designed to paint you, implicitly or explicitly, as a junior".)
I'll ask again: what is the value proposition of npm-like package managers in quantitative terms? Your hypothesis should be falsifiable, if not testable.
If you're having difficulty, we might start here:
> in such a way that dependencies are shared
Why?
"Your hypothesis should be falsifiable, if not testable"
Seriously? There are not strange requirements, and you're ducking the issue by trying to force a pseudo-scientific dialog on this. There are language semantics in play, among many other considerations that I'm sure you're well aware of.
But I don't need to convince you, you're already convinced. Have fun.
> Seriously?
Yes, seriously.
> There are not strange requirements
Huh?
> you're ducking the issue
I am? How? (What is the issue I'm ducking?)
> There are language semantics in play, among many other considerations that I'm sure you're well aware of.
What?
> I don't need to convince you, you're already convinced.
I don't get it. (What am I already convinced of?) You said this wasn't hard. So why does it seem that way?
I don't see the problem, to be honest. Everybody is using this guy's work for free and then he needs to deal with complaints that it's not good enough, because it supports ancient browsers in a way that's not to their liking.
You can always fork the projects with this guy's polyfills if you want, but you'll end up forking quite the collection of projects. Most of them are very minor and only end up polyfilling anyway, so you can probably get rid of the packages you don't want in an afternoon. Fork them and maintain them yourself if you're so inclined, don't complain that the free work he's doing for you isn't to your specifications.
It is infuriating. I love the power of the node ecosystem but the total lack of critical thinking around it drives me bonkers.
Marvin is doing wonderful work in the JS ecosystem around performance. It has been largely focused on tools and Node; however, he also had an interesting set of things to say on his website about how they optimized performance in Preact.
One thing I've noticed is the rampant duplication of polyfills and Babel helpers, to the point that I now have overrides set up via pnpm and I re-write many imports of polyfills to point at my own shims, which simply re-export existing functionality native to the language most of the time.
For smaller utility packages, I often simply clone the repo and copy things over that we need, or copy the src right out of the node_modules folder if possible, then I strip away all the superfluous imports (and often convert from commonjs to ESM if needed)
Saves so much headache; it's better for users, smaller builds, etc.
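A sketch of what that kind of setup can look like (the package being overridden and the shim path are just examples, and whether your package manager accepts a file: spec in overrides depends on the tool and version): a pnpm override points the polyfill package at a local shim, and the shim simply re-exports the native built-in.

```json
{
  "pnpm": {
    "overrides": {
      "object.assign": "file:./shims/object-assign"
    }
  }
}
```

```js
// shims/object-assign/index.js — the "shim" is nothing but the native function
module.exports = Object.assign;
```

A real drop-in replacement may need to mirror any extra exports of the original package; that's essentially what nolyfill, mentioned elsewhere in the thread, packages up for you.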
That sounds awesome. I've dabbled with something like that, but only for lodash (making sure all the different flavours get aliased to a single thing); I never went too far with all the other stuff.
You wouldn't happen to have an example of what you're doing laying around would you? I'd be genuinely curious to try stuff like that out.
Author here.
Thanks for the kind words! It's feedback like this that encourages me to keep writing about it.
I share your experiences regarding babel helpers and haven't found a good solution myself. Similar to you, I often patch unnecessary stuff out via patch-package, but that approach doesn't scale well.
I use path resolution; it tends to scale better (not great, but better) for Babel helpers because 'babel-runtime' can 100% be re-mapped to '@babel/runtime'. Same with core-js 2 -> 3, they just need path mapping.
Patching packages is definitely something I still have to do to strip polyfills and convert CJS to ESM if I can’t simply re-compile source
These polyfills aren't just large, they're also slow because they never use the native implementation. The good news is that you can replace them all with "overrides" in package.json. That's what nolyfill (https://github.com/SukkaW/nolyfill) does. Oh, and of course the README mentions ljharb.
I'd upvote you a 1000 times if I could. Too lazy to mail dang to ask if it's possible to pin it to the top.
> Polyfills that don’t polyfill
This is what's sometimes called a "ponyfill". The idea is to avoid messing with global scope (monkeypatching), which could be problematic if you have multiple polyfills for the same API or polyfills that don't perfectly match the native behavior.
This can be a good thing in some situations, but in general it's probably best to leave polyfill decisions to the bundler so you can decide which browsers you want to support. Or even produce multiple versions, a lightweight one for modern browsers and one with tons of polyfills that gets served to ancient ones.
Author here.
Good point. Agree that the ideal scenario would be that the end user (or the tools they use) have the final say in which polyfills to load. It's a bit of a bummer that they are shipped as part of npm packages without an easy way to get rid of them.
I wonder if our industry will move to publishing the original source files to npm in the long run. Only the last piece of the chain, the developer using these dependencies, knows what their target environments are. So the bundler could then downlevel or polyfill the code for the specified targets.
I've always been kind of surprised that original sources aren't part of the NPM culture. For a long time, I included "typescript:main" in my package.jsons and configured my tools to prefer that to "main" (and now "module").
I get not defining it yourself, especially if your polyfill is limited to the sliver of a feature you use, but why not check if the feature is there first?
Ok, maybe someone else monkeypatched it. But at least you’d end up using the native functionality if it was there.
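In other words, a sketch (with a hypothetical local fallback module standing in for the bundled implementation):

```js
// Prefer the native function when the runtime has it; only fall back to the
// bundled implementation on genuinely old engines.
var defineProperties =
  typeof Object.defineProperties === "function"
    ? Object.defineProperties
    : require("./define-properties-fallback"); // hypothetical fallback module

module.exports = defineProperties;
```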
I don't really care to comment about the practice itself, but the "Polyfills that don’t polyfill" section is missing the point: the function is called directly instead of patching the global object so that the global object is not polluted by a possibly non-standard implementation. Additionally, it does use Object.defineProperty if available - furthermore, it doesn't even call itself a polyfill in the first place. Whether it's needed in 2023 is a valid point, however.
I think it would be better to just expect the standardized functions to be present and then document that the project needs them (e.g. via peer dependencies), allowing users to install them themselves as needed.
That's a lot more work than your library just working in more places.
I don’t think it’s all that much more: basically every bundler I’ve used uses browserslist to include polyfills for the developer’s target audience.
But, also, I think this sort of easy path inflicts a huge cost on the ecosystem as a whole: writing to the standards and expecting your users to supply a compliant environment solves a lot of N*M problems in the dev process.
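For example, a browserslist entry in package.json (the queries here are chosen arbitrarily) is what tools like @babel/preset-env consult to decide which polyfills and syntax transforms the end application actually needs:

```json
{
  "browserslist": [
    "defaults",
    "not ie 11"
  ]
}
```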
Sometimes it can be a security vulnerability to call a polyfill instead of the now available default implementation. For example, this 2018 bug [0] in the Grammarly Chrome Extension had a much wider impact due to its reliance on a fetch polyfill that was able to make requests (via XHR) to origins that native fetch could not.
I suppose in that case you could argue the real bug is in the XHR API, but it only affected the extension because the extension was using a fetch polyfill that relied on it in functions that could be triggered by an external page.
That's a very good point. Didn't know about the grammarly incident. I could definitely see this happening again with the amount of polyfills in npm packages. Polyfills are usually frozen in time and not developed further after they are released.
For what it's worth, `eslint-plugin-react` has been around for a long time and seems to support running in very old versions of Node.js (back to v4[1] apparently! though I can't find anything documenting that for sure).
I was surprised to learn that Object.values is only supported in Node 7+, Object.fromEntries was added in v12, etc. So for this project maybe the polyfills are needed.
[1] https://github.com/jsx-eslint/eslint-plugin-react/pull/1038
There are two separate questions:
"Should libraries still be polyfilling to support ancient runtimes?" and "How should that polyfilling be implemented?"
Even if a library wants to maintain backwards compatibility, we can still argue this method of polyfilling (especially the phony polyfills) is damaging to the wider javascript ecosystem.
In an ideal world, the cost of polyfills for developers who don't need them should be zero.
For developers using bundlers, the bundler is expected to implement any required polyfills for the developer's targeted runtimes, and having the library ship with its own polyfills is counterproductive at best. However, I suspect these libraries wish to maintain compatibility for developers not using bundlers.
Maybe npm should be upgraded to support multiple variants of packages? That way these libraries could ship both polyfilled and non-polyfilled versions of their packages in parallel.
Yeah, engines are a moving target. I'm all for backwards compatibility, but I'm worried about promoting old node versions with known unpatched security issues. Given that eslint itself only supports node >= 12.22.0 it seems like it's time to get rid of the polyfills.
I wish we as an industry would find a better solution to adapt to this. It's a bit unfortunate that the polyfills are part of the library code itself, which makes it difficult to get rid of them once they're not needed anymore.
Genuinely curious, are there people out there using newer versions of this package with old / unsupported versions of Node (in production)?
Not really. Adoption of new Node versions is quite quick, given their short support periods.
It probably happens, but not really on purpose.
If the package-lock.json file gets deleted, or someone runs a global npm update, then npm will update packages while respecting semantic versioning.
It's possible an organisation forgot to include the package-lock.json file in their deployment image, so they get updated npm packages every time they redeploy. It's also possible a developer making minor changes to a legacy system triggers packages to be updated, perhaps without even noticing.
In the early days of the Node.js ecosystem, there was a trend which was all about 'tiny modules'; many developers published tiny 10-line or so modules and people in the community were promoting these tiny modules really hard. It went a bit out of control and a lot of higher level modules were including many of those tiny modules, then those modules were themselves included into other, even higher level modules, etc... The number of modules used by some of these higher level modules/tools/frameworks grew exponentially and we ended up with a lot of unnecessary dependencies.
Each tiny module did just a bit more than it should have done or included just one more dependency than was necessary, sometimes the scope of the module would grow over time and all this added up. Also, different sub-modules used different sub-sub-modules for the same functionality so this caused a lot of duplication in the higher level modules.
For my own open source project, I've always been very careful about which dependencies I use. I favor module authors who try to keep their number of dependencies to a minimum. A lot of times, it comes down to figuring out the correct scope of the module... Most low level libraries should not need to do their own logging; therefore, they should not need to include sub-modules to colorize the bash output; instead, they should just emit events and let higher level modules handle the logging. Anyway there are many cases like that where modules give themselves too much scope.
I think a lot of the sentiment behind this was that the module system and bundlers didn't really support tree shaking so bringing in a big library with a lot of utility functions brought in a ton of code you didn't need.
This series is great!
You should seriously think about consolidating them into a book. Something I notice other engineers struggle with is how to properly assess performance, read heap snapshots, or even understand how to read a flamegraph for stack tracing tools. It would be nice to point, or buy, them a resource showing this.
I'd definitely buy a copy.
Second that, I would definitely buy a book based on this series.
Thanks for the kind feedback! It's definitely something in the back of my mind. I feel like I need to collect a little more content to fill a whole book, but I'm enticed by the thought of writing one nonetheless.
Part 3 sent me down a debugging route of eslint performance in the GitLab project, and we were able to go from 25 minutes of linting time in CI down to 5. So, thanks for the inspiration!
This is a really nice post and series. I'm curious how you're doing the profiling and then generating the flame graphs e.g. in https://marvinh.dev/blog/speeding-up-javascript-ecosystem/. Is this Chrome's built-in devtools being used or something else?
Glad to hear you like it! Those flame graph screenshots are taken from https://www.speedscope.app/ .
Checkout https://github.com/SukkaW/nolyfill
I think the /vendor/ folder should make a comeback. This is how I’ve been doing all my side projects for a while now.
People really have no mercy upon themselves, to deal with the bloated crap of the [struggle-stack™](https://twitter.com/brianleroux/status/1643337745463644160)
I haven't got much experience with WASM, but is dependency hell something that WASM completely solves?
All of the cruft that you don't use will get optimised away by the compiler, right?
I'm not aware of any production ready WASM frameworks, but I'm ready for it.
1. WASM is non-JavaScript code, so it's unrelated to npm
2. Whatever dependency hell exists in the source language still exists at WASM compilation time
3. There will never be WASM frameworks because they're generally not the bottleneck.
The closest thing to a WASM framework was Cappuccino, which let you compose a whole application in a language close to Objective-C.
1. Not related to npm, but related to the web.
2. True, but compilers are generally better than transpilers.
3. Have you seen https://yew.rs/ ?
Are 73% of devs choosing to use typescript? Or is it just the default setting for popular frontend project setup scripts?
Will these unneeded polyfills etc. be removed by tree-shaking and code splitting in the production build via bundlers?
Are Bun and Deno solving this problem to some extent?
To me, node.js/bun/deno need a batteries-included stdlib like what Python provides.
They are unfortunately not removed, because the way they are used makes it difficult for bundlers to detect them.
Deno encourages you to publish the original sources, which can even be in TypeScript if you want. Users stay very close to the newest Deno release and there is barely anyone staying on old versions. This works because Deno takes semver very seriously, which in turn encourages folks to upgrade. It removes the need for polyfills and allows you to always use the latest JS features.
Disclaimer: I work at Deno