> For a small test page of 15KB, HTTP/3 takes an average of 443ms to load
443 milliseconds!! When typical user latency is sub 50 milliseconds, requiring 443 milliseconds to get a lightweight page displayed on the screen is terrible.
Users perceive 100 milliseconds as near-instant, and that ought to be the target. With 50 milliseconds of network latency, and 0-RTT support, that gives 50 milliseconds for server and client processing+rendering. Ought to be very doable.
The fact it has not been done really is a failure of software engineering as an industry - we always favour more layers of abstraction over perfecting the user experience.
> With 50 milliseconds of network latency, and 0-rtt support, that gives 50 milliseconds for server and client processing+rendering. Ought to be very do-able.
You're forgetting DNS lookup latency (could be several sequential lookups), and the potential protocol switch from HTTP to HTTPS. Having 0-RTT support doesn't do you any good if you don't know which server to send to.
Now compare the performance of HTTP/3 versus HTTP/1.1 over multiple years when the website isn't run by a corporation paying an engineer to babysit and maintain it. The HTTP/3 CA TLS HTTPS-only setup would start failing due to CA TLS problems eventually (in a year or two) while the HTTP/1.1 HTTP+HTTPS would remain accessible forever. This is the core problem with HTTP/3 for human people.
It is one that could be fixed by developers if only the correct flags were set during compilation of the HTTP/3 libs and during their linking to the relevant browsers. But no one seems to care about human use cases. It's corporate security uber alles.
TLS-only servers needing babysitting is mostly a failing of older projects like nginx which leave certificate management as an exercise for the user - newer projects like Caddy and Traefik have shown that for simple deployments it can be made almost completely transparent with automatic ACME provisioning. Caddy can even gracefully handle cases where a certificate is unexpectedly revoked, by provisioning a new one immediately. It really does Just Work™.
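For a sense of scale, a complete Caddyfile for a static site is only a few lines (a minimal sketch; example.com and the webroot path are placeholders, and it assumes Caddy v2's automatic HTTPS):

    # Caddyfile: Caddy obtains and renews the certificate for this site
    # automatically via ACME; there is no separate client or cron job.
    example.com {
        root * /var/www/html
        file_server
    }

Certificates are provisioned in the background when the server starts and renewed well before expiry, with nothing for you to schedule.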
I don't know if it handles this unexpected-revocation scenario, but as Apache has had support for automated Let's Encrypt for a while now (I finally upgraded last week, when I realized I was missing out on this feature), I'm surprised that nginx doesn't?
I believe they mean ACME provisioning and renewal isn't baked into nginx, i.e. you have to install certbot separately and run it daily from a cron job or systemd timer, whereas traefik and caddy handle this for you without additional software or configuration.
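Roughly, the nginx route looks like this (a sketch only; it assumes certbot and its nginx plugin are installed from your distro, and example.com stands in for your domain):

    # one-time issuance; the --nginx plugin also edits the server config
    sudo certbot --nginx -d example.com

    # renewal is a separately scheduled job, e.g. a daily entry in root's
    # crontab (distro packages often ship an equivalent systemd timer)
    0 3 * * * certbot renew --quiet

It works fine, but the certificate lifecycle lives outside the web server, which is the difference being pointed out.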
I understood that. As Apache does have a way for me to internally automate all of that, without using an external tool such as certbot, I am surprised that nginx doesn't as well. Either way, you should add Apache to your list of servers, along with caddy and traefik.
It does, just not out of the box
I don't know if I agree that this counts as "it does": it isn't really just "not out of the box", it's fully external. What I continue to be surprised by is that Apache DOES come with this functionality, out of the box (via a first-party module that ships with Apache and is configured inside of Apache; and btw, everything in Apache is built out of modules, including protocols), while nginx doesn't. I feel like the decision to use Apache has been vindicated throughout the ages, time and time again ;P. (There was a very short period of time when nginx was "faster", but not only did the world move on to ubiquitous CDNs anyway, Apache quickly responded with mpm_event, which eventually became the default.)
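For anyone curious, the mod_md configuration in question is on the order of a few directives (a sketch for a reasonably recent Apache 2.4 with mod_md; the domain and module paths are placeholders and vary by distro):

    # enable Apache's first-party ACME module (Debian/Ubuntu: a2enmod md ssl)
    LoadModule md_module modules/mod_md.so
    LoadModule ssl_module modules/mod_ssl.so

    MDomain example.com
    MDCertificateAgreement accepted

    <VirtualHost *:443>
        ServerName example.com
        ServerAdmin webmaster@example.com
        SSLEngine on
        # mod_md supplies the certificate, so no SSLCertificateFile is needed
    </VirtualHost>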
Newer indeed. Caddy isn't even 10 years old itself yet. I'm not sure how you can be certain its all-in-one approach is going to handle long timespans gracefully. But for short-term use cases of a few years, sure.
This.
The number of times certificates have run out and broken things is astounding.
So working around all certificate tests is the first thing I do when integrating with an external HTTPS site.
Good job, people that get paid by HTTPS.
> The HTTP/3 CA TLS HTTPS-only setup would start failing due to CA TLS problems eventually (in a year or two)
Why? And why wouldn't HTTPS with HTTP/1.1 fail the same way?
> The HTTP/3 CA TLS HTTPS-only setup would start failing due to CA TLS problems eventually (in a year or two) while the HTTP/1.1 HTTP+HTTPS would remain accessible forever
This is nonsense: if you set up Let’s Encrypt or the equivalent, which is increasingly built in, it’ll keep working for HTTP 1-3. If you don’t, you need to rotate certificates for all of them.
If your claim is that the HTTP/1.1 option is better because you can fall back to insecure HTTP after the certificate expires, that’s conflating two separate things and saying that you don’t care about your users’ privacy and security, which might be true but is a more compelling argument against using that service than against HTTP/3.
I don't think you quite understood my point.
Like I said, HTTP+HTTPS in HTTP/3 would be perfect. Everyone could be happy, websites could have indefinite unmaintained lifetimes, and anyone could visit without getting approval from a third-party corporation every 3 months. But nope, the big tech companies, and even Mozilla, refuse to implement an HTTP/3 that supports HTTP+HTTPS. It's flabbergasting, unless you view it through the lens of the needs of commerce only. That is my point.
Re: "This is nonsense": so when I set up Let's Encrypt using an ACME v1 client, it should still work today? Oh wait, they dropped ACME v1 support entirely and only support ACME v2 now (and will do the same again with ACME v3). And that's ignoring that whatever ACME client you pick is going to have breaking changes too over time (even if it's acme.sh!). Then there's the 2018 LE root cert expiration... then the 2024 one, then there's TLS version sunsetting... It is not so nonsensical down in the weeds of actually hosting a 90-day-cert CA TLS website over more than a few years. And just to note: I love LE. I use LE for some websites. I think it's great. I love HTTPS too. I just think HTTPS-only instead of HTTP+HTTPS is very damaging socially.
Disagreement is not failure to understand. If you’re unable to rotate certificates on a decadal scale you aren’t in a position to run an internet-exposed service because anything will need at least some security updates over that kind of timeframe.
It seems like your premise here is that the web is only for "professionals". It is not. It is for everyone. HTTP/3 certainly agrees with you though.
I wouldn’t set the bar for “professional” so low, but it’s really just incongruous to say that it’s somehow easy and democratic to take care of running a server, registering a domain, setting up DNS, and creating some kind of web application, yet that following high-quality documentation to add an extra couple of lines of configuration magically transforms it into a major barrier. In most cases, you don’t even need to do it yourself, since most hosting companies have some kind of recipe which covers most of the work.
Your assumption that all that is needed does explain some things. Almost none of that is required and indeed, most of it would make things worse.
Running a webserver is as simple as "apt install nginx", forwarding ports 80/443 on your router and saving an index.html to the webroot directory. HTML and files in directories live forever. And there's no need for a domain, DNS, or especially an "application". Just drag and drop myphoto.jpg to ~/www/ in your GUI file manager and you're ready to send the link, http://my.ip.is.here/myphoto.jpg, to friends. And it'll last forever with no security worries; nginx remote exploits are once a decade, if that. Sure, you might have to re-share the link if your IP changes, but that's not a problem. Sure, your site will be inaccessible sometimes if your computer is off, but you're not sharing files with friends during that time, so it's fine. The requirements of business use cases just don't apply. It's not "serious".
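Concretely, that minimal setup is something like this (a sketch assuming a Debian/Ubuntu box, where the packaged nginx serves /var/www/html by default; point the root directive at ~/www/ if you prefer that layout):

    sudo apt install nginx
    sudo cp ~/Pictures/myphoto.jpg /var/www/html/
    # forward port 80 on the router, then share the link:
    #   http://<your.public.ip>/myphoto.jpg

No certificates, no renewal, nothing to babysit; the trade-off is that the link is plain HTTP.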
So you never need to patch that server? Your nginx, OpenSSL, etc. never need updates?
Seriously, this is just embarrassing. If it’s too hard for you to add a couple lines of config to enable Let’s Encrypt (apt will get you Apache w/mod_md or Caddy, too), it’s also too hard to run the rest of the server - and neither are that hard.
(2020)
I'm out of the loop; does anyone know if performance has improved since 2020?
[flagged]
what's the report about?
Click the link and find out? :)
I prefer TCP for web traffic rather than UDP.
Me too, HTTP/1.1 for life.
The performance of these binary protocols does not scale on most server implementations.
The protocol is not the bottleneck; the parallelism and memory latency are.
QUIC/UDP and BBR is mostly a way for Google to grab more than its fair share of bandwidth during congestion, and will lead to a tragedy of the commons where people using less pushy congestion control protocols like CUBIC will be edged away.
Care to elaborate?
Many providers limit or block UDP because of the history of DDoS attacks.