JPEG-XL vs. AVIF and Others: 27 Images Compared
Some will blame Google for the demise of JPEG XL, but I think that's a red herring. JPEG XL includes Google technologies such as Pik and Guetzli/Brunsli. Some of the JXL developers are Google employees. Google doesn't have a profit motive to promote one format over another. Their upper management most likely isn't even aware that JPEG XL exists. It also doesn't explain why Mozilla and Apple are favoring AVIF too.
Nonetheless, there's no question that JPEG XL was sabotaged. AVIF was enabled by default in Chrome the day it was implemented, while JXL had to go through an experimental period. At some point, Mozilla developers were told to stop working on improving their JXL implementation, and they couldn't even merge already-developed patches.
Google's justifications for the removal were baffling. They said there was "not enough interest" despite all the hype and all the major companies requesting JXL support. They claimed JXL doesn't bring "sufficient benefits", despite its numerous advantages, such as lossless JPEG recompression. And they published their flawed benchmarks weeks after they had already made the decision to remove JPEG XL.
BTW, the removal happened immediately after Adobe announced JPEG XL support in one of their products. Almost as if they realized JXL might be gaining too much momentum and it was time to pull the plug.
So who's behind this if not Google? To me, the most likely suspect is AOM, the Alliance for Open Media. The people involved in AOM are also the ones who make the decisions in the browsers. What are their motives? I don't know, pettiness? Maybe they just want their format to win.
 - https://phabricator.services.mozilla.com/D119700#3977128
 - https://bugs.chromium.org/p/chromium/issues/detail?id=117805...
 - https://cloudinary.com/blog/contemplating-codec-comparisons
Some people try to spin this as Google vs. open standards, while it's mostly a case of some Google devs vs. other Google devs:
If you look at https://github.com/libjxl/libjxl/graphs/contributors, ~everybody except Jon Sneyers is a Google employee (I think the AVIF ratio is similar, but I didn't check).
So the question isn't about which format is better, but who won internal politics and it's clearly the AVIF folks - for reasons which aren't really possible to tell from the outside.
Sure is comforting to know that office politics ruins everything, including technical innovation.
Apple used to pick and champion technical innovation, and worked around politics, ideology, or cost (patent) issues. Unfortunately that is no longer the case.
That has never been true. Steve Jobs split the company into warring factions with the Lisa and when that failed Jobs took over Wozniak's Macintosh project and restarted the civil war all over again.
Apple LOVES proprietary standards like Lightning despite harming UX because of their "fuck you pay me" attitude towards their customers.
I advise against glorifying a rich asshole who died from cancer because they thought they could cure it with fruit juice.
This is utter crock; they always heavily preferred closed, proprietary formats they themselves held patents for.
Is there a published, freely available specification for JPEG-XL yet? The W3C publishes JPEG and PNG specifications, and AVIF can be found on the AOM website. One big issue with JPEG-XL is that the specification is closed. The old drafts I have found were extremely poor. I have heard that the actual spec is apparently better, but I cannot find a copy of it myself.
I mean, I agree with that, sure, but I can't imagine it changing any time soon. Ultimately, the JPEG-XL team made the choice to standardize the format within the JPEG WG of ISO, and for better or worse, this has had a pretty substantial effect on adoption. Maybe next time make better choices.
Mozilla and Google dropping JPEG-XL support because "too much complexity". Software engineers using "too much complexity" as an excuse for something they don't want to do is such a classic move.
Browsers are already absurdly complex, and are constantly receiving HTML, CSS and JS features that significantly increase complexity. Are we really to believe that adding support for a certain image format, which is basically as modular as it gets (decode the file into a buffer), is too much complexity?
Obviously, this is a case of them showing favoritism toward codecs based on the IP owners of those codecs.
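To make the "decode the file into a buffer" modularity argument concrete, here is a sketch using a toy container format of my own invention (not any browser's real API): compressed bytes go in, a pixel buffer comes out, and the rest of the browser never sees anything else.

```python
import struct
from dataclasses import dataclass

@dataclass
class DecodedImage:
    width: int
    height: int
    rgba: bytes  # width * height * 4 bytes

def decode(data: bytes) -> DecodedImage:
    # Toy container: 4-byte magic, 2-byte width, 2-byte height, raw RGBA.
    # A real decoder (libjxl, libavif, ...) sits behind the same narrow
    # shape of interface; only the output buffer crosses the boundary.
    magic, w, h = struct.unpack(">4sHH", data[:8])
    assert magic == b"IMG0"
    return DecodedImage(w, h, data[8:8 + w * h * 4])

# A 2x1 red image in the toy format:
img = decode(b"IMG0" + struct.pack(">HH", 2, 1) + b"\xff\x00\x00\xff" * 2)
```

The sandboxing and fuzzing cost of a new codec is real, but it is contained behind that one function.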
The attempt to provide a second JPEG-XL implementation has stalled. Two independent implementations are the bar for a web standard. AVIF meets it, JPEG-XL does not.
There has been enough 3rd-party interest to create multiple AV1/AVIF implementations. JPEG-XL has its original team (doing great work), and then… commenters lamenting, but nobody else invested enough to actually write the code.
Browsers' goal is not just to have every benchmark-winning format. They also need to minimize attack surface, risk of bug-compatibility (which is why one and only implementation is not sufficient), and watch out for long-term costs of code size, maintenance, and technical debt. New codecs can be added at any time, but can never be removed.
AV1 got in, because video really needed an answer to the looming threat of patented expensive H.265. There was a risk of repeating the pains of H.264 support, and this time without Cisco's licensing loophole. Then AVIF got in, because AV1 was already in. It's that simple.
>The attempt to provide a second JPEG-XL implementation has stalled
What do you mean by "the attempt"? There are at least three independent decoder implementations:
 - jxl-oxide (Rust): https://github.com/tirr-c/jxl-oxide
 - J40 (C): https://github.com/lifthrasiir/j40
 - jxlatte (Java): https://github.com/thebombzen/jxlatte
None of them is feature complete, but it's because each of them is a hobby, single-person project. You're making it sound like some billion-dollar company tried to implement JPEG XL and failed, because it's too complex or something.
However, it should be noted that those independent implementations, despite being hobby projects, helped iron out some minor issues in the standard.
>Then AVIF got in, because AV1 was already in.
AVIF uses its own container format that had to be implemented. Implementing AV1 does not give you AVIF support for free.
The point of requiring a second implementation is to prove that the standard is stable and consistent enough that at least two of them can agree on how it works. It doesn't matter who is building the second implementation, it just needs to exist in a feature complete and maintained state. If hobbyist implementations have been uncovering issues in the standard, that just further justifies the "two independent implementations" rule. Bug-for-bug compatibility problems are browser vendors' worst nightmare.
I was thinking about j40.
It's cool that there's a Rust implementation in progress. If it works out, it will make it much easier for me to support JXL. I've been waiting to RIIR the j40 instead.
AVIF (HEIF) container format is MP4, which browsers already parse for video. There is a bit of extra work to extract image-specific data, but it's minor compared to AV1 implementation. AVIF even contains a completely redundant copy of key AV1 metadata (av1C) to spare implementations from parsing the AV1 payload.
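The MP4/ISOBMFF structure referenced above is just a sequence of length-prefixed boxes, which is why the incremental parsing work is small. A minimal sketch of walking top-level boxes (over synthetic bytes, and ignoring 64-bit box sizes and the nested 'meta' parsing a real AVIF reader needs):

```python
import struct

def iter_boxes(data):
    """Yield (box_type, payload) for each top-level ISOBMFF box."""
    pos = 0
    while pos + 8 <= len(data):
        size, = struct.unpack(">I", data[pos:pos + 4])   # 32-bit big-endian size
        box_type = data[pos + 4:pos + 8].decode("ascii")  # 4-char type code
        if size < 8:  # size == 1 would mean a 64-bit size; skipped in this sketch
            break
        yield box_type, data[pos + 8:pos + size]
        pos += size

# Synthetic two-box stream shaped like the start of an AVIF file:
ftyp = struct.pack(">I", 16) + b"ftyp" + b"avifmif1"
meta = struct.pack(">I", 12) + b"meta" + b"\x00" * 4
boxes = list(iter_boxes(ftyp + meta))
```

Browsers already have battle-tested versions of exactly this loop in their MP4 demuxers, which is the re-use argument in a nutshell.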
RIIR = rewrite it in Rust
> None of them is feature complete, but it's because each of them is a hobby, single-person project
Which is why they're not considered implementations to meet this bar. You need multiple FULL implementations to ensure that it is actually FULLY viable, otherwise you risk taking on a potentially broken format that you can't remove.
> You're making it sound like some billion-dollar company tried to implement JPEG XL and failed, because it's too complex or something.
I doubt any big company actually tried (or tried very hard). Cost of implementation becomes a kind of implicit test here, because companies like things cheap.
> AVIF uses its own container format that had to be implemented. Implementing AV1 does not give you AVIF support for free.
It gives AVIF support for cheap due to code/algorithm re-use. Companies like things cheap, so that's a big selling point.
To be fair, you don't need a FULL implementation. You just need some safe subset that everyone agreed on. For example, even the old JPEG was never fully implemented in browsers — arithmetic coding is not supported.
> I doubt any big company actually tried (or tried very hard). Cost of implementation becomes a kind of implicit test here, because companies like things cheap.
So if an individual unpaid volunteer spends their time doing a 2nd implementation, <bigco> might consider it for their browser. But they won't pay for a 2nd implementation with their own staff, even if they own a humongous chunk of the browser market (which comes with responsibilities) and even if it is a better algorithm.
Sounds about right.
It's all about incentives. If there's enough incentive for them to do it, they will. "Better" doesn't always have much to do with it.
We make incremental gains as a human race, not necessarily optimal ones.
The number of implementations was not considered a criterion for Chrome, or at least they didn't mention it. For WebP, they didn't seem to care much about it. I agree that it's a good criterion, as it is a good way to ensure that a spec is solid and interoperability actually works. So I am happy that there are currently three alternative jxl implementations.
The main thing you want to have though, is a standard that cannot be unilaterally changed in arbitrary ways by a single party. However flawed it may be, ISO does have a long history of standardization, and it has procedures in place to ensure that changes are scrutinized and can only be applied when approved by the national standardization bodies.
Yeah, WebP was forced on other browsers, and never met the bar for a proper standard.
This is just WebSQL story all over again, right? “Go redo it from scratch” is the most ridiculous excuse in the book.
Except they made the right choice with WebSQL. Exposing and securing the entire API surface of sqlite -- from SQL syntax to binary compatibility to behavior-altering pragmas -- would have been a compatibility and security nightmare that would make xkcd 1172 look tame by comparison. Exposing a block device instead and layering user-provided sqlite on top of it via WASM is infinitely better than rushing to lock ourselves into a roller coaster that runs straight through the worst compatibility minefield the web has ever seen.
 - https://news.ycombinator.com/item?id=35165640 "Firefox 111.0 enabled Origin private file system access"
 - https://news.ycombinator.com/item?id=33374402 "SQLite in the browser with WASM/JS"
Not to mention the complexity is mostly offloaded to the maintainers of the library itself, not the browser that calls it.
I wish Mozilla had gone ahead and enabled it by default in Firefox, as I'd hoped it would. Still salty it didn't happen (yet, at least). It seems like such a clear choice to developers that there's actually people being vocal about it (as evidenced by the sparse but constant stream of e.g. posts on HN about browser image codecs, which is a topic _no one_ finds sexy otherwise), but browser vendors still won't budge. It's weird to see them say "sure it's better, but it's not better enough than the alternatives". There's a large enough discrepancy in my own perception of the merits of jxl and that of the people making choices for browsers that I genuinely wish I understood them, because it feels like I'm missing something.
I still convert my pictures to jxl locally, and the space savings are enough of a reason to go through the trouble of converting - it just seems like a no-brainer that it'd be even more valuable on the web, as bandwidth savings are generally even more important than disk storage, and then there's still the plethora of other features on top of that.
The browser developers de facto become the library maintainers, since they're now on the hook for any security issue within that library forever, even if the "real" maintainers go off to play with something else.
Indeed, once you officially support a format and it becomes a standard, you have to support it forever. Sunsetting a feature is not that easy on the web. So if for some reason the maintainer of a library used by the browser drops the ball, you are on the hook for everything. And decoding binary data coming from unknown, potentially malicious sources is full of potential security traps.
Considering <picture> and image() exist and should be seen as best practice, adding and removing image formats/codecs should have a few fewer problems than some other libraries, no?
They provide some resilience, assuming their fallback usage includes multiple formats. Which isn’t a safe assumption, but even granting it, there are other problems with removing a codec. The most obvious being abruptly increasing the bandwidth requirements for all users of the removed codec.
Even if that bandwidth is “free”, it can be significant enough to prevent access by end users with slow/spotty connections. Depending on the use case, that can have the consequence of also basically obsoleting the site serving those users. Just as some innovations enable whole new use cases, reverting the innovation can have the opposite effect.
>Obviously, this is a case of them showing favoritism toward codecs based on the IP owners of those codecs.
I am sorry, it is hard for me to not take a jab at this, considering their codec is patent-free so there should be zero IP owners. /s
Anyway, JPEG XL is a miniature project in physical terms: a compressed wasm binary is around 175 kB. I don't think complexity is an issue there.
This is a battle for the future default image codec: an open standard vs. one that's nearly proprietary, controlled by a few corporations - https://en.wikipedia.org/wiki/Alliance_for_Open_Media : > The governing members of the Alliance for Open Media are Amazon, Apple, ARM, Cisco, Facebook, Google, Huawei, Intel, Microsoft, Mozilla, Netflix, Nvidia, Samsung Electronics and Tencent.
For example, with defensive license termination: they can sue you, but if you try to sue them, you have to rip AVIF out of all your products: https://aomedia.org/license/patent-license/
>Defensive Termination. If any Licensee, its Affiliates, or its agents initiates patent litigation or files, maintains, or voluntarily participates in a lawsuit against another entity or any person asserting that any Implementation infringes Necessary Claims, any patent licenses granted under this License directly to the Licensee are immediately terminated as of the date of the initiation of action unless 1) that suit was in response to a corresponding suit regarding an Implementation first brought against an initiating entity, or 2) that suit was brought to enforce the terms of this License (including intervention in a third-party action by a Licensee).
Defensive license termination is actually a good thing. It is how you can keep a codec royalty-free. Without defensive termination, it is way too easy in the current patent system for patent trolls to make bogus claims and bully companies into paying them royalties. JPEG XL also has defensive termination clauses, and so do the Apache 2.0 and GPL v3 FOSS licenses.
AOM has noble goals: royalty-free codecs for the web. JXL is also a royalty-free codec that is very well suited for the web (as well as many other use cases for still images). There is no reason why we can't just have both supported by browsers.
The Chromium dev who rejected JXL works on AOM code. The manager who commented on it also participated in some articles about the benefits of AV1.
JPEG XL could be twice as efficient as AVIF everywhere, and every blog on the internet could benchmark it, and I suspect it still wouldn't make a difference.
If it was actually 2x more efficient then that would be a very strong argument that I believe would prevail. But I'm kinda surprised so many people are so thirsty for a 10% improvement for natural images at qualities above what most websites serve. Which might even be reduced if AV1 encoder folks ever start caring about that quality range, given that it's way above what's relevant for video...
Give it another couple years and maybe we'll have another format 5-10% better than JXL, at which point I hope everyone starts getting angry that format isn't included in browsers... (actually VVC intra should be at that performance level today, not that it has any chance of becoming the basis of a web image format...)
Its not just about bitrate, as avif has some issues:
- slow power hungry encoding
- slow decoding (though this has gotten better very quickly)
- terrible lossless performance
- no lossless path from jpeg like jxl.
- no support for "exotic" formats like > 12 bpc or many extra channels.
Saying avif should be the only format is like saying everyone should deprecate png and just use jpeg. It doesn't cover all use cases, though frankly I am getting sick of slow pngs.
> people are so thirsty for a 10% improvement for natural images at qualities above what most websites serve
Because, just like JPEG today, it won't be just the web that uses it. Whichever format wins the support of the web will become THE universal image format for the foreseeable future.
> Give it another couple years and maybe we'll have another format 5-10% better than JXL, at which point I hope everyone starts getting angry that format isn't included in browsers...
That's the thing. Once a format has become THE standard it won't be replaced for a very long time. Might as well get the best bang for buck and pick the best standard that's available now.
>actually VVC intra should be at that performance level today,
Yes. Considering HEIC surprisingly did outperform AV1 and JPEG-XL in most of the benchmarks, you can expect VVC intra to be even better.
>Give it another couple years and maybe we'll have another format 5-10% better than JXL
Well yes, that is on the assumption everyone settles on AVIF, because that 10% is a comparison against AV1. That wasn't true until Apple made the move. And AVIF, or the AV1 codec in general, likes to remove details. So I would argue it isn't even a great image codec in the first place until that is fixed (which AOM has no intention of improving).
The good thing (or bad thing) about JPEG XL is that they have now ported some of its improvements to existing JPEG via JPEGLI, which gives a ~10-15% improvement depending on image type without breaking compatibility. This improvement to JPEG alone means the next-generation replacement codec needs to be much better for the switch to be meaningful.
> JPEG XL could be twice as efficient as AVIF everywhere…
This isn't true if you're talking about encoding size efficiency. See the "Quality" chart at https://storage.googleapis.com/avif-comparison/index.html
Even JPEG XL cheerleader Cloudinary says, "JPEG XL can obtain 10 to 15% better compression than AVIF" overall, which isn't a sufficient incremental benefit. https://cloudinary.com/blog/the-case-for-jpeg-xl
I think they were trying to say that even if it were twice as efficient and everyone benchmarked it as twice as efficient, it still wouldn't make a difference because the Google devs are biased towards AVIF. It wasn't arguing that JXL is twice as efficient. It was just arguing that even if it were twice as efficient, it wouldn't matter.
If you look at the plots in the 2nd link you are referring to, they show the gains of AVIF over WebP are similar, so by your criteria AVIF "isn't a sufficient incremental benefit" either.
Not to mention JPEG XL can do lossless and progressive on top, and that avifenc is much slower than cjxl.
I recently tried out HEIF and AVIF to try to save high-resolution panorama images, and discovered that AVIF currently doesn't support images > 8K, despite storing the resolution in 32-bit ints, which is a bit disappointing in 2023. (Limits inherited from the fact it's basically a video codec.)
Also annoying was the fact I actually had to debug libavif to work out what the problem was, as `avifEncoderAddImage()` would just return a generic error that wasn't helpful when the dimensions were too big, but `avifRGBImageAllocatePixels()` and `avifImageRGBToYUV()` didn't complain about the dimensions beforehand.
>Limits inherited from the fact it's basically a video codec
It's also disappointing for a video codec released in 2018. For context, in 2019 Sony released a 16K screen. Granted, that was a home cinema screen, and 16K displays will be a niche product for the foreseeable future, but it's also not something unimaginably large.
I doubt this will ever make sense. An 8k screen that is full field of view is already big enough that the pixels aren't visible.
An 8k screen has about 33 million pixels. The first google result claims that the human eye contains about 91 million rods (and 4.5 million cones). So just from a sensor-pixel vs image-pixel standpoint, there are gains beyond 8k. Off the top of my head, going up to about 32k is meaningful if you account for the fact that what you're really catering to is central vision: the 2-5° of visual field where most of your cones are packed together, giving you much higher resolution for the point you're looking at.
And of course that's assuming that the viewer is static. If you assume that viewers can move to a point where the screen is bigger than their field of view (say a large display in a museum) or that the viewer can zoom the video (just as we routinely zoom images today) there are even fewer limits to reasonable resolutions.
It's not about the number of rods and cones, the question is the optical acuity of the eye. Someone with 20/20 vision has an acuity of about 1 minute of arc (1/60th of a degree). At living room distances a 4K 65" TV has pixels so small someone with perfect vision can't make them out. An 8K resolution is a waste. For 8K to be actually visible you're talking about TVs that take up the whole wall of a living room.
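The arithmetic behind that claim is easy to check. A quick sketch, assuming a 3 m viewing distance (my assumption) for a 65" 16:9 4K panel:

```python
import math

# Angular size of one pixel on a 65-inch 4K (3840x2160) screen
# viewed from 3 m, expressed in arcminutes.
diag_m = 65 * 0.0254                         # 65 inches in metres
width_m = diag_m * 16 / math.hypot(16, 9)    # horizontal extent of a 16:9 panel
pixel_m = width_m / 3840                     # physical pixel pitch
arcmin = math.degrees(math.atan2(pixel_m, 3.0)) * 60
print(round(arcmin, 2))  # ≈ 0.43, below the ~1 arcminute 20/20 threshold
```

So at that distance a 4K pixel already subtends less than half the nominal 20/20 acuity limit, consistent with the comment above.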
There are other types of acuity, such as vernier acuity, that are much more accurate.
If you had a "human sized" display (let's say 2m x 2m), that you could stand next to, and it showed footage of a real size analog caliper that you had to read, I wouldn't be surprised if you needed something like 128K video or more.
But I agree that for regular video content, 8K is good enough.
> there are gains beyond 8k.
Sure if you had pinhole irises to look through instead of blobs of organic tissue.
It's still a perfectly valid bitstream and file if you don't conform to the defined profiles and encode resolutions up to AV1's maximum 65536x65536 (or even larger with HEIF grids), as long as you don't falsely claim you're conforming to the defined profiles/levels. Then there's simply no guarantee that any specific decoder can decode it. (But technically that's the same with merely conforming to the advanced profile, or else there wouldn't be two profiles...)
I don't know about the libavif API, but certainly libavif's avifenc will happily create large files that don't conform to the AVIF advanced profile.
Yeah, the structs within libavif store width and height as uint32_t (which is better than libjpeg and libpng, which use uint16_t; but both of those old-format libs seem happy to save dimensions over 20,000 in my testing, as does libheif, and all the other libs produce nice, helpful error strings when the size is too large). So the format itself is fine; libavif just currently seems incapable of encoding images larger than film "8k".
I couldn't even get it to save at the size of 8192 x 5464, which is the R5's full resolution.
Is that a limitation of the format itself, or just a limitation of libavif?
The format stores both width and height as uint32_t (even in the structs within libavif!), so it's not the format itself, that's fine.
I couldn't even get it to save 8192 x 5464 files, which is the native resolution of the R5 camera; that was too wide.
It seems to be a limitation with the libavif API or maybe the codec.
Other parts of the format could have stricter limitations than the header, so these fields being 32 bits doesn't necessarily mean it's not a format limitation. Though I agree it's probably an implementation issue.
Is the 8k limit both horizontal and vertical?
Most of my panoramas are horizontal (15k x 4k), but a few are vertical.
I think so: I couldn't even save full-res 8192 x 5464 examples from the R5 with it. It wouldn't even do 8000; it seems to be limited to the "film" 8k resolution, which is a bit less.
How is 32 bits not enough?
It's more than enough per dimension (width and height); that's the point. The file format on disk is fine; there seem to be other limitations when adding images, which currently error out in libavif.
Maybe not enough memory? Although 8192 x 8192 x 4 (RGBA) is 256 MB, which is quite reasonable.
Unclear why Google would straight up reject an open standard. Considering their browser market share, it feels like they can just kill a standard.
Mozilla's statement gives a good summary of takes from both ends and why they decided their position was neutral: https://github.com/mozilla/standards-positions/issues/522#is...
Unfortunately this basically means JPEG-XL is dead.
dead for the web. I still store various images in JPEG-XL, have support for it in my image-viewer of choice, can use it in Python's pillow (with a decoder plugin) and lots of other things I use to modify images.
I still have to occasionally convert some images back to jpeg, but the savings from storing all of them in jpeg-xl instead is worth it to me (and avif support is spotty in the same areas, it is only really ahead in browsers and photoshop)
X-JPEG-XL, bereft of pixels.
> Overall, we don't see JPEG-XL performing enough better than its closest competitors (like AVIF) to justify addition on that basis alone
The progressive enhancement feature alone could be quite amazing, and its competitors can't offer that: no more thumbnail generation! Do they realize how many security holes everyone's use of "ImageTragick" generates, and how much caching infrastructure is devoted to images?
also, how does he fail to see this as a potential competitive advantage over browser competitors?
> Do they realize how many security holes everyone’s use of “ImageTragick” generates
Can you expound on this?
No fewer than 630 vulnerabilities, and almost everyone uses this library, although VIPS is gaining a tiny market share.
Everybody uses ImageMagick because of the number of formats it supports. If you remove PSD/PS/PDF/TIFF support, the number of vulnerabilities is halved.
Still not great, but this gives some context for understanding the number.
Note that AVIF is badly broken in Gimp. Save an image as AVIF, then load it again. Decompose the Luma and Chroma. You will see that the chroma is on a perfect 2x2 pixel grid, as if it was upscaled using nearest neighbor upscaling.
This does not happen for JPEG.
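What's described here is consistent with 4:2:0 chroma subsampling being undone with nearest-neighbour upscaling. A small numpy sketch of the effect (simulating the subsample/upsample round trip, not Gimp's actual code path):

```python
import numpy as np

# Simulate 4:2:0 chroma handling: store the chroma plane at half resolution,
# then upsample with nearest-neighbour. Every value lands on a 2x2 pixel grid.
rng = np.random.default_rng(0)
chroma = rng.integers(0, 256, size=(8, 8))

down = chroma[::2, ::2]                        # half-res chroma (sampling for brevity;
                                               # a real encoder would average 2x2 blocks)
up = down.repeat(2, axis=0).repeat(2, axis=1)  # nearest-neighbour upscale

# Each 2x2 block in the upscaled plane is constant, i.e. the blocky
# grid you see when decomposing the chroma in Gimp:
blocks = up.reshape(4, 2, 4, 2)
assert (blocks == blocks[:, :1, :, :1]).all()
```

A decoder that used bilinear (or better) chroma upsampling would not show this perfect 2x2 grid, which suggests the Gimp plugin is taking the cheap path.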
I did a small amount of eval work and was not impressed with AVIF when it came to compressing photos I took with my DSLR that I wanted to present as a "photo I took with my DSLR", where that is the core of the content. That's a different scenario than a splash image for a landing page, but I was really quite disappointed.
Worth pointing out again: according to Chrome stats from Google, ~80 to 85% of images served are above 1.0 BPP (bits per pixel). See Figure 1.
>The number of transferred bytes is important because it affects the latency and hence the user experience. Using this estimate, we calculated that between June 1st and 28th, 2019, the average BPP (weighted by the number of bytes transmitted with that BPP) was 2.97, 2.24 and 2.03 for small, medium and large images, respectively. We confirmed that there was no considerable difference between mobile and desktop platforms. This data indicates that image compression algorithms should target rich images with a relatively high BPP values, aiming to produce 1 BPP images almost free of artefacts instead of 0.5 BPP images with artefacts that are not too intrusive.
It has been quite well known by now, at least to those who are familiar with JPEG XL, that it doesn't do as well on non-photographic images. But it is still much better than WebP or JPEG.
I don't know if we could potentially further improve non-photographic images, but if you look at the chart, JPEG-XL already has a perfect score below 1.0 BPP, despite the AV1 encoder having many years of head start.
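For reference, the BPP figure in the quoted passage is just transferred bits divided by pixel count; a trivial helper (example file size is made up):

```python
# BPP (bits per pixel) of a compressed image: transferred bits / pixel count.
def bpp(file_size_bytes, width, height):
    return file_size_bytes * 8 / (width * height)

# e.g. a hypothetical 100 kB 1280x720 JPEG:
print(round(bpp(100_000, 1280, 720), 2))  # ≈ 0.87
```

So the quoted averages of 2-3 BPP correspond to files several times larger than that, i.e. much higher quality than the artifact-laden low-BPP range most codec benchmarks focus on.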
> The Chromium team shouldn't have the absolute authority to shoot down would-be standards like this
I’m not sure that “the Chromium team” is a real thing. Chromium is a cross-company project. People from different companies and different teams work on it together. Google has a Chrome team.
edit: I’m not familiar with the Google JPEG-XL situation, but generally speaking, it should be possible to ship a feature in Chromium without Google’s involvement because there are three non-Google Blink API owners (last time I checked).
Even if you manage to land features in Chromium without approval from Google employees, that will not help much. If Google doesn't want the feature they will disable it in Chrome, which is what matters for adoption.
As I mentioned in the article, Thorium is a Chromium-based browser that fully supports JPEG-XL. Here's the libjxl patch that can be ported to other Chromium-based browsers: https://github.com/Alex313031/thorium-libjxl
“And others” in the title does not include HEIC: “HEIC is being excluded because it sucks”.
I think that line was being a bit cheeky. It seems to be excluded for the same reason JPEG2000 is: it's patent encumbered.
JPEG2000's patents have expired; there's no reason to exclude it due to patents at this point.
That really annoyed me. The blog post tries to do a lot to appear scientific and objective, then just tosses something out without testing due to personal opinion.
They also mention it's not free. That would be OK: just say your test covers the image quality of free image formats.
But that statement just undermined the trust I had in the author up to that point.
You're probably right that I shouldn't have been so rash, I'm sorry for the omission & my despondent attitude toward HEIC. My enthusiasm is sullied by the presence of royalties & a non-existent web presence because of these royalties which I consider reasonable (where JXL's lack of web presence is questionable). In the future, I may do a similar test and include HEIC, but currently I believe it fares worse than AVIF almost across the board. I'd be willing to see whether or not I'm correct in saying this, though.
I should include that under "Takeaway," I do say that " ... this is a non-scientific test," & it shouldn't be considered objective despite the use of metrics. While I find SSIMULACRA2 correlates very well with what my eyes see with these images & most other images, 27 landscape/architecture images aren't enough to draw any objective conclusions here & I'd need to do more testing in the future to produce research-quality work.
If you said something like that and provided a link right there that explained why it was already known to be worse, that would have been understandable.
I appreciate what you did. Your article is far above a simple “I looked at five pictures and decided this one was the best“ kind of comparison that you sometimes see. You clearly tried to do a good and fair job with your contenders.
The results were interesting to see. This isn’t an area I spent time in so I really didn’t know what to expect going in other than thinking there must be something better than the venerable JPEG.
Given how widely deployed HEIC is I would be curious to know objectively how it fares even if there are practical reasons to not choose it.
Is there a patent, copyright, or royalty reason why JPEG-XL isn't championed for the web? It would be awful to have another format that can't be used in certain Linux distros or browsers.
Nope. It’s the best thing out there atm. It’s just big tech wanting to control the IP.
Or is it their desire to be slightly careful about which image formats they have to support in perpetuity? With AVIF you get to re-use a bunch of your AV1 decoding code too. And the reference JPEG-XL decoder is still on version 0.x, with no sign that they'll make a stable release any time soon.
Why everyone is all up in arms about the fact that the industry is finally standardizing on a good royalty-free image format is beyond me.
AVIF doesn't do everything, and a codec that's designed to encode video frames will always be limited compared to a pure image codec. It especially has trouble with lossless.
If we were picking a single royalty-free image codec for the 2018-2026 range, it should be JXL.
> AVIF doesn't do everything, and a codec that's designed to encode video frames will always be limited compared to a pure image codec.
Image encoding is a subset of video encoding, so any limits are mostly arbitrary, and in AV1's case image-only encoding was a consideration from the start. AVIF in particular supports color depths up to 12 bits, wide color gamuts, many color spaces and standards for color space signaling, etc., so I'm curious what aspect you consider limited.
> Image encoding is a subset of video encoding
This isn't true. There are a number of things that make sense for images but not videos: 1. Progressive rendering. 2. Big images (up to 2^30 by 2^30). 3. More channels (up to 4099) (this can be useful for transparency, depth, or tracking random other things for scientific purposes). 4. High bit depth (up to 32-bit) (this is mostly useful for sciency stuff). 5. Lossless encoding: basically no one wants lossless video (it's way too big), but lossless images often make sense.
Also, images use a wider range of resolutions. AV1 has a minimum 4x4 block size (compared to 2x2 for JPEG XL), which will hurt it for small images (e.g. icons). For example, the Y Combinator icon is 18 by 18 pixels, so AVIF would store a 20x20 image, which is 23% more pixels. (I vaguely recall AV1's minimum size actually being a 64x64 superblock, but I can't find a source for that.)
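The padding overhead claimed above is easy to check with a little arithmetic. A quick sketch (the 4x4 and 2x2 minimum block sizes and the 18x18 icon are taken from the comment, not verified against the specs):

```python
# Pixels actually coded when each dimension is rounded up to the
# codec's minimum block size. Block sizes here are the ones claimed
# in the comment above, not checked against the AV1/JXL specs.
import math

def padded_pixels(w: int, h: int, block: int) -> int:
    """Round each dimension up to a multiple of `block` and return the pixel count."""
    return math.ceil(w / block) * block * math.ceil(h / block) * block

icon = 18 * 18                       # 324 pixels in the 18x18 source icon
avif = padded_pixels(18, 18, 4)      # 4x4 minimum block -> 20x20 = 400
jxl = padded_pixels(18, 18, 2)       # 2x2 minimum block -> 18x18 = 324

print(avif, jxl)                                 # 400 324
print(round((avif - icon) / icon * 100))         # 23 (percent extra pixels)
```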
I can think of a few aspects where avif is limited.
12-bit is fine for delivery, but it's not enough for authoring. Cameras are typically 14-bit and during authoring you'll want some footroom on top of that.
Also avif is not usable for lossless: it is slower than png and often worse. Lossless is crucial for authoring workflows.
For high-fidelity lossy compression, avif can be worse than even jpeg.
Avif cannot do CMYK at all, so for printing use cases it is not usable.
Basically it is targeted at web delivery, but it's not really a general-purpose image format that can be applied in a broad range of use cases.
Seemingly half or so of the images delivered anywhere currently are WebP (VP8-based). You'd expect those to become AVIF in the coming years, which is clearly superior for that role.
Forget lossless originals, this is about the average images that users see online, like 1000x1000 (or smaller) and 200kb (or so).
The key part is support for real-time hardware decoding. You can't have parts of images refer to far-off parts to save on redundant encoding, and your budget for calculation time and complexity is relatively small.
JXL offers re-encoding of existing jpeg at smaller size with no quality loss, and has an efficient progressive mode.
As the head of tech for a site with a few TB of user-supplied JPEG files (which are thumbnailed), I can see the appeal…
Not just no quality loss. Byte-for-byte identical
If something is 'byte-for-byte identical' wouldn't it be the same size?
It's byte for byte identical once you expand to an array of pixels. The storage format uses more advanced (lossless) compression, but the pixels that come out are identical.
When you transcode back to JPEG, the JPEG is byte-for-byte identical to the original JPEG. But if you decode to pixels you do not always get identical results as the JPEG because the JPEG standard does not specify bit-exact decoding and different decoders may differ slightly.
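The distinction the two comments above are drawing — identical decoded output versus identical stored bytes — can be illustrated with any lossless compressor. A toy sketch using zlib as a stand-in for the "same coefficients, repacked in a smaller container" idea (this is an analogy, not how libjxl actually works):

```python
# Toy illustration: the same source data stored two different ways
# yields different files, yet both decode back to identical bytes --
# analogous to a JPEG's data being repacked losslessly inside a JXL
# container and later reconstructed byte-for-byte.
import zlib

pixels = bytes(range(256)) * 64            # stand-in for the underlying image data

stored_a = zlib.compress(pixels, level=1)  # one container/packing
stored_b = zlib.compress(pixels, level=9)  # a different, smaller packing

assert stored_a != stored_b                               # the stored files differ...
assert zlib.decompress(stored_a) == pixels                # ...but both decode
assert zlib.decompress(stored_b) == pixels                # to identical data
print(len(stored_a), len(stored_b))
```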
Sounds more like a chicken-and-egg issue; it looks like "big tech" is mostly using the format for archival purposes.
As Mozilla put it, adopting a new format increases the attack surface (which is already quite large) of a browser. They can't simply take the reference decoder and integrate it into their codebase; they also have to spend resources maintaining it. Firefox currently integrates JXL support behind an experimental flag, but that only means the feature is in testing, and it could be dropped in order to better allocate their resources.
JXL can be byte-for-byte identical to the original JPEG at around 80% of the size, iirc. Google isn’t interested in perfect archiving; they’d rather have lossy compression, which you can also do.
Why can’t they take the reference decoder?
As for the attack surface, why would their own code be less vulnerable?
They don’t tend to support each other’s codecs either, not because of attack surface
Google doesn't want it so they FUD against it.
Or: parts of the internets are salty, because Google didn't get the message that nowadays it is mandatory to support their shiny toy in Google's product.
Well, Chrome doesn't support a bunch of things, and the world didn't end. The usual complaint is that it supports too many things, actually, more than it should.
Not "or". The FUD is real.
The bugreport even states: "There is not enough interest from the entire ecosystem to continue experimenting with JPEG XL". Not sure on how Google "measured" that.
By looking at their own developers who would have to do the work, and those developers said "I'm not interested." Then looking at Mozilla developers, who said "I'm not interested." Then looking at Safari developers, who also said "I'm not interested."
Having developers decide they won't invest into your toy format forever is not FUD.
In the case of Google it is. I haven't looked at the others. Mozilla is dependent on Google money; Safari is irrelevant.
I'm not sure you understand what "FUD" means.
I'll explain it to you. "fear, uncertainty, doubt".
"uncertainty, doubt": "not enough interest from the entire ecosystem to continue experimenting"
"fear": "By removing the flag and the code in M110, it reduces the maintenance burden and allows us to focus on improving existing formats in Chrome"
Honestly I don't think there was much interest before they rejected it.
The news about it being rejected increased JXL's profile tenfold and there's a lot more interest now.
It's somewhat similar to H2 Push, which had similar justifications about adoption and interest. When frameworks finally started supporting it to some extent, Google expressed their wish to kill the feature, which instantly dissuaded most developers from finishing their implementations or from even considering it.
(Not to mention the fact that the developer tooling for H2 Push was terrible to say the least for a long time)
To me it seems Chromium/Google are somewhat out of touch with how slowly the web moves and how slowly new things are adopted.
In this case, it was also behind an experimental flag; I don't know how the f** they expect people to show interest if it's practically not usable on the web.
I think the most interesting takeaway here is that while JXL continues to face unfair treatment from web browser vendors, jpegli is a really fascinating development that performs very closely to AVIF at high quality in the dataset. I'd like to test jpegli with the XYB colorspace in the future to see how much better it is compared to RGB, but even with the RGB handicap it still looks very promising.
If image types were implemented the way they are on the Amiga OS, we wouldn't be having this conversation about whether jpeg-xl will be supported in a web browser.
When you add a jpeg-xl.library, every app on the system instantly supports loading and saving that image format.
Windows already has a codec system. Chrome just doesn't use it. No matter how perfect a system service you have, if the premier web browser doesn't use it, it doesn't matter.
This is on purpose, because otherwise it would lead to proliferation of redundant, potentially crappy and insecure formats on the Web.
You could end up with sites using some Windows-only formats (e.g. some internal MS Office format), iPhone-centric sites would serve patented HEIC, and then browsers on other platforms would be forced to adopt these formats.
Not every plugin dumped in the local OS is as hardened as web-facing codecs. People would likely get pwned through some legacy fax codec from their scanner software.
Users seeing broken images is not a good UX, and users installing random code-running plugins, because websites tell them to, is even more annoying and dangerous.
We've got decades of experience, and scars in the User-Agent header, showing that developers can't be trusted to make sensible technical choices.
>This is on purpose, because otherwise it would lead to proliferation of redundant, potentially crappy and insecure formats on the Web.
In a well-designed datatype system, an image datatype would, for decoding, take a compressed image and return a decompressed image, possibly with streaming support.
It would not be able to do anything else, not having capabilities beyond what is necessary.
Windows's system is terrible, though. It's buggy, it requires the use of the MS Store, and the user experience is awful. The HEVC extension, for example, has caused incorrect HDR playback on certain AMD cards for years now, with no fix in sight. It's surprising that you've forgotten how shitty the web was, with all the "You must install Adobe Flash Player to view this content" banners, or, shudder, Java and ActiveX applets.
Does GIMP have some issue with AVIF? I tried setting the print size, and after re-opening, it wasn't kept.