I'm definitely excited to see 64 bit as a default part of the spec. A lot of web apps have been heavily restricted by this, in particular any online video editors. We see a bunch of restrictions due to the 32 bit cap today here at Figma. One thing I'm curious though is whether mobile devices will keep their addressable per-tab memory cap the same. It's often OS defined rather than tied to the 32 bit space.
I guess I’m just a crusty ol’ greybeard C++ developer, but it seems like a video editor is out of place in a document browser. There’s a perfectly good native operating system that nobody uses any more.
If we think we need a more thoroughly virtualized machine than traditional operating system processes give us (which I think is obvious), then we should be honest and build a virtualization abstraction that is actually what we want, rather than converting a document reader into a video editor…
> ... document browser ... document reader ...
I'm going to assume you're being sincere. But even the crustiest among us can recognize that the modern purpose for web browsers is not (merely) documents. Chances are, many folks on HN in the last month have booked tickets for a flight or bought a home or a car or watched a cat video using the "document browser".
> If we think we need a more thoroughly virtualized machine than traditional operating system processes give us (which I think is obvious)...
Like ... the WASM virtual machine? What if the WASM virtual machine were the culmination of learning from previous not-quite-good-enough VMs?
WASM -- despite its name -- is not truly bound to the "document" browser.
So you are telling me I can now directly render from WASM into the viewport of the browser with a11y? Nope, then it's still restricted to interacting with the DOM through JS, or rendering to a canvas with no a11y support, or significantly worse DX for a11y because you would need to render a screen-reader-only document layout on top of your actual UI, instead of just handing the browser an interaction tree and allowing it to do what is needed for screen readers.
You can write Wasm bindings to the native UI toolkit of your choice and render with that. It doesn’t have to be in the DOM.
We have, but not by choice, I miss my native apps, even though ChromeOS Platform pays the bills.
> booked tickets for a flight or bought a home or a car or watched a cat video
Would you install a native app to book a flight? One for each company? Download updates for them every now and then, uninstall them when you run out of disk space etc
I can ask the same question about every other activity we do in these non-native apps.
I have installed all them on the phone, so yes.
Unfortunately several of them are glorified webviews.
I am old enough to have lived through the days when the Internet meant a set of networking protocols, not ChromeOS Platform.
And in those days hard disks were still bloody expensive, by the way.
> I have installed all them on the phone, so yes.
Isn't your phone providing a sandbox, a distribution system, a set of common runtime services, etc to get these native apps functional?
You don't have to squint to realize that these things we call "document browsers" are doing a lot of the same work that Apple/Google are doing with their mobile OSes.
You mean like Windows Store, Mac App Store, apt, yum/dnf, emerge,....?
All the OS frameworks that are available across most operating systems that don't fragment themselves into endless distributions?
> don't fragment themselves into endless distributions?
My dear Lord! What world are you living in?
Take a look at all of the "mobile apps" you installed on your phone and tell me which of those would ever devote any resource to make an apt/rpm repository for their desktop applications.
Even the ones that want to have a desktop application cannot figure out how to reliably distribute their software. The Linux crowd itself is still fighting the Flatpak vs AppImage holy war. Mark Shuttleworth is still beating the snap horse.
The Web as a platform is far from ideal, but if it weren't for it I would never have been able to switch to Linux as my primary base OS, and I would have to accept the Apple/Microsoft/Google oligopoly, just like we are forced to do in the mobile space.
In the world we build for ourselves, a worse is better mentality world.
> a worse is better mentality world.
Seems like your preferred world is the totalitarian "choose any color you want as long as it is black" one, where everything is perfectly optimized and perfectly integrated into a single platform.
> > a worse is better mentality world.
>
> Seems like your preferred world is the totalitarian "choose any color you want as long as it is black" one, where everything is perfectly optimized and perfectly integrated into a single platform.
Idk, I have a feeling they would be anti-systemd too
I am not so sure. Both Poettering and pjmlp are German.
Alan Kay on “Should web browsers have stuck to being document viewers?” and a discussion of Smalltalk, HyperCard, NeWS, and HyperLook
https://donhopkins.medium.com/alan-kay-on-should-web-browser...
>Alan Kay answered: “Actually quite the opposite, if “document” means an imitation of old static text media (and later including pictures, and audio and video recordings).”
document browser, document reader, printed paper, paper form, return to sender... those are all in the same concept space*
"virtual machine" is clearly not
that said, i love WASM in the browser, high time wrapping media with code to become "new media" wasn't stuck solely with a choice between JS and "plugins" like Java, Flash, or Silverlight
it's interesting to look back at a few "what might have been" alternate timelines, when the iPhone was intended to launch as an HTML app platform, or Palm Pre (under a former Apple exec, the "pod-father") intended the same with WebOS. if a VM running a web OS shows a PDF or HTML viewer in a frame, versus if an HTML viewer shows a VM running a web OS in a frame...
we're still working on figuring out whether new media and software distribution are the same.
today, writing Swift, or Nim, or whatever LLVM, and getting WASM -- I agree with you, feels like a collective convergence on the next step of common denominator
* note: those are all documents and document workflows with skeuomorphic analogs in the same headspace, and newspaper with "live pictures" has been a sci-fi trope for long enough TV news still can't bring themselves to say "video" (reminding us "movie" is to "moving" as "talkie" was to "talking") so extending document to include "media" is reasonable. but extending that further to be "arbitrary software" is no longer strictly document nor media
Personally not a fan of Windows 95 in the browser, however the browser stopped being a "document reader" a decade ago. It's the only universal, sandboxed runtime, and everything is moving in that direction ... safe code. WASM isn't a worse VM; it's a different trade-off: portable, fast-starting, capability-scoped compute without shipping an OS. Raw devices still have their place (servers). If you need safe distribution + performance that's "good enough", WASM in the browser is going to be the future of the client.
A decade ago? Gmail was launched in 2004, 21 years ago.
XMLHttpRequest was part of IE5 in 1999 as an ActiveXObject. Outlook Web team built it a year earlier.
Not to mention Java applets which is how you would do this sort of thing in the early 2000s
The problem is: even us old timers can't deny nor change the fact that operating systems have been putting up barriers to disallow running untrusted code downloaded from the internet.
Try to distribute an installer on Windows that isn't signed with an expensive EV certificate, for instance. It's scare-popup galore.
Not to mention the closed gardens of the Apple and Google Stores which even when you get in, you can be kicked out again for absolutely no objective reason (they don't even need to tell you why).
> then we should be honest and build a virtualization abstraction that is actually what we want,
This is not in the interest of Microsoft, Google or Apple. They can't put the open web back into the box (yet, anyway), but they will not support any new attempts to create an open software ecosystem on "their" platforms.
Just wait until the Web finally becomes ChromeOS Platform, after the Safari barrier is out of the way.
There is no difference between a "document" and an "app". There has never been a difference between the two, it's a purely artificial distinction.
Word and LibreOffice "documents" can run embedded macros. Emacs has `org-mode`, which can call out to any programming language on your $PATH. A PDF document is produced by running code in a stack-based virtual machine. Even fonts have bytecode instructions embedded inside them that are used for hinting.
If by "document" you mean "static text with no executable bits", then only plain text files can truly be called documents. Everything else is a computer program, and has been since the dawn of computing.
Comment was deleted :(
I don’t think the total dissolution of a blurry boundary is a useful act. Yes, many document formats become Turing complete and run Doom eventually, but there is still a notable practical distinction between the intent to create a document interpreter versus an application framework.
The browser removes the friction of needing to install specialized software locally, which is HUGE when you want people to actually use your software. Figma would have been dead in the water if it wasn't stupidly simple to share a design via a URL to anyone with a computer and an internet connection.
I can't shake the feeling that this ship has sailed and only a few got to get on it while it happened.
And this comes from someone who started with Flash, built actual video editing apps with it, and for the last 25 years has built applications with an "it's not a web app, it's a desktop app that lives in a browser" attitude [1].
Even with Flash we often used a hybrid approach where you had two builds from the same codebase - a lite version running in the browser and an optional desktop app (AIR) with full functionality. SharedObjects and LocalConnection made this approach extremely feasible, as both instances were aware of each other and you could move data and objects between them in real time.
The premise is great, but it was never fully realized - sure, you have a few outliers like Figma, but building a real "desktop app" in a browser comes with a lot of quirks, and the resulting UX is just terrible in most cases.
[1] just to be clear, there's a huge difference between web page and web app ;D
Think of it like emacs. Browsers are perfectly good operating systems just needing a better way to view the web.
That's too true to be funny!
PostScript is Turing complete. SVG almost got the ability to open raw network sockets. Word files were (are?) a common platform for distributing malware.
I would argue that as soon as it was decided to include JavaScript runtime in the browser, it stopped being a plain document browser. From then on, we were just on the evolutionary path of converting it from a low capability app sandbox to a highly capable one.
There are projects to run WASM on bare metal.
I do agree that we tend to run a lot in a web-browser or browser environment though. It seems like a pattern that started as a hack but grew into its own thing through convenience.
It would be interesting to sit down with a small group and figure out exactly what is good/bad about it and design a new thing around the desired pattern that doesn't involve a browser-in-the-loop.
> run WASM on bare metal
Heh, reminds me of those boxes Sun used to make that only ran Java. (I don’t know how far down Java actually went; perhaps it was Solaris for the lower layers now that I think about it…)
The Java went so far down that many early ARM cores could be placed in Jazelle DBX mode, which processed Java bytecode in hardware *shudders*
I think it was far less special than advertised, so it was probably a stripped Solaris that ran a JRE, hoping no one would notice. Dog slow they were, at least; from my viewpoint there was nothing magic about those boxes at all.
With hypervisors and a Linux kernel doing the heavy lifting, the WASM on bare metal probably just looks a lot like a regular process. I would bet Sun did similar … minus the hypervisor.
I do miss the Solaris 10/OpenSolaris tech though. I don’t know anything that comes close to it today.
Solaris is still around, while OpenSolaris forks support Oxide endeavours.
Technically, yes. I built+ported a majority of Debian packages onto Nexenta OS but that effort (and many parallel efforts) just vanished when Oracle purchased Sun. I don't miss SVR4 packages and I never grew fond of IPS. So many open-source folk warned of the dangers of CDDL and they were largely realized in fairly short time. Unsurprisingly, #opensolaris on irc also had a precipitous drop-off around the same time.
dtrace/zones/smf/zfs/iscsi/... and the integration between them all was top notch. One could create a zone, spin up a clone, do some computation, trash the filesystem and then just throw the clone away... in very short time. Also, that whole loop happened without interacting with zfs directly; I know that some of these things have been ported but the ports miss the integration.
eg: zfs on Linux is just a filesystem. zfs on Solaris was the base of a bunch of technology. smf tied much of it together.
eg: dtrace gave you access all the way down to individual read/write operations per disk in a raid-z and all the way up to the top of your application running inside of a zone. One tool with massive reach and very little overhead.
Not much compels me to go back to the ecosystem; I've been burned once already.
But that's what they're doing. The hard part isn't the VM. The hard part is the long fought functional yet secure sandbox that a browser provides.
> functional yet secure
Secure? Debatable. Functional? Not really.
For example, try accessing a security key and watch the fun. Sure, if you access it exactly the way Google wants you to, things kinda-sorta work, sometimes. If you don't want to obey your Google masters, good luck getting your Bluetooth or USB packet to your security key.
And because you are "secure", you can't store anything on the local hard drive. Oh, and you can't send a real network packet either. All you can do is send a supplication request to a network database owned by somebody else that holds your data--"Please, sir, can I have some more?". The fact that this prevents you from exporting your data away from cloud-centered SaaS providers is purely coincidental, I'm sure. </sarcasm>
So in the name of security we just kneecap the end users--if the users can't do anything, they're secure, right? Diana Moon Glampers would be proud.
Plenty of people still want local-first apps that function offline.
Totally possible today with modern SPA technology that all major browsers support
The security model of the web needs to be brought to the OS.
I'm a 64yo fart. Started programming with machine code. Native is my bread and butter. Still, I have no problem using the browser as a deployment platform for some types of applications. Almost everything has its own use.
Unfortunately, Memory64 comes with a significant performance penalty because the wasm runtime has to check bounds (which wasn't necessary on 32-bit as the runtime would simply allocate the full 4GB of address space every time).
But if you really need more than 4GB of memory, then sure, go ahead and use it.
Actually, runtimes often allocate 8GB of address space because WASM has a [base32 + index32] address mode where the effective address could overflow into the 33rd bit.
On x86-64, the start of the linear memory is typically put into one of the two remaining segment registers: GS or FS. Then the code can simply use an address mode such as "GS:[RAX + RCX]" without any additional instructions for addition or bounds-checking.
The comedy option would be to use the new multi-memory feature to juggle a bunch of 32bit memories instead of a 64bit one, at the cost of your sanity.
The problem with multi-memory (and why it hasn't seen much usage, despite having been supported in many runtimes for years) is that basically no language supports distinct memory spaces. You have to rewrite everything to use WASM intrinsics to work on a specific memory.
Somewhat related. At some point around 15 years ago I needed to work with large images in Java, and at least at the time the language used 32-bit integers for array sizes and indices. My image data was about 30 gigs in size, and despite having enough RAM and running a 64-bit OS and JVM, I couldn't fit the image data into a single array.
This multi-memory setup reminds me of my array juggling I had to do back then. While intellectually challenging it was not fun at all.
didn't we call it 'segmented memory' back in DOS days...?
We call it "pointer compression" now. :)
Seriously though, I’ve been wondering for a while whether I could build a GCC for x86-64 that would have 32-bit (low 4G) pointers (and no REX prefixes) by default and full 64-bit ones with __far or something. (In this episode of Everything Old Is New Again: the Very Large Memory API[1] from Windows NT for Alpha.)
[1] https://devblogs.microsoft.com/oldnewthing/20070801-00/?p=25...
A moderate fraction of the work is already done using:
https://gcc.gnu.org/onlinedocs/gcc/Named-Address-Spaces.html
Unfortunately the obvious `__attribute__((mode(...)))` errors out if anything but the standard pointer-size mode (usually SI or DI) is passed.
Or you may be able to do it based on x32, since your far pointers are likely rare enough that you can do them manually. Especially in C++. I'm pretty sure you can just call "foreign" syscalls if you do it carefully.
6502 zero page instruction vibes.
It was glorious I tell you.
Especially how you could increase the segment value by one or the offset by 16 and you would address the same memory location. Think of the possibilities!
And if you wanted more than 1MB you could just switch memory banks[1] to get access to a different part of memory. Later there was a newfangled alternative[2] where you called some interrupt to swap things around but it wasn't as cool. Though it did allow access to more memory so there was that.
Then virtual mode came along and it's all been downhill from there.
[1]: https://en.wikipedia.org/wiki/Expanded_memory
[2]: https://hackaday.com/2025/05/15/remembering-more-memory-xms-...
> Think of the possibilities!
Schulman’s Unauthorized Windows 95 describes a particularly unhinged one: in the hypervisor of Windows/386 (and subsequently 386 Enhanced Mode in Windows 3.0 and 3.1, as well as the only available mode in 3.11, 95, 98, and Me), a driver could dynamically register upcalls for real-mode guests (within reason), all without either exerting control over the guest’s memory map or forcing the guest to do anything except a simple CALL to access it. The secret was that all the far addresses returned by the registration API referred to the exact same byte in memory, a protected-mode-only instruction whose attempted execution would trap into the hypervisor, and the trap handler would determine which upcall was meant by which of the redundant encodings was used.
And if that’s not unhinged enough for you: the boot code tried to locate the chosen instruction inside the firmware ROM, because that will have to be mapped into the guest memory map anyway. It did have a fallback if that did not work out, but it usually succeeded. This time, the secret (the knowledge of which will not make you happier, this is your final warning) is that the instruction chosen was ARPL, and the encoding of ARPL r/m16, AX starts with 63 hex, also known as the ASCII code of the lowercase letter C. The absolute madmen put the upcall entry point inside the BIOS copyright string.
(Incidentally, the ARPL instruction, “adjust requested privilege level”, is very specific to the 286’s weird don’t-call-it-capability-based segmented architecture... But it has a certain cunning to it, like CPU-enforced __user tagging of unprivileged addresses at runtime.)
> The absolute madmen put the upcall entry point inside the BIOS copyright string.
Isn’t that an arbitrary string, though? Presumably AMI and Insyde have different copyright messages, so then what?
To clarify: when I said that “the boot code tried to locate the chosen instruction inside the firmware ROM”, I literally meant that it looked through the entirety of the ROM BIOS memory range for a byte, any byte, with value 63 hex. There’s even a separate (I’d say prematurely factored out) routine for that, Locate_Byte_In_ROM. It just so happens that the byte in question is usually found inside the copyright string (what with the instruction being invalid and most of the rest of the exposed ROM presumably being valid code), but the code does not assume that.
If the search doesn’t succeed or if you’ve set SystemROMBreakPoint=off in WIN.INI, then the trap instruction will instead be located in a hypervisor-provided area of RAM that’s shared among all guests, accepting the risk that a misbehaving guest will stomp over it and break everything (don’t know where it fits in the memory map).
As to the chances of failing, well, I suspect the original target was the c in “(c)”, but e.g. Schulman shows his system having the trap address point at “chnologies Ltd.”, presumably preceded by “Phoenix Te”. AMI and Award were both “Inc.”, so that would also work. (Insyde wasn’t a thing yet; don’t know what happened on Compaq or IBM machines.) One way or another, looks like a c could be found somewhere often enough that Microsoft programmers were satisfied with the approach.
I thought so, but "Copyright" is always the same? Haha, that's dangerously clever or cleverly dangerous.
Comment was deleted :(
And it turned out we have the transistors to avoid it, but it's a really good optimization for CPUs nowadays.
At least most people design non-overlapping segments. And I'm not sure wasm would gain anything from it, being a virtual machine instead of a real one.
wait.... UNREAL MODE!
It looks like memories have to be declared up front, and the memory.copy instruction takes the memories to copy between as numeric literals. So I guess you can't use it to allocate dynamic buffers. But maybe you could decide memory 0 = heap and memory 1 = pixel data or something like that?
Honestly you could allocate a new memory for every page :-)
The irony for me is that it's already slow because of the lack of native 64-bit math. I don't care about the memory space available nearly as much.
Eh? I'm pretty sure it's had 64-bit math for a while -- i64.add, etc.
They might have meant the lack of true 64-bit pointers...? IIRC the Chrome wasm runtime used tagged pointers. That comes with an access cost of having to mask off the top bits. I always assumed that was the reason for the 32-bit specification in v1.
Comment was deleted :(
Bounds checking in other languages is often reported to result in pretty low overheads. It will be interesting to see some details about how this turns out.
I still don't understand why it's slower to mask to 33 or 34 bit rather than 32. It's all running on 64-bit in the end isn't it? What's so special about 32?
That's because with 32-bit addresses the runtime did not need to do any masking at all. It could allocate a 4GiB area of virtual memory, set up page permissions as appropriate and all memory accesses would be hardware checked without any additional work. Well that, and a special SIGSEGV/SIGBUS handler to generate a trap to the embedder.
With 64-bit addresses, and the requirements for how invalid memory accesses should work, this is no longer possible. AND-masking does not really allow for producing the necessary traps for invalid accesses. So every one now needs some conditional before to validate that this access is in-bounds. The addresses cannot be trivially offset either as they can wrap-around (and/or accidentally hit some other mapping.)
I don't feel this is going to be as big of a problem as one might think in practice.
The biggest contributor to pointer arithmetic is offset reads into pointers: what gets generated for struct field accesses.
The other class of cases are when you're actually doing more general pointer arithmetic - usually scanning across a buffer. These are cases that typically get loop unrolled to some degree by the compiler to improve pipeline efficiency on the CPU.
In the first case, you can avoid the masking entirely by using an unmapped barrier region after the mapped region. So you can guarantee that if pointer `P` is valid, then `P + d` for small d is either valid, or falls into the barrier region.
In the second case, the barrier region approach lets you lift the mask check to the top of the unrolled segment. There's still a cost, but it's spread out over multiple iterations of a loop.
As a last step: if you can prove that you're stepping monotonically through some address space using small increments, then you can guarantee that even if theoretically the "end" of the iteration might step into invalid space, that the incremental stepping is guaranteed to hit the unmapped barrier region before that occurs.
It's a bit more engineering effort on the compiler side.. and you will see some small delta of perf loss, but it would really be only in the extreme cases of hot paths where it should come into play in a meaningful way.
> AND-masking does not really allow for producing the necessary traps for invalid accesses.
Why does it need to trap? Can't they just make it UB?
Specifying that invalid accesses always trap is going to degrade performance, that's not a 64-bit problem, that's a spec problem. Even if you define it in WASM, it's still UB in the compiler so you aren't saving anyone from UB they didn't already have. Just make the trapping guarantee a debug option only.
It's WASM. WASM runs in a sandbox and you can't have UB on the hardware level. Imagine someone exploiting the behavior of some browser when UB is triggered. Except that it's not the programmer getting the nasal demons [1] but some poor user, like a mom of four children in Nebraska running a website on her cell phone.
Comment was deleted :(
The special part is the "signal handler trick" that is easy to use for 32-bit pointers. You reserve 4GB of memory - all that 32 bits can address - and mark everything above used memory as trapping. Then you can just do normal reads and writes, and the CPU hardware checks out of bounds.
With 64-bit pointers, you can't really reserve all the possible space a pointer might refer to. So you end up doing manual bounds checks.
Hi Alon! It's been a while.
Can't bounds checks be avoided in the vast majority of cases?
See my reply to nagisa above (https://news.ycombinator.com/item?id=45283102). It feels like by using trailing unmapped barrier/guard regions, one should be able to elide almost all bounds checks that occur in the program with a bit of compiler cleverness, and convert them into trap handlers instead.
Hi!
Yeah, certainly compiler smarts can remove many bounds checks (in particular for small deltas, as you mention), hoist them, and so forth. Maybe even most of them in theory?
Still, there are common patterns like pointer-chasing in linked list traversal where you just keep getting an unknown i64 pointer, that you just need to bounds check...
Comment was deleted :(
Because CPUs still have instructions that automatically truncate the result of all math operations to 32 bits (and sometimes 8-bit and 16-bit too, though not universally).
To operate on any other size, you need to insert extra instructions to mask addresses to the desired size before they are used.
Comment was deleted :(
WASM traps on out-of-bounds accesses (including overflow). Masking addresses would hide that.
Webapps limited by 4GiB memory?
Sounds about right. Guess 512 GiB memory is the minimum to read email nowadays.
I know you're in this for the satire, but it's less about the webapps needing the memory and more about the content - that's why I mentioned video editing webapps.
For video editing, 4GiB of completely uncompressed 1080p video in memory is only 86 frames, or about 3-4 seconds of video. You can certainly optimize this, and it's rare to handle fully uncompressed video, but there are situations where you do need to buffer this into memory. It's why most modern video editing machines are sold with 64-128GB of memory.
In the case of Figma, we have files with over a million layers. If each layer takes 4kb of memory, we're suddenly at the limit even if the webapp is infinitely optimal.
Comment was deleted :(
> 4GiB of completely uncompressed 1080p video in memory is only 86 frames
How is that data stored?
Because (2^32)÷(1920×1080×4) = 518 which is still low but not 86 so I'm curious what I'm missing?
> How is that data stored?
So glad you asked. It's stored poorly because I'm bad at maths and I'm mixing up bits and bytes.
That's what I get for posting on HN while in a meeting.
I would guess 3 colour channels at 16bit (i.e. 2 bytes)
(2^32)÷(1920×1080×4×3×2) = 86
Where does the 4 come from? I thought it was R+G+B+A, but you already have 3 colour channels in that calculation
Yep, my logic is faulty there. And even if we assume that it's 24bpp color, that's still a factor of 2 out.
Apparently with 24 bytes per pixel instead of bits :) Although to be fair, there's HDR+ and DV, so probably 4(RGBA/YUVA) floats per pixel, which is pretty close..
It doesn't actually allocate 4 GiB. Memory can be mapped without being physically occupied.
No, web apps can actually use 4GB of memory (now 16GB apparently).
In fairness, this is talking about Figma, not an email client
Finally a web browser capable of loading slack
I'm excited by the GC, ref types and JS string API! Been a while J, how are you going?
Comment was deleted :(
Looking at the present, I assume we need to think about running local LLMs in the browser. Just a few days ago I submitted an article about that [1].
> Garbage collection. In addition to expanding the capabilities of raw linear memories, Wasm also adds support for a new (and separate) form of storage that is automatically managed by the Wasm runtime via a garbage collector. Staying true to the spirit of Wasm as a low-level language, Wasm GC is low-level as well: a compiler targeting Wasm can declare the memory layout of its runtime data structures in terms of struct and array types, plus unboxed tagged integers, whose allocation and lifetime is then handled by Wasm. But that’s it.
Wow!
It's very refreshing and good to see WASM is embracing GC in addition to non-GC support. This approach is similar to D language where both non-GC and GC are supported with fast compilation and execution.
By the way now you can generate WASM via Dlang compiler LDC [1].
[1] Generating WebAssembly with LDC:
I'm not familiar with WASM. Can someone explain why this is a good thing? How does this work with languages that do not have a garbage collector, like Rust?
Non-GCed languages will continue to manage memory themselves. Previously, GCed languages that wanted to run on WASM had to have an implementation of their runtime including GC compiled to WASM. The idea or hope here is that those languages can use the built-in GC instead and slim down the amount of WASM that needs to be delivered to run the application to only include a minimal runtime. The current scenario is closer to if a web app or node app built with JavaScript had to ship a significant portion of V8 with it to function.
The answer was kind of known beforehand. It was to enable the use of GCed languages like Python or Ruby to create WASM applications. Meanwhile, non-GCed languages like Rust, C and C++ were supposed to continue to work as before on WASM without breaking compatibility. This is what they seem to have finally achieved. But I needed to make sure of it. So, here are the relevant points from the WASM GC proposal [1]:
* Motivation
- Efficient support for high-level languages
- faster execution
- smaller modules
- the vast majority of modern languages need it
* Approach
- Pay as you go; in particular, no effect on code not using GC, no runtime type information unless requested
- Don't introduce dependencies on GC for other features (e.g., using resources through tables)
[1] https://github.com/WebAssembly/spec/blob/wasm-3.0/proposals/...
Note that the high-level language needs a sufficient abstraction in its own runtime to allow substituting the Wasm GC for the runtime's own GC. Work has been done for Java and Kotlin, but Python, C#, Ruby, and Go can't yet use the Wasm GC.
Also consider this one: https://github.com/6over3/zeroperl
It works very well, thank you for asking: https://rustwasm.github.io/book/
Does this allow for shrinking the WebAssembly.Memory object?
- https://github.com/WebAssembly/design/issues/1397
- https://github.com/WebAssembly/memory-control/issues/6
This is a crucial issue, as the released memory is still allocated by the browser.
No, I don't think it will. Pointers to managed objects are opaque, and aren't actually backed by the wasm memory buffer. The managed heap is offloaded.
Shrinking the memory object shouldn't require any special support from GC, just an appropriate API hook. It would, as always, be up to the application code running inside the module to ensure that if a shrink is done, that the program doesn't refer to memory addresses past the new endpoint.
If this hasn't been implemented yet, it's not because it's been waiting on GC, but more that it's not been prioritized.
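The asymmetry is easy to see from the JS side today: `WebAssembly.Memory` exposes `grow()` but no shrink counterpart. A minimal sketch (sizes are in 64 KiB pages):

```javascript
// WebAssembly.Memory can grow but has no API to release pages today;
// that gap is what the linked memory-control proposal discusses.
const mem = new WebAssembly.Memory({ initial: 1, maximum: 10 });
console.log(mem.buffer.byteLength); // 65536 (1 page = 64 KiB)

const prev = mem.grow(2); // returns the previous size in pages
console.log(prev); // 1
console.log(mem.buffer.byteLength); // 196608 (3 pages)
// There is no mem.shrink(); the buffer can only stay the same or grow.
```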
Wasm GC is entirely separate from Wasm Memory objects, so no, this does not help linear memory applications.
This seems less than ideal to me.
1. Different languages have totally different allocation requirements, and only the compiler knows what type of allocator works best (e.g. generational bump allocator for functional languages, classic malloc style allocator for C-style languages).
2. This perhaps makes wasm less suitable for usage on embedded targets.
The best argument I can make for this is that they're trying to emulate the way that libc is usually available and provides a default malloc() impl, but honestly that feels quite weak.
I don't see this as a problem in the JVM, where independently of what programming language you are using, you will use the GC configured on the JVM at launch.
That sounds like WASM is going into the Java direction. Is that really a good thing?
What do you mean by the Java direction? It's a virtual machine with GC support, so I guess in that regard it's similar to the JVM, CLR, BEAM, et al. If anything, those VMs show performance improvement and better GC over time and a strong track record of giving legacy software longevity. The place where things seem to fall apart over the long term is when you get to the GUI, which is arguably a problem with all software.
When is WASM finally going to be able to touch the DOM? It feels like that was the whole point of WASM and instead its become a monster of its own that barely has anything to do with web anymore. When can we finally kill JavaScript?
Agreed. This and (sane) access to multi-threading. I want to be able to write a Rust application, compile to wasm and load it with
<html>
<body>
<div id="root"></div>
<script type="application/wasm" src="./main.wasm"></script>
</body>
</html>
Would be great for high-performance web applications and for contexts like browser extensions, where the memory usage and performance drain is real when multiplied over n open tabs. I'm not sure how code splitting would work in the wasm world, however.
V8 could be optimized to reduce its memory footprint if it detects that no JavaScript is running - or wasm-only applications could use an engine like wasmer and bypass V8 entirely.
Another factor is that web technologies are used to write desktop applications via Electron/similar. This is probably because desktop APIs are terrible and not portable. First class wasm support in the web would translate to more efficient desktop applications (Slack, VSCode, Discord, etc) and perhaps less hate towards memory heavy electron applications.
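For illustration, this is the JS glue that a native `<script type="application/wasm">` would make unnecessary. The module bytes here are hand-assembled purely so the sketch is self-contained; normally you'd `fetch()` a `.wasm` file and use `WebAssembly.instantiateStreaming`:

```javascript
// A minimal wasm module exporting add(a, b) = a + b.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, 1 body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);
// This instantiation boilerplate is what a wasm-only loader would eliminate:
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(instance.exports.add(2, 3)); // 5
```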
<!doctype html>
<wasm src="my-app.wasm">
Why not just do the whole DOM out of your WASM?
> <script type="application/wasm" src="./main.wasm"></script>
<applet code="./Main.class"></applet>
Plus ça change...
The difference is that now it is cool.
Hi, thanks for the mention :-)
Lead dev of CheerpJ and CTO of Leaning Technologies here. Happy to answer any question from the community.
You're welcome, Leaning Technologies has done a lot of cool tech.
You can do all of this using Rust today. Very sane access to multi-threading => you write Rust code that runs in parallel, and it is compiled to run under several web workers without you having to worry about it.
Not sure what the Rust situation is like, but last I checked (and compiled a non-trivial personal application) WASM supported the pthreads API and it seemed to work reasonably well. Have you encountered stumbling points porting a heavily MT Rust program to WASM?
That's supposedly WASI, an interface specifically designated for system programming use, and that's where it implements part of the POSIX support including pthread.
OTOH you still need to start a wasm runtime first, then import the WASI module into the wasm host.
P.S.: used to tinker with wasmtime and wasmi to add wasm support to my half abandoned deno clone ;) I learned this the hard way
WASI does not implement POSIX. And it isn't the goal of WASI to implement POSIX. It does not support pthreads either. WASI is similar to POSIX because it's an API that provides a "system interface" for WASM programs, but the API is much simpler, quite different, and capability based. There are things like WASIX (by Wasmer), built on top of WASI, that aim to provide POSIX support for multiple languages.
WASM itself does not support pthreads. However, things like Emscripten have very good support for pthreads. WASM is one of the compilation targets of Emscripten (we had asm.js before that), and on top of that Emscripten provides excellent shims for many POSIX APIs. Regarding Rust, which has supported WASM as a compilation target for a long time now, the idiomatic way of doing this is not shimming but using higher-level APIs: if you need parallel execution threads and need to compile to WASM, you would use a crate (such as rayon) that is able to compile to WASM (and will use web workers under the hood just like Emscripten does). You would not use pthreads directly (std::thread).
WASM does not support pthreads in the browser, only web workers, which are much more limited.
Contrary to naysayers, I'm pretty sure this is very doable. Most browser JS objects map 1-1 to C++ native objects implemented in the browser.
As others have pointed out, the JS component interface is defined in a language called WebIDL:
https://firefox-source-docs.mozilla.org/dom/webIdlBindings/i...
How it works in Chrome (Blink) is that a compiler uses this IDL to generate a wrapper: a V8 object that holds onto the native object reference.
Once V8 cleans up the Js object, the native code, holding a weak reference to the native objects, detects that it has become unreachable and cleans that up.
In any case, the object lifetime is encapsulated by the V8 Isolate (which is, depending on how you look at it, the lifetime of the HTML document or the heap), so it'd be perfectly fine to expose native references, as they'd be cleaned up when you navigate away/close the page.
Once you support all the relevant types, define a calling convention, and add an appropriate verifier to Wasm, it'd be possible to expose all native objects to Wasm, probably by generating a different (and lighter weight) glue to the native classes, or delegating this task to the Wasm compiler.
Of course, if you wanted to have both JS and Wasm to be able to access the same object, you'd have to do some sort of shared ownership, which'd be quite a bit more complicated.
So I'd argue it'd make sense to allow objects that are wholly-owned by the Wasm side or the JS side, which still makes a ton of sense for stuff like WebGL, as you could basically do rendering without calling into JS.
I'm going out on a limb here, but I'd guess since this multi-memory support has landed, that means a single webassembly instance can map multiple SABs, so it might be the beginnings of that.
I don't think WASM by itself will finally kill Javascript, other languages and their tooling are not well equipped to create DOM-based applications with the constraints expected for the web (fast loading times, lazy-loading, asset bundling, etc).
I don't think it is even feasible for most languages to compile to wasm in a way that doesn't require a splash screen for any non-trivial application. Which is simply unacceptable for most web content. And that is even before all the work required in user-land to support browser primitives (like URL routing, asset loading, DOM integration, etc).
So I can foresee this unlocking "heavy duty productivity apps" to run in the browser things like video editors or photoshop using web-first GUI (meaning DOM elements) without significant amounts of JS. But for general web content consumed by the masses I find it unlikely.
I expect the real "javascript death" will mean a completely new language designed from the ground-up to compile to WASM and work with browser APIs.
Basically never, because it would require re-standardizing the DOM with a lower-level API. That would take years, and no major browser implementor is interested in starting down that road. https://danfabulich.medium.com/webassembly-wont-get-direct-d...
Killing JavaScript was never the point of WASM. WASM is for CPU-intensive pure functions, like video decoding.
Some people wrongly thought that WASM was trying to kill JS, but nobody working on standardizing WASM in browsers believed in that goal.
People have to look at it, from the same point of view Google sees the NDK on Android.
"However, the NDK can be useful for cases in which you need to do one or more of the following:
- Squeeze extra performance out of a device to achieve low latency or run computationally intensive applications, such as games or physics simulations.
- Reuse your own or other developers' C or C++ libraries."
And I would argue WebGL/WebGPU are preferably better suited, given how clunky WebAssembly tooling still is for most languages.
A bit unrelated, but are there any recent benchmarks comparing it with native perf?
Hard to believe it can compete with V8 JIT anytime soon. Might be easier to integrate fast vector libraries within javascript engines.
You can write a WASM program today that touches the DOM, it just needs to go through the regular JS APIs. While there were some discussions early on about making custom APIs for WASM to access, that has long since been dropped - there are just too many downsides.
But then you need two things instead of one. It should be made possible to build WASM-only SPAs. The north star of browser developers should be to deprecate JS runtimes the same way they did Flash.
That is never going to happen until you create your own browser with a fork of the WASM spec. People have been asking for this for about a decade. The WASM team knows this but WASM wants to focus on its mission of being a universal compile target without distraction of the completely unrelated mission of being a JavaScript replacement.
It's also too early to worry about DOM apis over wasm instead of js.
The whole problem with the DOM is that it has too many methods which can't be phased out without losing backwards compatibility.
A new DOM wasm api would be better off starting with a reduced API of only the good data and operations.
The problem is that the DOM is still improving (even today), it's not stabilized so we don't have that reduced set to draw from, and if you were to mark a line in the sand and say this is our reduced set, it would already not be what developers want within a year or two.
New DOM stuff is coming out all the time; even right now we have two features coming out that can completely change the way that developers could want to build applications:
- being able to move dom nodes without having to destroy and recreate them. This makes it possible to keep the state inside that dom node unaffected, such as a video playing without having to unload and reload it. Now imagine if that state can be kept over the threshold of a multi-page view transition.
- the improved attr() api which can move a lot of an app's complexity from the imperative side to the declarative side. Imagine a single css file that allows html content creators to dictate their own grid layouts, without needing to calculate every possible grid layout at build time.
And just in the near future things are moving to allow html modules which could be used with new web component apis to prevent the need for frameworks in large applications.
Also language features can inform API design. Promises were added to JS after a bunch of DOM APIs were already written, and now promises can be abortable. Wouldn't we want the new reduced API set to also be built upon abortable promises? Yes we would. But if we wait a bit longer, we could also take advantage of newer language features being worked on in JS like structs and deeply immutable data structures.
TL;DR: It's still too early to work on a DOM api for wasm. It's better to wait for the DOM to stabilize first.
I am oversimplifying, but why should anything be stable?
That is the trend we face nowadays; there is too little stable stuff around. Take macOS, a trillion-dollar company's OS, not an open source project without funding.
Stable is a mirage, sadly.
On the contrary, it’s something that solid progress is being made towards, and which has been acknowledged (for better or for worse) as something that they expect to be supported eventually. They’re just taking it slow, to make sure they get it right. But most of the fundamental building blocks are in place now, it’s definitely getting closer.
Aside from everything the WASM team has ever said, the security issues involved, and all the other evidence and rational considerations, this still won't happen for very human reasons.
The goal behind the argument is to grant WASM DOM access equivalent to what JavaScript has so that WASM can replace JavaScript. Why would you want that? Think about it slowly.
People that write JavaScript for a living, about 99% of them, are afraid of the DOM. Deathly afraid like a bunch of cowards. They spend their entire careers hiding from it through layers of abstractions because programming is too hard. Why do you believe that you would be less afraid of it if only you could do it through WASM?
Sounds to me like they forgot the W in WASM.
I agree with the first part, but getting rid of JS entirely means that if you want to augment some HTML with one line of javascript you have to build a WASM binary to do it?
I see good use cases for building entirely in html/JS and also building entirely in WASM.
getting rid of javascript entirely means to be able to manipulate the DOM without writing any javascript code. not to remove javascript from the browser. javascript will still be there if you want to use it.
Most of the time your toolchain provides a shim so you don’t need to write JS anyway. What’s the difference?
right, that's exactly the point. those that have a shim already successfully got rid of javascript, and that's enough.
Fair enough, I misunderstood what he meant by "deprecate JS runtimes".
You can use a framework that abstracts all the WASM to JS communication for DOM access. There are many such framework already.
The only issue is that there’s a performance cost. Not sure how significant it is for typical applications, but it definitely exists.
It’d be nice to have direct DOM access, but if the performance is not a significant problem, then I can see the rationale for not putting in the major amount of work it’d take to do this.
Which framework is the best or most commonly used?
Yew, Leptos and Dioxus are all pretty decent with their own advantages and disadvantages if you like Rust. It's been a year or so since I last looked at them, to me the biggest missing piece was a component library along the lines of MUI to build useful things with. I'd be surprised if there weren't at least Bootstrap compatible wrapper libraries for them at this point.
I have slightly different question than OP - what's left until it feels like javascript is gone for people who don't want to touch it?
Say I really want to write front end code in Rust* does there just need to be a library that handles the js DOM calls for me? After that, I don't ever have to think about javascript again?
> Say I really want to write front end code in Rust* does there just need to be a library that handles the js DOM calls for me? After that, I don't ever have to think about javascript again?
yes, e.g. with Leptos you don't have to touch JS at all
Could you list some of these downsides and the reasons for them?
For starters, the DOM API is huge and expansive. Simply giving WASM the DOM means you are greatly expanding what the sandbox can do. That means lower friction when writing WASM with much much higher security risks.
But further, WASM is more than just a browser thing at this point. You might be running in an environment that has no DOM to speak of (think nodejs). Having this bolted on extension simply for ease of use means you now need to decide how and when you communicate its availability.
And the benefits just aren't there. You can create a DOM exposing library for WASM if you really want to (I believe a few already exist) but you end up with a "what's the point". If you are trying to make some sort of UX framework based on wasm then you probably don't want to actually expose the DOM, you want to expose the framework functions.
> you probably don't want to actually expose the DOM, you want to expose the framework functions.
Aren't the framework functions closely related to the DOM properties and functions?
I was under the impression that this very much still on the table, with active work like the component model laying the foundation for the ABI to come.
Isn't going through the JS APIs slow?
used to be, in the early days, but nowadays runtimes have optimized the function call overhead between WASM and JS to near zero
https://hacks.mozilla.org/2018/10/calls-between-javascript-a...
It did improve a lot, but unfortunately not near-zero enough.
It is manageable if you avoid JS/wasm round trips, but if you assume the cost is near zero you will be in for an unpleasant surprise.
When I have to do a lot of different calls into my wasm blob, I am way, way faster batching them. Meaning making one call into wasm, that then gets all the data I want and returns it.
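The batching pattern can be shown end to end with a toy module: one wasm export that sums i32 values out of linear memory, so JS makes a single boundary crossing instead of one per element. The module bytes are hand-assembled purely for this sketch:

```javascript
// Module exporting a memory "mem" and sum(ptr, len) -> i32,
// which loops over len i32 values starting at ptr in linear memory.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,       // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                                     // func 0 uses type 0
  0x05, 0x03, 0x01, 0x00, 0x01,                               // memory: min 1 page
  0x07, 0x0d, 0x02, 0x03, 0x6d, 0x65, 0x6d, 0x02, 0x00,       // export "mem"
  0x03, 0x73, 0x75, 0x6d, 0x00, 0x00,                         // export "sum"
  0x0a, 0x2d, 0x01, 0x2b,                                     // code section, 1 body
  0x01, 0x01, 0x7f,                                           // 1 i32 local (acc)
  0x02, 0x40, 0x03, 0x40,                                     // block, loop
  0x20, 0x01, 0x45, 0x0d, 0x01,                               // if len == 0, break
  0x20, 0x02, 0x20, 0x00, 0x28, 0x02, 0x00, 0x6a, 0x21, 0x02, // acc += mem[ptr]
  0x20, 0x00, 0x41, 0x04, 0x6a, 0x21, 0x00,                   // ptr += 4
  0x20, 0x01, 0x41, 0x01, 0x6b, 0x21, 0x01,                   // len -= 1
  0x0c, 0x00, 0x0b, 0x0b,                                     // continue; end loop/block
  0x20, 0x02, 0x0b,                                           // return acc
]);
const { exports } = new WebAssembly.Instance(new WebAssembly.Module(bytes));

const values = [10, 20, 30, 40];
new Int32Array(exports.mem.buffer).set(values); // one bulk copy into wasm memory
console.log(exports.sum(0, values.length));     // one crossing instead of four
```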
GC was a required part, because it was needed to allow interaction between DOM-side lifetimes and WASM-side lifetimes. You might still need a puny shim to essentially "declare" whatever form you want the DOM API to take on the WASM side (probably utilizing components and WIT in the future), but that shim won't have to do anything other than register the APIs, because WASM will be able to take a reference to a DOM object and vice versa, instead of throwing a copy of data over the wall as it is now.
Comment was deleted :(
I am watching patiently from a distance to get my hands on a well-designed frontend language, but can't help but wonder... is it really _that_ inefficient to call a JS wrapper to touch the DOM?
Most code already is so horribly inefficient that I can't imagine this making a noticeable difference in most scenarios.
No, it's not that bad honestly. But it's not more efficient than JS either, and all this ugly glue code hurts my sensibility.
Sounds like something that could be trivially turned into a library, no?
There are some already (https://docs.rs/web-sys/latest/web_sys, for example), and I made mine too as an exercise. But it's still ugly, unnecessary, verbose and inefficient code.
> When can we finally kill JavaScript?
If you think JavaScript has problems I have bad news about the DOM…
Wanted to say the same thing. People often conflate JS with the DOM API, but that couldn't be further from the case.
You can get rid of JS, but that won't help much because it's just a small language interfacing with a huge, old, backwards compatible to 20+ years ago system that is the DOM.
From what I understand, WASM is purely functional; to perform side effects, you will need a runtime like WASI to implement them.
One of the things that I think make this tricky is that if you have any DOM references you now have visibility into a GCable object.
Part of the web Javascript security model is that you cannot see into garbage collection. So if you have some WASM-y pointer to a DOM element, how do you handle that?
I think with GC properly in, people might come at this problem again, but pre-GC WASM this sounds pretty intractable
Probably never. There's a pretty good recent thread on this topic:
Hot take alert
> When is WASM finally going to be able to touch the DOM?
Coming from a web background, and having transitioned to games / realtime 3D applications...
Fuck the DOM dude. The idea that programming your UI via not one but TWO DSLs, and a scripting language, is utter madness. In principle, it might sound good (something something separation of concerns, or whatever-the-fuck), but in reality you always end up with this tightly coupled garbage fire split across a pile of different files and languages. This is not the way.
We need to build better native UI libraries that just open up a WebGL context and draw shit to that. DearIMGUI can probably already do like 85% of what modern webapps do.
Anyways .. /rant
That's true. But without the DOM we also lose the browser-native accessibility support and text interaction and so on.
I know nothing about how accessibility supports works in the browser, but could they offer a way to use the accessibility API with your custom UI rendered in WebGL/WebGPU?
Preach. HTML and CSS are markup languages for creating documents. Using them for GUI applications is wild and obviously a bad idea.
HTML is pretty bad (XUL was better), but (subset) of CSS is probably OK.
Most major GUI frameworks operate on something very similar to HTML + CSS.
Can you elaborate? I don't see how this is true.
Couldn't you implement something like HTMX in wasm then and still have locality-of-behavior by specifying the behavior as html attributes?
> DearIMGUI can probably already do like 85% of what modern webapps do
I’m with you. Main blocker I’ve seen to “just use ImGui for everything” (which I’d love to adopt), is if I run ImGui in WASM the keyboard doesn’t open on mobile. This seems possible in theory because egui does it.
Even though running ImGui on mobile via WASM isn’t the primary use case, inevitably the boss or someone is going to need to “just do a quick thing” and won’t be able to on mobile, and this seems to be a hard ceiling with no real escape hatch or workaround.
One of those scenarios where, if we have to use a totally different implementation (React or whatever) to handle this 1% edge case, we might as well just use that for the other 99%.
I'd say two things about this.
1. Opening the native keyboard and plumbing those events through to the WASM runtime sounds pretty easy. It's probably not, because modern software, but conceptually it should be trivial... right??
2. In terms of 'the boss' wanting to do 'that one weird thing' that there isn't a library/plugin/whatever for in DearImgui land: if dev time for everything else gets faster, then the 10x cost of that small corner case can be absorbed by the net win. Now, I'm pretty sus on the claim that everything else gets better today, but we can certainly imagine a world where it does, and it's probably not far away.
Probably only after Components and WIT are stabilized. No point in making it without them, IMO.
I would bet on browsers being able to consume Typescript before WASM exposing any DOM API. That'd improve the Javascript situation a bit, at least.
Pretty sure this is here, my dude.
What are you referring to?
My bad, didn't fact check myself before I made an annoying quip. For some reason I thought browsers had started to roll out native support for TS.
Maybe you got confused, because NodeJS is now able to strip annotations (it does not transform enums, etc) to let you run Typescript files directly.
Ahhhh, that was it. Thank you
It's been stuck at stage 1 since early 2022 unfortunately. https://github.com/tc39/proposal-type-annotations
I haven't really been following WASM development in the last year and didn't realize that WASM had moved to a versioned release model. I've been aware of the various features in development[1] and had thought many of the newer features were going to remain optional, but I guess now implementations are expected to support all the features to be able to claim compatibility with e.g. "WASM 3.0"?
It'll be interesting to see what the second non-browser-based WASM runtime to fully support 3.0 will be (I'm guessing wasmtime will be first; I'm not counting Deno since it builds on v8). Garbage collection seems like a pretty tricky feature in particular.
Does anyone know how this 3.0 release fits into the previously announced "evergreen" release model?[2]
> With the advent of 2.0, the Working Group is switching to a so-called “evergreen” model for future releases. That means that the Candidate Recommendation will be updated in place when we create new versions of the language, without ever technically moving it to the final Recommendation state. For all intents and purposes, the latest Candidate Recommendation Draft[3] is considered to be the current standard, representing the consensus of the Community Group and Working Group.
[1] https://webassembly.org/features/
> It'll be interesting to see what the second non-browser-based WASM runtime to fully support 3.0 will be (I'm guessing wasmtime will be first; ...)
Wasmtime already supports every major feature in the Wasm 3.0 release, I believe. Of the big ones: garbage collection was implemented by my colleague Nick Fitzgerald a few years ago; tail calls by Jamey Sharp and Trevor Elliott last year (with full generality, any signature to any signature, no trampolines required!); and I built our exceptions support which merged last month and is about to go out in Wasmtime 37 in 3 days.
The "3.0" release of the Wasm spec is meant to show progress and provide a shorthand for a level of features, I think, but the individual proposals have been in progress for a long time so all the engine maintainers have known about them, given their feedback, and built their implementations for the most part already.
(Obligatory: I'm a core maintainer of Wasmtime and its compiler Cranelift)
> garbage collection was implemented by my colleague Nick Fitzgerald a few years ago
The wasm features page says it is still behind a flag on wasmtime (--wasm=gc). Is that page out of date?
No, it's still behind a flag (and so transitively, exceptions are too, because we built exception objects on top of GC).
Our docs (https://docs.wasmtime.dev/stability-tiers.html) put GC at tier 2 with reason "production quality" and I believe the remaining concerns there are that we want to do a semi-space copying implementation rather than current DRC eventually. Nick could say more. But we're spec-compliant as-is and the question was whether we've implemented these features -- which we have :-)
Great, thanks for the info!
Wizard supports all of Wasm 3.0, but as a research tool, it only has an interpreter and baseline compiler tier (no opt compiler), so it doesn't run as fast as, say, V8 or wasmtime.
I suspect the versioning is going to replicate the JavaScript version system, where versions are just sets of features that a runtime can support or not. I am not sure how feature discovery works in wasm, though.
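Feature discovery in wasm is done by probing: you hand `WebAssembly.validate` a tiny module that uses the instruction in question and check whether the engine accepts it (this is the approach taken by the wasm-feature-detect library). A trivial sketch of the mechanism:

```javascript
// WebAssembly.validate checks bytes without compiling or instantiating.
// Feature-detection libraries validate tiny modules that each use one
// new instruction; if validation fails, the feature is unsupported.
const emptyModule = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // "\0asm" magic
  0x01, 0x00, 0x00, 0x00, // version 1
]);
console.log(WebAssembly.validate(emptyModule));               // true
console.log(WebAssembly.validate(new Uint8Array([1, 2, 3]))); // false
```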
The WebAssembly community should really focus more on the developer experience of using it. I recently completed a project where I wrote a compiler¹ targeting it and found the experience to be rather frustrating.
Given that Wasm is designed with formal semantics in mind, why is the DX of using it as a target so bad? I used binaryen.js to emit Wasm in my compiler and didn't get a feeling that I am targeting a well designed instruction set. Maybe this is a criticism of Binaryen and its poor documentation because I liked writing short snippets of Wasm text very much.
Binaryen has a lot of baggage from Wasm early days, when it was still a strict AST. Many newer features are difficult to manipulate in its model.
In our compiler (featured in TFA), we chose to define our own data structure for an abstract representation of Wasm. We then wrote two emitters: one to .wasm (the default, for speed), and one to .wat (to debug our compiler when we get it wrong). It was pretty straightforward, so I think the instruction set is quite nice. [1]
[1] https://github.com/scala-js/scala-js/tree/main/linker/shared...
For what it's worth, I also tried Binaryen from TypeScript and similarly found it frustrating. I switched to using wasm-tools from Rust instead, and have found that to be a vastly better experience.
Isn't wasm-tools for working with Wasm modules? Maybe I'm missing something. I was using Binaryen to compile an AST to WebAssembly text. Also worth mentioning that Binaryen is the official compiler/toolchain for this purpose which is why I expected more from it.
Currently you use Binaryen to build up a representation of a Wasm module, then call emitText to generate a .wat file from that. With wasm-tools you'd do the same thing via the wasm-encoder crate to generate the bytes corresponding to a .wasm file, and then use the wasmprinter crate to convert from the .wasm format to the .wat format. Alternatively, I believe the walrus crate gives you a somewhat higher-level API to do the same thing, but I haven't used it because it's much heavier-weight.
What were your specific pain points? One thing that can be annoying is validation errors. That's one of the reasons that Wizard has a --trace-validation flag that prints a nicely-formatted depiction of the validation algorithm as it works.
Validation errors were a bit of an issue. Especially because Binaryen constructs an internal IR that remains opaque until we emit the Wasm text. I did consider Wizard for my project but settled on Wasmtime because I needed WASI support.
My major pain point was the documentation. The binaryen.js API reference¹ is a list of function signatures. Maybe this makes sense to someone more experienced but I found it hard to understand initially. There are no explanation of what the parameters mean. For example, the following is the only information the reference provides for compiling an `if` statement:
Module#if(condition: Expression, ifTrue: Expression, ifFalse?: Expression): Expression
In contrast, the Wasm instruction reference on MDN² is amazing. WASI again suffers from the same documentation issues. I didn't find any official resource on how to use `fd_write`, for example. Thankfully I found this blog post³.
Wasm feels more inaccessible than other projects. The everyday programmer shouldn't be expected to understand PL research topics when they are trying to do something with it. I understand that's not the intention, but this is what it feels like.
1. https://github.com/WebAssembly/binaryen/wiki/binaryen.js-API
2. https://developer.mozilla.org/en-US/docs/WebAssembly/Referen...
Thanks for bringing Wizard to my attention, the next time I need to validate wasm it's going to save me a ton of time.
Sorry about binaryen.js - those JS/TS bindings could be a lot better, and better documented, but priorities are generally focused on improving optimizations in core Binaryen.
That is, most work in Binaryen is on improving wasm-opt which inputs wasm and outputs wasm, so any toolchain can use it (as opposed to just JS/TS).
But if someone had the time to improve the JS/TS bindings that would be great!
I've tried using binaryen, and I've also tried emitting raw wasm by hand, and the latter was far easier. It only took ~200 lines of wasm-specific code.
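For anyone curious what emitting raw wasm "by hand" looks like, here's a minimal sketch in JS (variable names are mine; the byte values follow the Wasm binary format): a module exporting add(i32, i32) -> i32, built as a literal byte array and instantiated directly.

```javascript
// Hand-encoded Wasm module: exports add(a: i32, b: i32) -> i32.
// Every byte below comes from the Wasm binary format spec.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // magic: "\0asm"
  0x01, 0x00, 0x00, 0x00, // version 1
  // Type section: one type, (i32, i32) -> i32
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,
  // Function section: one function using type 0
  0x03, 0x02, 0x01, 0x00,
  // Export section: export "add" = function 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,
  // Code section: no locals; local.get 0, local.get 1, i32.add, end
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,
]);

const { add } = new WebAssembly.Instance(new WebAssembly.Module(bytes)).exports;
console.log(add(2, 3)); // 5
```

Once you've written the section-header helpers (each section is just an id byte, a LEB128 length, and a payload), the per-instruction code really is on the order of a couple hundred lines.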
I found plain assembly easier to write from scratch (apples and oranges, I know). Most learning materials should leave this tooling out, especially the Rust-based tools. We should be able to write Wasm by hand, just like assembly was taught. Compilers and assembly are separate classes. It's a bad assumption that only compiler devs care about wasm. It's a compilation target, sure, but framing it that way won't broaden knowledge of it.
The SpecTec mentioned in the announcement is really cool. They're using a single source of truth to derive LaTeX, Sphinx docs, Coq definitions for proofs, and an AST schema. Building the language spec in a way that its soundness can be proven and everything derived from one truth in this way seems super useful.
Since it hasn't been mentioned here yet: I wonder if the multiple-memories feature will somehow allow to avoid the extra copy that's currently needed when mapping a WebGPU resource. This mapping is available in a separate ArrayBuffer object which isn't accessible from WASM without calling into JS and then copying from the ArrayBuffer into the WASM heap and back.
Multiple WASM memories and Clang's/LLVM's address space feature sound like they should be able to solve that problem, but I'm not sure if it is as trivial as it sounds...
There has been a discussion (https://github.com/WebAssembly/multi-memory/issues/45) on the toolchain support, but I'm not sure if there have been steps to use multiple address spaces to support Wasm multi-memory in LLVM yet.
I'm just getting horrible segmenting and far-pointer vibes from the whole thing. I've been coding a classic Game Boy game for fun, so fiddling with memory mappings is part of the "fun", but for anything non-constrained I'd hate that.
We buried far pointers with DOS and Win16 for a good reason..
I'd take segment-pointers over copying megabytes of memory around anyday though ;)
It's not much different than dealing with all the alignment rules that are needed when arranging data for the GPU.
"Far" memory is already a central concept in modern GPU APIs. Not all GPUs naturally share memory with the CPU they're running against and handles for resources are a requirement.
I'm a simple man who has simple needs. I want a better and faster way to pass Go structs in and out of the runtime that doesn't mean I have to do a sword dance on a parquet floor wearing thick knit wool socks and use some fragile grafted on solution.
If there can be a solution that works for more languages: great. I mostly want this for Go. If it means there will be some _reasonable_ limitations, that's also fine.
You're doing native code, so the solution is the same as in native code: your languages agree on a representation, normally C's, or you serialize and deserialize. Mixing language runtimes is just not a nice situation to deal with without the languages having first-class support for it, and it should be obvious why.
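To make the "agree on a representation" option concrete, here's a hedged sketch of the host side in JS: a hypothetical C-style struct { id: int32; score: float32 } read and written in linear memory with DataView. Field names, offsets, and the layout are invented for illustration — in practice they'd match whatever your Go (or other) compiler emits.

```javascript
// Host-side view of a struct living in Wasm linear memory.
// Wasm linear memory is always little-endian, hence `true` everywhere.
const memory = new WebAssembly.Memory({ initial: 1 }); // 1 page = 64 KiB
const view = new DataView(memory.buffer);

function writeRecord(offset, id, score) {
  view.setInt32(offset, id, true);        // bytes 0..3
  view.setFloat32(offset + 4, score, true); // bytes 4..7
}

function readRecord(offset) {
  return {
    id: view.getInt32(offset, true),
    score: view.getFloat32(offset + 4, true),
  };
}

writeRecord(0, 42, 1.5);
console.log(readRecord(0)); // { id: 42, score: 1.5 }
```

It works, but this is exactly the hand-maintained, fragile glue being complained about: every struct change has to be mirrored on both sides by hand (or by a code generator).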
I am not sure what you actually want but it sounds like something where the component model (the backbone of WASI) might help.
It defines a framework to allow modules to communicate with structured data types by allowing each module to decide how to map it to and from its linear memory (and in future the runtime GC heap)
In your case you could be able to define WIT interfaces for your go types and have your compiler of choice use it to generate all the relevant glue code
This is exactly why I say WebGL/WebGPU are much better for performance code in the browser than dealing with WebAssembly tooling.
This is the truth, and it's not really much better in non-GCed languages either. (In reality my impression is the GCed wasm side runtimes are even worse).
Some of the least fun JavaScript I have ever written involved manually cleaning up pointers that in C++ would be caught by destructors triggering when the variable falls out of scope. It was enough that my recollections of JNI were more tolerable. (Including for go, on Android, curiously).
Then once you get through it you discover there is some serious per-call overhead, so those structs start growing and growing to avoid as many calls as possible.
I too want wasm to be decent, but to date it is just annoying.
Was this dealing with DOM nodes and older IE versions by chance? That was probably the single biggest reason to wrap all DOM manipulation with JQuery in that it did a decent job of tracking and cleanup for you. IIRC, a lot of the issues came from the DOM and JS being in separate COM areas and the bridge not really tracking connections for both sides.
I need to pass all the user input events to a game engine, and get back the results into the webgl JS runtime side renderer. (The games at https://www.luduxia.com/ )
That would be more of a library than a WASM spec thing, no? I wrote a code generator that does this well for some internal use-cases.
Still looking forward to when they support OpenMP. We have an experimental Solvespace web build which could benefit quite a bit from that.
Open source CAD in the browser.
This is one of the best WASM-based web UIs I've seen! What was the hardest part of getting your desktop build working via Emscripten?
One thing worth noting: everything but the menus and popups is drawn with OpenGL. The text window uses GNU Unifont, which is a bitmap font. All the interactions are handled the same way as in the desktop versions.
I didn't do the WASM port. It was started by our previous lead whitequark, and some more work done by a few others. It's not quite complete, but you can do some 3d modeling. Just might not be able to save...
> Just might not be able to save...
Oops, I did not read that before going ham in the editor. It seems that the files are stored inside the emscripten file system, so they are not lost. I could download my exported 'test.stl' with the following JavaScript code:
var data = FS.readFile('test.stl');
var blob = new Blob([data], { type: 'application/octet-stream' });
var a = document.createElement('a');
a.href = URL.createObjectURL(blob);
a.download = 'test.stl';
a.click();
PRs are welcome. ;-)
Sounds like that would be something on a different abstraction level than WebAssembly itself, or are there some blockers in the platform that preclude a OpenMP implementation targeting WebAssembly?
I just want to say that Solvespace is amazing, I was able to follow a tutorial on YouTube and then design a split keyboard case for CNCing, all pretty quickly a few years ago.
Thank you for maintaining it!
I'm still hype about WASM. This looks like a cool release. I'm running some pretty high traffic WASM plugins on envoy, running some plugins for terminal apps (zellij), and for one of my toy side projects, I'm running a wasm web app (rust leptos).
For 2 of those 3 use cases, i think it's not technically the optimal choice, but i think that future may actually come. Congratulations and nice work to everyone involved!
Custom Page Size didn't make it. It's useful for running WASM on microcontrollers where 64k of minimum RAM isn't feasible. Oh well, next time.
https://github.com/WebAssembly/custom-page-sizes/blob/main/p...
I really hope this spurs AssemblyScript to just port to WASM GC: https://github.com/AssemblyScript/assemblyscript/issues/2808
There's comments in there about waiting for a polyfill, but GC support is widespread enough that they should probably just drop support for non-GC runtimes in a major version.
On gc:
> Wasm GC is low-level as well: a compiler targeting Wasm can declare the memory layout of its runtime data structures in terms of struct and array types, plus unboxed tagged integers, whose allocation and lifetime is then handled by Wasm.
There are already a lot of misunderstandings about wasm, and I fear that people will just go "It supports GC, so we can just export python/java/c#/go etc."
This is not a silver bullet. Cpp, or rust are probably still going to be the way to go.
Relying on the GC features of WASM will require writing code centered around the abstractions for the compiler that generates WASM.
I thought that the purpose of GC in WASM was to allow such higher level languages to be placed there without a bulky runtime also in WASM.
What's the value proposition of WASM GC if not this?
As I understand it, WASM GC provides a number of low level primitives that are managed by the WASM host runtime, which would theoretically allow languages like Go or Python to slim down how much of their own language runtime needs to be packaged into the WASM module.
But those languages still need to carry around some runtime of their own, and I don't think it's obvious how much a given language will benefit.
> But those languages still need to carry around some runtime of their own
Also just there will be a special version of those language runtimes which probably won't be supported in 10 years time. Just like a lot of languages no longer have up to date versions that can run on the common language runtime.
Programming languages with type erasure would have no runtime, just raw program code and the WASM GC. Languages that have runtime types still need a runtime for that functionality.
The Kotlin wasm compiler was basically engineered on top of wasm's GC support. Works fairly OK. As far as I understand it's essentially the same garbage collector that is also used for regular javascript.
> This is not a silver bullet. Cpp, or rust are probably still going to be the way to go.
I don't think that's necessarily true anymore. But as you say, it depends on the compiler you use and how well it utilizes what is there. Jetbrains has big plans with Kotlin and Wasm with e.g. compose multiplatform already supporting it (in addition to IOS native and Android).
Dart is in a more advanced state in that front.
The two main benefits of wasm GC are that compilers can avoid implementing or compiling a full GC but also that the guest and the host can share structured data types
That was the hope but unfortunately the browser can't do anything with WASM GC structs, not even read them!
Wasm-GC are abstractions for compiler writers to enable GC dependent languages to run without shipping a GC to run inside the already GC'd browser/Wasm heap and instead just use the browser GC directly.
So yes, Java,C#,etc will work better (If you look at the horrible mess the current C# WASM export generates it basically ships with an inner platform containing a GC), and no, it will explicitly not speak with "javascript" objects (you can keep references to JS objects, but you cannot call JS methods directly).
C# cannot be compiled to WASM GC yet: https://github.com/WebAssembly/gc/issues/77.
This isn’t true at all in Dart for example which is a WASM-GC language. Literally one of the very main selling points of Dart is you write your code once and it runs anywhere, WASM is just another compile target like x64 or RISC-V or iOS.
Is there a technical reason for the web limit to be 16 GB specifically? Or is it just a round number picked so that the limit could be standardized? Also, has the limit on JS heap size (and ArrayBuffer size) also been relaxed to 16 GB or is it still lower?
Comment was deleted :(
Folks here might be interested in WebAssembly from the Ground Up (https://wasmgroundup.com) — an online book to learn Wasm by building a simple compiler in JavaScript. (Disclaimer: I'm one of the authors.)
So far the book only covers WebAssembly 1.0 though. We'll likely publish an update to cover the (few) new features in 2.0, but WebAssembly 3.0 is a pretty big update. Garbage collection and typed references especially add quite a lot to the spec. But, they also make a lot more things possible, which is great.
The spec itself is also very readable and accessible. Everything in the spec that's described formally is also described in plain language. I can't think of any other spec that's quite as nice to read.
I've never used WASM so apologies in advance but for
>Typed references. The GC extension is built upon a substantial extension to the Wasm type system, which now supports much richer forms of references. Reference types can now describe the exact shape of the referenced heap value, avoiding additional runtime checks that would otherwise be needed to ensure safety.
Why is an assembly-like dialect interacting with concepts that are at a much higher level than it?
To WASM isn't it all just pointers in a big heap like every other assembly?
IIRC initially this was proposed so the runtime (i.e. the browser) could expose its APIs directly to wasm code, without the need for JS wrappers.
Does WASM still have 64 KiB pages? I get why for desktops, but there are use-cases for running WASM on microcontrollers where that's either inconvenient or outright impossible.
The one in particular I have in mind would be to put WASM on graphical calculators, in order to have a more secure alternative to the ASM programs (it's possible nowadays to write in higher-level languages, but the term stuck) that could work across manufacturers. Mid-range has RAM on the order of 256 KiB, but a 32-bit core clocked at more than 200 MHz, so there's plenty of CPU throughput but not a lot of memory to work with.
Sadly, the closest thing there is for that is MicroPython. It's good for what it does, but its performance and capabilities are nowhere near native.
https://github.com/WebAssembly/custom-page-sizes is a proposal championed by my colleague to add single byte granularity to Wasm page sizes, motivated by embedded systems and many other use cases where 64kb is excessive. It is implemented in wasmtime, and a Firefox implementation is in progress.
> Allow Wasm to better target resource-constrained embedded environments, including those with less than 64 KiB memory available.
If it has less than 64 kB of memory how is it going to run a WASM runtime anyway?
And even cheap microcontrollers tend to have more than 64 kB of memory these days. Doesn't seem remotely worth the complexity.
It's not about the whole microcontroller having less than 64kB of memory - it's that each WASM module has a minimum memory size of 64kB, regardless of how much it actually requires. Also, if you need 65kB of memory, you now have to reserve 2 pages, meaning your app now needs 128kB of memory!
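The page math above is easy to check against the JS API (a small sketch; `pagesFor` is my own helper name):

```javascript
// Wasm memory is sized in 64 KiB pages, so a 65 KiB need rounds up to 2 pages.
const PAGE = 64 * 1024;
const pagesFor = (bytes) => Math.ceil(bytes / PAGE);
console.log(pagesFor(65 * 1024)); // 2

// Reserving those 2 pages actually allocates 128 KiB.
const memory = new WebAssembly.Memory({ initial: pagesFor(65 * 1024) });
console.log(memory.buffer.byteLength); // 131072
```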
We're working on WASM for embedded over at atym.io if you're interested.
> If it has less than 64 kB of memory how is it going to run a WASM runtime anyway?
There is WARDuino (https://github.com/TOPLLab/WARDuino and https://dl.acm.org/doi/10.1145/3357390.3361029).
For a runtime that accepts Wasm modules using a large fraction of the functionality, there's going to be a RAM requirement in the few-KiB to few-tens-of-KiB range. There seems to be a branch or fork of Wasm3 for Arduino (https://github.com/wasm3/wasm3-arduino).
If you are willing to do, e.g. Wasm -> AVR AOT compilation, then the runtime can be quite small. That basically implies that compilation does not happen on device, but at deployment time.
> in order to have a more secure alternative to the ASM programs
What security implications are there in graphical calculators in terms of assembler language?
Exam mode, or test mode. It's something that appeared about ten years ago, to ensure that a graphical calculator isn't loaded with cheats or has certain features enabled. The technical reason is that the RESET button no longer clears all of the calculator's memory (think Flash, not RAM) and proctors like to see a flashing LED that tells them everything's fine.
It's a flawed idea and has led to an arms race, where manufacturers lock down their models and jailbreaks break them open. Even NumWorks, who originally had a calculator that was completely unprotected and used to publish all of their source code on GitHub, had to give in and introduce a proprietary kernel and code signing, in order to stop custom firmwares and applications from accessing the LED and stop countries from outlawing their calculators.
Are the cases tamper proof as well? Because it's not like it's hard to open up a calculator and connect the LED somewhere else..
Sad state of affairs. I had no idea this was a thing.
Indeed. I got bit by the programming bug writing utility programs in TI-BASIC on my TI-83. I would've had a very different life trajectory had I not been able to do that.
Why WASM and not, like, java or something?
As in, Java ME?
Unless I'm mistaken, it's been on life support for the past 15 years. It's probably more heavyweight and firmware size/Flash usage is a concern. I don't think performance would be on par with WASM and there are use-cases where that really matters (ray tracing rendering for example). I'm also not sure there are many maintained, open-source implementations for it out there. I've also heard stories that it was quite a mess in practice because it was plagued by bugs and quirks specific to phone models, despite the fact that it was supposed to be a standard.
I'd gladly be proven wrong, but I don't think Java ME has a bright future. Unless you were thinking of something else?
I wasn't thinking of a particular implementation so much as a VM in the abstract. So Java ME failed (I don't know much about it)—but I just didn't even think performance would matter much for a graphing calculator with how cheap hardware is. Those things must be almost entirely profit margin these days. Do kids even use them any more for school?
I don't think the GC in this version has the features required to enable a C# runtime on top of it yet: https://github.com/WebAssembly/gc/issues/77
I wonder what language this GC can actually be used for at this stage?
The article answers your question, there are at least 6 languages: Java, OCaml, Scala, Kotlin, Scheme, and Dart.
OCaml with wasocaml: https://github.com/OCamlPro/wasocaml
Dart for a long time now.
I'm not familiar with all the implementation details of objects in C#, but the list of issues mixes runtime implementation details (object layouts) that should be fairly low effort to work around with actual language/runtime features (references, finalization).
In general though most regular C# code written today _doesn't directly_ use many of the features mentioned apart from references. Libraries and bindings however do so a lot since f.ex. p/invoke isn't half as braindead as JNI was, but targeting the web should really not bring along all these libraries anyhow.
So, making a MSIL runtime that handles most common C# code would map pretty much 1-1 with Wasm-GC, some features like ref's might need some extra shims to emulate behaviour (or compiler specializations to avoid too bad performance penalties by extra object creation).
Regardless of what penalties,etc goes in, the generated code should be able to be far smaller and far less costly compared to the situation today since they won't have to ship both their own GC and implement everything around that.
Part of the problem is you would need to fork the base class libraries and many popular nuget packages to remove any uses of ref/in/out, along with any interior references, spans, etc. The .NET type system has allowed 'interior references' (references into the inside of a GC object) for a long time and it's difficult to emulate those on top of WasmGC, especially if your goal is to do it at low cost.
It's definitely true that you could compile some subset of C# applications to WasmGC but the mismatch with the language as it's existed for a long time is painful.
Like I wrote, references could be managed with shims (basically a getter-setter interface per-referenced-value-type that's then implemented per-target type) and yes it'd be a painfully "expensive" variation, but for 90% of the cases those can probably be rewritten by a compiler to an inlined variation without those performance penalties.
My argument is that, code that goes on the web-side mostly would adhere to the subset, many of the ref-cases can be statically compiled away and what remains is infrequent enough that for most users it'll be a major win compared to avoid lugging along the GC,etc.
Kotlin/WASM is a thing
Still no mention of DOM.
<sets alarm for three years from now>
See you all for WASM 4.0.
That old thing again ;)
Direct DOM access doesn't make any sense as a WASM feature.
It would be at best a web-browser feature which browser vendors need to implement outside of WASM (by defining a standardized C-API which maps to the DOM JS API and exposing that C API directly to WASM via the function import table - but that idea is exactly as horrible in practice as it sounds in theory).
If you need to manipulate the DOM - just do that in JS, calling from WASM into JS is cheap, and JS is surprisingly fast too. Just make sure that the JS code has enough 'meat', e.g. don't call across the WASM/JS boundary for every single DOM method call or property change. While the call itself is fast, the string conversion from the source language specific string representation on the WASM heap into JS strings and back is not free (getting rid of this string marshalling would be the only theoretical advantage of a 'native' WASM DOM API).
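For the curious, that string marshalling looks roughly like this (a sketch — function names and the heap offset are mine; real glue is generated by Emscripten, wasm-bindgen, etc.):

```javascript
const memory = new WebAssembly.Memory({ initial: 1 });

// Pretend the Wasm side wrote "hello" as UTF-8 at offset 16 on its heap.
const heap = new Uint8Array(memory.buffer);
heap.set(new TextEncoder().encode("hello"), 16);

// The glue that a Wasm module imports to pass a (ptr, len) string to JS.
// This copy + decode is the per-call cost described above.
function wasmStringToJs(ptr, len) {
  return new TextDecoder().decode(new Uint8Array(memory.buffer, ptr, len));
}

console.log(wasmStringToJs(16, 5)); // "hello"
```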
WASM is an abbreviation for WebAssembly. If it doesn't have DOM access, WebAssembly is as related to the Web as JavaScript is to Java. A language ecosystem with no I/O capability is as much use as a one-legged man at an arse-kicking party.
Webgl can't access the dom either
That's OpenGL on the web, a graphics API; you can't compare the two
I agree, but my point is that a language can be useful even if it does not have access to everything
Well, arguably the worst thing about WASM is the naming.
It's neither directly related to the web, nor is it an assembly syntax.
It's just another virtual ISA. "Direct DOM access for WASM" makes about as much sense as "direct C++ stdlib access for the x86 instruction set" - none ;)
If you want to compare the situation to x86, direct DOM access for WebAssembly is more akin to the BIOS than C++ stdlib access. If it can't interact with the outside world, it's just a very special toy that you can only use to play a game that isn't any fun, and a good candidate for those 'What's the next COBOL?' discussions that come up every now and then.
It's basically that joke about Haskell, that nobody uses it because it can't have any effect... but for WASM the developers are insisting on making sure it's real.
Oh wow, that really is terrible naming... I always thought WASM was a specification for compiling code into something that runs natively in web browsers—like a web-specific compilation target.. Today I learned.
It's an instruction set architecture that browsers happen to support executing directly.
In German, often things are named for where they came from, like Berliner or Frankfurter. WebAssembly came from the web, so makes sense :)
WASM is to webbrowsers what WebGPU is to webbrowsers. Designed with browsers in mind, but not bound to them.
Comment was deleted :(
Like C, which offloads IO to the standard library?
It's still there - you can still do I/O in C, even if you have to call a library function. In WebAssembly, there's no mechanism for I/O of any sort.
But that's not the (original) argument being made. Just as IO belongs in POSIX and not C, DOM access belongs in some other standard, not WASM
There is, it is just called WASI and it specifies syscalls in a different way.
WebASM is an assembly-like dialect, after all.
...in WASM you also call a function to do IO though? That function is just provided by the host environment via the function import table, but conceptually it's the exact same thing as a Linux syscall, a BIOS int-call or calling into a Windows system DLL.
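As a sketch of that import mechanism (the `env.log` module/field names are my own invention), here's a tiny hand-encoded module that imports a host function and calls it with a constant — structurally the same as a syscall trapping into the host:

```javascript
// Module: imports env.log(i32), exports run() which does log(42).
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // header
  // Types: type 0 = (i32) -> (), type 1 = () -> ()
  0x01, 0x08, 0x02, 0x60, 0x01, 0x7f, 0x00, 0x60, 0x00, 0x00,
  // Imports: function "env" "log" with type 0
  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76, 0x03, 0x6c, 0x6f, 0x67, 0x00, 0x00,
  // Functions: one local function with type 1
  0x03, 0x02, 0x01, 0x01,
  // Exports: "run" = function index 1 (index 0 is the import)
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01,
  // Code: i32.const 42; call 0; end
  0x0a, 0x08, 0x01, 0x06, 0x00, 0x41, 0x2a, 0x10, 0x00, 0x0b,
]);

let seen; // the host observes the "syscall" argument here
const imports = { env: { log: (v) => { seen = v; } } };
new WebAssembly.Instance(new WebAssembly.Module(bytes), imports).exports.run();
console.log(seen); // 42
```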
As long as that function doesn't receive any parameter that refers to data actually in the DOM.
So you either create a very concrete JS library that translates specific WASM data into I/O actions, or one that serializes and deserializes everything all around but can be standardized.
At this point, none of those options are much more capable than Java applets... Or, in fact, if you put a network call between the WASM and the JS, you won't even add much complexity.
> At this point, none of those options are much more capable than Java applets
Java applets allowed to load and call into native DLLs via JNI, so they were definitely much more capable than WASM, but also irresponsibly unsafe.
In your own WASM host implementation you could even implement a dlopen() and dlsym() to load and call into native DLLs, but any WASM host which cares about safety wouldn't allow that (especially web browsers).
Isn’t the whole reason why people want DOM access is so that the JavaScript side doesn’t have any meat to it and they can write their entire web app in Rust/Go/Swift/etc compiled to webasm without performance concerns?
Spoiler: there will be performance concerns.
The bottleneck is in the DOM operations themselves, not javascript. This is the reason virtual-dom approaches exist: it is faster to operate on an intermediate representation in JS than the DOM itself, where even reading an attribute might be costly.
This isn't true. DOM access is fast. DOM manipulation is also fast. The issue is many DOM manipulations happening all at once constantly that trigger redraws. Redrawing the DOM can also be fast if the particular DOM being redrawn is relatively small. React was created because Facebook's DOM was enormous. And they wanted to constantly redraw the screen on every single interaction. So manipulating multiple elements simultaneously caused their rendering to be slow. So they basically found a way to package them all into a single redraw, making it seem faster.
> without performance concerns?
WASM isn't going to magically make the DOM go faster. DOM will still be just as slow as it is with Javascript driving it.
WASM is great for heavy-lifting, like implementing FFMPEG in the browser. DOM is still going to be something people (questionably) complain about even if WASM had direct access to it. And WASM isn't only used in the browser, it's also running back-end workloads too where there is no DOM, so a lot of use cases for WASM are already not using DOM at all.
Comment was deleted :(
> Direct DOM access doesn't make any sense as a WASM feature.
…proceeds to explain why it does make sense…
It's not a WASM feature, but would be a web browser feature outside the WASM standard.
E.g. the "DOM peeps" would need to make it happen, not the "WASM peeps".
But that would be a massive undertaking for minimal benefit. There's much lower-hanging fruit in the web-API world to fix (like finally building a proper audio streaming API, because WebAudio is a frigging clusterf*ck). And if any web API would benefit from even a minimal reduction of JS <=> WASM marshalling overhead, it would be WebGL2 and WebGPU, not the DOM. But even for WebGL2 and WebGPU, the cost inside the browser implementation of those APIs is much higher than the WASM <=> JS marshalling overhead.
so the feature does make sense, it’s just the implementation crosses a Conway’s law boundary
(I also want this feature, to drive DOM mutations from an effect system)
Sorta, in practical terms once WASI is ready the browsers can define a DOM world in WIT and maybe support components using it
> WebAudio is a frigging clusterf*ck
Out of curiosity, what issues do people have with WebAudio since audio worklets became widely supported?
Audio worklets are a step into the right direction for streaming audio, but for a simple audio-streaming API you don't need the complex node-based architecture of WebAudio, you just need a JS callback which writes samples into an ArrayBuffer.
With audio worklets this callback runs in a separate audio thread, and with the (deprecated) ScriptProcessorNode this callback runs on the main thread.
E.g. a "good" web audio API replacement would only offer a callback that runs in a separate audio thread plus a convenience function call which allows to push small sample-packets from the main thread to the audio thread (at the cost of increased latency to avoid starving) - this push-function would basically be the replacement for ScriptProcessorNode.
In general, see here for a pretty good overview why WebAudio as a whole is a badly designed API: https://blog.mecheye.net/2017/09/i-dont-know-who-the-web-aud...
TL;DR: WebAudio's original design requires a lot of complexity and implementation effort for use cases that are not relevant to most of its users - and all that effort could be used instead to implement a much smaller and focused web audio API that covers actually relevant use cases.
Specifically for audio worklets: those mainly make sense when the entire audio stream generation can happen on the audio thread.
But if you need to generate audio on the main thread (such as in emulators: https://floooh.github.io/tiny8bit/), unless you want to run the entire emulator in the audio thread, you need an efficient way to communicate the audio stream which is generated on the main thread to the audio thread. For this you ideally need shared-memory multithreading, and for this you need control over the COOP/COEP response headers, and for this you need control over the web server configuration (which excludes a lot of popular web hosters, like Github Pages).
For this situation (generate sample stream on browser thread and communicate that to the audio thread) you're basically re-implementing ScriptProcessorNode, just less efficiently and limited by COOP/COEP. So at the very least ScriptProcessorNode should be un-deprecated.
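A minimal sketch of that main-thread-to-audio-thread channel — a single-producer single-consumer ring buffer over SharedArrayBuffers (all names are mine; in a real app the consumer side runs inside an AudioWorkletProcessor's process() callback, and SharedArrayBuffer requires the COOP/COEP headers mentioned above):

```javascript
const CAPACITY = 1024; // samples
const state = new Int32Array(new SharedArrayBuffer(8));        // [readPos, writePos]
const samples = new Float32Array(new SharedArrayBuffer(CAPACITY * 4));

// Producer: main thread (e.g. the emulator generating audio).
function push(chunk) {
  let w = Atomics.load(state, 1);
  for (const s of chunk) samples[w++ % CAPACITY] = s;
  Atomics.store(state, 1, w); // publish only after the samples are written
}

// Consumer: audio thread, called once per render quantum.
function pull(out) {
  let r = Atomics.load(state, 0);
  const w = Atomics.load(state, 1);
  let n = 0;
  while (r < w && n < out.length) out[n++] = samples[r++ % CAPACITY];
  Atomics.store(state, 0, r);
  return n; // samples actually read; fewer than out.length means starvation
}

push([0.1, 0.2, 0.3]);
const out = new Float32Array(128);
console.log(pull(out)); // 3
```

Note this sketch doesn't guard against the writer lapping the reader — which is precisely the kind of bookkeeping ScriptProcessorNode used to do for you.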
> If you need to manipulate the DOM - just do that in JS, calling from WASM into JS is cheap, and JS is surprisingly fast too.
From the point of view of someone who doesn't do web development at all, and to whom JS seems entirely cryptic: This argument is weird. Why is this specific (seemingly extremely useful!) "web thing" guarded by a specific language? Why would something with the generality and wide scope of WASM relegate that specific useful thing to a particular language? A language that, in the context of what WASM wants to do in making the web "just another platform", is pretty niche (for any non-web-person)?
For me, as a non-web-person, the big allure of WASM is the browser as "just another platform". The one web-specific thing that seems sensible to keep is the DOM. But if manipulating that requires learning web-specific languages, then so be it, I'll just grab a canvas and paint everything myself. I think we give up something if we start going that route.
Think of it as traditional FFI (foreign function interface) situation.
Many important libraries have been written in C and only come with a C API. To use those libraries in non-C languages (such as Java) you need a mechanism to call from Java into C APIs, and most non-C language have that feature (e.g. for Java this was called JNI but has now been replaced by this: https://docs.oracle.com/en/java/javase/21/core/foreign-funct...), e.g. C APIs are a sort of lingua franca of the computing world.
The DOM is the same thing as those C libraries, an important library that's only available with an API for a single language, but this language is Javascript instead of C.
To use such a JS library API from a non-JS language you need an FFI mechanism quite similar to the C FFI that's been implemented in most native programming languages. Being able to call efficiently back and forth between WASM and JS is this FFI feature, but you need some minimal JS glue code for marshalling complex arguments between the WASM and JS side (but you also need to do that in native scenarios, for instance you can't directly pass a Java string into a C API).
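The marshalling in the other direction looks roughly like this (a sketch: the bump allocator stands in for a real exported malloc, and all names are mine) — the analogue of converting a Java string before handing it to a C API:

```javascript
const memory = new WebAssembly.Memory({ initial: 1 });

// Stand-in for an allocator the Wasm module would normally export.
let bump = 8;
function alloc(n) { const p = bump; bump += n; return p; }

// Copy a JS string into the Wasm heap; (ptr, len) is what crosses the FFI.
function jsStringToWasm(str) {
  const utf8 = new TextEncoder().encode(str);
  const ptr = alloc(utf8.length);
  new Uint8Array(memory.buffer).set(utf8, ptr);
  return { ptr, len: utf8.length };
}

const { ptr, len } = jsStringToWasm("héllo");
console.log(len); // 6: 'é' is two bytes in UTF-8
const roundTrip = new TextDecoder().decode(new Uint8Array(memory.buffer, ptr, len));
console.log(roundTrip); // "héllo"
```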
DOM API interactions with JS are already described with an IDL (Web IDL); they're not exactly native JS APIs.
Wow, I've used JNI many times, but many years ago. It is a bit painful. Cool to see it's been replaced by FFM, didn't know that existed.
Sure, but if someone came at the C-centric ecosystem today and said "let's do the work to make it so that any language can play in this world", then surely "just FFI through C" would be considered rather underwhelming?
Comment was deleted :(
Well that's what the WASM Component Model set out to solve, some sort of next-gen FFI standard that goes beyond C APIs:
https://component-model.bytecodealliance.org/
In my opinion it's an overengineered boondoggle, since "C APIs ought to be good enough for anything", but maybe something useful will eventually come out of it. So far it looks like it mostly replaces the idea of C APIs as lingua franca with "a random collection of Rust stdlib types" as lingua franca, which at least to me sounds utterly uninteresting.
While it can function as an FFI (it is indeed the basis of WASI) the component model is more about composability and interfaces
The practical argument is that, while the DOM API was initially designed to be language agnostic (with more of an eye to Java/C++ than JavaScript), that hasn't been the case for a while now: many web APIs use JavaScript data types and interfaces (e.g. async iterators) that do not map well to wasm.
The good news is that you can use very minimal glue code with just a few functions to do most JavaScript operations
> Direct DOM access doesn't make any sense as a WASM feature.
I disagree. The idea of doing DOM manipulation in a language that is not Javascript was *the main reason* I was ever excited about WASM.
You can't even run wasm in browsers without JavaScript, it is not a supported <script /> type
Anyway, I am quite sure that you could almost completely get rid of JS glue code by importing the static Reflect methods and a few functions like (a,b)=>a+b for the various operators; add a single array/object of references to hold refs, and you can do pretty much everything from wasm by mixing imported calls.
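A sketch of what that scheme might look like — all the import names here are hypothetical, and the handle table is the "single array to hold refs" the comment mentions:

```javascript
// Driving JS purely through generic imports. JS keeps the real values;
// wasm only ever sees integer handles into `refs`.
const refs = [globalThis];                 // handle 0 = the global object
const intern = (v) => refs.push(v) - 1;    // store a value, hand wasm an index

const imports = {
  js: {
    // generic property read: Reflect.get on two handles, result re-interned
    get: (objH, keyH) => intern(Reflect.get(refs[objH], refs[keyH])),
    // generic call: Reflect.apply (in real wasm the argument list would
    // itself live behind a handle rather than a JS array)
    call: (fnH, thisH, argHs) =>
      intern(Reflect.apply(refs[fnH], refs[thisH], argHs.map((h) => refs[h]))),
    // operators must be imported as tiny functions, e.g. "+"
    add: (aH, bH) => intern(refs[aH] + refs[bH]),
  },
};

// What a wasm module could do with nothing but those imports:
const mathH = imports.js.get(0, intern("Math"));
const piH = imports.js.get(mathH, intern("PI"));
console.log(refs[piH]);                    // 3.141592653589793
```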
> The idea of doing DOM manipulation in a language that is not Javascript
...is already possible, see for instance:
https://rustwasm.github.io/docs/wasm-bindgen/examples/dom.ht...
You don't need to write Javascript to access the DOM. Such bindings still call JS under the hood of course to access the DOM API, but that's an implementation detail which isn't really important for the library user.
While technically possible, the calls to javascript slow things down, and you're never going to get the performance of just writing javascript in the first place, much less the performance of skipping javascript altogether.
The calls to JS are quite cheap; trusting the diagrams in here, it's about 10 clock cycles per call on a 2 GHz CPU (i.e. roughly 200 million calls per second):
https://hacks.mozilla.org/2018/10/calls-between-javascript-a...
The only thing that might be expensive is translating string data from the language-specific string representation on the WASM heap into the JS string objects expected by the DOM API. But this same problem would need to be solved in a language-portable way for any native WASM-DOM-API, because WASM has no concept of a 'string' and languages have different opinions about what a string looks like in memory.
But even then, the DOM is an inherently slow API starting with the string-heavy API design, the bit of overhead in the JS shim won't suddenly turn the DOM into a lightweight and fast rendering system.
E.g. it's a bit absurd to talk about performance and the DOM in the same sentence IMHO ;)
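If you want to poke at the boundary-crossing overhead yourself, a hand-assembled module is enough — no toolchain required. Absolute timings vary a lot by engine and hardware, so treat this as a sketch for experimentation (the ~10-cycle figure comes from the Mozilla post above, not from this code):

```javascript
// Tiny wasm module exporting add(i32, i32) -> i32, assembled by hand.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,          // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,    // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                                  // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,    // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                      // local.get 0; local.get 1; i32.add
]);
const { add } = new WebAssembly.Instance(new WebAssembly.Module(bytes)).exports;

// Hammer the JS<->wasm boundary and see what your engine does with it.
const N = 1_000_000;
const t0 = performance.now();
let acc = 0;
for (let i = 0; i < N; i++) acc = add(acc, 1);
console.log(`${N} boundary crossings in ${(performance.now() - t0).toFixed(1)} ms`);
```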
Wasm could have had a concept of a string. I frequently mourn the rejection of the imo excellent stringref proposal. https://github.com/WebAssembly/stringref
wasm 3 has the JavaScript string built-in now according to TFA
That's a different thing. There is still no string type. JS string built-ins are a kludge for web targets only.
If your language and its compiler use JS String Builtins (part of Wasm 3.0) for their strings, then there is no cost to give them to JS and the DOM.
Fair enough, thanks for sharing!
Dart also has this and as you can see in the examples in the README the APIs look exactly the same as what you are used to in JavaScript but now are fully typed and your code compiles to WASM.
You don't get it.
... maybe you don't get it?
_Telling the browser how you want the DOM manipulated_ isn't the expensive part. You can do this just fine with Javascript. The browser _actually redrawing after applying the DOM changes_ is the expensive part and won't be any cheaper if the signal originated from WASM.
Don't get what, exactly?
Wasm doesn't specify any I/O facilities at all. DOM is no different. There's a strict host/guest boundary and anything interacting with the outside world enters Wasm through an import granted by the host. On the web, the host is the JS runtime.
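That host/guest boundary is easy to see in miniature. The hand-assembled module below imports exactly one capability, env.log, and can touch nothing else; its exported run() calls that import with 42:

```javascript
// A module whose only window to the outside world is one host-granted import.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,                // magic + version
  0x01, 0x08, 0x02, 0x60, 0x01, 0x7f, 0x00, 0x60, 0x00, 0x00,    // types: (i32)->(), ()->()
  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76,
  0x03, 0x6c, 0x6f, 0x67, 0x00, 0x00,                            // import env.log (func, type 0)
  0x03, 0x02, 0x01, 0x01,                                        // func 1 uses type 1
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01,          // export "run" (func 1)
  0x0a, 0x08, 0x01, 0x06, 0x00, 0x41, 0x2a, 0x10, 0x00, 0x0b,    // body: i32.const 42; call $log
]);

const seen = [];
const instance = new WebAssembly.Instance(
  new WebAssembly.Module(bytes),
  { env: { log: (v) => seen.push(v) } }   // the only capability the host grants
);
instance.exports.run();
console.log(seen);                        // [ 42 ]
```

Swap the `log` implementation and the guest's behaviour changes without it ever knowing — that's the whole I/O model.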
Don't sleep on the Rust toolchain for this! You can have DOM-via-Wasm today, the tools generate all the glue for you and the overhead isn't that bad, either.
Yep and with better raw DOM performance than React:
https://krausest.github.io/js-framework-benchmark/current.ht...
Got a rec? The reply to you is talking about a component framework, rather than actual vanilla html/css access. I haven't seen anything, personally, that allows real-time, direct DOM interaction.
https://github.com/wasm-bindgen/wasm-bindgen is the tool for raw access to the DOM APIs.
Ah, yes! This seems like exactly the kind of minimalist exposure I was trying to find, to avoid the emscripten dependency. Thanks!
Wasm 3.0, with its GC and exception support, contains everything you need. The rest is up to the source language to deal with. For example, in Scala.js [1], which is mentioned in the article, you can use the full extent of JavaScript interop to call DOM methods right from inside your Scala code. The compiler does the rest, transparently bridging what needs to be.
DOM wouldn't be part of WASM, it'd be part of the host.
If there ever is a WASM-native DOM API, WASM GC should help a lot with that.
End-users DON'T want developers' apps running in the browser to have freedom to access everything on the end-users' machines. Not having direct dom access is a security feature, as much as an MMU is. Please don't ask for this.
Wasm is sandboxed at pretty much the same security boundary as js there is nothing a DOM-enabled wasm module could do (security/privacy wise) that JavaScript can't already do
It's explicitly negated from the Wasm-GC spec, too damn much security issue surface that keeps all of the browser makers solidly in the "do not want to touch" camp.
I wish for the same, mate. Please, wasm team: I am more than happy to wait 3 years if you can guarantee that you are looking into the best possible way of integrating DOM manipulation.
I sometimes feel like JS is too magic-y; I just want plain, boring Go and to write some DOM functions, preferably without using htmx.
Please give us more freedom! This might be the most requested feature, and it was how I came across wasm in the first place (a leptos video from some youtuber I think, sorry if I forgot).
I was even trying to be charitable and read the feature list for elements that would thin down a third party DOM access layer, but other than the string changes I’m just not seeing it. That’s not enough forward progress.
WASM is just an extremely expensive toy for browsers until it supports DOM access.
It's a chicken egg situation. The people already using WASM either don't care about the DOM or had realized long ago that going through a JS shim works just as well, the rest just complain time and time again that WASM has no DOM access whenever there's a HN thread about WASM, but usually don't even use WASM for anything.
Especially if there was major momentum of people writing their web applications with wasm, there would be a reason to eventually get that massive undertaking of creating the ABI for that working. Then all those applications could just recompile to make use of this new hypothetically faster API. The bigger issue here is that it just doesn't make any sense to write frontend code in rust or go or whatever in the first place.
The whole js ecosystem evolved to become a damn good environment to write UIs with, people don't know the massive complexity this environment evolved to solve over decades.
My old team shipped a web port of our 3D modeling software back in 2017. The entire engine is the same as the desktop app, written in C++, and compiled to wasm.
Wasm is not now and will never be a magic "press here to replace JS with a new language" button. But it works really well for bringing systems software into a web environment.
The three main camps of wasm use cases are ( in no particular order )
1. Non browser application (lightweight cloud, plugins, sandboxing)
2. Performance kernels (like compiling a game/rendering engine or AI stuff)
3. Compiling js-like applications from other languages (eg blazor wasm and others)
The only case where DOM access would be useful is 3 and even there 90% of the gains are already available from the JS-strings proposal to avoid copying+reencoding.
Direct DOM access is otherwise mostly a red herring
If you give WASM access to everything, you've defeated the main reason it exists. Ambient authority is the reason we need WASM in the first place.
For server applications true, but the reason browsers made wasm was because they needed a safer/portable/standardizable alternative to PNaCl
Many users demand DOM access but how about starting with browser(js) access to wasm gc objects? I understand that storing the whole thing might interfere with GC, but just reading a property? If I define a well structured struct in wasm I should be able to read it from the host without creating a special read_my_object_xyz_property_abc() glue function, no?
Why were exceptions difficult without built-in support? Strange to hear that.
Is the component model work (https://component-model.bytecodealliance.org/) related to the 3.0 release in any way?
No, the component model proposal is not part of the Wasm 3.0 release. Proposals only make it into a Wasm point release once they reach stage 5, and the component model is still under development and so is not trying to move through the phases yet.
Unlike any of the proposals which became part of Wasm 3.0, the component model does not make any changes to the core Wasm module encoding or its semantics. Instead, it's designed as a new encoding container which contains core Wasm modules, and adds extra information alongside each module describing its interface types and how to instantiate and link those modules. By keeping all of these additions outside of core Wasm, we can build implementations out of any plain old Wasm engine, plus extra code that instantiates and links those modules and converts between the core Wasm ABI and higher-level interface types. The Jco project https://github.com/bytecodealliance/jco does exactly that, using the common JS interface exposed by every web engine's Wasm implementation. So we can ship the component model on the web without web engines putting in any work of their own, which isn't possible with proposals that add to or change core Wasm.
Thanks for clarifying.
Has anyone benchmarked 64bit memory on the current implementations? There's the potential for performance regressions there because they could exploit the larger address space of 64bit hosts to completely elide bounds checks when running 32bit WASM code, but that doesn't work if the WASM address space is also 64bit.
> WebAssembly apps tend to run slower in 64-bit mode than they do in 32-bit mode. This performance penalty depends on the workload, but it can range from just 10% to over 100%—a 2x slowdown just from changing your pointer size.
> This is not simply due to a lack of optimization. Instead, the performance of Memory64 is restricted by hardware, operating systems, and the design of WebAssembly itself.
https://spidermonkey.dev/blog/2025/01/15/is-memory64-actuall...
Oof, that's unfortunate. I'm sure there's good reasons why WASM works like it does but the requirement for OOB to immediately abort the program seems rough for performance, as opposed to letting implementations handle it silently without branching (e.g. by masking the high bits of pointers so OOB wraps around).
Is this WASM specific though? Some apps suffer in performance when they move to 64-bit in general, due to larger pointers without needing or sufficiently exploiting 64-bit data types, so the increased memory bandwidth and cache pressure slow them down (one of the reasons many people like a 32-bit address space with a 64-bit data model).
The blog post explains that it's more than that. Bounds checking, in particular, costs more for reasons having to do with browser implementations, for example, rather than for architectural reasons.
This looks like a great release! Lots of stuff people have wanted for a long time in here.
Tail calls. Tail calls!
The tail call instructions (return_call and friends) were crucial for compiling Scheme. Safari had a bug in their validator for these instructions but the fix shipped in their most recent release so now you can use Wasm tail calls to their fullest in all major browsers.
Support for garbage collection is really nice to see. Previously, it’s been very difficult to do garbage collection in WASM because you don’t have direct access to the stack, so traditional approaches like stack scanning haven’t been feasible.
My dream is we get pluggable execution runtimes (wasm, jvm, LLVM IR?) with pluggable display tech (svg, html, opengl) and each tech JITs itself on your platform.
While we do need a default “text mode” (html), js is not the answer to a common language and is holding everything back.
I'd really like browser vendors exposing to wasm a graphics api, input and accessibility. So we can skip the dom and start making actual apps
I'm both excited and terribly disappointed.
I feel like the webdev ecosystem after the fall of Flash took a wrong turn and missed a huge opportunity. For whatever reason people opted for both sticking with JS and backward compatibility. I can clearly see an alternative universe where you have a browser that accepts both a <script> with JS and another, modern language, and it's up to the developer to choose whether they want to transpile/recompile the code for backward compatibility or go all in on new features.
I mean Typescript is cool, but it's reinventing the wheel, we already had that with ES4 over 15 years ago and WASM is still a second class citizen for hackers.
This. And it feels so frustrating, people keep defending w3c... I'm feeling let down by specs that instead of giving me a graphic, accessibility and input api, tells me where I should use <article> vs <aside> or whatever useless stuff
I'm glad I'm not alone with these feeling.
I was there when Macromedia took JS as their scripting language for animating GIFs. People soon started doing silly things and everyone realized it was not fit for the task. We had AS2 which added some syntactic sugar and was the exact same thing as early Typescript - an escape hatch that allowed you to use OOP while the resulting code was still just plain old JS.
But meanwhile Macromedia worked on a whole new thing called AS3. It was codified as ES4 and it's the reason why JS had all these "private", "abstract" keywords reserved.
Heck, we even had redtamarin for running AS as console! And there was Alchemy which is grandfather of WASM.
We were that close to having a compiled JS++.
But times were different, we were still in the middle of the browser wars, and the standard was eventually scrapped because no one was interested in JS at the time.
And then Apple decided to kill Flash.
And for some reason the community decided to burn the library of Alexandria and spend the next 10 years on reinventing the wheel.
Sure, V8 is a world wonder, no question about it, the fact that JS can run that fast is just amazing. And people finally wrapped their head around TS and realized that types are useful. Wonderful. But I really think that we've lost 10 years running in circle and producing smart looking docs while tech is slowly getting back where it was 15 years ago.
> 64-bit address space. Memories and tables can now be declared to use i64 as their address type instead of just i32.
Could be nitpicking but in the PDF (https://webassembly.github.io/spec/core/_download/WebAssembl...), there's a passage that says:
> 32-bit integers also serve as Booleans and as memory addresses. (under 1.2.1 Concepts)
64-bit is not mentioned there, though. Could this be an oversight, or did I misunderstand?
These features look compelling. When will they all be available in mainstream browsers?
One bright point here is that the WASM changes may force v8 to improve its IPC by having a feature that Bun gets from JSC, which is passing strings across isolate boundaries.
IPC overhead is so bad in NodeJS that most people don’t talk about it because the workarounds are just impossibly high maintenance. We reach straight for RPC instead, and downplay the stupidity of the entire situation. Kind of reminiscent of the Ruby community, which is perhaps not surprising given the pedigree of so many important node modules (written by ex Rails devs).
Wasm 3.0 looks like a significant step forward. The addition of 64-bit address space and improved reference typing really expands the platform’s capabilities. Integration with WASI makes access to system resources more predictable, but asynchronous operations and JS interop remain key pain points. Overall, this release makes Wasm more viable not just in the browser, but also for server-side and embedded use cases.
Oh no, right after I started writing a binary decoder for 2.0. Does anybody know how much this changes things as far as a decoder is concerned?
Wasm only gets additive changes - the binary format can't change in a way that breaks any previously existing programs, because that would break the Web. So, you just have to add more opcodes to your implementation.
Awesome, thanks!
It introduces new types (structs and arrays), a new section for tags, and several dozen instructions (first-class functions, GC, tail calls, and exception handling). It generalizes to multiple memories and tables, as well as adding 64-bit memories. The binary format changes aren't too bad, but it's a fairly big semantic addition.
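For what it's worth, the top-level layout a decoder walks is stable across versions: section id, LEB128 size, payload — so a decoder can skip any section it doesn't yet understand. A minimal walker, shown against a tiny hand-assembled module (real files just have more and larger sections):

```javascript
// Hand-assembled module: type, function, export, and code sections.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,          // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,    // type section (id 1)
  0x03, 0x02, 0x01, 0x00,                                  // function section (id 3)
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,    // export section (id 7)
  0x0a, 0x09, 0x01, 0x07, 0x00,
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                      // code section (id 10)
]);

function* sections(buf) {
  let p = 8;                               // skip magic + version
  while (p < buf.length) {
    const id = buf[p++];
    let size = 0, shift = 0, b;            // LEB128-encoded payload size
    do { b = buf[p++]; size |= (b & 0x7f) << shift; shift += 7; } while (b & 0x80);
    yield { id, payload: buf.subarray(p, p + size) };
    p += size;                             // sections are self-delimiting
  }
}

console.log([...sections(bytes)].map((s) => s.id));   // [ 1, 3, 7, 10 ]
```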
This looks like a huge release for C# and Java I guess. Half of the features are useful elements they no longer have to polyfill.
Kinda off-topic but do you see any value in a (rust) library that does only the binary decoding part + validation?
This is so exciting! Been waiting for this for years, very interested to see what people can do with this
64-bit addr space and deterministic profiles ftw!
Really nice new set of features.
What cool new features could this enable in Pyodide?
Does anyone know whether the exception handling implementation supports restartable exceptions like Common Lisp's and Scheme's?
Speaking for CL, it seems so for me.
The whole magic about CL's condition system is to keep on executing code in the context of a given condition instead of immediately unwinding the stack, and this can be done if you control code generation.
Everything else necessary, including dynamic variables, can be implemented on top of a sane enough language with dynamic memory management - see https://github.com/phoe/cafe-latte for a whole condition system implemented in Java. You could probably reimplement a lot of this in WASM, which now has a unwind-to-this-location primitive.
Also see https://raw.githubusercontent.com/phoe-trash/meetings/master... for an earlier presentation of mine on the topic. "We need means of unwinding and «finally» blocks" is the key here.
No, that functionality would fall under the stack-switching proposal, which builds on the tags of Wasm exception handling.
Having wasm 3.0 and a project named wasm3 which doesn't seem to support wasm 3.0 is sure going to get confusing!
Do WASM apps have now direct access to DOM?
Can QuickJS run in WASM3.0 with deterministic profile?
That would be pretty rad!
Comment was deleted :(
Doesn't look like they took anything out.
What do we need it for?
But looks like you still cannot open a raw TCP or UDP socket? Who needs this internet network thing huh?
I appreciate it is a potential security hole, but at least make it behind a flag or something so it can be turned on.
Opening a socket would fall under WASI[1].
Great work. WASM will eat the world :D.
Comment was deleted :(
[dead]
[dead]
[dead]
> GC and Exception handling
This was not necessary.. what a mistake, especially EH..
Not including GC would have been a mistake. Having to carry a complete garbage collector with every program, especially on platforms like browsers where excellent ones already exist, would have been a waste.
It's also important because sometimes you want a WebAssembly instance to hold a reference to a GC object from Javascript, such as a DOM object, or be able to return a similar GC object back to Javascript or to another separate WebAssembly instance. Doing the first part alone is easy to do with a little bit of JS code (make the JS code hold a reference to the GC object, give the Wasm an id that corresponds to it, and let the Wasm import some custom JS functions that can operate on that id), but it's not composable in a way that lets the rest of those tasks work in a general way.
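The id-based workaround described above might look something like this — the names are hypothetical, and the text is passed as a plain JS string for brevity (real glue would also decode it out of wasm memory):

```javascript
// JS keeps the real references; wasm only ever sees integer ids.
const table = new Map();
let nextId = 1;

const env = {
  // host-side: hand wasm an id for an object it can't hold directly
  retain: (obj) => { const id = nextId++; table.set(id, obj); return id; },
  // wasm imports this and calls it with an id instead of a DOM reference
  setText: (id, text) => { table.get(id).textContent = text; },
  release: (id) => { table.delete(id); },   // manual lifetime management
};

// Stand-in for a DOM element, so the sketch runs outside a browser:
const el = { textContent: "" };
const id = env.retain(el);
env.setText(id, "hello");
console.log(el.textContent);               // hello
env.release(id);
```

The non-composable part is release(): every module needs its own table and manual lifetimes, which is exactly what shared GC references make unnecessary.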
Doesn't every WASM program have to carry its own malloc/free today?
Yes, every wasm program that uses linear memory (which includes all those created by llvm toolchains) must ship with its own allocator. You only get to use the wasm GC provided allocator if your program is using the gc types, which can’t be stored in a linear memory.
Yes, but Emscripten comes with a minimal allocator that's good enough for most C code (e.g. code with low alloc/free frequency) and only adds minimal size overhead:
https://github.com/emscripten-core/emscripten/blob/main/syst...
how is that different from compiling against a traditional CPU which also doesn't have a built in GC? i mean those programs that need a GC already have one. so what is the benefit of including one on the "CPU"?
The fact that a minimum size go program is a few megabytes in size is acceptable in most places in 2025. If it was shipped over the wire for every run time instead of a single install time download, that would be a different story.
Garbage collection is a small part of the go run time, but it's not insignificant.
I will be interested to see if Go is able to make use of this GC and, if so, how much that shrinks wasm binaries.
https://github.com/golang/go/issues/63904
Skimming this issue, it seems like they weren't expecting to be able to use this GC. I know C# couldn't either, at least based on an earlier state of the proposal.
this thread confirms my suspicions. some languages may benefit from a built in GC, but those languages probably use a generic GC to begin with. wheras any language that has a highly optimized GC for their own needs won't be able to use this one.
The "CPU" in every browser already has one. This lets garbage-collected languages use that one. That's an enormous savings in code size and development effort.
i don't see the reduced development effort, after all, unless the language is only running on webassembly i still need to implement my own GC for other CPUs.
so most GC-languages being ported to webassembly already have a GC, so what is the benefit of using a provided GC then?
on the other hand i see GC as a feature that could become part of any modern CPU. then the benefit would be large, as any language could use it and wouldn't have to implement their own at all anymore.
Aside from code size, the primary benefit on the Web is that the GC provided to wasm is the same one as the outer JavaScript engine's, so an object from wasm can stay alive or get collected based on whether JS keeps references to it. So it's not really about providing a GC for a single wasm module (program), it's about participating in one cooperatively with other programs.
now that would make a lot of sense, thanks
Writing a GC that performs well often involves making decisions that are tightly coupled to the processor architecture and operating system as well as the language implementation's memory representations for objects. Using a GC that is already present can solve that problem.
> i don't see the reduced development effort, after all, unless the language is only running on webassembly i still need to implement my own GC for other CPUs.
I'd think porting an existing GC to WASM is more effort than using WASM's GC for a GC'd language?
i don't think so. first of all, you don't rewrite your code for every CPU but you just adapt some specific things. most code is just compiled for the new architecture and runs. second, those languages that are already running on wasm have already done the work. so at best new languages who haven't been ported yet will get any benefit from a reduced porting effort.
I think it's a "you don't pay for it if you don't use it" thing, so I guess it's fine. It won't affect me compiling my C or Zig code to WASM for instance since those languages have neither garbage collection nor exceptions.
It's kinda nice to have 1st class exception support. C++ exceptions barely work in Emscripten right now. Part of the problem is that you can't branch to arbitrary labels in WASM.
WASM isn't a language, so them adding stuff like this serves to increase performance and standardize rather than forcing compilers to emulate common functionality.
This allows more languages to compile to it. You don't need to use these features if you don't want to.
Besides making it much nicer for GC'd languages to target WASM, an important aspect is that it also allows cross-language GC.
Whereas with a manual GC, if you had a JS object holding a reference to an object on your custom heap, and your heap holds a reference to that JS object (with indirections sprinkled in to taste) but nothing else references it, that'd result in a permanent memory leak, as both heaps would have to consider everything held by the other as GC roots; so you'd still be forced to manually avoid cycles despite only ever using GC'd languages. Wasm GC entirely avoids this problem.
There's a joke in Brazil saying "Brazil is the country of the future and will always be that. It will never be the country of the present".
WASM is and will always be the greatest technology of the future. It will never be the greatest technology of the present.
WASM enables some pretty cool apps in the present, though. I think Figma heads the list.
We use it heavily here at Ditto. It's fantastic.
Wasm is one of the best solutions for running untrusted code. Alternative are more complicated or have limited language choices.
steve job's ghost will prevent wasm adoption.
> steve job's ghost will prevent wasm adoption.
https://webassembly.org/features/
That isn't updated for Safari 26, but by that table Safari 18 is only missing 3 standardized features that Chrome supports, with a fourth that is disabled by default. So what's the point of your comment? Just to make noise and express your ignorance?
Historically speaking, apple has consistently limited web app functionality on iOS since 2008. I think we would be much further ahead if it wasn't for Apple’s policies under his leadership.
Apple took over distribution to prioritize its App Store cut, which crippled/slowed open-web PWA and WASM adoption.
Sure, and that's why asm.js (regular JS with special semantics) and later Wasm (bytecode translatable to JS) were so brilliant. They already worked on Safari, so Apple had the option of either:
A: look slow compared to other engines that supported it
B: implement it
Now, stuff like the exception handling stuff and tail calls probably aren't shimmable via JS, but at this point they don't gain much from being obstructionists.
What ignorance? Safari doesn't support the most important additions:
- memory64
- multiple memories
- JSPI (!!)
I recently explored the possibility of optimizing qemu-wasm in browser[0].. and it turns out that the most important features were those Safari doesn't implement.
As a _user_ JSPI looks neat; however, as a compiler writer, JSPI looks like a horrible hairball of security issues and performance gotchas that degrade generated WASM code performance.
Say you have a WASM module, straight line code that builds a stack, runs quickly because apart from overflow checks it can just truck on.
Now add this JS-Promise thing into the mix:
A: How does a JS module now handle the call into the Wasm module? Classic WASM calls were synchronous; should we now change the signatures of all wasm functions to async?
B: Do the WASM-internal calls magically become Promises and awaits (that would take a lot of performance out of WASM modules)? If not, we now have a two-color function world that needs reconciliation.
C: If we do some magic where the full frame is paused and stored away, what happens if another JS function then calls into the WASM module and awaits, and then the first one resumes? Any stack inside the wasm memory now has potential race conditions (and potentially security implications). Sure, we could take locks on all Wasm entries, but that could cause other unintended side effects.
D: Even if all of the above are solved, there's still the low-level issues of lowlevel stack management for wasm compiled code.
Looking at the mess that is emscripten's current solution to this, I really hope that this proposal gets very well thought out and not just railroaded in because V8's compiler manages to support it.
1: It has the potential to affect performance for all Wasm code just because people writing Qemu,etc are too lazy to properly abstract resource loading to cooperate with the Wasm model.
2: It can become a burden on the currently thriving Wasm ecosystem with multiple implementations (honestly, stuff like Wasm-GC is less disruptive even if it includes a GC).
Regarding C - yes, multiple stacks should be supported, and I literally opened a PR to add coroutine support based on JSPI to emscripten: https://github.com/emscripten-core/emscripten/pull/25111
JSPI-based coroutines are much faster than the old Asyncify ones (my demo shows that).
As for your core message - I'm just the user, but if Google engineers were able to implement that, then it is possible to implement that securely. I remember Google engineers arguing with Apple engineers in GH issues, but I'm not on that level, I just see that JSPI is already implemented in Chrome, so you can't tell me it's not possible.
Multiple memories and Memory64 just became part of the spec. And JSPI is still being standardized. Is Safari slower to roll out new things? Yes. But it's hardly stopping adoption. Chrome has 70% of the browser market, Safari barely has 15%.
But Apple doesn't allow using other browser engines on iOS, so this matters much more. I mean, they were forced to allow them, but of course they didn't actually comply, they created artificial barriers to ensure only Safari can be used on iOS.
EDIT: By "safari" here I actually mean WebKit.
For the mobile browser market, Chrome is still around 70%, Safari is a bit better off in mobile at 20% of global browser market share. That's still a minority platform. It's not inhibiting wasm feature adoption with those numbers.