For those unaware, the article hints at a system that really does believe everything is a file: Plan 9 from Bell Labs, the “second system”/spiritual successor of Unix. But it’s also worth pointing out that NT’s kernel is designed around a hierarchical namespace of “objects,” where various subsystems slot in at different levels to take responsibility for the rest of the path. Unlike Plan 9, this is separate from the userland filesystem. It might be most familiar to people who have installed NT 4 (or maybe 3.51?) through XP via bootable floppy: SETUP.EXE shows strings like `\Device\HardDisk0` in the status bar.
Just pointing out how the same general idea can take distinct forms of implementation.
Under ReactOS's explorer.exe (IDK if it's possible to run it under Windows) you can see all the NT objects, including the Registry, so you can browse the Registry hierarchy as if it were a path.
I've never quite understood why the idea "everything is a file [descriptor]" is often revered as some particularly great insight. Perhaps it was for its time, but I think we have to be honest and say that it is a really awkward abstraction in 2025.
It can mean a few things:
- Kernel objects have an opaque 32-bit ID local to each process.
- Global kernel objects have names that are visible in the file system.
- Kernel objects are streams of bytes (i.e. you can call `read()`, `write()` etc.).
The first is a kind of arbitrary choice that limits modern kernels. (For example, a kernel might want to use all 64 bits to add tag bits to its handles - still possible, but now you are close to the limit.)
The second and third are mostly wrong. Something like a kernel synchronization primitive or an I/O control primitive does not behave anything like a file or a stream of bytes, and indeed you cannot use any normal stream operations on them. What's the point of conflating the concept of a file system path and kernel object namespacing? It makes a kind of sense to consider the latter a superset of the former, but they are clearly fundamentally different.
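A quick Linux-only sketch of that mismatch: an epoll instance is addressed through a file descriptor like everything else, yet none of the stream operations apply to it. (This assumes Linux's behavior of returning EINVAL when a file defines no read operation.)

```python
import errno
import os
import select

# An epoll instance is a kernel object reached through a file
# descriptor, but it is not a byte stream: read() on it is rejected.
ep = select.epoll()
try:
    os.read(ep.fileno(), 16)
    stream_like = True
except OSError as e:
    stream_like = False
    assert e.errno == errno.EINVAL  # no read operation is defined for it
ep.close()
print(stream_like)  # False on Linux
```

So the fd gives you a uniform *handle*, but not uniform *operations*.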
The end result is that the POSIX world is full of protocols. A lot of things are shoehorned into file-like streams of bytes (see for example: the Wayland protocol), even when a proper RPC/IPC mechanism would be more appropriate. Compare with the much maligned COM system on Windows, which though primitive and outdated does provide a much richer - and safer - channel of communication.
> I've never quite understood why the idea "everything is a file [descriptor]" is often revered as some particularly great insight.
I think the article articulated it decently:
> It is the file descriptor that makes files, devices, and inter-process I/O compatible.
Or if you like, because pushing everything into that single abstraction makes it easier to use, including in ways not considered by the original devs. Consider, for example, exposing battery information. On other systems, you'd need to compile a program using some special kernel API to query the batteries and then check their stats (say, checking charge levels). In Linux, you can just enumerate /sys/class/power_supply and read plain files to get that information.
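As a sketch of that enumeration (Linux sysfs layout; which entries exist, if any, depends on the hardware, so a desktop or VM may print nothing):

```python
import os

# Linux exposes battery state as plain files under sysfs; the set of
# entries depends on the hardware (a VM or desktop may have none).
BASE = "/sys/class/power_supply"

supplies = sorted(os.listdir(BASE)) if os.path.isdir(BASE) else []
for name in supplies:
    capacity = os.path.join(BASE, name, "capacity")
    if os.path.exists(capacity):
        with open(capacity) as f:  # an ordinary read(), no special API
            print(f"{name}: {f.read().strip()}%")
```

No battery-specific API in sight: it's just directory listing and file reads.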
> On other systems, you'd need to compile a program using some special kernel API to query the batteries and then check their stats (say, checking charge levels)
I asked an LLM how to do this on Windows and got
> wmic path Win32_Battery get EstimatedChargeRemaining
Which doesn't seem meaningfully worse than looking at some sys path; it's not clear what the file abstraction adds for me there.
So you used an existing binary that hits the special kernel API to query the batteries. If you want to do it yourself (e.g. to make your own graphical widget or something) then you have to hit that API yourself. And yes, sysfs is sort of an API too, but it's a simple, uniform API that in many cases can just be used via read() instead of needing to figure out some specialized interface.
To be clear, I recognize that some kind of general mechanism is useful, I’m just not sure why files and byte streams are considered especially great.
Because the flip side of your example is that you now have a plain text protocol, and if you wanted to do anything else besides cat’ing it to the console, you’re now writing a parser.
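A minimal sketch of that parsing burden, using Linux's /proc/meminfo, whose "Key: value kB" lines are their own ad-hoc text contract that every consumer re-implements:

```python
# /proc/meminfo is a "file", but each line follows its own ad-hoc
# text format ("Key:   value kB") that the reader has to parse.
meminfo = {}
with open("/proc/meminfo") as f:
    for line in f:
        key, _, rest = line.partition(":")
        fields = rest.split()
        if fields:
            meminfo[key.strip()] = int(fields[0])  # unit suffix, if any, ignored

print(meminfo["MemTotal"])  # total RAM in kB
```

The file abstraction got us *to* the data uniformly; extracting structure from it is still per-file work.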
> To be clear, I recognize that some kind of general mechanism is useful, I’m just not sure why files and byte streams are considered especially great.
It's one of the local maxima for generality. You could make everything an object or something, but it would require a lot of ecosystem work and eventually get you into a very similar place.
> Because the flip side of your example is that you now have a plain text protocol, and if you wanted to do anything else besides cat’ing it to the console, you’re now writing a parser.
Slight nuance: You could have everything-is-a-file without everything-is-text. Unix usually does both, and I think both are good, but e.g. /dev/video0 is a file but not text. That said, text is also a nice local maximum, and the one that requires the least work to buy into. Contrast, say, PowerShell, which does better... as long as your programs are integrated into that environment.
That's why ioctl exists, which is essentially an RPC. For example, NetBSD even supports sending messages created with its proplib as property lists of Apple fame.
Also I always found it weird that a lot of things are "files" in Linux, but not ethernet interfaces, so you have to do that enumeration dance before getting an fd to ioctl on. I remember HP-UX having them as files in /dev, which was neat.
> Also I always found it weird that a lot of things are "files" in Linux, but not ethernet interfaces, so you have to do that enumeration dance before getting an fd to ioctl on. I remember HP-UX having them as files in /dev, which was neat.
My main complaint in general with everything-is-a-file is that it isn't taken far enough:) (Well, on anything except Plan 9)
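The "enumeration dance" can be sketched like this (Linux-specific: the SIOCGIFFLAGS ioctl number and the struct ifreq layout below are assumed for mainstream Linux ABIs):

```python
import fcntl
import socket
import struct

SIOCGIFFLAGS = 0x8913  # Linux ioctl number (assumption; varies by OS)
IFF_UP = 0x1

# There is no /dev/eth0 to open(): interfaces are enumerated through a
# dedicated API, and you then open an *unrelated* socket just to have
# an fd on which to issue the interface ioctl.
statuses = {}
for _index, name in socket.if_nameindex():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # struct ifreq: 16-byte name, then a union whose first member
        # here is the short ifr_flags at offset 16.
        req = struct.pack("256s", name.encode())
        res = fcntl.ioctl(s.fileno(), SIOCGIFFLAGS, req)
        flags = struct.unpack_from("H", res, 16)[0]
        statuses[name] = bool(flags & IFF_UP)
    finally:
        s.close()

for name, up in statuses.items():
    print(name, "UP" if up else "DOWN")
```

If interfaces were files, this would collapse to listing a directory and reading an attribute, as sysfs in fact allows via /sys/class/net.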
Yeah, let's see how Xous fares. The approach is interesting, and maybe the future is in those small, hardened microkernels.
Note that these books were written when "design pattern" was still a buzzword.
Hence the apropos title of "Ghosts of Unix Past" :-)
Unfortunately, in some parts of the industry it still is.
Careful selection also implies rejection. I wonder about the technologies that have been lost to time because they didn't pass this historical filter. I learned never to underestimate the accomplishments of our predecessors after reading about old mainframe systems.