Going for the pipe spray is a kinda weird technique, and I'm honestly surprised that it worked. Usually just the fact that you are able to spray over the allocation at all isn't enough, and you also have to worry about your sprayed data containing additional pointers or things that also have to be valid.
I probably would have gone for turning the UAF into a type confusion style attack: if you spray more sockets you'll end up with two files, the original and the new one, that have aliased sk members, but the vsock code will incorrectly cast the new one to a `vsock_sock`. From there you can probably find some other socket type that puts controllable data over some field that vsock treats as a pointer or vice versa, and use it as both a KASLR leak and a data-only r/w primitive.
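Roughly the idea, as a user-space toy (the struct layouts here are made up for illustration, not the real kernel ones; the real vsock_sk() is essentially just a cast of the struct sock pointer):

    /* Toy stand-ins for two socket types that end up overlaying the same
       freed allocation. Field layouts are invented for this sketch, not
       copied from the kernel. */
    #include <stdio.h>
    #include <stdint.h>

    struct sock { int refcnt; };         /* common header, heavily simplified */

    struct vsock_sock {                  /* what the stale vsock pointer gets cast to */
        struct sock sk;
        void *transport;                 /* pointer field the vsock code will follow */
    };

    struct other_sock {                  /* the socket type actually sprayed on top */
        struct sock sk;
        uint64_t user_controlled;        /* e.g. filled via setsockopt()/sendmsg() */
    };

    int main(void)
    {
        /* The "reused slab object": both views alias the same bytes. */
        union {
            struct vsock_sock as_vsock;
            struct other_sock as_other;
        } obj = {0};

        /* Spray path: the new socket stores attacker-controlled data... */
        obj.as_other.user_controlled = 0x4141414141414141ULL;

        /* Stale path: vsock blindly casts the aliased sk and now treats the
           controlled value as a pointer (KASLR leak / fake-object territory). */
        printf("vsock would dereference %p\n", obj.as_vsock.transport);
        return 0;
    }

Which overlapping socket type actually lines up usefully is the part you'd have to go hunting for.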
> I probably would have gone for turning the UAF into a type confusion style attack
I'm aware that Linux is nearly 35 years old at this point, and C is decades older still. But it is mind-boggling to me that we're still talking about UAFs and jumping from dangling pointers to privileged execution in the 21st century.
(rewrite it in Rust)
That was obvious to C.A.R. Hoare in 1980, and should have been obvious to the industry after the Morris worm in 1988, yet here we are: zero improvements to the ISO C standard with regard to preventing exploits in C code.
Never! If anything, remove Rust from Linux!
Nonsense, the C guy told me those happen to people that make mistakes, and that he, being the offspring of the Elemental of Logic, and "Hyperspace cybernetic intelligence and juvenile delinquent John Carmack" is completely immune to such pathetic issues. He works at Linux. Yes, all of Linux.
Yes, we need a language that makes it impossible. But how could this happen in Rust? https://github.com/advisories/GHSA-5gmm-6m36-r7jh But clearly, this does not count because it used "unsafe", so it remains the fault of the programmer who made a mistake. Oh wait, doesn't this invalidate your argument?
A random 16 star github repo using unsafe doesn't really tell anyone much.
> Oh wait, doesn't this invalidate your argument?
Not really. If you can't be perfect, at least be good.
Unless you want to make the argument that having to search your entire code base for UB is better than only having to search a relatively small subset of it (20-30%).
Or that Linux should use tracing GC + value types. Which I would find decent as well. But sadly LKML would probably disagree with inclusion of C# (or Java 35).
You can't be perfect. The question is how much improvement "rewriting in Rust" actually brings, how important that is, and what downsides it may have.
Another, way more important question is: can we keep maintaining legacy C code in community-driven FOSS projects when the world moves on?
If you really want, you could write your code in Agda or similar and prove it correct. See also seL4 https://github.com/seL4/seL4 which is a proven correct kernel.
... written in C.
It's written in C in the same way it's written in assembly:
C is just used as part of the process to go from a high level spec to executable code.
You can also compile eg Haskell via C.
The subset of C used in seL4 is highly constrained.
a subset of C is provable, and is the de-facto standard in the industry.
what is it with rust people and thinking robust, automatic correctness checking was invented in the last 20 years?
Wow, the Linux kernel must be full of dunces that rather than writing in proven correct C, added another language that other maintainers hate. What morons! /sarcasm
Disclaimer: the above is sarcasm. While I don't think Linux kernel maintainers are perfect, I don't consider them dunces either. If they can't avoid writing UAF (use-after-free) or DF (double-free) bugs, then what hope does the average C programmer have?
Proven correct C
The latest version of C is ISO C23. I also don't believe that rewriting in Rust is the best way to address memory safety in legacy C code, not even for new projects, nor do I think that memory safety is the most pressing security problem.
"We’ve Got a Panic!"
Looks like we've got an encoding issue too.
I kind of want to trademark †so that ’ is not just mojibake.
Not sure Musk would let you trademark his kid's name. /s
The server is responding with
Content-Type: text/html
i.e. no charset field. The document itself also lacks a declared character set.
One thing that really annoys me about the HTTP standards is that some older version used to say that text/* without a declared charset was definitely latin-1 (don't remember which version exactly). Then a later version said no, text/* without a declared charset is definitely utf-8. In practice I feel this means they said "not declaring a charset is definitely not ok", but in a way that no one will understand.
Possibly a newer version that I haven't read fixed how they said that. As long as I don't check I can hope.
I'm confused. The page has a HTML5 doctype, and https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/... says that UTF-8 is the only valid encoding for HTML5 documents, yet Firefox interprets the page as Windows-1252 or such until I "Repair Text Encoding". https://webhint.io/docs/user-guide/hints/hint-meta-charset-u... says you're supposed to include a <meta charset="utf-8"> or optionally Content-Type header.
If you don't have a charset set, then you'll get the fallback for IE compatibility.
You should pretty much always use one.
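E.g. either of these is enough (values straight from the specs; charset names are case-insensitive):

    Content-Type: text/html; charset=utf-8

in the response header, or

    <meta charset="utf-8">

early in the <head>.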
I thought this was a joke about corrupting the data intentionally
this is a panic-corrupted buffer joke
> So I set off on a journey that would lower my GPA and occasionally leave me questioning my sanity
Amazing! Sacrificing GPA for projects is always a good time
I really liked the old German university concept, the one from before we just adopted Bachelor/Master.
Throughout my CS studies, I was just collecting "tickets" (very hard to translate the actual word, "Schein"), which basically just attested that you have passed a course. They (often) had a grade on it, but it did not matter. Instead, once in the middle ("pre-diploma") and once at the very end of your time at university, you'd have oral exams. And those determined your grade. To attend them, you needed the right combination of "tickets".
The glaring downside of this system is that if you had a bad time in those few months of your very final exams, you could screw up your entire grade.
The upside is that I was free (and encouraged) to pursue whatever I wanted, without each course risking an effect on my "GPA". I had way more tickets than I needed in the end, and still had time and energy to pursue whatever else I wanted (playing with microcontrollers etc.).
I had a couple of classes at a US university that worked quite similarly. The professor said we could take the quizzes if we wanted, and if we didn't, the later quizzes would constitute more of our grade. The ultimate play was to only take the final quiz.
> The ultimate play was to only take the final quiz.
This is how a lot of British undergrad courses ('modules') work. One giant exam at the very end determining everything; no quizzes, no problem sheets, no midterms.
Modules? We just had six massive exams at the end of three years!
Chicago used to be that way in the long ago times.
> Throughout my CS studies, I was just collecting "tickets" (very hard to translate the actual word, "Schein"), which basically just attested that you have passed a course. They (often) had a grade on it, but it did not matter. Instead, once in the middle ("pre-diploma") and once at the very end of your time at university, you'd have oral exams. And those determined your grade. To attend them, you needed the right combination of "tickets".
What you are describing was one of the systems they used.
So at my university (OvG in Magdeburg) they used this system for math, but computer science had written exams.
Would not be a surprise if AI brought this back.
As a teacher once told me:
"Never let school limit your education"
For those wondering this is a common paraphrase of Grant Allen and Mark Twain. Here we say "Never let school get in the way of a good education."
I learned a ton while at my university. Much of it was outside of my classwork.
Agreed. My GPA suffered significantly in 1999 when I was writing a web service to help me calculate Pi, but it was absolutely worth it.
Good write-up. I liked your RSA tutorial too: https://hoefler.dev/content/RSA.pdf
Yay ROP chains!
The Linux Kernel has millions of LoCs. There'll always be bugs.
It's about time to look at a sane design, such as seL4[0].
This is an apples-oranges comparison, unless things have changed drastically since the last time I worked on L4 (about 10 years ago). L4 is very secure and easy to reason about. But that's because it doesn't really do anything. It makes a lot of sense as a platform to build a general purpose OS on, and as a bottom layer for what would otherwise be a unikernel. But you'd run a browser on top of something that itself runs on seL4, not on seL4 itself.
That's kind of the point, though. Naturally, a consequence of moving functionality out of the kernel and into userspace is that the kernel does less. That means rethinking the idea of how we build software (WRT what an OS means and what a platform API looks like), but on the other hand our current mode of building software is objectively dogshit, so that's no great loss.
It's becoming more and more common to use non-Linux-based hypervisors to isolate workloads where security matters. Isolating applications within a given VM is not seen as important, and therefore ditching Linux isn't really necessary. Applications can continue to be written against Linux APIs and we can create isolation domains separately. This is no longer just a server concept, as even phones and cars are starting to employ this technique. It has a high cost in RAM, but as RAM gets cheaper it's not as big of a deal.
The obvious question is why Linux is so widely used in the first place. I don't think "APIs" is enough to explain it. One obvious answer is the incredibly broad hardware support. Any alternative selected for use as the hypervisor is going to be at a serious disadvantage in that regard.
Not necessarily. You can forward a lot of hardware as-is to a Linux VM if you have an IOMMU. It comes down to whether you need multiple VMs to share access to some hardware or not, which is not all that common based on the way the isolation domains work out. This can start to become more challenging when that hardware has shared resources such as clocks, buses, or power rails to manage, but SoC makers are likely going to make hardware increasingly easy to use in this modality as customers require it.
don’t mind if you do guv.
[stub for offtopicness]
Cool writeup, and you have exceptional taste in fonts.
I can't read the dark blue links on the black background
Engage reading mode and relax.
Victim blaming.
For the love of god please change the blue on black text to something more readable
The dark blue on black reads absolutely terribly
Try the Reader View feature of Firefox.
yet another "use-after-free" sploit
Rust for Linux, wen?
It's a damn shame the current maintainers are so hostile to its adoption that many of the original Rust for Linux folks have left the project.
Counterargument: Linux is almost 35 years old (wow, time flies). Rust for Linux is a project started at the moment of peak Rust hype. It's understandable that the Linux maintainers are wary of introducing too much Rust dependence, in case, for example, all the Rust people leave in 5 years and the current/old maintainers are stuck with it forever.
Counterargument Counterargument: one would think 35 years is enough to work out the memory safety kinks. If C people can't sort it, a new solution needs to be used.
Not necessarily Rust, but something memory safe. Perhaps Java (if maintenance is that important) :P
go do it then...
Because it worked out so well for Rust in Linux?
Linux issues are not purely technical. There is the social inertia.
Does social inertia mean "the people actually working on it have opinions about the work they're doing"? Because that seems fine.
> the people actually working on it have opinions about the work they're doing
No. They have opinions and take actions to subvert related work. See people literally stopping a presentation to gripe about Rust cultists, or Hellwig throwing a hissy fit because he doesn't like that Rust code is adjacent to the DMA code he maintains.
This isn't technical discussion, this is Office Politics 104.
Did they start their own project? Linux is free, just fork it.
There are a lot of entities involved that need to be able to work together. Creating a fork fractures things and requires all partners to move to said fork. It's far easier to work upstream even with resistance. Anyone who has maintained a long-standing Linux fork understands the cost of trying to rebase thousands of patches. There will never be enough of a migration to make rebasing unnecessary.
"Hey upstream maintainer, let me commit a bunch of code in a language you can't even read. You get to maintain it forever while I get to move on to bigger and better things. I am better that you after all: I know this cool new language and you don't."
And this didn't go over well. Shocking.
That isn't a remotely fair characterization of what the Rust for Linux team were saying. In particular, they committed to maintaining it.
https://lwn.net/Articles/1006805/
"We wrote a single piece of Rust code that abstracts the C API for all Rust drivers, which we offer to maintain ourselves".
I wish that HN as a whole could maintain a respectful and curious tone of debate when these threads come up. Feel like both rust advocates and skeptics could do a lot better.
The 'just' doesn't belong in front of 'fork'.
Rust, the new "I use Arch, BTW"