Note: Someone commented on the “limited shelf-life” of ransomware and why this doesn’t hurt other victims. They deleted their comment but I’m posting my response.
You are incorrect. What is limited is the number of attacks that victims can use to recover their files. If you think the author is the only person who was using this attack to recover files, you are incorrect again. I'd recommend checking out the book The Ransomware Hunting Team. It's an interesting book about what happens behind the scenes when helping victims recover their files.
What use is a counterattack if it’s inaccessible, either by another cost or because it’s only known by a few experts?
This feels like a net win.
What use is a counterattack if it's immediately fixed? Then absolutely nobody can use it, not even a few experts.
You’re making a lot of assumptions about the malware's capability to reconnect and patch/update itself. Preface the fix with “keep your machine offline from here on out” and we’re back to fixing it for everyone before that point.
I'm confused: are you saying that you think building a method for anyone to break/brute-force the ransomware is bad?
They're saying that publicly disclosing the vulnerability is bad because now it will be fixed.
This is a game of cat and mouse, as it has always been. You cannot rely on security by obscurity.
This is a highly ignorant response. There is no relying on security by obscurity; that concept doesn't even apply here, because we're not describing the defenders of a system under attack. This is ransomware that has already infected the system you're supposed to be securing. Failing to realize that if you don't publicize the method of exploiting the weakness in the ransomware, then you'll be able to save more victims, indicates extreme stupidity and ignorance of the basics of the field.
Moreover, "This is a game of cat and mouse" suggests that it's not valuable for more victims to have their files decrypted, which is somewhere between malicious and insane.
You have to read my comment in context of the immediate parent which I replied to, not the OP.
The immediate parent comment says that if the vulnerability is publicly declared, attackers can easily patch it.
Paraphrasing my response: not publicly declaring the vulnerability is security by obscurity, which does not work.
Don't attack a strawman.
If this were the case, then the existence of the Enigma machine, and of those Nazi communications which so kindly provided the daily seed for deciphering the codes, should have been published in the newspapers throughout WWII.
I just hope that publishing the ransomware vulnerability was not ego-driven or anything like that, because they burned a period of time in which they could have helped many people.
> they burned a period of time in which they could have helped many people.
They absolutely did. hassleblad23 has absolutely no idea what they're talking about and no experience in the field. It was obviously the wrong move to publish the Akira weakness.
I don't think the Enigma machine example applies here.
The Nazi communications were decrypted by a highly centralized and secretive group, making it very difficult for the Nazis to figure out how it was being done.
But in this case, any vulnerability in the ransomware will have to be exploited by many of the affected people to decrypt their files, which means wide distribution, which means that a leak to the ransomware developers will happen sooner rather than later. If there is no wide distribution of the vulnerability, the ransomware developers win anyway.
> I don't think the Enigma machine example applies here.
It absolutely does. Your claim is "security through obscurity applies when attacking a cryptosystem, so after you figure out how to break it, you should publish the details". By your logic, the Allies absolutely should have published the details of how they broke the Enigma.
> But in this case any vulnerability in the ransomware will have to be exploited by many of the affected people to decrypt their files, which means wide distribution
Yet again, you show your overwhelming ignorance of the field and basic logic. No, the decryption/exploit does not have to be widely distributed. It's extremely easy to realize that the good guys can just keep it tightly-held and provide a service where files are sent to them for decryption.
You should avoid spreading incorrect and harmful anti-information like this.
> Don't attack a strawman.
I'm not. You're just factually wrong.
> not publicly declaring the vulnerability is security by obscurity, which does not work.
Now I know that you don't know what you're talking about. Anyone either passingly familiar with the field of information security, or capable of using basic logic, knows that this is incorrect in multiple ways.
First, because security by obscurity can increase the security of a system when you combine it with other measures.
Second, because you're using "security by obscurity" as a religious word without the slightest understanding of what it actually means: that when designing a secure system (that is, when playing the role of the defender), relying on security by obscurity alone is bad.
This is not what is happening in the article. In the article, yohanes/TinyHack is playing the role of the attacker - the Akira ransomware has a cryptosystem and they are attacking it. "Security by obscurity" is entirely irrelevant here.
It's extremely obvious to either someone who thinks for a few seconds, or anyone with a basic understanding of the field, that the attackers primarily rely on security through obscurity, and that publicly revealing the vulnerabilities in the defenders' systems that you've discovered is almost always an extremely bad idea.
And that includes this case. Now that yohanes has disclosed the vulnerability in Akira, the authors can immediately patch it, and the upside is virtually non-existent: an educational lesson for someone new to the field, which could have easily been provided in a way that doesn't inhibit our ability to decrypt victims' files. If yohanes had instead kept the vulnerability a secret, they could have disseminated it to a limited number of other cybersecurity experts, and offered to decrypt files as a service, helping victims without revealing the vulnerability in the crypto.
You shouldn't comment if you don't have the slightest idea of what the words you're using actually mean.
Anyone know why they are using timestamps instead of /dev/random?
Don't get me wrong, I'm glad they don't; it's just kind of surprising, as it seems like such a rookie mistake. Is there something I'm missing here, or is it more a case of people who know what they are doing not choosing a life of crime?
afaik the majority of ransomware does manage to use cryptography securely, so we only hear about decrypting like this when they fuck up. I don't think there's any good reason beyond the fact that they evidently don't know what they're doing.
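To illustrate why timestamp seeding is such a rookie mistake: if the keystream is derived from a clock value, an attacker who can roughly bound the infection time only has to search a small window. A minimal Python sketch (the PRNG-as-keystream construction here is illustrative only, not Akira's actual scheme):

    import random

    def keystream_from_timestamp(ts_ns: int, n: int) -> bytes:
        # Illustrative only: a PRNG seeded with a nanosecond timestamp
        # stands in for the malware's key derivation.
        rng = random.Random(ts_ns)
        return bytes(rng.randrange(256) for _ in range(n))

    def brute_force(ciphertext: bytes, known_plaintext: bytes,
                    start_ns: int, end_ns: int) -> int | None:
        # Scan a bounded window of timestamps until the derived keystream
        # decrypts a known-plaintext header (e.g. a file magic) correctly.
        for ts in range(start_ns, end_ns):
            ks = keystream_from_timestamp(ts, len(known_plaintext))
            if bytes(c ^ k for c, k in zip(ciphertext, ks)) == known_plaintext:
                return ts
        return None

With a known-plaintext header and a bounded window, recovering the seed is just a linear scan.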
My unqualified hunch: if they did that, then a mitigation against such malware could be for the OS to serve completely deterministic data from /dev/random to all but a select few processes defined a priori.
You can do the same with time though, just return a predefined sequence of timestamps.
And from a "defensive" perspective, if you don't trust any single entropy source, the paranoid solution is to combine multiple sources together rather than to switch to another source.
If it were me, I'd combine urandom (or equivalent), high-res timestamps, and clock jitter (sample the LSB of a fast clock at "fixed" intervals where the interval is a few orders of magnitude slower than the clock resolution), and hash them all together.
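Roughly, in Python (a sketch of the mixing idea only; the sample count and interval are arbitrary, and this is not a vetted CSPRNG):

    import hashlib
    import os
    import time

    def mixed_seed(jitter_samples: int = 256) -> bytes:
        # Hash several independent entropy sources together; an attacker
        # must predict all of them, not just one.
        h = hashlib.sha256()
        h.update(os.urandom(32))                        # OS entropy pool
        h.update(time.time_ns().to_bytes(8, "little"))  # high-res timestamp
        for _ in range(jitter_samples):
            time.sleep(0.0001)  # interval orders of magnitude coarser than the clock
            h.update(bytes([time.perf_counter_ns() & 1]))  # clock-jitter LSB
        return h.digest()

Even if one source turns out predictable, the hash of the combination is no weaker than the strongest source.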
Even if the attackers had used encryption that's been fully broken since the 1980s, how many organizations have the expertise to dissect it?
I assume that threat-detection products maintain big fingerprint databases of tools associated with malware. Rolling your own tooling, rather than importing a known library, gives one less heuristic to trip detection.
They used this library, with the IVs mucked with: https://www.gnupg.org/software/libgcrypt/index.html
Charitably: use of system-level randomness primitives can be audited by antivirus/EDR.
I wonder at what point the antivirus would kick in. It doesn't require reading /dev/urandom for very long.
Rolling your own crypto is still a thing.
If it works (reasonably well), it works, and it throws wrenches into the gears of security researchers when the code isn't the usual, immediately recognizable S-boxes and other patterns or library calls.
Might be a bit of paranoia about backdoors in official crypto libraries, too.
In case the tool is used against them.
This was a great read and had just the right amount of detail to satisfy my curiosity about the process without being annoying to read.
Huge props to the author for coming up with this whole process and providing such fascinating details
Ransomware would be less of a problem if applications were sandboxed by default.
Sandboxed how? Applications generally are used to edit files, and those are the valuable files to a user.
Ransomware wouldn't be a problem at all if copy-on-write snapshotting filesystems were the default.
Sandboxed such that the user specifies access to certain files (like you can limit access to certain gallery items on Android).
Then changes made to files should be stored as deltas to the original.
But realistically, a good read-only/write-new backup solution is needed; you never know when something bad might happen.
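To make "write new, never overwrite" concrete, a toy Python sketch (the store location and naming are invented; a real store would be enforced read-only by the OS or hardware):

    import shutil
    import time
    from pathlib import Path

    BACKUP_ROOT = Path("/srv/versions")  # hypothetical append-only store

    def save_version(src: Path) -> Path:
        # Never overwrite: each save lands as a new timestamped copy,
        # so ransomware scribbling over the working file can't destroy history.
        dst = BACKUP_ROOT / src.name / f"{time.time_ns()}{src.suffix}"
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)
        return dst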
Okay so you give the sandboxed app access to ~/Documents and those get encrypted…
I think most people don't care about their system directories, but about their data?
Backups and onedrive for enterprises, yes. :)
Obviously if you give all sandboxed processes access to /, that doesn't improve anything.
The idea is that you'd notice that your new git binary is trying to get access to /var/postgres, and you'd deny it, because it has no reason to want that.
Feels like a case where ZFS would help mitigate?
Like Android and iOS. The user manually has to grant access to files.
Which doesn't scale to office workstations or workplaces with network drives, where users needing to search and update hundreds of files at a time is the norm.
Developers with 1 project open have potentially hundreds to thousands of open, quite valuable files.
Now of course, we generally expect developers to have backups via VCS, but that's exactly the point: snapshotting filesystems with append semantics for common use cases are an actual, practical defense.
In the old days we had mechanical write protection. I find it hard to take modern security seriously.
It should be pretty simple to make, say, a hardware solution that only allows writing out new files.
I also find it comical that my production database has instructions to conveniently delete or modify all rows in my table. That would be at the top of the list of features I don't want.
I have backups of course: backups on writable USB drives.
Like, when I lose everything it is really nice to be able to delete files from my backup drive. This is such a great idea.
Excuse my ignorance, but is one really updating hundreds of files all day long? On some factory machines that do dangerous things, you have to hold down two buttons.
About the two-button thing in factories: the reason is so that you don't have a hand in the machine. So it's not just two buttons, it's two buttons spaced far enough apart that you have to use both hands. And usually one of the two buttons has to be held in a middle position; if you push it too far, it also doesn't work.
Something else: how many times, because of a bad mousepad, have whole directories been moved somewhere? Often you don't even know what you moved, so you can't even search for it. At my last company especially, we had such a "surprise" in our data at least once a month.
The two buttons also make sure you really want to be doing what you are doing and are not doing other things at the same time. Perhaps multiple keys to launch the nuke would be a better comparison.
I did that drag and drop trick too. I don't even know what I did but a lot of system files and folders ended up on the desktop.
It isn't hard to imagine how features like this came to be and why the product was considered finished. We can obviously do better.
It does scale. The OS can provide a proper abstraction for searching through a large number of files and granting access to a subset of them.
>Developers with 1 project open have potentially hundreds to thousands of open, quite valuable files.
And malware wouldn't be able to access any of those files without the developer explicitly giving it access.
Append-only semantics don't scale for consumer devices, as they do not have the luxury of extra storage space.
Again: define "explicit"? Does clicking a file count? Asking for code reformatting across the project? How long does access last? How is it revoked?
If the user runs "reformat project" once, then gets a new version, are they going to have any warning that "reformat project" is about to encrypt every file it touches?
Explicit as in when you run a new app it does not have access to any of your files and there is no way for it to gain access without you, the user, giving it.
>Does clicking a file count?
Yes, clicking a file from the file picker counts.
>Asking for code reformatting across the project?
You can grant access to a directory in that case.
>How long does access last? How is it revoked?
It can last forever or until the application is closed. There is room to choose how exactly it could work.
> If the user runs "reformat project" once, then gets a new version, are they going to have any warning that "reformat project" is about to encrypt every file it touches?
That would be up to the design.
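As a toy illustration of the broker model being described (everything here is hypothetical; real-world analogues are mobile file pickers and desktop portals):

    from pathlib import Path

    class FileBroker:
        # Toy sandbox broker: apps start with zero file access and only
        # gain paths the user explicitly grants via a trusted picker.
        def __init__(self) -> None:
            self._grants: dict[str, set[Path]] = {}

        def grant(self, app: str, path: Path) -> None:
            # Called by the trusted shell/file picker, never by the app itself.
            self._grants.setdefault(app, set()).add(path.resolve())

        def may_open(self, app: str, path: Path) -> bool:
            p = path.resolve()
            return any(p == g or g in p.parents
                       for g in self._grants.get(app, set()))

    broker = FileBroker()
    broker.grant("editor", Path("~/project").expanduser())
    assert broker.may_open("editor", Path("~/project/main.c").expanduser())
    assert not broker.may_open("editor", Path("~/.ssh/id_rsa").expanduser())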
Of course, there are many things a company can do to be a bit more assured it can access its data: CoW snapshots, backups on read-only medium (e.g. DVD or BluRay discs), HDDs/SSDs offline on shelves, and certainly many other things could help companies.
That's not incompatible with sandboxing applications to limit the damage a malware can do.
Even on a regular user's "workstation" there's no need for every single app to access every single directory / every single network drive with rw permission etc.
P.S.: FWIW, the backup procedure I put in place doesn't just encrypt/compress/deduplicate the backups; it also compares each backup to previous backups (comparing sizes gives an idea, for example), then verifies that the backup can be decrypted, using a variety of checks (for example, if after decrypting and decompressing the backup a Git repo is found, it'll run "git fsck" on it; if a file with a checksum is found, it'll verify that file's checksum; etc.). This already helped us catch not malware but a... bitflip! I figured that if a procedure can help detect a single bitflip, it probably can help detect malware-encrypted data too. I'm not saying it's 100% foolproof: all I'm saying is there's a difference between "we're sandboxing stuff and running some checks" and "we allow every single application to access everything on all our machines because we need users to access files".
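A stripped-down Python sketch of that kind of verification (the 20% size tolerance and the ".sha256" sidecar naming are invented for illustration):

    import hashlib
    import subprocess
    from pathlib import Path

    def verify_backup(restored: Path, previous_size: int) -> bool:
        # Size drift check against the previous backup (rough heuristic).
        size = sum(f.stat().st_size for f in restored.rglob("*") if f.is_file())
        if abs(size - previous_size) > 0.2 * previous_size:
            return False
        # Any Git repo inside gets a full consistency check.
        for gitdir in restored.rglob(".git"):
            if subprocess.run(["git", "-C", str(gitdir.parent), "fsck"]).returncode:
                return False
        # Files shipped with a ".sha256" sidecar get their checksum verified.
        for sumfile in restored.rglob("*.sha256"):
            expected = sumfile.read_text().split()[0]
            data = sumfile.with_suffix("").read_bytes()
            if hashlib.sha256(data).hexdigest() != expected:
                return False
        return True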
Or if untrusted/unsigned apps only had CoW disk access.
Or if people backed up more often.
"On my mini PC CPU, I estimated a processing speed of 100,000 timestamp to random bytes calculations per second (utilizing all cores)."
Would like more details on the mini PC: processor, RAM, price. Is it fanless?
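For scale, taking the quoted 100,000 checks/second at face value and assuming, purely for illustration, nanosecond-granularity timestamps with a one-second uncertainty window:

    candidates = 10**9        # assumed: a 1 s window at nanosecond granularity
    rate = 100_000            # checks per second, the article's CPU figure
    hours = candidates / rate / 3600
    print(f"{hours:.1f} hours per timestamp")   # ~2.8 hours

So even CPU-only, each timestamp is tractable if its window can be bounded; GPUs shrink this further.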
What could explain encrypting the first 65k with KCipher-2 and the rest with something else? Seems odd.
Why don't you do the legwork instead of asking rhetorical questions?
Legwork of what? Companies have already done the legwork to make it easy for strangers to send you money.
Companies that "do the legwork" of decrypting ransomware for the most part just pay the ransom on your behalf.
Presuming this results in a cryptosystem change for Akira, there’s a real number of victims who won’t get their data back as a result of this disclosure.
Whether that number is more than the number of victims to date who could recreate this attack? Who knows.
How would they get their data back if someone theoretically knows how to decrypt but never tells anyone?
I can’t remember the example (it was a conference talk a few years ago), but I’m pretty sure there are LE and DFIR companies who also reverse this stuff and assist in recovery; they just don’t publish the actual flaws exploited to recover the data.
An insecurely generated key is hacking-cryptosystems 101. The mere fact that someone was able to reverse it probably means this is the first thing to check.
It was already disclosed to the bad guys that someone had managed to break their encryption: when they didn't get paid, they saw that the victim had somehow managed to recover their data. That probably meant they would go looking for weaknesses, or modify their encryption, even without this note.
Other victims whose data were encrypted by the same malware (before any updates) could benefit from this disclosure to try to recover their data.
> why publish this?
New versions of Akira and any other ransomware are constantly being developed. This code is specific to a certain version of the malware.
As noted in the article, it also requires:
1. An extremely capable sysadmin
2. A bunch of GPU capacity
3. That the timestamps be brute-forced separately
So it's not exactly a turn-key defeat of Akira.
Once your files are encrypted by ransomware, does the encryption change if the malware gets updated? If not, then anyone currently infected by this version can now possibly recover.
If they don't release their code, then what's the point of having it? They accomplished their task, and now here it is for someone else who might have the same need. Otherwise: don't get infected by a new version.
How would it be better, unless it's widely known to be breakable? And at that point, wouldn't the hackers know that too?