> "How do they accomplish their goals with project BULLRUN? One way is that United States National Security Agency (NSA) participates in Internet Engineering Task Force (IETF) community protocol standardization meetings with the explicit goal of sabotaging protocol security to enhance NSA surveillance capabilities." "Discussions with insiders confirmed what is claimed in as of yet unpublished classified documents from the Snowden archive and other sources." (page 6-7, note 8)
There have long been stories about meddling in other standards orgs (both to strengthen and to weaken them), but I don't recall hearing rumors about sabotage of IETF standards.
These rumors, about IETF in particular, predate the Snowden disclosures.
Almost immediately after that happened, a well-known cypherpunk person accused the IETF IPSEC standardization process of subversion, pointing out Phil Rogaway (an extraordinarily well-respected and influential academic cryptographer) trying in vain to get the IETF not to standardize a chained-IV CBC transport (this is a bug class now best known, in TLS, as BEAST), while a chorus of IETF gadflies dunked on him and questioned his credentials. The gadflies ultimately prevailed.
The moral of this story, and what makes these "NSA at IETF" allegations so insidious, is that the IETF is perfectly capable of subverting its own cryptography without any help from a meddling intelligence agency. This is a common failure of all standards organizations (W3C didn't need any help coming up with XML DSIG, which is probably the worst cryptosystem ever devised), but it's somewhat amplified in open settings like IETF.
> The moral of this story, and what makes these "NSA at IETF" allegations so insidious, is that the IETF is perfectly capable of subverting its own cryptography without any help from a meddling intelligence agency
The three-letter agencies usually recruit people from academia and sensitive organizations in order to pursue their agenda.
This would be a snappier comeback if it wasn't the academic cryptographer who was right, and the random people in non-sensitive organizations who shot the right answer down.
That's sort of not saying much. Are they going to recruit Joe the Plumber to submit IETF drafts?
I wonder what other institutions are infested with swarms of gadflies that'll all swear up and down in unison that something is definitely good or bad, will attack outsiders with narratives that disagree with their own, and weaken the credibility of the entire institution in the process.
Surely this can't be a phenomenon unique to technology, can it?
It's just human nature. I refuse to participate in standards work, partly because it's much more pleasant throwing rocks from a safe distance, but also because I recognize in myself the same ordinary frailties that had friends of the original IPSEC standards authors writing mailing list posts about Rogaway being a "supposed" cryptographer. There but for the grace of not joining standards lists go I.
I think --- I am not kidding here, I believe this sincerely --- the correct conclusion is community-driven cryptographic standards organizations are a force for evil. Curve25519 is a good standard. So is Noise, so is WireGuard. None of those were developed in a standards organization. It's hard to think of a good cryptosystem that was. TLS? It took decades --- bad decades --- to get to 1.3. Of that outcome, I believe it was Watson Ladd who said that if you turned it in as the "secure transport" homework in your undergraduate cryptography class, you'd get a B.
It goes further back than crypto: back in the day, there was the design-by-committee OSI versus the "rough consensus and running code" IETF. The result is that while we still teach the OSI 7-layer model in universities, in practice we use TCP/IP, often with HTTP and TLS on top.
TLS originated at Netscape.
So did JS; it isn’t a counterargument. Many of the gadflies are useful idiots.
You're basically describing politics, where specific opinions are central to a group identity. So long as there is an innate human desire to belong, there will be plenty of people who are happy to live with any amount of cognitive dissonance.
Joining a gadfly swarm just gives them an opportunity to prove their worth to the group.
https://youtu.be/URdXC6UtfVg?si=b7uwjHujUvYG1hGH
Skip to 1 minute in and you’ll see the problem in experimental form.
DNSSEC is a great example of just how poor committee-designed crypto is. The government doesn't need to do anything to standards; they can just let them play out as-is.
We ended up with a huge mess that solves no actual problems, introduced problems that never existed before, and is so confusing in what it does that people still blindly defend it.
Why do you assume the government didn't do anything? If anyone on those committees was paid or influenced by the NSA/CIA, they would not have disclosed it.
You're right, of course.
But standards bodies also sabotage their own work when there's no security relevance (USB's dozens of options with impenetrable names like "USB 3.2 Gen 2×2")
And the US DoD reportedly loves Trusted Platform Modules, so presumably if the US intervened at all it was to improve the spec - and yet it's got more holes than Swiss cheese.
It’s true that you can always imagine a perfectly concealed conspiracy, but we see the same dynamics unfold in many places where there’s no security impact, so the parsimonious explanation is that there is no conspiracy, only normal human social dynamics.
DNSSEC is one of the standards where I am confident the US government did not intentionally introduce weaknesses because the US government is the largest deployed base of zones due to a poorly thought out executive order.
I'm curious as to how successful they were at subverting the IETF process. It wouldn't be impossible, but since much of the process is in the open it could be difficult, especially if they did it under their own name.
I suspect most of it was done under different corporate identities, and probably just managed to slow adoption of systematic security architectures. Of course, once the Snowden papers came out, all that effort was rendered moot as the IETF reacted pretty hard.
Ya ever heard of the OAuth2 protocol? I spent almost half a decade working on identity stuff, and spent a lot of time in OAuth land. OAuth is an overly complicated mess that has many, many ways to go wrong and very few ways to go right.
If you told me the NSA/CIA had purposefully sabotaged the development of the OAuth2 protocol to make it so complex that no one can implement it securely it'd be the best explanation I've heard yet about why it is the monstrosity it is.
> Ya ever heard of the OAuth2 protocol?
Have you ever seen SAML? Now there is a protocol that seems borderline sabotaged. CSRF tokens? Optional part of the spec. Which part of the response is signed? Up to you, with different implementations making different choices; better verify the sig covers the relevant part of the doc. Can you change the signed part of the spec in a way that alters the XML parse tree without invalidating the signature? Of course you can!
OAuth2 is downright sane in comparison.
[To be clear, SAML is not an IETF spec; it just solves a similar problem as OAuth2.]
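To make the signature-wrapping point concrete, here is a toy sketch in Python (no real crypto, all element names invented): the signature references one assertion by ID, but a naive consumer reads whichever assertion comes first in the parse tree.

    import xml.etree.ElementTree as ET

    # Attacker-modified SAML-ish response: the genuine, signed assertion is
    # still present ("legit"), but an unsigned one has been wrapped in first.
    signed = """\
    <Response>
      <Assertion ID="evil"><Subject>attacker</Subject></Assertion>
      <Assertion ID="legit"><Subject>alice</Subject></Assertion>
      <Signature><Reference URI="#legit"/></Signature>
    </Response>
    """

    root = ET.fromstring(signed)
    # The validator follows the signature's reference and finds the element
    # it covers, so signature validation "succeeds"...
    ref_id = root.find("./Signature/Reference").get("URI").lstrip("#")
    signature_ok = any(a.get("ID") == ref_id for a in root.findall("Assertion"))
    # ...but the application then trusts the *first* assertion it sees.
    subject = root.find("Assertion/Subject").text
    print(signature_ok, subject)  # -> True attacker

A real attack does this against a genuine XML DSIG signature that still validates; the toy only shows the validate-one-element, consume-another mismatch.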
Honestly, as someone who has implemented both SAML and OAuth2 (+ OIDC) providers, I found SAML much easier to understand. Yes, there are dangerous exploits around XML. Yes, the spec is HUGE. But practically speaking, people only implement a few parts of it, and those parts were, IMHO, easier to understand and reason about.
This is definitely an unpopular opinion however.
What's insidious about SAML is that it really is mostly straightforward to understand, but it's built on a foundation of sand, bone dust, and ash; it works --- mostly, modulo footguns like audience restrictions --- if you assume XML signature validation is reliable. But XML signature validation is deeply cursed, and is so complicated that most fielded SAML implementations are wrapping libxmlsec, a gnarly C codebase nobody reads.
100% - XML vulnerabilities are the biggest issue. JWTs have also had their fair share, though I think they were mostly implementation bugs that have mostly been ironed out at this point. XML's complexity is inherent to the language.
> JWTs have also had their fair share, though I think they were mostly implementation bugs that have mostly been ironed out at this point.
The most famous JWT issue, to my mind, was people implementing JWT and -- as per spec -- accepting an algorithm of "none" as valid.
That could be described as an "implementation bug", but it can also be described as "not an implementation bug" - all your JWT functionality is working the way it's supposed to work, it's just not doing the thing that you hoped it would do.
IMO this is a defect (sabotage? plain incompetence?) in the specs, full stop.
RFC 7519 (the JWT spec) delegates all signature / authentication validation to the JWS and JWE specs. (And says that an unholy mechanism shall be used to determine whether to follow the JWS validation algorithm or the JWE algorithm.)
JWS even discusses algorithm verification (https://www.rfc-editor.org/rfc/rfc7515.html#section-10.6), but does not suggest, let alone require, the absolutely mind-bendingly obvious way to do it: when you have a key used for verification, the algorithm specification is part of the key. If I tell a library to verify that a given untrusted input is signed/authenticated by a key, the JWS design is:
    bool is_it_valid(string message, string key); // where key is an HMAC secret or whatever

and this is wrong. You do it like this:

    bool is_it_valid(string message, VerificationKey key);

where VerificationKey is an algorithm and the key. If you say to verify with an HMAC-SHA256 key and the actual message was signed with none or with HMAC-SHA384 or anything else, it is invalid. If you have a database and you know your key bits but not what cryptosystem those bits belong to, your database is wrong.

The JWE spec is not obviously better. I wouldn't be utterly shocked if it could be attacked by sending a message to someone who intends to use, say, a specific Ed25519 public key, but setting the algorithm to RSA and treating the Ed25519 public key as a comically short RSA key.
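For illustration, a minimal sketch of the "algorithm lives in the key" design, using the third-party PyJWT library; the VerificationKey wrapper is my invention, but jwt.decode's algorithms parameter, which pins the accepted algorithm and rejects alg=none, is real:

    from dataclasses import dataclass

    import jwt  # third-party: pip install pyjwt

    @dataclass(frozen=True)
    class VerificationKey:
        algorithm: str  # fixed when the key is provisioned, never attacker-chosen
        key: str

    def is_it_valid(token: str, vkey: VerificationKey) -> bool:
        try:
            # Pinning algorithms=[vkey.algorithm] rejects alg=none and any
            # other algorithm an attacker writes into the JWT header.
            jwt.decode(token, vkey.key, algorithms=[vkey.algorithm])
            return True
        except jwt.InvalidTokenError:
            return False

    hmac_key = VerificationKey("HS256", "super-secret")
    token = jwt.encode({"sub": "alice"}, hmac_key.key, algorithm="HS256")
    assert is_it_valid(token, hmac_key)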
> The most famous JWT issue, to my mind, was people implementing JWT and -- as per spec -- accepting an algorithm of "none" as valid.
Sure, that is pretty silly. However, in SAML you have xmlsec accepting a non-standard extension where you can have the document "signed" with HMAC, where the HMAC key is specified inside the attacker-controlled document. I would call that basically the same as alg=none, although at least it's non-standard.
I did say "mostly." :) I almost mentioned this bug, but it's so famous, I doubt any mainstream OIDC libraries fall prey to it these days.
I'm a massive Aliens fan, and your word choice gave me a vivid picture of a vulnerability as a xenomorph egg that leads to a swarm of nightmarish monsters invading the colony.
I don't know which species is worse. You don't see them ignoring CVEs for a percentage.
NSA?
Redirecting the user, when they tap sign-in on untrustednewsite.com, to a new window for bigsitewithallyourdata.com with the domain hidden, and saying “Yeah, give us your login credentials” always felt like the craziest thing to me.
So ripe for man-in-the-middle attacks. Even if you just did a straight modal and said “put your Google credentials into these fields”, we’re training people that that’s totally fine.
Like how Plaid for banking works.
OAuth2 is about the simplest thing you could come up with to solve the delegated authentication problem. Its complexity stems mostly from the environment it operates in: it's an authentication protocol that must run on top of unmodified, oblivious browsers.
There's a lot of random additional complexity floating around it, but that complexity tracks changes in how applications are deployed, to mobile applications and standalone front-end browser applications that can't hold "app secrets".
The whole system is very convoluted, I'm not saying you're wrong to observe that, but I'd argue that's because apps have a convoluted history. The initial idea of the OAuth2 code flow is pretty simple.
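For reference, a minimal sketch of that code flow; the endpoints, client ID, and secret are hypothetical placeholders, but the parameter names come from RFC 6749:

    import secrets
    import urllib.parse

    import requests  # third-party: pip install requests

    AUTH_ENDPOINT = "https://auth.example.com/authorize"
    TOKEN_ENDPOINT = "https://auth.example.com/token"
    CLIENT_ID = "my-client-id"
    CLIENT_SECRET = "my-client-secret"  # only confidential clients hold this
    REDIRECT_URI = "https://app.example.com/callback"

    # Step 1: send the user's browser to the authorization server.
    state = secrets.token_urlsafe(16)  # CSRF protection; check it on return
    login_url = AUTH_ENDPOINT + "?" + urllib.parse.urlencode({
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "profile",
        "state": state,
    })

    # Step 2: the browser returns to REDIRECT_URI with ?code=...&state=...;
    # after verifying state, exchange the short-lived code for tokens over
    # a direct back-channel request.
    def exchange_code(code: str) -> dict:
        resp = requests.post(TOKEN_ENDPOINT, data={
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": REDIRECT_URI,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
        })
        resp.raise_for_status()
        return resp.json()  # access_token, and maybe refresh_token

Nearly everything beyond this (PKCE, refresh-token rotation, the device and implicit flows) is the accumulated complexity being argued about above.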
Sounds like you’re an NSA plant, Thomas :)
The problem is, as anyone with sufficient experience knows, it is perfectly possible and common for devs to design security disasters without any involvement by the spooks. I suspect the NSA counts on this and uses any covert influence they might have to slow corrections (e.g. profiles for OAuth2 that actually are reasonable, patches to common services, etc.).
Sabotaging a design is remarkably easy. We have several individuals who do it almost effortlessly; it's almost a talent for some. I suspect that doing it maliciously, while hiding behind some odd corner scenario or some compatibility requirement, can't be that hard and will be almost impossible to prove or detect.
On the other hand, why would the NSA even bother, with these talented individuals providing them a never-ending supply of security issues? They don't even have to pay for them, besides having their normal 0-day discovery team look for them. I think supporting this is their reaction when a 0-day they are relying on gets patched out.
I can imagine special cases where they would exert resources and effort, such as the IETF or some other economically efficient chokepoint (Rust?), but not in general.
As a senior professional in any field, you would, if you could, move from a reactive strategy to a proactive one.
Just because you're paranoid doesn't mean they're not after you.
Is there any write-up on the IETF reaction?
There were a bunch at the time; a historical retrospective is RFC 9446, "Reflections on Ten Years Past the Snowden Revelations".
So that's how we ended up with IPv6.
What's wrong with it?
Nothing. That's pure FUD
Just a concrete example of that time we know the NSA actually did their job properly: https://en.wikipedia.org/wiki/Data_Encryption_Standard#NSA's...
Fantastic and damning read. I had no idea they were so active and nefarious even in the 70s.
Are you sure you read it carefully? NSA's role in DES was to make it resistant to linear and differential cryptanalysis.
True, but they also forced the reduction of the key size from 64 bits to 56 bits, making it much more vulnerable to brute force.
So they hardened DES against attacks from those with much less money than the NSA, while making it weaker against rich organizations like the NSA.
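Some back-of-the-envelope arithmetic on what those 8 bits buy; the keys-per-second figure is roughly what EFF's 1998 "Deep Crack" machine achieved, so treat it as an order-of-magnitude assumption:

    # Expected brute-force time at ~90 billion keys/sec (EFF Deep Crack, 1998).
    KEYS_PER_SECOND = 90e9

    for bits in (56, 64):
        avg_seconds = (2 ** bits / 2) / KEYS_PER_SECOND  # search half on average
        print(f"{bits}-bit key: ~{avg_seconds / 86400:.0f} days on average")

    # 56-bit key: ~5 days on average
    # 64-bit key: ~1186 days on average

Eight extra bits is a 256x cost multiplier: enough to keep most 1970s-era attackers out, while staying within reach of an agency-sized budget.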
They made it resistant to differential attacks yes, but weakened it for their own attack surface
What does that mean?
They made the S-blocks appear less deterministic when sampling their behaviour, but made sure that anyone with prohibitively expensive equipment (say, the NSA) could brute force them in a reasonable time frame.
These are all words (mostly; "S-blocks" is not a thing, you mean "s-boxes"), but they're not coherent in a sentence. I think you're trying to point out that NSA shortened the key length from 64 to 56 bits, which has nothing to do with the DES s-boxes. NSA's interventions with the s-boxes made them more resilient to cryptanalysis, not less.
The thing you're thinking is nefarious is not (unless you're a connoisseur of good bugs and believe that differential cryptanalysis, the most important fundamental technique in block cipher analysis, should have been public earlier). It is in fact the opposite of nefarious.
You're right, I thought that the S-boxes were defined by the number of bits and that the NSA had deliberately shortened them, whilst obfuscating other attack techniques. Thanks for the clarification!
I'm too young to have firsthand experience, but I've seen the speculation that IPSEC was an example of this kind of strategy. It's certainly more -- ahem -- flexible than it probably ought to be. I know there have been exploits against ISAKMP implementations. I'd assume the baroque nature of the protocol drove some of that vulnerability.
Not IETF, but NIST, which I suspect is worse. Dual_EC_DRBG was withdrawn when it was discovered to be an attempt by the NSA to sabotage ECC specifications.
NIST EC DSA curves are the only ones used by CAs, are manipulatable, and have no explanation for their origin. Pretty much the entire HTTPS web is likely an open book to the NSA.
I don't know what you mean by "manipulatable". I don't know what you mean by "DSA". I assume what you're doing is casting aspersions on the NIST P-curves. They're the most widely used curves on the Internet (hopefully not for too long).
I don't think all that many serious people believe there's anything malign about them. It's easy to dunk on them, because they're based on "random seeds" generated inside NIST or NSA. As a "nothing up our sleeves" measure, NSA generated these seeds and then passed them through SHA1, thus ostensibly destroying any structure in the seed before using it to generate coefficients for the curve.
The ostensible "backdoor attack" here is that NSA could have used its massive computation resources (bear in mind this happened in the 1990s) to conduct a search for seeds that hashed to some kind of insecure curve. The problem with this logic is that unless the secret curve weakness is very common, the attack is implausible. It's not that academic researchers automatically would have found it, but rather that NSA wouldn't be able to count on them not finding it.
Search for [koblitz menezes enigma curve] for a paper that goes into the history of this, and makes that argument (I just shoplifted it from the paper). If you don't know who Neal Koblitz and Alfred Menezes are, we're not speaking the same language anyways.
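For the sake of argument, the hypothesized attack has this shape; derive_curve_params and is_secretly_weak are stand-ins (the real derivation is the ANSI X9.62/FIPS 186 procedure, and no public weakness predicate exists, which is exactly the plausibility problem):

    import hashlib
    import os

    def derive_curve_params(seed: bytes) -> int:
        # Stand-in for the real seed -> curve-coefficient derivation via SHA-1.
        return int.from_bytes(hashlib.sha1(seed).digest(), "big")

    def is_secretly_weak(params: int) -> bool:
        # Hypothetical predicate only the attacker knows. Stand-in: hits about
        # one seed in a million, so the search below finishes quickly.
        return params % (2 ** 20) == 0

    while True:
        seed = os.urandom(20)
        if is_secretly_weak(derive_curve_params(seed)):
            print("publish this 'random' seed:", seed.hex())
            break

The search only pays off if weak curves are common enough to hit by brute force yet the weakness class stays secret; the more common the class, the likelier outside researchers find it too, which is the Koblitz-Menezes argument in code form.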
The real subtext to this "P-curves are corrupted" claim is that there are curves everyone drastically prefers, most especially Curve25519 (the second-most popular curve on the Internet). Modern curves have nice properties, like (mostly) not requiring point validation for security (any 32-byte string is a secure key, which is decidedly not the case for ordinary short Weierstrass curves), and being easy to implement without timing attacks.
The author of Curve25519 doesn't trust the NIST curves. That can mean something to you, but it's worth pointing out that author doesn't trust any other curves, either. Cryptographers have proposed "nothing up my sleeves" curves, whose coefficients are drawn from mathematical constants (like pi and e). Bernstein famously co-authored a paper that attempted to demonstrate that you could generate an "untrustworthy" curve by searching through permutations of NUMS parameters. It was fun stunt cryptography, but if you're looking for parsimony and peer consensus on these issues, you're probably better off with Menezes.
Incidentally, you said "ECDSA curve". ECDSA is an algorithm, not a curve. But nobody likes ECDSA, which was also designed by the NSA. A very similar situation plays out there --- Curve25519's author also invented Ed25519, a Schnorr-like signing scheme that resolves a variety of problems with ECDSA. Few people claim ECDSA is enemy action, though; we all just sort of understand that everyone had a lot to learn back in 1997.
I don't think that quite emphasises enough why the P curves are rightly seen as suspicious.
The NSA knew all about the importance of nothing-up-my-sleeve numbers for eliminating suspicions around their involvement. That's why the curve constants are the output of a hash function.
And yet for this scheme to correctly eliminate suspicion, it's vital that the inputs to the hash function are also above suspicion. So a good choice of input to the hash function would be something like 1, or the digits of pi, or some other natural constant. Then there can be no possibility of game-playing even if the hash function is broken (which, as we now know, SHA-1 is).
The NSA didn't do this. Instead they picked huge numbers that have no obvious origin and then refused to explain where they came from. They created something that superficially looks like it should place the scheme above suspicion, but when you look at the details you discover it doesn't actually do so.
This is catastrophic! The absolute best case explanation is rank incompetence, but that's quite simply implausible as these schemes are otherwise designed in a highly competent manner. The NSA just doesn't develop cryptographic schemes with obvious howling errors in them. Except in this case, where they supposedly did.
None of the attempted justifications for why this is OK hold water, in my view. They all rest on a series of very dubious assumptions:
1. If there was a way to build exploitable curves, academics would know about it.
2. If an outside researcher did discover such a technique, they'd actually tell everyone about it and not, say, be bribed or coerced or convinced to stay silent by the NSA.
3. The NSA didn't know SHA-1 was broken despite being the designers of it, and therefore would have had to do a brute force search for an exploitable curve, which they couldn't have done.
What do we know today? Well, we know that SHA-1 has critical weaknesses which took decades to discover, we know that the NSA had an active programme of sabotaging cryptographic standards via kleptography, and we know that they are more than capable of corrupting large numbers of people across many institutions, including professional cryptographers working at places like RSA. So none of these assumptions look safe.
I think the idea that this can't be an NSA attack despite things looking exactly the way it would if it was boils down to a desire to believe that academia can't be far behind what the NSA can do. But that's a very subjective sociological belief. Having spent time at academic cryptography conferences and reading their papers, I don't find it hard to believe that the NSA could know things about ECC that aren't in the public domain. Academic incentives just aren't there to research the things the NSA cares about, especially in recent years.
Another take... how long did OpenSSL have the Heartbleed vulnerability? And that was EASY to understand: it was completely open and readable code, there are a billion-plus more programmers than cryptographers, and all these academics also didn't catch it.
I'm out of my depth for the math portion of the discussion, but I can't say that "other people would know" is a reason I can get behind.
IDK, it does look bad.
I think there's pretty widespread agreement that it looks bad!
Forged certificates would immediately show up in Certificate Transparency logs.
He isn’t saying the CA is bad. He is saying the curves selected are arbitrary, that they stuck to specific ones for no stated reason, and that the NSA has a backdoor to at least some TLS EC algorithms.
Now, IMO I thought the whole point was that the curve itself didn’t really matter, as you’re just picking (x, y) points on it and doing some work from there. But if there is a flaw, and it requires specific curves to work, well, there you go.
The NIST process (especially then) isn't fully open, which makes it easier to subvert with an inside agent.
There is no evidence that it was backdoored.
Circumstantial evidence is still evidence. I'll acknowledge that this is extremely tenuous, but: the NSA has gimped algos before, wants to do it again as frequently as possible, and has the capacity to do so.
Unfortunately, at some point we (non-crypto-experts) have to trust something.
>Circumstantial evidence is still evidence
Circumstantial evidence doesn't prove anything.
The article references Russia's SORM system, which provides not only the FSB but also the police and tax agencies with basically full access to everything on the internet, including credit card transactions. This stuff started in 1995 and was penetrated by the NSA.
> Under SORM‑2, Russian Internet service providers (ISPs) must install a special device on their servers to allow the FSB to track all credit card transactions, email messages and web use. The device must be installed at the ISP's expense.
Originally there was a warrant system, but it seemed quite liberal, and they don’t bother with secret-court-style “oversight” like the US:
> Since 2010, intelligence officers can wiretap someone's phones or monitor their Internet activity based on received reports that an individual is preparing to commit a crime. They do not have to back up those allegations with formal criminal charges against the suspect. According to a 2011 ruling, intelligence officers have the right to conduct surveillance of anyone who they claim is preparing to call for "extremist activity."
https://en.wikipedia.org/wiki/SORM?wprov=sfti1
Then in 2016 a counterterrorism law was passed, and it sounds like ISPs/telecoms are required to store everything for six months; it merely has to be requested by “authorities” (guessing beyond just the FSB) without a court order:
> Internet and telecom companies are required to disclose these communications and metadata, as well as "all other information necessary" to authorities on request and without a court order
https://en.wikipedia.org/wiki/Yarovaya_law?wprov=sfti1
> Equally troubling, the new counterterrorism law also requires Internet companies to provide to security authorities “information necessary for decoding” electronic messages if they encode messages or allow their users to employ “additional coding.” Since a substantial proportion of Internet traffic is “coded” in some form, this provision will affect a broad range of online activity.
> Then in 2016 a counter terrorism law was passed and it sounds like they ISPs/telecoms are required to store everything for 6 months and it merely has to be requested by “authorities” (guessing beyond just the FSB) without a court order
Russia is not alone. The EU required the same thing from 2006, until the EU Court struck it down in 2014: https://en.wikipedia.org/wiki/Data_Retention_Directive
https://en.wikipedia.org/wiki/Data_retention#European_Union is a fun read on how the EU fined countries for adhering to their respective national constitutions and refusing to implement that directive.
Looking beyond the EU, there are plenty of allegedly democratic countries on that wiki page with legally-required data retention:
> In 2015, the Australian government introduced mandatory data retention laws that allows data to be retained up to two years. [..] It requires telecommunication providers and ISPs to retain telephony, Internet and email metadata for two years, accessible without a warrant
So if governments are sniffing on high-entropy traffic, could we just send normal-seeming (SSH or whatever) packets with the payload coming from /dev/urandom? Would that be a denial of service?
If you could get enough people doing it, yes. But in practice the only people who would care enough would be the people the governments want to watch. Even a decent chunk of the crypto community would rather dunk on a cryptosystem they don't like than actually encrypt their emails (although of course how much of that is the NSA disrupting things is an open question).
I actually sorta think unencrypted email has been a boon for society as both corporations and government agencies have left paper trails that helped expose their misdeeds later.
I thought there were wildly popular messaging apps now that were encrypted by default?
There are, but basically only because of Facebook, who seem to actually care quite a bit about crypto and have the clout to make things happen. I don't think it's coincidence that they're also the main sender of encrypted emails (you can set your PGP key and then they'll use it for all their emails to you).
Create 1000s of files like this:
dd if=/dev/urandom of=encrypted1.zip bs=4M count=1
and upload them to Dropbox and Google Drive.
The NSA will archive them for you, expecting to decrypt the content later when a software bug is found or hardware is more capable.
And see the national debt and tax burden rise even more.
Who is really winning if anybody does this?
Hard drive manufacturers I own stock in?
When they find a way to "decrypt" /dev/urandom output into, you know, whatever is expedient for them at some future date, do you think a secret judge in a secret court is going to believe that you were just moving around random noise for the lulz?
Your random string at position 483828180 says "IDidIt"; that's our proof right here.
My conspiracy theory is that AES 256 has been cracked by NSA/CIA but they just shut up about it so everyone feels safe.
If AES is cracked by the NSA then they know that there is a fault. They must therefore assume that others could know about it and might exploit that fault, either now or in the future. A massive amount of American infrastructure, including intelligence services, relies upon AES not having such holes. They wouldn't sit on such information.
> including intelligence services, relies upon AES not having such holes. They wouldn't sit on such information.
The NSA uses a completely different set of algorithms called Suite A. They don't use AES for exactly this reason.
> Suite A will be used for the protection of some categories of especially sensitive information (a small percentage of the overall national security-related information assurance market)
> a small percentage of the overall national security-related information assurance market
That quote does not exist in the one Wikipedia citation. I am not sure where they got it. Any time data flows from one physical location to another it is encrypted with Suite A algorithms, which is arguably where it matters the most.
NSA is but one of maybe a dozen US intelligence agencies.
18 to be exact, but NSA houses the Cryptographic CoE and sets the relevant standards for the entire IC.
The government is known to hoard and deploy zero-days for surveillance... that was part of the Snowden leak. Google or ask ChatGPT about Tailored Access Operations, revealed in the Snowden papers.
If the flaw in AES is subtle enough, they may be (justifiably?) convinced that nobody else's capabilities will suffice to discover and/or exploit it - or, at least, that they have enough of a grasp of what everyone else is doing that no other actor could discover the flaw without the NSA finding out about it.
Encryption bugs are fundamentally different than other exploits; if I send traffic protected by encryption then it needs to remain encrypted as long as the information needs to be secret. With national security that can be decades.
It seems reasonable that they would set a self-imposed expiry date on exploits then get them patched quietly.
They wouldn't go public with such information. They would very quietly get the most important parts moved off of it, quickly, with as little noise as possible.
Moving any part of the US government off of AES would be noticed. The NSA doesn't send IT people to do installs. They set standards which are then incorporated into products by outside contractors selling to government agencies. Any attempted migration away from AES would be news here on HN within hours.
As an example of one such standard which was recently updated and still includes AES256, https://en.m.wikipedia.org/wiki/Commercial_National_Security...
That’s not the point: if there’s a grave flaw in AES, other countries can find it too.
If they can find a flaw in AES 256, others can too.
My theory is they've got a backdoor in the iPhone hardware random number generator. It's the most obvious high-value target, it's inherently almost undetectable (the output of a CSPRNG is indistinguishable from real random numbers), and you can keep crypto researchers busy with fights about whose cryptosystem is best, safe in the knowledge that it doesn't matter what they come up with, they were owned from the start.
They would be foolish not to at least try, either by intercepting entropy-supplying syscalls or compromising the hardware directly. RNG attacks are easy to design and nearly impossible to detect.
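As a toy illustration of why RNG attacks are so hard to detect, consider a "hardware RNG" that is really a keyed stream; this is the shape of the concern (cf. Dual_EC_DRBG), not a claim about any real device:

    import hashlib

    BACKDOOR_KEY = b"known-only-to-the-agency"

    class BackdooredRNG:
        """Output is indistinguishable from random without BACKDOOR_KEY."""

        def __init__(self, device_id: bytes):
            self.device_id = device_id
            self.counter = 0

        def random_bytes(self, n: int) -> bytes:
            out = b""
            while len(out) < n:
                out += hashlib.sha256(
                    BACKDOOR_KEY + self.device_id + self.counter.to_bytes(8, "big")
                ).digest()
                self.counter += 1
            return out[:n]

    # The device's "random" key material looks statistically fine...
    rng = BackdooredRNG(device_id=b"serial-12345")
    session_key = rng.random_bytes(32)
    # ...but anyone holding BACKDOOR_KEY plus the (often recoverable) device ID
    # can replay the stream and regenerate every key the device ever produced.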
Try to implement out-of-spec authentication and watch your logs fill up with bots.
What’s the implication? Sorry. I can see it going either way.
[flagged]
[flagged]
What an awful take.
what was the take?
There is an option in your profile called “showdead”. You should be able to read the comment after updating that option to be affirmative.
Should have just kept it off.
I love reading the controversial things. Admittedly, it’s very commonly low quality like OP in this thread but sometimes you get good if unpopular arguments. The challenge to my worldview is a utility that’s difficult to replace.
This is a genuine question, I am curious as to what drives men such as you to such comments.
It doesn't take much outlandish theorizing to wonder who might be interested in character assassination of people challenging surveillance.
I doubt it’s connected but would be fascinating if true.
Because they're so bad it would need a global system of total surveillance to catch them? Sure.
https://news.ycombinator.com/item?id=11872642
^ that is what that is. Or, in more detail: https://github.com/Enegnei/JacobAppelbaumLeavesTor/blob/mast...
You can't connect real things like these documents with slander by people who do nothing to step on the toes of the NSA. That is all that the BS about Assange or Appelbaum being a sex menace or Snowden being a Russian asset is. "Oh noes, they're a threat to the work we're not doing". Nobody is asking you to get drinks with Assange or Appelbaum. They don't want to be your friend. It's okay if you don't like them, for whatever personal reasons (and choosing to fall for this crap falls under personal reasons). It's not okay to be part of a mob that murders people by throwing a pebble each with this plausible deniability, in this "genuinely curious" just wondering kind of way. Enough is enough.
It certainly isn't fascinating. 3 letter agencies are torturing and murdering people, and having nothing better to do than gossip about gossip about messengers is just vulgar, boring, infantile cowardice, puffed up with not even clever words.
Wow, djb was on his committee, cool.
DJB is widely thought to be the entire reason he was in the program at all.
https://en.wikipedia.org/wiki/Daniel_J._Bernstein
I didn’t know who this was; others probably didn’t either.
Maybe not cool. There’s discussion in many places, but see https://news.ycombinator.com/item?id=13891900 for background.