So please welcome Nick.

OK. Hello, is this thing on? It is. OK. Hello, everybody. I'm Nick. I work at CloudFlare, everybody's favorite CDN, and I work in applied cryptography. And I'm going to talk today about the most interesting protocol in the world: Transport Layer Security.

So, has anybody used this browser? Do you recognize this little icon right here? This is Netscape 1.1, the first web browser that had SSL, that had some sort of layer of security built into it. It had this little icon that looked like a key, or maybe a pin from a grenade, and if the pieces are connected, that means your site is secure. Eventually that improved to a little lock icon, which evolved into the green lock in the address bar that we all know and love.

But HTTPS, SSL, this whole encryption thing on the web enabled a revolution, starting with e-commerce. You could go to this beautiful website here with your browser and buy some books or buy other stuff. It really was a revolution. To put it into context, this is a drawing we did once of what the Internet looks like. There are all these different links that connect over all these different protocols, but to access websites it's really point to point: you go from your phone to a data center, or from your laptop to a data center. It's a way to get information and websites.

And HTTP originally is a plain-text protocol. Everything is sent without any encryption or encoding; it's just letters on the wire. After this came out, people wanted to add some security, so a protocol was invented called HTTPS, where the S is supposed to stand for "secure", though lately it's been sort of another S-word. HTTPS is HTTP plus security, and right now what that means almost exclusively is this protocol called Transport Layer Security that you've all heard about. It provides data encryption and integrity, as well as authentication of the server: it tells you what website you're actually talking to and gives you a way to verify that it really is who it says it is.

This is a messy-details talk, and as with anything, the devil is in the details. So let's go back to the beginning. SSL, Secure Sockets Layer, was the original protocol that Netscape invented back in the early 90s to encrypt the web, and it was invented by this guy, Kipp Hickman. I heard some stories about this.
I saw that Moxie Marlinspike, at DEF CON 18, sort of tracked him down and had a phone call with him. And there was this story about SSL 1 being presented. I emailed Phillip Hallam-Baker, and he described the setting in which SSL 1 was presented: it was presented by Marc Andreessen at MIT to a group of around six people, and the people in the audience apparently broke it right away, because there were no authenticity checks at all. So SSL v1 was not exactly an auspicious start for a security protocol: it was completely unauthenticated.

To get back to it: it evolved, and I'll get into how SSL became something that we use every day. It really breaks down into two different pieces. One is public-key cryptography, which is how you as a browser and the server establish shared keys and the identity of the server; the other is data encapsulation: you want to send data to the server and back, so you have to encrypt it somehow and authenticate it.

The key-establishment part happens in what's called the TLS handshake, and this is pretty complicated; there are pieces going back and forth. But let's focus first on that little checkmark on the left, which is certificate validation. This is how TLS provides authenticity: this beautiful thing we've all built together called the public key infrastructure. So just bear with me for a second. How does this work? It's based on something called X.509 certificates. Certificates are files containing things like the identity of who you are and a public key, and they're usually digitally signed by a certificate authority. That certificate authority has a certificate itself, and oftentimes that's signed by another certificate authority, which forms a chain of trust. As long as you trust the root, the dark brown certificate there, and all of these signatures check out, you can say: OK, yes, this is someone I trust to issue certificates for websites, and therefore this site is really who it says it is.

So this whole chain-of-trust thing seems really easy, right? It's just checking digital signatures and making sure the metadata inside matches the site you're going to. Well... boy, were they wrong about that being easy. I'm going to explore a couple of different ways in which this was completely messed up, including implementation bugs, intentional flaws, and issues of trust.

To make it more clear, this is what we mean when we validate a certificate; this is that check in the first diagram. As a client, you parse the certificate, you find the parent certificate, and you verify that the signature on this certificate checks out against the parent's public key. If that parent is trusted by you, then you're good and you go on. If not, you do the same thing on the parent: find its parent certificate, check its signature, and so on. So that part is just making sure the certificate itself is correct.
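To make that chain walk concrete, here is a minimal C sketch of the loop just described. The cert_t type and its fields are hypothetical stand-ins, not any real X.509 library's API; in particular, the signature_ok flag stands in for an actual signature verification against the issuer's public key.

```c
/* A minimal sketch of the chain-of-trust walk described above.
   cert_t and its fields are hypothetical stand-ins, not a real X.509 API. */
#include <stdio.h>

typedef struct cert {
    const char  *subject;
    int          self_signed;      /* toy stand-in for "is a root" */
    struct cert *issuer;           /* parent certificate, NULL if unknown */
    int          signature_ok;     /* toy stand-in for "verified with issuer's public key" */
} cert_t;

static int in_trust_store(const cert_t *c) { return c->self_signed; }

static int chain_is_trusted(const cert_t *leaf)
{
    const cert_t *c = leaf;
    for (int depth = 0; depth < 16; depth++) {    /* bound the chain length */
        if (in_trust_store(c))
            return 1;                             /* reached a trusted root */
        if (c->issuer == NULL || !c->signature_ok)
            return 0;                             /* unknown issuer or bad signature */
        c = c->issuer;                            /* climb one level and repeat */
    }
    return 0;
}

int main(void)
{
    cert_t root = { "Example Root CA", 1, NULL,  1 };
    cert_t ca   = { "Example Sub CA",  0, &root, 1 };
    cert_t site = { "example.com",     0, &ca,   1 };
    printf("example.com trusted: %d\n", chain_is_trusted(&site));
    return 0;
}
```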
What about actually tying the certificate to your encryption channel? There are really two ways of doing this in TLS. First is the RSA method: as a client, you encrypt a pre-master secret, a bit of data you want to share with the server, under the server's public key and send it over. If the server can derive the same keys from that information as you, that's an implicit validation that it holds the corresponding private key. The other is Diffie-Hellman, and this, by the way, is much more recommended. There, the server signs the parameters that are used to derive the shared keys, and you verify that signature with the certificate.

So there are two things: validate the certificate, and tie the certificate to the channel. If phase one breaks, all an attacker has to do is use some untrusted certificate, and the client will say: OK, this is good, I believe this is a real trusted certificate, go on from there. If there's a problem with the second phase, you can use a real certificate but a fake signature.

Has anybody seen this code before? One or two hands out there. This is code in a library called GnuTLS, and what it's supposed to do is check whether a certificate is OK to act as an issuer. It turns out, if you remember your C correctly, that anything other than a zero return value is true and zero is false. What happens here is that if that one call, the one that gets the signed data, fails, you return result, and that result is going to be a negative number, so the caller treats it as "yes, this is OK". This bug really says: if you give it an invalid issuer, then, oh yes, this is fine, it's actually completely good. This code was introduced in 2005 and was only rediscovered as a bug in 2014. So in everything that used GnuTLS, for nine years, this certificate-validation code was in there.

This might be more familiar. Has anybody seen this code before? It's in the title of the talk, right? This is Apple's crypto library, and again, it's just a simple programming bug: accidentally, there are two "goto fail"s. I don't really know exactly how you could end up with two goto fails, but from what I understand they're not using git at Apple for this library, and if anybody remembers how merge conflicts in Subversion sometimes end up giving you duplicate lines, maybe that's how it happened. Some people have suggested it may be more subversive, but I highly doubt it. In any case, what this does is it will always go to fail, and reaching fail here with no error set means "this key exchange is correctly tied to the certificate". So all you have to do is use a certificate that's valid and put in some garbage as the signature, and you're going to be fine.

So these are easy ones, right? These are just simple programming bugs, and they completely invalidate the authentication in TLS.
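As an illustration, here is a hedged, self-contained C sketch of that pattern. This is not Apple's actual code, just the shape of the bug: the duplicated goto is executed unconditionally, so the final signature check is never reached and the function returns success. The helper names are made up for the example.

```c
/* A minimal sketch (not Apple's actual code) of the "goto fail" pattern:
   the duplicated, unconditionally executed goto skips the final check, so
   final_signature_check() is never called and err stays 0 (success). */
#include <stdio.h>

static int hash_update(int ok)         { return ok ? 0 : -1; }  /* hypothetical stand-ins */
static int final_signature_check(void) { return -1; }           /* would reject a bad signature */

static int verify_server_key_exchange(void)
{
    int err;

    if ((err = hash_update(1)) != 0)
        goto fail;
    if ((err = hash_update(1)) != 0)
        goto fail;
        goto fail;                     /* the accidental duplicate line */
    if ((err = final_signature_check()) != 0)
        goto fail;

fail:
    /* cleanup would happen here */
    return err;   /* returns 0: "signature OK" even though it was never checked */
}

int main(void)
{
    printf("verify returned %d (0 means accepted)\n", verify_server_key_exchange());
    return 0;
}
```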
Let's look at something a little bit more complicated. There are several different ways to do digital signatures; the most common for RSA is a standard called PKCS #1 v1.5. It's pretty horrible, but it looks like this: a zero, a one, a bunch of FF bytes, and a zero, then this DigestInfo, which tells you what type of hash it is (it's an ASN.1 object, and if anybody has worked with ASN.1, it's not super fun), and then the message digest. Back in 2006 at the Crypto conference, Daniel Bleichenbacher (I hope I pronounced that correctly; I am in Germany, anyway) surmised that if you parse this incorrectly and are willing to accept extra garbage at the end, an attacker can construct a fake signature that actually verifies. And this only works if you're using RSA with a public exponent of three.

I don't need to go too much into the details of RSA, but when you do a signature verification you want it to be fast, and RSA verification is really an exponentiation with the public exponent. People use three because that's incredibly fast: all you have to do is take the signature and cube it, and you see the encoded message come out. It turns out that if the verifier lets you put arbitrary garbage at the end, you can construct a value that cubes into something that looks like this, and then you can make a "valid" signature on some certificate and it will just work. And there are several roots in the trusted root stores of all sorts of browsers, roots that have been around forever, that use this public exponent of three. So this is actually practical if you can put garbage at the end of the digest.

Every implementation now checks that there's no garbage at the end. But there was recently another mistake. This is another library called NSS, which is very widely used; it was developed by Mozilla to do all the crypto in Firefox, and it was used in Chrome back in the day. It turns out there was another coding error, and in this case it's in the DigestInfo. As I mentioned, that's ASN.1, an encoding format that is crazy complicated, much more complicated than it has to be. The encoding used here, BER, the Basic Encoding Rules, which is what this bug, BERserk, is named after, allows multiple ways of encoding the same data: you can put in extra zeros, and the length-of-length field is flexible, so length values can be written in different sizes. That turned out to be the problem here: there's an integer overflow in the ASN.1 length handling, so you could supply a multi-byte length value and have the parser just skip over a bunch of garbage. As with the previous attack, you construct a message that cubes into something like this by repeatedly trying cube roots; there's an algorithm to do that, and my colleague Filippo put it up on GitHub, so if you want to exploit this, you can. Your signatures are going to look weird: a digital signature usually has quite a bit of entropy, but this is a small number, and if you cube it you get something that looks like this. And this was actually trusted by Firefox for a while.

So there are subtle ways that programming bugs can creep in and break the trust that you get from TLS.
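The common lesson from these two forgeries is that the verifier has to insist the encoded block is exactly right, with nothing trailing the digest. Here is a hedged C sketch of such a strict check over the block recovered by computing s^e mod n (the modular exponentiation itself is not shown). The function name and the toy DigestInfo in main are illustrative, not from any particular library.

```c
/* A hedged sketch of strict PKCS #1 v1.5 signature-padding checking.
 * 'em' is the encoded message recovered from s^e mod n (not shown here).
 * The key point versus the e=3 forgeries: the DigestInfo plus digest must
 * end exactly at the end of the block; no trailing garbage is allowed. */
#include <stdio.h>
#include <string.h>

static int check_pkcs1_v15(const unsigned char *em, size_t em_len,
                           const unsigned char *digest_info, size_t di_len,
                           const unsigned char *digest, size_t d_len)
{
    size_t i, ps_len;

    if (em_len < 11 + di_len + d_len)            /* at least 8 bytes of 0xFF padding */
        return 0;
    if (em[0] != 0x00 || em[1] != 0x01)
        return 0;

    ps_len = em_len - 3 - di_len - d_len;        /* padding must fill ALL remaining space */
    for (i = 2; i < 2 + ps_len; i++)
        if (em[i] != 0xFF)
            return 0;
    if (em[2 + ps_len] != 0x00)
        return 0;

    if (memcmp(em + 3 + ps_len, digest_info, di_len) != 0)
        return 0;
    /* digest is compared right up to em_len: nothing may follow it */
    return memcmp(em + 3 + ps_len + di_len, digest, d_len) == 0;
}

int main(void)
{
    /* toy values, not a real hash: 2-byte "DigestInfo" marker and 4-byte digest */
    unsigned char di[] = { 0x30, 0x0c }, d[] = { 1, 2, 3, 4 };
    unsigned char em[32] = { 0x00, 0x01 };
    memset(em + 2, 0xFF, 32 - 3 - sizeof(di) - sizeof(d));
    em[32 - 1 - sizeof(di) - sizeof(d)] = 0x00;
    memcpy(em + 32 - sizeof(di) - sizeof(d), di, sizeof(di));
    memcpy(em + 32 - sizeof(d), d, sizeof(d));
    printf("strict check: %d\n", check_pkcs1_v15(em, 32, di, sizeof(di), d, sizeof(d)));
    return 0;
}
```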
But speaking of issues of trust, speaking of the public key infrastructure: you don't necessarily have to find bugs and flaws to take advantage of it. Here's a quiz for everybody. In your browsers or operating systems at home, whatever you use, if you're not paranoid like me and haven't removed CAs: how many countries have control of certificate authorities in your trust store that can sign a certificate for any site and have it be trusted? Do we have a guess? "Too many." Yeah, that's right. It's actually forty-six. This is from the EFF's SSL Observatory data: forty-six countries around the world could just arbitrarily create certificates for any website. And that's kind of crazy. How do you really trust that a certificate was issued correctly when there are that many different entities that are supposed to follow the rules, but technically can create certificates for anything?

This came up earlier this year and the year before, but it's been an endemic problem with SSL: people want to look inside encrypted data, to look for bad stuff or, say, to inject ads. Superfish is one of these things. It was installed on Lenovo laptops, and what it did was install its own trusted root into your root store, so that every browser you have would trust this Lenovo root. There are all sorts of ways these roots get installed: your corporate IT department (this happens all the time, especially here in Europe), antivirus software, Superfish from your OEM, or, say, the country of Kazakhstan, which just proposed this for the entire country. They want to install a root so they can be the man in the middle, decrypt all your traffic, look at it, and pass it onwards. The way that works is that they forge, on the fly, a certificate that is trusted by your computer.

This is bad in itself, but it also turns out that some of these proxies don't validate certificates correctly upstream. So if someone who is not your proxy gets in the middle, anybody can create a fake certificate, and a lot of the research around Superfish found that this is the case. Oops. Not only are you having ads injected into your supposedly encrypted streams, anybody on the outside can fake sites. An extra bonus: if you install roots into your trust store, browsers will typically bypass the more advanced protections like key pinning. Chrome, for example, has a precomputed list of sites and which certificates they should have, but if it sees a certificate signed by one of these locally added extra roots, pinning doesn't apply anymore. Oops again.

So trust is hard, and there are a lot of different ways in which it's been broken, from bad code to bad infrastructure to just bad political organization of who gets access to these keys. Looking a little more deeply, there are a bunch of different libraries that do crypto, and as you can see here, almost all of them were affected by these validation bugs at one point or another in the last decade.

So that's the client side. Let's talk about some issues on the server side. Just as a summary, most websites use Apache, Microsoft IIS (sorry, not ISIS) and nginx, or, say, Google's internal stuff, and this mostly uses OpenSSL. You might have heard some things about OpenSSL lately; I'll get into that. Earlier in the last decade there was another library that was very common. It was by RSA and it was called BSAFE, and BSAFE was one of the first robust SSL implementations. All of these crypto things I've mentioned require you to generate random numbers to generate keys, and, I don't know if you've heard about this, you probably have: Dual_EC_DRBG. It turns out there is a random number generator, standardized by NIST and coming from the NSA, that is almost guaranteed to be backdoored. So even if your implementation is good, if the random numbers used in your system are bad, someone can work backwards and decrypt a stream.
So this Dual_EC_DRBG was in BSAFE, and it came up again even in the last couple of weeks: in Juniper ScreenOS they had Dual_EC_DRBG, although they changed the parameters, not to the ones the NSA knows, but to ones that potentially someone else knows. So yes, randomness is another weakness in the system.

Heartbleed was a big deal last year, sort of, but it turns out it's just another dumb bug, right? It's just an over-read that ends up disclosing information from the server. This one was really bad because there is another architectural problem in TLS servers: your private key is kept in the same memory space as everything else. If you think about how people design security systems, defense in depth, it's kind of absurd that the most exposed system you have, the web server, the thing actually being connected to by the outside world, has in its memory space the keys to the kingdom, the private key. Heartbleed just helped reveal this. And it was another big one: as I said, almost everything uses OpenSSL, and this was in OpenSSL for several versions.
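As a toy illustration of that class of bug (not the actual OpenSSL heartbeat code), here is a hedged C sketch showing how trusting a peer-supplied length turns a reply into a memory leak. The buffer contents and lengths are made up for the example.

```c
/* A toy sketch of the Heartbleed class of bug (not the real OpenSSL code):
 * the peer claims a payload length, and the missing bounds check makes the
 * reply copy whatever sits in memory next to the real payload. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* pretend this is the process's memory: a 5-byte payload followed by secrets */
    char memory[64] = "HELLO\0----SECRET-PRIVATE-KEY-MATERIAL----";
    char reply[64];

    size_t actual_len  = strlen(memory);   /* 5: the real payload is just "HELLO" */
    size_t claimed_len = 40;               /* attacker-supplied length field */

    /* the missing check would be: if (claimed_len > actual_len) reject the record */
    memcpy(reply, memory, claimed_len);
    printf("claimed %zu bytes, real payload %zu -> leaked: %.*s\n",
           claimed_len, actual_len,
           (int)(claimed_len - actual_len - 1), reply + actual_len + 1);
    return 0;
}
```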
So implementation bugs are fun, right? People write code, people mess up writing code; there are formal verification methods and other ways to catch this. But this talk is about TLS, and TLS might be hard to implement correctly, a lot of people made mistakes, so what about the protocol itself? It turns out it's been quite a disastrous couple of years for the protocol too.

This is the timeline of SSL and TLS versions. SSL 1 was 1994, I guess; there's not really a lot of record of when that happened. SSL 2 came out in '95, SSL 3 in '96, and then in '99 the IETF got their hands on it and turned it into a new protocol called TLS 1.0, which, if you look at the bits and bytes on the wire, is actually SSL 3.1. Not much is different: there's a stronger definition of padding, which I'll get into later, but not many things changed. Then in 2006 we got TLS 1.1, which again changed very little; it just made sure that in CBC mode there's an explicit IV, and I'll get to that too. Then TLS 1.2, and TLS 1.3 is sort of coming up in the next year.

So it looks like nothing much can happen; these protocols all seem similar. But in the meantime, HTTP, and how the web is used for sending information, evolved. Back in, say, '94, web pages were just one static thing: we didn't have cookies, we didn't have JavaScript, we didn't have all the fancy bells and whistles of the modern web. And it turns out that HTTP is really great for enabling crypto attacks: you can do repeated-plaintext attacks, you can do chosen-plaintext attacks, there's a lot HTTP allows you to do as long as you can get man-in-the-middle access. As an attacker, this is how you do it: if you are on the local network with somebody, you can use ARP spoofing or some other method to get into a man-in-the-middle position, and you can inject arbitrary JavaScript into unencrypted pages. With that JavaScript you can trigger the browser to send requests. Now, in HTTP there are some nice things like the same-origin policy that prevent you from doing a lot, but you can still inject JavaScript into one page to make requests to another, and the way cookies work, they will always be sent. Even if the server has cross-site request forgery protection, cookies still get sent, requests still get sent. It also turns out that if you can trigger errors in TLS, the client will re-send the same thing. So passwords, anything that goes along: you can put whatever you want in the URL, stretch it out, do chosen-plaintext tricks.

What this enables are what are called oracles: tools that let an attacker reveal the plaintext one bit or one byte at a time. Compression oracles are, I guess, the easiest to explain. HTTP, in order to save space, is compressed, typically with an algorithm like gzip, a dictionary-based compression algorithm in which two repeated strings compress to something shorter than two different strings. That ends up being enough to decrypt an entire session, if you can get the client to repeat requests. Using the setup I described, you can get the client to keep sending requests with the secret data in them, and from your man-in-the-middle position you can rearrange bytes and packets. You can't see inside the browser, but you can see the encrypted packets going past you.

CRIME and BREACH came out a couple of years ago as very practical implementations of a compression oracle. In practice it works like this: you choose your padding, and padding here is anything at the end of the query string; you control the JavaScript, so you can put whatever you want there. Then you want to guess the cookie, and the point is that you keep repeating guesses until one matches. If your guess matches the cookie exactly, the message is going to be smaller. Say you're guessing one byte at a time: a bad guess is probably going to be longer, five compressed blocks, and the correct guess is going to be four compressed blocks. So you use your padding to align yourself right on the boundary between encryption blocks, and you go one by one, decrypting the bytes of this cookie, all the way down to the bottom. This is a practical attack that has been pulled off. CRIME was about TLS compression, and everyone disabled TLS compression after CRIME happened. BREACH came out the next year at Black Hat, and it relies on HTTP compression, which, for performance reasons, nobody has turned off. So CRIME is not really a problem now, but BREACH is actually very exploitable on many, many websites, which is lamentable.
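To make the length side channel concrete, here is a toy C sketch of the CRIME/BREACH idea, assuming zlib is available: the attacker's reflected guess is compressed in the same stream as a secret, and the compressed size hints at whether the guess matches. The request strings and cookie value are made up; the real attacks refine one byte at a time, pad so the difference straddles a cipher-block boundary, and average over many requests.

```c
/* Toy compression oracle in the spirit of CRIME/BREACH (assumes zlib).
 * The attacker controls the reflected "guess"; the secret cookie is compressed
 * in the same stream, so a matching guess compresses noticeably better. */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

static size_t compressed_len(const char *text)
{
    unsigned char out[1024];
    uLongf out_len = sizeof(out);
    if (compress2(out, &out_len, (const unsigned char *)text, strlen(text), 9) != Z_OK)
        return 0;
    return (size_t)out_len;
}

int main(void)
{
    const char *secret = "Cookie: session=f7c3";               /* hypothetical secret */
    const char *guesses[] = { "Cookie: session=zzz1",          /* wrong guess  */
                              "Cookie: session=f7c3" };        /* right guess  */
    char request[512];

    for (int i = 0; i < 2; i++) {
        /* attacker-chosen reflection plus the secret header in one compressed body */
        snprintf(request, sizeof(request),
                 "GET /?q=%s HTTP/1.1\r\nHost: example.com\r\n%s\r\n\r\n",
                 guesses[i], secret);
        printf("guess \"%s\" -> %zu compressed bytes\n", guesses[i], compressed_len(request));
    }
    /* The smaller output identifies the guess that matches the secret. */
    return 0;
}
```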
But compression is not the only way to build an oracle. Padding oracles are an older idea, originally described by Vaudenay in 2002, and they rely on two really, really bad choices that were made in deciding how to encrypt things in TLS: CBC mode, and the MAC-then-encrypt construction.

If you recall, block ciphers work on fixed-size blocks: a certain number of bytes goes in, and that same number comes out. For AES, the most popular block cipher, 16 bytes come in and 16 bytes come out. I'm sure you've all seen the ECB penguin; it's the classic example of why chaining matters. If you only encrypt blocks on their own and don't chain them together, and you have low-entropy data that repeats the same 16-byte blocks over and over again, like images, you can still make out the image. CBC is just a way of mixing one block into the next: you XOR the previous ciphertext block into the next plaintext block and then encrypt, and that gives you a good amount of randomness throughout the data. Decryption works much the same way: you start with the initialization vector and XOR the previous ciphertext block into each decrypted block as you go down. So this looks fine, right? It seemed like a good way to chain crypto, or so we thought in the 90s when this was adopted.

There was another debate at the time: you need integrity and encryption, so which do you do first? Do you encrypt first and then add an integrity tag, or do integrity first and then encrypt? Somewhat disastrously, they decided to do the integrity first and then encryption, MAC-then-encrypt, and in TLS it looks like this: there's the data, then a MAC, then padding, and that whole thing is padded up to a 16-byte boundary for AES (or an 8-byte boundary for 3DES) and then encrypted. The MAC is computed over the data together with the header and the sequence number, so the data is authenticated and then encrypted. But the padding is not authenticated, and that turns out to be a critical flaw in TLS and SSL.

So what does that padding look like? The terminal byte is the number of padding bytes, and before it come the padding bytes themselves: you can pad with a zero and no bytes before it, or a one with one byte before it, and so on. TLS later specified that those padding bytes all have to repeat the padding value; I'll come back to that. But just having unauthenticated data is enough to give you what's called a padding oracle: if an attacker can distinguish between a padding guess that is correct and one that is incorrect, that's enough to decrypt an entire message. Technically, it works like this: you're in a man-in-the-middle position and you want to learn the last byte of a block. You modify the previous ciphertext block, putting in a guess, and the way CBC mode works, that guess gets XORed into the decrypted block, and the result is interpreted as padding. In the bottom case here, you know the padding is valid, because a zero is valid padding; in the other case, the padding comes out wrong, because your guess XORed into the decrypted data produced something invalid. You can then work backwards with this XOR logic to figure out what the real last byte is, and once you have it, you work back and recover the byte before it. All you need as an attacker is to be able to tell whether you're in case one or case two.
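Here is a hedged C sketch of the record check that creates the oracle. The error codes and the mac_ok callback are hypothetical, but the shape matches what the protocol asks for: check the padding, then check the MAC, with the two failure modes distinguishable by error code or by timing.

```c
/* A minimal sketch of the CBC-record check that creates a padding oracle.
 * Error codes and the mac_ok callback are hypothetical; the point is that
 * "bad padding" and "bad MAC" are distinguishable (by code here, or by timing). */
#include <stdio.h>

enum { RECORD_OK = 0, ERR_BAD_PADDING = 1, ERR_BAD_MAC = 2 };

/* plaintext layout after CBC decryption: data || MAC || padding || pad_length */
static int check_record(const unsigned char *pt, size_t len,
                        int (*mac_ok)(const unsigned char *, size_t))
{
    unsigned char pad = pt[len - 1];
    if ((size_t)pad + 1 > len)
        return ERR_BAD_PADDING;

    /* TLS (unlike SSLv3) requires every padding byte to equal the pad length */
    for (size_t i = len - 1 - pad; i < len - 1; i++)
        if (pt[i] != pad)
            return ERR_BAD_PADDING;   /* fast path: the MAC is never computed */

    if (!mac_ok(pt, len - 1 - pad))
        return ERR_BAD_MAC;           /* slow path: MAC checked over everything before the padding */
    return RECORD_OK;
}

static int toy_mac_ok(const unsigned char *p, size_t n) { (void)p; (void)n; return 1; }

int main(void)
{
    unsigned char good[8] = { 'H', 'I', 0xAA, 0xBB, 3, 3, 3, 3 };  /* 2 data, 2 "MAC", pad 3 */
    unsigned char bad[8]  = { 'H', 'I', 0xAA, 0xBB, 9, 9, 9, 3 };  /* padding bytes wrong    */
    printf("good record -> %d, tampered record -> %d\n",
           check_record(good, 8, toy_mac_ok), check_record(bad, 8, toy_mac_ok));
    return 0;
}
```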
This padding oracle was originally built right into the protocol. In the original Vaudenay attack from 2002, you'd get a different error code depending on whether the padding was wrong or the MAC was wrong. As an attacker, you try all 256 values for that byte, and once you've guessed correctly, you get the error code that says "bad MAC", meaning you got past the padding check, your guess was right, and you've just decrypted one byte.

This problem keeps coming back, because there are many, many ways to build such a side channel. The error side channel works as described: an incorrect guess gives you "bad padding", a correct one gives you "bad MAC". The next year, a timing side channel was shown: if your guess is incorrect, the server is fast, because it never computes the MAC, and it's slower if you actually got the padding right. So you just look at how long the server takes, and voilà, you have another oracle; you might just have to try a few extra times to average out the timing jitter. This lets you decrypt entire messages, and in the context of HTTP that's your entire cookie, or your password, or something like that.

Now, this is all well and good, but as I said, things keep coming back. Paterson and his co-authors came out with Lucky 13 two years ago, and it relies on a really obscure fact about how the MAC is actually computed. As I mentioned, you MAC an eight-byte sequence number, a five-byte header, and then your data. The fix for the earlier timing attack was that you always MAC the whole thing: even if the padding is wrong, you MAC the whole message. But it turns out that in most HMAC implementations there's a lucky byte, a point at which the MAC starts taking more time than before, because of the number of compression-function calls inside it: if you're at 55 bytes, it's going to be faster than at 56 or 57. So take an entire 64-byte message, aligned on AES blocks: if you guess the padding wrong, it's a slow hash; if you only guess the first byte of padding right, it's also slow; but if you get really lucky and guess the last two bytes of padding correctly, it's fast, and voilà, you're two steps into your oracle attack and you can take it all the way down.

This was found again this year, a couple of months ago, in Amazon's new implementation, s2n. They created a TLS implementation from scratch, what could go wrong? They tried to protect themselves from Lucky 13 and ended up with the same sort of issue. And here's a graph that a colleague of mine, Filippo, put together while trying to fix this in Go's crypto library, where, rather presciently, Adam Langley, who wrote it, had left a comment saying there's probably a timing side channel in this MAC. It turned out that yes, there was, and it was Lucky 13. So that's another really subtle thing that you would not have predicted in the 90s and that came back to bite us.
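The 55-byte boundary comes from how Merkle-Damgard hashes pad their input. Here is a small C sketch of that arithmetic for a 64-byte-block hash such as MD5, SHA-1, or SHA-256; HMAC adds more calls on top, but the marginal difference is what leaks.

```c
/* Back-of-the-envelope arithmetic behind Lucky 13: MD5/SHA-1/SHA-256 process
 * 64-byte blocks, and one padding byte plus an 8-byte length field must fit,
 * so hashing 55 bytes takes one fewer compression call than 56 or 57 bytes. */
#include <stdio.h>

static unsigned compression_calls(unsigned msg_len)
{
    /* 1 mandatory 0x80 padding byte + 8-byte length, rounded up to 64 bytes */
    return (msg_len + 1 + 8 + 63) / 64;
}

int main(void)
{
    for (unsigned len = 54; len <= 57; len++)
        printf("%u bytes hashed -> %u compression call(s)\n", len, compression_calls(len));
    return 0;
}
```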
And a really bad thing about all this is that in TLS 1.0 and 1.1, the only way to use a block cipher is CBC mode, so unless both sides support TLS 1.2, you're kind of screwed.

That leads us to another idea: the downgrade attack. The general philosophy here is that if you support something old, someone is going to trick you into using it. There are ways to defend against that, but it's really difficult to get right. As a bit of background, these are cipher suites: the alphabet soup SSL uses to decide which crypto algorithms to use. It breaks down into a key exchange, a certificate type, a cipher, and an integrity function. The server gets a list from the client, picks its favorite from that list, and chooses it. If anybody was over in the next hall for the previous talk, you'll know that there's something called export ciphers. These were supported for a very long time, to comply with the antiquated 1990s crypto export law in the United States, and they are really, really weak ciphers. They were supported by clients and servers all the way up until this year, and they still are. But the server will always pick the best suite the client offers, so everyone's safe, right? Turns out: no.

These are the attacks that were described in the previous talk, in which you can end up forcing clients and servers to use this really bad, crappy crypto. The reason boils down to the fact that the only thing in this handshake that's authenticated is the key-derivation stuff, the pieces you use to derive the shared keys. So this is what FREAK is: you sit in the middle, the client says "hey, these are the ciphers I support", you just change that list into "I only support export ciphers", and the server says "OK, well, I support an export cipher too, I guess we're going to use this; you must be an old client." Then all you have to do is crack that export key, which is 40 bits of encryption, and you're good to go, or in some cases you have to crack a 512-bit RSA key, which is also computationally doable.

Logjam works very similarly: FREAK works against RSA cipher suites, Logjam against Diffie-Hellman cipher suites. It relies on the fact that you can force the client and the server to agree on weak Diffie-Hellman parameters for the key exchange, just old crypto with small key sizes, and you crack them, and then as a man in the middle you can manipulate and read everything. It also relies on the fact that many servers use Diffie-Hellman parameters that were precomputed: they're all using the same shared Apache prime, so you can do about three quarters of the work of the attack ahead of time. So all of this relies on the unauthenticated parts of the handshake. More on this to come.
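As a sketch of the man-in-the-middle step: the actual FREAK attack rewrites the ClientHello on the wire, while this toy C only shows the list manipulation. The cipher-suite constants are real TLS code points used for illustration; nothing here is taken from a real attack tool.

```c
/* A sketch of the FREAK-style downgrade step: rewrite the client's cipher-suite
 * list so only export-grade suites remain. The suite list isn't authenticated
 * until the Finished messages, and a crackable export key lets the attacker
 * forge those, so neither side notices in time. */
#include <stdio.h>
#include <stddef.h>

#define TLS_RSA_EXPORT_WITH_RC4_40_MD5          0x0003u
#define TLS_RSA_WITH_AES_128_CBC_SHA            0x002Fu
#define TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256   0xC02Fu

static int is_export_suite(unsigned suite)
{
    return suite == TLS_RSA_EXPORT_WITH_RC4_40_MD5;   /* toy check; real lists are longer */
}

static size_t downgrade(unsigned *suites, size_t n)
{
    size_t kept = 0;
    for (size_t i = 0; i < n; i++)
        if (is_export_suite(suites[i]))
            suites[kept++] = suites[i];
    return kept;   /* forward only the weak suites to the server */
}

int main(void)
{
    unsigned hello[] = { TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
                         TLS_RSA_WITH_AES_128_CBC_SHA,
                         TLS_RSA_EXPORT_WITH_RC4_40_MD5 };
    size_t n = downgrade(hello, 3);
    printf("forwarded %zu suite(s), first = 0x%04X\n", n, hello[0]);
    return 0;
}
```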
In the theme of downgrade attacks: POODLE. This turned out to be the nail in the coffin for SSL v3. You can do this thing called the downgrade dance: force browsers to negotiate down from TLS 1.2 to 1.1 to 1.0 and finally to SSL 3. And as I mentioned before, the padding in SSL 3 is just a bunch of arbitrary bytes followed by a length byte, so you can fill it with anything; if you align your blocks so that the last block contains only that one relevant byte, you can do a padding attack. When people came up with this, the reaction was: obviously we can do this, this is terrible, and SSL 3 is completely broken. Thus POODLE. Now, I mentioned there were a few changes between SSL 3 and TLS, and one of the major changes is that the padding has to repeat the same padding value. Well, some servers don't actually check that, so you can end up doing the same attack against them, and a fairly large percentage of websites are still susceptible to this TLS variant of POODLE.

Another threat to TLS is aging crypto. This is sort of in the news right now with the signature algorithms that are based on hash functions. Back here at CCC, in 2008, some researchers were able to forge one certificate from another by finding a collision in the hash function used in the signature. The details are a little complicated, but they were able to make themselves a trusted certificate authority and issue certificates for everything, and from that point onwards MD5, an old hash function, had to be booted out of everything.

So how do all these attacks fit on the timeline? This is our TLS timeline with the major attacks that I mentioned, and they are really concentrated after TLS 1.2. BEAST, which I didn't go into, was the first of the backronyms, where you take a nice word and make an attack name out of it. Down around Heartbleed is when the logo trend started, so every vulnerability had to have a logo. There's a really high concentration of these things in the last three or four years. And if you look here: TLS 1.2 came out in 2008, and in 2012 essentially nobody was using it; this is from SSL Pulse. Even in 2014, SSL v3 was still supported almost universally. Now we're in a slightly better position, but it takes a really long time for servers to upgrade, and the same goes for clients. Right now we're seeing something around 75 percent of connections on TLS 1.2, which is great; you can clap for that if you want. TLS 1.2 solves most of these problems, but most browsers only shipped it around 2013, 2014, at least five years after the standard. So it takes a while for things to get up to date.

This paints a grim picture, but at least from the vulnerabilities we've seen, we can learn a few lessons. One is that if an attacker can learn even one bit of information about the plaintext, it's basically over: repeated data can be sent, and that one bit can be expanded into an entire message. There are side channels everywhere: timing side channels, computational side channels; there's been research in cloud computing showing that if you can do cache timing, with your process running next to a process doing crypto, you can learn how long things take. This is incredibly dangerous. Almost all of the oracles I mentioned relied on unauthenticated data: doing MAC-then-encrypt and leaving even the padding unauthenticated. Verifying padding and checking it correctly is incredibly hard; it turns out we've messed it up many times. But AEAD, authenticated encryption with additional data, is a construction that was introduced in TLS 1.2, and AES-GCM is sort of the most popular version of it. We should definitely use those and just drop CBC altogether.
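For completeness, here is a hedged sketch of what that recommendation looks like in code: AES-128-GCM through OpenSSL's EVP interface, assuming OpenSSL 1.0.1 or later is available. The key, nonce, and additional data are toy values; a real TLS stack derives and manages these per record, and real code would check every return value.

```c
/* AEAD in practice: AES-128-GCM via OpenSSL's EVP API (assumes OpenSSL >= 1.0.1).
 * Toy key/nonce/AAD; the point is that the header data is authenticated and the
 * ciphertext carries a tag, so there is no unauthenticated padding left to attack.
 * Build with: cc gcm_demo.c -lcrypto */
#include <stdio.h>
#include <openssl/evp.h>

int main(void)
{
    unsigned char key[16] = "0123456789abcdef";
    unsigned char iv[12]  = "unique-nonce";     /* must never repeat for a given key */
    unsigned char aad[]   = "record-header";    /* authenticated but not encrypted   */
    unsigned char msg[]   = "attack at dawn";
    unsigned char ct[64], tag[16];
    int len = 0, ct_len = 0;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_128_gcm(), NULL, key, iv);
    EVP_EncryptUpdate(ctx, NULL, &len, aad, (int)sizeof(aad) - 1);   /* feed the AAD */
    EVP_EncryptUpdate(ctx, ct, &len, msg, (int)sizeof(msg) - 1);
    ct_len = len;
    EVP_EncryptFinal_ex(ctx, ct + ct_len, &len);
    ct_len += len;
    EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, (int)sizeof(tag), tag);
    EVP_CIPHER_CTX_free(ctx);

    printf("%d ciphertext bytes, 16-byte tag starts with %02x%02x\n",
           ct_len, tag[0], tag[1]);
    return 0;
}
```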
CBC is just a really, really difficult, scrappy construction. It has served us well up to now, but there are many ways to attack it, and there will be many more in the future. X.509, the structure for certificates, and ASN.1 are incredibly hard to implement correctly, and they're at least nine years older than SSL itself. And as for downgrade attacks: support insecure crypto or protocols at your own risk, because somebody will find a way to downgrade you into using them.

I skipped a lot of issues here; there are tons more. BEAST; the Bleichenbacher RSA decryption oracle; SChannel had a remote code execution bug a few months ago; the Triple Handshake attack; I didn't even mention client authentication and how broken that is, or the whole ecosystem of certificate authorities messing up when it comes to issuing certificates; I didn't go into the RC4 weaknesses, where Paterson, around the same time as Lucky 13, found that you can decrypt an entire TLS stream if you use RC4; there are vulnerabilities in bignum implementations; there are issues with forward secrecy. TLS is just absolutely loaded with problems, from the implementations to the protocol itself: just enough complexity to hang yourself with.

Now, I would say this is the end of the talk, but there's just one more thing. As a thought experiment, I'd like to follow up the attacks we saw in the other room with some others. There are other negotiations that happen within TLS. As we talked about with FREAK and Logjam, there's "which Diffie-Hellman group do we choose" and "which cipher do we choose". But there are other negotiations too: there's NPN and ALPN, the extensions that let you negotiate an upgrade to HTTP/2, and there's "which elliptic curves do I support". That one is really interesting: TLS supports a ton of kind of crazy elliptic curves. What if you did a downgrade attack on that?

So here's a new, hypothetical kind of TLS vulnerability I'm introducing, called CurveSwap. It follows the exact same model as FREAK and Logjam. From a man-in-the-middle position, you take the supported-curves list and swap it for the smallest, weakest curve supported by both parties, and you make sure an elliptic-curve Diffie-Hellman key exchange is used, so that both parties derive their keys on this small curve. Step two, and bear with me here, we don't really know how to do this one yet: solve the discrete log problem. OK, yes, that's not really doable right now. But on small curves, it might be.

Before we get to that, let's just think about which curves are supported in TLS. There are some curves in there, and this one is a 163-bit binary curve. That's not the type of curve we typically use today, but when elliptic curves were originally standardized for TLS it was all the rage: we had prime curves, we had binary curves, and they were considered roughly equivalent. If you look at the data, and I did a survey of a lot of ClientHellos, 4.3 percent of clients support this kind of weak curve, and it's about as strong as, I think, the equivalent of RSA-1024.
So even though, when you're doing a negotiation, you expect to be using a really strong elliptic curve, like a 256-bit one, for the key exchange, a man in the middle can downgrade you to this nice smaller curve. And if, as an attacker, you can break the discrete logarithm problem on this curve, then for a percent or two of the Alexa top sites you can make both ends use this really crappy curve for key establishment. But, as you're chuckling, nobody has really broken the discrete log problem for curves around this size; the largest curve that's been broken is 110 bits or so. Still, there's a reason we don't use binary curves anymore; they've gone out of style. There are index-calculus-style techniques that are supposedly better than brute force on binary curves, so people consider them weak, and we're not sure what kind of research has been done in private on, say, binary curves.

The good news is that in TLS 1.3, the new standard, all of these curves have been excised, and a lot of these problems have been removed from the protocol itself. But it's been a long and rocky road for Transport Layer Security. And that's the end.

Before we start the Q&A, just a quick announcement: there has been a room change, so the talk scheduled here next has moved. You should go there if you want to see it in person, or check whether there's going to be a stream in this room. I will give you, let's say, 30 seconds to leave, and then please be quiet, because we want to do the Q&A properly. You can use the time to think of questions. We have lots of microphones: two there, one there, one on top, and on the other side too. And for those of you on the internet, ask your questions there; we have a signal angel who will read them out for us. OK, let's start at number one, please.

Could you say something about CloudFlare's position on SHA-1 certificates and not phasing them out the way the others are?

Yeah, I'd love to. As with any of these protocols or types of crypto that are broken or old, once they're definitively broken they should be completely excised. There's been a plan to sunset SHA-1 by, let's say, the beginning of 2017, so the end of next year. The timing on which this should happen is debatable, and there's a real question about whether a SHA-1 certificate forgery is a credible threat within the next year, or only within the next five years. So whether or not you should continue to issue SHA-1 certificates is a big debate that's going on right now.

But if you compare to MD5, now we have freestart collisions, right? And it took two years.

One thing to keep in mind, and I can go back to the slide about the 2005 collision, is that having a collision in SHA-1 is not enough to forge a certificate. What you have to do is predict the certificate that the CA itself creates, so that it aligns perfectly with the certificate you want to forge, and there are techniques, like putting random entropy into the serial number, that help defend against that. But yes, there's a freestart collision. I don't know; it could be within the next month that we see a real SHA-1 collision.
But in terms of colliding certificates, that's somewhat of a different story. The internet, please.

Thank you. Do you think we would be in better shape today if that had won the race over HTTPS? And why have we not moved to something like SPKI/SDSI or any other alternative to the PKI?

Well, speaking as part of CloudFlare, we control one side of the equation, and as with any security protocol, you need both the client and the server to agree on what they're going to use. So the evolution of the entire marketplace determines which algorithms or protocols you end up with, and this one ended up winning over the long term because it was the most widely supported. As with any of these things, it's kind of winner-take-all. Number three, please.

Yes, hi. We clearly suck at implementing protocols, right? So we need to be able to upgrade stuff relatively rapidly, because we will break it again and again and again, and we'll need to upgrade. Upgrading browsers is hard; I actually have to take a snapshot before I upgrade my browser, because I really don't know what's going to break. Why isn't any of the browsers going the route of, say, GPG, where basically TLS is a completely separate piece of software running in a separate process, communicating over stdin and stdout, and upgradable without touching any of the rest of the system? Thank you.

So the question is about extracting TLS from the browser itself and putting it somewhere else. One of the interesting things that has happened in the TLS ecosystem is that many of the browsers are actually doing that for certificate validation: they outsource it to the operating system itself. Firefox is one of the lone exceptions that carries its entire PKI stack, I think for historical reasons, really. Chrome and Firefox auto-update, and they do so because they want to make sure you have the latest and greatest. They have kind of taken the approach that if your browser upgrades and it breaks, too bad for you, we know better; that's their strategy. But I'm not a browser vendor, I don't work for one of those companies, so I can't really speak to what they want to do. Number two, thank you.

You mentioned the defense-in-depth approach of separating the private key from the web server. Now, I know that CloudFlare offers Keyless SSL as well, which I really like. Do you think that, instead of just being a business advantage for you, this could be a realistic security measure for many websites, separating their private key off behind a separate layer?

Yeah, I think this is something that could be more generally applicable, and it's something that was brought up at the last IETF meeting: there's a proposal for separating private-key operations from the TLS server itself. I would encourage you to check that out. Signal angel, please.

Someone on the internet wants to know whether IPv6 encryption could replace TLS, or at least mitigate some of these issues.

IPv6 encryption, yeah. I don't see that as a replacement for TLS. I see different layers of encryption, whether it's tcpcrypt or anything lower down the stack, as additional layers of defense in depth. The good thing about TLS is that it encapsulates your messages and it's really point to point on that side.
So it allows you to really trust the server itself, whereas encryption at lower levels, yes, that's great too. We have encryption in many different protocols: Wi-Fi is encrypted, your 3G and 4G signals are encrypted. More encryption is better, and I don't think having encryption at a lower layer should stop you from adding encryption at a higher layer. Microphone number four, please.

Hello. You have shown how slowly the standard itself is evolving, and adoption evolves even more slowly, and that would be my question: do you agree that we should invest heavily, maybe even in the standard itself, to make it more easily adoptable, and do you have concrete plans or proposals for how this could be achieved?

I don't have any concrete plans or proposals on that, but I think it's a good idea. I think upgradeability is one of the most difficult problems we're facing with these protocols, because, as you saw, all these different versions had problems and they had to be fixed in different ways. And with, say, the Internet of Things, the buzzword we're all talking about, these are embedded devices that will ship with one copy of a protocol, and it's really hard to give firmware a clean upgrade cycle. So I don't have a solution there, but I see that as one of the biggest problems going forward with protocols that are potentially insecure. OK, thank you. Number six, please.

Hey, thanks for this nice set of reminders. One thing I'm wondering: do you have any idea when TLS 1.3 will be finalized, so I can finally actually enable it on my web server?

Not yet. I don't have a concrete date on that, but it should happen within the next year, hopefully. Thanks. Internet, please.

Why is CloudFlare using ECDSA, even though it also relies on very high-quality entropy for its pseudo-random number generator?

That's a good question. As I mentioned, random number generation is very important for secure cryptography, and ECDSA, or any DSA-based algorithm, requires fresh entropy for every signature. Or at least it did originally: there's a more recent RFC on how to use ECDSA in a deterministic way, where you don't need fresh randomness per signature, and CloudFlare has implemented that. So our ECDSA is not using pure random numbers; it uses a deterministic algorithm, and if our RNG has a problem it's not a big deal for us. Number one, please.

Hi. After talking through what's basically a string of calamities and depressing stuff, I wonder if there are things that make you happy or optimistic about the future of TLS. It seems like a lot of those attacks came just after Snowden, so people are finally starting to pay attention; things like the CA/Browser Forum baseline requirements were only finalized in 2012; once XP and Android 2 are dead we'll have a shorter long tail. Are there other things that you're looking forward to, or that are awesome, about TLS?

Yeah, some of the things that interest me and that I'm excited by are, say, the miTLS project, which is a formally verified TLS. One thing I sort of glossed over was the Triple Handshake vulnerability, and that was discovered by formal analysis. And I think the work that's happening in TLS 1.3 to just eliminate, say, the bad curves or the RSA handshake altogether are promising steps.
But I think they could go a lot farther in terms of removing things having to do with X.509 and ASN.1 and all these very old legacy technologies that we're carrying along with us. Maybe that wasn't too optimistic. Number two, please.

Yeah, just out of curiosity, what's your opinion on Let's Encrypt?

I think Let's Encrypt is great. As much as I railed on TLS in this talk, having encryption of any sort is preferable to having no encryption at all, and the greater the percentage of the web that's encrypted, the better. Personally, I would be absolutely overjoyed if we could get every site to be HTTPS-only, and Let's Encrypt is a big part of that. For websites that can't use, say, CloudFlare, where we offer SSL for free, but not everybody can use our services, this is a great option for getting a free certificate.

Could we please get two more questions, so there's no point in standing up now; we will just finish what we started. Go ahead, please.

Suppose we have a customer that insists on still using SSL version 3, and we badly need this customer. How do we convince them to upgrade?

How to convince them to upgrade, that's a good question. There are people who have legacy systems, right? SSL v3, as we saw, had essentially 100 percent adoption on servers up until POODLE came out, and there are tons of client libraries that only speak SSL v3. One example is Pingdom, a popular tool for checking whether your website is online: it was using SSL v3, but it actually retries with TLS if the first connection fails, which we found out once we disabled SSL v3. But it's a hard sell. If you have a customer who wants to use a legacy protocol, and it's only in use for specific clients that can't be upgraded, then it's better than having no encryption. So I don't really have a strong argument. Microphone number two, please, and please make it short if you can.

Sure. CloudFlare still seems to make life harder for Tor users. When will this stop?

Well, there are some people in the front row that I think you should talk to. We're working on it.

OK, thanks. OK, please, once again, thank Nick.