
Overview of Securing the Open-Source Future

For this 21st episode of Access Control Podcast, a podcast providing practical security advice for startups, Director of Developer Relations at Teleport Ben Arent chats with Filippo Valsorda. Filippo is a cryptography engineer and open-source maintainer. From 2018 to 2022, he worked on the Go Team at Google and was in charge of Go Security. In 2022, he became a full-time open source maintainer. He still maintains the cryptography packages that ship as part of the Go standard library, along with a set of cryptographic tools such as mkcert and the file encryption tool, Age. This episode covers cryptography, trust, security, and open source.

Access Control Podcast: Episode 21 - Securing the Open-Source Future

  • Key milestones in web cryptography include HTTPS, WebPKI, and the impact of messaging protocols like Signal and WhatsApp on end-to-end encryption.
  • Looking to the future, Filippo discusses the importance of transparency mechanisms in cryptography and highlights the need for accountability.
  • Filippo advises against rolling one's own crypto but encourages collaboration and learning with experienced individuals to build a feedback loop for secure implementations.
  • Filippo shares his thoughts on the current state of Certificate Authorities (CAs).
  • Filippo explains the accountability established by transparency in open source and compares it to closed-source software.
  • Security patching is addressed, highlighting the need for a balance between stability and urgency when applying patches.
  • Filippo explains the potential threats posed by quantum computers and the ongoing efforts to implement post-quantum key exchanges in protocols like SSH and TLS.
  • Cryptographic concerns in cloud computing are discussed, focusing on the importance of trust in cloud platforms while acknowledging the shared responsibility model.
  • In a practical piece of advice for improving security, Filippo recommends being deliberate in trimming dependency trees to reduce vulnerabilities.


Transcript

Ben: 00:00:00.280 Welcome to Access Control, a podcast providing practical security advice for startups, advice from people who've been there. Each episode will interview a leader in their field and learn best practices and practical tips for securing your org. For this episode, I'll be chatting to Filippo Valsorda. Filippo is a cryptography engineer and open source maintainer. From 2018 to 2022, Filippo worked on the Go Team at Google and was in charge of Go Security. In 2022, Filippo became a full-time open source maintainer, and he still maintains the cryptographic packages that ship as part of Go's standard library, along with maintaining a set of cryptographic tools such as mkcert and the file encryption tool, Age. We hope to cover cryptography, trust, security, and open source today. Hi, Filippo. Thanks for joining us today.

Filippo: 00:00:42.695 Hi. Happy to be here.

Getting started in cryptography

Ben: 00:00:44.353 To kick things off, I just wondered — how did you get started in cryptography?

Filippo: 00:00:47.136 This one comes up often enough that I wish I had a better answer to it. What I do remember is in high school doing this module during what I think was a math day or something like that. And there was this module, which was about RSA and Diffie-Hellman. And that must have been the first contact, really. After that, the next thing I remember was doing the Matasano Cryptopals challenges, which I think you can still find at Cryptopals.com. It's this set of challenges that guide you through building broken cryptography implementations and then breaking them yourself. And they start easy. They start with implementing AES and using it to encrypt a few blocks. And then they go all the way to, okay, so read this paper. This paper broke TLS a few years ago. Now we're going to break something that has the same issue, with exactly the same technique as in the paper. And by the end, you're implementing cryptographic attacks that are contemporary. I got to the end of those, and as a high school kid, I was just doing them because they're fun. And at the end, Matasano is like, "Great, cool. So do you want to interview?" And I'm like, "Wait — what do you mean?" It had flown completely over my head that this was obviously a recruitment strategy for Matasano. So yeah — that didn't work out, because Matasano did more IPsec than what I enjoyed doing, but I loved those folks. And I appreciated that kickstart so much.

Ben: 00:02:18.820 I mean — I guess that kind of goes back to Bletchley Park. In the UK, they had a really tough quiz in the newspaper for recruiting people.

Filippo: 00:02:26.417 Oh, yes?

Ben: 00:02:27.136 Yeah.

Filippo: 00:02:27.753 Nice. By the way, sitting on my desk right now, there's The Prof's Book, which is a reprint of the mathematical theory of the Enigma machine written by Alan Turing.

Key milestones in web cryptography

Ben: 00:02:36.864 My next kind of question, from a historical perspective — what do you see as some of the key milestones in, I guess, web cryptography, but since you touched on the Enigma machine, maybe just some of the fundamental milestones you've seen in cryptography?

Filippo: 00:02:50.490 So there's a step function between classical cryptography and modern cryptography, right? Classical cryptography is intellectually interesting, and it's fun to read about it. So both the thing I have on my desk and the Simon Singh book, which I think gave me an idea of classical cryptography when I got started. But I think that fundamentally, when computers showed up, cryptography changed completely as a field. So I guess the answer is more about what changed for web cryptography. Web cryptography fundamentally is the WebPKI. Cryptographers like to think about more complex stuff, and more entertaining stuff, and zero-knowledge proofs, and VOPRFs (verifiable oblivious pseudorandom functions) or whatever the new fancy thing is. But concretely speaking, the thing that does encrypt the world is HTTPS. And with HTTPS, the WebPKI made of the root programs that decide what the rules are for certificate authorities, all the auditing mechanisms, certificate transparency, which helped both strengthen the rules and catch misbehavior, and which provides a transparency mechanism to inspect certificates that might have been issued without consent for one reason or another. Concretely, the progress of the WebPKI, I think, might be one of the most critical ones.

Filippo: 00:04:15.127 After that, there's probably messaging, the Signal protocol, and how that started all of the other end-to-end encryption protocols down that path, WhatsApp, which adopted the Signal one. And these days, we have Facebook Messenger. We have everything except maybe the Google ones, which are end-to-end encrypted. Well, I guess some of the Google ones are.

Where web cryptography may be going

Ben: 00:04:37.545 What do you see as some things looking to the future?

Filippo: 00:04:40.676 Ooh, good question. So something that I'm trying to work on a lot is the transparency mechanisms like certificate transparency, but making them more widely available. Because fundamentally, at some point, you run out of ways to improve a trust situation. You run out of ways to manage keys so that they're only held by people you trust. At some point, you have to trust something or someone. For example, if you're a device manufacturer, at some point, at the end of the day, your users trust you because you give them the chips. As a CA, fundamentally, you are a trusted party in an ecosystem. And I think we hit a wall in how much we can improve these systems with trust alone. What transparency gives you is accountability. So what transparency does is that it doesn't stop something bad from happening. But it guarantees that if you're going to publish a bad certificate or a fake version of a Go module, or if you're going to misbehave as a trusted authority, that is going to go into an irremovable registry. And transparency is just a reasonably complex set of tools that ensures that if a client accepts something, even if the client is not the one that will actually realize that that was a misbehavior, evidence of that misbehavior will be indelibly and irremovably logged in a transparency system for others to check. To make this concrete:

Filippo: 00:06:07.550 Every time you do go get and you fetch a Go module, by default, you're fetching it from Google. That was necessary because fundamentally, you can't make a service that avoids things like left-pad without a trusted entity. And this is actually what almost every language ecosystem does. For JavaScript, you get it from NPMJS. For Rust, you get it from Cargo, which in practice means getting it from GitHub, I think. That's where they're hosted. You're usually getting things from a hosted place. So that's the wall we hit, right? We can't make it more trusted than that. But Go has the Go checksum database. So every time go get fetches anything, it also goes out and makes sure that that version and hash of the thing it just fetched is in the append-only checksum database. And that means that if Google were to be compromised and inject a bad version of a module, the module author could say, "Hey, hold on a second. I didn't publish that." And there's no way for Google to hide, or for somebody who attacked Google to hide, a version from the author but also show it to someone so that they will execute it. And that's transparency systems. And it's something I'm very excited about becoming easier and easier to integrate into applications. Because right now, you have to do a lot of stuff to build one of these. But I think we're making it better.
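
To illustrate the append-only property Filippo describes, here is a minimal, purely illustrative sketch in Go (this is not the actual checksum database design, which uses a Merkle tree with signed tree heads): every entry's hash commits to the previous one, so an entry can't later be removed or rewritten without changing every subsequent hash that observers may already have recorded.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// entry is one record in a toy append-only log. Real transparency logs
// (like the Go checksum database) use a Merkle tree with signed heads,
// but the core property is the same: every entry commits to the log's
// prior state, so history can't be quietly rewritten.
type entry struct {
	record string   // e.g. "module@version h1:..."
	prev   [32]byte // hash of the previous entry
	hash   [32]byte // hash of prev || record
}

func appendEntry(log []entry, record string) []entry {
	var prev [32]byte
	if len(log) > 0 {
		prev = log[len(log)-1].hash
	}
	h := sha256.New()
	h.Write(prev[:])
	h.Write([]byte(record))
	var e entry
	e.record = record
	e.prev = prev
	copy(e.hash[:], h.Sum(nil))
	return append(log, e)
}

func main() {
	var log []entry
	log = appendEntry(log, "example.com/mod v1.0.0 h1:...")
	log = appendEntry(log, "example.com/mod v1.0.1 h1:...")

	// Anyone who recorded this head can later detect if an earlier entry
	// was altered or dropped: replaying the log would give a different head.
	head := log[len(log)-1].hash
	fmt.Println("log head:", hex.EncodeToString(head[:]))
}
```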

Ben: 00:07:23.557 Kind of repeating it back to you, the future — you can't stop many of these supply chain attacks or attacks on code, but you can deter them through transparency.

Filippo: 00:07:33.796 You can set up incentives, right? This is the same thing as — I like to think about it as open source in terms of how it provides accountability. Open source provides accountability for code. Because let's be honest, it's not like we use open source to actually review every line of code that we depend on. It's nice to think that we could do that if we wanted, but that is not the actual reason we feel safer using open source software than using a closed source binary. The reason it is safer is that there is a reputation staked behind that project. It sounds like I'm harping on Google, but this is just because I was working on the Go project and I was working on making us accountable. But I don't think Google is untrustworthy at all. But again, if Google tomorrow decided to add a backdoor to Go, it couldn't just do that silently, right? Because it's open source. Everybody could see it. And then there would be a conversation on a mailing list, and people would be like, "Hey, what the hell is that?" And then Google would have a reputation to uphold, and they would have to do an investigation and say, "Ah, there was a rogue employee," or something, right? That's what open source gets us. But we don't have that for data. Because most of the time, if you're downloading something from GitHub, GitHub can just give you, specifically, a tampered version and then not give it to anybody else. And if you don't notice, because you don't review every line of code, well, tough luck, right? Instead, transparency guarantees that there's that accountability. The same thing as open source does for code.

The decision to become a full-time maintainer

Ben: 00:09:07.166 Yeah, I think we've seen a lot with node packages of late. People just published a rogue node package that's not even related to the code that's been checked in. What made you decide to become a full-time maintainer? And especially, what do you think are some of the benefits of working on Go outside of Google?

Filippo: 00:09:21.552 So this was a very deliberate experiment to show that this is possible. Because I had written about this before. While I was at Google, I wrote these two articles saying we should pay maintainers and making the argument that maintainers should professionalize and both deliver like professionals and be paid like professionals. And the conversation around the usual water coolers was along the lines of, "Yeah, yeah. That sounds nice, but that can't work because it's a tragedy of the commons thing. Companies will never pay for something that's free. Truly there's no way to fix this. This will never work." And so I quit and I did that. I'm making it short. You know, I was also kind of tired. I took a few months off. But still, I wanted to try it out, figure out what things work, what things don't. Unsurprisingly, I wasn't entirely correct in what I thought would work, but close enough that I was able to course-correct and make it a thing that works. So now, around the same water coolers, the conversation has become, "Well, but that's because it's Filippo. If anybody else tries it, it won't work," which, of course, the water cooler conversation will always be like that.

Filippo: 00:10:38.232 But at least there's a few people who instead are looking at what I'm doing and are telling me, "Hey, interesting. Actually, I also already have a bit of a network. I'm comfortable with being self-employed. So how do I do that?" And now I'm gearing up to start helping more people try the same path if they already have a few of the things that help make this successful. Because you asked what the advantages are. And the advantages are many. It's an incentive alignment thing at the end of the day. And it will really sound like I'm harping on Google, but actually, I'm very grateful to Google for all they did for Go. Go wouldn't exist, and they're footing a large bill for Go. But fundamentally, if the number of Go users doubles, it's not like the value to Google of the Go project doubles, right? It doesn't even grow nonlinearly. It doesn't even necessarily grow. The amount of work on the maintenance team increases. But the amount of resources that's rational and reasonable for Google to dedicate to Go doesn't increase. So fundamentally, as Go is more successful, the Go team — but this is true of any open source project at any large company that I've seen, truly not Go-specific and not Google-specific. As the project gets more successful, it becomes harder to match the amount of work with the amount of resources that are available inside the large umbrella company.

Filippo: 00:12:05.863 Instead, with what I do, I have clients that are — I guess we jumped into it a little, but what I do is that I offer advice and access to my clients that already have an investment in Go and that are interested in it. And just like they could take a full-time engineer and say, "Hey, become the resident expert in Go so that if we have an issue with upstream, you already know the people and so that we can come to you when something's particularly complicated." And many companies do that. When we go to the conferences, there's contributor summits, and there's plenty of people from other companies at those summits. But that's expensive. One full-time engineer can be so much money. So instead, what I do is I go to these companies and say, "Hey, you don't need a full-time person doing that. You can also just have a little bit of my time. Most of my time will still be on maintenance, so I will always be up to date, etc. But for this amount, I'll join your Slack and you can ask me questions and so on." So the advantage of that is that as the project is more successful, there are more companies that are building on top of it and more companies that I can sell these contracts to. And so as the project is more successful, there's more work to do, but there's also more money, and so I can hire. And I've already started hiring more people to do maintenance. I've hired someone to do SSH recently, Nicola Murino, who's doing fantastic work on SSH.

Filippo: 00:13:22.175 And this is not public yet, but I'm considering funding an effort for the HTML security ecosystem right now, because we have html/template, which has some design shortcomings, so vulnerabilities keep coming up in it. And then there's the main HTML sanitizer in the ecosystem, which does not have as many maintenance resources as it should have. So I'm able to identify things like that, needs of the ecosystem, of the open source ecosystem.

Ben: 00:13:50.899 Which could also be different — Google may not see the same needs that other businesses might have.

Filippo: 00:13:55.576 Exactly. Because maybe Google doesn't have that problem, because they have their internal framework and they only use that, or because they have internal security people that can do all the reviews. This one actually does affect Google, but something could not affect Google and still affect the open source ecosystem. Instead, here I'm looking at this and I'm saying, "Yeah, somebody should fix that." And then I pay for it and assume that that will make my clients happy and more likely to stay. And so it's justifiable for me to invest in that.

Ben: 00:14:24.942 Yeah. And so for, I guess, casual software developers or businesses, looking at it as an ecosystem, this is great. If you're building a team of Go developers, this is a great sign, because the ecosystem has outside support.

Filippo: 00:14:36.151 Exactly.

Ben: 00:14:36.570 Specifically for picking cryptographic libraries, how do you think teams should approach which cryptographic libraries to use or which languages to pick up, and how to tie in the open source community as well in that engagement?

Filippo: 00:14:49.685 You're asking how a company chooses their dependencies fundamentally, right?

Ben: 00:14:54.602 Yeah.

Filippo: 00:14:54.969 It's hard. In fact, some of my clients, a lot of the value they get out of it, is sometimes just sending me over questions like that, because sometimes it requires already knowing enough about the subject matter to assess it. In Go, that happens relatively rarely because the standard library is fairly well maintained and has been very well designed by the people that were there before me. That's one big advantage of a rich standard library, right? I think this is not a very useful answer, right? Somebody's listening and thinking, "Yeah. I have that problem. How do I solve it?" And it sounds like Filippo just said hire him or use Go. And that's not what I —

Ben: 00:15:32.600 I mean, probably I think another follow-up question is people often say, "Don't roll your own crypto," which also has a wide meaning of like, "I've implemented a standard library poorly or I've built my own basics of a standard library." What are your thoughts on this phrase, "Don't roll your own crypto"?

Filippo: 00:15:48.960 Yeah. Okay. On that one, I have opinions with a capital O. It's true. Rolling your own crypto is not something you want to be doing, but fundamentally, nobody wants to roll their own crypto, with an asterisk: developers really like JWTs, and I truly do not understand why. They always seem so excited about having all of those algorithms available and all of those options. And I'm like, "Why?" But anyway, aside from JWTs, in general, I've not met a developer who comes to me and is like, "Oh, but I want to do more cryptography myself." So I feel like we coined that phrase at a time when we weren't doing a good job of providing — and we're going back to your previous question of providing good libraries that could solve the abstract problems that users had. If users copy-pasted a bunch of AES block modes into their application, it was because there wasn't a nice AEAD encryption function available in a library they could easily use. So fundamentally, we should probably have fixed our shit instead of going to people and chiding them for compensating for the poor usability of the thing that we as a cryptography engineering community had built.
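
For what it's worth, that kind of one-call AEAD API does exist in Go's standard library today; here is a minimal sketch of authenticated encryption with AES-256-GCM (illustrative only: key management, nonce handling across many messages, and error handling are the application's problem).

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// seal encrypts and authenticates plaintext with AES-256-GCM, prepending
// the random nonce to the ciphertext so it can be recovered on decryption.
func seal(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // a 32-byte key selects AES-256
	if err != nil {
		return nil, err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return aead.Seal(nonce, nonce, plaintext, nil), nil
}

// open reverses seal, failing if the ciphertext was tampered with.
func open(key, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	if len(sealed) < aead.NonceSize() {
		return nil, fmt.Errorf("ciphertext too short")
	}
	nonce, ct := sealed[:aead.NonceSize()], sealed[aead.NonceSize():]
	return aead.Open(nil, nonce, ct, nil)
}

func main() {
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	ct, _ := seal(key, []byte("hello"))
	pt, _ := open(key, ct)
	fmt.Printf("%s\n", pt)
}
```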

Filippo: 00:17:07.448 The other effect that this sentence has is that it discourages a lot of people from learning cryptography. A version I heard that I like a lot, from, I think, Deirdre Connolly, is: don't do cryptography alone. So don't just roll a thing and slap it out there. If you do cryptography, get it reviewed, or develop it together with somebody who has been in the field long enough to know what things are scary, what things will break. Because it's true. It's a discipline where you can't tell if something is broken. Take a website: with a website, you might need to be that kind of senior engineer who has the experience to know that it will become an unmaintainable mess in five years. That's a thing that's hard to see. But whether or not it renders, whether or not it loads, whether or not the layout is right, whether or not it crashes all the time, are things that you can assess. So you get a feedback loop. You build one, and it's terrible, because if I build a website, it's going to be terrible. And I can look at it and say, "Yes, yes. This is terrible. I am not good at building websites." Cryptography ain't like that. You're going to make a completely broken cryptography implementation, and it's going to be fast? Maybe faster than the older ones. And it's going to work, and the ciphertext gets in, the plaintext comes out, and it's perfect. Except, then somebody passes by and breaks it, and it was as good as useless.

Filippo: 00:18:28.287 Doing cryptography with someone who has the experience is probably the better guidance there. "Don't roll your own crypto" as just a complete hard line, in my experience, has pushed away a bunch of people that didn't have maybe the arrogance to think, "Oh, yeah. This is a thing that you're not supposed to do, but I'm smart enough to do it." There are a lot of people who are very, very good at what they do, or very good at learning, and have not tried, or have stayed away...

Ben: 00:18:56.063 Because they've been intimidated by the community. It's not very welcoming.

Filippo: 00:18:59.004 Exactly. And by the reputation of the thing as being this super hard thing. But lots of things are hard. People do a lot of hard things. Cryptography is not harder than most hard things. It has the problem that you don't know when you're doing it right and when you're doing it wrong. So it's important to build yourself a feedback loop that will tell you when you're doing it right and when you're doing it wrong. So that's why I like Deirdre's version of it. Make sure that you have a way to know if what you're doing is broken. But aside from that, it's just another hard thing. People learn to do hard things. Yeah.

Thoughts on the current state of Certificate Authorities

Ben: 00:19:31.663 And I think going back to your initial point, you say in the world of cryptography, there's all these thought exercises and academic problems, but a lot of it kind of comes back to PKI, certificate authorities, and more simple [inaudible] problems. What are your current thoughts on the state of certificate authorities in their present form?

Filippo: 00:19:49.416 Any cryptographer would chuckle at key distribution being simpler because it's actually the problem that's always left as an exercise to the user. So PKIs, public key infrastructures, they're all hard. Fundamentally, it's not a problem you can solve in a paper, because you need to solve it in a way that fits the realities of the thing you're doing. The WebPKI will have different needs from the federal government PKI, which will have different requirements from a PKI you run internally for connecting your apps to your servers. And those will be very different. And I don't think you can solve for all of those at the same time. The WebPKI is in a reasonably good state, or at least much better state than it was 15 years ago, because the browsers pretty much did this very strong ratchet moving the goalposts of security, moving the bar higher and higher. Many CAs improved. Some CAs fell below the bar and got distrusted. And now we have certificate transparency, which again, I think is the best thing since sliced bread. And then we have CAA, which is a way to say, "Oh, I only want that CA to issue for me." And auditing mechanisms are pretty good.

Filippo: 00:21:03.100 We had a close call, which hopefully is going to be fine, although it's not entirely clear, with Europe recently, because the new version of the eIDAS regulation was going to say something along the lines of — browsers can't make the rules for CAs that are approved by EU governments anymore. Only ETSI, which is the EU standards body, can make those rules. And regrettably, the standards bodies have been behind compared to the browsers. The browsers have been ratcheting forward the progress I'm talking about in the last 15 years. So that would have been regrettable. The latest on that is that the relevant EU bodies came back saying, "Oh, we never meant that. We always meant that that would only be true for identity certificates," which are these new certificates that they want to add where they can show you the name, the actual name, Filippo Valsorda, of the entity that owns a website, which is an experiment we did with EV, extended validation. I don't know if you remember it. It was a failure, fundamentally, because if you wanted to require them — show an error if there isn't one — then you can't do automated certificate issuance, which instead is one of the major big steps forward, which I should have mentioned. Let's Encrypt has pushed for automation.

Filippo: 00:22:21.495 There's the ACME protocol. You don't just download a certificate and manually upload it to your web UI anymore. Now you set up ACME, which automatically rotates certificates, which means that certificates can have shorter lifetimes, can be rolled if something goes wrong, and can be revoked with much less worry. There's an upcoming extension that will let ACME clients check from time to time and be notified if a certificate is about to be revoked so that they can replace it with no downtime. All of that's great. And if you need to actually do a government identity check, you can't do automation. It's not even clear how you do those identity checks, and the standards for those were poor. So you can't show an error. You show a green lock in either case, but in one case, you show a name, and in the other case, you don't. Twitter did a very good study on that at some point, where they sometimes served the EV cert and sometimes a normal certificate. And users were just logging in with the exact same percentages in both cases. Users did not care. This stuff is hard.
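
To give a flavor of what that automation looks like from the application side, here is a rough sketch using golang.org/x/crypto/acme/autocert, which obtains and renews certificates over ACME; the hostname and cache directory are placeholders, and the server must be publicly reachable for the CA's challenges.

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"golang.org/x/crypto/acme/autocert"
)

func main() {
	// autocert obtains certificates from an ACME CA (Let's Encrypt by
	// default), caches them on disk, and renews them before they expire.
	m := &autocert.Manager{
		Prompt:     autocert.AcceptTOS,
		HostPolicy: autocert.HostWhitelist("example.com"), // placeholder hostname
		Cache:      autocert.DirCache("/var/lib/acme-cache"),
	}

	srv := &http.Server{
		Addr:      ":443",
		TLSConfig: m.TLSConfig(), // certificates are fetched and rotated transparently
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintln(w, "hello over automatically managed TLS")
		}),
	}
	// Empty cert/key paths tell the server to use TLSConfig's GetCertificate.
	log.Fatal(srv.ListenAndServeTLS("", ""))
}
```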

Filippo: 00:23:29.178 Europe got close to messing with that, but now they're saying, "Oh, we only mean that those rules apply to these new identity certificates," which are called QWACs, qualified website authentication certificates. They don't apply to the normal TLS certificates, so you can keep applying your security standards to those. So hopefully, we dodged a bullet there. I say as an Italian, it's not like everything Europe does is bad. It's just this one would have been unfortunate.

Ensuring Certificate Authorities remain trustworthy


Ben: 00:23:54.156 And so then what are some mechanisms in place to ensure certificate authorities remain trustworthy? And I think we're going to touch on this. What happens if they are compromised?

Filippo: 00:24:01.356 The browsers are the main enforcement mechanism there. Certificate transparency ensures that if there is a compromise, it will surface, because certificates will not be accepted by browsers unless they're logged in these public registries, which are the certificate transparency logs. And once things show up there, there's a whole conversation. There's this process which involves Bugzilla, of all things. Some things are just idiosyncratic, and they are just like they are. And they have to provide an incident report, and they have to provide a root cause, and they have to generally provide a satisfactory answer to why they are changing their processes in a way that will prevent this from happening again. And if they don't, there's a number of things that can happen. One of them is distrusting. And this might be the most careful and, generally speaking, most respected process I have been involved with in my career. Because it's easy to set standards like we should always do blameless postmortems, and we should always look into — but you know, sometimes you do, sometimes you don't. Sometimes you go like — well, yeah, I mean, it's clear what happened there, right? While here, the degree of carefulness is something that, yeah, I haven't encountered professionally. And I don't think it competes with aviation industry investigations. But if I have to think about what's something that's even more careful than this, that's the only thing that I can think of — aviation reports and investigations are stricter.

Ben: 00:25:36.681 Do you think, as things develop and the internet becomes this critical thing, it should be taken more seriously by the community? Or would you say it's almost on par with aviation?

Filippo: 00:25:44.946 No, I'm saying it's already taken very seriously. I'm comparing it to aviation. And I'm saying maybe it's not as serious as the aviation investigations, but it's in that ballpark, which for anything tech-related, I think, is high praise. The internet is sort of kept together with string. [laughter] I've seen how the sausage is made. Yeah. No, so this was high praise. It might have sounded like I was saying it should be taken more seriously. But no, no, no, no. I think there's a very good process playing out there. And there's people involved that push for it to stay strong and for its integrity and demand answers. There are regularly episodes where some CA delivers an underwhelming response, and maybe sometimes even the person who's responsible for accepting that response accepts it. And then the community goes like, "No, that was a terrible answer. That does not sound satisfactory. We would like more details, and then we go another round." It's a very good dynamic. I think people don't quite realize how much progress we've made with the WebPKI, because it's easy to dunk on CAs as the weak link and how there's — you trust hundreds of CAs, and any of them can compromise the security of your browser. True. But there is a lot of work that goes into making that, in practice, a system that's trustworthy. Yeah. The world is obviously imperfect and complicated. But I think, yeah, I would not be saying all these things 10, 12, 15 years ago.

Ben: 00:27:24.375 Yeah, it's come a long way. Changing topics a little bit to trust. We are an open core, open source company. Our project's open source. And we also use lots of open source software. But this is maybe a rhetorical question. But how can we trust open source software?

Filippo: 00:27:39.940 We touched on it earlier, right? Some people will tell you, "Oh, with open source software, all bugs are shallow," or they'll tell you, "Well, you can just audit all the software you use." But no, no, nobody does that. Nobody can do that. There are large tech companies that have policies that say employees are supposed to do that, and they are not doing that. It goes back to what I was telling you earlier, right? There is reputation at stake. You're building a company on it, right? So you have a vested interest to make sure that what you deliver is secure and that you are not misappropriating the trust of your users. Open source sets things up such that you are accountable for what you do. When I fetch things from your GitHub, it would be cooler if GitHub had a transparency-log-style system for this, and I actually talked to them about it, and I should check in on that conversation. But assuming you're not just force pushing, which would be very noticeable, that's the code that everybody's using, right? And I fetch that, and I can see it. And you have a strong interest in making sure that that's secure and not inserting anything you wouldn't want to stand by.

Filippo: 00:28:46.379 That incentive is not quite as much there for closed source software. Closed source software fundamentally can just do whatever it wants and hope not to get caught. Now, would they? Maybe not. Maybe the current iteration of a certain company wouldn't do that. But time is a dimension. When one picks a dependency, it's also important to think, "I will stick with this dependency for N months and years. So do I have reason to trust both the current instantiation of this and the future ones?" And open source, I think, is even a strong market advantage to answer that question, because it answers both the question of trust (is this going to turn malicious?) and the questions of: what do we do if it goes unmaintained? What do we do if we need to replace things? What do we do if we find an issue and the company cannot fix it, but we can? Open source has all these advantages for trust.

Recommendations for teams building on standard code libraries

Ben: 00:29:43.281 Kind of like benefits, yeah. At Teleport, we are a longtime user of the underlying SSH library that powers the initial server product. What are some other recommendations for teams who sort of take standard code libraries and sort of build products and companies on top of them?

Filippo: 00:29:58.185 So we can definitely start with things that are very concrete and immediate, like: I need you all to follow golang-announce, which is where we publish security announcements, and run a linter that will tell you about deprecations, like staticcheck, because deprecation is a major tool we use. We never remove support for anything unless it's fundamentally catastrophically broken cryptographically, for example. Otherwise, we just have this strong backwards compatibility promise, so we don't remove something. But when we deprecate it, it's a strong signal that you should probably be using something else. And maybe we will not spend as much energy making sure that that's secure. Or maybe we will not care if there's a timing side channel in a deprecated package. So you need to be using something that will tell you if you're using something that's deprecated. We appreciate issue reports a lot. In fact, more than we appreciate code contributions. That's a bit of a quirk of the Go project, I think. We don't have the reviewer bandwidth to go through review cycles most of the time, but a well-researched issue report really helps. Talk to us. That's not only true for my clients. And this might not have been obvious from context, but Teleport is a client — also because Teleport uses, as you were saying, the SSH libraries. And we can talk about those specifically in a bit. But in general, we appreciate hearing from everybody. We can't promise we'll reply to everybody. But knowing how users use our software — use our libraries — is actually extremely useful.
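
As a point of reference, deprecation in Go is signaled by a "Deprecated:" paragraph in a doc comment, which is exactly the convention linters such as staticcheck flag. A small sketch, with hypothetical names (mylib, OldSum, NewSum are made up purely for illustration):

```go
// Package mylib is a hypothetical library, shown only to illustrate the
// doc-comment convention that deprecation linters pick up on.
package mylib

import "crypto/sha256"

// OldSum returns a truncated hash of data.
//
// Deprecated: the truncated output is too short to be collision resistant.
// Use NewSum instead.
func OldSum(data []byte) []byte {
	h := sha256.Sum256(data)
	return h[:8]
}

// NewSum returns the full SHA-256 hash of data.
func NewSum(data []byte) []byte {
	h := sha256.Sum256(data)
	return h[:]
}
```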

Ben: 00:31:36.869 Yeah, I think that's one of the joys of open source: you never quite know where people are deploying it or where they're running it. You're like, "Oh, this SSH library is, like, in a robot in a truck."

Filippo: 00:31:46.035 Ooh, I have another, maybe unintuitive, reason that open source is an advantage. I have no visibility, as the upstream, into closed source applications. So there's a higher chance I'll break you. If you're open source, if you're on GitHub, if you're in the Go module proxy, when I'm changing something, I'm a little worried about how it's used and whether there's one of those situations where this was not documented like this, but I suspect people have come to rely on it. What I do is I go to the Go module proxy, fetch the latest version of every Go module, download it all, and just grep and look at all of the users of an API when it's possible, and a sample when it's not possible to look at all of them. So by being open source, you might not even notice, but maybe the upstreams are looking at your code and making sure they don't break it. This might be more true for Go than many other projects. I've never seen this degree of attention to not breaking downstreams except maybe from Linux.

Approaches to dealing with security patches

Ben: 00:32:48.618 One thing you said — to subscribe to the mailing list for security patches. And I know this last week there was an SSH security patch that came out. What are some general approaches to dealing with security patches for people who are sort of new to this?

Filippo: 00:33:01.150 Security patching is a little special, right? Generally, you might have your deployment process, which goes through QA and where the main concern is stability. While with security, you might want to rush something through, which is not great. One general thing that's good to do is to make sure you stay close enough to up to date so that when the security patch comes out, you're not both updating over a number of non-security things, which might bring instability and which you would have wanted to validate better, and applying the security patch at the same time. So generally keeping your stuff updated, even when it's not a security patch, is a good thing to do. And this is, I think, a little non-consensus opinion. What I'm about to say might not be what everybody agrees with. But I find that the approach to security patches — of just blindly running a vulnerability scanner and saying, "Oh, there's a vulnerability. Let's fix it, patch it, and deploy," — is not optimal, because it both overfixes and underfixes. It overfixes because sometimes you're not affected. The advisories are written such that they will tell you if you're affected or not. Spending a few minutes reading one and trying to understand — wait, do we even use that mode? Wait, do we even deploy that package? — can save you from a lot of noise.

Filippo: 00:34:21.486 And you might say, "Well, but might as well keep the process of deploying a security patch well-oiled. Better to patch one more than one fewer." Yes, but the problem is that if that's how you operate, then the only amount of resources you can dedicate to a fix is the bare minimum necessary to deploy to production. It means you don't have the time to then do a proper assessment of the things that do affect you. To make it more concrete: what if a security fix says, "Oh, for a while, we've encrypted things wrong, and so they're not actually encrypted"? This hopefully will never happen, and this has never happened to the Go cryptography standard libraries. And we have an excellent track record, which is actually something I'm extremely proud of. But let's imagine that. Fixing it and deploying it to production is not going to fix the things you have encrypted so far. You need to go back and re-encrypt everything you have stored. You have to figure out who had access to that and assume that all that data is compromised. You might have to rotate secrets. You might have to notify regulators. You might have to tell users. Just deploying a fix is not it.

Filippo: 00:35:31.366 So I feel like we've built an entire industry around noticing when there might be a vulnerability and fixing it by deploying to production when, in fact, I wish we spent more time making sure that we only alert developers when a vulnerability is actually a problem. And then trying to offer them guidance to assess how much risk that brings them and what the remediation steps are. In some cases, just deploy to production. In some cases, much more. In some cases, less. That's what guided the development of the Go vulnerability database and govulncheck. Govulncheck is this tool from the Go project that you can use to check for known vulnerabilities in a Go module. What it does is that it doesn't just go like, "Oh, in your go.mod, you mentioned this thing at this version, and there's a vulnerability in it. So, ah." Instead, what it does is it goes, "Well, first, are you even using the package that's affected?" Because a module can have a lot of packages. And maybe the vulnerability is in the Azure backend, and you only use the S3 backend, and you're unaffected. Carry on with your day. Then it goes further than that, and it does static analysis at the symbol level, because the vulnerability database entries have symbols that are the actually affected functions where the fix was. And it checks whether, in your program, those symbols are reachable. And if they're not, it's not going to alert you at the same level. You can add a flag to say, "Actually, tell me about everything."

Filippo: 00:37:04.472 But govulncheck will try really hard to only tell you about things that really need your attention. The idea is that that's how far the tool can get you. And then, with the fact that you're triaging maybe one a month instead of three per week, you can actually sit down and look at it and say, "Hmm, what should we do about this one?"

The state of SSH keys in a post-quantum world

Ben: 00:37:25.270 Yeah. No, I think that's a great answer — a very practical approach also of not having alert fatigue on security issues, and then some useful tools. This one is a bit more forward-thinking. I think we touched on this in our initial call — we're moving into this post-quantum world of computing. Specifically, what do you think this means for the state of SSH keys and cryptography in a post-quantum world? And maybe a little introduction: why is this a concern people should be worried about?

Filippo: 00:37:55.052 There is this concern that it might be possible in the next — depends who you ask — 30 or 50 years to build computers that use a different way to compute things, that use the fundamental quantum mechanics of physics to explore more states simultaneously. Cryptography is based on the idea that there is no way to brute-force something that has too many options. Even just a deck of cards — if you shuffle it, there is no way to try an operation for every possible state of the deck. And when I say an operation, I'm talking about — you can't move a single electron by one level using all of the energy that is stored in the mass of planet Earth. It's not just that it's a lot of work. No, no. You can't do 2 to the 128 of anything. It's physically not going to work. And that's what cryptography relies on, right? Except that then quantum computers can do some things, not everything. It's not actually magic. It doesn't actually halve the security of everything, which is something you might have heard: oh, if you have a 128-bit key, it's now not quantum safe because it's actually as if it was a 64-bit key. Not true. Even NIST says that it's fine to have 128-bit keys for — anyway, we'll get to that.
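
For reference, the "128 becomes 64" framing comes from Grover's algorithm, which gives only a quadratic speedup for brute-force search. The rough arithmetic below is standard; one reason the framing is misleading in practice is that those Grover iterations have to run largely sequentially on an error-corrected quantum computer, which is part of why NIST still considers AES-128 acceptable.

$$
\text{classical brute force} \approx 2^{128} \text{ guesses}
\qquad\longrightarrow\qquad
\text{Grover} \approx \sqrt{2^{128}} = 2^{64} \text{ sequential quantum iterations}
$$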

Filippo: 00:39:20.686 So fundamentally, these quantum computers might one day exist. And why is it a problem? Well, it's a problem because they can break some of our current algorithms, mostly asymmetric things. They can break elliptic curves. They can break RSA and so on. They can't break hashes and encryption ciphers as much. Those are probably fine. If they come and we are still using the old algorithms, it's a problem. But also, even more importantly, if data was exchanged now, and it was secured with a key exchange that was based on elliptic curves, for example, or on Diffie-Hellman, then they can retroactively decrypt that. And that's a major problem. So that's why we are dealing with something so speculative and so forward-thinking. Nobody has a quantum computer that can break cryptography today. And we're not even sure if one will materialize, but there is a risk. And so to protect users now, we have to start thinking about how we deploy things in places that take 20, 30 years to deploy things, which might not be the case for most people listening. And how do we protect now things that might get recorded and then decrypted in the future? And that one might be relevant for a lot of people listening.

Filippo: 00:40:38.581 So signatures fall in the first category. Signatures, as long as the quantum computer doesn't exist at the time you're verifying a signature, you're going to be fine. So there's not as much urgency on that. We are working with a certain urgency on deploying instead KEMs, key encapsulation mechanisms, which are the post-quantum solution for key exchanges. NIST just selected the algorithms. European governments seem to be very happy with those selections, so it seems like we're going to converge. NIST is calling them ML-KEM and ML-DSA, because we can't have nice things, because they used to be called Kyber and Dilithium, and those are so much better names. But no, no. We'll have to implement ML-KEM, not Kyber. Anyway, they're basically the same thing. So we're working on that. I've just finished an implementation. NIST is working on a spec. They published a draft. They just closed public comments for it. And we're working as a community on test vectors, which is a drum I hit a lot. I'm as much a testing engineer as a cryptography engineer. Because as we said, it's really hard to tell if a cryptography implementation is broken. So writing good tests and making them reusable across different implementations, etc. What you were asking about, though, was SSH.

Filippo: 00:41:58.533 So the answer is a bit in what I was saying earlier. Ciphers — they're fine. If you heard that AES-128 is going to break when quantum computers come, that's not true. The reason is kind of technical. I can give you a link to drop in the podcast notes if people want to know more. But suffice to say that NIST itself has a FAQ that says, "No, it's fine. 128-bit keys are fine." So ciphers are going to be fine. Hashes are going to be fine. Authentication keys — we should prepare to roll them, but we have at least a decade or two to make the roll. The thing we should worry about now is the key exchanges. And that's currently either ECDH over curves like P-256, or X25519, which is Diffie-Hellman over Curve25519. SSH already has a post-quantum key exchange. However, they implemented it before NIST had made its selection, so they used one that didn't end up being selected. It's kind of unlikely we'll ever implement that. That would be a lot of work and a lot of complexity and a lot of risk of introducing a vulnerability.

But hopefully, now that NIST has picked these, SSH will standardize — well, specify a new key exchange that uses the new one. And when they do, we'll just bring it to x/crypto/ssh, the golang.org package, and we'll just make it transparent and automatic. If you're using updated enough clients and servers, we'll make sure that those connections can't be decrypted in the future.
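
For context, here is a minimal sketch of the kind of key exchange being discussed (an ephemeral X25519 exchange using Go's crypto/ecdh), which is exactly the step a post-quantum KEM would replace or be combined with; errors are ignored for brevity.

```go
package main

import (
	"bytes"
	"crypto/ecdh"
	"crypto/rand"
	"fmt"
)

func main() {
	curve := ecdh.X25519()

	// Each side generates an ephemeral key pair and sends the public key
	// to its peer.
	alice, _ := curve.GenerateKey(rand.Reader)
	bob, _ := curve.GenerateKey(rand.Reader)

	// Each side combines its private key with the peer's public key.
	aliceShared, _ := alice.ECDH(bob.PublicKey())
	bobShared, _ := bob.ECDH(alice.PublicKey())

	// Both arrive at the same shared secret, from which session keys are
	// derived. An attacker recording this exchange today could derive the
	// same secret later with a large quantum computer, which is why key
	// exchanges are the urgent part of the post-quantum migration.
	fmt.Println("shared secrets match:", bytes.Equal(aliceShared, bobShared))
}
```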

Filippo: 00:43:35.057 TLS is doing the same thing. There's a specification instead for using Kyber this time. So that one, we're aiming for Go 1.23. Go 1.22 will come out in February. No chance we're getting it into that. We're already in a feature freeze, and there's a lot of stabilization work going on. But the next one, which is going to be, I think, August 2024, will probably have automatic post-quantum encryption for TLS connections, using whatever the latest version that's been specified at the time will be.

Ben: 00:44:05.811 So most people don't have much to worry about. Seems like the teams are already working on things.

Filippo: 00:44:11.060 If we're doing our job well and if the way you're protecting your data is TLS and SSH and your things are updated enough. Yeah. If you're encrypting things, exchanging keys, and doing something more custom, you might want to either ask or wonder. And the main thing to wonder about is — are we in a situation where an attacker might record something now, decrypt it in 20 years, and do we care? Yeah. For some things, in 20 years, they don't matter. The connection we're talking over right now, if it gets decrypted even in 10 days, it doesn't matter, right? Not even because this podcast is secret, but because the only reason the encryption of this connection was important was because we don't want anybody to inject something in it, and we don't want anybody to eavesdrop in advance, but not even that. Mostly, it's an integrity thing, right? So it doesn't matter if it gets decrypted in the future. But there are things like, I don't know, [inaudible] nodes that could matter deeply even in the future.

Ben: 00:45:14.591 Yeah, yeah. Makes sense. Actually reminds me — I had a science professor, and he was making a 100-year time capsule at the University of St. Andrews. And this was maybe like a decade ago, but the hardest thing for him was deciding what digital mediums he could put in there that would still last 100 years. And actually, that's a tough problem: to bury something and have it be re-readable in 100 years' time.

Filippo: 00:45:35.765 It is. I am an extremely amateur archivist, and keeping data alive is actually hard.

Ben: 00:45:42.467 Yeah. The bit rot in the world is real. And I think also as we move to the cloud, there's also a lot more abstraction in the things that we do. It used to be, let's say, if you're data hoarding, you'd have a hard drive, and you might collect cold storage.

Filippo: 00:45:54.066 I have like 14 terabytes sitting on the shelf that's behind the screen, a ZFS pool, and just —

Cryptographic concerns of running cloud computing

Ben: 00:46:03.898 And, well, self-hosting and having stuff yourself is kind of good. I guess you can also — for like trust, you know it's your data. You have HSMs if you'd want to sign something. But then as we move to cloud and AWS, you have instance identity documents to prove that this is your machine, not someone else's machine. What do you think are some other cryptographic concerns of running cloud computing?

Filippo: 00:46:24.788 So I was actually having an interesting conversation about this with someone who, I don't know if they want to be named. And there's definitely opportunity for even reasonably soft attribution, cryptographic attribution of things. For example, there was this presentation where, in the context of those transparency things, Meta needed a way to say, "We promise not to ever delete things from here." And I have solutions for that. I think witness cosigning is the solution for that, and then trying to make some of these transparency technologies more accessible. But let's skip that for a second and say, how do you commit to not deleting something? And their answer was, "Well, we're going to put it on S3, and we're going to turn on the feature in S3 where Amazon promises to never delete anything from the bucket," which is a terrifying option that you can actually turn on. And I wonder how that works if somebody turns it on unintentionally on a bucket that costs thousands of dollars per month. I would want to see that conversation.

Filippo: 00:47:28.297 But anyway, there's this option, and they were saying, "Well, we've turned it on." So if you trust Amazon or us, you can count on the fact that we're not doing switcheroos with the bytes in this bucket. And that's interesting, because to be fair, we do trust a lot of our platforms, right? So it would be interesting to get things like — and this is not my idea, but to get cryptographic attestation that doesn't necessarily go all the way down to the hardware, which is anyway just a way to trust Intel, right, because fundamentally — or AMD or ARM, because they put the key in the chip anyway. But instead, it's just Amazon saying, "Oh, yeah. So here's the hash of the Docker image that's answering this Fargate call or this Lambda call." And then you could get a statement signed by Amazon that says, "Oh yeah. I served this Lambda with this Docker image." And that would probably be an interesting primitive to use to improve trust by ensuring that the software that's running on the cloud is actually what's being said, at least if you trust the platform. Trust in the platform should not be ultimate. That should not be the only thing your system rests on. But I'll say that for 99% of listeners, Amazon getting popped is a secondary concern. They probably have bigger risks active than AWS being compromised.
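
As a purely hypothetical sketch of that primitive (this is not a real AWS API; the statement format, names, and keys below are invented for illustration), the platform would sign a statement binding a workload to an image digest, and anyone who has pinned the platform's public key could verify it.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
)

func main() {
	// Stand-in for the platform's long-term attestation key. In reality,
	// the public half would be published and pinned by relying parties.
	pub, priv, _ := ed25519.GenerateKey(rand.Reader)

	// Hypothetical attestation statement: "this function ran this image".
	statement := []byte(`{"service":"lambda","function":"example-fn","image_digest":"sha256:..."}`)

	// The platform signs the statement...
	sig := ed25519.Sign(priv, statement)

	// ...and anyone who trusts the platform's key can check that the
	// statement wasn't forged or altered.
	fmt.Println("attestation verifies:", ed25519.Verify(pub, statement, sig))
}
```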

Ben: 00:48:58.605 Yeah, I think it's also the shared responsibility model. It's unlikely someone's going to break into your data center, plug into one of the VMs and get your data. It's probably more likely that you didn't set up MFA on your root account and someone got access to it, and they got the keys to everything.

Filippo: 00:49:12.524 Yes, exactly.

Ben: 00:49:13.788 All right, so I'm going to close up here. I have a few questions about open source. And I think I was kind of interested about the future of collaboration. For example, I know Linux still uses the Linux kernel mailing list for sending in patches. [laughter] Even this week, actually, we had someone try to exfiltrate a secret from Teleport by opening a pull request to echo the secret in GitHub Actions.

Filippo: 00:49:35.662 Yeah, that's a classic.

The future of collaboration for open source projects

Ben: 00:49:36.780 So then I was like, actually, it may sound rudimentary to have a mailing list to email in your code snippets, but in many ways, it's a feature instead of a bug. What do you think the future of collaboration looks like for open source projects?

Filippo: 00:49:49.724 So this is interesting, because I have a little — I think I have conflicting opinions on it. On one hand, I think that making things accessible has value, right? People who defend the LKML, the Linux Kernel Mailing List, often say, "Oh, but it's actually not hard," or "These young whippersnappers should just learn to use plain text email like I did when I had a terminal connected to the university mainframe." Or maybe not. Or maybe people just learn to use GitHub, and that's a legitimate thing to do as a professional. And then they would still have something to contribute, maybe. So on the one hand, the accessibility angle to this speaks to me. On the other hand, and this will sound in conflict, and I think it's not necessarily in conflict but maybe a bit in tension, I'm not sure that accepting code contributions is as critical to all open source projects as we make it out to be.

Filippo: 00:50:52.556 Open source is a lot of things. Open source is community — it’s integration. That's a major reason to be open source, right — that others can integrate. It's issue reporting. It's working in public, and then, I guess, also actually accepting other people's code. And some projects just live and die by their ability to accept other people's code, and they wouldn't be able to operate. And great that works for them. But in all the projects I've worked on, almost the only reason to accept contributions from others was to encourage them to become maintainers or to keep reporting issues that they were reporting. Yeah. But almost every time, and this might be an artifact of the field I work in, cryptography being a little spicier. But almost every time, rewriting the thing would have taken me less time than reviewing it. And I think that this is maybe not as true for the whole Go project but still more true for the whole Go project than it is for most open source projects. And I'm starting to see this, however, in more projects like SQLite, extremely successful, very open-source project. You can look at the issue tracker. It's public domain. It's basically everywhere. It integrates anywhere. It has plugins, APIs that are stable, all the hallmarks of open source. You can't submit a patch. You just cannot. And you know maybe that's fine.

Filippo: 00:52:22.385 On one hand, there's collaboration between a team. And for that, everybody likes their bike shed a different color. I really like Gerrit as a color. I find the GitHub PR review tooling maddening. I need to be able to systematically mark things as resolved and only mark them as resolved when I upload a new patch, because that's the patch that's marking them resolved. I don't want to go in there and click a button. I don't want it to automatically resolve something just because I changed it. Maybe I just responded, "Hey, I'm trying it like this. Do you think it solves the issue?" That's not resolved. I need to be able to iterate on a single comment without adding fix-ups, "address review comments" commits, and then squashing them. Sometimes in the Go project, we actually do review of the git commit message, because the tooling allows us to do it and because it's useful on a software engineering level. So in the future, when we look back, it's sometimes useful to look at the commit message and say, "Ah, that's why it was implemented like that." So I just really like Gerrit.

Filippo: 00:53:27.988 But when you split how you interact with your broader community, which is why we should definitely have a GitHub issue tracker, from how you collaborate within a project, it allows you to make different choices for those. SQLite uses Fossil, something like that. And it works for them. Great. And Gerrit works great for the Go team. I think you would have to pry it from every single maintainer's dead hands. For others, GitHub works as a thing. Great. If mailing patches works for you, sure. As long as it's not stopping the people you want to include as maintainers from becoming maintainers, and that's a reflection maybe the Linux kernel should make. But yeah. I guess I don't [inaudible] that much in code contributions as a driving force of open source, which might be a bit of an unpopular opinion, but.

Ben: 00:54:18.207 No, I think that's — as an open core company too, we use a similar thing because it's always so spicy. It's a security tool. One PR may open up another can of worms that you may not be aware of.

Filippo: 00:54:28.987 Yeah, exactly. And oftentimes, a PR, even being perfectly well-intentioned and actually wanting to integrate well in the project will come from the perspective of a single stakeholder. It will only know about the problems they have, while the maintainer probably knows all the other problems and all the maintenance issues that can stem from it and all the ways this will interact with parts of the code base that the contributor has never looked at nor should they. Which is why I appreciate well-formed issue reports or feature requests. And sometimes I'll look at the PR and be like, "Oh, that helps me understand what you wanted," and then just treat it as just another part of the issue and just go off and write my own patch.

One practical piece of advice for improving security

Ben: 00:55:12.135 Well, I know we're coming up on time, and I've had a great time chatting with you today. We always like to close out with one piece of practical advice that someone could use within their company this week to improve their security.

Filippo: 00:55:24.616 This might be a little — might come off a little aggressive, but the easiest way to not have to worry about vulnerabilities, the easiest way to not have to worry about trust, is to have fewer dependencies. And I know that this is talked about often enough, but I think that actually placing more value on a small tree of dependencies is something that, especially in some ecosystems, would pay more dividends than one might think. Sometimes it's worth it to just copy-paste some code, with the right license headers and everything. Sometimes it's worth it to just reimplement a thing rather than pick up a dependency on five new libraries. It is a little more work, but on the other hand, you don't have to worry about the trust, about the vulnerabilities, about everything about those libraries. So I guess to make it short, my advice would be to be deliberate in trimming dependency trees.
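
One low-effort way to confront what that tree actually looks like is to let the binary tell you. Here is a small sketch using the standard library's runtime/debug package, which reads the module information the Go toolchain embeds in every binary (the same data `go version -m your-binary` prints):

```go
package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// ReadBuildInfo exposes the module dependency list that the Go
	// toolchain embeds into every binary built from module-aware code.
	info, ok := debug.ReadBuildInfo()
	if !ok {
		fmt.Println("no build info (not built from a module)")
		return
	}
	fmt.Println("main module:", info.Main.Path)
	for _, dep := range info.Deps {
		fmt.Printf("depends on %s %s\n", dep.Path, dep.Version)
	}
}
```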

Conclusion

Ben: 00:56:27.229 Yeah, I think that's a great tip. Awesome. Well, thanks for joining us today.

Filippo: 00:56:31.606 Yeah, this was fun. Thank you for having me.

Ben: 00:56:34.073 [music] This podcast is brought to you by Teleport. Teleport is the easiest, most secure way to access all your infrastructure. The open source Teleport access plane consolidates connectivity, authentication, authorization, and auditing into a single platform. By consolidating all aspects of infrastructure access, Teleport reduces attack surface area, cuts operational overhead, easily enforces compliance, and improves engineering productivity. Learn more at goteleport.com or find us on GitHub, github.com/gravitational/teleport.


