
Introduction

Sasha: Thanks for joining today, LVH. We're going to talk about startup security, and I want to introduce you by saying that LVH is a principal at Latacora.com, here's the website, by the way. It's a company focused on security teams for startups, security for startups, which is why I think this conversation is going to be interesting today.

LVH is also a fellow of the Python Software Foundation, where he wears multiple hats. I know that LVH likes Clojure; I'm not sure what's changed since then. I see there are some mentions of Rust on your blog, so we'll talk more about that as well. And many people know LVH as the author of Crypto 101, an introductory course and book on crypto. LVH started it in 2013 as a PyCon presentation, and it evolved into a freely available online course on crypto, focused on breaking things, as far as I can remember.

Background on LVH

Working at Latacora

Sasha: So now you're a principal at Latacora, and Latacora focuses on security for startups. Can you tell me a little more about your motivation for joining Latacora, and about Latacora's motivation to focus on security specifically for startups?

LVH: When we started Latacora ... technically, I'm not a co-founder, but I started before we had our first client, so I feel like I got in pretty early. The other people who started Latacora came from a company called Matasano, which did application security. My job before this one was at Rackspace Managed Security, where we did managed security services: running a security operations center, having all sorts of agents installed on machines, and trying to figure out what's going on with your network.

So we were coming at this from totally different angles, right? They're trying to find application security vulnerabilities, and I'm trying to shore up networks. They're both security, but they're pretty far apart. Even so, we had kind of the same idea, which is that the things we were doing are useful, but they're not useful to startups. There's no good way for startups to consume any of those services.

There's a variety of reasons for that, but a lot of them come down to the fact that, as a startup, you don't necessarily have the resources to do anything useful with that input, if that makes sense. If you get an app sec test, somebody tells you, "Oh, we found this XSS vulnerability, and this XSS vulnerability." What they're probably not going to tell you is what you really should be hearing, which is, "You have this systemic problem where you're building websites this particular way, and if you just built them this other, slightly different way, then you could just not have this problem in the future."

If you're a Fortune 50 company, then a third-party application security pen test is fantastic value for money, right? Get more people to look at your apps, great. Keep doing that. But if you're a startup, first of all, app sec is not the only thing you need. There are all these other things you might get compromised through, and in fact it's far more likely that you'll get compromised that way.

Simultaneously, even if you just focus on app sec or just net pen, somebody telling you, "Hey, you missed a spot," is helpful if you have a real security team that knows what "missed a spot" means, but it's not helpful if these are the only audits you're getting. Bottom line, we both figured out that, hey, look, there's nothing intrinsically wrong with the things we were doing, but they don't really work well for startups.

Secondly, the startups we want to work with as clients are interesting in a lot of ways. They're doing stuff, and so, okay, what do we do to create a program instead that is specifically tailored to startups? How can we help startups build security teams? What we came up with is that we work with startups, as an alternative to their first security hire, for about 12-18 months. We found that's our sweet spot. Technically, it's half a year minimum, but 12-18 months is where we do best.

During that time, we build all the things we think you need as a startup security practice. We start out with a comprehensive audit: we look at all your systems from a whole bunch of different angles. We do net pen, we do application security, we look at your cloud deployments, we look at what we call deployment security, which is basically: how does code get from a developer laptop into prod, and what happens in that space? We also look at what we call corporate security, which is: how do you manage your laptops? What does your VPN look like? How do people sign in to stuff, et cetera.

A lot of those are things that are hard to get in a traditional, transactional security model, because you're supposed to have in-house IT or security people who do that, except when you're a startup, you don't have those people. The idea is that we deliver those services for startups by just being around and being their security team for about a year.

Sasha: Interesting. I think it's really valuable from another standpoint. Usually, when you look at a company that became a Fortune 500 or Fortune 50 enterprise, they have this IT department that tries to become secure after the fact, but it sounds like you make security holistic from the very inception: IT, audits, and the way people deploy stuff, baked in from the very beginning. That seems like a really valuable thing to do, to be honest. The fact that you arrived at this from Matasano's pen-test model, moving toward auditing and figuring this stuff out, is also really interesting.

But I'm curious, what is your day to day like? You don't need to name the startup, but just generally: do you start your day thinking, "Okay, I've got to do a security audit on this code," or "I'm going to do a network scan"? How does it work when Latacora joins a company? What does the engagement usually begin with, and what do you do after the first steps?

LVH: It's definitely super varied. We generally divvy the engagement up into two main parts. We start with a month and change worth of those in-depth audits, which is basically us trying to get to a state of the union. We try to figure out, "Okay, look. Here's where your app is. Here are some of the problems that you have," and come up with a series of tactical and strategic recommendations: things you should go fix right now, and things like, "Hey, look, over the next six months, we'd like to see you move more toward this, because you'll get security advantage y." That's the first chunk of the engagement.

Then, for the rest, there's a bunch of stuff we'll do on a day to day basis. For example, we'll do code review, and advisory services earlier, at more of a design stage. Sometimes we'll have a conversation with someone, and they'll tell us, "Okay, look, I want to embed this widget in this partner site," and we'll talk through it with them: "Okay, well look, can you actually do that securely at all? What are the downsides? What happens when the partner site gets compromised?" We do that first at the design layer, and then later, when people show up with actual pull requests, we audit the pull requests. Obviously, we're looking for different vulnerabilities at that point: within the design doc, you're looking for semantic flaws, and within the pull request, you're looking for very concrete flaws, like, "You have a cross-site scripting vulnerability right there," or an auth failure right there, something like that.

Generally speaking, we'll do pretty much anything on an advisory basis. What I mean by that is we don't want to be the security team that says no all the time. I don't want people to show up and have me tell them, "No, you can't have this because x, y, z." What we try to do instead is make sure that all of the people involved in a particular project, and that could be people in legal, people in product management, or individual engineers, understand exactly what they're doing from a security perspective.

Our job is basically to get you to understand risk. We will certainly advise you on whether we think a risk is acceptable, and in most cases we'll be able to give you an alternative. At the end of the day, what we're not going to do is litigate you into doing the right thing. We will tell you what we think the right answer is, and we'll tell you how to get there. At the end of the day, you can lead a horse to water, but you can't make it drink, right? There are certainly cases where the best security answer might not be the best business answer, because, I don't know, you want a super low-friction UX, and as a consequence you can't have super strong auth, because your users are put off by the idea of TOTP 2FA, right? It's possible.

The only thing that we can do is make sure that you really understand what you're getting yourself into. Maybe, if you ask us 100 times, and we tell you 100 times, "We think this is unsafe," and 100 times you decide, "Yeah, well, that's cool, but I'm going to do it anyway," then maybe you don't really need a security team, and that's okay. I mean, I'm not judging. I'm not saying that's necessarily a bad thing, but there are certainly companies out there that like the idea of a security team and don't love the idea of actually having a security team. So, maybe this isn't a good fit, and we're going to go find someone else to help. [crosstalk 00:18:45]

Starting Crypto 101

Sasha: I want to come back to that a little later, when I ask for some war stories, some really interesting things that happened to you. But before that, I wanted to get your answers on Crypto 101. We know it started as a PyCon presentation, but what were the real motivations behind Crypto 101? I know that you're still working on it. What do you want to add there? Let's talk about this for a little bit.

LVH: Sure, motivations for Crypto 101. First of all, before Crypto 101, the standing advice for anything cryptographic was basically: don't do it. That famous internet meme, "don't roll your own crypto," doesn't actually work. You can say it, but it's not particularly helpful. Why not? First of all, people misunderstand what you mean when you say, "don't roll your own crypto," so you'll get things like AES ECB, which is totally insecure. If you don't know why, you should go read Crypto 101, or go watch the talk; I cover it there too. You'll get AES ECB, and people go, "Yeah, but you told me not to roll my own crypto, and I didn't roll my own crypto. I used AES, the Advanced Encryption Standard. How could this not be good? It's advanced?!"
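To make the ECB problem concrete, here's a minimal Python sketch (using the third-party cryptography package) of the property that makes ECB unsafe: identical plaintext blocks encrypt to identical ciphertext blocks, so the ciphertext leaks the structure of the plaintext.

```python
import os

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()

# Two identical 16-byte plaintext blocks...
plaintext = b"sixteen byte blk" * 2
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

# ...encrypt to two identical ciphertext blocks, so patterns in the
# plaintext survive encryption (the classic "ECB penguin" effect).
assert ciphertext[:16] == ciphertext[16:32]
```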

There's a clear mismatch between what people thought you meant and what you actually wanted them to do. So first of all, that: it wasn't really helpful advice. Second, there weren't a lot of good resources for teaching people how to implement specific cryptographic strategies, anything from as simple as, "Go encrypt this email address," to something as complicated as, "I want a shared secret between my laptop and my phone; I have an app on my laptop and I want to pair it with my phone. How do I do that securely without typing in a 25-character password?" Or something like that.

There wasn't a lot of good advice on how you do that. Also, there were a lot of people who were genuinely interested in this kind of thing. And I disagree with the premise that teaching people anything about cryptography is a bad idea because they'll learn just enough to be dangerous. If you look at the people who actually know what they're doing, and at the crypto that they implement, you'll see it's all hyper-conservative, or a lot of it is. Competent developers who know a lot about how cryptographic primitives and schemes can be implemented securely and insecurely, when you look at the things they come up with, it's boring crypto, right? People love boring crypto. Google's fork of OpenSSL, where they basically try to make a secure subset of OpenSSL without caring as much about backwards compatibility, is called BoringSSL. They consider that a compliment: they want their crypto to be boring. Clearly, there's a dissonance in saying "we can't teach people" when the people who already learned don't make these mistakes. Clearly, there's a difference there.

The approach that Crypto 101 takes instead is the approach that Cryptopals, from Matasano, takes as well. That is: "Look, we're going to show you the simplest thing that could possibly work, then we're going to teach you how to break it, then we're going to move on, and we're going to teach you how to break that, and so on." At the end of this entire endeavor, you end up with TLS.

At the end of it, I don't have to say, "TLS is good. You should use it." If you want to know why, well, now you know why, because of all the bits and pieces of TLS: the bulk encryption, the authentication, well, sometimes you have authenticated encryption in modern TLS, but you know why you can't just use AES ECB. You know why you need to authenticate your ciphertexts, you know why the Diffie–Hellman exchange is important, you know why signing matters, et cetera, et cetera, because I made you break all of those things.

If you take literally any single piece away, then TLS doesn't work anymore. I'm hoping that people who make it all the way through Crypto 101 come out with a new-found respect for TLS, and therefore stop messing with it.

So far, that has mostly worked out. I don't know of anyone who has gone off and done awful things as a consequence of Crypto 101. I certainly know of people who have gone off and done awful things, but most of their knowledge seems to have come from Wikipedia pages. Those people were going to be dangerous no matter what I do, so, eh. Is that a good answer?

Sasha: Yeah, interesting. You mentioned there's a notion out there that you shouldn't really teach people crypto, because if you teach them wrong, they'll know just enough to be dangerous. Is that a widespread notion in the security community? Is it changing? Is Crypto 101 your way of trying to change that notion? What do you see out there?

LVH: To be clear, I don't think that every single developer should read Crypto 101. It's targeted at every developer in terms of level, not in the sense that I think every developer should have read the book. It is strictly for people who are interested in it. What I do think we need: we wrote a document called Latacora's Cryptographic Right Answers, which we released, I want to say, a couple of weeks ago, and that has very little justification, right? It just says, "If you want to encrypt a string, you should go use secretbox out of NaCl." It doesn't tell you why secretbox is good. I don't need to explain adaptive chosen-ciphertext attacks, again, for you to understand that you should just go use secretbox. Secretbox is good. Stop worrying about it.
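For reference, this is roughly what that right answer looks like in practice; a minimal sketch using the PyNaCl bindings to NaCl's secretbox:

```python
from nacl.secret import SecretBox
from nacl.utils import random as nacl_random

# One symmetric key; in real code it would come from a secret store,
# not be generated inline.
key = nacl_random(SecretBox.KEY_SIZE)
box = SecretBox(key)

# encrypt() picks a random nonce and prepends it to the ciphertext,
# and the ciphertext is authenticated: tampering raises an exception
# on decrypt instead of silently returning garbage.
ciphertext = box.encrypt(b"attack at dawn")
assert box.decrypt(ciphertext) == b"attack at dawn"
```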

I think there's certainly a lot of value in being able to just communicate what the right answer is. Crypto 101 goes way further than that and actually goes through the breaking. I don't think that's necessary. As for the general security community, I've heard this idea of "don't implement your own crypto" less and less. People still say it, but I get the impression that it's dying down a little bit. I don't want to go so far as to say, "It's gone, and Crypto 101 fixed it." No, I don't think that's true at all. But I certainly hope I contributed in whatever small way. If there are fewer crypto vulnerabilities tomorrow as a consequence of Crypto 101, then I'm happy. That was the only thing I was going for.

General Security Advice for Startups

Sasha: Cool. Look, I have tons of questions for you about general advice for startups, and I want to jump into them. It's interesting that startups are mostly concerned with survival, especially in the early stages, and very often security is not at the top of their priority list, as we know. You sort of answered part of this question already, but is there a simple list of steps companies can take to get reasonably secure? Cryptographic Right Answers is probably one of them; do you know any other checklists or things you recommend companies do, or not do, to become reasonably secure?

LVH: I definitely like the Cryptographic Right Answers format for that. I'm pretty sure I'm going to write an AWS IAM Right Answers, an EC2 Right Answers, a bunch of those, because people seem to like them and they seem to be helpful, so we'll probably do more of that.

I think I understand what you're saying, but I want to almost object to your premise a little bit, when you say startups are concerned with survival and security is not a huge priority. That's certainly true in a lot of cases. It's absolutely true that security costs money, and it's money that a lot of startups don't have. At the same time, we've also spoken to a lot of startups for whom security is a major selling point, where they're closing sales because they have a way better security story than their competitor.

Also, obviously, getting popped early on, and probably making the front page of the orange website with a bunch of people telling you why you're stupid for not having patched your Jenkins or whatever it is that got you popped, that's not great publicity either. I don't know if it's generally true. There are certainly a lot of startups that we've spoken to, and obviously there's massive selection bias there, right? It's the startups that we, a security consultancy, have spoken to, so it's only the subset of startups that are already vaguely interested in security. That said, there are a lot of startups that are trying to do a good job, and they just don't have access to the resources to do it.

There's a handful of things that I would recommend people do overall. Let's see. First of all, stop using the AWS Console. Just don't do it. Have a separate account if you must, if you want to mess around with stuff, but don't do it in your prod account. Install aws-vault. It's a project by, I think, 99designs; I'm sure we can get the link to people somehow after this call. What aws-vault does is, instead of putting your credentials in plain text on your disk, it puts your credentials in your operating system keychain, which means it's way harder to accidentally leak them. This absolutely happens. We've seen people accidentally publish their dotfiles, the repo with their .bashrc or their .zshrc or whatever in it, and of course there's an AWS access key in there. Five minutes later, people were mining bitcoin. So, aws-vault's fantastic.

Manage as much of your infrastructure as code as possible, as early as possible. I don't really care how you do that. Kubernetes is great, and Terraform is better than some of the other tools I've certainly seen. I've seen as many Terraform setups blow up as I've seen Terraform setups; it's not fool-proof, let's put it that way. But it's still 1,000 times better than manually managing a bunch of AWS servers.

Institute mandatory code review. Make sure at least a handful of your engineers ... you don't need everyone to be an app sec expert, but if you have a handful of people who know what Server-Side Request Forgery is, or something like that, then you only need that handful, plus mandatory code review, to effectively inoculate your entire application, because there's always going to be someone looking at this code once in a while, and there's a pretty good chance they'll be able to stop that kind of vulnerability.

Don't have infrastructure unless you're serious about managing it. I poked fun at Jenkins earlier, but I'm seriously worried about 100 percent of the customers that have a Jenkins. I'm not saying that there's necessarily an RCE that's going to fall out of it, but Jenkins has serious security vulnerabilities every week. I'm not saying Jenkins is awful; I'm saying, if you have infrastructure, you own that now, and you don't get to half-ass it.

Get WireGuard. WireGuard is a VPN. It started out for Linux only. My understanding is that there is a macOS client now that works well. I haven't tried it because I don't use Macs anymore, but WireGuard is super easy to set up. It is crazy fast. You don't have an excuse to have anything listening on the public internet anymore, in my opinion, other than WireGuard.

If you can, get rid of SSH keys on disk; it's kind of the same problem as AWS credentials sitting on disk unencrypted. Get Teleport or another SSH CA so you don't have SSH keys on disk.

Concerns with AWS Console and Jenkins

Sasha: Interesting. That's a lot of advice, actually, thanks. [inaudible 00:30:47] An interesting pattern that I've noticed in your advice is that, generally, you don't recommend doing anything manually. For example, don't use the AWS Console, and prefer infrastructure, and any operations, as code. Is that because it's easier to review, or because humans are error-prone? Why don't you like the AWS Console?

LVH: There's a couple of reasons why I don't like the AWS Console. First of all, for example, we have a work sample test that we use for potential employees: you get access to an AWS environment with an app running in it, and there are some misconfigurations in the AWS environment and some bugs in the app that have security consequences. For some of the trickier ones, one of the design criteria, one that not just I, but also attackers, use when deciding whether a particular intentional misconfiguration, basically a back door or some other compromise of your environment, is interesting, is this: is it entirely invisible from the Console? There are plenty of things, like interesting CloudTrail misconfigurations, that are literally invisible from the Console.

In some cases they are literally invisible, as in no amount of clicking will get you to that information, and in some cases they are de facto invisible, because you'd need to go to 17 different pages that are not obviously linked together before you'd be able to put it together, so there's no way you're actually doing that. Another reason I actively dislike it: there was actually a vulnerability published today, a consequence of one of the default managed policies that AWS creates for you having some bonkers-wide permission that basically allowed people to do, effectively, account takeover.

Generally, I don't like the default managed permissions that AWS gives you anyway, but the Console just adds to that, because it's really easy to go into the Console and click, oh, Redshift, right? Then the Console goes off and creates 15 resources. It doesn't tell you about most of them, and most of them are just garbage that you don't want in your account.

Hence the advice, if you are going to mess with a specific service, fine, you can use the Console because it is faster to get started. I do recognize that. But go do that in a separate account. Don't mess up prod.

Does that make sense?

Sasha: Yeah, that makes a lot of sense. So basically, visibility is a big problem with the AWS Console, and default settings that aren't easily audited, or misconfigured defaults that you don't have control over, are a problem with the AWS Console, or really with any out-of-the-box UI templating.

You mentioned Jenkins, and you said you're not actually saying Jenkins is terrible, but at the same time you're worried about everyone who has a Jenkins. I'm curious: Jenkins obviously has a lot of problems, but are there easy ways you'd recommend to secure Jenkins out of the box? Is there some sort of pattern or Terraform example that you could publish at Latacora, anything like that the rest of us could use? Anything that comes to mind?

LVH: That is a really good question, and I'm not sure. Some general advice, I guess, about the things that tend to get you into trouble with Jenkins. One of them is that people install Jenkins as one of the first pieces of infrastructure they have, because you need a build server, something that's easy to deploy, right? So it ends up the odd one out: the one piece of infrastructure that's not managed properly, the way every other piece of infrastructure is, and so Jenkins tends to be worse at getting updates. So, one piece of advice is just: update Jenkins. It's not complicated, just keep patching.

The second part, and one thing I do have a problem with, is that a lot of people use Jenkins not just to produce builds, like a build artifact, where literally what you do is docker build, right? docker build --tag my-app. They're also using Jenkins to deploy entire pieces of infrastructure, so de facto they're running Jenkins on some random AWS instance, and the role for that instance is basically star-star: it's got admin access to everything, right?

Then the problem is, okay, fine, as soon as I pop Jenkins, I literally have your entire AWS account. If you are going to do that, that's fine; I certainly like the idea of having as much infrastructure automated as possible. But consider doing things like using AWS roles to temporarily assume a role in order to modify some piece of infrastructure. You have to do some work to get there, but there's a bunch of benefits you get out of it. You get significantly better audit logs, because every time you assume a role, there's a CloudTrail entry for that, and you can at least start narrowing down what it is that Jenkins has access to.
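As a sketch of what that can look like with boto3 (the role and session names here are hypothetical), the build box holds almost no standing permissions and asks STS for a short-lived, narrowly scoped role only when it needs to touch infrastructure:

```python
import boto3

sts = boto3.client("sts")

# Hypothetical narrowly scoped role. The Jenkins instance profile only
# needs permission to call sts:AssumeRole on this one ARN.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/jenkins-deploy-only",
    RoleSessionName="jenkins-build-1234",  # shows up in CloudTrail
    DurationSeconds=900,
)
creds = resp["Credentials"]

# A short-lived session scoped to the deploy role, nothing more.
session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```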

Sasha: Makes sense. Okay, basically: take care of your Jenkins. Make sure that Jenkins is not root over all your infrastructure, in the sense of being able to do anything with your AWS account, and if you use Jenkins, try to limit its capabilities by separating workloads: have Jenkins build and publish only, and leave the deployment parts to some other piece of infrastructure. That's what I heard in your advice.

Languages and Frameworks

Sasha: Interesting, that's really helpful. Moving to another piece of infrastructure: languages and frameworks. This falls into the same pattern; as a member of the startup community, I'd personally love to see more prescriptive advice from the crypto and security community in general, and that applies not only to Jenkins and things like that, but also to languages and frameworks. Are there languages and frameworks you've seen delivering the most problems, and are there any that, in your opinion, deliver a good level of security, and some that, in your experience, are worse? We don't necessarily have to say a language is bad, don't use it, but are there examples of languages that were really problematic in your security audits, or things you've seen languages promoting that we shouldn't be doing?

LVH: Yeah, it's really interesting to hear you say that, that you're actively soliciting that kind of advice, because, I don't know, I guess maybe I try to be nice too much. I'm not too worried about the Cryptographic Right Answers document, but this is very similar, right? In some cases we're telling you, literally do this, and in a lot of cases we're also telling you, specifically, don't do this.

I feel bad about doing that; to your point, it almost feels like the question is angling for me to say, "PHP is bad stuff, don't use PHP." There are frameworks you can write reasonable PHP software in now. But you're not the first person to tell me, "No, you should get significantly more prescriptive and just say these are the things that Latacora has vetted, that you should go use," and that I should just get over my hesitation and start doing that.

The one case where I've seen that very specifically, where tools have just obviously made everything better, is cross-site scripting attacks. For those of you who don't know, cross-site scripting is a style of injection attack where, because you are not properly sanitizing or escaping input, you're putting some user-controlled, attacker-controlled input into, typically, the DOM, so it ends up somewhere in, for example, the HTML that you're generating for your page view, and as a consequence the attacker gets JavaScript execution.

If you're a modern single-page app that just interacts with an API, all your app does is serve some JavaScript and interact with an API, so if the attacker gets to run JavaScript there, that's account takeover, right? It's a pretty severe vulnerability. An example of where things just got magically better, because the tools accidentally fixed this problem, and I don't think it was a conscious design choice: we've seen a lot of XSS come out of poorly conceived jQuery code, because jQuery makes it really fricken easy. You call .html() with some accidentally under-sanitized input, and before you know it, you've got an XSS vulnerability.

React has basically made XSS go away, right? You have to call dangerouslySetInnerHTML, it's literally called dangerouslySetInnerHTML, to get an XSS vulnerability out of React. It's fantastic. The reason React is able to do that is that, fundamentally, injection vulnerabilities are about context. You need to know: is this string attacker-controlled text, or is it already some JavaScript, or JSON, or HTML, or CSS, or whatever it is? I need to know what it is in order to put it into context correctly. React, because of the way it builds the DOM, always knows whether this is an attribute, this is some JavaScript, this is whatever, so we just see significantly fewer vulnerabilities there.

On the server side, there's classic PHP, and I want to make very clear I'm saying classic PHP; I'm not saying all PHP apps are definitionally screwed. I'm saying, if you write a PHP application the way you would have in 2005, where you have a file named index.php and you're embedding some stuff into an HTML page, then that is a cornucopia of XSS vulnerabilities. There's no way the escaping went right there.

We do a little bit better with Django templating, because Django templating at least escapes for HTML by default. The problem is that it's significantly more complicated than that, because you're not always writing HTML. A common example I've seen: you have a script tag somewhere in the page you're loading, and you want to embed a JSON object in it. How do you do that in Django? You can't just put it in there, because it'll get escaped as HTML, and your script tag explodes; it doesn't actually work.

What you do is pipe it to safe, which, to contrast with the React naming, is the dumbest name for a filter I ever heard. It should be called ridiculously-dangerous-don't-use. Don't call it safe. I know where the name comes from, but that's what you should have called that function. I kid. I get that it's backwards compatibility; they can't fix it anymore, and I'm sure I'm not the first person to complain about that function name. I don't want to give them too much of a hard time.
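As an aside, newer Django versions grew a purpose-built escape hatch for exactly this JSON-in-a-script-tag case: the json_script helper (added in Django 2.1, so after many of the codebases being described here were written). A minimal sketch:

```python
from django.utils.html import json_script

# In a template you'd write {{ payload|json_script:"user-data" }}.
# The helper escapes <, > and & inside the JSON, so a value like this
# can't break out of the script tag the way {{ payload|safe }} could.
payload = {"name": "Bobby </script><script>alert(1)"}
print(json_script(payload, "user-data"))
# -> <script id="user-data" type="application/json">{...}</script>
```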

An example of how you can do even better is Go's html/template. Yes, for everyone listening, LVH is saying nice things about Go; savor it, it's not going to happen very often. Go's html/template is fantastic. If you look at the escaping rules for it, it knows about all sorts of context, like JavaScript. It knows if you're looking at an attribute or at a DOM string. If it's an attribute and it's an on-hover handler, it knows that that's actually JavaScript, so the escaping rules for that are subtly different. It knows that the title tag is slightly different from everything else. It is really awesome, and it's pretty hard to ... I'm not saying you can't get XSS out of Go's html/template, but you have to go out of your way, doing something weird, to manage it.

I think that's a nice example of how things are getting better, sometimes by accident, sometimes intentionally. Go's HTML templating is certainly an intentional security feature they added, and it worked out. For React, I want to say they accidentally fixed XSS on the front end; I don't think that was a design goal. But whatever, it worked. I'm taking it.

Sasha: Yeah. It's interesting that your advice is as practical as it could be, because you're not endorsing a language; you're endorsing certain practices and certain frameworks that could be part of the language or outside of it. So, my feedback to you, as someone who really seeks this advice, would be: don't hesitate to give this pragmatic advice and say, "Hey, Go's html/template is great." It doesn't mean you endorse Go in general, just, "Hey, this is a great way to do this." Why not? It's working.

Actually, speaking of security advice: how would I, or our listeners, differentiate between good and bad security advice without being security experts ourselves? For example, if you Google secure cipher modes for nginx, there will be 15 different articles, full of abbreviations unknown to many of us. Some of them will be upvoted. Some will say, "Hey, this is bad. Don't do this anymore." Is there anything we can do to tell when something about a piece of security advice is off, and why? Does anything come to mind?

LVH: I don't know of anything. That's a huge problem, right? One of the things that makes cryptography specifically, but also security in general, really hard is that a lot of the time, getting it wrong looks exactly the same as getting it right. If you're not a security person and you're looking at a jQuery XSS vulnerability, it's not staring you in the face, with the exception of React, where you call dangerouslySetInnerHTML and, I don't know, that sounds dangerous.

I don't know of any general rules, except the usual rules for how to be an adult in modern society, which involve a lot of critical thinking. You know, if somebody is also selling you a product, then maybe you should give them a little extra scrutiny before you believe what they're saying. But I don't have any great ideas. I'm also cognizant, and again, maybe this is something that only I am worried about and I should just get over it, that when we do things like Cryptographic Right Answers at Latacora, and like I said, we're going to do more of those Right Answers documents, effectively what we're saying is, "You should use secretbox because I say so." There's no justification. I'm not going to argue with you about whether secretbox is good. Well, it turns out that if you say bad things about secretbox on Hacker News, I'll probably argue with you about why secretbox is good. But in that document, we're not going to argue with you. I'm not going to litigate you into believing that secretbox is good.

If you want that, then fine, Crypto 101 still exists. Go read the book. But I think it's going to boil down to a handful of people who, for better or for worse, have good reputations, and hopefully reputation sort of, kind of, maps to competence; it certainly doesn't always. You have a handful of people that you've decided to trust, and you hope that those people put out advice. I'm really sorry, I've got nothing. If you figure it out, please let me know, because that would be very useful.

Sasha: Well, actually, I think you answered your own question, in two parts. The first part is that you give good answers in the form of the Latacora Right Answers documents, but at the same time, rather than leaving them unsupported, without justification or backup, you back up your answers by pointing to your Crypto 101 book, which gives anyone a reasonable explanation of why you actually picked that answer. I think the combination of the two lets people quickly see the answer when they're looking for one, and then, given two different pieces of advice, pick the one that has more backup in the form of a more advanced, serious treatment they can look into on their own, or ask someone on their team to look into.

And if there are two pieces of advice on the internet, one not backed up by anything, and another backed up by a whole book that you wrote, reviewed, published, and talked about, the second makes more sense, because then it's not just authority; it's this interesting combination of really brief, short advice, backed up by a more detailed book, or a more detailed [inaudible 00:47:53] part of it.

Cloud / IaaS Providers

Sasha: I think that would actually be really good. Look, I have a bunch of questions, so let's move on. I want to ask you the same kind of question about clouds: is there anything about them that you think differentiates them security-wise? Is Amazon, for example, more secure in your opinion than Azure or Google, and why?

LVH: I would generally be wary of people being on tiny VPS providers for anything that isn't a toy project. AWS, GCP, and Azure are the major ones. AWS has a ton of power in IAM, their Identity and Access Management suite. IAM has its warts, but it does give you a ton of power; it is extremely expressive. The good news is there's a whole bunch of tools; the bad news is there's a whole bunch of tools. In particular, if you wonder something as simple as, "Hey, I have two instances. Can they talk to each other?", well, you have host firewalls, you've got security groups on the instances, you've got network access control lists, you've got VPC peering, you've got AWS Direct Connect, so they might be able to talk to each other via some other bizarre link. There's a gazillion different ways that two instances might or might not be able to talk to each other.
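To give a feel for why that question is hard, even a first pass at it with boto3 has to pull from several unrelated APIs, and this still ignores NACLs, peering, Direct Connect, and host firewalls (the instance IDs here are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Layer one of several: which security groups are attached to each
# instance, and what do their ingress rules allow?
reservations = ec2.describe_instances(
    InstanceIds=["i-aaaa1111", "i-bbbb2222"]  # hypothetical IDs
)["Reservations"]
for reservation in reservations:
    for instance in reservation["Instances"]:
        for sg in instance["SecurityGroups"]:
            rules = ec2.describe_security_groups(
                GroupIds=[sg["GroupId"]]
            )["SecurityGroups"][0]["IpPermissions"]
            print(instance["InstanceId"], sg["GroupId"], rules)

# ...and you would still have to cross-reference network ACLs
# (describe_network_acls), VPC peering (describe_vpc_peering_connections),
# routing, and whatever host firewalls are running on the boxes.
```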

I haven't even mentioned VPCs themselves; maybe the VPCs are separated. It gets complicated pretty quick. Obviously, like in the [inaudible 00:49:28], you have the whole thing about the shared responsibility model, right? From that perspective, I think that AWS and GCP, at least, are on a pretty similar level. In terms of whether the underlying hosts are going to get patched the next time there's a Xen vulnerability: yes, AWS and GCP will get their shit together and patch all those Xen vulnerabilities.

But more generally, I think the question is: do they give you the tools to adequately secure what you run on top of the cloud? I think AWS is probably at the forefront there, and GCP is second. I haven't worked with Azure recently enough to tell you whether it's still where it used to be; where it used to be was fine, but not great, not AWS-grade. AWS has a bunch of extra tools, like AWS Inspector and AWS GuardDuty. They're a mixed bag. You should go turn on GuardDuty, sure, but the problem with a lot of these automated tools, as I'm sure many of you know, is that they produce a ton of noise.

It's a mixed bag. I think AWS is probably in the lead. GCP is really close. I would be very wary of anyone trying to run anything on a small VPS.

Latacora War Stories

Sasha: All right. I want to go back to your Latacora experience. I'm really curious: is there anything you can tell us, some war stories from [inaudible 00:50:58] startups? Especially the most common tech security mistakes and security holes you've encountered so far, the ones so common that you just want them to go away and want to tell everyone about them, not necessarily mentioning any company, of course. Does anything stand out?

LVH: Definitely. In terms of app sec, we've found XSS in most customers, except the ones that are definitionally immune because they have, I don't know, a JSON API with some React and not a lot of opportunity to mess it up. Basically, I think it's pretty fair to say: if you load jQuery, I assume you have an XSS vulnerability.

Sasha: Okay.

LVH: Another attack that we've seen show up several times is SSRF, Server-Side Request Forgery. Basically, it's when you have a webhook, something that calls back out to the internet to notify someone that something got done or whatever, and you're able to point that request at something else, so now you're making requests from the server itself, which is potentially on a privileged network. That's a problem if you have a bunch of services that strictly rely on network segregation for safety, which is not good; to exploit an SSRF vulnerability properly, something else has to be messed up somewhere. We've found SSRF vulnerabilities in a bunch of places, especially in the context of DNS rebinding. The idea is you have SSRF protections where something resolves a URL and checks which IP it points to, but the name is on an attacker-controlled domain. When you resolve it the first time, it gives you a safe IP, so you think it's okay to connect to. When you resolve it the second time, it points to 10.0.0.1, and then you're hitting an internal service.
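A minimal sketch of that resolve-then-fetch pattern, with the rebinding window called out in comments; the validation logic is a simplified stand-in, not a complete SSRF defense:

```python
import ipaddress
import socket
from urllib.parse import urlparse

import requests  # third-party HTTP client

def fetch_webhook_naive(url: str) -> requests.Response:
    host = urlparse(url).hostname
    ip = ipaddress.ip_address(socket.gethostbyname(host))
    if ip.is_private or ip.is_loopback or ip.is_link_local:
        raise ValueError("refusing to call an internal address")
    # BUG: requests resolves the hostname *again* here. An attacker's
    # DNS server can answer with a public IP for the check above and
    # with 10.0.0.1 for this second lookup (DNS rebinding).
    return requests.get(url, timeout=5)

# A safer shape is to resolve once, validate that IP, and then connect
# to that exact IP while sending the original Host header, so there is
# no second lookup for the attacker to rebind.
```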

Something I think not a lot of people have been thinking about, but where we've certainly found stuff, is G Suite authorized apps. People will sign in with Google, right, and a lot of apps just ask for your email address, and maybe your name or something like that. That's not a big deal. But there are a lot of applications that ask for access to Drive, and ask for access to Gmail, and I don't know about you, but I get queasy when some random calendar-scheduling application wants perpetual access to all of my email forever. It sounds like a bad idea. We've certainly found a lot of applications that can't really justify, or can only poorly justify, the things they're asking for. That's a very common problem.

SMS two-factor auth, still a common problem. I was kind of hoping that NIST would fix this, because NIST fixed passwords, right? NIST changed their recommendations for passwords, and now, when people with dumb security questionnaires tell me, "Oh, your password policy is insecure because you don't require symbols," I can respond with, "Well, we're compliant with NIST, so go away." NIST also says SMS two-factor auth is bad and we shouldn't use it, and yet people are still using SMS 2FA a lot. And I understand why. For a ton of people, SMS 2FA is a fantastic UX: they have their phone on them all the time, and typing in a four-digit code is easy. They love it from a user-experience perspective, but dear god, I don't love it from an authentication perspective. [crosstalk 00:54:21]

Sasha: Thanks, that's a lot of advice. I was actually writing all this down, and I'll try to email it to everyone as advice from LVH. I actually have the same question about processes. You mentioned one process mistake at the Jenkins level: when people set up Jenkins, they set it up with more admin rights than it really needs. Is there anything else about processes at these companies that made them less secure than they should be? Some mistakes, maybe deployment mistakes or logging mistakes, anything that comes to mind?

LVH: First of all, again, don't use the Console. Well, no, sorry: if there's one piece of advice you take away from this, it's install aws-vault. If there are two pieces of advice you take away from this, it's install aws-vault and stop using the Console. The reason is simple. If you're manually deploying a service all the time, at some point someone's going to be deploying that service at 5 PM, their kid is crying and sick or whatever, they're distracted for whatever reason, and they're going to mess it up. It's not because they're a bad human being. It's not because they're bad at their job. It's because you asked them to do a task that should have been automated six months ago, and you didn't do that, so now it's a manual process, and manual processes come with potential for failure. This is a pretty well-understood problem.

Just don't use the Console. Another thing that comes up a lot, and I'm not sure if this is a process or a tech problem, is CloudTrail. First of all, people will put CloudTrail logs in unsafe S3 buckets within the same account. One of the things we tell most of our customers, once we get the big problems out of the way, is to consider having a separate account that CloudTrail dumps into: have CloudTrail write only to that one S3 bucket in that account, and only read CloudTrail from there. Second, by default, CloudTrail had log file hash verification off. CloudTrail has a mechanism, kind of like a Git repository, where every new digest contains the hash of the previous one. If you delete a commit in Git, you know; you can't lie about history, or otherwise all the hashes change, right?

It's kind of the same idea, and it was turned off by default, which basically means, okay, your audit logs are totally useless, because the next time I pop an AWS cred, the first thing I'm going to do is look at your CloudTrail bucket and delete all the evidence that I was ever there, and then they're not very useful. So, secure your CloudTrail. I don't understand why the defaults for CloudTrail were unsafe. I know the reasons behind them; I don't think they're good reasons. They did change the defaults, but that was two months ago, and it was insecure for years. It's fixed now, but it's still bad.
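A sketch of those two recommendations together (the names are hypothetical): a trail writing to a bucket owned by a dedicated logging account, with log file validation, the hash chaining described above, turned on explicitly:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(
    Name="org-trail",                    # hypothetical trail name
    S3BucketName="audit-logs-123456",    # bucket owned by a separate,
                                         # logging-only AWS account
    IsMultiRegionTrail=True,
    EnableLogFileValidation=True,        # digest files hash-chain the
                                         # logs, like commits in Git
)
cloudtrail.start_logging(Name="org-trail")
```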

Sasha: Interesting. This is really valuable advice, because people don't usually think of their audit trail as something that should itself be secure; they think of the audit trail as the thing that watches the security of the system. You're going one step further and saying, "Your audit trail is a really important part of your security as well, because if it can be tampered with, you're not secure."

Startup Technology

Sasha: Really cool. Look, we're wrapping up; we have just five more minutes left, but I still have one question, and then we'll probably take some questions from the audience. If you can type your questions into the chat, I'll try to read at least some of them. There's one thing that's always on the internet, on Hacker News, everywhere: Docker and Kubernetes. The question I see a lot, with a lot of debate around it, is whether Docker and Kubernetes make security harder or easier. Do they make your job harder or easier, and why?

LVH: I think there's the potential, down the line, for it to be a fantastically good thing for security. I don't think we're necessarily there yet. I also don't think it necessarily makes things worse right now, but in a lot of cases, it does. The first thing that comes to mind when you say that is Kubernetes' requirements for networking. Let's see if I can get this right: every pod has to be able to talk to every other pod, the flat network requirement, basically no netseg. People misinterpret that. I've seen Kubernetes clusters where, as a consequence of that, every host has a public IP on the internet. No, that is not what anyone said. Go back, you made a mistake.

Also, the docs for this are terrible, right? Sorry, I don't mean to say terrible; I don't want to sound so negative. But if you ask the Kubernetes documentation, how do I actually manage networking, there are 15 different providers that will all ostensibly do exactly the same thing. You get confused by all of them, and then eventually you end up with either Calico, Flannel, or nothing. I kind of wish, and kind of to your point, that as security people we'd make a more specific recommendation there.

One thing that is nice, though: let's say you're putting everything in Kubernetes. The upside you have is you're already speaking a standard language, so if Kubernetes as a system improves, then in a lot of cases you're going to be able to reap the security benefits automatically. For example, one of the things that comes to mind is the Kubernetes secrets management. By default, that wasn't great. It solved a problem, but it solved a kind of weird problem that I don't think everyone has. I'm not going to get started about secret storage, because we have less than an hour left, so I can't get started about secret storage.

The bottom line is, as long as the API isn't intrinsically insecure, as long as the thing it's trying to do isn't intrinsically bogus, then at least you have the option of saying, okay, well look, at some point down the line, there's going to be a better secret storage system that I'm going to be able to plug into this, and I'm still going to be able to run minikube [01:00:19] or whatever it is to run Kubernetes locally, and have insecure test secrets laying around, whatever, who cares? Then, when I deploy to prod, I'm going to have this way better story. The other thing you get out of Kubernetes is that everything's written down, and infrastructure as code is fantastic from an audit perspective. Same thing with Docker in general: at least the fact that we've all agreed you're going to produce an artifact, and I can audit that artifact, that's an improvement, right? At least I have the potential to look at what's running. Generally, it's in the right direction, but I don't think we're there yet.

Q&A

Sasha: Yeah, oh, I've got a question from an anonymous viewer. Do you see the question? Has LVH ever encountered FIPS 140-2 in the field? It's the federal standard for validation of cryptographic modules. Perhaps you have seen startups that are trying to achieve federal compliance.

LVH: I have certainly heard the term FIPS 140-2 more than once in my life, though not at Latacora specifically, and quite frankly, any day where somebody says FIPS 140-2 is not going to be a good day. I'm not going to be happy at the end of that day, not necessarily because of bad [inaudible 01:01:35] compliance.

We have not dealt with FIPS 140-2 specifically. We've certainly mentioned it for stuff like, oh, customer X uses this HSM that has that certification, or customer X uses KMS, which has this certification, something like that. But that's been nothing more than a compliance check box. I am certainly not the person to get you FIPS 140-2 compliance. [crosstalk 01:02:08]

Sasha: We shouldn't go to Latacora when we want to get compliance?

LVH: There are people who can help you with that, but that is not what I'm going to be doing. We also don't have any companies looking at FedRAMP. Mostly it's HIPAA now, plus obviously GDPR for everyone, and there are certain kinds of compliance that are kind of easy, relatively speaking; PCI is not the end of the world, you could probably do PCI. But no FedRAMP.

Conclusion

Sasha: Gotcha. Look, I think we're almost out of time; we're wrapping up about an hour of talking about startup security. I'll try to write up everything you said in the form of a transcript and send it to the audience. I think we got a bunch of really helpful advice: on aws-vault, on how Kubernetes could be better, and things like that. If anyone has any questions, you can always email us. I'll send the link, and I'll try to forward some of those questions to LVH; I'm not sure if he'll have time to answer them, but thanks, everyone, for your time. Thanks, LVH, for joining us; I think this was very cool. I learned a lot personally today, and hopefully it was the same for the audience. We'll catch up after.

LVH: Just in closing, one of the neat things about Latacora is that, because there's only a handful of customers we can work with, we've certainly had to say no to people that we would have liked to work with, and that would have liked to work with us. You should not hesitate to ask us questions. Just because you're sending email to an address that eventually, sometimes, results in people signing a contract doesn't mean we're going to try to sell you something. You should just ask us questions.

We are loud on the internet, and we're pretty happy to help, as hopefully you've experienced this past hour. So don't hesitate to ask us questions. It's helpful for us and helpful for you. If there's anything you'd like to see, we'd like to know. Thank you all for your time, and thank you for the opportunity to share some of the stuff we've been doing this year.

Sasha: Yeah, thanks, thanks LVH. It was really helpful. We'll talk to you later. Bye bye, thanks everyone. I'll share this video after, as well. Bye bye.
