I had a chance to attend the O’Reilly Security Conference earlier this week. I find that when I am at these conferences, I get into a mode of thinking that is more open and creative. Here are some random thoughts I noted during the conference, some of which I may write more about in the near future:
- The somewhat unspoken theme of the conference – at least of several of the keynotes – was reducing the friction of security to the point where, hopefully, doing a given task the desired “secure” way is easier and/or faster than doing it some other way. I really like that concept, but I think it requires talent and investment that a lot of companies don’t have available. A great example was one presenter describing how their company’s security team modified operating system libraries to implement a more streamlined user experience for logins. Great in concept, but I suspect that idea doesn’t scale down well to organizations that lack that kind of talent or the ability to maintain such customized code.
- When I go to a security conference, I have, let’s say, 99 security problems. By the end of the conference, I have 111 security problems. By that I mean that security conference presentations are good at defining problems I previously didn’t know I had.
- There is almost certainly a selection bias in the presentations picked by security conferences: talks are generally about problems the presenters have solved, or mostly solved. Those presenters, their problems, and their solutions exist in an ecosystem largely defined by their culture, skills, risk appetite, and so on. I rarely get “actionable” information out of conference presentations. For me, the most interesting part of security conferences is seeing the logic and creativity behind how the presenter got to their solution. That feels like the important takeaway, and I wonder if conference presenters ought to play up their thought processes as much as their solutions.
- Thinking about named vulnerabilities like KRACK, Shellshock, and Heartbleed, I’m reminded that we have a pretty immature threat prioritization problem, made worse in some instances by effective vulnerability marketing programs. With the recent spate of high-profile worms (if three can be considered a spate), it seems likely that we should inject a “wormability” factor into vulnerability assessment scores. I’m sure it’s already represented, at least in part, but it seems intuitive, at least to me, that not all CVSS 10.0 vulnerabilities are created equal – some are much more pressing than others. ETERNALBLUE/ETERNALROMANCE/MS17-010, which enabled the WannaCry outbreak, is a good example. That presumes we get enough information with the vulnerability disclosure to make such an assessment. It’s also clear to me that we have “self-constructed vulnerabilities” in our environments that are wormable, but for which there is no patch. NotPetya and Bad Rabbit seem like good examples; I could have powered a small city with the energy spent on hand wringing when I mentioned there is no “patch” for those two issues. As I’ve written on this site in the past, these techniques are commonly exploited by more focused attackers, but there has been some success at automating them, and I see no reason that trend won’t continue. We have the CWE concept, but I don’t think it hits the mark for “self-constructed vulnerabilities” – this is more like an “OWASP Top 10” for infrastructure. Anyhow, I’m not aware of anything that uniformly identifies, measures, or rates such “self-constructed vulnerabilities”.
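To make the “wormability” idea concrete, here is a minimal sketch of what weighting a CVSS base score by worm-friendly traits might look like. The trait names, weights, and function are entirely invented for illustration – CVSS does not define a wormability metric, and real prioritization would need far more care:

```python
# Hypothetical illustration: scale a CVSS base score (0-10) by traits
# that make a vulnerability "wormable". All weights are invented.

def prioritized_score(cvss_base: float, network_reachable: bool,
                      no_user_interaction: bool, self_propagating: bool) -> float:
    """Return the base score multiplied by an invented wormability factor."""
    wormability = 1.0
    if network_reachable:      # exploitable remotely, no local access needed
        wormability += 0.3
    if no_user_interaction:    # no click, open, or credential required
        wormability += 0.3
    if self_propagating:       # the exploit can deliver its own payload
        wormability += 0.4
    return round(cvss_base * wormability, 1)

# An MS17-010-style bug: remote, no interaction, self-propagating.
print(prioritized_score(8.1, True, True, True))     # 16.2
# A bug with the same base score that needs local access and a user action.
print(prioritized_score(8.1, False, False, False))  # 8.1
```

The point of the sketch is only that two vulnerabilities with identical base scores can deserve very different urgency once propagation potential is factored in.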
- I see a lot of focus on automation and orchestration creeping into infosec conferences, which I think is a good thing. There was a presentation at this conference on compliance as code with InSpec. I also recently read the book “Infrastructure as Code”, which is pretty enlightening and makes my mind spin with possibilities for “IT as Code” – an umbrella that would include “Infrastructure as Code”, “Security as Code”, “Compliance as Code”, “Resiliency/Redundancy/Recovery as Code”, and so on. I wonder if we will get to the point where our IT is defined in a configuration file of basic organizational parameters that is interpreted and orchestrated into a more or less fully automated, self-checking, self-monitoring, self-healing, and self-recovering infrastructure. This seems inevitable.
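The “compliance as code” idea above can be sketched in a few lines. This is plain Python, only loosely inspired by the InSpec style of declarative controls; the control names and the host description are invented for illustration:

```python
# Minimal "compliance as code" sketch: a policy is a set of named checks,
# and auditing a host means running every check against its description.
# Control names and host data are hypothetical.

CONTROLS = {
    "ssh_root_login_disabled": lambda host: host["sshd"]["PermitRootLogin"] == "no",
    "disk_encrypted":          lambda host: host["disk"]["encrypted"],
    "patched_within_30_days":  lambda host: host["patch_age_days"] <= 30,
}

def audit(host: dict) -> dict:
    """Run every control against a host description; return a pass/fail map."""
    return {name: bool(check(host)) for name, check in CONTROLS.items()}

example_host = {
    "sshd": {"PermitRootLogin": "no"},
    "disk": {"encrypted": True},
    "patch_age_days": 45,
}

results = audit(example_host)
failed = [name for name, ok in results.items() if not ok]
print("non-compliant:", failed)  # non-compliant: ['patched_within_30_days']
```

Because the policy is data, the same controls can be version-controlled, reviewed, and run continuously – which is the appeal of the whole “as code” family.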
That’s it. Any thoughts on these?