Category Archives: Uncertainty

Probability of Getting Pwnt

I recently finished listening to episode 398 of the Risky Business podcast, in which Patrick interviews Professor Lawrence Gordon. The discussion is great, as all of Patrick's shows are, but something caught my attention. Prof. Gordon describes a model he developed many years ago for determining the right level of IT security investment, something I am acutely interested in. He points out that a key input to the proper level of investment is the probability of an attack, and that this probability needs to be estimated by the people who know the company in question best: the company's leadership.
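The podcast doesn't walk through the model itself, and the sketch below is not it; it's just a toy illustration in Python, with every number invented, of why that probability estimate matters so much: it drives the expected-loss figure that any investment comparison rests on.

    # A toy sketch only -- not the model from the podcast, and every number
    # below is made up. The point is that the attack probability estimate
    # drives the whole calculation.

    p_attack = 0.15              # estimated probability of a damaging attack this year
    expected_impact = 4_000_000  # estimated loss if such an attack succeeds (USD)

    print(f"expected annual loss: ${p_attack * expected_impact:,.0f}")

    # Candidate security investments: (name, annual cost, assumed reduction in p_attack)
    candidates = [
        ("status quo",         0,       0.00),
        ("modest program",     250_000, 0.08),
        ("aggressive program", 900_000, 0.12),
    ]

    for name, cost, reduction in candidates:
        residual_loss = (p_attack - reduction) * expected_impact
        print(f"{name:18s} total expected cost: ${cost + residual_loss:,.0f}")

Everything in that sketch hinges on the value chosen for p_attack.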

That got me thinking: how do company leaders estimate that probability?  I am sure there are as many ways to do it as there are people doing it, but the discussion reminded me of a key topic in Daniel Kahneman's book "Thinking Fast and Slow": base rates. A base rate is, roughly, the average frequency of an event measured across a whole population. For example, the probability of dying in a car crash is about 1 in 470.  That's the base rate. If I wanted to estimate my own likelihood of dying in a car crash, I should start with the base rate and adjust it for factors unique to me, such as the fact that I don't drive to work every day and I don't drink and drive. So maybe I end up with an estimate of 1 in 600.

If I didn't use a base rate, how would I estimate my likelihood of dying in a car crash?  Maybe I would do something like this:

Probability of Jerry dying in a car crash < 1 / (28 years of driving × 365 days × 2 trips per day) ≈ 1 / 20,440

This tells me I have driven about 20,000 times without dying. So, I pin my likelihood of dying in a car crash at less than 1 in 20,000. 

But that's not how it works. The previous 20,000 times I drove don't have much to do with the likelihood of me dying in a car crash tomorrow, except that the experience makes it somewhat less likely I'll die.  This is why considering base rates is key. If something hasn't happened to me, or happens really rarely, I'll assign it too low a likelihood. But if you ask me how likely it is for my house to get robbed right after it has been robbed, I am going to overstate the likelihood.

This tells me that things like the Verizon DBIR or the VERIS database are very valuable in helping us define our IT security risk by providing a base rate we can tweak. 
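To make that concrete, here is a minimal sketch of starting from an industry base rate and adjusting it for factors specific to our organization, just as I adjusted the car-crash base rate above. The numbers are invented; they are not actual DBIR or VERIS figures.

    # Invented numbers throughout -- these are not actual DBIR or VERIS figures.

    base_rate = 0.20    # pretend industry base rate: 20% of comparable firms breached per year

    # Organization-specific adjustments, each a judgement call (multipliers on the base rate)
    adjustments = {
        "no internet-facing legacy apps": 0.8,   # believed to lower our odds
        "large remote workforce":         1.2,   # believed to raise them
        "mature patching program":        0.9,
    }

    estimate = base_rate
    for factor, multiplier in adjustments.items():
        estimate *= multiplier

    estimate = min(estimate, 1.0)   # keep it a valid probability
    print(f"adjusted annual breach likelihood: {estimate:.2f}")

Whether those multipliers are any good is a separate question, but at least the starting point is anchored to data rather than to my own uneventful history.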

I would love to know if anyone is doing this. I have to believe this is already a common practice. 

Risk Assessments and Availability Bias

This post is part of my continuing exploration into the linkages between behavioral economics and information security.   I am writing these to force some refinement in my thinking, and hopefully to solicit some ideas from those much smarter than I…

===

In widely repeated studies, subjects were asked to estimate the number of homicides in the state of Michigan during the previous year.  The answer each subject gave varied primarily based on whether the subject recalled that the city of Detroit is in Michigan.  If Detroit does not come to mind when answering the question, the reported homicide estimate is much lower than when Detroit, and its crime problem, does come to mind.  This is just one of the many important insights in Daniel Kahneman's book "Thinking Fast and Slow".  Kahneman refers to this as the "availability bias".  I've previously written about the availability bias as it relates to information security, but this post takes a slightly different perspective.

The implication of the Michigan homicide question for information security should be clear: our assessments of quantities, such as risks, are strongly influenced by our ability to recall the important details that would significantly alter our judgement.  This feels obvious and intuitive when discussed in the abstract; however, we often do not consider that which we are unaware of.  These are Donald Rumsfeld's "unknown unknowns".

In the context of information security, we may be deciding whether to accept a risk based on the potential impacts of an issue.  Or, we may be deciding how to allocate our security budget next year.  If we are unaware that we have the digital equivalent of Detroit residing on our network, we will very likely make a sub-optimal decision.

In practice, I see this most commonly cause problems in the context of accepting risks.  Generally, the upside of accepting a risk is prominent in our minds, while the downsides are abstract and obscure, and we often simply don't have the details needed to fully understand either the likelihood or the impact of the risk we are accepting.  We don't know to think about Detroit.

How Do We Know We’re Doing a Good Job in Information Security?

Nearly every other business process in an organization has to demonstrably contribute to the top or bottom lines.

  • What return did our advertising campaign bring in the form of new sales?
  • How much profit did our new product generate?
  • How much have we saved by moving our environment “to the cloud”?

Information security is getting a lot of mind share lately among executives and boards for good and obvious reasons.  However, how are those boards and executives determining if they have the “right” programs in place?

This reminds me of the TSA paradox…  Have freedom gropes, nudie scanners and keeping our liquids in a clear ziplock bag actually kept planes from falling out of the sky?  Or is this just random luck that no determined person or organization has really tried in recent years?

If our organization is breached, or has a less significant security "incident", it's clear that there is some room for improvement.  But does the absence of a breach mean that the organization has the right level of investment, the right technologies properly deployed, the right amount of staff with appropriate skills, and the proper processes in place?  Or is it just dumb luck?

Information security is in an even tougher spot than our friends at the TSA here.  A plane being hijacked or not is a clear, observable outcome: if it happened, we know about it, or very soon will.  That's not necessarily the case with information security.  If a board asks "are we secure?", I might be able to answer "We are managing our risks well, we have our controls aligned with an industry standard, and the blinky boxes are blinking good blinks."  However, I am blind to the unknown unknowns.  I don't know that my network has 13 different hacking teams actively siphoning data out of it, some of them for years.

Back to my question: how do we demonstrate that we are properly managing information security?  This is a question that has weighed on me for some time now.  I expect it will only grow in importance as IT continues to commoditize, security threats continue to evolve, and laws, regulations and fines increase, even if public outrage subsides.  Organizations only have so much money to invest in protection, and those that can allocate resources most effectively should be able to minimize the costs of both security operations and business impacts due to breaches.

I recently finished reading "Measuring and Managing Information Risk: A FAIR Approach", and am currently reading "IT Security Metrics".  Both are very useful books, and I highly recommend that anyone in IT security management read them.  Both describe frameworks that help define how, and how not, to assess risks, compare them, and so on.  In the context of a medium or large organization, using these tools to answer the question "are we doing the right things?" seems intuitive, yet at the same time so mind-bogglingly complex as to be out of reach.  I can use them to objectively determine whether I am better off investing in more security awareness training or in a two-factor authentication system, but they won't tell me that I should actually have spent that extra investment on better network segmentation, because that risk wasn't on the radar until the lack of it contributed to a significant breach.
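To illustrate the kind of comparison these frameworks do support, here is a minimal, FAIR-flavored sketch with invented numbers; a real FAIR analysis uses calibrated ranges and Monte Carlo simulation rather than single point estimates, and the scenario names, frequencies, and control effects below are all hypothetical.

    # Hypothetical point estimates; a real FAIR analysis would use ranges
    # and simulation, and only covers the scenarios we know to model.

    scenarios = {
        # name: (loss events per year, loss magnitude per event in USD)
        "phishing-led compromise": (0.6, 500_000),
        "credential stuffing":     (0.4, 300_000),
    }

    def annualized_loss(freq, magnitude):
        return freq * magnitude

    baseline = sum(annualized_loss(f, m) for f, m in scenarios.values())

    # Candidate controls: annual cost plus assumed reduction in event frequency per scenario
    controls = {
        "security awareness training": {"cost": 50_000,
                                        "phishing-led compromise": 0.25},
        "two-factor authentication":   {"cost": 120_000,
                                        "phishing-led compromise": 0.60,
                                        "credential stuffing": 0.80},
    }

    for name, spec in controls.items():
        reduced = 0.0
        for scenario, (freq, mag) in scenarios.items():
            reduction = spec.get(scenario, 0.0)
            reduced += annualized_loss(freq * (1 - reduction), mag)
        benefit = baseline - reduced
        print(f"{name}: risk reduced ${benefit:,.0f} for ${spec['cost']:,} spent")

Useful for ranking the options I thought to model; silent about the one I didn't.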

Also, there really is no “perfect” security, so we are always living with some amount of risk associated with the investment we make.  Since our organization is only willing or able to invest so much, it explicitly or implicitly accepts some risk.  That risk being realized in the form of a breach does not necessarily mean that our management of information security was improper given the organizational constraints, just as not having a breach doesn’t mean that we ARE properly managing information security.

Without objective metrics that count the number of times we weren’t breached, how does the board know that I am wisely investing money to protect the organization’s data?

Is this a common question?  Are good leaders effectively (and responsibly) able to answer the question now?  If so, how?


Certainty, Cybersecurity and an Attribution Bonus

In “Thinking Fast And Slow”, Daniel Kahneman describes a spectrum of human irrationalities, most of which appear to have significant implications for the world of information security.  Of particular note, and the focus of this post, is the discussion on uncertainty.

Kahneman describes how people will generally seek out others who claim certainty, even when there is no real basis for expecting anyone to be certain.  Take the example of a person who is chronically ill.  A doctor who says she does not know the cause of the ailment will generally be replaced by a doctor who exhibits certainty about the cause.  Studies have shown that the uncertain doctor is often correct and the certain doctor often incorrect, leading to unnecessary treatments, worry, and so on.  Another example Kahneman cites is corporate CFOs.  CFOs are revered for their financial insight, yet they are, on average, far too certain about things like the near-term performance of the stock market.  Kahneman also points out that, just as with doctors, CFOs are expected to be certain and decisive, and that a lack of certainty will likely get either one replaced.  All the while, the thing each is certain about is really a random process, or a process with so many unknown and unseen influencing variables as to be indistinguishable from randomness.

Most of us would rightly be skeptical of someone who claims to have insight into the winning numbers of an upcoming lottery drawing, and we would have little sympathy when that person turns out to be wrong.  However, doctors and CFOs have myriad opportunities to highlight important influencing variables that weren't known when their prediction was made.  These variables are what make the outcome of the process random in the first place.

The same irrational demand for certainty about random processes appears to be at work in information security as well.  Two examples: the CIO who claims that an organization is secure, or at least would be secure if she had an additional $2M to spend, and the forensic company that attributes an attack to a particular actor – often a country.

The CIO, or CISO, is in a particularly tough spot.  Organizations spend a lot of money on security and want to know whether or not the company remains at risk.  A prudent CIO/CISO will, of course, claim that such assurances are hard to give, and yet that is the mission assigned to them by most boards or management teams.  They will eventually be expected to provide that assurance, or else a new CIO/CISO will do it instead.

The topic of attribution, though, seems particularly interesting.  Game theory has a strong influence here.  The management of the breached entity wants to know who is responsible, and indeed the more sophisticated the adversary appears to be, the better the story.  No hacked company would prefer to report that its systems were compromised by a bored 17-year-old teaching himself to use Metasploit rather than by a sophisticated, state-sponsored hacking team, the likes of which are hard, nigh impossible, for an ordinary company to defend against.

The actors themselves are intelligent adversaries who generally want to shroud their activities in some level of uncertainty.  We should expect an adversary to mimic other adversaries: to reuse their code, fake time zones, change character sets, incorporate their cultural references, and so on, in an attempt to deceive.  These kinds of things add only marginal additional time investment for a competent adversary.  As well, other attributes of an attack, like common IP address ranges or common domain registrars, may be shared between adversaries for reasons other than the same actor being responsible, such as convenience or, again, an intentional attempt to deceive.  Game theory is at play here too.

But, we are certain that the attack was perpetrated by China.  Or Russia. Or Iran. Or North Korea. Or Israel.  We discount the possibility that the adversary intended for the attack to appear as it did.  And we will seek out organizations that can give us that certainty.  A forensic company that claims the indicators found in an attack are untrustworthy and can’t be relied upon for attribution will most likely not have many return customers or referrals.

Many of us in the security industry mock the attribution issue with dice, a magic 8-ball and so on, but the reality is that it's pervasive for a reason: it's expected, even if it's wrong.