Infosec Engineering

A Modest Proposal To Reduce Password Reuse

Posted on May 4, 2017 by jerry

Many of us are well aware of the ongoing problem of password reuse between online services. Billions of account records, including clear text and poorly hashed passwords, are publicly accessible for use in attacks on other services. Verizon's 2017 DBIR noted that operators of websites using standard email address and password authentication need to be wary of the impact that breaches of other sites can have on their own site, due to the extensive problem of password reuse. The authors of the DBIR, and indeed many in the security industry including me, recommend combating the problem with two factor authentication. That is certainly good advice, but it's not practical for every site and every type of visitor.

As an alternative, I propose that websites begin offering randomized passwords to those creating accounts. The site can offer the visitor an opportunity to easily change that password to something of his or her choosing. Clearly this won't end password reuse outright, but it will likely make a substantial dent in it without much, if any, of the additional cost or complexity associated with two factor authentication. An advantage of this approach is that it allows "responsible" sites to minimize the likelihood of accounts on their own site being breached by attackers using credentials harvested from other sites.
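
To make the proposal concrete, here is a minimal sketch of what a signup flow could generate, assuming a Python backend; the alphabet, length and function name are my own illustrative choices, not a standard:

```python
# A minimal sketch of issuing a random default password at signup.
# The length and alphabet are illustrative assumptions.
import secrets
import string

# Drop visually ambiguous characters (0/O, 1/l/I) so the user can easily
# transcribe the password if they choose to keep it.
ALPHABET = "".join(c for c in string.ascii_letters + string.digits
                   if c not in "0O1lI")

def random_default_password(length: int = 16) -> str:
    """Return a cryptographically random default password."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# The site would store only a strong hash of this value, show it to the
# visitor once, and offer an easy way to change it.
print(random_default_password())
```

Because the password is random per account, a credential harvested from some other breached site would, by default, no longer open an account here.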

What are your thoughts?


Regulation, Open Source, Diversity and Immunity

Posted on February 5, 2017 by jerry

When the Federal Financial Institutions Examination Council released its Cybersecurity Assessment Tool in 2016, I couldn't quite understand the intent behind open source software being called out as one of the inherent risks.

Recently, I was thinking about factors that likely impact the macro landscape of cyber insurance risk. By that I mean how cyber insurers would go about measuring the likelihood of a catastrophic scenario that harmed most or all of their insured clients at the same time. Such a thing is not unreasonable to imagine, given the homogeneous nature of IT environments. The pervasive use of open source software, both as a component in commercial and other open source products and used directly by organizations, expands the potential impact of a vulnerability in an open source component, as we saw with Heartbleed, Shellshock and others. It's conceivable that all layers of protection in a "defense in depth" strategy contain the same critical vulnerability because they all contain the same vulnerable open source component.
In a purely proprietary software ecosystem, it's much less likely that software and products from different vendors will all contain the same components, as each vendor writes its own implementation. This creates more diversity in the ecosystem, making a single exploit that impacts many products at once far less likely. I don't mean to imply that proprietary is better, but it's hard to work around this particular aspect of risk given the state of the IT ecosystem.
I don't know if this is why the FFIEC called out open source as an inherent risk. I am hopeful their reasoning is similar to this, rather than some assumption that open source software has more vulnerabilities than proprietary software.

Asymptotic Vulnerability Remediation

Posted on January 23, 2017 by jerry

I was just reading this story indicating that there are still close to 200,000 web sites on the Internet that are vulnerable to Heartbleed, and recalled the persistent stories of decade-old malware still turning up in honeypot logs on the SANS Internet Storm Center podcast. It seems that vulnerability remediation must follow an asymptotic decay over time. This has interesting implications for things like vulnerable systems being used to build botnets: there is no real need to innovate if you can just be the Pied Piper to the many long tails of old vulnerabilities.
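
To put a toy model behind that intuition: suppose remediation removes a constant fraction of the remaining vulnerable population each year, so N(t) = N0·e^(−rt). The numbers below are assumptions picked only to show the shape of the tail, with year 3 landing near the ~200,000 figure in the story:

```python
# Toy exponential-decay model of remediation: N(t) = N0 * exp(-r * t).
# N0 and r are assumed values, chosen so year 3 roughly matches the
# ~200,000 still-vulnerable sites reported three years after Heartbleed.
import math

N0 = 500_000  # sites vulnerable at disclosure (assumed)
r = 0.3       # annual remediation rate (assumed)

for year in range(6):
    print(f"year {year}: ~{N0 * math.exp(-r * year):,.0f} still vulnerable")
```

Whatever the real constants are, the long tail never quite reaches zero, which is exactly what the honeypot logs keep showing.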

Also interesting to note is that 75,000 of the vulnerable devices are on AWS. I wonder whether providers will, at some point, begin taking action against wayward hosting customers who are potentially putting both the provider's platform and reputation at risk.

I’m also left wondering what the story is behind these 200,000 devices: did the startup go belly up? did the site owner die? is it some crappy web interface on an embedded device that will never get an update again?

#patchyourshit


What Does It Take To Secure PHI?

Posted on January 22, 2017 by jerry

I was reading an article earlier today called "Why Hackers Attack Healthcare Data, and How to Protect It" and I realized that this may well be the one-thousandth such story I've read on how to protect PHI. I also realized that I can't recall any of the posts I've read being particularly helpful: most contain a few basic security recommendations, usually aligned with the security offerings of the author's employer. It's not that the authors of these posts, such as the one I linked to above, are wrong, but if we think of defending PHI as a metaphorical house, each author is describing the view seen through one particular window of the house. I am sure this is driven by the need for security companies to publish think pieces to establish credibility with clients. I'm not sure how well that works in practice, but it leaves the rest of us swimming in a rising tide of fluffy advice posts proclaiming to have the simple answer to your PHI protection woes.

I’m guessing you have figured out by now that this is bunk.  Securing PHI is hard and there isn’t a short list of things to do to protect PHI.  First off, you have to follow the law, which prescribes a healthy number of mandatory, as well as some addressable, security controls.  But we all know that compliance isn’t security, right?  If following HIPAA were sufficient to prevent leaking PHI, then we probably wouldn’t need all those thought-leadership posts, would we?

One of the requirements in HIPAA is to perform risk assessments.  The Department of Health and Human Services has a page dedicated to HIPAA risk analysis.  I suspect this is where a lot of organizations go wrong, and probably the thing that all the aforementioned authors are trying to influence in some small way.

Most of the posts I read talk about the epidemic of PHI theft, and PHI being sold in the underground market, and then focus on some countermeasures to prevent PHI from being hacked.  But let’s take a step back for a minute and think about the situation here.

HIPAA is a somewhat special case in the world of security controls: its requirements are pretty prescriptive and apply uniformly. But we know that companies continue to leak PHI. We keep reading about these incidents in the news, and reading blog posts about how to ensure our firm's PHI doesn't leak. We should be thinking about why these incidents are happening, to help us figure out where we should be applying focus, particularly in the area of the required risk assessments.

HHS has a great tool to help us out with this, lovingly referred to as the "wall of shame". This site contains a downloadable database of all known PHI breaches affecting 500 or more records, and there is a legal requirement to report any such breach. So while there are undoubtedly yet-to-be-discovered breaches, the 1800+ entries give us a lot of data to work with.
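
As an illustration of working with that data, here is a short sketch assuming you've downloaded the portal's CSV export; the filename and column names are assumptions based on the export format and may need adjusting:

```python
# A minimal sketch of tallying the HHS "wall of shame" export with pandas.
# The filename and column names are assumptions and may differ.
import pandas as pd

df = pd.read_csv("breach_report.csv")

# Incidents by reported breach type, e.g. Theft, Loss,
# Unauthorized Access/Disclosure, Hacking/IT Incident.
print(df["Type of Breach"].value_counts())

# Incidents by where the data lived, e.g. Laptop, Paper/Films,
# Network Server, Email.
print(df["Location of Breached Information"].value_counts())
```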

Looking through the data, it quickly becomes apparent that hacking isn't the most significant avenue of loss. Over half of the incidents arise from lost or stolen devices, or paper/film documents. This should cause us to consider whether we encrypt all the devices that PHI can be copied onto: server drives, desktop drives, laptop drives, USB drives, backup drives, and so on. Encryption is an addressable control in the HIPAA regulations, and one that many firms seemingly decide to dance around. How do I know this? It's right there in the breach data. There are tools, though expensive and onerous, that can help ensure data is encrypted wherever it goes.

The next most common loss vector is unauthorized access, which includes misdirected email, physical mail, leaving computers logged in, granting excessive permissions, and so on. No hacking here*, just mistakes and some poor operational practices. Notably, at least 100 incidents involved email, presumably misdirected. There are many subtle and common failure modes that can lead to this, some as basic as email address auto-completion. There is likely no single best method to handle it – anything from an email DLP system that quarantines detected PHI transmissions for secondary review, to disabling email address auto-completion, may be appropriate depending on the operations of the organization. This is an incredibly easy way to make a big mistake, and it deserves some air time in your risk assessments.
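
As a crude sketch of the DLP idea (and only a sketch – real products are far more sophisticated), an outbound filter might hold any message matching naive identifier patterns for secondary review; the patterns below are illustrative assumptions:

```python
# A deliberately naive sketch of DLP-style outbound screening; the
# patterns are illustrative assumptions, not real detection logic.
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")           # e.g. 123-45-6789
MRN = re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.I)   # assumed MRN format

def needs_review(body: str) -> bool:
    """Flag a message body that looks like it may contain PHI."""
    return bool(SSN.search(body) or MRN.search(body))

if needs_review("Follow-up for MRN: 00123456, SSN 123-45-6789"):
    print("quarantine for secondary review before sending")
```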

The above loss types make up roughly 1500 of the 1800 reported breaches.

Now, we get into hacking. HHS' data doesn't have a great amount of detail, but "network server" accounts for 170 incidents, which likely include the majority of the situations we read about in the news. There are 42 incidents each involving email and PCs. Since there isn't a lot of detail, we don't really know what happened, but we can infer that most PC-related PHI leaks were from malware of some form, and most network server incidents were from some form of actual "hacking". The Anthem incident, for example, was categorized as hacking on a network server, though the CHS breach was categorized as "theft".

Dealing with the hacking category falls squarely into the “work is hard” bucket, but we don’t need new frameworks or new blog posts to help us figure out how to solve it.  There’s a great document that already does this, which I am sure you are already familiar with: the CIS Top 20 Critical Security Controls.

But which of the controls are really important? They all are. In order to defend our systems, we need to know what systems we have that contain PHI. We need to understand what applications are running, and prevent unauthorized code from running on devices storing or accessing PHI. We need to make sure people accessing systems and data are who they say they are. We need to make sure our applications are appropriately secured, our employees are trained, access is limited properly, and all of this is tested for weaknesses periodically. It's the cost of doing business and of keeping our name off the wall of shame.

* Well, there appears to be a small number of miscategorized records in the “theft” category, including CHS, and a few others involving malware on servers.


Get On It

Posted on September 14, 2015 by jerry

Days like today are a harsh reminder that we have limited time to accomplish what we intend to accomplish in our lives.  We have limited time with our friends and relatives.

Make that time count. It's easy to get into the mode of drifting through life, but before we know it, the kids are grown, or our parents are gone, or a friend has passed away, or we just don't have the energy to write that book.

Get on it.  Go make the world a better place.

Cyber Security As A Science

Posted on January 17, 2015 by jerry

Dan Geer wrote an essay for the National Science Foundation on whether cyber security can be considered a science. The short version: what constitutes a "science" is somewhat loose; however, based on some commonly held dimensions, cyber security is not yet a science and most likely could be considered a proto-science. Mr. Geer's essay is worth reading for yourself, since there is far more nuance than this post will cover.

Similarly, Alex Hutton has also stated in some previous talks that information security is something of a trade craft and not a science.  Information security, cyber security, or whatever moniker we want to assign it, does indeed seem to be more of a trade craft than a science or engineering discipline.

Mr. Geer's essay points out a few unique challenges in cyber space relative to other scientific disciplines: a major part of the "thing" being modeled is sentient adversaries who can adapt, learn and deceive, and the underlying technology evolves rapidly.

There seem to be other confounding factors as well: the "constituent components" of cyber security are arbitrary and implemented in wildly different fashions by different people and organizations, with different levels of skill and incentive, to different specifications, with non-obvious defects, and so on. Translating just a slice of these challenges into civil engineering terms: imagine that some timbers used in construction looked objectively similar to sound ones, yet carried hidden flaws that manifest only under certain circumstances, placing a structure's integrity at risk. The flaws are not apparent and not easily detectable without incurring extraordinary expense, and even then, not all of them are likely to be uncovered.

With respect to technology producers, the "building materials" we have to work with in information technology are flawed in many ways, most of which are unseen. With respect to the implementers of technology, the ways in which systems are architected and implemented are generally arbitrary and utilitarian, and do not, in any appreciable way, reflect the uncertainty inherent in the technology being used.

If timbers were so structurally flawed, building codes, architecture, civil engineering and so on would need to accommodate the uncertainty that comes with building a structure that relies on such timbers. Information technology deals with this uncertainty very inconsistently. The constant spate of breaches seems to indicate that the uncertainty is often not properly accounted for.

Information technology, and by extension information security, is currently a craft. Some are exceptionally good at their craft, and some are quite poor. The proliferation of information technology into daily lives has, in my view, created a somewhat low barrier to entry into this craft. As a result, we have extremely wide variation in the quality and care with which information technology is implemented. As with furniture or jewelry created by craftsmen, some of it is exceedingly well designed and built, and some of it is complete crap.

Evolving information security into a science has been a personal interest of mine for some time. I would propose that a key aspect, though far from the only one, of making information security a science is a more objective approach to designing and implementing "systems" that are inherently resilient to failure within certain parameters. A failure to engineer at the "system level" view of information technology is what I most often see leading to the most complex security issues. This will very likely mean that some current technical implementations don't economically fit into a more scientific future state, which will mean that technology producers will need to adapt accordingly to support the market.

A significant part of this will be clearly understanding the limitations of technology components and designing in a safety margin and detective capabilities that indicate failure.

This is a complicated topic.  I certainly do not think I have the answers, but I believe I can see the problem, or at least some manifestations of the problem.  As Mr. Geer points out in his essay, the way forward is through continued research, continued evolution of our understanding, better defining the “puzzles” that need to be solved and searching for a paradigm that addresses those puzzles, as well as ensuring that practitioners have a common level of competence.

The question is how to start taking those steps.

Thanks to my Twitter friend Rob Lewis (@infosec_tourist) for the link to Mr. Geer’s essay and his constant needling of me in this direction.


The Road To Breach Hell Is Paved With Accepted Risks

Posted on December 7, 2014 by jerry

As the story about Sony Picture Entertainment continues to unfold, and we learn disturbing details, like the now infamous “password” directory, I am reminded of a problem I commonly see: assessing and accepting risks in isolation and those accepted risks materially contributing to a breach.

Organizations accept risk every day. It’s a normal part of existing. However, a fundamental requirement of accepting risk is understanding the risk, at least to some level. In many other aspects of business operations, risks are relatively clear cut: we might lose our investment in a new product if it flops, or we may have to lay off newly hired employees if an expected contract falls through. IT risk is a bit more complex, because the thing at risk is not well defined. The apparent downside to a given IT tradeoff might appear low, however in the larger context of other risks and fundamental attributes of the organization’s IT environment, the risk could be much more significant.

Nearly all major man-made disasters are the result of a chain of problems that line up in such a way as to allow or enable the disaster, not the result of a single bad decision or bad stroke of luck. The most significant breaches I've witnessed had a similar set of weaknesses that lined up just so. Almost every time, at least some of the weaknesses were consciously accepted by management. However, managers would almost certainly not have made such tradeoff decisions if they had understood that their decision could have led to such a costly breach.

The problem is compounded when multiple tradeoffs are made that have no apparent relationship with each other, yet are related.

The message here is pretty simple: we need to do a better job of conveying the real risks of a given tradeoff, without overstating them, so that better risk decisions can be made. This is HARD. But it is necessary.

I’m not proposing that organizations stop accepting risk, but rather that they do a better job of understanding what risks they are actually accepting, so management is not left saying: “I would not have made that decision if I knew it would result in this significant of a breach.”


Honey Employees

Posted on October 16, 2014 by jerry

In between bouts of chasing a POODLE around the yard today, my mind wandered into the realm of honeypots, honey drives and honey records. I had an idea about creating a fake employee, complete with a workstation, company email account, Facebook page and so on.

The fake employee would exist for the purpose of detecting spear phishing attempts, lateral movement to the workstation, access to the employee's documents and email accounts, and so on. Hence the name "honey employee". This could serve as an early warning system, and as a way to keep an eye on the tactics used by miscreants trying to worm their way in through employees.
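
As a minimal sketch of the alerting side – assuming a Linux host, a Debian-style auth log, and a made-up account name – any authentication activity for the fake employee is suspicious by definition:

```python
# A minimal sketch of alerting on a "honey employee": any authentication
# event for the fake account is, by definition, suspicious.
# The account name and log path are illustrative assumptions.
import time

HONEY_USER = "jsmith"             # the fake employee's account (assumed)
AUTH_LOG = "/var/log/auth.log"    # Debian/Ubuntu-style auth log (assumed)

def follow(path):
    """Yield new lines appended to a file, like `tail -f`."""
    with open(path) as f:
        f.seek(0, 2)              # start at the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(1)
                continue
            yield line

for line in follow(AUTH_LOG):
    if HONEY_USER in line:
        # In practice, page the SOC or send email; printing stands in here.
        print("ALERT: honey employee activity:", line.strip())
```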

Is anyone doing this already?


Something is Phishy About The Russian CyberVor Password Discovery

Posted on August 7, 2014 by jerry

If you're reading this, you are certainly aware of the story of Hold Security's recent announcement of 1,200,000,000 unique user IDs and passwords being uncovered.

I’m not going to pile on to the stories that assert this is a PR stunt by Hold.  In fact, I think Hold has done some great things in the past, in conjunction with Brian Krebs in uncovering some significant breaches.

However, there are a few aspects of Hold’s announcement that just don’t make sense… At least to me:

The announcement is that 1.2B usernames and passwords were obtained through a combination of pilfering other data dumps – presumably from the myriad of breaches we know of, like eBay, Adobe, and so on – and a botnet that ran SQL injection attacks on web sites visited by the users of infected computers, which apparently resulted in database dumps from many of those web sites. 420,000 of them, in fact.

That seems like a plausible story. The SQL injection attacks most likely leveraged some very common vulnerabilities – probably in WordPress plugins, or in Joomla, or something similar. However, nearly all of the passwords obtained, certainly the ones from the SQL injection attacks, would be hashed in some manner. Even the Adobe and eBay password dumps were at least "encrypted" – whatever that means.

The assertion is that there were 4.5B “records” found, which netted out to 1.2B unique credentials, belonging to 500M unique email addresses.

I contend that it is quite unlikely this Russian gang brute forced 1.2B hashed and/or encrypted passwords. The much more likely case is that the dump contains 1.2B email addresses paired with hashed or encrypted passwords… Still not a great situation, but not as dire as portrayed, at least for the end users.
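
Some back-of-envelope arithmetic on why, with every figure below an assumption for illustration: if the stolen hashes are salted, each credential must be attacked individually, and the storage scheme dominates feasibility.

```python
# Rough arithmetic only; every figure here is an assumed, illustrative value.
hashes = 1_200_000_000        # unique credentials claimed
guesses = 1_000_000           # dictionary guesses per salted hash (assumed)

rates = {
    "fast hash (MD5-class, GPU rig)": 10_000_000_000,  # guesses/sec (rough)
    "slow hash (bcrypt-class)":       100_000,         # guesses/sec (rough)
}

for scheme, rate in rates.items():
    years = hashes * guesses / rate / (86_400 * 365)
    print(f"{scheme}: ~{years:,.1f} years")
```

Dumps pulled from 420,000 different sites would span many storage schemes, and the bcrypt-class portion alone makes a uniform haul of 1.2B plain text passwords implausible.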

If the dump does indeed have actual plain text passwords, which again is not clear from the announcement, I suspect the much more likely source would be phishing campaigns and/or keyloggers, potentially run by that botnet.  However, I believe that Hold would probably have seen evidence if that were the case and would most likely have said as much in the announcement, since it would be an even more interesting story.

Hold is clearly in communication with some of the organizations the records were stolen from, as indicated in the announcement. What isn't clear is whether Hold attempted to contact all of the recognizable organizations, or only the largest, or only those that had a previous agreement in place with Hold. Certainly Hold has found an interesting niche and is attempting to capitalize on it – and that makes sense to me. However, it's going to be a controversial business model if it requires organizations to pay Hold in order to be notified when Hold finds evidence that the organization's records have turned up. I'm not going to pass judgement yet.


I Think I Was Wrong About Security Awareness Training

Posted on July 8, 2014 by jerry

Andy and I had a bit of a debate on the usefulness of security awareness training in episode 75 of our podcast. The discussion came up while covering a story about ransom campaigns and how the author recommends amping up awareness training to avoid malware and spear phishing, the two main avenues of attack for these attackers.

I was on the side of there being some benefit and Andy on the side of it not being worthwhile.

The logic goes like this: attackers are becoming so sophisticated that it isn't practical to expect a lay person to identify these attacks – technical controls are really the only thing that is going to be effective.

My thinking, at the time, was that awareness training is like anti-virus: you should have it in place to defend against those things that it can, but we all know there are plenty of attacks it won’t stop. I think that is still a reasonable assumption.

However, I've since thought about it some, and I think Andy is probably right…

Awareness training is about trying to establish some firewall rules in the minds of the people in an organization. There's an implicit hope that the training will prevent *some* number of attacks, and an understanding that it won't catch all of them.

However, people aren't wired to be a control point. There is a lot of research demonstrating this, notably in Dan Ariely's "Predictably Irrational" books. Focus, attention, diligence and even ethics are influenced by many factors, and awareness training has to compete against the fundamental nature of people.

But it’s worse than just not effective, and that is why I think I’m wrong here. Awareness training *is* believed to be a security control by many. Awareness training is mandated by every security standard or framework I can think of, alongside antivirus, firewalls and the like. And because it is viewed as a control, we count on its effectiveness as part of our security program.

At least, that is my intuition. I don't have hard data to back it up, but that would be a pretty enlightening experiment – if it were done correctly, meaning not through an opinion survey.

Educating employees on company policies is clearly necessary. However, it seems that focusing on hard controls rather than awareness education would be a better investment. Those are things like:

  • Two factor authentication, or password managers paired with crazy password complexity requirements, instead of trying to teach what a strong password is (see the sketch below)
  • Controls to prevent the execution of malware delivered through email, instead of teaching how to recognize malicious files
  • Controls to prevent browsing to phishing sites or exploit kits, instead of teaching how to recognize suspicious links
  • And so on.
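
As a small sketch of the first item – a server-side check standing in for "crazy password complexity requirements"; the exact policy here is an assumption, not a recommendation:

```python
# A toy server-side password policy check; the specific rules are
# illustrative assumptions.
import re

def meets_policy(pw: str) -> bool:
    """Require 12+ characters with upper, lower, digit and symbol classes."""
    return bool(
        len(pw) >= 12
        and re.search(r"[A-Z]", pw)
        and re.search(r"[a-z]", pw)
        and re.search(r"\d", pw)
        and re.search(r"[^A-Za-z0-9]", pw)
    )

print(meets_policy("correct horse battery staple"))  # False: no upper case or digit
print(meets_policy("Tr0ub4dor&3x!"))                 # True
```

The point is that a hard control like this is enforced on every signup, rather than hoping a training slide changed anyone's behavior.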
