Category Archives: Advice

An Inconvenient Problem With Phishing Awareness Training

Snapchat recently disclosed that it was the victim of an increasingly common attack in which someone in the HR department is tricked into providing employees’ personal details to someone purporting to be the company’s CEO.

In response, the normal calls for “security awareness training!” and “phishing simulations!” are making the rounds.  As I have said before, I am in favor of security awareness training and phishing simulation exercises, but I am wary of people or organizations that believe this is a security “control”.

When organizations, information security people and management view awareness training and phishing simulations as a control, incidents like the one at Snapchat are viewed as a control failure.  Management may ask “did this employee not take the training, or was he just incompetent?”  I understand that your gut reaction may be to think such a reaction would not happen, but let me assure you that it does.  And people get fired for falling for a good phish.  Maybe not everywhere.  Investment in training is often viewed the same as investment in other controls: when the controls fail, management wants to know who is responsible.

If you ask any phishing education company or read any of their reports, you will notice that there are times of day and days of the week where phishing simulations get more clicks than others, with everything else held constant.  The reason is that people are human.  Despite the best training in the world, factors like stress, impending deadlines, lack of sleep, time awake, hunger, impending vacations and many other factors will increase or decrease the likelihood of someone falling for a phishing email.  Phishing awareness training needs to be considered for what it is: a method to reduce the frequency, in aggregate, of employees falling for phishing attacks.
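If you run your own simulations, this aggregate effect is straightforward to measure.  Here is a minimal sketch; the log format and numbers are made up for illustration, and real simulation platforms export far richer data:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical phishing-simulation results: (send timestamp, was it clicked?)
events = [
    ("2016-03-01 09:14", False),
    ("2016-03-01 16:45", True),
    ("2016-03-02 10:30", False),
    ("2016-03-02 16:02", True),
    ("2016-03-03 16:55", True),
]

def click_rate_by_hour(events):
    """Return {hour of day: fraction of simulated phish clicked in that hour}."""
    sent = defaultdict(int)
    clicked = defaultdict(int)
    for ts, was_clicked in events:
        hour = datetime.strptime(ts, "%Y-%m-%d %H:%M").hour
        sent[hour] += 1
        if was_clicked:
            clicked[hour] += 1
    return {h: clicked[h] / sent[h] for h in sent}

rates = click_rate_by_hour(events)
# Flag hours where a majority of simulated phish were clicked
risky_hours = sorted(h for h, r in rates.items() if r > 0.5)
```

Binning by day of week, or joining against on-call schedules and deadline calendars, works the same way, and tells you when your organization is most vulnerable rather than whether a particular employee “failed”.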

So, I do think that heads of HR departments everywhere should be having a discussion with their employees on this particular attack.  But, when a story like Snapchat makes news, we should be thinking about prevention strategies beyond just awareness training.  And that is hard because it involves some difficult trade-offs that many organizations don’t want to think about.  Not thinking about them, however, is keeping our heads in the sand.

What This CISO Did To Protect His Company’s Data Will SHOCK You!

Good, my clickbait title worked and you’re here.   I have my cranky pants on, so let’s go.

On last week’s podcast episode, Andy and I talked about Rob Graham’s recent blog post “Dumb, Dumber and cybersecurity” where Rob railed on a post titled “10 Steps to Protect Your Business From Cybersecurity Threats”.

Rob rightly points out that none of the 10 recommended steps really address the top issues that companies are getting breached by:

  • SQLi
  • Phishing
  • Password reuse

Perhaps I have some Baader-Meinhof going on, but I am seeing these damn “Top X lists to thwart the evil advanced cyber APT nation-state hacker armies of 15 year olds” EVERYWHERE.


These stupid lists are nothing more than infosec marketing platitudes…

“Keep your AV up to date!”.  Yeah, that’s going to save you.

“Keep your systems patched!”.  Yep.  Show me an organization that is able to do this, and I’ll send you a link to click on.

“Know where your data is!”.  Sure.  It’s every-fucking-where.  OKAY?  Everywhere.

“Abandon the castle wall philosophy and build protection around the data!”.  What?  I guess Google did this, right?

“Restrict employee access to only that which they need!”.   Least privilege and all that, right?

“Restrict network access to only that which is needed!”

and on and on.

These are all, of course, good ideas.  However, they’re not actionable ideas.  And, as Rob pointed out, most aren’t even the way in which businesses are getting compromised.

Let’s pick on one, just as an example of not being actionable: “Restrict employee access to only that which they need!”

Who could argue with that sage advice?  Well, I will.  The issue is that it doesn’t actually solve much in the real world.  Here’s what I mean: if I’m an accountant and need access to the financial database to run queries, restricting access might mean I get a read-only account to run my queries with.  That rarely translates into a consideration of the risks that remain after the access is granted.  Is there a better way?  Probably, but here’s the reality: the table I am querying has credit card numbers in it, and our database doesn’t let us restrict my access down to a field level, so in order to do my job, I am given the least access possible, which is still way too much.

And so I click on funnycats.exe, because damn, who doesn’t like funny cats?  And the following Sunday, Brian Krebs is on the phone with my company’s PR person asking for an interview about our data that is for sale on a forum somewhere.  BUT BUT BUT… least privilege was followed.
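As an aside, when the database can’t enforce field-level restrictions, as in the accountant example above, a compensating control can sometimes live in the application layer instead.  A hedged sketch, with hypothetical field names and made-up data, of masking the card number before the read-only user ever sees it:

```python
import re

# Hypothetical rows returned by the accountant's read-only query.
rows = [
    {"invoice": 1001, "amount": 250.00, "card_number": "4111111111111111"},
    {"invoice": 1002, "amount": 99.95,  "card_number": "5500005555555559"},
]

def mask_card(number):
    """Keep only the last four digits, which is all the job requires."""
    return re.sub(r"\d(?=\d{4})", "*", number)

def minimized_view(rows):
    """Return rows with the sensitive field masked before they reach the user."""
    return [dict(r, card_number=mask_card(r["card_number"])) for r in rows]

for r in minimized_view(rows):
    print(r["invoice"], r["card_number"])
```

This doesn’t fix the over-privileged database account itself; malware running as me can still query the raw table.  It only illustrates the kind of follow-on thinking that “least privilege” platitudes skip over.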

And so it goes.  Cybersecurity is hard.  It takes thought, analysis and consideration of risks; not a bunch of dumb platitudes.





On The Sexiness of Defense

For years now, defenders have been trying to improve the perception of defense relative to offense in the IT security arena.  One only has to look at the schedule of talks at the average security conference to see that offense still draws the crowds.  I’ve discussed the situation with colleagues, who also point out that much of the entrepreneurial development in information security is on the offense/red team side of the fence.

That made me reflect on the many emails I receive from listeners of the security podcast I co-host.  Nearly everyone who has written in for career advice, except for a few, was looking for help getting into the offensive side of security.

I’ve been pondering why that is, and I have a few thoughts:

Offense captures the imagination

Let’s face it, hacking things is pretty cool.  Many people have pointed out that hackers are like modern-day witches, at least as viewed by some of the political establishment.

Offense is about technology.  We LOVE technology.  And we love to hate some of the technology.

Also, offense activities make for great stories and conferences, and can often be pretty easily demonstrated in front of an audience.

Offense has a short cycle time

From the perspective of starting a security business, the cycle time for developing an “offering” is far shorter than for a more traditional security product or service.  The service simply relies on the abilities and reputation of the people performing the service.  I, of course, do not mean to downplay the significant talent and countless hours of experience such people have; I am pointing out that by the time such a venture is started, these individuals already possess much of the talent, as opposed to needing to go off and develop a new product.

Offense is deterministic (and rewarding)

Penetrating a system is deterministic; we can prove that it happened.  We get a sense of satisfaction.  Getting a shell probably gives us a bit of a dopamine rush (this would be an interesting experiment to perform in an MRI, in case anyone is looking for a research project).

We can talk about our offensive conquests

Offensive practitioners are often able to discuss the details of their successes publicly, as long as certain information is obscured, such as the name of a customer.

If you know how to break it…

You must know how to defend it.  My observation is that many organizations seek out offense to help improve their defense.

…And then there is defense

Defense is more or less the opposite of the above statements.  If we are successful, there’s often nothing to say, at least nothing that would captivate an audience.  If we aren’t successful, we probably don’t want to talk about it publicly.  Unlike many people on the offense side, defenders are generally employees of the organization they defend, and so if I get up and talk about my defensive antics, everyone will implicitly know which company the activity happened at, and my employer would not approve of such disclosure.  Defense is complicated and often relies on the consistent functioning of a mountain of boring operational processes, like patch management, password management, change management and so on.

Here’s what I think it would take to make defense sex[y|ier]

What we need, in my view, is to apply the hacker mindset to defensive technology.  For example, a script that monitors suspicious DNS queries and automatically initiates some activities such as capturing the memory of the offending device, moving the device to a separate VLAN, or something similar.  Or a script that detects outbound network traffic from servers and performs some automated triage and/or remedial activity.  And so on.
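As a rough illustration of what I mean, here is a minimal sketch of the DNS-monitoring idea.  The watchlist, entropy threshold and containment commands are placeholders to be replaced with your own tooling and baselines:

```python
import math
from collections import Counter

def entropy(s):
    """Shannon entropy of a string; algorithmically generated domains score high."""
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

SUSPICIOUS_TLDS = {"top", "xyz"}   # example watchlist, not authoritative
ENTROPY_THRESHOLD = 3.5            # tune against your own DNS baseline

def is_suspicious(domain):
    name, _, tld = domain.rpartition(".")
    return tld in SUSPICIOUS_TLDS or entropy(name.replace(".", "")) > ENTROPY_THRESHOLD

def triage(client_ip, domain):
    """On a hit, alert and kick off containment.  The commented-out commands
    stand in for whatever memory-capture / VLAN-move tooling your shop has."""
    if is_suspicious(domain):
        print(f"ALERT {client_ip} -> {domain}")
        # subprocess.run(["capture-memory", client_ip])   # hypothetical tool
        # subprocess.run(["quarantine-vlan", client_ip])  # hypothetical tool
        return True
    return False
```

The interesting, hacker-ish part is the automated response: wiring an alert like this into memory capture or VLAN quarantine is exactly the kind of defensive tinkering I’d like to see more of.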

Certainly there are pockets of this happening, but not enough.  It is a bit surprising too, since I would think that such “defensive hackers” would be well sought after by organizations looking to make significant enhancements to their security posture.

Having said all of that, I continue to believe that defenders benefit from having some level of understanding of offensive tactics – it is difficult to construct a robust defense if we are ignorant of the TTPs that attackers use.

Dealing With The Experience Required Paradox For Those Entering Information Security

I’ve been co-hosting the Defensive Security Podcast for a few years now and receive many emails and tweets asking for advice on getting into the information security field.  I created a dedicated page on the defensive security site with some resources for newcomers to the cybersecurity/information security field.  I asked for advice and received a lot of great feedback, which I incorporated on that page.

I’ve since received feedback that the page is very helpful; however, I’m now being asked for advice on addressing a new challenge: how do you get a job in information security when all the information security jobs require previous information security experience?

Once again, I turned to my excellent network on Twitter to ask for help in answering that question.  This post is intended to summarize the comments I’ve received.


Networking

Network with people in the community by attending local events, such as BSides conferences, ISSA meetings, OWASP meetings, CitySec meetings and so on.

People who attend such meetings are generally aware of openings in their respective organizations, and having an advocate “on the inside” to get through the hiring process is often very helpful.

I will add that researching a topic and giving a presentation at one of these meetings will help to establish yourself as an authority on the topic.  These organizations are often looking for someone to give a presentation.  The process will force you to thoroughly learn your chosen presentation topic and refine your presentation skills, both of which make you a more valuable employee.


Volunteering

Non-profit and not-for-profit organizations, including churches, often can’t afford to pay for information security staff.  Volunteering at such an organization is a good way to obtain practical experience.  These kinds of volunteer opportunities can also lead to paid positions and strong professional references.

Contributing to open source projects is another way not only to gain practical experience for your resume, but also to build important skills and expand your network of contacts.  There are thousands of information security open source projects around.  Getting to a place where you can contribute to an open source project can be daunting, but the benefits will be worth it.  My best advice is to look for a project that interests you, find the list of open issues and/or pending features, contact the existing developers and ask if they would be willing to entertain your contribution to fix bug X or add feature Y.  Also, don’t be too offended if the developers criticize your contribution the first few times around.

Ground Floor

A common question goes something like this: “I want to get into information security, but this position requires years of experience…  How do I get into the field if I have to have experience in the field in order to get into the field?”

Getting into more senior level positions in any field will generally require previous experience in the field.  Generally, these more senior level positions are filled through career progression, not by someone coming in from a different field.  Said another way, you may need to look for a more entry level position that requires less experience, and then build toward your target position.

This can be disheartening for someone who has obtained a more senior level role in another field and is looking to move into information security.  Taking a lower position to get into the security field may require a pay cut.

My recommendation with this strategy is to find an organization that has both entry level positions and the more senior level positions you are interested in, or at least something close to the senior level position.  It’s often faster and easier to get into an organization in a lower level position and take on additional responsibilities, and ultimately progress up to the target position, though this strategy often means that your compensation will be less than market rate.  My advice is to get in on the ground floor, work your way up and gather some experience, and then seek the opportunity you are after.  Clearly, this is not a 6 month plan to get to a senior architect, however combining it with some other advice in this post may make it happen relatively fast.

Leverage Existing Experience

You may not have experience in information security, but if you work in IT you likely have had some exposure to security processes.  Maybe it was related to following secure coding practices, or securing servers according to some documentation, or applying patches, or any number of other things.  Spend some time thinking about how these past experiences relate to information security and develop your elevator pitch tying them to the job you want.


Certifications

Certifications are a good way to establish some credibility, particularly with managers and HR departments.  Many security professionals are skeptical about the utility of certifications like CISSP and CEH, however both carry some weight when seeking a security role.  As well, they will help you to learn some of the language and expose you to different aspects of security, which may, in turn, highlight some particular area of interest for you.  Those two, in particular, are within the grasp of most people willing to spend time studying and are not incredibly expensive.

Home Lab

One of the most commonly suggested recommendations is a home lab.  Of course, that can cost some money to set up, but it doesn’t have to cost a lot.  AWS offers free-tier virtual servers.  Running VMs in VirtualBox on your existing PC also works.

My recommendation going into the home lab arena is to have an idea of where you want to go.  Malware analysis? Incident response? Security architecture? Penetration testing?

Depending on your area of focus, you will have different needs for a home lab.  A detailed discussion on possible configuration options for home labs for each of those focus areas would fill pages.  If there is interest, I’ll work on that as well.


Blogging

Blogging serves four purposes:

  1. it forces you to research a topic and understand it well enough to write something informative
  2. it helps to improve your writing abilities, which is very important
  3. it (hopefully) helps other people
  4. it helps to establish your name in the industry

Branding Yourself

There are a lot of good resources on personal branding, and I am not qualified to really do the topic justice, but I will point out a few aspects I think are key:

  1. Consider how your social media presence would be viewed by prospective employers.  Almost all employers will do at least some minimal amount of research on you.  What will they find?  Will they see rants, complaints about current positions, or socially and politically divisive comments?
  2. Build a social network of people in the industry, particularly those in the specific area you are interested in.  Ask questions and contribute to the discussions.
  3. Make contributions to the industry.  Blog.  Podcast.  Offer to help people.
  4. Clearly identify the position you want, and develop your story on how your experience in work, volunteering, home labs, blogging and so on, relate to that position.

Employers don’t want to hire a problem child.   They want to hire a productive person who is well respected.  I would recommend seeking out other resources on personal branding to learn more.

Speaking, Writing and Presenting

This didn’t come up as a recommendation, however I will tell you that finding information security professionals who are able to write and speak clearly can sometimes be a challenge.  Remember: your writing and your speaking are often the only things that people, including prospective employers, know about you, and they will form initial opinions of you very quickly.  Make them count.  Take pride in your writing style.

Freakonomics for Information Security

There are many big questions in IT security, questions with significant implications.  Outside of security conferences and academic papers, there isn’t a venue for such questions to be asked and answered.  Security vendors often step in and provide answers, restating the questions in ways that suit their product portfolios.

I’m a fan of Freakonomics.  Some of their work is controversial to be sure, however they attempt to answer questions few people even think to ask, but which often have significant implications for society.

I’ve been thinking: IT security could really benefit from a Freakonomics-like ‘think tank’ and not only try to answer some of the hard questions, but indeed think of the hard questions to ask.  Questions that may be unpopular, particularly with vendors.  Questions like:

  • What is the limit of the effectiveness of security awareness training?
    • What factors influence this limit?
  • Is there a relationship between the level of a person targeted in an organization and the size or cost of a resulting breach?
  • What is the optimal strategy for picking an anti-virus vendor?
    • What would happen if we didn’t use anti-virus?
  • Is there a relationship between the ratio of IT budget to IT security budget and the likelihood of being breached?
  • Are mega-breaches actually rare, despite the headlines?
  • Is there a way to estimate the frequency that organizations are breached, but don’t know it?
  • How often are risk assessments wrong?
  • What is the optimal strategy to prioritize patches?
  • How informative and useful are security vendor research reports, like the DBIR and M-Trends?
  • How quickly do I need to detect an attack happening in order to prevent data loss?
    • What does this say about the level of investment we should give to detection versus protection?
  • What alternatives exist to the current IT security arms race?
  • How much of responsibility should the designers of IT systems carry in a breach vs. the end user(s) who were involved?
  • How does the life cycle of IT systems impact security/security breaches?
    • For instance, the old, unsecurable OPM application, Windows XP/2003, and the move to the cloud
  • Are some IT development processes more “risky” than others?
  • Is it reasonable for a company that is trying to maximize profit to invest what is actually needed to properly secure its systems?
  • Is there a relationship between the background and experience of IT and/or infosec staff and the likelihood of being breached?
  • Are targeted attacks actually targeted?  Or do they just seem that way after the fact?
  • How quickly is the sophistication of attackers advancing?
  • …and many, many more.

Are these questions already being asked and answered?  How much interest is there in such a thing?

Ideas For Defending Against Watering Hole Attacks For Home Users

In episode 106, we discussed a report detailing an attack that leveraged the Forbes.com website to direct visitors to an exploit kit and subsequently infect certain designated targets in the defense and financial industries using two zero-day vulnerabilities.  A number of people have asked me for ideas on how to defend against this threat from the perspective of a home user, so I thought it best to write a blog post about it.   Just a heads up: this is aimed at Windows users.

One of the go-to mitigations for defending against drive-by browser attacks is an ad blocker, like AdBlock Plus.  In the Forbes instance, it isn’t clear whether an ad blocker would have helped, since the malicious content may not have originated from an ad network, and instead was added through a manipulation of the Forbes site itself to include content from the exploit-hosting site.  Targeted watering hole attacks commonly alter the web site itself.  Regardless, recent reports indicate AdBlock Plus accepts payment from ad networks in return for allowing their ads through.  I would not consider ad blocking a reasonable protection in any instance.

A much more effective, though more painful, option is NoScript.  NoScript is a Firefox plugin, however, and I’ve not found plugins that work as well for Chrome, IE or Opera.  With some fiddling, NoScript can provide a reasonable level of protection from general web content threats while mostly keeping your sanity intact.  Mostly.  You will probably not want to install NoScript on your grandparents’ computer.  NoScript can be a blunt instrument, and a user who is not diligent will likely opt to simply turn it off, at which point we are back where we started.

Running Flash and Java is like playing with matches in a bed of dry hay.  NoScript certainly helps, but it’s not a panacea.  For most people, the Java browser plugin should be disabled.  Don’t worry, you can still play Minecraft without the plugin.  By the way, every time you update Java, the plugin is re-installed and re-enabled, so you will need to disable it again.  Flash… Well, use NoScript to limit Flash content to the sites where you really need it.

Browsing with a Windows account that does not have administrator rights also mitigates a lot of known browser exploits.  To do this, create a wholly separate user account which does not have administrator rights and use that unprivileged account for general use, logging out or using UAC (entering the username and password of the ID that has administrator rights) to perform tasks that require them.  It’s important that you use a separate account, even though UAC gives the illusion that administrative operations will always prompt for permission to elevate authority when you are using an account with administrator rights; UAC was not designed to be a security control point.  This is a hassle that some home users may not find palatable or be disciplined enough to stick with, however it is effective at blocking many common attacks.

Next, Microsoft’s Enhanced Mitigation Experience Toolkit (EMET) will block many exploit attempts, and is definitely worth installing.  The default policy is pretty effective in the latest versions of EMET.  The configuration can be tweaked to protect other applications not in the default policy, but doing so will require some testing, since some of these protections can cause applications to crash if they were not built with those settings in mind.

Finally, a web filter such as Blue Coat K9 can help prevent surreptitious connections to malicious web servers hosting exploit kits, so long as the site is known malicious.

Remarkably, anti-virus didn’t make the list.  Yes, it needs to be installed and kept up to date, but don’t count on it to save you.

One additional thought for those who are really adventurous: install VirtualBox or use Hyper-V to install Windows or Linux in a virtual machine and use the browser in the virtual machine.  I’ll write a post on the advantages of doing this sometime in the future.

Do you have other recommendations?  Leave a comment!

Some Infosec Wins

This is the time of year when bloggers and media publish lists of the biggest breaches of year, biggest infosec fails of the year, and so on.  2014 certainly saw a distinguished list of failures.  But I’m feeling optimistic, so I want to write something about infosec wins.  Most of the time we don’t hear about infosec wins, for obvious reasons.  Occasionally we do, though.

Two that come to mind are the recent ICANN breach and the UPS Store breach from earlier in the year.  Both were indeed breached, but both also apparently discovered the breach in a timely manner and reacted to minimize the damage.  These two wins highlight an important capability organizations need to continue to refine: detecting breaches early, rather than relying on a phone call from Brian Krebs.

As my friend and co-host Andy Kalat says, we have to free some of our security staff from the daily grind of “addressing tickets” in order to focus on building these detection capabilities.  Hopefully 2015 will see more infosec wins.


The Legacy Of The Sony Pictures Entertainment Breach

The disclosures by Edward Snowden in 2013 drove a flurry of activity in many companies, much of which centered on keeping confidential information out of the hands of dirty contractors.

Much of enterprise risk management seems, to me at least, to follow the TSA playbook: consider the threat after it manifests itself somewhere, then become fixated on it.

Which leads me to wonder how the Sony Pictures Entertainment (SPE) attack will be ingested by ERM processes at large. Certainly, the threat of losing intellectual property has been a central fixture for many years, but I suspect this will add a new dimension. Information security threats, I suspect, are about to go from being a bothersome potential for lost IP to an existential threat.

The concept of a focused and competent attacker bent on dismantling and destroying the company likely hasn’t been considered very often, but that may now change, which will yield some interesting implications for IT generally. We certainly don’t have all the details about what happened to SPE yet, but it seems highly likely that common tactics were used, which we know from many other venues are very hard to defend against, particularly in large and complex IT environments.

Named Vulnerabilities and Dread Risk

In the middle of my 200 mile drive home today, it occurred to me that the reason Heartbleed, Shellshock and Poodle received so much focus and attention, both within the IT community and generally in the media, is the same reason that most people fear flying: something that Gerd Gigerenzer calls “dread risk” in his book “Risk Savvy: How to Make Good Decisions”.  The concept is simple: most of us dread the thought of dying in a spectacular terrorist attack or a plane crash, which are actually HIGHLY unlikely to kill us, while we have implicitly accepted the risks of the far more common yet mundane things that will almost certainly kill us: car crashes, heart disease, diabetes and so on. (At least for those of us in the USA)

These named “superbugs” seem to have a similar impact on many of us: they are probably not the thing that will get our network compromised or data stolen, yet we talk and fret endlessly about them, while we implicitly accept the things that almost certainly WILL get us compromised: phishing, poorly designed networks, poorly secured systems and data, drive by downloads, completely off-the-radar and unpatched systems hanging out on our network, and so on.  I know this is a bit of a tortured analogy, but similar to car crashes, heart disease and diabetes, these vulnerabilities are much harder to fix, because addressing them requires far more fundamental changes to our routines and operations.  Changes that are painful and probably expensive.  So we latch on to these rare, high-profile named-and-logo’d vulnerabilities that show up on the 11 PM news and systematically drive them out of our organizations, feeling a sense of accomplishment once that last system is patched.  The systems that we know about, anyhow.

“But Jerry”, you might be thinking, “all that media focus and attention is the reason that everything was patched so fast and no real damage was done!”  There may be some truth to that, but I am skeptical…

Proof-of-concept code was available for Heartbleed nearly simultaneously with its disclosure.  Twitter was alight with people posting contents of memory they had captured in the hours and days following.  There was plenty of time for this vulnerability to be weaponized before most vendors even had patches available, let alone before organizations implemented them.

Similarly, proof-of-concept code for Shellshock was also available right away.  Shellshock, in my opinion and in the opinion of many others, was FAR more significant than Heartbleed, since it allowed execution of arbitrary commands on the system being attacked, and yet there has only been one reported case of an organization being compromised using Shellshock – BrowserStack.  By the way, that attack happened against an old dev server that remained unpatched for quite some time after Shellshock was announced.  We anecdotally know that there are other servers out on the Internet that have been impacted by Shellshock, but as far as anyone can tell, these are nearly all abandoned web servers.  These servers appear to have been conscripted into botnets for the purposes of DDoS.  Not great, but hardly the end of the world.

And then there’s Poodle.  I don’t even want to talk about Poodle.  If someone has the capability to pull off a Poodle attack, they can certainly achieve whatever end far easier using more traditional methods of pushing client-side malware or phishing pages.

The Road To Breach Hell Is Paved With Accepted Risks

As the story about Sony Pictures Entertainment continues to unfold, and we learn disturbing details, like the now infamous “password” directory, I am reminded of a problem I commonly see: assessing and accepting risks in isolation, and those accepted risks materially contributing to a breach.

Organizations accept risk every day. It’s a normal part of existing. However, a fundamental requirement of accepting risk is understanding the risk, at least to some level. In many other aspects of business operations, risks are relatively clear cut: we might lose our investment in a new product if it flops, or we may have to lay off newly hired employees if an expected contract falls through. IT risk is a bit more complex, because the thing at risk is not well defined. The apparent downside to a given IT tradeoff might appear low, however in the larger context of other risks and fundamental attributes of the organization’s IT environment, the risk could be much more significant.

Nearly all major man-made disasters are the result of a chain of problems that line up in such a way that allows or enables the disaster, not the result of a single bad decision or bad stroke of luck.  The most significant breaches I’ve witnessed had a similar set of weaknesses that lined up just so.  Almost every time, at least some of the weaknesses were consciously accepted by management.  However, managers would almost certainly not have made such tradeoff decisions if they understood that their decision could have led to such a costly breach.

The problem is compounded when multiple tradeoffs are made that have no apparent relationship with each other, yet are related.

The message here is pretty simple: we need to do a better job of conveying the real risks of a given tradeoff, without overstating them, so that better risk decisions can be made. This is HARD. But it is necessary.

I’m not proposing that organizations stop accepting risk, but rather that they do a better job of understanding what risks they are actually accepting, so management is not left saying: “I would not have made that decision if I knew it would result in this significant of a breach.”