Category Archives: Hacking

Central Banks and Used Switches

Salacious headlines are making the rounds, complete with possibly the worst stock hacker picture ever, indicating that the $81 million theft from the Central Bank of Bangladesh was pretty easy to pull off because the bank used “second hand routers,” and implying that the bank employed no firewall.

The money was stolen when criminals hijacked the SWIFT terminal(s) at the Central Bank of Bangladesh and proceeded to issue transfers totaling $1 billion to foreign bank accounts.  Fortunately, most of the transactions were cancelled after the attackers apparently made a spelling mistake in the name of one of the recipients.

We don’t know all that much about how the crime really happened.  A Reuters story, based on comments from an investigator, gives a little more detail, but not much.

What we know is the following:

  1. The Central Bank of Bangladesh has 4 “servers” that it keeps in an isolated room with no windows on the 8th floor of its office.
  2. Investigators commented that these 4 servers were connected to the bank network using second hand, $10 routers or switches (referred to as both in various sources).
  3. Investigators commented that the crime would have been more difficult if a firewall had been in place.

And so we end up with a headline that reads “Bank with No Firewall…” and “Bangladesh Bank exposed to hackers by cheap switches, no firewall: police”.

The implication is that the problem arose from the quality of the switches and the lack of a firewall.  These factors are not the cause of the problem.  This bank could have spent a few thousand dollars on a managed switch, and a few tens of thousands on a fancy next gen firewall from your favorite vendor, and almost certainly they would have been configured in a manner that still let the hack happen.  If an organization does not have the talent and/or resources to design and operate a secure network, as is apparently the case here, it will end up with the fancy managed switch configured to be a dumb switch and a firewall policy that lets all traffic through in both directions.  We are pointing the finger at the technology used, but the state of the technology is a symptom, not the problem.

We can infer from the story that the four SWIFT servers in the isolated room are attached to a cheap 5 or 10 port switch, plugged into a jack that connects those systems to the broader, probably flat, bank network.  I strongly suspect that the bank does indeed have a firewall at its Internet gateway, but there was very likely nothing sitting between the football watching, horoscope checking, phishing link clicking masses of bank employee workstations and those delicious SWIFT terminals in the locked room*.  Or maybe the only place to browse the Internet in private at the bank is from the SWIFT terminals themselves.  After all, the room is small, locked and has no windows**.

It doesn’t take expensive firewalls or expensive switches to protect four systems in a locked room.  But, we apparently think of next gen firewalls as the APT equivalent of my tiger repellent rock***.

*I have no idea if they really do this, but it happens everywhere else, so I’m going with it.

** I have no idea if they did this, either, but I know people who would have done it, were the opportunity available to them.

***Go ahead and laugh.  I’ve NEVER been attacked by a tiger, though.

On The Sexiness of Defense

For years now, defenders have been trying to improve the perception of defense relative to offense in the IT security arena.  One only has to look at the schedule of talks at the average security conference to see that offense still draws the crowds.  I’ve discussed the situation with colleagues, who also point out that much of the entrepreneurial development in information security is on the offense/red team side of the fence.

That made me reflect on the many emails I receive from listeners of the security podcast I co-host.  With only a few exceptions, everyone who has asked for advice was looking for guidance on getting into the offensive side of security.

I’ve been pondering why that is, and I have a few thoughts:

Offense captures the imagination

Let’s face it, hacking things is pretty cool.  Many people have pointed out that hackers are like modern-day witches, at least as viewed by some of the political establishment.

Offense is about technology.  We LOVE technology.  And we love to hate some of the technology.

Also, offensive activities make for great stories and conference talks, and can often be demonstrated pretty easily in front of an audience.

Offense has a short cycle time

From the perspective of starting a security business, the cycle time for developing an “offering” is far shorter than for a more traditional security product or service.  The service simply relies on the abilities and reputation of the people performing it.  I, of course, do not mean to downplay the significant talent and countless hours of experience such people have; I am pointing out that by the time such a venture is started, these individuals already possess much of the talent, as opposed to needing to go off and develop a new product.

Offense is deterministic (and rewarding)

Penetrating a system is deterministic; we can prove that it happened.  We get a sense of satisfaction.  Getting a shell probably gives us a bit of a dopamine rush (this would be an interesting experiment to perform in an MRI, in case anyone is looking for a research project).

We can talk about our offensive conquests

Offensive practitioners are often able to discuss the details of their successes publicly, as long as certain information is obscured, such as the name of the customer.

If you know how to break it…

You must know how to defend it.  My observation is that many organizations seek out offense to help improve their defense.

…And then there is defense

Defense is more or less the opposite of the above statements.  If we are successful, there’s often nothing to say, at least nothing that would captivate an audience.  If we aren’t successful, we probably don’t want to talk about it publicly.  Unlike many people on the offense side, defenders are generally employees of the organization they defend, so if I get up and talk about my defensive antics, everyone will implicitly know which company the activity happened at, and my employer would not approve of such disclosure.  Defense is complicated and often relies on the consistent functioning of a mountain of boring operational processes, like patch management, password management, change management and so on.

Here’s what I think it would take to make defense sex[y|ier]

What we need, in my view, is to apply the hacker mindset to defensive technology.  For example, a script that monitors for suspicious DNS queries and automatically initiates a response, such as capturing the memory of the offending device or moving it to a separate VLAN.  Or a script that detects outbound network traffic from servers and performs some automated triage and/or remedial activity.  And so on.
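As a minimal sketch of the idea (not a production tool): assume a hypothetical tab-separated DNS query log of client IP and query name, and a stubbed-out response action.  The log path, blocklist and entropy threshold are all placeholders you would tune for your environment.

```python
#!/usr/bin/env python3
"""Sketch of 'hacker mindset' defense: watch DNS queries and react.

Assumptions (hypothetical): queries arrive as tab/space-separated lines
(client_ip, qname) in QUERY_LOG, and quarantine_device() is a stub for
whatever your environment supports (switch API, NAC, EDR isolation).
"""
import math
import time
from collections import Counter

QUERY_LOG = "/var/log/dns/queries.log"               # hypothetical log location
BLOCKLIST = {"evil-c2.example", "dropzone.example"}  # placeholder intel feed

def entropy(s: str) -> float:
    """Shannon entropy of a string; DGA-style domains tend to score high."""
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def suspicious(qname: str) -> bool:
    label = qname.split(".")[0]
    return qname.rstrip(".") in BLOCKLIST or (len(label) > 12 and entropy(label) > 3.5)

def quarantine_device(ip: str) -> None:
    # Stub: a real deployment might capture memory first, then move the
    # switch port to an isolation VLAN or trigger an EDR isolate action.
    print(f"[!] would quarantine {ip} pending triage")

def follow(path: str):
    """Yield new lines appended to a file, tail -f style."""
    with open(path) as fh:
        fh.seek(0, 2)  # start at end of file
        while True:
            line = fh.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line

if __name__ == "__main__":
    for line in follow(QUERY_LOG):
        try:
            client_ip, qname = line.split()[:2]
        except ValueError:
            continue
        if suspicious(qname):
            quarantine_device(client_ip)
```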

Certainly there are pockets of this happening, but not enough.  That is a bit surprising, since I would think such “defensive hackers” would be highly sought after by organizations looking to make significant enhancements to their security posture.

Having said all of that, I continue to believe that defenders benefit from having some level of understanding of offensive tactics – it is difficult to construct a robust defense if we are ignorant of the TTPs that attackers use.

Cyber Introspection: Look at the Damn Logs

I was talking to my good friend Bob today about whatever came of Dick Cheney’s weather machine when he interrupted with the following question:

“Why, as a community, are we constantly seeking better security technology when we aren’t using what we have?”

Bob conveyed the story of a breach response engagement he worked on for a customer involving a compromised application server.  The application hadn’t been patched in years and had numerous vulnerabilities for anyone with some inclination to exploit.  And exploited it was.  The server was compromised for months prior to being detected.

The malware dropped on the server for persistence and other activities was indeed sophisticated.  There was no obvious indication that the server had been compromised.  System logs were cleared from the time of the breach and subsequent logs had nothing related to the malicious activity on the system.

A look at the logs from a network IDS sensor which monitors the network connecting the server to the Internet showed nearly no alerts originating from that server until the suspected date of the intrusion, as determined by forensic analysis of the server.  On that day, the IDS engine started triggering many, many alerts as the server was attempting to perform different activities such as scanning other systems on the network.

But no one was watching the IDS alerts.

The discussion at the client quickly turned to new technologies to stop such attacks in the future and to allow fast reaction if another breach were to happen.

But no one talked about more fully leveraging the components already in place, like IDS logs.  IDS is an imperfect system that requires care and feeding (people); clearly an inferior option when compared to installing a fancy advanced attack prevention appliance.

I previously wrote a similar post a while back regarding AV logs.

Why are we so eager to purchase and deploy yet more security solutions, which are undoubtedly imperfect and also undoubtedly require resources to manage, when we are often unable to get full leverage from the tools we already have running?

Maybe we should start by figuring out how to properly implement and manage our existing IT systems, infrastructure and applications.  And watch the damn logs.
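To make “watch the damn logs” slightly more concrete, here is a minimal sketch of a daily triage report, assuming Suricata’s eve.json alert format (adjust the field names for your sensor).  The threshold is an arbitrary placeholder; the point is that the alert spike Bob described would leap out of even a report this crude.

```python
#!/usr/bin/env python3
"""Summarize IDS alerts per source host per day from Suricata eve.json."""
import json
from collections import Counter

EVE_LOG = "/var/log/suricata/eve.json"  # typical Suricata location

counts: Counter = Counter()
with open(EVE_LOG) as fh:
    for line in fh:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue
        if event.get("event_type") != "alert":
            continue
        day = event.get("timestamp", "")[:10]  # YYYY-MM-DD
        counts[(event.get("src_ip", "?"), day)] += 1

# Print the noisiest (host, day) pairs; 100/day is an arbitrary threshold.
for (src, day), n in sorted(counts.items(), key=lambda kv: -kv[1])[:20]:
    flag = "  <-- investigate" if n > 100 else ""
    print(f"{day}  {src:15}  {n:6} alerts{flag}")
```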

JPMC Is Getting Off Easy

News today indicates that the JPMC breach discovered earlier in 2014 was the result of a neglected server not being configured to require 2FA as it should have been.   That was a pretty simple oversight, right?  Well, not so fast.  There are a lot of other details that previously surfaced which paint a more complicated picture.

– First, we know that the breach started via a vulnerability in a web application.

– Next, we know that the breach was only detected after JPMC’s corporate challenge site was breached and JPMC started examining other networks for similar traffic, finding that the attackers were also on its systems.

– We also know that “gigabytes” of data on 80 million US households was stolen.

– Finally, we know that the breach extended to at least 90 other servers in the JPMC environment.

Attributing the breach to missing 2FA on a server seems very incomplete.

Certainly we have seen a number of breaches attributed to unmanaged systems, such as Bit9 and BrowserStack. This is why inventory is the #1 critical cyber security control. Without it, we don’t know what needs to be secured.

We can also include at least:
– Application vulnerability
– Gigabytes of data being exfiltrated undetected
– Hacker activity and command and control activity on 90 different servers undetected
– Configuration management

This isn’t intended to drag JPMC through the mud; rather, it’s to point out that these larger breaches are the unfortunate alignment of a number of control deficiencies rather than a single, simple oversight in configuring a server.

The Elephant In The Room With Preventing DDOS Attacks

DDOS attacks have been a regular fixture in infosec news for some time now. Primarily, those attacks have used open DNS resolvers, though recently NTP flared up as a service of choice. In a pretty short amount of time, the community made dramatic improvements in reducing the number of NTP servers susceptible to being used in DDOS attacks. However, both open resolvers and NTP continue to be a problem, and other services, like SNMP, are likely to be targeted in the future.

One common theme is that these services are UDP-based, and so it’s trivial to spoof a source IP address and get significant traffic amplification directed toward the victims of these DDOS attacks.
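To put rough, illustrative numbers on it: a spoofed DNS query of around 60 bytes can draw a response of 3,000 bytes or more from an open resolver (an ANY query against a DNSSEC-signed zone, for example), an amplification factor of roughly 50x. Every 1 Mbps the attacker sends becomes something like 50 Mbps aimed at the victim, and the traffic appears to come from the resolvers rather than the attacker.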

While I think it’s necessary to focus on addressing the open resolver problem, NTP and similar issues, I’m very surprised that we, as a community, are not pushing to have ISPs implement a very basic control that would dramatically restrict these kinds of attacks: simple source address egress filtering.

Yes, this likely puts additional load on routers or firewalls, but it’s pretty basic hygiene not to allow packets out of a network with a source address that the ISP does not announce routes for. I am sure there are some edge case exceptions, such as with asymmetric routing, but those should be manageable between the customer and the ISP.
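The control itself is conceptually trivial. In practice it lives in router ACLs or uRPF rather than in a script, but as a sketch of the logic, using documentation prefixes as stand-ins for whatever an ISP actually announces:

```python
#!/usr/bin/env python3
"""Conceptual sketch of source-address egress filtering (BCP 38)."""
import ipaddress

# Placeholder for the prefixes an ISP actually originates or routes for
# its customers (RFC 5737 documentation blocks used here as examples).
ANNOUNCED = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def permit_egress(src: str) -> bool:
    """Allow a packet out only if its source address is one we route for."""
    addr = ipaddress.ip_address(src)
    return any(addr in net for net in ANNOUNCED)

# A packet with a spoofed source gets dropped at the edge:
print(permit_egress("198.51.100.25"))  # True  - legitimate customer source
print(permit_egress("192.0.2.80"))     # False - spoofed, drop it
```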

So, each time we hear about a DDOS attack and ponder the pool of poorly configured DNS servers, I propose that we should also be pondering the ISPs who allow traffic out of their networks with a source address that is clearly spoofed.

Lessons From The Neiman Marcus Breach

Bloomberg released a story about a forensic report from Protiviti detailing the findings of their investigation into the Neiman Marcus breach. There are very few details in the story, but what is included is quite useful.

First, Protiviti asserts that the attackers who breached Neiman do not appear to be the same as those who breached Target. While this doesn’t seem like a major revelation to many of us, it does point out that there are numerous criminals with the ability to successfully pull off such attacks. And from this, we should consider that these “sophisticated” attacks are not all that hard to perpetrate given the relative reward.

Next, Protiviti was apparently unable to determine the method of entry used by the attackers. While that is unfortunate, we should not solely focus on hardening our systems against initial attack vectors, but also apply significant focus to protecting our important data and the systems that process and store that data. Criminals have a number of options to pick from for initial entry, such as spear phishing and watering hole attacks. We need to plan for failure when we design our systems and processes.

The activities of the attackers operating on Neiman systems apparently created nearly 60,000 “alerts” during the time of the intrusion. It is very hard to draw specific conclusions because we don’t actually know what kind of alerts are being referenced. I am going to speculate, based on other comments in the article, that the alerts were from anti-virus or application whitelisting:

…their card-stealing software was deleted automatically each day from the Dallas-based retailer’s payment registers and had to be constantly reloaded.

…the hackers were sophisticated, giving their software a name nearly identical to the company’s payment software so that any alerts would go unnoticed amid the deluge of data routinely reviewed by the company’s security team.

The company’s centralized security system, which logged activity on its network, flagged anomalous behavior of a malicious software program though it didn’t recognize the code itself as malicious or expunge it, according to the report. The system’s ability to automatically block the suspicious activity it flagged was turned off because it would have hampered maintenance, such as patching security holes, the investigators noted.

The 59,746 alerts set off by the malware indicated “suspicious behavior” and may have been interpreted as false positives associated with legitimate software.

However, some of these comments are a bit contradictory. For instance:

payment registers and had to be constantly reloaded

versus:

it didn’t recognize the code itself as malicious or expunge it

In any event, a key takeaway is that we often have the data we need to detect that an attack is underway.

Next is a comment that highlights a common weakness I covered in a previous post:

The server connected both to the company’s secure payment system and out to the Internet via its general purpose network.

Servers that bridge network “zones”, as this Neiman server apparently did, are quite dangerous, and their exploitation is a common trait of many breaches. Such systems should be eliminated.
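Hunting for such bridge systems doesn’t require fancy tooling either. Here is a rough sketch of a self-check a host could run, assuming you can express your zones as address ranges; the zone definitions are placeholders, and the same logic works against a CMDB export. It uses the psutil library to enumerate interface addresses.

```python
#!/usr/bin/env python3
"""Flag a host that straddles network 'zones' (e.g., payment + general)."""
import socket
import ipaddress
import psutil  # pip install psutil

# Placeholder zone definitions; substitute your real address plan.
ZONES = {
    "payment":  ipaddress.ip_network("10.10.0.0/16"),
    "general":  ipaddress.ip_network("10.20.0.0/16"),
    "extranet": ipaddress.ip_network("172.16.0.0/16"),
}

hit_zones = set()
for iface, addrs in psutil.net_if_addrs().items():
    for a in addrs:
        if a.family != socket.AF_INET:
            continue
        ip = ipaddress.ip_address(a.address)
        for zone, net in ZONES.items():
            if ip in net:
                hit_zones.add(zone)

if len(hit_zones) > 1:
    print(f"WARNING: this host bridges zones: {sorted(hit_zones)}")
else:
    print(f"OK: zones seen: {sorted(hit_zones) or ['none defined']}")
```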

Finally, a very important point from the story to consider is this:

The hackers had actually broken in four months earlier, on March 5, and spent the additional time scouting out the network and preparing the heist…

This should highlight for us the importance of a robust ability to detect malicious activity on our networks and systems. While some attacks will start and complete before anyone could react, many of these larger, more severe breaches tend to play out over a period of weeks or months. This has been highlighted in a number of industry reports, such as the Verizon DBIR.

One Weird Trick To Secure Your PCs

Avecto released a report which analyzed recent Microsoft vulnerabilities and found that 92% of all critical vulnerabilities reported by Microsoft were mitigated when the exploit attempt happened on an account without local administrator permissions. Subsequently, there has been a lot of renewed discussion about removing admin rights as a mitigation for these kinds of vulnerabilities.

Generally, I think it’s a good idea to remove admin rights if possible, but there are a number of items to think about which I discuss below.

First, when a user does not have local administrator rights, a help desk person will generally need to remotely perform software installs or other administrative activities on the user’s behalf. This typically involves a support person logging on to the PC using some manner of privileged domain account which has been configured with local administrator rights on the PCs. Once this happens, a cached copy of the login credentials used by the support staff is saved to the PC, albeit in a hashed form. Should an attacker be able to obtain access to a PC using some form of malware, she may be able to either brute-force the password from the hash or use a pass-the-hash attack, which would grant the attacker far broader permissions on the victim organization’s network than a standard user ID would. Additionally, an attacker who already has a presence on a PC may use a tool such as mimikatz to directly obtain the plain text password of the administrative account.

You might be thinking: “but, if I remove administrator rights, attackers would be very unlikely to gain access to the PC in a manner that lets them steal hashes or run mimikatz, both of which require at least administrator-level access. What gives?”

That is a good question, which dovetails into my second point. The Avecto report covers vulnerabilities which Microsoft deems critical. However, most local privilege escalation vulnerabilities I could find are rated only Important by Microsoft. This means that even if you don’t have administrator rights, I can trick you into running a piece of code of my choosing, such as one delivered through an email attachment, or even use a vulnerability in another piece of software like Java, Flash Player or a PDF reader. My code initially runs with your restricted permissions, but it can then leverage a privilege escalation flaw to obtain administrator or system privileges. From there, I can steal hashes or run mimikatz. Chaining exploits in attacks is not all that uncommon any longer, and we shouldn’t consider this scenario so unlikely that it isn’t worth our attention.

I’ll also point out that many organizations don’t quickly patch local privilege escalation flaws, because they tend to carry a lower severity rating and they intuitively seem less important to focus on, as compared to other vulnerabilities which are rated critical.

Lastly, many of the recent high profile, pervasive breaches in recent history heavily leveraged Active Directory by means of credential theft and subsequent lateral movement using those stolen credentials. This means that the know-how for navigating Active Directory environments through credential stealing is out there.

Removing administrator rights is generally a prudent thing to do from a security standpoint. A spirited debate has been raging for years over whether removing administrator rights costs money (in the form of additional help desk staff who now have to perform activities users used to do themselves, plus the related productivity loss for users who now have to call the help desk), whether it is a net savings (fewer malware infections, fewer misconfigurations by users, lower incident response costs, and associated higher user productivity), or whether those two factors simply cancel each other out. I can’t add a lot to that debate, as the economics are going to be very specific to each organization considering removing administrator rights.

My recommendations for security precautions to take when implementing a program to remove admin rights are:
1. Prevent Domain Administrator or other accounts with high privileges from logging into PCs. Help desk technicians should be using a purpose-created account which only has local admin rights on PCs, and systems administrators should not be logging in to their own PCs with domain admin rights.
2. Do not disable UAC.
3. Patch local privilege escalation bugs promptly.
4. Use tools like EMET to prevent exploitation of some 0day privilege escalation vulnerabilities.
5. Disable cached passwords if possible, noting that this isn’t practical in many environments (see the audit sketch after this list).
6. Use application whitelisting to block tools like mimikatz from running.
7. Follow a security configuration standard like the USGCB.
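On item 5, here is a quick audit sketch for a Windows PC. It reads the well-known CachedLogonsCount registry value (the default is 10); setting it to 0 disables caching entirely, but that breaks offline logons for laptops, which is exactly why it isn’t practical in many environments.

```python
#!/usr/bin/env python3
"""Audit how many domain logons this PC caches (Windows-only)."""
import winreg

KEY = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    value, _type = winreg.QueryValueEx(key, "CachedLogonsCount")

print(f"CachedLogonsCount = {value}")  # default is 10
if int(value) > 0:
    # Cached verifiers are present; as discussed above, assume a local
    # admin (or an attacker who becomes one) can attack them offline.
    print("Cached domain credentials are present on this PC.")
```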

Please leave a comment below if you disagree or have any other thoughts on what can be done.

H/T @lerg for sending me the story and giving me the idea for this post.

What The Target Breach Can Teach Us About Vendor Management

A recent report by Brian Krebs identified that Fazio Mechanical, the HVAC company that was compromised and used to attack Target, was breached through an “email attack” that allegedly stole Fazio’s credentials.

In my weekly security podcast, I rail pretty hard on workstation security, particularly for those systems which have access to sensitive systems or data, since attacking the workstation has become a common method for our adversaries. And hygiene on workstations is generally pretty terrible.

However, I am not going to pick on that aspect in this post. I want to explore this comment in Krebs’ story:
“But investigators close to the case took issue with Fazio’s claim that it was in full compliance with industry practices, and offered another explanation of why it took the Fazio so long to detect the email malware infection: The company’s primary method of detecting malicious software on its internal systems was the free version of Malwarebytes Anti-Malware.”

This assertion seems to be hearsay from some unknown sources rather than established fact, although Krebs’ sources have tended to be quite reliable and accurate in the past. I am going to focus on a common problem I’ve seen which is potentially demonstrated by this case.

I am not here to throw rocks at Target or Fazio. Both are victims of a crime, and this post is intended to be informative rather than making accusations, so I will go back to the fictitious retailer, MaliciousCo, for this discussion. We know that MaliciousCo was compromised through a vendor who was itself compromised. As I described in the last post, MaliciousCo has a robust security program, which includes vendor management. Part of the vendor management program includes a detailed questionnaire which is completed annually by vendors. A fictitious cleaning company, JanitorTech, was compromised and led to the breach of MaliciousCo.

Like Fazio, JanitorTech installed the free version of Malwarebytes (MBAM) on its workstations, and an IT person would run it manually on a system if a user complained about slowness, pop-ups or other issues. When MaliciousCo would send out its annual survey, the JanitorTech client manager would come to a question that read: “Vendor systems use anti-virus software to detect and clean malicious code?” and answer “yes” without hesitation, because she sees the MBAM icon on her computer desktop every day. MaliciousCo saw nothing particularly concerning in the response; all of JanitorTech’s practices seemed to align well with MaliciousCo policies. However, there is clearly a disconnect.

What’s worse is that MaliciousCo’s vendor management program seems to be oblivious to the current state of attack techniques. The reliance on anti-virus for preventing malicious code is a good example of that.

So, what should MaliciousCo ask instead? I offer this suggestion:
– Describe the technology and process controls used by the vendor to prevent, block or mitigate malicious code.

I have personally been on both sides of the vendor management questionnaire over the years. I know well how a vendor will work hard to ‘stretch’ the truth in order to provide the expected answers. I also know that vendor management organizations are quick to accept answers given without much evidence or inspection. Finally, I have seen that vendor management questionnaires, and the programs behind them, tend not to get updated to incorporate the latest threats.

This should serve as an opportunity for us to think about our own vendor management programs: how up-to-date they are, and whether there is room for the kind of confusion demonstrated in the JanitorTech example above.

What The Target Breach Should Tell Us

Important new details have been emerging about the Target breach. First came news that Fazio Mechanical, an HVAC company, was the avenue of entry into the Target network, as reported by Brian Krebs.

This started a firestorm of speculation and criticism: that Fazio was remotely monitoring or otherwise accessing the HVAC units at Target stores; that Target connected those HVAC units to the same networks as its POS terminals; and that, by extension, Target was not complying with the PCI requirement for two-factor authentication for access to the environment containing card data, as evidenced by Fazio’s stolen credentials giving the attackers access to the POS networks.

Fazio Mechanical later issued a statement indicating that they do not perform remote monitoring of Target HVAC systems and that “Our data connection with Target was exclusively for electronic billing, contract submission and project management.”

In a previous post on this story, I hypothesized about the method of entry being a compromised vendor with access to a partner portal, and the attacker leveraging this access to gain a foothold in the network. Based on the description of access in Fazio Mechanical’s statement, this indeed appears to be exactly what happened.

We still do not know how the attacker used Fazio’s access to Target’s partner systems to gain deeper access into Target’s network. Since the point of this post is not to speculate on what Target did wrong, but rather what lessons we can draw from current events, I will go back to my own hypothetical retail chain, MaliciousCo (don’t let the name fool you, MaliciousCo is a reputable retailer of fine merchandise).

As described in my previous post, MaliciousCo has an extranet which includes a partner portal for vendors to interact with MaliciousCo, such as submitting invoices, processing payments, refunds and work orders. The applications on this extranet are not accessible from the Internet and require authenticated VPN access for entry. MaliciousCo’s IT operation has customized a number of applications used for conducting business with its vendors. Applications such as this are generally not intended to be accessible from the Internet and often don’t get much security testing to identify common flaws, and where security vulnerabilities are identified, patches can take considerable time for vendors to develop and even longer for customers to apply.

In MaliciousCo’s case, the extranet applications are considered “legacy”, meaning there is little appetite and no budget to invest in them, and because they were highly customized, applying security patches for the applications would take a considerable development effort. Now, MaliciousCo has a robust security program which includes requirements for applying security patches in a timely manner. MaliciousCo’s IT team assessed the risk posed by not patching these applications and determined the risk to be minimal because of the following factors:

1. The applications are not accessible from the Internet.
2. Access to the extranet is limited to a set of vendors who MaliciousCo’s vendor management program screens for proper security processes.
3. There are a number of key financial controls outside of these applications that would limit the opportunity for financial fraud. An attacker couldn’t simply gain access to the application and start submitting invoices without tripping a reconciliation control point.
4. The applications are important for business, but downtime can be managed using normal disaster recovery processes should some really bad security incident happen.

Given the desire to divert IT investment to strategic projects and the apparently small potential for impact, MaliciousCo decides against patching these extranet applications the way its Internet-facing applications are patched. Subsequently, MaliciousCo experiences a significant compromise when an attacker hijacks the extranet VPN account of a vendor. The attacker identifies an application vulnerability which allows a web shell to be uploaded to the server, then exploits an unpatched local privilege escalation vulnerability on the Windows OS which hosts the extranet application and uses these privileges to collect cached Active Directory credentials for logged-in administrators using a combination of mimikatz and JtR. While the extranet is largely isolated from other parts of the MaliciousCo network, certain network ports are open to internal systems to support functionality like Active Directory. From the compromised extranet application server, the attacker moves laterally, first to an extranet domain controller, then to other servers in the internal network environment. From here, the attacker is able to access nearly any system in the MaliciousCo environment: creating new Active Directory user IDs, establishing alternative methods of access into the MaliciousCo network using reverse shell remote access trojans, mass-distributing malware to MaliciousCo endpoints, collecting and exfiltrating data, and so on.

MaliciousCo didn’t fully understand the potential impacts resulting from a compromise of its extranet applications when evaluating the security risks associated with those applications.
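One cheap way to test a “largely isolated” assumption like MaliciousCo’s is to probe from the extranet segment itself and see which internal services actually answer. A rough sketch follows; the target range and port list are placeholders, and this should obviously only be run against networks you are authorized to test.

```python
#!/usr/bin/env python3
"""From an extranet host, probe which internal services actually answer.

AD-related ports (88, 135, 389, 445) are exactly the kind of reachability
an attacker in the position described above would abuse.
"""
import socket
import ipaddress

INTERNAL_NET = ipaddress.ip_network("10.0.0.0/28")  # placeholder sample range
PORTS = [88, 135, 389, 445, 3389]  # Kerberos, RPC, LDAP, SMB, RDP

for host in INTERNAL_NET.hosts():
    for port in PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.3)
            if s.connect_ex((str(host), port)) == 0:
                print(f"reachable: {host}:{port}  <-- potential lateral movement path")
```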

We don’t know what happened yet in the case of Target, and MaliciousCo is just a story. But this scenario has apparently played out at organizations like DigiNotar, the State of South Carolina and many others.

Why does this happen?

In my view, the problem is largely a failure to understand the capabilities and common tactics of our adversaries, along with an incomplete understanding of the interplay within and between complex IT systems, Active Directory in particular. I intently follow the gory details of publicly disclosed breaches, and it is clear to me that attackers are following a relatively common methodology which often involves:
– gaining initial entry through some mechanism (phishing, web app vulnerability, watering hole)
– stealing credentials
– lateral movement via systems which have connectivity with each other using stolen credentials
– establishing a ‘support infrastructure’ inside the victim network
– establishing persistence on victim systems
– identifying and compromising targets using stolen or maliciously created credentials, or via hijacking standard management tools employed by the victim
– exfiltration (or other malicious action)

While we don’t know the details of what happened in the case of Target, it seems quite clear that the attacker was able to laterally move from a partner application server onto networks where POS terminals reside. The specific means by which that happened are not clear and indeed we may never know for sure.

I believe that we, as defenders, need to better understand the risks posed by situations like this. I am not proposing that such security risks must always require action. Rather, based on my experience in IT, I believe these risks often go unidentified, and so are implicitly accepted due to lack of awareness, rather than consciously evaluated.

In the next post, I cover what we can learn regarding the security of vendors based on what has been disclosed about the Target breach so far.

Thoughts On Avoiding The Complex POS Attacks

I have been watching the Target breach story unfold with great interest. In full disclosure, I have no insight into what has happened at Target, beyond the reports that are publicly available. What follows is purely hypothesis and speculation for the purposes of identifying potential mitigations for what may have happened.

Clearly, we don’t know the precise details of how the attack was carried out; however, there has been a lot of analysis of various aspects of what is known, including this report from SecureWorks. Malcovery has also released a report speculating on the method of entry, based on the file hashes provided in an earlier report by iSight. I am most interested in identifying how the attack happened and what can be done to defend against such an attack. The SecureWorks report provides a good high-level list of activities, but not a lot of specificity. For example:

– Firewall ACLs — Access control lists (ACLs) at network borders can be an effective short-term mitigation technique against specific hosts during an active incident when response policies dictate that network traffic to a hostile host be terminated.
– Network segmentation — Organizations should segment PCI networks to restrict access to only authorized users and services.

These, among many others, are great concepts. However, my observation is that they are not deterministic states; rather, they are subjective. What is an “authorized user or service”?

Malcovery believes that the Target attack likely began with a web server being compromised with an SQL injection attack. Let’s assume this is true for a moment in my hypothetical retailer MaliciousCo (oddly, the victim). My web server is on a dedicated network segment. But my site is, of course, a web app connected to a database server. My web site needs to connect to my SQL server, but I don’t want my SQL server hanging out on a network that is accessible to the Internet, even if I don’t allow Internet-originated traffic to the SQL server itself, so I put it on an internal network, because I have other business applications and processes that need to access that same database server. Now, I have a legitimate case where my web site and SQL server are authorized to talk to one another. Because I am a diligent architect, I even route the traffic between the web and SQL servers through an IPS. However, I have created a path into my organization from my web server to my internal systems.

One of the first lessons here, assuming this is the case, is that there should not be ANY connectivity between external server networks and internal networks. The one caveat I would extend is to allow INBOUND traffic from a limited set of internal hosts to the external networks. Outbound traffic into internal networks is not permitted. Not for SQL, not for Active Directory, not for anything. Additionally, outbound traffic to the Internet should be blocked from hosts on Internet-accessible networks too; those hosts should only accept inbound connections from the Internet. The exception might be a very specific mechanism for accessing a payment gateway.

Having done this, any intrusion into my web server is contained on the server itself, along with any other systems that might be on that same network, and there is no practical avenue of lateral movement into the innards of my MaliciousCo network.
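Stating the rules this bluntly has a side benefit: they become checkable invariants rather than diagram annotations. A minimal sketch, with illustrative zone names and a made-up rule export:

```python
#!/usr/bin/env python3
"""The segmentation rules above, written down as checkable invariants."""

# Illustrative policy: inbound-only to external networks, with the
# payment gateway as the one carve-out.
ALLOWED = {
    ("internal", "external"),         # limited internal hosts inbound
    ("internet", "external"),         # inbound connections from the Internet
    ("external", "payment_gateway"),  # the specific exception
}

def violates_policy(src_zone: str, dst_zone: str) -> bool:
    return (src_zone, dst_zone) not in ALLOWED

# Lint a (hypothetical) exported firewall rule set:
rules = [
    ("internal", "external"),
    ("external", "internal"),  # the lateral movement path we must not allow
    ("external", "internet"),  # C2 / exfiltration path
]
for src, dst in rules:
    if violates_policy(src, dst):
        print(f"VIOLATION: {src} -> {dst} should not be permitted")
```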

Interestingly, in the case of Target, I doubt very highly that the problem involved the main web environment, which includes their online retail operation. We know that the breach didn’t involve the online part of Target’s business. We have also heard Target make reference to a vendor’s credentials being used to commit the breach. At this point, it’s not at all clear exactly what they meant, but I theorize Target is referring to the BMC Patrol user ID and password seen hard-coded in the POS malware. However, this opens up another line of consideration: extranets, or vendor portals. I have no insight into whether Target actually has such a thing, but my hypothetical mega retailer MaliciousCo does. This vendor portal is used by vendors to receive orders, submit invoices, communicate shipment information and so on. This portal is wholly separate from my main web presence. Access to the vendor portal is obtained via an authenticated VPN and isn’t accessible to the Internet at large.

If one of my suppliers becomes compromised, an attacker might have access to my vendor portal. Since I don’t have any direct, or even indirect, control over my vendor’s security posture (yes, I have them complete a checklist once per year, but we both know this is a Kabuki dance), I opt to treat the vendor portal exactly as I do my Internet sites by isolating it. This effectively restricts the ability of an attacker controlling my vendor portal to move laterally into my network.

Having said all of this, we don’t actually know how Target was breached. We know that a number of major breaches in the past have happened as a result of SQL injection on web servers. But it’s also possible that the initial attack looked like a Syrian Electronic Army attack, relying on iteratively more sophisticated and deeper spear phishing. Or maybe it was perpetrated using a watering hole attack with a site of interest to the retail industry; after all, we are hearing that there are many retailers involved. Or maybe it was an attack on ColdFusion running somewhere in their environment. My point is that there are many windows of opportunity. If MaliciousCo does a stellar job of isolating the web environment, determined attackers are going to try another approach to get at my juicy POS terminals.

My POS terminals should be on a strictly isolated network with all required supporting infrastructure contained on that network, the only exception being specific access to a payment gateway.

Planning for failure of other controls, my POS terminals themselves should be well locked down, using application whitelisting to block execution of any unknown software.

Configuring and isolating environments like this is inefficient, cumbersome and expensive. Our adversaries are clever and highly motivated. I am not proposing that we have to take these drastic and costly precautions; we can continue to optimize the design and operation of our IT environments around the axis of efficiency, but we should not feign surprise when major breaches occur. Breaches are inevitable where we have the intersection of means, opportunity and incentives. We can’t do a lot about the means or incentives variables. But we do control the opportunity variable.

I’ll be following up with a few more posts on other aspects we can learn from, such as monitoring.

By the way, I am not proposing that I have the only answer to this. This is a thought experiment and I encourage you to post your views, ideas or criticisms in the comments.