I recently read this post on TechNet regarding the difference in approach between attackers and defenders. It argues that defenders tend to think of their environment in terms of lists:
- Lists of data
- Lists of important systems
- Lists of accounts
But attackers “think” in graphs, meaning they think of the environment in terms of the interconnections between systems.
I’ve been pondering this thought since I read the TechNet post. The concept seems to partly explain what I’ve written about in the past regarding bad risk decisions.
My one critique of the TechNet post is that it didn’t (at least in my view) clearly articulate a really important attribute of thinking about your network as a graph: considering the inter-connectivity between endpoints from the perspective of each endpoint.
In our list-based thinking mode, we have, for instance, a list of important systems to protect and a list of systems that are authorized to access each protected system. What is often lost in this thinking is the inter-connectivity between endpoints downstream. As the TechNet article describes it:
“For the High Value Asset to be protected, all the dependent elements must be protected as thoroughly as the HVA—forming an equivalence class.”
The pragmatic problem I’ve seen is that the farther we get on the graph from the important asset to be protected, the more willing we are to make security trade-offs. However, given the nature of the technology we’re using and the techniques being successfully employed by attackers, it’s almost MORE important to ensure the integrity of downstream nodes on the graph in order to protect our key assets and data.
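To make that equivalence class concrete, here’s a minimal sketch in plain Python. The machine names and “can access” edges are entirely hypothetical, invented for illustration; the point is that a reverse walk from the HVA surfaces every node, however many hops away, that must be protected as thoroughly as the HVA itself:

```python
from collections import deque

# Hypothetical "can access" edges (illustrative names, not from the post):
# each key can initiate access to the systems in its list.
access = {
    "workstation-1": ["jump-box"],
    "workstation-2": ["jump-box", "print-server"],
    "jump-box": ["hva-database"],
    "print-server": [],
    "backup-server": ["hva-database"],
    "laptop-9": ["backup-server"],
}

def equivalence_class(graph, hva):
    """Return every node with a path to the HVA -- the set that must
    be protected as thoroughly as the HVA itself."""
    # Reverse the edges so we can walk upstream from the HVA.
    reverse = {}
    for src, dsts in graph.items():
        for dst in dsts:
            reverse.setdefault(dst, []).append(src)
    seen, queue = set(), deque([hva])
    while queue:
        node = queue.popleft()
        for upstream in reverse.get(node, []):
            if upstream not in seen:
                seen.add(upstream)
                queue.append(upstream)
    return seen

print(sorted(equivalence_class(access, "hva-database")))
```

Note that even laptop-9, two hops away and with no direct access to the database, lands in the class via the backup server. That is exactly the downstream inter-connectivity that a flat list of “systems authorized to access the HVA” misses.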
This creates a tough problem for large networks, and I found one comment on the TechNet post slightly telling: “Can you tell me the name of the tool to generate these graphs?” The recommendations in the TechNet post are certainly good, but often too vague. “Rethink forest trust relationships” sounds like sage advice, but what does it mean? The problem is that there doesn’t appear to be a simple or clean answer. To me, it seems we need some type of methodology to help perform those re-evaluations. Or, as I’ve talked about a lot on my podcast, we need a set of “design patterns” for infrastructure that embody sound security relationships between infrastructure components.
Another thought I had: these graphs exist at multiple layers:
- Network layer
- Application layer
- User ID/permission layer (Active Directory’s pwn once, pwn everywhere risk)
- Intra-system (relationship between process/applications on a device)
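One implication of these layers is that they compose: an attacker only needs an edge at *some* layer between two nodes, so the effective attack graph is the union of the per-layer graphs. A minimal sketch of that idea, using entirely hypothetical node names and edges:

```python
from collections import deque

# Hypothetical edges, one per layer (illustrative names only).
layers = {
    "network": [("laptop-9", "file-server")],
    "application": [("file-server", "erp-app")],
    "identity": [("erp-app", "domain-admin")],   # e.g. a cached credential
    "intra-system": [("domain-admin", "hva-database")],
}

def union_graph(layers):
    """The effective attack graph is the union of every layer's edges."""
    graph = {}
    for edges in layers.values():
        for src, dst in edges:
            graph.setdefault(src, set()).add(dst)
    return graph

def reachable(graph, start):
    """All nodes reachable from `start` by breadth-first search."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in graph.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# No single layer connects laptop-9 to the HVA, but the union does.
print("hva-database" in reachable(union_graph(layers), "laptop-9"))
```

Defenses that reason about one layer at a time (network ACLs alone, or AD permissions alone) can each look fine while the combined graph still offers a complete path.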
Final Thoughts (for now)
The complexity of thinking about our environments as graphs shouldn’t dissuade us from (potentially) using graphs as a tool to model them. Rather, that complexity suggests, to me, that we should be thinking more about building trusted, reliable domains (in the abstract sense of the word) that relate to each other based on the needs of protecting “the environment” as a whole, and less about trying to find some new piece of security technology to protect against the latest threats.