
Forcing Attacker Decisions

Today I want to talk to you about forcing decisions and how you can use the concept to gain a strategic advantage in your infosec work.

Over the past few years, the Golden State Warriors have revolutionized the game of basketball while winning two NBA championships and putting up record-setting numbers in a variety of statistical categories. They aren’t just winning, they’re dominating and changing how people approach the game at a fundamental level. There are a lot of reasons for this, but none more apparent than their fast-paced offense that is built around passing.

I recently read an article from ESPN describing how Warriors coach Steve Kerr formulated his offense. The article's worth reading if you care about basketball, but even if you don't, there's one quote in it that's relevant well beyond the sport. The Warriors' star player, Steph Curry, had this to say:

“The main goal is to just make the defense make as many decisions as you can so that they’re going to mess up at some point with all that ball movement and body movement and whatnot.”

The concept is simple but powerful. When a player makes a pass to another player, it forces the five defenders to make a decision and react. There are a lot of variables that have to be considered very quickly.

Now, let’s say that as soon as the second player receives a pass, they make another pass. What happened?

It forces the defense to consider a new set of variables, probably before they’ve had a chance to fully react to the variables encountered from the first pass. This mental reset causes confusion and slows the ability to react with the correct adjustment. Every quick pass compounds the opportunity for confusion. The Warriors rely on this to succeed, and it’s one of the reasons they track their passes per game statistic aggressively.

Watch the defenders in blue in any clip of this offense. Notice how lost they look after the first few passes? A couple of them have basically given up on the play by the time the shot goes up.

Of course, this concept goes well beyond basketball. It relates to all decision-making.

 

Forcing Attacker Decisions

Any time you make a decision, you are processing all available information. Good decision making is based on understanding every variable and having time to thoughtfully process the data. I believe network defenders are in a unique position to force attackers into poor decisions.

Home court advantage matters.

The attacker doesn't know your network. To learn it, they have to go through a period of iterative discovery: gain access to something, poke around, gain access to something else, poke around more, rinse and repeat. They will learn your network as they move closer to their objective, and each step of discovery provides an opportunity to force decisions through the strategic introduction of information. When this happens, an attacker might do something aggressive enough to trip a signature, pivot around rapidly and leave a few extra breadcrumbs in your logs, or withdraw completely.

Let’s talk about a few ways you can accomplish this.

Honeypots. I'm not talking about traditional external malware-catching honeypots that we've all set up and forgotten about. Production honeypots sit inside the network and are designed to mimic systems, processes, and data. Nobody should ever access these things, so any access constitutes an alert-worthy event. Beyond detection, internal honeypots can also serve to confuse attackers and waste their time. Security is an economic problem, and when defenders can increase the cost of an attack, this can serve to ward off opportunistic or lesser-resourced attackers.
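
To make the idea concrete, here's a minimal sketch of a production honeypot in Python: a fake service listening on an internal port that nothing legitimate should ever touch. The port and log destination are placeholder assumptions; in practice you'd mimic something believable on your network and ship the alerts to your SIEM.

```python
import logging
import socket

# Illustrative values; in practice, mimic a service that looks believable
# on your network and forward alerts to your SIEM instead of a local file.
LISTEN_PORT = 3306            # pose as an internal MySQL server
LOG_PATH = "honeypot.log"

logging.basicConfig(filename=LOG_PATH, level=logging.INFO,
                    format="%(asctime)s %(message)s")

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", LISTEN_PORT))
server.listen(5)

while True:
    conn, addr = server.accept()
    # Nobody has a legitimate reason to connect here, so every hit
    # is an alert-worthy event.
    logging.info("ALERT: honeypot connection from %s:%d", addr[0], addr[1])
    conn.close()
```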

Deception Traps. The use of deception tech (beyond honeypots) is on the rise, and I think we're well behind on leveraging these concepts. I'm not talking hacking back or security through obscurity — I'm talking passive engagement of attackers on your home court through automated traps. These are traps that, when interacted with, provide information that can confuse an attacker's understanding of the network itself. For example, IP space that responds to scans but houses no systems. Perhaps some of those systems respond differently to scans depending on the source or time of day. Another example might be web application directories that, when accessed, redirect the user to random pages or create endless redirect loops. One last example might be running processes on a system that appear to be named after multiple antivirus binaries; it would certainly be confusing to see multiple AV tools on a single system. One more — what if an attacker discovered user accounts and logs indicating they aren't the only attacker on the system? That might cause them to make a hasty retreat or try an aggressive pivot. These ideas aren't exclusively about detection (although they could be used in that way). They're about providing confusing information at inopportune times.
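
As a sketch of just one of these traps, the endless redirect loop takes only a few lines with Flask. The /backup decoy path is a hypothetical example; nothing legitimate should ever link to it.

```python
import random
import string

from flask import Flask, redirect

app = Flask(__name__)

# Hypothetical decoy directory. Only someone spidering or forcibly
# browsing the site should ever land here.
@app.route("/backup/<path:anything>")
def tarpit(anything):
    # Send the visitor to a new random "directory" every time, so a
    # scanner chases an endless chain of plausible-looking locations.
    token = "".join(random.choices(string.ascii_lowercase, k=8))
    return redirect(f"/backup/{token}")

if __name__ == "__main__":
    app.run(port=8080)
```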

Scheduled Shutdowns and Restricted Logon Hours. It's insanely easy to configure systems to shut down during off-hours and to limit certain user accounts to specific login hours. Yet, I never see anyone doing it. Sure, you have to account for users who might work late and keep systems up so that they receive important updates. However, enforcing these schedules will throw off an attacker who is poking around on your network. They will likely figure it out eventually, but that still leaves them a time window, and decisions become hastier when a deadline is approaching. There's no better way to force decisions than to put a time limit on them. As an added bonus, limiting login times can help workers maintain a healthier work/life balance, and shutting systems down lowers your electric bill and is good for the environment.
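
On Windows, both controls are one-liners with the built-in net user and schtasks tools. A rough sketch, assuming a hypothetical account named alice and a 10pm shutdown; both commands require admin rights:

```python
import subprocess

# Restrict the hypothetical 'alice' account to weekday business hours.
# Windows itself refuses logons outside this window.
subprocess.run(
    ["net", "user", "alice", "/times:M-F,08:00-18:00"],
    check=True,
)

# Schedule a forced shutdown every night at 22:00 via Task Scheduler.
subprocess.run(
    ["schtasks", "/create", "/tn", "NightlyShutdown",
     "/tr", "shutdown /s /f", "/sc", "daily", "/st", "22:00"],
    check=True,
)
```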

 

Conclusion

These strategies won't completely stop an attacker, but they have the potential to slow them down enough that you can detect them before they reach their goal and you're dealing with a larger breach. Furthermore, they increase the attacker's cost and effort, and this might just be enough to ward off opportunistic attackers or attacks based on automated processes.

While these techniques aren’t appropriate for every security program maturity level, they provide an opportunity for innovation in the open source and commercial product space.

The Warriors succeed because they tried something different with the talent they had. You can do the same, but like Steve Kerr and Steph Curry, you may need to think differently and apply a unique strategy. The fundamentals matter, but so does being different.

My challenge to you: Think of a way you can force an attacker into making a bad decision. Have some cool ideas? Post them in the comments below.

Gaining Technical Experience with Deliberate Practice

When I moved to Georgia I started riding my mountain bike several times a week. Almost every other day I’d pop out of the office before lunch and ride three one-mile circuits on a trail near my house.  Six months after starting, I realized I’d never looked at the data collected by the trip logger on my bike. I thought it’d be really interesting to see how my speed had improved as I biked more.

I was pretty disappointed when I saw the data.

My performance hadn’t increased a bit.

After months of biking the same three miles, I had no noticeable gains in my cycling ability. I wasn’t finishing the ride any faster, and I wasn’t any less tired. What gives?

 

Me failing at life (dramatization).

 

I got angry and decided to spend some time focusing on performance. Hell hath no fury like a geek with the ability to collect data and manipulate independent variables. I looked up YouTube videos about cycling posture and breathing techniques. I also started reviewing my trip log after every ride and setting goals. I wanted the average of my ride times to improve by at least a few seconds every week.

After sticking with this regimen for just another month, the results were in. I had improved my performance by nearly 30%.

Why was I able to accomplish so much in a month after not making any progress in six months?

The answer is that I stopped riding mindlessly and began deliberately practicing.

 

Practice vs. Performance

We mostly think of practice vs. performance in terms of athletes, so let’s stick with that example. In a given basketball game lasting an hour, an active player might get up a dozen two-pointers, a few three-pointers, and a couple of layups. They might make about forty passes and jump for twenty rebounds. They might also run three offensive and defensive schemes twenty times each.

Now let’s compare those stats with a week of hour-long daily practice sessions as I’ve done in the table below.

 

                      1 Game    1 Week of Practice
Two-pt shots          12        500
Three-pt shots        3         500
Layups                2         200
Passes                40        1000
Rebound attempts      20        200
Offensive schemes     3 x 20    5 x 100
Defensive schemes     3 x 20    5 x 100

 

Clearly, what happens in a game performance is a much smaller subset of what is practiced. The purpose of practice is to develop individual skills in preparation for a performance. Practice is a planned, mindful exercise. An entire practice might be devoted to a single skill like shooting, or mastering a skill within a specific scenario, like rebounding in a man-to-man defense.

A performance combines every facet of your practice. Performance is full of surprises and unpredictability. While practice is actively thoughtful, performance generally involves acting out of muscle memory and getting into a zone that many psychologists call "flow". Experts experience a much stronger state of flow because they've developed more muscle memory (both mental and physical).

 

The Secret of Practice

In the example I just described, you might notice that the amount of time allotted to practice is much greater than the time spent performing. Be careful though — the development of skill isn't purely a function of time. That's why my six months of riding trails showed no improvement in my cycling skill. It's also why you drive a car every day but are probably ill-equipped to steer a race car around Daytona at 200mph.

The secret of building expertise through practice isn’t that experts log more hours of practice. Experts log higher quality practice.

Whether you’re an athlete or an analyst, the characteristics of high-quality practice are the same.

Requirement 1: A clearly defined long-term goal

The goal of practice is to perform well. What does peak performance look like? In sports, this usually means putting up good stats or winning a game. In intellectual pursuits, it might mean arriving at an accurate answer or completing a task quickly. High-quality practice works toward long-term goals.

Requirement 2: An understanding of the component parts of that goal

Performance is made up of multiple skills used in a variety of scenarios. High-quality practice requires that you understand your long-term goal well enough to break it down into these component parts so you can focus on them individually.

Requirement 3: 100% concentration and effort

Performance is all about muscle memory and flow, but those things are established in practice. You practice a skill several times so that when you really need to use it, you can do so quickly. Performance isn't just about completing a task, it's about doing it efficiently and effortlessly. To do this, you must be mindful of the task you're performing and apply all your attention to it. This way, simpler tasks can become automatic and you can devote precious, limited working-memory resources to understanding other unknowns.

Requirement 4: Immediate and informative feedback

You practice so that you can get the mistakes out of your system. Of course, this requires that you’re able to spot the mistakes in the first place. You have to collect data and establish a feedback loop. Informative feedback is one reason coaching is so important. A coach’s primary role is to help spot the mistakes you’re making and equip you with the tools to overcome them.

Requirement 5: Repetition, reflection, and refinement

When you combine all the elements I’ve mentioned thus far, you get the blueprint for practice. High-quality practice means repeating skills, reflecting on how well you completed the skill, and refining your approach to the skill.  These things must all be deliberate. You have to practice with the goal of getting better. Just going through the motions won’t get you anywhere.

 

Out of Practice

Infosec practitioners stink at deliberate practice.

It’s something that most of us don’t spend any time thinking about. I ask every analyst I meet how they practice their craft. The answer I always get without fail is this:

 

Attacking a VM and/or reviewing the logs generated from the attacks can be an effective practice strategy, but most never follow through with it and even more don’t approach it the right way. If there were a graveyard for VMs built for this purpose but only used once, it’d be overflowing. I don’t want those VM’s to have died in vain, so I’m going to tell you how you can practice smarter.

 

Developing a Practice Plan

If you want to become an expert at anything and accelerate the accumulation of experience, you need to deliberately practice. You have to be strategic about how you focus your practice. I recommend creating a practice plan, which is built from a list of skills used during the performance of a job and the scenarios where you might encounter them.

In our basketball example, skills include shooting, passing, and rebounding. Scenarios include different schemes like man-to-man defense, dribble-drive offense, and inbounding plays. Each skill is exercised differently depending on the scenario in which it's needed.

The same thing applies to information security. Consider the work of a malware analyst. Three skills you’ll use during reverse engineering include:

  1. Simulating responsive services
  2. Identifying imported code libraries
  3. Understanding network communication sequences
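
As a quick illustration of the first skill, simulating responsive services, here's a minimal sketch of a fake DNS server that answers every lookup with an address you control, so a sample's domains resolve into your lab. The sandbox IP is an assumption, the parsing assumes a simple single-question query, and purpose-built tools like INetSim or FakeNet-NG do this far more robustly.

```python
import socket

SANDBOX_IP = "192.0.2.10"  # assumption: the lab host capturing the traffic

def build_response(query: bytes) -> bytes:
    # Echo the transaction ID, flag the packet as a standard response,
    # repeat the question, and append one A record for the sandbox.
    # Assumes a simple query with a single question and no EDNS records.
    header = query[:2] + b"\x81\x80" + b"\x00\x01\x00\x01\x00\x00\x00\x00"
    question = query[12:]
    answer = (b"\xc0\x0c"              # pointer back to the queried name
              + b"\x00\x01\x00\x01"    # type A, class IN
              + b"\x00\x00\x00\x3c"    # 60-second TTL
              + b"\x00\x04"            # 4-byte address follows
              + socket.inet_aton(SANDBOX_IP))
    return header + question + answer

# Binding port 53 requires root/admin inside the analysis VM.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 53))
while True:
    data, addr = sock.recvfrom(512)
    sock.sendto(build_response(data), addr)
```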

Those skills manifest differently depending on the situations you’ll encounter. Those situations can be defined a number of ways:

  1. Platform: Windows, Mac, or Linux
  2. Malware Function: Droppers, worms, exploits, RATs
  3. Malware Techniques: Registry persistence, VM detection, heap spraying, opening network listeners, keylogging

Now you have what you need to make a practice plan. You simply combine skills with the scenarios you’ll encounter them in. So, you might wind up practicing the following things:

  1. Simulating a DNS server to resolve a domain requested by a macOS dropper
  2. Identifying the code libraries imported by Windows malware to understand its purpose
  3. Understanding the network communication of a RAT to build IDS signatures
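
Enumerating a full plan like this is just the cross product of your skill and scenario lists, which you can script in a few throwaway lines (the lists here are abbreviated versions of the ones above):

```python
from itertools import product

skills = [
    "simulate responsive services",
    "identify imported code libraries",
    "map network communication sequences",
]

scenarios = [
    "a Windows dropper",
    "a Linux worm",
    "a RAT using registry persistence",
]

# Every skill/scenario pairing becomes a candidate practice session.
for i, (skill, scenario) in enumerate(product(skills, scenarios), start=1):
    print(f"{i}. Practice how to {skill} when analyzing {scenario}")
```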

If you practice these things, you’ll be able to do them effortlessly when you’re under the gun to analyze a real malware sample you’ve found on your network.

This example is focused on malware analysis, but it can easily be applied to red teaming, alert review, threat hunting, web application development, socket programming, or just about any technical skill you can think of. You just need to clearly identify the skills and situations associated with the job.

 

Challenge

Now that you know about deliberate practice, I’m challenging you to take action. I want you to examine a facet of your job you want to get better at and break it down like I just did. Figure out the skills you need to be good at the job, and the scenarios where you’ll use those skills. Make a list of both and reply to the comments on this post with your practice plan.

 

The Cult of Passion

Would you perform your job if you weren’t paid? That’s the question people are often asked to measure their passion for a profession. But, it’s not that simple.

Go one step further and really consider that statement. It’s not asking you if you would work for nothing, it’s asking you if you would pay to work in your job. By choosing to work without compensation you are incurring a direct cost for equipment, commuting expenses, education, and more. You’re also incurring an indirect expense because the time spent working prevents you from earning a salary elsewhere.

So, would you pay to work in information security? You probably wouldn’t. Does that mean you aren’t passionate about infosec? I would wager that most practitioners are not. You would pay to garden, play soccer, barbecue, or play the guitar…but you wouldn’t take a financial loss to install patches, look at packets, and change firewall rules.

But, if that's the case, then why does our industry seem to revolve around passion? Nearly every blog post you read about hiring or job-seeking discusses the importance of passion and often provides advice for how to demonstrate it. Some advice goes so far as to highlight passion as the most important characteristic you can exhibit. Infosec is described less as a job and more as a lifestyle. This sounds a lot less like job advice and more like recruitment for a cult.

In this post, I’m going to talk about passion, myths commonly associated with it, and how the cult of passion harms the practice of information security.

Passion as a Currency

Passion is commonly equated with extreme motivation surrounding a specific topic. In its simplest form passion manifests through hard work and time spent. These are both traits that are viewed admirably, especially in the US. Working from sunrise to sunset harkens back to memories of farmers earning an honest living while providing food for the masses, or to middle-class factory workers going the extra mile to provide for their families. These images are pervasive and are the backbone of society.

Of course, hard work isn’t truly a measure of passion. The farmer isn’t always passionate about farming. He’s passionate about providing a living for his family. The factory worker doesn’t love stamping car frames for 12 hours a day, but it enables the things he or she is truly passionate about.

In truth, passion isn't reliably measurable either, because it can only be measured relative to others. In infosec hiring, an interviewer may only see someone else as passionate if they appear to exhibit passion in the same way as them and to a greater degree. Jim speaks at 12 security conferences a year, contributes to 5 open source projects, and works 16 hours a day. These are things he finds value in and how he would quantify his own passion. He is interviewing Terry, who only speaks at one or two conferences a year, contributes to one open source project, and works about 10 hours a day. Jim is likely to see Terry as someone who isn't very passionate. However, this is a purely relative viewpoint. It also might not consider things that Terry does that Jim doesn't value as a form of passion, such as mentoring less experienced practitioners or doing tech-related community service.

When you attempt to evaluate people via traits that are difficult to objectively measure (like passion), you present an opportunity for undesirable results. This is something often seen with faith in religion. A false prophet promises to lead followers to the promised land if only they demonstrate appropriate faith. That faith might mean prayer, tithing 10% of your income, tithing 100% of your income, or violently killing people of an opposing faith. I highlight the wide range here because it shows the extremes that can arise when your currency isn't objectively measurable.

In information security, we use passion as an unquantifiable currency to measure someone's potential success in our field. A common piece of advice given to someone who wants to work in information security is that it isn't enough for infosec to simply be your job. If you want to be successful in infosec, the advice goes, it must be the thing that gets you up in the morning; there must be more to it than a paycheck.

Do You Really Mean Passion?

Psychologically, passion is either harmonious or obsessive. Vallerand describes this better than I can:

Harmonious passion originates from an autonomous internalization of the activity into one’s identity while obsessive passion emanates from a controlled internalization and comes to control the person. Through the experience of positive emotions during activity engagement that takes place on a regular and repeated basis, it is posited that harmonious passion contributes to sustained psychological well-being while preventing the experience of negative affect, psychological conflict, and ill-being. Obsessive passion is not expected to produce such positive effects and may even facilitate negative affect, conflict with other life activities, and psychological ill-being.

What do people mean when they talk about passion in infosec? Rarely is it ever defined through any other mechanism but example. If you ask most to describe someone who is passionate about information security they’ll say that these people spend copious amounts of time outside of work on infosec projects, contribute to open source, go to a lot of security conferences, are actively involved in the security community, or have a blog.

Assuming you’ve found someone who does all of those things, can you guarantee that means they are passionate about infosec? How would you be able to differentiate them from someone who is passionate about being successful, or making money, or being recognized for being an expert? Finally, how do you differentiate harmonious and obsessive passion? That is a very challenging proposition.

Passion is very difficult to attribute to a source. In fact, most people aren't good at identifying the things they are passionate about themselves. The vast majority of security practitioners are not passionate about information security itself. Instead, they're passionate about problem-solving, being an agent of justice, being intelligent, being seen as intelligent, solving mysteries, making a lot of money, or simply providing for their families.

In most cases, I don’t think the trait people are looking for is actually passion. Instead, they’re looking for curiosity. Curiosity has a motivational component and is often described simply as “the desire to know.” It is rooted in our ability to recognize a gap between our knowledge on a topic and the amount of available knowledge out there. When we recognize that gap, we make a subconscious gamble about the risk/reward of pursuing the knowledge and eventually decide to try and close the gap or not. This is called information gap theory, and through this theory we can gain a better understanding of trait and applied curiosity that can improve our ability to teach and hire people.

Diverging from Cult Mentality

Passion has its place. I know some people who truly are passionate about the practice of security, and they are among the top practitioners in our field. However, it is unwise to constantly compare yourself to these people. I offer the following:

For information security practitioners…

Hard work matters, but you can work hard and not allow this industry to pull you into the cult of passion. Choose where and how you spend your time so that your work enriches your personal life, and enjoy a personal life that enriches your work. If you fall victim to the thought that information security must be your life, you will eventually burn out. You will suffer, and if there is anybody left around you, they will suffer too.

Here are professions of people who work 8-10 hours a day and go home and don't think about work: doctors, lawyers, engineers, scientists, researchers. Do the top 5% of practitioners in these fields think about work all the time? Probably. But you also probably aren't one of those people. Not everyone is extraordinary, and that's okay. There is this myth that we all must be the best. As Ricky Bobby famously said, "If you ain't first, you're last!" But constantly trying to be the best breeds things like imposter syndrome, self-doubt, and depression. In an industry where so many have substance abuse problems and we've lost far too many friends, these are feelings we should actively avoid promoting.

For hiring managers…

It isn’t just limiting to only hire people who make infosec their life, it’s exclusionary. You’re missing out on people with diversity of interests that will enrich your security program. You’re also preventing people who have more important personal life issues from finding gainful employment.

To pursue the knowledge that exists in the curiosity information gap I discussed earlier, a person has to be aware the gap exists. Otherwise, they don't know what they don't know. This implies that a job candidate needs to know a little about a topic to be strongly motivated to pursue knowledge in it and sustain that pursuit. The last part is important. Sure, the journey of a thousand miles begins with a single step, but that first step is also usually the easiest. It's quite a few miles in where we normally lose people. This is one reason why the notion of trying to hire TRULY entry-level people based on passion in infosec is a fool's errand. Someone with no experience in this field doesn't have a proper footing to be passionate about it, and any passion they do show can't be trusted to be sustained. You're hiring based on a mirage.

A key to maintaining interest is a constant stream of novel information. For a novice, most things within a field are novel, and the key to building passion is exploration. To transition to expertise, an individual must find novelty in the nuance of specific topics. Someone who enjoys nuance is best set up to become an expert. Most people will never truly be world-class experts in something, but again, that's perfectly fine.

For job seekers…

Much to my dismay, most people will never read this article, truly understand passion, and cultivate an ability to notice genuine curiosity. That means you have to play the game that is hiring. People will keep asking about passion, but you can reframe the question on your own terms. Tell them you see passion as a term used to describe curiosity and motivation. Try to identify what really motivates you and how your curiosity pushes you toward goals. Relate to people at a personal, human level. A lot of candidates talk about how they eat/breathe/sleep infosec. You don't have to do that. Instead, talk about how you critically think about important problems and optimize your time so that you don't have to work 16-hour days to be successful. Hard work is important, but working smart is much more important, and is actually sustainable.

Conclusion

In my pursuit to understand passion, I've learned that it's a highly contentious topic. People hate to have their passion questioned, and I'm sure this article will stoke that fire. I wonder why that is? I would wager that many quantify their own ability, and maybe even their own self-worth, through a subjective self-evaluation of their own passion. Once again, passion is a good thing, and measuring yourself based on some degree of it is probably fine. It's when we choose to measure others based on our subjective views of their passion that we get into trouble and create cult-like scenarios. We can do better.

My goal with this article was to share my understanding of passion, how it's often misinterpreted, and how that can negatively affect our industry. One of the most liberating moments of my life was when I figured out that I wasn't passionate about information security itself; infosec was just the thing that allowed me to achieve other things I was passionate about. If others can relate, then I hope they can feel the same liberation someday through a better understanding of passion. If you are truly passionate about infosec itself, then that's great too. We need you!

 


Know Your Bias – Availability Heuristic

This is part three in the Know your Bias series where I examine a specific type of bias, how it manifests in a non-technical example, and provide real-world examples where I’ve seen this bias negatively affect a security practitioner. You can view part one here, and part two here. In this post, I’ll discuss the availability heuristic.

The availability heuristic is a mental shortcut that relies on recalling the most recent or prevalent example that comes to mind when evaluating data to make a decision.

For the security practitioner, this type of bias is primarily an attack on your time more so than your accuracy. Let's go through a few examples both inside and outside of security before discussing ways to mitigate the negative effects the availability heuristic can have.

Availability Heuristic Outside of Security

Are you more likely to be killed working as a police officer or as a fisherman? Most people select police officer. However, statistics show that you are as much as 10x more likely to meet your end while working on a fishing boat [1]. People get this wrong because of the availability heuristic. Whenever a police officer is killed in the line of duty, it is often a major news event. Police officers are often killed in the pursuit of criminals and this is typically viewed as a heroic act, which means it becomes a human interest story and news outlets are more likely to cover it.

Try this yourself. Go to Google News and search for "officer killed". You will almost certainly find multiple recent stories and multiple outlets covering the same story. Next, search for "fisherman killed", and you'll find far fewer results. When there are results, they are typically covered only by outlets local to where the death happened and not picked up nationally. The news disproportionately covers the deaths of police officers over fishermen. To be clear, I'm not questioning that practice at all. However, it does explain why most people tend to think police work is more deadly than fishing. We are more likely to trust the information we can recall most quickly, and by virtue of seeing more news stories about police deaths, the availability heuristic tricks us into thinking that police work is more deadly. I'd hypothesize that if we posed the same question to regular viewers of the Discovery Channel show "The Deadliest Catch", they might recognize the danger associated with commercial fishing and select the correct answer.

One thing we know about human memory and recall is that it is efficient. We often go with the first piece of information that can be recalled from our memory. Not only does the availability of information drive our thoughts, it also shapes our behavior. It's why advertisers spend so much money to ensure that their product is the first thing we associate with specific inputs. When many Americans think of cheeseburgers, they think of McDonald's. When you think of coffee, you think of Starbucks. When you think of APT, you think of Mandiant. These aren't accidental associations — a lot of money has been spent to ensure those bonds were formed.

Availability Heuristic in Security

Availability is all about the things you observe the most and the things you observe most recently. Consider these scenarios that highlight examples of how availability can affect decisions in security practice.

Returning from a Security Conference

I recently attended a security conference where multiple presenters showed examples that included *.top domains that were involved with malicious activity. These sites were often hosting malware or being used to facilitate command and control channels with infected machines. One presenter even said that any time he saw a *.top domain, he assumed it was probably malicious.

I spoke with a colleague who had really latched on to that example. He started treating every *.top domain he found as inherently malicious and driving his investigations with that in mind. He even spent time actively searching out *.top domains as a function of threat hunting to proactively find evil. How do you think that turned out for him? Sure, he did find some evil. However, he also found out that the majority of *.top domains he encountered on his network were actually legitimate. It took him several weeks to realize that he had fallen victim to the availability heuristic. He put too much stock in the information he had received because of the recency and frequency of it. It wasn’t until he had gathered a lot of data that he was able to recognize that the assumption he was making wasn’t entirely correct. It wasn’t something that warranted this much of his time.

In another recent example, I saw a colleague purchase a lot of suspected APT-owned domains with the expectation that sinkholing them would result in capturing a lot of interesting C2 traffic. He had seen someone speak on this topic and assumed his success rate would be much higher than it turned out to be, because the speaker hadn't covered that part in depth. My colleague had to purchase a LOT of domain names before he got any interesting data, and by that point, he had pretty much decided to give up after spending both a lot of time and money on the task.

It is very hard for someone giving a 30-minute talk to fully support every claim they make. It also isn’t easy to stop and cite additional sources in the middle of a verbal presentation. Because our industry isn’t strict about providing papers to support talks, we end up with a lot of opinions and not much fact. Those opinions get wheels and they may be taken much farther than the original presenter ever intended. This tricks people who are less metacognitively aware into accepting opinions as fact. 

Data Source Context Availability

If you work in a SOC, you have access to a variety of data sources. Some of those are much lower context like flow data or DNS logs, and some are much higher context like PCAP data or memory. In my research, I’ve found that analysts are much more likely to pursue high-context data sources when working an investigation, even when lower context data sources contain the information they need to answer their questions.

On one hand, you might say this doesn't matter: if you are arriving at the correct answer, why does it matter how you got there? Analytically speaking, we know that the path you take to an answer matters. It isn't important just to be accurate in an investigation, you also need to be expedient. Security is an economic problem wherein the cost to defend a network needs to be low and the cost to attack it needs to be high. I've seen that analysts who start with higher-context data sources when it isn't entirely necessary often spend much more time in an investigation, because those sources introduce more opportunities for distraction. The more opportunity for distracting information, the more opportunity for availability bias to creep in as a result of new information being given too much priority in your decision making. That isn't to say that all new information should be pushed aside, but you have to carefully control what information you allow to hold your attention.

Structured Adversary Targeting

In the past five years, the security industry has become increasingly dominated by fear-based marketing. A few years ago it was the notion that sophisticated nation-state adversaries were going to compromise your network no matter who you were. These stories made national news and most security vendors began to shift their marketing towards guaranteeing protection against these threats.

The simple truth is that most businesses are unlikely to be targeted by nation-state level threat actors. But, because the news and vendor marketing have made this idea so prevalent, the availability of it has led an overwhelming number of people to believe that it could happen to them. When I go into small businesses to talk about security, I generally want to talk about things like opportunistic attacks, drive-by malware, and ransomware. These are the things small businesses are most likely to be impacted by. However, most of these conversations now involve a discussion about structured threat actors because of the availability of that information. I don't want to talk about these things, but people ask about them. While this helps vendors sell products, it takes some organizations' eyes off the things they should really be concerned about. I'm certain Billy Ray's Bait Shop will never get attacked by the Chinese PLA, but a ransomware infection has the ability to destroy the entire business. In this example, the abundance of information associated with structured threat actors clouds perspective and takes time away from more important discussions.

Diminishing Availability Heuristic

The stories above illustrate common places availability heuristic manifests in security. Above all else, the availability of information is most impactful to you in how you spend your time and where you focus your attention. Attention is a limited resource, as we can only focus on one or two things at a time. Consider where you place your attention and what is causing you to place it there.

Over the course of the next week, start thinking about the places you focus your attention and actively question why information led you to do that. Is that information based on fact or opinion? Are you putting too much or too little time into your effort? Is your decision making slanted in the wrong direction?

Here are a few ways you can recognize when availability heuristic might be affecting you or your peers and strategies for diminishing its effects:

Carefully consider the difference between fact and opinion. In security, most of the publicly available information you'll find is a mix of opinions and facts, and the distinction isn't always clear. Whenever you make a judgment or decision based on something you encountered elsewhere, spend a few minutes considering the source of the information and doing some manual research to see if you can validate it elsewhere.

Use patience as a shield. Since your attention is a limited resource, you should protect it accordingly. Just because new information has been introduced doesn't mean it is worthy of shifting your attention to it. Pump the brakes on making quick decisions. Take a walk or sleep on new information before acting to see if something still matters as much tomorrow as it does today. Patience is a valuable tool in the fight to diminish the effects of many biases.

Practice question-driven investigating. A good investigator is able to clearly articulate the questions they are trying to answer, and only seeks out data that will provide those answers. If you go randomly searching through packet capture data, you’re going to see things that will distract you. By only seeking answers to questions you can articulate clearly, you’ll diminish the opportunity for availability bias to distract your attention.

Utilize a peer group for validation. By definition, we aren't good at recognizing our own biases. When you are pursuing a new lead or deciding whether to focus your attention on a new task or goal, consider bouncing that idea off of a peer. They are likely to have had different experiences than you, so their decision making could be less clouded by the recency or availability of the information that is affecting you. A question to that group can be as simple as "Is ____ as big of a concern as I think it is?"

If you’re interested in learning more about how to help diminish the effects of bias in an investigation, take a look at my Investigation Theory course where I’ve dedicated an entire module to it. This class is only taught periodically, and registration is limited.

[1] http://www.huffingtonpost.com/blake-fleetwood/how-dangerous-is-police-w_b_6373798.html

Know Your Bias – Anchoring

In the first part of this series I told a personal story to illustrate the components and effects of bias in a non-technical setting. In each post that follows, I'll examine a specific type of bias, how it manifests in a non-technical example, and provide real-world examples where I've seen this bias negatively affect a security practitioner. In this post, I'll discuss anchoring.

Anchoring occurs when a person tends to rely too heavily on a single piece of information when making decisions, most often based on information received early in the decision-making process.

Anchoring Outside of Security

Think about the average price of a Ford car. Is it higher or lower than $80,000? That number is clearly too high, so you'd say lower. Now let's flip this a bit. I want you to think about the average price of a Ford car again. Is it higher or lower than $10,000? Once again, the obvious answer is that it's higher.

These questions may seem obvious and innocent, but here's the thing. Let's say that I ask one group of people the first question, and a separate group of people the second question. After that, I ask both groups to name what they think the average price of a Ford car is. The result is that the first group, presented with the $80,000 figure, will pick a price much higher than the second group presented with the $10,000 figure. This has been tested in multiple studies with several variants of this scenario [1].

In this scenario, people subconsciously fixate on the number they are presented with, and it subtly influences their short-term mindset. You might be able to think of a few cases in sales where this is used to influence consumers. Sticking with our car theme, if you've been on a car lot you know that every car has a sticker price that is typically higher than what you pay. By pricing cars higher initially, consumers anchor to that price. Therefore, when you negotiate a couple thousand dollars off, it feels like you're getting a great deal! In truth, you're paying what the dealership expected; you just perceive a deal because of the anchoring effect.

The key takeaway from these examples is that anchoring to a specific piece of information is not inherently bad. However, giving too much weight to an anchored data point when making judgments can negatively affect your realistic perception of a scenario. In the first example, this led you to believe the average price of a car is higher or lower than it really is. In the second example, this led you to believe you were getting a better deal than you truly were.

 

Anchoring in Security

Anchoring happens because mindsets are quick to form but resistant to change. We quickly process data to make an initial assessment, and our ability to hone that assessment is generally weighed in relation to that initial assessment. Can you think of points in a security practitioner's day where an initial perception of a problem scenario forms? This is where we can find opportunities for anchoring to occur.

List of Alerts

Let’s consider a scenario where you have a large number of alerts in a queue that you have to work through. This is a common scenario for many analysts, and if you work in a SOC that isn’t open 24×7 then you probably walk in each morning to something similar. Consider this list of the top 5 alerts in a SOC over a twelve hour period:

  • 41 ET CURRENT_EVENTS Neutrino Exploit Kit Redirector To Landing Page
  • 14  ET CURRENT_EVENTS Evil Redirector Leading to EK Apr 27 2016
  • 9 ET TROJAN Generic gate[.].php GET with minimal headers
  • 2 ET TROJAN Generic -POST To gate.php w/Extended ASCII Characters (Likely Zeus Derivative)
  • 2 ET INFO SUSPICIOUS Zeus Java request to UNI.ME Domain

Which alert should be examined first? I polled this question and found that a significant number of inexperienced analysts chose the one at the top of the list. When asked why, most cited the frequency alone. By making this choice, the analyst assumes that each of these alerts is weighted equally, so the rule that fires the most represents the greatest risk. Is this a good judgement?

In reality, the assumption that each rule should be weighted the same is unfounded. There are a couple of ways to evaluate this list.

Using a threat-centric approach, not only does each rule represent a unique threat that should be considered uniquely, some of these alerts gain more context in the presence of others. For example, the two unique Zeus signatures alerting together could carry greater significance. Likewise, the Neutrino alert would take on greater significance if it were paired with an alert representing the download of an exploit or communication with another Neutrino EK related page. Merely hitting a redirect to a landing page doesn't indicate a successful infection, and is a fairly common event.

You could also evaluate this list with a risk-centric approach, but more information is required. Primarily, you would be concerned with the affected hosts for each alert. If you know where your sensitive devices are on the network, you would evaluate the alerts based on which ones are more likely to impact business operations.
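
To see how much the ordering changes once you stop weighting every alert equally, here's a toy sketch of risk-centric scoring for the list above. The severity and asset-criticality values are invented purely for illustration:

```python
# Toy risk-centric triage: rank by rule severity weighted by the
# criticality of the affected asset, not by raw alert count.
# All severity/criticality numbers below are invented for illustration.
alerts = [
    {"rule": "Neutrino EK Redirector To Landing Page", "count": 41, "severity": 2, "criticality": 1},
    {"rule": "Evil Redirector Leading to EK",          "count": 14, "severity": 2, "criticality": 1},
    {"rule": "Generic gate.php GET",                   "count": 9,  "severity": 3, "criticality": 2},
    {"rule": "POST To gate.php (Likely Zeus)",         "count": 2,  "severity": 4, "criticality": 3},
    {"rule": "Zeus Java request to UNI.ME Domain",     "count": 2,  "severity": 4, "criticality": 3},
]

for alert in sorted(alerts, key=lambda a: a["severity"] * a["criticality"], reverse=True):
    score = alert["severity"] * alert["criticality"]
    print(f"score={score:2d} count={alert['count']:2d} {alert['rule']}")
```

Under this weighting, the two low-count Zeus alerts jump to the top of the queue while the noisy redirector drops to the bottom.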

This example illustrates how casual decisions can come with implied assumptions. Those assumptions can be unintentional, but they can still lead you down the wrong path. In this case, the analyst might spend a lot of time pursuing alerts that aren't very sensitive while delaying the investigation of something that represents a greater risk to the business. This happens because it is easy to look at a statistic like this and anchor to a single facet of it without fully considering the implied assumptions. Statistics are useful for summarizing data, but they can hide important context, and without that context you risk making uninformed decisions driven by anchoring bias.

 

Visualizations without Appropriate Context

As another example, understand that numbers, and visual representations of them, have a strong ability to influence an investigation. Consider a chart like the treemap described below.

This is a treemap visualization used to show the relative volume of communication based on network ports for a single system. The larger the block, the more communication occurred to that port. Looking at this chart, what is the role of the system whose network communication is represented here? Many analysts I polled decided it was a web server because of the large amount of port 443 and 80 traffic. These ports are commonly used by web servers to receive requests.

This is where we enter the danger zone. An analyst isn’t making a mistake by looking at this and considering that the represented host might be a web server. The mistake occurs when the analyst fully accepts this system as a web server and proceeds in an investigation under that assumption. Given this information alone, do you know for sure this is a web server? Absolutely not.

First, I never specified whether this treemap exclusively represents inbound traffic, and it’s every bit as likely that it represents outbound communication that could just be normal web browsing. Beyond that, this chart only represents a finite time period and might not truly represent reality. Lastly, just because a system is receiving web requests doesn’t necessarily mean its primary role is that of a web server. It might simply have a web interface for managing some other service that is its primary role.

The only way to truly ascertain whether this system is a web server is to probe it to see if there is a listening service on a web server port or to retrieve a list of processes to see if a web server application is running.
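
Here's a quick sketch of that first check: probe the common web ports and see whether anything is actually listening. The target address is hypothetical, and a process listing pulled from the host itself would be even stronger evidence:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

host = "10.0.0.5"  # hypothetical address of the system from the treemap
for port in (80, 443, 8080, 8443):
    state = "listening" if port_open(host, port) else "closed/filtered"
    print(f"{host}:{port} {state}")
```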

There is nothing wrong with using charts like this to help characterize hosts in an investigation.  This treemap isn’t an inherently bad visualization and it can be quite useful in the right context. However, it can lead to investigations that are influenced by unnecessarily anchored data points. Once again, we have an input that leads to an assumption.  This is where it’s important to verify assumptions when possible, and at minimum identify your assumptions in your reporting. If the investigation you are working on does end up utilizing a finding based on this chart and the assumption that it represents a web server, call that out specifically so that it can be weighed appropriately.

 

Diminishing Anchoring Bias

The stories above illustrate common places anchoring can enter the equation during your investigations. Throughout the next week, try to look for places in your daily routine where you form an initial perception and there is an opportunity for anchoring bias to creep in. I think you’ll be surprised at how many you come across.

Here are a few ways you can recognize when anchoring bias might be affecting you or your peers and strategies for diminishing its effects:

Consider what data represents, and not just its face value. Most data in security investigations represents something else that it is abstracted from. An IP address is abstracted from a physical host, a username is abstracted from a physical user, a file hash is abstracted from an actual file, and so on.

Limit the value of summary data. A summary is meant to be just that: the critical information you need to quickly triage data to determine its priority or make a quick (but accurate) judgement of the underlying event's disposition. If you carry forward input from summary data into a deeper investigation, make sure you fully identify and verify your assumptions.

Don’t let your first impression be your only impression. Rarely is the initial insertion point into an investigation the most important evidence you’ll collect. Allow the strength of conclusions to be based on your evidence collected throughout, not just what you gathered at the onset. This is a hard thing to overcome, as your mind wants to anchor to your first impression, but you have to try and overcome that and try to examine cases holistically.

An alert is not an answer, it’s merely a question. Your job is to prove or disprove the alert, and until you’ve done one of those things the alert is not representative of a certainty. Start looking at alerts as the impetus for asking questions that will drive your investigation.

If you’re interested in learning more about how to help diminish the effects of bias in an investigation, take a look at my Investigation Theory course where I’ve dedicated an entire module to it. This class is only taught periodically, and registration is limited.

[1] Strack, F., & Mussweiler, T. (1997). Explaining the enigmatic anchoring effect: Mechanisms of selective accessibility. Journal of Personality and Social Psychology, 73(3), 437.