Category Archives: Analysis

The Role of Curiosity in Security Investigations

I’ve written a lot about the metacognitive gap in digital forensics and incident response. As an industry, we aren’t very effective at identifying the skills that make our expert practitioners so good at what they do, and we are even worse at teaching those skills. While there are a myriad of skills that make someone good at finding evil and eradicating adversaries, what’s the most important? What is the “X Factor” that makes an investigator great?

That’s a highly subjective question and everyone has an opinion on it biased towards his or her own experience. Regardless, I recently posed this question to some of my coworkers and Twitter followers. The most common answer I received was centered on curiosity.

Based on these results, I conducted a semi-formal survey where I asked 12 experienced analysts to rate the importance of possessing a highly curious mind while attempting to solve an investigation.

In the first survey item, I asked respondents to address the statement “A curious mind is important for an investigator to arrive at a state of resolution in an investigation with accurate and thorough results.”

All 12 investigators responded Strongly Agree using a 5-point Likert scale.

In a second question, I asked respondents to address the statement “A curious mind is important for an investigator to arrive at a state of resolution in an investigation in a timely manner.”

Using the same rating scale, 10 investigators responded Strongly Agree and 2 responded Agree.

Finally, I asked respondents to address the statement “A curious mind is a primary factor in determining whether an investigator will be successful in resolving/remediating an investigation.”

Using the same rating scale, all 12 analysts responded Strongly Agree.

Clearly, expert practitioners believe that a curious mind is important to the accuracy, thoroughness, and speed with which an investigation is conducted. While curiosity isn’t the only thing that makes an investigator successful in their craft, it certainly warrants attention as a key player. In this post I will talk about curiosity as a trait, how it manifests in the investigative process, how it’s measured, and whether it’s a teachable skill.

What is Curiosity?

There are many definitions of curiosity scattered across psychology research text, but the one I think most accurately depicts the construct from an applied perspective comes from Litman and Spielberger (2003). They state that curiosity can be broadly defined as a desire to acquire new information, knowledge, and sensory experience that motivates exploratory behavior.

Loewenstein (1994) also provides relevant insight by defining curiosity as “the desire to know.” In this sense, he describes that a desire to know more can arise when a person encounters stimuli that are inconsistent with an idea he or she holds. When this is experienced, the person may feel some kind of deprivation that can only be alleviated by resolving this inconsistency and closing the knowledge gap that has been identified. This jibes well with the thoughts of other great psychology thinkers like Kant and Freud.

Curiosity takes form early in life when infants start exploring the world around them to test the limitations of their own body. Many developmental psychologists agree that this curiosity and simple but constant experimentation is the foundation of early learning and normal development. As we grow older, our curiosity continues to spark experimentation.

While curiosity has been considered a research-worthy construct from a theoretical perspective, there has been little effort put into pinning down the neural substrates that underlie it. This is unfortunate, but something to look forward to as neuroscience and brain imaging techniques continue to advance rapidly.

As it relates to computer security investigations, curiosity manifests practically in a number of ways that most of us can easily recognize. A few of those include the following:

 

Dead End Scenarios

The most dreaded scenario in an investigation occurs when an investigator reaches a point where there are still unanswered questions, but there are no leads left to pursue. This is common, especially since things like data retention and availability often limit us. In these scenarios a required data source might not be available, a lead from other evidence could run dry, or the data might not point to an obvious next step.

A limited amount of curiosity correlates with an increased number of dead end scenarios encountered by an investigator. Without adequate motivation to explore additional avenues for answering questions, the investigator might miss logical paths to the answers they are seeking. They may also fail to ask the appropriate questions.

 

Hypothesis Generation

The investigative process provides many opportunities for open-ended questions, such as “What is responsible for the network traffic?” or “Why would this internal host talk to that external host?” The process of reasoning through these questions is usually characterized initially by divergent thinking to generate ideas to be explored en route to a possible solution. This manifests as an internal dialog when conducted by a single analyst, but can be expressed verbally when a group is involved.

When presented with an open-ended question, curiosity is responsible for motivating the internal evaluation of hypothetical situations. Without curiosity, an individual won’t conduct mind-wandering exercises and may only consider a small number of potential hypotheses when there is potential for many other valid ones. In this scenario an investigator might not be pursuing the correct answers because they haven’t considered all of the potential questions that should be asked.

Note: It’s probably worth noting here that creativity plays a role in this process too, and is linked to curiosity depending on which model you subscribe to. That, however, is a little beyond the scope of what I want to talk about here.

 

Data Manipulation

Looking at the same data in different ways can yield interesting results. This can include using sed, grep, and awk to pull specific columns out of a data stream for comparison, using uniq and sort to aggregate field values, or reducing PCAP data into flows for comparison of multiple data streams.

While having the skills to manipulate data is a separate discussion, the desire to find out whether manipulating existing data into a new format will yield useful results is a product of curiosity. Investigators who lack the curiosity to find out if such an exercise would be fruitful end up in more dead end scenarios and may take longer routes towards resolving investigations.
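To make that concrete, here is a minimal sketch of the kind of quick manipulation I’m describing. The file names and column positions are assumptions for illustration, not prescriptions:

    # Aggregate responding hosts from a tab-separated Bro conn.log;
    # column 5 (id.resp_h) assumes the default field layout.
    awk -F'\t' '$1 !~ /^#/ {print $5}' conn.log | sort | uniq -c | sort -rn | head

    # Reduce a PCAP into flow-like TCP conversation summaries for comparison.
    tshark -r capture.pcap -q -z conv,tcp

The specific commands matter less than the habit: a curious analyst keeps asking “what would this data look like aggregated differently?” and has a handful of one-liners ready to answer it.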

 

Pivoting to Tangential Evidence

The goal of collecting and reviewing evidence is to yield answers relevant to the question(s) an investigator has asked. However, it’s common for the review of evidence to introduce tangential questions or spawn completely new investigations. Within an investigation, you might review network connections between a friendly internal host and a potentially hostile external host only to find that other friendly devices have communicated with the hostile device and warrant examination. In another example, while examining web server logs for exploitation of a specific vulnerability, you might find unrelated evidence of successful SQL injection that warrants a completely separate investigation.

Curiosity is a key determinant in whether an investigator chooses to pivot to these tangential data points and pursue their investigation. Without the motivation that curiosity provides, an investigator may neglect to provide more than a cursory glance to these data points, or fail to note them down for later review. This can result in missed intrusions or improper scoping.

Relating Curiosity and Experience

Our cognitive processes don’t operate in a vacuum. Any decision we make is influenced by a symphony of different traits and emotions working in concert. Some work in perfect harmony while others operate as opposing forces, and curiosity is not exempt. When we talk about curiosity’s role in an investigation, we also have to talk about experience.

Earlier, I mentioned that curiosity is pivotal to human development, and that our experimentation at an early age is motivated by curiosity to learn more about ourselves and the world around us. This isn’t a characteristic that goes away with time; we just become more aware of it. As we get older, we gain more experience and become more selective of what experiments we conduct. This manifests in many areas of our lives and in everyday decisions. For example, a person who has never slept in on a Tuesday might hit the snooze button a few times because curiosity motivates them to explore the benefits and/or consequences of that action.

Experience serves as both a motivating and regulating force for curiosity. In an investigation, I believe this is best illustrated by assessing curiosity and experience as they relate to each other. Consider the following scenarios where we assess the level of curiosity (C) and experience (E) possessed by an individual investigator.

High C / Low E

With a lot of curiosity but little experience, an investigator is jumpy. This person’s curiosity drives them to dig into everything that seems new, and without experience to regulate it, this person ends up chasing a lot of ghosts. They will encounter dead end scenarios frequently because they will choose to pursue inconsequential leads within the evidence they are reviewing. They will rarely admit to encountering a dead-end scenario because their lack of experience doesn’t permit them to realize they’ve encountered one. This person will generate many ideas when hypothesis generation is required, but many of those ideas will be unrealistic because of a lack of experience to weed out the less useful ones. They will seek alternate views of data constantly, but will spend a considerable amount of time pursuing alternate views that don’t necessarily help them. Instead of learning to use tools that get them close to the views they want, they’ll spend time attempting to do more manual work to get the data precisely how they desire even if going that extra 20% doesn’t provide a discernible benefit to their investigation. Even though this person will spend a lot of time failing, they will fail fast and gain experience quickly.

Low C / High E

An investigator possessing a lot of experience but little curiosity could be described as apathetic. This doesn’t necessarily mean they aren’t effective at all, but it does make them less likely to investigate tangential leads that might be indicative of a larger compromise scope or a secondary compromise. In many cases, a person in this state may have started with a high degree of curiosity, but it may have waned over time as their experience increased. This can result in the investigator using their experience as a crutch to make up for their lack of curiosity. They won’t encounter too many dead end scenarios because of this, but may be more prone to them in new and unfamiliar situations. This person will manipulate data, but will rely on preexisting tools and scripts to do so when possible. They will carefully evaluate the time/reward benefit of their actions and will trust their gut instinct more than anything else. This person’s success in resolving investigations will be defined by the nature of their experience, because they will be significantly less successful in scenarios that don’t relate to that experience. These individuals won’t be as highly motivated in terms of out-of-the-box thinking and may be limited in hypothesis generation.

High C / High E

Because this person has a high level of curiosity they will be more motivated to investigate tangential leads. Because they also possess a high level of experience, they will be more efficient in choosing which leads they follow because they will have a wealth of knowledge to reflect upon. When encountering a dead-end scenario, this person should be able to move past it quickly, or if they claim they’ve hit a true dead end, it’s more likely to be an accurate representation of the truth. This person will excel in hypothesis generation and will provide valuable input to less experienced investigators relating to how their time could be best spent. They will seek to perform data manipulation when possible, but will be adept at realizing when to use already available tools and when to create their own. They will realize when they’ve found a data manipulation solution that is good enough, and won’t let perfect be the enemy of good enough. This presents an ideal scenario where the investigator is highly capable of resolving an investigation and doing so in a timely manner. These individuals are ideal candidates for being senior leaders, because they can often effectively guide less experienced investigators regarding what leads are worth pursuing and what the right questions to ask are. This person is always learning and growing, and may have several side projects designed to make your organization better.

Low C / Low E

This presents an undesirable scenario. Not only does this person lack the experience to know what they are looking at, they don’t have enough curiosity to motivate them to perform the research and experimentation needed to learn more. This will handicap their professional growth and leave them outpaced by peers with a similar amount of experience.

 

If you are an investigator or have spent time around a lot of them then the descriptions you read in each of these scenarios might remind you of someone you know, or even yourself at different points in your career. It’s important to also consider progression, because the level of curiosity and experience of a person changes throughout their career. In these scenarios, a person always starts with no experience but their level of curiosity may affect how quickly that experience is gained.

 

High Curiosity – Sustained


In this ideal scenario, an investigator learns very quickly, and the rate at which they learn also grows. As they realize there is more to learn, they begin to consume more information in more efficient ways.

 

High Curiosity – Waning


While many start very curious, some experience a waning level of curiosity as their experience grows. When this happens, these investigators will rely more on their experience and their rate of learning will slow.

 

Low Curiosity – Sustained


An investigator with a sustained level of low curiosity will continually learn, but at a very slow rate throughout their career. Peers with a similar number of years’ experience will quickly outpace them.

 

Low Curiosity – Growing


If an investigator is able to develop an increased level of curiosity over time, their rate of learning will increase. This can result in dramatic mid to late career growth.

 

Each of these scenarios represents a bit of an extreme case. In truth, the progression of an investigator’s career is affected by many other factors, and curiosity can often take a back seat to other prevailing forces. Most of us who have served in an investigative capacity also know that curiosity often comes in peaks and valleys as new ideas or technologies are discovered. For instance, new tools like Bro have sparked renewed interest for many in the field of network forensics, while the maturity of memory analysis tools like Volatility has sparked curiosity for many in host-based forensics. A new job or changes in someone’s personal life can also positively or negatively affect curiosity.

Recognizing and Assessing Curiosity

We’ve established that curiosity is a desirable trait, and we’ve reviewed examples of what an investigator possessing varying degrees of curiosity and experience might look like. It’s only logical to consider whether curiosity is a testable characteristic. Several researchers have tackled this problem, and as a result there are different tests that can be used to measure varying degrees of this construct.

Available tests include, but are not limited to, the State-Trait Personality Inventory (Spielberger et al., 1980), the Academic Curiosity Scale (Vidler & Rawan, 1974), and the Melbourne Curiosity Inventory (Naylor, 1981). All of these tests are simple self-reported pencil and paper inventories designed to ask somewhat abstract questions in order to assess different facets of curiosity. Some use Likert scales to evaluate how well statements describe the respondent, whereas others use agreement/disagreement choices in response to whether specific activities sound interesting. These tests all use different models for curiosity, spanning three, four, and five-factor models. They also all require someone with an understanding of administering personality tests to deliver them and interpret the results.

A paper published by Reio et al. (2006) completed a factor analysis of eleven different tests designed to measure facets of curiosity. Their findings confirmed research by other psychologists that supports a three-factor model for curiosity delineated by cognitive, physical thrill seeking, and social thrill seeking components. Of course, the first of those is the most interesting for our pursuits.

Psychometrics and personality testing is a unique field of research. While many tests exist that can measure curiosity to some degree, their delivery, administration, and interpretation aren’t entirely feasible for those outside of the field. Simply choosing which test to administer requires a detailed understanding of test reliability and validity beyond what would be expected in a normal SOC. Of course, there is room for more investigation and research here that might yield simplified versions of these personality inventories that are approachable by technical leaders. This is yet another gap that can be bridged where psychology and information security intersect.

Teaching and Promoting Curiosity

Many believe that there is an aspect of curiosity that is a product of nature, and one that is a product of nurture. That is to say, some people are born innately with a higher level of curiosity than others. The nature/nurture debate is one of the most prolific arguments in human history, and it goes well beyond the scope of this article. However, I think we can stipulate that all humans are born with an innate ability to be curious.

If we know curiosity is important, that humans are born with a capacity for it, and we have models that can assess it, the practical question is whether we can teach it. As the field of cognitive psychology has grown, academics have sought to increase the practical application of research in this manner, incorporating the last hundred years of research on reasoning, memory, learning, and other relevant topics.

Nichols (1963) provides useful insight about scenarios that can inhibit and foster curiosity. He identifies three themes.

 

Theme 1: Temperance

A state of temperance is a state of moderation or restraint. While we usually think that it’s in our best interest to absorb all the information we can in an investigation, this can actually serve to limit curiosity. In short, a hungry man is much more curious than a well-fed one.

I think Nichols says it best: “Intemperance in a work situation is largely a condition we bring upon ourselves by limiting our mental exercise to a narrow span of interest. This is most commonly manifested in an over-attention paid to the details of what we are doing. Once our mind becomes satiated by an abundance of minor facts, we cannot, merely by definition, provide it with new and fresh ideas that will allow us to expand our intellectual perception. Our capacity to do so is restricted by our inability to cram more into a mind that is already overburdened by minutiae. Instead, if we recognize that our responsibility is to focus on the vital few points rather than the trivial many, we will have released ourselves so that we may—as the juggler does—examine these areas from several vantage points and mentally manipulate them in a way that will be both more productive and give greater self-satisfaction” (Nichols, 1963, p. 4).

 

Theme 2: Success and Failure

We know from basic principles of conditioning that humans will use avoidance techniques to prevent experiencing a stimulus that is perceived as negative. Because of this, an investigator who repeatedly attempts to perform the same activity and fails will be dissuaded from pursuing that activity. As we’ve established curiosity as a motivation to fill knowledge gaps, it’s easy to see the correlation between repeated failure and decreased curiosity.

For example, an investigator who has little scripting ability might decide that they would like to write a script that parses a packet capture file and prints all of the DNS queries and responses. If they attempt this multiple times and fail, they will eventually just move on to other methods of inquiry. At this point they are much less likely to pursue this same task again, and worse, are much less likely to attempt to solve similar problems using scripting techniques.
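For what it’s worth, the task in that example can be a one-liner rather than a full script, and showing a struggling analyst a working starting point like this sketch (the capture file name is assumed) is one way to short-circuit the repeated-failure cycle:

    # Print DNS queries and responses from a capture file.
    # dns.flags.response is 0 for a query and 1 for a response.
    tshark -r capture.pcap -Y "dns" -T fields \
      -e ip.src -e dns.flags.response -e dns.qry.name -e dns.a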

 

Theme 3: Culture

Whenever someone is surrounded by a group of others without any sense of curiosity, it’s likely that their own curiosity will slow or stop growing altogether. Fortunately, the opposite is also true, as Nichols noted: “Just as association with a group methodically killing curiosity soon serves to stifle that precious commodity within us, becoming part of a group concerned with intellectual growth stimulates our personal curiosity and growth. This does not mean that each of us must throw over our present job, don a white lab coat, and head for the research and development department. It does mean that we should be discriminating in our choice of attitudinal surroundings both on and off the job. Specifically, it requires that we surround ourselves with doers, with competition that will give us incentive to exercise the creative abilities that grow out of intellectual curiosity. We all have the opportunity to find and benefit from an environment that stimulates our curiosity if we only seek it” (Nichols, 1963, p. 4).

 

I’ve written extensively about creating a culture of learning, but there is something to be said for creating a culture of curiosity as a part of that. In a more recent study (Koranda & Sheehan, 2014), a group of researchers concerned with organizational psychology in the advertising field built upon the practical implications of Nichols’ work and designed a course with the goal of promoting curiosity in advertising professionals. This, of course, is another field highly dependent on curiosity for success. While the study stopped short of using one of the aforementioned inventories to measure curiosity before and after the course, the researchers did use less formal surveys to ascertain a distinguishable difference in curiosity for those who had participated in the course.

Based on all these things we can identify impactful techniques that can be employed in the education of computer security investigators encompassing formal education, shorter-term focused training, and on-the-job training. I’ve broken those into three areas:

Environment

  • Encourage group interaction and thinking as much as possible. It exposes investigators to others with unique experiences and ways of thinking.
  • Provide an environment that is rich in learning opportunities. It isn’t enough to expect an investigator to wade through false positive alerts all day and hope they maintain their curiosity. You have to foster it with scenario-based learning that is easily accessible.

Tone

  • Encourage challenging the status quo and solving old problems in new ways. This relates directly to data manipulation, writing custom code, and trying new tools.
  • Stimulate a hunger for knowledge by creating scenarios that allow investigators to fail fast and without negative repercussions. When an investigator is met with success, make sure they know it. Remember that experience is the thing we get when we don’t get what we wanted.
  • Pair less experienced investigators with mentors. This reduces the chance of repetitive failure and increases positive feedback.

Content

  • Tie learning as much as possible to real world scenarios that can be solved in multiple ways. If every scenario is solved in the same way or only provides one option, it limits the benefits of being curious, which will stifle it.
  • Create scenarios that are intriguing or mysterious. Just like reading a book, if there isn’t some desire to find out what happens next then the investigator won’t invest time in it and won’t be motivated towards curiosity. The best example I can think of here is the great work being done by Counter Hack with Cyber City and the SANS Holiday Hacking Challenges.
  • Present exercises that aren’t completely beyond comprehension. This means that scenario difficulty should be appropriately established and paired correctly with the individual skill sets of investigators participating in them.

 

Of course, each of these thoughts presents a unique opportunity for more research, both of a practical and scientific manner. You can’t tell someone to “be more curious” and expect them to just do it any more than you can tell someone “be smarter” and expect that to happen. Curiosity is regulated by a complex array of traits and emotions that aren’t fully understood. Above all else, conditioning applies. If someone is encouraged to be curious and provided with opportunities for it, they will probably trend in that direction. If a person is discouraged or punished for being curious or isn’t provided opportunities to exhibit that characteristic, they will probably shy away from it.

Conclusion

Is curiosity the “X factor” that makes someone good at investigating security incidents? It certainly isn’t the only one, but most would agree that it’s in that conversation and its importance can’t be overstated.

In this article I discussed the construct of curiosity, why it’s important, how it manifests, and what can be done to measure and promote it. Of course, beyond the literature review and application to our field, many of the things presented here are merely launching points for more research. I look forward to furthering this research myself, and hearing from those who have their own thoughts.

 

References:

Koranda, D., & Sheehan, K. B. (2014). Teaching curiosity: An essential advertising skill? Journal of Advertising Education, 18(1), 14-23.

Litman, J. A., & Spielberger, C. D. (2003). Measuring epistemic curiosity and its diversive and specific components. Journal of Personality Assessment, 80(1), 75-86.

Loewenstein, G. (1994). The psychology of curiosity: A review and reinterpretation. Psychological Bulletin, 116(1), 75-98.

Naylor, F. D. (1981). A state-trait curiosity inventory. Australian Psychologist, 16(2), 172-183.

Nichols, R. G. (1963). Curiosity – the key to concentration. Management of Personnel Quarterly, 2(1), 23-26.

Reio, T. J., Petrosko, J. M., Wiswell, A. K., & Thongsukmag, J. (2006). The measurement and conceptualization of curiosity. The Journal of Genetic Psychology, 167(2), 117-135. doi:10.3200/GNTP.167.2.117-135

Vidler, D. C., & Rawan, H. R. (1974). Construct validation of a scale of academic curiosity. Psychological Reports, 35(1), 263-266.

Inattentional Blindness in Security Investigations

*Disclaimer: Psychology Related Blog Post*

Joshua woke up on a frigid Friday morning in Washington, DC and put on a black baseball cap. He walked to the L’Enfant metro station and found a nice visible spot right near the door where he could expect a high level of foot traffic. Once positioned, he opened his violin case, seeded it with a handful of change and a couple of dollar bills, and then began playing for about 45 minutes.

During this time thousands of people walked by and very few paid attention to Joshua. He received several passing glances while a small handful stopped and listened for a moment. Just a couple lingered for more than a minute or two. When he finished playing, Joshua had earned about twenty-three dollars beyond the money he put into the case himself. As luck would have it, twenty of those dollars came from one individual who recognized him.

Joshua Bell is not just an ordinary violin player. He is a true virtuoso who has been described as one of the best modern violinists in the world, and he has a collection of performances and awards to back it up. Joshua walked into that metro terminal, pulled out a three-hundred-year-old Stradivarius violin, and played some of the most beautiful music that most of us will hear in our lifetime. That leaves the glaring question: why did nobody notice?

Inattentional Blindness

Inattentional blindness (IB) is an inability to recognize something in plain sight, and it is responsible for the scenario we just described. You may have heard this term before if you’ve had the opportunity to subject yourself to this common selective attention test: https://www.youtube.com/watch?v=vJG698U2Mvo.

As humans, the ability to focus our attention on something is a critical skill. You focus when you’re driving to work in the morning, when you are performing certain aspects of your job, and when you are shopping for groceries. If we didn’t have the ability to focus our attention, we would have a severely limited ability to perceive the world around us.

The tricky thing is that we have limited attention spans. We can generally only focus on a few things at a time, and the more things we try to focus on, the less overall focus can be applied to any one thing. Because of this, it is easy to miss things that are right in front of us when we aren’t focused on finding them. In addition, we also tend to perceive what we expect to perceive. These factors combine to produce situations that allow us to miss things right in front of our eyes. This is why individuals in the metro station walked right by Joshua’s performance. They were focused on getting to work, and did not expect a world-class performer to be playing in the middle of the station on a Friday morning.

Manifestations in Security

As security investigators, we must deal with inattentional blindness all the time. Consider the output shown in Figure 1. This screenshot shows several TCP packets. At first glance, these might appear normal. However, an anomaly exists here. You might not see it because it exists in a place that you might not expect it to be, but it’s there.


Figure 1: HTTP Headers

In a profession where we look at data all day it is quite easy to develop expectations of normalcy. As you perform enough investigations you start to form habits based on what you expect to see. In the case of investigating the TCP packets above, you might expect to find unexpected external IP addresses, odd ports, or weird sequences of packets indicating some type of scan. As you observe and experience these occurrences and form habits related to how you discover them, you are telling your mind to build cognitive shortcuts so that you can analyze data faster. This means that your attention is focused on examining these fields and sequences, and other areas of these packets lose part of your attention. While cognitive shortcuts like these are helpful they can also promote IB.

In the example above, if you look closely at other parts of the packets, you will notice that the third packet, a TCP SYN packet initiating the communication between 192.168.1.12 and 203.0.113.12, actually has a data length value of 5. This is peculiar because it isn’t customary to see data present in a TCP SYN packet, whose purpose is simply to establish stateful communication via the three-way handshake process. In this case, the friendly host in question was infected with malware and was using these extra 5 bytes of data in the TCP SYN to check in to a remote host and provide its status. This isn’t a very common technique, but the data is right in front of our faces. You might have noticed the extra data in the context of this article because the nature of the article made you expect something weird to be there, but in practice, many analysts fail to notice this data point.
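If you wanted to hunt for this behavior deliberately rather than hoping to notice it by eye, a display filter along these lines (the capture file name is assumed) will surface it:

    # TCP SYN packets (without ACK) that carry payload bytes; tcp.len is
    # the segment's data length and should normally be 0 on a bare SYN.
    tshark -r capture.pcap -Y "tcp.flags.syn == 1 && tcp.flags.ack == 0 && tcp.len > 0"

Codifying a catch like this into a filter or signature is one way to take a data point that IB hides from most analysts and make it impossible to miss.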

Let’s look at one more example. In Figure 2 we see a screen populated with alerts from multiple sources fed into the Sguil console. In this case, we have a screen full of anomalies waiting to be investigated. There is surely evil to be found while digging into these alerts, but one alert in particular provides a unique anomaly that we can derive immediately. Do you see it?


Figure 2: Alerts in Sguil

Our investigative habits tell us that the thing we really need to focus on when triaging alerts is the name of the signature that fired. After all, it tells us what is going on and can relay some sense of priority to our triage process. However, take a look at Alert 2.84. If you observe the internal (RFC1918) addresses reflected in all of the other alerts, they all relate to devices in the 192.168.1.0/24 range. Alert 2.84 was generated for a device in the 192.168.0.0/24 range. This is a small discrepancy, but if this is not on a list of approved network ranges then there is a potential for a non-approved device on the network. Of course, this could just be a case of someone plugging a cheap wireless access point into the network, but it could also be a hijacked virtual host running a new VM spun up by an attacker, or a Raspberry Pi someone plugged into a hidden wall jack to use as an entry point on to your network. Regardless of the signature name here, this alert is now something that warrants more immediate attention. This is another item that might not be spotted so easily, even by the experienced analyst.
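Discrepancies like this are also good candidates for automation. As a rough sketch (the alert log file name and column position here are assumptions), pulling the internal addresses out of alert data and subtracting the approved ranges makes the odd host stand out immediately:

    # List internal source addresses seen in alert data and flag anything
    # outside the approved 192.168.1.0/24 range.
    awk '{print $4}' alerts.log | sort -u | grep -v '^192\.168\.1\.'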

Everyone is susceptible to IB, and it is something we battle every day. How can we try to avoid missing things that are right in front of our eyes?

Diminishing the Effects

The unfortunate truth is that it isn’t possible to eliminate IB because it is a product of attention. As long as we have the ability to focus our attention in one area, we will become blind to things outside of that area. With that said, there are things we can do to diminish some of these effects and improve our ability to investigate security incidents and ensure we don’t miss as much.

Expertise

The easiest way to diminish some of the effects of IB is through expertise in the subject matter. In our leading example we mentioned that there were a few people who stopped to listen to Joshua play his violin in the station. It is useful to know that at least two of those people were professional musicians themselves. Hearing the music as they walked through the station triggered the right mechanisms in their brains to allow them to notice what was occurring, compelling them to stop. This was because they are experts in the field of music and probably maintain a state of awareness related to the sound of expert violin playing. Amongst the hustle and bustle of the metro station, their expertise allowed them to notice the thing that people without it had missed.

In security investigations it’s clear to see IB at work in less experienced analysts. Without a higher level of expertise these junior analysts have not learned how to focus their attention in the right areas so that they don’t miss important things. If you hand a junior analyst a packet capture and ask them where they would look to find evil, chances are their list of places to look would be much shorter than a senior analyst’s, or it would have a number of extraneous items that aren’t worth including. They simply haven’t tuned their ability to focus attention in the right places.

More senior analysts have developed the skill of selectively applying their attention, but they rarely have the ability to codify it or explain it to another person. The better experienced analysts get at identifying and teaching this information, the better the chance that younger analysts will gain the necessary expertise faster.

Directed Focus

While analysts spend most of their time looking at data, that data is often examined through the lens of tools like SIEMs, packet sniffers, and command line data manipulation utilities. Because ours is a young industry, many of these tools are very minimal and don’t provide a lot of visual cues related to where attention should be focused. This is beneficial in some ways because it leaves the interpretation fully open to the analyst, but the absence of opinionated software also promotes IB. As an example, consider the output of tcpdump below. Tcpdump is one of the tools I use the most, but it provides no visual cues for the analyst.


Figure 3: Tcpdump provides little in the way of visual cues to direct the focus of attention

We can compare Tcpdump to a tool like Wireshark, which has all sorts of visual cues that give you an idea of things you need to look at first. This is done primarily via color coding, highlighting, and segmenting different types of data. Note that the packet capture shown in Figure 3 is the same one shown in Figure 4. Which is easier to visually process?


Figure 4: Wireshark provides a variety of visual cues to direct attention.

It is for this reason that tools developed by expert analysts are desirable. This expertise can be incorporated into the tool, and the tool can be opinionated such that it directs users towards areas where attentional focus can be beneficial. Taking this one step further, tools that really excel in this area allow analyst users to place their own visual cues. In Wireshark, for example, analysts can add packet comments, custom packet coloring rules, and mark packets that are of interest. These things can direct attention to the right places and serve as an educational tool. Developing tools in this manner is no easy task, but as our collective experience in this industry evolves this has to become a focus.

Peer Review

One last mechanism for diminishing the effects of IB that warrants mention is the use of peer review. I’ve written about the need for peer review and tools that can facilitate it multiple times. IB is ultimately a limitation that is a product of an analyst’s training, experience, biases, and mindset. Because of this, every analyst is subject to his or her own unique blind spots. Sometimes we can correlate these across multiple analysts who have worked in the same place for a period of time or were trained by the same person, but in general everyone is their own special snowflake. Because of this, simply putting another set of eyes on the same set of data can result in findings that vary from person to person. This level of scrutiny isn’t always feasible for every investigation, but certainly for incident response and investigations beyond the triage process, putting another analyst in the loop is probably one of the most effective ways to diminish potential misses as a result of IB.

Conclusion

Inattentional blindness is one of many cognitive enemies of the analyst. As long as the human analyst is at the center of the investigative process (and I hope they always are), the biggest obstacle most will have to overcome is their own self-imposed biases and limitations. While we can never truly overcome these limitations without stripping away everything that makes us human, an awareness of them has been shown to improve performance in similar fields. Combining this increased level of metacognitive awareness with an arsenal of techniques for minimizing the effect of cognitive limitations will go a long way towards making us all better investigators and helping us catch more bad guys.

Investigations and Prospective Data Collection

One of the problems we face while trying to detect and respond to adversaries is in the sheer amount of data we have to collect and parse. Twenty years ago it wasn’t as difficult to place multiple sensors in a network, collect packet and log data, and store that data for quite some time. In modern networks, that is becoming less and less feasible. Many others have written about this at length, but I want to highlight two main points.

Attackers play the long game. The average time from breach to discovery is over two hundred days. Despite media jargon about “millions of attacks a day” or attacks happening “at the speed of light”, the true nature of breaches is that they are not speedy endeavors from the attacker’s side. Gaining a foothold in a network, moving laterally within that network, and strategically locating and retrieving target data can take weeks or months. Structured attackers don’t win when they gain access to a network. They win once they accomplish their objective, which typically comes much later.

Long term storage isn’t economical. While some organizations are able to store PCAP or verbose log data in terms of months, that is typically reserved for incredibly well funded organizations or the gov/mil, and is becoming less common. Even on smaller networks, most can only store this data in terms of hours, or at most a few days. I typically only see long term storage for aggregate data (like flow data) or statistical data. The amount of data we generate has dramatically outgrown our capability to store and parse through that data, and this issue is only going to worsen for security purposes.

Medicine and Prospective Collection

The problem of having far too much data to collect and analyze is not unique to our domain. As I often do, let’s look towards the medical field. While the mechanics are a lot different, medical practitioners rely on a lot of the same cognitive skills to investigate afflictions of the human condition that we do to investigate afflictions of our networks. Things like fluid ability, working memory, and source monitoring accuracy all work in the same ways to help practitioners get from a disparate set of symptoms to an underlying diagnosis and, hopefully, remediation.

Consider a doctor treating a patient experiencing undesirable symptoms. Most of the time a doctor can’t look back at the evolution of a person’s health over time. They can’t take a CAT scan of a brain as it was six months ago. They can’t do an ultrasound on a pancreas as it was two weeks ago. For the most part, they have to take what they have in front of them now or what tests can tell them from very recent history.

If what is available in the short term isn’t enough to make a diagnosis, the physician can determine criteria for what data they want to observe and collect next. They can’t perform constant CAT scans, ultrasounds, or blood tests that look for everything. So, they apply their skills and define the data points they need to make decisions regarding the symptoms and the underlying condition they believe they are dealing with. This might include something like a blood test every day looking at white blood cell counts, continual EKG readings looking for cardiac anomalies, or twice daily neurological response tests. Medical tests are expensive and the amount of data can easily be overwhelming for the diagnostic process. Thus, they selectively collect the data needed to support a hypothesis. Physicians call this a clinical test-based approach, but I like to conceptualize it as prospective data collection. While retrospective data looks at things that have previously been collected up until a point in time, prospective data collection relies on specific criteria for what data should be collected moving forward from a fixed point in time, for a set duration. Physicians use a clinical strategy with a predominant lean towards effective use of prospective data collection because they can’t feasibly collect enough retrospective data to meet their needs. Sound familiar?

Investigating Security Incidents Clinically

As security investigators, we typically use a model based solely on past observations and retrospective data analysis. The prospective collection model is rarely leveraged, which is surprising since our field shares many similarities with medicine. We all have the same data problems, and we can all use the same clinical approach.

The symptoms our patients report are alerts. We can’t go back and look at snapshots of a device’s health over the retrospective long term because we can’t feasibly store that data. We can look back in the near term and find certain data points based on those observations, but that is severely time limited. We can also generate a potential diagnosis and observe more symptoms to find and treat the underlying cause of what is happening on our networks.

Let’s look at a scenario using this approach.

Step 1

An alert is generated for a host (System A). The symptom is that multiple failed login attempts were made on the device’s administrator account from another internal system (System B).

Step 2

The examining analyst performs an initial triage and comes up with a list of potential diagnoses. He attempts to validate or invalidate each diagnosis by examining the retrospective data that is on hand, but is unable to find any concrete evidence that a compromise has occurred. The analyst determines that System B was never able to successfully login to System A, and finds no other indication of malicious activity in the logs. More analysis is warranted, but no other data exists yet. In other scenarios, the investigation might stop here barring any other alerting. 

Step 3

The analyst adds his notes to the investigation and prunes his list of diagnoses to a few plausible candidates. Using these candidate diagnoses as a guide, the analyst generates a list of prospective collection criteria. These might include:

  • System A: All successful logins, newly created user accounts, flow data to/from System B.
  • System B: File downloads, attempted logins to other internal machines, websites visited, flow data to/from System A.

This is all immensely useful data in the context of the investigation, but it doesn’t break the bank in terms of storage or processing costs if the organization needs to store the data for a while in relation to this small scope. The analyst tasks these collections to the appropriate sensors or log collection devices. 
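In a shop that collects flow data with SiLK, one of those collections might look something like the sketch below (the dates and the System B address are placeholders). The point is that the query encodes the hypothesis rather than asking for everything:

    # Prospective collection: all flow data to/from System B for the window.
    rwfilter --type=all --start-date=2016/01/10 --end-date=2016/01/17 \
             --any-address=192.168.1.50 --proto=0-255 --pass=stdout \
      | rwcut --fields=stime,sip,sport,dip,dport,proto,bytes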

Step 4

The prospective collections record the identified data points and deliver them exclusively to the investigation container they are assigned to. The analyst collects these data points for several days, and perhaps refines them or adds new collections as data is analyzed.

Step 5

The analyst revisits and reviews the details of the investigation and the returned data, and either defines additional or refined collections, or makes a decision regarding a final diagnosis. This could be one of the following:

  • System B appears to be compromised and lateral movement to System A was being attempted.
  • No other signs of malicious activity were detected, and it was likely an anomaly resulting from a user who lost their password. 

In a purely retrospective model the later steps of this investigation might be skipped, which could lead the analyst to miss the ground truth of what is actually occurring. In this case, the analyst plays the long game and is rewarded for it.

Additional Benefits of Prospective Collection

In addition to the benefits of making better use of storage resources, a model that leverages prospective collection has a few other immediate benefits to the investigative process. These include:

Realistic-Time Detection. As I’ve written previously, when the average time from breach to detection is greater than two hundred days, attempting to discover attackers on your network the second they gain access is overly ambitious. For that matter, it doesn’t acknowledge the fact that attackers may already be inside your network. Detection is often at its hardest at the time of initial compromise because attackers are typically more stealthy at this point, and because less data exists to indicate they are present on the network. This difficulty can decrease over time as attackers get sloppier and generate more data that can indicate their presence. Catching an attacker +10 days from initial compromise isn’t as sexy as “real time detection”, but it is a lot more realistic. The goal here is to stop them from completing their mission. Prospective collection supports the notion of realistic-time detection.

Cognitive Front-Loading. Research shows us that people are able to solve problems a lot more efficiently when they are aware of concepts surrounding metacognition (thinking about thinking) and are capable of applying that knowledge. This boils down to having an investigative philosophy, a strategy for generating hypotheses, and multiple approaches for working towards a final conclusion. Using a prospective collection approach forces analysts to form hypotheses early in the process, promoting the development of metacognition and investigative strategy.

Repeatability and Identified Assumptions. One of the biggest challenges we face is that investigative knowledge is often tacit and great investigators can’t tell others why they are so good at what they do. Defining prospective collection criteria provides insight towards what great investigators are thinking, and that can be codified and shared with less experienced analysts to increase their abilities. This also allows for more clear identification of assumptions so those can be challenged using structured analytic techniques common in both medicine and intelligence analysis. I wrote about this some here, and spoke about it last year here.

Conclusion

The purpose of this post isn’t to go out and tell everyone that they should stop storing data and refocus their entire SOC towards a model of prospective collection. Certainly, more research is needed there. As always, I believe there is value in examining the successes and failures of other fields that require the same level of critical thinking that security investigations also require. In this case, I think we have a lot to learn from how medical practitioners manage to get from symptoms to diagnosis while experiencing data collection problems similar to what we deal with. I’m looking forward to more research in this area.

Investigating Like a Chef

Whenever I get the chance I like to try and extract lessons from practitioners in other fields. This is important because the discipline of information security is so new, while more established professions have been around, in some cases, for hundreds of years. I’ve always had a keen interest in culinary studies, mostly because I come from an area of the country where people show that they love each other by preparing meals. I’m also a bit of a BBQ connoisseur myself, as those of you who know me can solemnly attest to. While trying to enhance my BBQ craft I’ve had the opportunity to speak with and read about a few professional chefs and study how they operate. In this post I want to talk a little bit about some key lessons I took away from my observations.

If you have ever worked in food service, or have even prepared a meal for a large number of people, you know that repetition is often the name of the game. It’s not trimming one rack of ribs, it’s trimming a dozen of them. It’s not cutting one sweet potato, it’s cutting a sack of them. Good chefs strive to do these things in large quantities while still maintaining enough attention to detail so that the finished product comes out pristine. There are a lot of things that go into making this happen, but none more important than a chef mastering their environment. This isn’t too different from a security analyst who investigates hundreds of alerts per day while striving to pay an appropriate amount of attention to each individual investigation. Let’s talk about how chefs master their environment and how these concepts can be applied to information security.

Chefs minimize their body movement. If you are going to be standing up in a kitchen all day performing a bunch of repetitive and time sensitive tasks, then you want to make sure every step or movement you make isn’t wasted. This prevents fatigue and increases efficiency.

As an example, take a look at Figure 1. In this image, you will see that everything the chef needs to prepare their dish is readily available without the chef having to take extra steps or turn around too often. Raw products can be moved from the grocery area, rinsed in the sink, sliced or cut on the cutting board, cooked on the stove, and plated without having to turn more than a few times or move more than a couple of feet.


Figure 1: A Chef’s Workspace is Optimized for Minimal Movement

Chefs learn the French phrase “mise en place” early on in their careers. This statement literally means, “put in place”, but it specifically refers to organizing and arranging all needed ingredients and tools required to prepare menu items during food service. Many culinary instructors will state that proper mise en place, or simply “mise” in shorthand, is the most important characteristic that separates a professional chef from a home cook.

There is a lot of room for mise in security investigations as well. Most analysts already practice this to some degree by making sure that their operating system is configured to their liking. They have their terminal windows configured with fonts and colors that make them easy to read, they have common OSINT research sites readily accessible as browser favorites, and they have shortcut icons to all of their commonly used tools. At a higher level, some analysts even have custom scripts and tools they’ve written to minimize repetitive tasks. These things are highly encouraged.
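Mise for an analyst can be as simple as a shell function that performs the routine lookups for an indicator in one step. A minimal sketch (the enrichment sources chosen here are just examples):

    # pivot: run routine enrichment lookups for an IP address in one step.
    pivot() {
      whois "$1" | grep -iE 'netname|orgname|country'
      dig +short -x "$1"    # reverse DNS
    }

With that sourced into a shell, “pivot 203.0.113.12” replaces three or four manual steps that would otherwise be repeated dozens of times a day.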

While analysts don’t have to worry about physical movement as much, they do have to worry about mental movement. In an ideal situation an analyst can get to the end of an investigation with as few steps as possible, and a strategic organization of their digital workspace can help facilitate that. I’ve seen some organizations that seek to limit the flexibility analysts have in their workspace by enforcing consistent desktop environments or limiting access to additional tools. While policies to enforce good security and analysis practices are great, every analyst learns and processes information in a different way. It isn’t just encouraged that analysts have the flexibility to configure their own operating environments, it’s critical to helping them achieve success.

Beyond the individual analyst’s workstation, the organization can also help out by providing easy access to tools and data, and processes that support it. If an analyst has to connect to five systems to retrieve the same data, that is too much mental movement that could be better spent formulating and answering questions about the investigation. Furthermore, if organizations limit access to raw data it could force the analyst to make additional mental moves that slow down their progress.

Chefs make minimal trips to the fridge/pantry. When you are cooking dinner at home you likely make multiple trips to the fridge to get ingredients or to the pantry to retrieve spices during the course of your meal. That might look something like this:

“I think this soup needs a bit more tarragon, let me go get it. “

or…

“I forgot I need to add an egg to the carbonara at the end, I’ll go get it from the fridge.”

Building on the concept of mise en place, professional chefs minimize their trips to the fridge and pantry so that they always have the ingredients they need with as few trips as possible. This ensures they are focused on their task, and also minimizes prep and clean up time. They also ensure that they get an appropriate amount of each ingredient to minimize space, clean up, and waste.


Figure 2: Chefs Gather and Lay Out Ingredients for Multiple Dishes – Mise en Place

One of the most common tasks an analyst will perform during an investigation is the retrieval of data in an attempt to answer questions. This might include querying a NetFlow database, pulling full packet capture data from a sensor, or querying log data in a SIEM.

Inexperienced analysts often make two mistakes. The first is not retrieving enough data to answer their questions. This means that the analyst must continue to query the data source and retrieve more data until they get the answer they are looking for. This is equivalent to a chef not getting enough flour from the pantry when trying to make bread. On the flip side, another common pitfall is retrieving too much data, which is an even bigger problem. In these situations an analyst may not limit the time range of their query appropriately, or simply may not use enough filtering. The result is a mountain of data that takes a significant amount of time to wade through. This is equivalent to a chef walking back from the fridge with 100 eggs when they only intend to make a 3-egg omelet.

Learning how to efficiently query data sources during an investigation is a product of asking the right questions, understanding the data you have available, and having the data in a place that is easily accessible and reasonably consolidated. If you can do these things, you should be able to make fewer trips back to the pantry.
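To make the querying discipline concrete, here is a minimal sketch in Python. The field names and the query structure are hypothetical, not tied to any particular product; the point is that the query is bounded to a host, a port, and a window around the alert rather than pulled wholesale:

    from datetime import datetime, timedelta

    # Hypothetical sketch: build a scoped query for a flow data store. The
    # field names and query structure are illustrative, not tied to any
    # particular product.

    def build_flow_query(src_ip, dst_port, around, window_minutes=30):
        """Scope a query to one host, one port, and a window around the alert.

        Pulling "all flows for this host" is the 100-egg omelet; bounding
        the query keeps the result set small enough to actually analyze.
        """
        return {
            "filter": {"src_ip": src_ip, "dst_port": dst_port},
            "start": around - timedelta(minutes=window_minutes),
            "end": around + timedelta(minutes=window_minutes),
        }

    # Example: scope the query to 30 minutes on either side of the alert.
    alert_time = datetime(2015, 6, 1, 14, 32)
    print(build_flow_query("192.0.2.10", 443, alert_time))

A query like this is the analytic equivalent of taking exactly three eggs from the fridge.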

Chefs carefully select, maintain, and master their tools. Most chefs spend a great deal of time and money purchasing and maintaining their knives. They sharpen their knives before every use and have them professionally refinished frequently. They also spend a great deal of time practicing different types of cuts. A dull or improperly used knife can produce inconsistently cut food, which leads to poor presentation and can even cause under- or overcooked food when differently sized pieces are cooked together. Of course, it could also lead to you accidentally cutting yourself. These concepts go well beyond knives; a bent whisk can result in clumped batter, and an unreliable broiler can burn food. Chefs have to select, maintain, and master a variety of tools to perform their job.

Figure 3: A Chef’s Travel Kit Provides Well-Cared-For Essential Tools

In a security investigation, tools certainly aren’t everything, but they are critically important. In order to analyze network communication you have to understand the protocols involved at a fundamental level, but you also need tools to sort through them, generate statistics, and work towards decision points. Whether it is a packet analysis tool like Wireshark, a flow data analysis tool like SiLK, or an IDS like Snort, you have to understand how those tools work with your data. The more ambiguity placed between you and the raw data, the greater the chance for assumptions that could lead to poor decisions. This is why it is critical to understand both how to use tools and how they work.

Caring for tools goes well beyond purchasing hardware and ensuring you have enough servers to crunch data. At an organizational level, it requires hiring the right number of people in your SOC to help manage the infrastructure. Some organizations attempt to put that burden on the analysts, but this isn’t always scalable and often pulls analysts away from their primary duties. It is also the kind of “piling on” of responsibilities that leaves analysts frustrated enough to leave a job.

Beyond this, proper tool selection is important as well. I won’t delve into this too much here, but careful consideration should be given to free and open source tools, as well as the potential for developing in-house tools. Enterprise solutions have their place, but they shouldn’t be the default go-to; in most cases, the best work in information security is still done at the free and open source level. You should look for tools that support existing processes, and never let a tool alone dictate how you conduct an investigation.

Chefs can cook in any kitchen. When chefs master all of the previously mentioned concepts, they can apply those concepts in any location. If you watch professional cooking competitions, you will see that most chefs arrive with only their knife kit and are able to master the environment of the kitchen they are cooking in. For example, try watching “Chopped” sometime on Food Network. These chefs are given tight time constraints and challenging, randomly chosen ingredients. They organize their workspace, assess their tools, make very few trips to get ingredients, and are able to produce five-star-quality meals.

Figure 4: Professional Chefs Competing in an Unfamiliar Kitchen on Food Network’s Chopped

In security investigations, this is all about understanding the fundamentals. Yes, tools are important, as I mentioned earlier, but you won’t always work in an environment that provides the same tools. If you only learn how to use ArcSight, then you will only ever be successful in environments that use ArcSight. This is why understanding higher-level investigative processes that are SIEM-independent is necessary. Even at a lower level, understanding a tool like Wireshark is great, but you also need to understand how to work with packets using more fundamental and universal tools like tcpdump, as you may not always have access to a graphical desktop. Taking that a step further, you should also understand TCP/IP and network protocols so that you can make sense of the network data you are analyzing without relying on protocol dissectors. A chef’s fundamental understanding of food and cooking methods allows them to cook successfully in any kitchen. An analyst’s fundamental understanding of systems and networking allows them to investigate in any SOC.
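To illustrate what that lower-level fluency looks like, here is a minimal sketch that decodes the fixed portion of an IPv4 header by hand in Python, with no dissector involved. The sample bytes are fabricated for illustration:

    import struct

    # Decoding the fixed 20-byte portion of an IPv4 header by hand, with no
    # dissector involved. The sample bytes below are fabricated.

    def parse_ipv4_header(raw):
        """Unpack the fixed IPv4 header fields from raw bytes."""
        version_ihl, tos, total_len, ident, flags_frag, ttl, proto, checksum = \
            struct.unpack("!BBHHHBBH", raw[:12])
        return {
            "version": version_ihl >> 4,             # high nibble
            "header_len": (version_ihl & 0x0F) * 4,  # low nibble, 32-bit words
            "total_len": total_len,
            "ttl": ttl,
            "protocol": proto,                       # 6 = TCP, 17 = UDP
            "src": ".".join(str(b) for b in raw[12:16]),
            "dst": ".".join(str(b) for b in raw[16:20]),
        }

    # A fabricated header: 192.0.2.1 -> 198.51.100.7, TTL 64, protocol TCP.
    sample = bytes([0x45, 0x00, 0x00, 0x28, 0x1C, 0x46, 0x40, 0x00,
                    0x40, 0x06, 0x00, 0x00, 192, 0, 2, 1, 198, 51, 100, 7])
    print(parse_ipv4_header(sample))

If you can do this with a hex dump, a missing dissector will never stop an investigation.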

Conclusion

Humans have been cooking food for thousands of years, and have been doing so professionally for far longer than computers have existed. While the skills needed to be a chef are dramatically different from those needed to investigate network breaches, there are certainly lessons to be learned here. Now, if you’ll excuse me, writing this has made me hungry.

* Figures 1-3 are from “The Four-Hour Chef” by Tim Ferriss. One of my favorite books.

Teaching Good Investigation Habits Through Reinforcement

The biggest responsibility that leaders and senior analysts in a SOC have is to ensure that they are providing an appropriate level of training and mentoring to younger and inexperienced analysts. This is how we better our SOCs, our profession, and ourselves. One problem that I’ve written about previously relates to the prevalence of tacit knowledge in our industry. The analysts who are really good at performing investigations often can’t describe what makes them so good at it, or what processes they use to achieve their goals. This lack of clarity and repeatability makes it exceedingly difficult to use any teaching method other than having inexperienced analysts learn through direct observation of those who are more experienced. While observation is useful, a training program that relies on it too much is flawed.

In this blog post I want to share some thoughts related to recent research I’ve done on learning methods as part of my study of cognitive psychology. More specifically, I want to talk about one specific way that humans learn, and how we might frame our investigative processes to improve the investigative skills of our fellow analysts and ourselves.

Operant Conditioning

When most people think of conditioning they think of Pavlov and how he trained his dogs to salivate at the sound of a tone. That is what is referred to as learning by classical conditioning, but it isn’t what I want to talk about here. In this post, I want to instead focus on a different form of learning called operant conditioning. While classical conditioning is learning focused on a stimulus that occurs prior to a response and is associated with involuntary responses, operant conditioning is learning related to voluntary responses and is achieved through reinforcement or punishment.

An easy example of operant conditioning would be to picture a rat in a box. This box contains a button the rat can push with its body weight, and doing so releases a treat. This is an example of positive reinforcement that allows the rat to learn the association that pressing the button results in a treat. The relationship is positively reinforced because a positive stimulus is used.

Another type of operant conditioning reinforcement is negative reinforcement. Consider the same rat in a different box with a button. In this box, a mild electrical charge is passed to the rat through the floor of the box. When the rat presses the button, the electrical charge stops for several minutes. In this case, negative reinforcement is being used because it teaches the rat a behavior by removing a negative stimulus. The key takeaway here is that negative reinforcement is still reinforcing a behavior, but in a different way. Some people confuse negative reinforcement with punishment.

Punishment is the opposite of reinforcement because it reduces the probability of a behavior being expressed. Consider the previous scenario with the rat in the electrified box, but this time the floor is only electrified when the rat presses the button. This is an example of punishment that decreases the likelihood of the rat pressing the button.
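For the programmatically inclined, here is a toy sketch of these dynamics. It is a simple illustration of my own, not a model from the psychology literature; the update rule and the rates are invented for demonstration. Reinforcement raises the probability of the behavior, while punishment lowers it:

    import random

    # Toy model: the rat presses the button with some probability, and each
    # outcome nudges that probability. Reinforcement raises it; punishment
    # lowers it. The update rule and the rates are invented for illustration.

    def condition(press_prob, outcome, rate=0.1):
        """Nudge the press probability based on the outcome of a press."""
        if outcome in ("treat", "shock_stops"):  # positive / negative reinforcement
            return press_prob + rate * (1.0 - press_prob)
        if outcome == "shock_starts":            # punishment
            return press_prob - rate * press_prob
        return press_prob

    p = 0.5
    for _ in range(50):
        if random.random() < p:        # the rat chooses to press
            p = condition(p, "treat")  # box 1: pressing releases a treat
    print(f"press probability after conditioning: {p:.2f}")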

Application to Security Investigation

I promise that all of this talk about electrifying rats is going somewhere other than the BBQ pit (I live in the deep south, what did you expect?). Earlier I spoke about the challenge we have because of tacit knowledge. This is made worse in many environments where you have access to a mountain of data but an ambiguous workflow that can allow an input (an alert) to be taken down hundreds of potential paths. I believe that you can take advantage of a fundamental construct like operant conditioning to help better your analysts. To make this happen, I believe there are three key tasks that must occur.

Identify Unique Investigative Domains

First, you must designate domains that lend themselves to specific cognitive functions and specializations. For instance, triage requires different skill sets and cognitive processes than hunting. Thus, those are two separate domains with different workflows. Furthermore, incident response requires yet another set of skills and cognitive processes, making it a third domain of investigation. Some organizations don’t really distinguish between these domains, but they certainly should. I think there is work to be done to fully establish investigative domains (I expect lots of continued research here on my part), and more importantly, criteria for defining them. But at a minimum you can easily pick out a few domains relevant to your SOC, like the ones I’ve mentioned above.

Define Key Workflow Characteristics and Approaches

Once you’ve established domains you can attempt to define their characteristics. This isn’t something you do in an afternoon, but there are a few clear wins. For instance, triage is heavily suited to divergent thinking and differential diagnosis techniques. On the other hand, hunting relies equally on convergent and divergent thinking and is well suited to relational (link) analysis. These are characteristics you can key on in your workflows as you move on to the next step.

Apply Positive and Negative Reinforcement in Tools and Processes

Once you know what paths you want analysts to take, how do you reinforce their learning so that they are compelled to take them? While some of us would like to consider a mechanism that provides punishment via electrified keyboards, positive and negative reinforcement are a bit more appropriate. Of course, you can’t give an analyst a treat when they make good decisions, but you can provide reinforcement in other ways.

For an investigation, there is no better positive stimulus than easy and immediate access to relevant data. When training analysts, you want to ensure they are smart about what data they gather to support their questioning. Ideally, an analyst only gathers the amount of information they need to get the answer they want. More skilled analysts are able to do this quickly without spending too much time re-querying data sources for more data or whittling excess away from data sets that are too large. Whenever an analyst has a question and your tool or process helps them answer it in a timely manner, you are positively reinforcing the use of that tool or process. Furthermore, when the answer to that question helps them solve an investigation, you are reinforcing the questions the analyst is putting forth, which helps that analyst learn which questions are most likely to achieve results.

Negative reinforcement can be used advantageously here as well. In many cases analysts arrive at points in an investigation where they simply don’t know what questions to ask next. With no questions to ask, the investigation can stall or end prematurely. When chasing a hot lead, this can result in frustration, despair, and hopelessness. If the tools and processes used in your SOC can help facilitate the investigation by helping analysts determine their next logical set of questions, that serves as negative reinforcement by removing those negative stimuli. At that point you aren’t only helping the analyst further a single investigation; you are once again reinforcing questions that help them learn how to further every subsequent investigation they conduct.
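As a sketch of what that might look like in a tool, here is a hypothetical rule-based helper in Python. The mapping of findings to follow-up questions is entirely invented, but a tool doing something like this removes the stall point where an analyst runs out of questions:

    # Hypothetical rule-based "next question" helper. The mapping of findings
    # to follow-up questions is invented for illustration.

    NEXT_QUESTIONS = {
        "suspicious_domain": [
            "What other internal hosts resolved this domain?",
            "How recently was the domain registered?",
        ],
        "odd_outbound_port": [
            "What process on the host owns this connection?",
            "Is the byte ratio consistent with the claimed protocol?",
        ],
    }

    def suggest_next_questions(findings):
        """Map findings collected so far to candidate follow-up questions."""
        suggestions = []
        for finding in findings:
            suggestions.extend(NEXT_QUESTIONS.get(finding, []))
        return suggestions or ["Re-examine the original alert for missed context."]

    print(suggest_next_questions(["suspicious_domain"]))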

Other Thoughts

While the previous sections identified some structured approaches you can take towards bettering your analysts, I have a few less structured thoughts to share in bullet points. These are ways that I think SOCs can help achieve teaching goals in everyday decisions:

  • Continually ask how you can provide positive reinforcement to help analysts learn to make good decisions.
  • If you are making a decision for analysts, let them know. Little things like data normalization and timestamp assumptions can make a difference. Analysts’ knowledge of these things further helps them understand their own data and how we manipulate it for their (hopeful) betterment. Less abstraction from data is critical to understanding the intricacies of complex systems (see the timestamp sketch after this list).
  • You must be aware of when you punish your analysts. This occurs when a tool or process prevents the user from getting data they need, takes liberties with the data, fails to produce consistent results, and so on. If a process or tool is frustrating for a user, that punishment decreases the likelihood they will use it, even if it represents a good step in the investigation. You want to avoid, at all costs, tools and processes that steer your analysts away from good analytic practices.
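As one small example of surfacing a decision instead of hiding it, here is a minimal sketch of timestamp normalization that tells the analyst what assumption was made. The log format and the assumed source offset are invented for illustration:

    from datetime import datetime, timedelta, timezone

    # Hypothetical sketch: normalize a naive log timestamp to UTC and surface
    # the assumption to the analyst instead of silently rewriting the data.
    # The log format and the assumed offset are invented for illustration.

    def normalize_timestamp(raw, assumed_offset_hours=-5):
        """Interpret a naive timestamp as sensor-local time; convert to UTC."""
        naive = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S")
        aware = naive.replace(tzinfo=timezone(timedelta(hours=assumed_offset_hours)))
        utc = aware.astimezone(timezone.utc)
        # Tell the analyst what was decided on their behalf.
        print(f"note: interpreted '{raw}' as UTC{assumed_offset_hours:+d}, "
              f"normalized to {utc.isoformat()}")
        return utc

    normalize_timestamp("2015-06-01 14:32:00")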

Conclusion

This is another post that is pretty heavy on theory, but it isn’t so far from reality that it can’t have a real impact on the way you make decisions about the processes and tools used in your SOC, and how you train your analysts. As our industry continues to develop workflows and technologies, we have to think beyond what looks good and what feels right and grasp the underlying cognitive processes that are occurring and the mental challenges we want to help solve. One method for doing this is a thoughtful use of operant conditioning as a teaching tool.