Increase Security Reporting with Contact Cards

After a really nice weekend, Zeke walked into his office and logged into his workstation. He made a habit of getting to the office around 7 AM on Mondays so that he could go through the pile of unanswered e-mails from the previous week and clear out all the junk that had accumulated over the weekend. As he was going through e-mails, he spotted one with the subject line “401k performance highlights” that had been sent towards the end of the day on Friday. He had seen it then, but was in a rush to get out the door. He opened the e-mail and read that his 401k had performed better than expected over the previous quarter. A link was provided to review the details, so he clicked it. That was when he realized he had made a mistake.

The link didn’t go to his 401k provider as expected, but instead to a suspicious-looking fake site made to imitate it. Zeke had gone through information security awareness training, so he knew how to spot forgeries like this. It was very obvious that the site was hosted on a domain made to look like it belonged to a legitimate bank, but was in fact not associated with it at all. He immediately closed his browser window and waited. His system seemed to be fine. No weird pop-ups, no prompts to download any software, no strange slowness. He had dodged a bullet. He remembered that he was supposed to report occurrences like this, but wasn’t entirely sure where. Was it the IT helpdesk? No, it was a special e-mail address… or was it a phone number? He quickly looked up the contact number for someone he knew who worked in information security. Hmm, no answer. It’s still pretty early, so that person might not be in. He shrugged it off and went about the business of responding to unanswered mail. By the time his day got going, he had forgotten all about the incident, and it never got reported.


Stories like the one here are common in large organizations. Many companies do their part and invest heavily in security awareness programs that teach employees how to spot phishing attempts and security anomalies so that they can be reported. However, many organizations struggle with the final step in this process — effectively enabling their users to report anomalous activity.

So, why do users fail to report security issues when they recognize them? You might expect me to provide a long-winded psychological explanation. However, the answer is much simpler than that. People rarely have the opportunity to report security incidents, so they forget how to do it. Just like Zeke, they forget the e-mail address or the phone number. Further, they are often hesitant about whether anyone actually cares about what they’ve observed, and they’re worried that their incompetence will be exposed. Of course, there isn’t any incompetence here, but all that matters is that the individual perceives the possibility of it. Nobody likes feeling dumb, so people certainly aren’t going to go out of their way to report an incident if it means potentially going through multiple layers of people. That increases the chance that perceived incompetence gets exposed. In short, people are less likely to report security incidents unless it’s incredibly easy, and that is a defense mechanism that is difficult to overcome.

How do you eliminate that defense mechanism? Well, that’s a difficult question with multiple facets and is far beyond the scope of what I have to offer in this humble blog post. I think the better and more practical approach is to make security incident reporting as simple as conceivably possible. For that, I offer a non-technical solution.

The best strategy for reporting security incidents that I’ve seen is simple. Provide every employee with a security reporting contact card (SRCC), like the one seen in the image below.

As you can see, this card is incredibly simple. Ideally, this is printed to the same size as a business card on heavy stock. This provides a form factor sizable enough to display all the necessary information, but small enough to be easily tucked into a wallet or purse. The idea is simple: every employee receives one of these cards and is requested to keep it close at all times. While most employees can’t be expected to remember an obscure e-mail address or phone number, they will remember that they were given this card and can reference it without hassle.

So, what is necessary to include on the SRCC? This can vary, but I recommend making sure that the card can answer these questions:

  • Who can I e-mail/call for any network security related issue?
  • Who can I e-mail/call for any physical security related issue?
  • Where can I forward phishing or scam e-mails?

The purpose of this card is not to educate people on what’s reportable. That should be a goal of your information security awareness training, and it certainly won’t fit on a card. However, the three questions mentioned above cover most of the things that will generally be reportable. By providing both e-mail addresses and phone numbers, you provide an implied escalation path if something is deemed too important or time-sensitive for e-mail.

A few other tips:

  • Use a bright, easily noticeable color on your card. This will help people spot it and remember where it is in their wallet/purse.
  • Make the design somewhat different from your normal company business card template.
  • Set up a special e-mail address to receive forwarded phishes separate from your normal intake queue. This will help with prioritization and tasking.
  • Build this into your security awareness training. Occasionally challenge people to produce these cards so everyone gets used to keeping them handy. Make sure you are clearly identifying what is reportable and what isn’t based on your organization’s threat model and ability to evaluate reported incidents.

If you want to get started right away, here’s a Microsoft Word template you can download and use now. Just replace the logo and information with your own, print copies, and hand them out to your employees.


So what ever happened with Zeke? Well, it turns out that once he clicked the link in the phishing e-mail, his browser downloaded a Flash exploit which led to a malware infection. An attacker was able to leverage access to this system to retrieve sensitive information about Zeke’s company. It took the attackers a few weeks to complete their attack. But, since the detection tools in the organization didn’t pick it up, and Zeke never reported it, the attack wasn’t discovered until much later. The organization did a lot of things right, but a failure to make incident reporting easy subverted much of that effort. Security contact cards are a simple, affordable, and effective way to make sure you don’t fall victim to a similar scenario. The minimal cost of printing and distributing these cards is nothing in comparison to what you might pay if something goes unreported.

 

Training Course Scholarships

I’m glad to announce that I’m offering full scholarships for my online training courses to individuals employed by non-profit human services organizations. These are given out based on availability, and each application is evaluated individually by me. This covers the courses listed on my training page, and is my way of serving those who are helping others.

To apply, visit this link and fill out the application:

https://www.surveymonkey.com/r/HQCW5ZN

Know Your Bias – Anchoring

In the first part of this series I told a personal story to illustrate the components and effects of bias in a non-technical setting. In each post that follows, I’ll examine a specific type of bias, show how it manifests in a non-technical example, and provide real-world examples where I’ve seen it negatively affect a security practitioner. In this post, I’ll discuss anchoring.

Anchoring occurs when a person tends to rely too heavily on a single piece of information when making decisions, most often based on information received early in the decision-making process.

Anchoring Outside of Security

Think about the average price of a Ford car. Is it higher or lower than $80,000? This number is clearly too high, so you’d say lower. Now let’s flip this a bit. I want you to think about the average price of a Ford car again. Is it higher or lower than $10,000? Once again, the obvious answer is that it’s higher.

These questions may seem obvious and innocent, but here’s the thing. Let’s say that I ask one group of people the first question, and a separate group of people the second question. After that, I ask both groups to name what they think the average price of a Ford car is. The result is that the first group, presented with the $80,000 figure, will pick a price much higher than the second group, presented with the $10,000 figure. This has been tested in multiple studies with several variants of this scenario [1].

In this scenario, people subconsciously fixate on the number they are presented with, and it subtly influences their short-term mindset. You can probably think of a few cases in sales where this is used to influence consumers. Sticking with our car theme, if you’ve been on a car lot you know that every car has a sticker price that is typically higher than what you actually pay. By pricing cars higher initially, dealerships anchor consumers to that price. Therefore, when you negotiate a couple thousand dollars off, it feels like you’re getting a great deal! In truth, you’re paying what the dealership expected all along; you just perceive a deal because of the anchoring effect.

The key takeaway from these examples is that anchoring to a specific piece of information is not inherently bad. However, making judgements in relation to an anchored data point that carries too much weight can negatively affect your realistic perception of a scenario. In the first example, this led you to believe the average price of a car is higher or lower than it really is. In the second example, this led you to believe you were getting a better deal than you truly were.

 

Anchoring in Security

Anchoring happens because mindsets are quick to form but resistant to change. We quickly process data to make an initial assessment, and our ability to refine that assessment is generally weighed in relation to the initial one. Can you think of points in a security practitioner’s day where an initial perception of a problem scenario forms? Those are the places where anchoring is most likely to have occurred.

List of Alerts

Let’s consider a scenario where you have a large number of alerts in a queue that you have to work through. This is a common scenario for many analysts, and if you work in a SOC that isn’t open 24×7, then you probably walk in each morning to something similar. Consider this list of the top 5 alerts in a SOC over a twelve-hour period:

  • 41 ET CURRENT_EVENTS Neutrino Exploit Kit Redirector To Landing Page
  • 14 ET CURRENT_EVENTS Evil Redirector Leading to EK Apr 27 2016
  • 9 ET TROJAN Generic gate[.].php GET with minimal headers
  • 2 ET TROJAN Generic -POST To gate.php w/Extended ASCII Characters (Likely Zeus Derivative)
  • 2 ET INFO SUSPICIOUS Zeus Java request to UNI.ME Domain

Which alert should be examined first? I polled this question and found that a significant number of inexperienced analysts chose the one at the top of the list. When asked why, most cited the frequency alone. By making this choice, the analyst assumes that each of these alerts is weighted equally, and that by occurring more times, the rule at the top of the list represents a greater risk. Is this a good judgement?

In reality, the assumption that each rule should be weighted the same is unfounded. There are a couple of ways to evaluate this list.

Using a threat-centric approach, not only does each rule represent a unique threat that should be considered on its own, but some of these alerts gain more context in the presence of others. For example, the two unique Zeus signatures alerting together could carry greater significance. Likewise, the Neutrino alert would be more significant if it were paired with an alert representing the download of an exploit or communication with another Neutrino EK related page. Merely hitting a redirect to a landing page doesn’t indicate a successful infection, and is a fairly common event.

You could also evaluate this list with a risk-centric approach, but more information is required. Primarily, you would be concerned with the affected hosts for each alert. If you know where your sensitive devices are on the network, you would evaluate the alerts based on which ones are more likely to impact business operations.

This example illustrates how casual decisions can come with implied assumptions. Those assumptions can be unintentional, but they can still lead you down the wrong path. In this case, the analyst might spend a lot of time pursuing alerts that pose little risk while delaying the investigation of something that represents a greater risk to the business. This happens because it is easy to look at a statistic like this and anchor to a single facet of it without fully considering the implied assumptions. Statistics are useful for summarizing data, but they can hide important context, and without that context you are prone to uninformed decisions that are the result of an anchoring bias.
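To make the contrast concrete, here is a minimal sketch in Python of what a risk-weighted triage could look like. The signature labels are shortened, and the severity weights and asset criticality values are entirely hypothetical; in practice they would come from your own threat model and asset inventory.

# Hypothetical sketch: rank alerts by severity and asset criticality
# rather than by raw hit count. All weights here are made-up values.

alerts = [
    {"signature": "Neutrino EK redirector to landing page", "count": 41, "severity": 2, "asset_criticality": 1},
    {"signature": "Evil redirector leading to EK",          "count": 14, "severity": 2, "asset_criticality": 1},
    {"signature": "Generic gate.php GET, minimal headers",  "count": 9,  "severity": 4, "asset_criticality": 3},
    {"signature": "gate.php POST (likely Zeus derivative)", "count": 2,  "severity": 5, "asset_criticality": 3},
    {"signature": "Zeus Java request to UNI.ME domain",     "count": 2,  "severity": 5, "asset_criticality": 3},
]

def triage_score(alert):
    # Threat severity and business impact dominate; frequency only nudges the order.
    return alert["severity"] * alert["asset_criticality"] + 0.1 * alert["count"]

for alert in sorted(alerts, key=triage_score, reverse=True):
    print(f'{triage_score(alert):5.1f}  {alert["signature"]}')

Under this made-up weighting, the two low-count Zeus-related alerts rise to the top of the queue while the high-count redirector alerts fall to the bottom, which mirrors the threat-centric reasoning above.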

 

Visualizations without Appropriate Context

As another example, understand that numbers and visual representations of them have a strong ability to influence an investigation.  Consider a chart like the one in the figure below.

This is a treemap visualization used to show the relative volume of communication based on network ports for a single system. The larger the block, the more communication occurred to that port. Looking at this chart, what is the role of the system whose network communication is represented here? Many analysts I polled decided it was a web server because of the large amount of port 443 and 80 traffic. These ports are commonly used by web servers to receive requests.

This is where we enter the danger zone. An analyst isn’t making a mistake by looking at this and considering that the represented host might be a web server. The mistake occurs when the analyst fully accepts this system as a web server and proceeds in an investigation under that assumption. Given this information alone, do you know for sure this is a web server? Absolutely not.

First, I never specified whether this treemap exclusively represents inbound traffic, and it’s every bit as likely that it represents outbound communication that could just be normal web browsing. Beyond that, this chart only represents a finite time period and might not truly represent reality. Lastly, just because a system is receiving web requests doesn’t necessarily mean its primary role is that of a web server. It might simply have a web interface for managing some other service that is its primary role.

The only way to truly ascertain whether this system is a web server is to probe it to see if there is a listening service on a web server port or to retrieve a list of processes to see if a web server application is running.
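As a rough illustration, here is a small Python sketch of that kind of verification. The target address is a placeholder, and in a real investigation you would pair a check like this with process listings pulled from the host itself.

# Minimal sketch: confirm whether a host is actually listening on common
# web server ports before treating it as a web server in an investigation.
# The address below is a placeholder from the RFC 5737 documentation range.
import socket

def is_port_listening(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

suspect_host = "192.0.2.15"  # hypothetical host under investigation
for port in (80, 443, 8080, 8443):
    state = "listening" if is_port_listening(suspect_host, port) else "no response"
    print(f"{suspect_host}:{port} -> {state}")

Even a result like this only tells you a service is listening; confirming that it is actually a web server still requires looking at the process list or banner on the host itself.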

There is nothing wrong with using charts like this to help characterize hosts in an investigation.  This treemap isn’t an inherently bad visualization and it can be quite useful in the right context. However, it can lead to investigations that are influenced by unnecessarily anchored data points. Once again, we have an input that leads to an assumption.  This is where it’s important to verify assumptions when possible, and at minimum identify your assumptions in your reporting. If the investigation you are working on does end up utilizing a finding based on this chart and the assumption that it represents a web server, call that out specifically so that it can be weighed appropriately.

 

Diminishing Anchoring Bias

The stories above illustrate common places anchoring can enter the equation during your investigations. Throughout the next week, try to look for places in your daily routine where you form an initial perception and there is an opportunity for anchoring bias to creep in. I think you’ll be surprised at how many you come across.

Here are a few ways you can recognize when anchoring bias might be affecting you or your peers and strategies for diminishing its effects:

Consider what data represents, not just its face value. Most data in a security investigation is an abstraction of something else: an IP address is abstracted from a physical host, a username from a physical user, a file hash from an actual file, and so on.

Limit the weight you give to summary data. A summary is meant to be just that: the critical information you need to quickly triage data, determine its priority, or make a quick (but accurate) judgement of the underlying event’s disposition. If you carry input from summary data forward into a deeper investigation, make sure you fully identify and verify your assumptions.

Don’t let your first impression be your only impression. Rarely is the initial insertion point into an investigation the most important evidence you’ll collect. Let the strength of your conclusions rest on the evidence collected throughout, not just what you gathered at the onset. This is a hard thing to do, because your mind wants to anchor to your first impression, but you have to push past that and examine cases holistically.

An alert is not an answer, it’s merely a question. Your job is to prove or disprove the alert, and until you’ve done one of those things the alert is not representative of a certainty. Start looking at alerts as the impetus for asking questions that will drive your investigation.

If you’re interested in learning more about how to help diminish the effects of bias in an investigation, take a look at my Investigation Theory course where I’ve dedicated an entire module to it. This class is only taught periodically, and registration is limited.

[1] Strack, F., & Mussweiler, T. (1997). Explaining the enigmatic anchoring effect: Mechanisms of selective accessibility. Journal of Personality and Social Psychology, 73(3), 437.

 

 

Know your Bias – Foundations

In this blog series I’m going to dissect cognitive biases and how they relate to information security. Bias is prevalent in any form of investigation, whether you’re threat hunting, reversing malware, responding to an incident, attributing network attacks, or reviewing an IDS alert. In each post, I’ll describe a specific type of bias and how it manifests in various information security specialties. But first, this post will explain some fundamentals about what bias is and why it can negatively influence investigations.

 

What is Bias?

Investigating security threats is a process that occurs within the confines of your mind and centers around bridging the gap between your limited perception and the reality of what has actually occurred. To investigate something is to embrace that perception and reality aren’t the same thing, but you would like for them to be. The mental process that occurs while trying to bridge that perception-reality gap is complex and depends almost entirely on your mindset.

A mindset is how you see and approach the world, and it is shaped by your genetics and your collective world experience. Everyone you’ve ever met, everything you’ve ever done, and every sensation you’ve perceived molds your mindset. At a high level, a mindset is neither a good nor a bad thing, but it can lead to positive or negative results. This is where bias comes in.

Bias is a predisposition towards a certain way of thinking, and it can be the difference between a successful and a failed investigation. In some ways bias is good, as when it allows us to learn from our previous mistakes and create mental shortcuts for solving problems. In other ways it’s bad, as when it leads us to waste time pursuing bad leads or jump to conclusions without adequate evidence. Let’s consider a non-technical example first.

 

The (very) Personal Effects of Bias

On a night like any other, I laid my head down on my pillow at about 11 PM. However, I was unexpectedly awoken around 2 AM with wrenching stomach pain unlike anything I’d ever felt before. I tossed and turned for an hour or so without relief, and finally woke my wife up. A medical professional herself, she realized this might not be a typical stomach ache and suggested we head to the emergency room.

About an hour later I was in the ER being seen by the doctor on shift that night. Based on the location of the pain, he indicated the issue was likely gallbladder related, and that I probably had one or more gallstones causing my discomfort. I was administered some pain medication and set up with an appointment with a primary care physician the next day for further evaluation.

The primary care physician also believed that my gallbladder was the likely cause of the previous night’s discomfort, so she scheduled me for an ultrasound the same day. I walked over to the ultrasound lab, where they spent about twenty minutes trying to get good images of the area. Shortly thereafter, the ultrasound technician came in, looked at the images, and shared his thoughts. He had identified my gallbladder and concluded that it was full of gallstones, which had caused my stomach pain. My next stop was a referral to a surgeon, where my appointment took no more than ten minutes as he quickly recommended I switch to a low-fat diet for a few weeks until he could perform surgery to remove the malfunctioning organ.

Fast forward a couple of weeks later and I’m waking up from an early morning cholecystectomy operation. The doctor is there as soon as I wake up, which strikes me as odd even in my groggy state.

“Hey Chris, everything went fine, but…”

Let me take this opportunity to tell you that you never want to hear “but…” seconds after waking up from surgery.

“But, we couldn’t remove your gallbladder. It turns out you don’t have one. It’s a really rare thing and less than .1% of the population is born without one, but you’re one of them.”

At first I didn’t believe him. As a matter of fact, I was convinced that my wife had put him up to it and they were messing with me while I wasn’t completely with it. It wasn’t until I got home that same day and woke up again several hours later that I grasped what had happened. I really wasn’t born with a gallbladder, and the surgery had been completely unnecessary.

 

Figure 1: This is a picture of where my gallbladder should be

 

Dissecting the Situation

Let’s dissect what happened here.

  1. ER Doctor: Believed I was having gallbladder issues based on previous patients presenting with similar symptoms. Could not confirm this in the ER, so referred me to a primary care physician.
  2. Primary Care Doctor: Also believed the gallbladder issue was likely, but couldn’t confirm this in her office, so she referred me to a radiologist.
  3. Radiologist: Reviewed my scans to attempt to confirm the diagnosis of the previous two doctors. Found what appeared to be confirming evidence and concluded my gallbladder was malfunctioning.
  4. Surgeon: Agreed with the conclusion of the radiologist (without looking at the source data himself) and proceeded to recommend surgery, which I went through.

So where and why did things go wrong?

For the most part, the first two doctors were doing everything within their power to diagnose me. The appropriate steps of a differential diagnosis instruct physicians to identify the most likely affliction and perform the test that can rule that out. In this case, that’s the ultrasound that was performed by the radiologist, which is where things started to go awry.

The first two doctors presented a case for a specific diagnosis, and the radiologist was predisposed towards confirming it before even looking at the scans. It’s a classic case of finding something weird when you go looking for it, because everything looks a little weird when you want to find something. In truth, the radiologist wasn’t confident in his finding, but he let the weight of the other two doctors’ opinions bear down on him such that he was actually seeking to confirm their diagnosis more than trying to independently come to an accurate conclusion. This is an example of confirmation bias, which is perhaps the most common type of bias encountered. It also exhibits characteristics of belief bias and anchoring.

The issue was compounded when I met the surgeon. Rather than critically assessing the collective findings of all the professionals involved up to that point, he assumed a positive diagnosis and merely glanced at the ultrasound results in passing. All of the same biases were repeated here again.

In my case bias led to an incorrect conclusion, but understand that even if my gallbladder had been the issue and the surgery had gone as expected, the bias would still have been there. In many (and sometimes most) cases, bias might exist and you will still reach the correct conclusion. That doesn’t mean the same bias won’t cause you to stumble later on.

 

Consequences of Bias

In my story, you see that bias resulted in some fairly serious consequences. Surgery is a serious matter, and I could have suffered some type of complication while on the table, or a post-op infection that could have resulted in extreme sickness or death.

In any field, bias influences conclusions, and those conclusions have consequences when they result in action being taken. In my case, it was a few missed days of work and a somewhat painful recovery. In security investigations, bad decisions can result in wasted time pursuing dead-end leads or missing something important. The latter could have drastic effects if you work for a government, military, or ICS environment, or any other industry where lives depend on system integrity and uptime. Even in normal commercial environments, bad decisions resulting from the influence of bias can lead to millions of lost dollars.

Let’s examine one other effect of this scenario. I’m a lot less likely to trust the conclusions of doctors now, and I’m a lot less likely to agree to surgery without a plethora of supporting evidence indicating the need for it. These things in themselves are a form of bias. This is because bias breeds additional bias. We saw this with the relationship between the radiologist and the surgeon as well.

The effects of bias are highly situational. It’s best not to think of bias as something that dramatically changes your entire outlook on a situation. Think of bias like a pair of tinted glasses that subtly influences how you perceive certain scenarios. When the right bias meets the right set of circumstances, things can go bad quickly.

 

Countering Bias

Bias is insanely hard to detect because of how it develops in either an individual or group setting.

In an individual setting, bias is inherent to your own decision making. Since humans inherently stink at detecting their own bias, it is unlikely you will become aware of it unless your analysis is reviewed by someone else who points it out. Even then, bias usually exists in small amounts and is unlikely to be noticed unless it meets a certain threshold that is noticeable by the reviewer.

In a group setting, things get complicated because bias enters from multiple people in small amounts. This creates a snowball effect in which group bias exists outside the context of any specific individual. If hand-offs occur in a linear manner such that each person only interacts one degree downstream or upstream, overwhelming bias can only be detected at the upstream levels. Unfortunately, in real life those upstream levels are usually occupied by people who are less capable of detecting the bias because they have less subject matter expertise in the field. In security, think of managers and executives trying to catch this bias instead of analysts.

Let me be clear – you can’t fully eliminate bias in an investigation or in most other walks of your life. The way humans are programmed and the way our mindsets work prohibit that, and honestly, you wouldn’t want to eliminate all bias because it can be useful at times too. However, you can minimize the effects of negative bias in a couple of ways.

Evidence-Based Conclusions

A conclusion without supporting evidence is simply a hypothesis or a belief. In both medicine and information security, belief isn’t good enough without data to support it. This is challenging in many environments because visibility isn’t always in all the places we need it, and retention might not be long enough to gather the information you need when an event occurred far enough in the past. Use these instances to drive your collection strategy and justify appropriate budget for making sound conclusions.

In my case, two doctors made hypotheses without evidence. Another doctor gathered weak evidence and made a bad decision, and another doctor confirmed that bad decision because the evidence was presented as a certainty when it actually wasn’t. A review of the supporting facts would have led the surgeon to catch the error before deciding to operate.

Peer Review

By definition, you aren’t acutely aware of your own bias. Your mindset works the way it works, and that is “normal” to you, so your ability to spot biased decisions when you make them is limited. After all, nobody comes to a conclusion they know to be incorrect. As Sheldon from The Big Bang Theory would say, “If I was wrong, don’t you think I’d know it?”

This is where peers come in. The surgeon might have caught the radiologist’s error if he had thoroughly reviewed the ultrasound results. Furthermore, if there was a peer review system in place in the radiology department, another person might have caught the error before it even got that far. Other people are going to be better at identifying your biases than you are, so that is an opportunity to embrace your peers and pull them in to review your conclusions.

Knowledge of Bias

What little hope you have of identifying your own biases doesn’t manifest during the decision-making process, but instead during the review of your final products and conclusions. When you write reports or make recommendations, be sure to identify the assumptions you’ve made. Then, you can review your conclusions as weighed against supporting evidence and assumptions to look for places bias might creep in. This requires a knowledge of common types of bias and how they manifest, which is precisely the purpose of this series of blog posts.

Most physicians are trained to understand and recognize certain types of bias, but that simply failed in my case until after the major mistakes had been made.

 

Conclusion

The investigation process provides an opportunity for bias to affect conclusions, decisions, and outcomes.  In this post I described a non-technical example of how bias can creep in while attempting to bridge the gap from perception to reality. In the rest of this series I’ll focus on specific types of bias and provide technical and non-technical examples of the bias in action, along with strategies for recognizing the bias in yourself and others.

As it turns out my stomach aches eventually stopped on their own. I never spoke to the radiologist again, but I did speak to the surgeon at length and he readily admitted his mistake and felt horrible for it. Some people might be furious at this outcome, but in truth, I empathized with his plight. Complex investigations, be it medical or technical, present opportunity for bias and we all fall victim to it from time to time. We’re all only human.

 

 


Announcing the Investigation Theory Online Course

I’m excited to announce my newest training course, with a portion of the proceeds supporting multiple charities.

Register Here

When I first started out, learning how to investigate threats was challenging because there was no formal training available. Even in modern SOCs today, most training is centered on specific tools and relies too heavily on on-the-job learning. There has never been a course dedicated exclusively to the fundamental art and science of the investigation process… until now.

If you’re a security analyst responsible for investigating alerts, performing forensics, or responding to incidents, then this is the course that will help you gain a deep understanding of how to most effectively catch bad guys and kick them out of your network. Investigation Theory is designed to help you overcome the challenges commonly associated with finding and catching bad guys:

  • I’ve got so many alerts to investigate and I’m not sure how to get through them quickly.
  • I keep getting overwhelmed by the amount of information I have to work with in an investigation.
  • I’m constantly running into dead ends and getting stuck. I’m afraid I’m missing something.
  • I want to get started threat hunting, but I’m not sure how.
  • I’m having trouble getting my management chain to understand why I need the tools I’m requesting to do my job better.
  • Some people just seem to “get” security, but it just doesn’t seem to click for me.

Course Format

Investigation Theory is not like any online security training you’ve taken. It is modeled after a college course and consists of two parts: lecture and lab. The course is delivered on demand so you can proceed through it at your convenience, but it’s recommended that you follow either the standard 10-week completion path or an accelerated 5-week path. Either way, there are ten modules in total, and each module typically consists of the following components:

  • 1 Core Lecture: Theory and strategy are discussed in a series of video lectures. Each lecture builds on the previous one.
  • 1 Bonus Lecture: Standalone content to address specific topics is provided in every other module.
  • 1 Reading Recommendation: While not meant to be read on pace with the course, I’ve provided a curated reading list along with critical questions to consider to help develop your analyst mindset.
  • 1 Quiz: The quiz isn’t meant to test your knowledge, but rather, to give you an opportunity to apply it to reinforce learning through critical thinking and knowledge retrieval.
  • 1 Lab Exercise: The Investigation Ninja system is used to provide labs that simulate real investigations for you to practice your skills.

Investigation Ninja Lab Environment

This course utilizes the Investigation Ninja web application to simulate real investigation scenarios. By taking a vendor-agnostic approach, Investigation Ninja provides real-world inputs and allows you to query various data sources to uncover evil and decide whether an incident has occurred and what happened. You’ll look through real data and solve unique challenges that will test your newly learned investigation skills. A custom set of labs has been developed specifically for this course. No matter what toolset you work with in your SOC, Investigation Ninja will prepare you to excel in investigations using a data-driven approach.


Get stuck in a lab? I’m just an e-mail away and can help point you in the right direction. Enjoy the labs and want to go farther? You can purchase additional access to more labs, including our upcoming “Story Mode” where you create a character and progress through eight levels of investigation scenarios while trying to attain the rank of Investigation Ninja!

Instructor Q&A

This isn’t a typical online course where we just give you a bunch of videos and you’re on your own. The results of your progress, quizzes, and labs are reviewed by me, and I provide real-time feedback as you progress. I’m available as a resource to answer questions throughout the course.

Syllabus

  1. Metacognition: How to Approach an Investigation
  2. Evidence: Planning Visibility with a Compromise in Mind
  3. Investigation Playbooks: How to Analyze IPs, Domains, and Files
  4. Open Source Intel: Understanding the Unknown
  5. Mise en Place: Mastering Your Environment with Any Toolset
  6. The Timeline: Tracking the Investigation Process
  7. The Curious Hunter: Finding Investigation Leads without Alerts
  8. Your Own Worst Enemy: Recognizing and Limiting Bias
  9. Reporting: Effective Communication of Breaches and False Alarms
  10. Case Studies in Thinking Like an Analyst

Plus, several bonus lectures!

Cost

The course and lab access are $597 for a single user license. Discounts are available for multiple user licenses where at least 10 seats are purchased (please contact me to discuss payment). A significant portion of the purchase price will go to support multiple charities including the Rural Technology Fund, the Against Malaria Foundation, and others.

You’ll receive:

  • 6-mo Access to Course Videos and Content
  • 6-mo Access to Investigation Ninja
  • A Certification of Course Completion
  • Continuing Education Credits (CPEs/CEUs)

Sign Up Now!

This course is only taught periodically and space is limited.