Category Archives: Network Security

Writing for Security: Why You Hate It

Over the next couple of weeks, I’ll be sharing a multi-part series about technical writing for security professionals. If your job requires you to write reports of any kind, or you enjoy blogging, then I think you’ll enjoy it. We’re going to talk about some of the underlying reasons you probably don’t like writing as much as you could, how good writing can help you further your career, and some tactical tips for becoming a better writer.

I recently conducted an open-ended survey of information security practitioners asking them about their biggest pain points. I was fortunate to get a ton of responses and ended up with a list just short of a mile long. The list was very diverse, but a few themes emerged. One that I wasn’t surprised to see was that many expressed a distaste for the part of their job that involves writing.

Let’s dig into why so many people hate writing.

I’d rather be hunting, pen testing, etc.

This was by far the most common response, and I’m sure you can relate. Most of us get our thrills by catching bad guys or pretending to be them. You probably got into this business because you want to help people be more secure, you love solving puzzles, or you just love breaking things. Having to write up your findings takes time away from that. This is especially painful when you can’t get ahead of your alert queue or can’t find enough time to go hunting as it is. Consultants also have to contend with the limitation on hours for the gig. If you have to spend 50% of your hourly budget writing a report, that’s time that could be spent actually doing the job.

I’m not good at it

Many practitioners simply aren’t good at putting their actions and findings into words. We all know great investigators and pen testers who can do truly amazing things, but have the communication skills of a rock. To make things worse, many experience imposter syndrome where they perceive their writing is poor when it really isn’t. The only thing worse than being bad at writing is thinking you’re bad at it. After all, who wants to spend time doing something they believe they suck at?

Nobody listens to my findings and recommendations

Perhaps the most bone-crushingly painful part of writing occurs when you spend a lot of time validating something and coming up with specific recommendations, only to find out they were ignored. You might have experienced this when performing a pen test a year after a previous one, only to find the same vulnerabilities still exist. Worse yet, you might respond to a breach only to find the recommendations for preventing similar breaches weren’t followed, resulting in another one.

These are just a few of the reasons you probably hate writing. I don’t blame you. They are all legitimate pains, and over time they can crush your soul. It might surprise you, but at one time in my life I hated writing, too. I avoided it like the plague, and while I wasn’t horrible at it, I certainly wasn’t very good.

Moving Past the Hate

If you knew me then, you’d have a hard time believing I was capable of writing a few books and hundreds of articles, let alone a PhD dissertation. Luckily, early in my career I figured out that writing was an important part of it. So, how did I turn something I hated and wasn’t great at into one of my biggest strengths? I did what most hackers do: I broke it down into parts that made sense and developed a system.

I sat down and thought about things I liked to read, and how I could relate those to the things I had to write. I’m a big fan of modern authors like Tom Clancy, John Grisham, and Stephen King. I broke down what I liked about their work and started thinking about what they did successfully and how it could relate to technical writing. Eventually I was able to produce a repeatable system that I owe a lot of my career success to.

I’m not going to go in depth on my system here (I’m going to be sharing that later), but in the next post I’ll share some reasons why hackers hate writing that they may not realize.

More on Writing

The truth is that most of us don’t really enjoy the part of our job that requires writing reports. However, no matter what area of security you work in, a great deal of your success will be determined by your ability to write well. Fortunately, writing doesn’t have to be as painful as it is. By spending some time up front, hacking the process a bit, and setting up a repeatable system, you can speed up the writing process and gain back time you can spend breaking things and hunting the adversary. Not only that, you can also become a much more effective agent of change, helping your network or your clients’ networks become more secure. It can even help you become a better teacher and build your community resume by learning how to share your expertise through a public blog.

If you’re interested in learning more about my personal systems for better technical writing, I’ll be releasing more articles in that area soon, as well as a couple of videos. You can subscribe to the mailing list below to get access to that content first, along with a few exclusives that won’t be on the site.

Sign Up for the Mailing List Here


Survey: Technical Writing Pain Points

I’m currently working on a multi-part series on technical writing. Specifically, I’m addressing some of the more common things I hear from security pros who hate writing things like penetration test reports and incident reports. I’ll also be addressing informal technical writing, like blog posts.

I’d love to hear from you. What do you dislike about technical writing the most? What is it that makes you dread it or put it off until the last minute?

Either let me know in the comments, tweet me using the hashtag #ReportingPain, or you can fill out the anonymous survey here: https://www.surveymonkey.com/r/JS9KVX2.

Research Call for Security Investigators

I’m currently seeking security investigators for a research study I’m conducting on cognition and reasoning related to the investigative process. I need individuals who are willing to sit down with me over the phone and participate in an interview focused on individual investigations they’ve worked. The interviews will be focused on describing the flow of the investigation, your thought process during it, and challenges you encountered. I’ll ask you to describe what happened and how you made specific decisions. Specifically, I’m looking for investigations related to the following areas:

  • Event Analysis: You received some kind of alert and investigated it to determine whether it was a true positive or false positive.
  • Incident Response: You received notification of a breach and performed incident response to locate and/or remediate affected machines.

Ideally, these should be scenarios where you felt challenged to employ a wide range of your skills. In either domain, the scenario doesn’t have to lead to a positive confirmation of attacker activity. Failed investigations that led to a dead end are also applicable here.

A few other notes:

  • You will be kept anonymous
  • Any affected organization names are not needed, and you don’t have to give specifics there. Even if you do, I won’t use them in the research.
  • You will be asked to fill out a short (less than five minute) demographic survey
  • The phone interview will be recorded for my review
  • The phone interview should take no longer than thirty minutes
  • If you have multiple scenarios you’d like to walk through, that’s even better
  • At most, the scenario will be generalized and described at a very high level in a research paper, but it will be done in a generic manner that is not attributable to any person or organization.

If you’d like to help, please e-mail me at chris@chrissanders.org with the subject line “Investigation Case Study.”

Applied Network Security Monitoring, the book!

I’m thrilled to announce my newest project, the book Applied Network Security Monitoring, written along with my co-authors Liam Randall and Jason Smith.

Better yet, I’m excited to say that 100% of the royalties from this book will be going to support some great charities, including the Rural Technology Fund, Hackers for Charity, Hope for the Warriors, and Lighthouse Youth Services.

You can read more about the book, including a full table of contents at its companion site, here: http://www.appliednsm.com/.

Information Security Incident Morbidity and Mortality (M&M)

It may be a bit cliché, but encouraging the team dynamic within an information security group ensures mutual success over individual success. There are a lot of ways to do this, including items I’ve discussed before such as fostering the development of infosec superstars or encouraging servant leadership. Beyond these things, there is no better way to ensure team success within your group than to create a culture of learning. Creating this type of culture goes well beyond sending analysts to formalized courses or paying for certifications. It relies upon adopting the mindset that in every action an analyst takes, they should either be teaching or learning, with no exceptions. Once every analyst begins seeing every part of their daily job as an opportunity to learn something new or teach something new to their peers, then a culture of learning is flourishing.

A part of this type of organizational culture is learning from both successes and failures. The practices of Network Security Monitoring (NSM) and Incident Response (IR) are centered on technical investigations and cases, and when something bad eventually happens, incidents. This is not unlike medicine, which is also focused on medical investigations and patient cases, and when something bad eventually happens, death.

Medical M&M

When death occurs in medicine, it can usually be classified as either avoidable or inevitable, both from the patient’s standpoint and as it relates to the medical care that was provided. Whenever a death is seen as something that might have been prevented or delayed with modifications to the medical care provided, the treating physician will often be asked to participate in something called a Morbidity and Mortality Conference, or M&M as they are often casually called. In an M&M, the treating physician presents the case from the initial visit, including the presenting symptoms and the patient’s initial history and physical assessment. The presentation continues through the diagnostic and treatment steps that were taken, all the way through the patient’s eventual death.

The M&M presentation is given to an audience of peers, including any other physicians who may have participated in the care of the patient in question, as well as physicians who had nothing to do with the patient. The general premise is that these peers will question the treatment process in order to uncover any mistakes that may have been made or processes that could be improved upon.

The ultimate goal of the medical M&M is for the team to learn from any complications or errors, to modify behavior and judgment based upon the experience gained, and to prevent repetition of errors leading to complications. This practice has existed within medicine for over one hundred years and has proven to be wildly successful.

Information Security M&M

I’ve written about how information security can learn from the medical field on multiple occasions, including recently discussing the use of Differential Diagnosis for Network Security Monitoring. The concept of M&M is also something that I think transitions very well to information security.

As information security professionals, it is very easy to miss things. I’m a firm believer that prevention eventually fails, and as a result, we can’t be expected to live in a world free from compromise. Rather, we must be positioned so that when an incident does occur, it can be detected and responded to quickly. Once that is done, we can learn from whatever mistakes occurred that allowed the intrusion, and be better prepared the next time.

When an incident occurs, we want it to be because of something out of our hands, such as a very sophisticated attacker or an attacker using an unknown zero day. The truth of the matter is that not all incidents are that complex, and oftentimes there are ways in which detection, analysis, and response could occur faster. The information security M&M is a way to collect that information and put it to work. In order to understand how we can improve from mistakes, we have to understand why they are made. Uzi Arad summarizes this very well in the book “Managing Strategic Surprise”, a must-read for information security professionals. In it, he cites three problems that lead to failures in intelligence management, which also apply to information security:

  • The problem of misperception of the material, which stems from the difficulty of understanding the objective reality, or the reality as it is perceived by the opponent.
  • The problems stemming from the prevalence of pre-existing mindsets among the analysts that do not allow an objective professional interpretation of the reality that emerges from the intelligence material.
  • Group pressures, groupthink, or social-political considerations that bias professional assessment and analysis.

The information security M&M aims to provide a forum for overcoming these problems through strategic questioning of incidents that have occurred.

When to Convene an M&M

In an Information Security M&M, the conference should be initiated after an incident has occurred and been remediated. Selecting which incidents are appropriate for M&M is a task usually handled by a team lead or member of management who can recognize when an investigation could have been handled better. The conference should occur reasonably soon after the incident so important details are fresh in the minds of those involved, but far enough out that those involved have had time to analyze the incident as a whole, post-mortem. An acceptable time frame is usually about a week after the incident has occurred.

M&M Presenter(s)

The presentation of the investigation will often involve multiple individuals. In medicine, this may include an initial treating emergency room physician, an operating surgeon, and a primary care physician. In information security, this could include an NSM analyst who detected the incident, the incident responder who contained and remediated the incident, the forensic investigator who performed an analysis of a compromised machine, or the malware analyst who reverse engineered the malware associated with the incident.

M&M Peers

The peers involved with the M&M should include at least one counterpart from each specialty. This means that for every NSM analyst directly involved with the case, there should be at least one other NSM analyst who had nothing to do with it. This aims to get fresh outside views that aren’t tainted by the need to support any actions taken in relation to the specific investigation. In larger organizations and more ideal situations, it is nice to have at least two counterparts from each specialty, with one having less experience than the presenting individual and one having more.

The Presentation

The presenting individual or group of individuals should be given at least a few days’ notice before their presentation. Although the M&M isn’t considered a formal affair, a reasonable presentation is expected to include a timeline overview of the incident, along with any supporting data. The presenter should walk through the detection, investigation, and remediation of the incident chronologically, presenting new findings only as they were discovered during this progression. Once the chronological presentation is given, the incident can then be examined holistically.

During the presentation, participating peers should be expected to ask questions as they arise. Of course, this should be done respectfully by raising your hand as the presenter is speaking, but questions should NOT be saved for after the presentation. This ensures questions are framed to the presenter as a peer would have arrived at them during the investigation process.

Strategic Questioning

Questions should be posed to presenters in such a way as to determine why something was handled in a particular manner, or why it wasn’t handled in an alternative manner. As you may expect, it is very easy to offend someone when asking these types of questions. It is therefore critical that participants enter the M&M with an open mind, and that both presenters and peers ask and respond to questions professionally and with due respect.

Initially, it may be difficult for peers to develop questions that are entirely constructive and helpful in overcoming the three problems identified earlier. There are several methods that can be used to stimulate the appropriate type of questioning.

Devil’s Advocate

One method that Uzi Arad mentions in his contribution to “Managing Strategic Surprise” is the Devil’s Advocate method. In this method, peers attempt to oppose nearly every analytical conclusion made by the presenter. This is done by first determining which conclusions can be challenged, then collecting information from the incident that supports the alternative assertion. It is then up to the presenter to support their own conclusions and debunk competing thoughts.

Alternative Analysis (AA)

R.J. Heuer presents several of these methods in his paper, “Limits of Intelligence Analysis”. These methods are part of a set of analytic tools called Alternative Analysis (AA).

Group A / Group B

This analysis involves two groups of experts analyzing the incident separately, based upon the same information. This requires that the presenters (Group A) provide supporting data related to the incident prior to the M&M so that the peers (Group B) can work collaboratively to come up with their own analysis to be compared and contrasted during the M&M. The goal is to establish two independent centers of thought. Whenever the two groups reach different conclusions on a point, additional discussion is required to find out why the conclusions differ.

Red Cell Analysis

This method focuses on the adversarial viewpoint, in which peers assume the role of the adversary involved with the particular incident. In doing this, they will question the presenter as to how their investigative steps were completed in reaction to the attacker’s actions. For instance, a typical defender may be focused solely on stopping malware from communicating back to the attacker, but the attacker may be more concerned with whether or not the defender was able to decipher the communication that was occurring. This can lead to a very positive line of questioning that results in new analytic methods, helping to better assess the attacker’s impact and benefit containment.

What If Analysis

This method is focused on the potential causes and effects of events that may not have actually occurred. During detection, a peer may ask how the attack might have been detected if the mechanism that caught it hadn’t done so. In the response to the event, a peer might ask what the presenter would have done had the attacker been caught during the data exfiltration process rather than after it had already occurred. These questions don’t always relate directly to the incident at hand, but they provide incredibly valuable, thought-provoking discussion that will better prepare your team for future incidents.

Analysis of Competing Hypothesis

This method is similar to what occurs during a differential diagnosis, where peers create an exhaustive list of alternative assessments of the symptoms that were presented. This is most effectively done by using a whiteboard to list every potential diagnosis and then ruling each out based upon testing and review of additional data. You can review my article on differential diagnosis of NSM events here for a more thorough discussion of this type of questioning.
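
If you want something more concrete than a whiteboard, the same idea can be tabulated in a few lines of code. The sketch below is purely illustrative (the hypotheses, evidence items, and ratings are made up for the example); it ranks each competing hypothesis by how much evidence is inconsistent with it, which is the core of the ACH approach: the favored hypothesis is the one with the least inconsistent evidence, not the most supporting evidence.

```python
# Illustrative ACH matrix: evidence items rated against competing hypotheses.
# Ratings: +1 = consistent, 0 = neutral, -1 = inconsistent (example values only).
evidence_matrix = {
    "Outbound beaconing every 60 seconds": {
        "Commodity malware": 1, "Targeted intrusion": 1, "Misconfigured updater": -1,
    },
    "Credentials reused on other hosts": {
        "Commodity malware": 0, "Targeted intrusion": 1, "Misconfigured updater": -1,
    },
    "No lateral movement observed": {
        "Commodity malware": 1, "Targeted intrusion": -1, "Misconfigured updater": 1,
    },
}

def rank_hypotheses(matrix):
    """Rank hypotheses by the number of inconsistent evidence items (fewest first)."""
    inconsistencies = {}
    for ratings in matrix.values():
        for hypothesis, rating in ratings.items():
            inconsistencies.setdefault(hypothesis, 0)
            if rating < 0:
                inconsistencies[hypothesis] += 1
    return sorted(inconsistencies.items(), key=lambda item: item[1])

for hypothesis, count in rank_hypotheses(evidence_matrix):
    print(f"{hypothesis}: {count} inconsistent evidence item(s)")
```

The ratings still come from the discussion in the room; the table just keeps the group honest about which hypotheses the evidence actually argues against.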

Key Assumptions Check

Most sciences tend to make assumptions based upon generally accepted facts. This method of questioning is designed to challenge key assumptions and how they affect the investigation of a scenario. It most often pairs with the What If analysis method. As an example, it is commonly assumed that malware running within a virtual machine can’t escape to the host or to other virtual machines residing on it. Given an incident being presented where a virtual machine has been infected with malware, a peer might ask what action would be taken if the malware did indeed escape the virtual environment and infect other virtual machines on the host, or the host itself.

Outcome

During the M&M, all participants should actively take notes. Once the M&M is completed, the presenting individuals should combine their notes into a final report that accompanies their presentation materials and supporting data. This report should include a listing of any points that could have been handled differently, and any improvements that could be made to the organization as a whole, either technically or procedurally. The report should be attached to the case file associated with the investigation of the incident.

Additional Tips

Having organized and participated in several of these conferences and reviews of similar scope, I have a few other pointers that help in ensuring they provide value.

  • M&M conferences should be held only sporadically, with no more than one per week and no more than three per month.
  • It should be stressed that the purpose of the M&M isn’t to grade or judge an individual, but rather, to encourage the culture of learning.
  • M&M conferences should be moderated by someone at a team lead or lower management level to ensure that the conversation doesn’t get too heated and to steer questions in the right direction.
  • If you make the decision to institute M&M conferences, it should be a requirement that everybody participates at some point, either as a presenter or a peer.
  • The final report that is generated from the M&M should be shared with all technical staff, as well as management.
  • Information security professionals, not unlike doctors, tend to have big egos. The first several conferences might introduce some contention and heated debates. This is to be expected initially, but will work itself out over time with proper direction and moderation.
  • The M&M should be seen as a casual event. It is a great opportunity to provide food and coordinate other activities before and after the conference to take the edge off.
  • Be wary of inviting upper management into these conferences. Their presence will often inhibit open questioning and response and they often don’t have the appropriate technical mindset to gain or provide value to the presentation.

It is absolutely critical that when initiating these conferences, it is done with care. The medical M&M was started in the early 1900s by a surgeon named Dr. Ernest Codman at Massachusetts General Hospital (MGH) in Boston. MGH was so appalled by Dr. Codman’s suggestion that the competence of surgeons should be evaluated that he eventually lost his staff privileges. Now, M&M is a mainstay in modern medicine and something that is done in some of the best hospitals in the world. I’ve seen similar types of shunning occur in information security when these kinds of peer review opportunities are suggested. As information security practitioners, it is crucial that we are accepting of this type of peer review and that we encourage group learning and the refinement of our skills.

References:

  • Campbell, W. (1988). “Surgical morbidity and mortality meetings.” Annals of the Royal College of Surgeons of England, 70(6), 363–365. PMC 2498614. PMID 3207327.
  • Arad, Uzi (2008). “Intelligence Management as Risk Management.” In Paul Bracken, Ian Bremmer, and David Gordon (Eds.), Managing Strategic Surprise (pp. 43–77). Cambridge: Cambridge University Press.
  • Heuer, Richards J., Jr. (2005). “Limits of Intelligence Analysis.” Orbis, 49(1).