Category Archives: Network Security

Introducing the Source Code Podcast

A few weeks ago on Twitter, I teased that I was working on a new podcast called “Source Code”. Creating a podcast is something I’ve always wanted to do, but I’ve never really had the opportunity to pursue it until now. There are a lot of great podcasts in the information security space already, and I’ve been fortunate enough to be a guest on a couple of them. So, what makes mine different (aside from being able to make fun of my accent)?

Source Code is an information security podcast that’s all about education. Rather than simply providing technical segments or news, Source Code is focused on the people that push information security forward and battle in the trenches every day.

We interview practitioners from every facet of information security about their origin story. This includes how they got their start in the field and the career decisions that made them successful (or slowed them down) along their path. It’s the story of their source code — what makes them tick. We also talk about current opinions on the state of security education, including what we’re doing right and what we’re doing wrong.

You’ll hear from plenty of household names, as well as some people you should know about with interesting backstories and unique contributions to the field. Source Code celebrates the diversity of backgrounds that makes information security a unique place to exist.

The #1 question I get asked is “How do I get into infosec?” My hope is that through this podcast, I create a library of stories that can help answer that question by showing people that there are a ton of different ways to get started, and each one can lead to great success.

The podcast will live here:

You can also subscribe to it using your favorite podcasting platform:

If you like what you hear, I’d sincerely appreciate you subscribing, “liking”, or giving a positive review of the podcast on whatever platform you use. 

The show is seasonal, and the first season will have eight episodes that will be released every other Friday (you get this one early). I have some GREAT guests lined up, so stay tuned.

I hope you enjoy it!

Increase Security Reporting with Contact Cards

After a really nice weekend, Zeke walked into his office and logged into his workstation. He made a habit of getting to the office around 7 AM on Mondays so that he could go through the pile of unanswered e-mails from the previous week and clear out all the junk that had accumulated over the weekend. As he was going through e-mails, he spotted one with the subject line “401k performance highlights” that was sent towards the end of the day on Friday. He had seen it then, but was in a rush to get out the door. He opened the e-mail and read that his 401k had performed better than expected over the previous quarter. A link was provided to review the details, so he clicked it. That was when he realized he had made a mistake.

The link didn’t go to his 401k provider as expected, but instead went to a suspicious-looking fake site that was made to look like it. Zeke had gone through information security awareness training, so he knew how to spot forgeries like this. It was very obvious that the site was hosted on a domain made to look like a legitimate bank, but was in fact not associated with it at all. He immediately closed his browser window and waited. His system seemed to be fine. No weird pop-ups, no prompts to download any software, no weird slowness. He had dodged a bullet. He remembered that he was supposed to report occurrences like this, but wasn’t entirely sure where. Was it the IT helpdesk? No, it was a special e-mail address… or was it a phone number? He quickly looked up the number of someone he knew who worked in information security. Hmm, no answer. It was still pretty early, so that person might not have been in yet. He shrugged it off and went about the business of responding to unanswered mail. By the time his day got going, he had forgotten all about the incident, and it was never reported.

Stories like the one here are common in large organizations. Many companies do their part and invest heavily in security awareness programs that teach employees how to spot phishing attempts and security anomalies so that they can be reported. However, many organizations struggle with the final step in this process — effectively enabling their users to report anomalous activity.

So, why do users fail to report security issues when they recognize them? You might expect me to provide a long-winded psychological explanation. However, the answer is much simpler than that. People rarely have the opportunity to report security incidents, so they forget how to do it. Just like Zeke, they forget the e-mail address or the phone number. Further, they are often unsure whether anyone actually cares about what they’ve observed, and they’re worried that their incompetence will be exposed. Of course, there isn’t any incompetence here, but all that matters is that the individual perceives the possibility of it. Nobody likes feeling dumb, so people certainly aren’t going to go out of their way to report an incident if it means potentially going through multiple layers of people; that only increases the chance that their perceived incompetence gets exposed. In sum, people are less likely to report security incidents unless it’s incredibly easy, and that is a defense mechanism that is difficult to overcome.

How do you eliminate that defense mechanism? Well, that’s a difficult question with multiple facets and is far beyond the scope of what I have to offer in this humble blog post. I think the better and more practical approach is to make security incident reporting as simple as conceivably possible. For that, I offer a non-technical solution.

The best strategy for reporting security incidents that I’ve seen is simple. Provide every employee with a security reporting contact card (SRCC), like the one seen in the image below.

As you can see, this card is incredibly simple. Ideally, this is printed to the same size as a business card on heavy stock. This provides a form factor sizable enough to display all the necessary information, but small enough to be easily tucked into a wallet or purse. The idea is simple: every employee receives one of these cards and is requested to keep it close at all times. While most employees can’t be expected to remember an obscure e-mail address or phone number, they will remember that they were given this card and can reference it without hassle.

So, what is necessary to include on the SRCC? This can vary, but I recommend making sure that the card can answer these questions:

  • Who can I e-mail/call for any network security related issue?
  • Who can I e-mail/call for any physical security related issue?
  • Where can I forward phishing or scam e-mails?

The purpose of this card is not to educate people on what’s reportable. That should be a goal of your information security awareness training and certainly won’t fit on a card. However, the three questions mentioned above cover a lot of the things that will generally be reportable. By providing e-mail addresses and phone numbers, you provide an implied escalation path if something is deemed too important or time-sensitive for e-mail.

A few other tips:

  • Use a bright, easily noticeable color on your card. This will help people spot it and remember where it is in their wallet/purse.
  • Make the design somewhat different from your normal company business card template.
  • Set up a special e-mail address to receive forwarded phishes, separate from your normal intake queue. This will help with prioritization and tasking.
  • Build this into your security awareness training. Occasionally challenge people to produce these cards so everyone gets used to keeping them handy. Make sure you are clearly identifying what is reportable and what isn’t based on your organization’s threat model and ability to evaluate reported incidents.

If you want to get started right away, here’s a Microsoft Word template you can download and use now. Just replace the logo and information with your own, print it out, and hand them out to your employees.

So, what ever happened with Zeke? Well, it turns out that when he clicked the link in the phishing e-mail, his browser downloaded a Flash exploit that led to a malware infection. An attacker was able to leverage access to this system to retrieve sensitive information about Zeke’s company. It took the attackers a few weeks to complete their attack, but since the detection tools in the organization didn’t pick it up, and Zeke never reported it, the attack wasn’t discovered until much later. The organization did a lot of things right, but a failure to make incident reporting easy subverted much of that effort. Security reporting contact cards are a simple, affordable, and effective way to make sure you don’t fall victim to a similar scenario. The minimal cost of printing and distributing these cards is nothing compared to what you might pay if something goes unreported.


The Effects of Opening Move Selection on Investigation Speed

What follows is a shortened version of a longer paper that will be released at a later time. You can also learn more about this research by watching my recent Security Onion Conference 2016 video where I discuss these results and other similar experiments.


The core construct of computer network defense is the investigation. It’s here where human analysts receive anomalous signals and pursue evidence that might prove the existence of an information system breach. While little formalized methodology related to the investigation process exists, research in this area is beginning to emerge.

Existing research in human thought and knowledge acquisition is applicable to information security and computer network defense. Daniel Kahneman provided modern research in support of a dual-process theory of thinking that defines two separate processes governing human thought. The first process, called intuitive or system 1 thinking, is automatic and usually kicks in without us directly acknowledging it. You don’t have to think about brushing your teeth or starting your car, you just do it. The second process, called reflective or system 2 thinking, is deliberate and requires attentive focus. You have to think about math problems and your next move when playing checkers. Both of these systems are in play during the investigation process.

In an investigation, analysts use intuitive thought when pursuing data related to an alert. This is most often the case when the analyst makes their opening move. The analyst’s opening move is the first data query they execute after receiving an alert. By virtue of being the first move, the analyst doesn’t apply a lot of explicit reflective thought on the issue and they simply jump to the place they believe will provide the quickest and most definitive answer. It’s assumed that in these cases, the analyst’s first move is probably the data source they perceive as being the most valuable.

The goal of the present research was to determine which common data source analysts were most likely to use as their opening move, and to assess the impact of that first move on the speed of the investigation.


The foundation of this research was a purpose-built investigation simulator. The simulator was designed to recreate the investigation environment in a tool-agnostic manner, such that individual scenarios could be loaded for a sample population and the variables could be tightly controlled.

A pool of security analysts was selected based on their employment history. Every analyst selected was currently, or had recently been, in a role where they were responsible for investigating security alerts to determine whether a security compromise had occurred. Demographic information was collected, and analysts were placed into three skill groups based on their qualifications and level of experience: novice, intermediate, or expert.


Group A – Exploit Kit Infection

The primary experiment group was asked to connect to the investigation simulator remotely and work through the provided investigation scenario to arrive at a conclusion about whether an infection or compromise had successfully occurred.

The scenario presented the user with a Suricata IDS alert indicating that an internal host visited an exploit kit landing page.



Figure 1: The Suricata IDS alert that initiated the simulation

The following data sources were provided for investigation purposes:

Data Source | Query Format | Output Style
Full packet capture (PCAP) | Search by IP or port | TCPDump
Network flow/session data | Search by IP or port | SiLK IPFIX
Host file system | Search by filename | File path location
Windows Logs | Option: User authentication / process create logs | Windows event log text
Windows Registry | Option: Autoruns / System restore / application executions (MUI cache) | Registry keys and values
Antivirus Logs | Search by IP | Generic AV log text
Memory | Option: Running process list / Shim cache | Volatility
Open Source Intelligence | Option: IP or domain reputation / file hash reputation / Google search | Text output similar to popular intelligence providers
Table 1: Data sources provided for Group A

Subjects were instructed to work towards a final disposition of true positive (an infection occurred) or false positive (no infection occurred). Once they had enough information to reach a conclusion, they were to indicate their final disposition in the tool, at which point the simulation exited.

The simulator logged every query the analysts made during the experiment, along with the start and end time of the investigation. This produced a timeline of each analyst’s entire investigation, which was used to evaluate the research questions.


Group B – PCAP Data Replaced with Bro Data

Based on the results from Group A, a second non-overlapping sample group of analysts was selected to participate in another experiment. Since Group A indicated a preference for higher context PCAP data, the second scenario removed the PCAP data option and replaced it with Bro data, another high context data source that is more structured and organized. The complete list of data sources provided for this group was:

Data Source | Query Format | Output Style
Bro | Search by IP | Bro logs
Network flow/session data | Search by IP or port | SiLK IPFIX
Host file system | Search by filename | File path location
Windows Logs | Option: User authentication / process create logs | Windows event log text
Windows Registry | Option: Autoruns / System restore / application executions (MUI cache) | Registry keys and values
Antivirus Logs | Search by IP | Generic AV log text
Memory | Option: Running process list / Shim cache | Volatility
Open Source Intelligence | Option: IP or domain reputation / file hash reputation / Google search | Text output similar to popular intelligence providers

Table 2: Data sources provided for Group B

All experiment procedures and investigation logging measures remained in place, consistent with group A.

Group C – Survey Group

A third semi-overlapping group was selected at random to collect self-reported statistics on the opening move analysts believed they would be most likely to make given a generic investigation scenario.

Using a combination of manually polling analysts and collecting responses from Twitter polling, analysts were asked the following question:

In a normal investigation scenario, what data source would you look at first?

The multiple-choice options presented were:

  1. PCAP
  2. Flow
  3. Open Source Intelligence
  4. Other


The first item evaluated was the distribution of opening moves. Simply put, what data source did analysts look at first?

In Group A, 72% of analysts chose PCAP as their first move, 16% chose flow data, and the remaining 12% chose OSINT. The observed numbers differ significantly from the numbers analysts reported during informal polling. In the Group C polling, 49% of analysts reported PCAP would be their first move, 28% chose flow data, and 23% chose OSINT.


Chart 1: Opening move selection observed for Group A

The mean time to disposition (MTTD) metric was calculated for each first-move group by taking the difference between the investigation start and end times for each analyst and averaging the results of all analysts within the group. Analysts who chose PCAP had an MTTD of 16 minutes, those who chose flow had an MTTD of 10 minutes, and those who chose OSINT had an MTTD of 9 minutes.


Chart 2: Time to disposition for Group A
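The MTTD calculation described above is straightforward to reproduce. Here’s a minimal Python sketch, assuming hypothetical simulator log records with one entry per analyst containing their opening move and the investigation start/end timestamps (the field names and sample values are illustrative, not the actual research data):

```python
from datetime import datetime
from statistics import mean

# Hypothetical simulator records: one per analyst, with the opening
# move and the investigation start/end timestamps.
records = [
    {"opening_move": "pcap", "start": "2016-09-12 09:00:00", "end": "2016-09-12 09:17:00"},
    {"opening_move": "pcap", "start": "2016-09-12 09:05:00", "end": "2016-09-12 09:20:00"},
    {"opening_move": "flow", "start": "2016-09-12 09:00:00", "end": "2016-09-12 09:10:00"},
]

def mttd_by_opening_move(records):
    """Mean time to disposition (in minutes), grouped by opening move."""
    fmt = "%Y-%m-%d %H:%M:%S"
    durations = {}
    for r in records:
        minutes = (datetime.strptime(r["end"], fmt)
                   - datetime.strptime(r["start"], fmt)).total_seconds() / 60
        durations.setdefault(r["opening_move"], []).append(minutes)
    return {move: mean(times) for move, times in durations.items()}

print(mttd_by_opening_move(records))
```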


In Group B where PCAP data was replaced with Bro data, 46% of analysts chose Bro data as their first move, 29% chose OSINT, and 25% chose flow.


Chart 3: Comparison of group A and B opening moves

Analysts who chose Bro had an MTTD of 10 minutes, while those who chose flow and OSINT had MTTDs of 10 minutes and 11 minutes, respectively.


Chart 4: Comparison of group A and B average time to close


While not entirely conclusive, the data gained from this research does provide several suggestions. First, given that an overwhelming 72% of analysts chose to begin their investigation with PCAP data, it’s clear that analysts prefer a higher context data source when it’s available, even when lower context data sources are also available. In these simulations there were multiple ways to come to the correct conclusion, and PCAP data did not have to be examined at all to reach it.

The data also suggests that an opening move to a high context but relatively unorganized data source can negatively affect the speed at which an analyst reaches an appropriate conclusion. The MTTD for analysts whose opening move was PCAP in Group A was significantly higher than for those who started with the lower context flow and OSINT data sources. This is likely because PCAP data contains extraneous data that isn’t beneficial to the investigator, and it takes much longer to visually parse and interpret. The results of the Group B experiment further support this finding. PCAP was replaced with Bro log data, which generally contains most of the same useful information that PCAP provides, but organizes it in a much more intuitive way that makes it easier to sift through. Analysts who chose Bro data for their opening move had an MTTD that was much lower than for PCAP and comparable to the flow and OSINT data sources.

The comparison between observed and reported opening moves highlights another finding: analysts often don’t understand their own tendencies during an investigation. There was a significant difference between the number of people who reported they would choose to investigate an anomaly with PCAP and those who actually did. Opening move selection is somewhat situational, however, so the present study did not introduce enough unique simulations to truly validate the statistics supporting this finding.

Possible limitations of this study mostly center on the limited number of trials, as only one simulation (albeit modified for one group) was used. More trials would serve to strengthen the findings. In addition, there is some selection bias towards analysts who are more specialized in network forensics than host forensics, which likely accounts for none of the first moves targeting host-based data. Finally, in the simulations conducted here, access to all data sources took an equal amount of time. In a real-world scenario, some data sources take longer to access. However, since PCAP and other higher context data sources are usually larger on disk, the added time to retrieve this data would only strengthen the finding that PCAP data negatively affects investigation speed.


Overall, this research provides insight into the value of better organized higher context data sources. While PCAP data contains an immense level of context, it is also unorganized, hard to filter and sift through compared to other data types, and has extraneous data not useful for furthering the investigation. To improve investigation efficiency, it may be better to make opening moves that start with lower context data sources so that a smaller net can be cast when it comes time to query higher context sources. Furthermore, when more organized higher context data sources are available, they should be used.

While the present research isn’t fully conclusive due to its sample size and a limited number of simulation trials, it does provide unique insight into the investigation process. The methods and strategy used here should be applicable for additional research to further confirm the things this data suggests.


Interested in learning more about the investigation process, choosing the right data sources to look at, and becoming a better analyst? Sign up here to be the first to hear about my new analyst training course being released later this year. Mailing list subscribers will get the first opportunity to sign up for the exclusive web-based course, and space is limited. Proceeds from this course will go to benefit charity.

Writing for Security: Action Items that Provoke Change

Most people don’t realize it, but the success of what you write will probably be measured by how actionable it is. I’ve read hundreds of security assessments and forensic reports that go into a perfect level of detail, only to find that they fall short of delivering what every report needs: something actionable.

Imagine watching a great movie. They’ve done a wonderful job developing complex characters, the plot engagingly builds, and you’re on the edge of your seat the entire time. Right as the climax is happening and the story is coming to its pivotal point… the credits start rolling. It’s over. Although you might have enjoyed the couple of hours you invested up until that point, you’re going to walk away with a bad taste in your mouth because you were robbed of a satisfactory conclusion. We all know movies like this, and usually chalk it up to lazy writing. This is exactly what you’re doing when you write without providing something actionable.

Whether you’re writing a security assessment report or an incident response report, your purpose isn’t merely to inform, it’s also to persuade. It isn’t enough that someone knows there is a vulnerability on their network. They have to be persuaded to implement controls that mitigate the risk of that vulnerability. It doesn’t matter if your forensic report does a good job explaining how an attacker got in. It has to persuade the reader to implement the necessary process changes or install the right tools to prevent it from happening again.

There are plenty of techniques you can use to be persuasive when you write, but before you do that you must identify what you want the reader to do. These are your action items, and the ability to identify them is what makes you an expert. Plenty of people can find vulnerabilities or find evidence of an attacker, but if you can’t identify actions to mitigate the risk associated with those findings, then your report isn’t useful.

Identifying action items is all about mitigating risk. You should give the reader advice that prevents bad guys from doing something, or detects when they do it. You should always do both when possible.

Prevention Action Items

Prevention is as simple as making changes that keep bad things from happening. If you can give your reader steps they can take that prevent an attacker from doing something, that’s usually a win.

In reporting, I like to conceptualize change in terms of how difficult it is to accomplish. After all, it’s a lot harder to persuade someone to make a change if it’s going to be insanely difficult. Part of good writing is being honest with your readers, so it helps to identify the level of effort required for a requested change. If it’s going to be easy, you should make that clear so the reader is compelled to do it quickly. If it’s going to be difficult, be up front about it and break it into steps. Your readers will appreciate this and will trust you more.

Changes will typically be categorized in terms of people, process, and technology.

  • People: Changing mindsets, providing training, hiring new staff, replacing existing staff.
  • Process: Changing the way human or tech-centric things are done, adding new processes.
  • Technology: Configuration changes, additional software, new technology.

In most cases, technology changes will be the easiest and people changes will be the hardest. It’s easy to manipulate systems, but it’s very difficult to acquire new people or change the way existing people think. The latter requires a lot more political and financial capital. This hierarchy of difficulty is how you should approach identifying prevention actions in your report. You should also report them in order of easiest to hardest within each individual finding.

In a lot of cases, some changes might touch all three areas. For example, building a security operations team requires hiring new people, building new processes, and implementing new technology. These massive changes should be saved for last and you should provide plenty of ancillary resources for the reader, as they will often involve topics that need to be covered in much larger depth.

When you are ready to start identifying action items, it’s helpful to ask yourself these three questions, filling in the blanks with the pertinent details from the finding you’re addressing:

  1. Are there any changes that can be made to prevent an attacker from __________?
  2. Is there anything new that can be done to prevent an attacker from __________?
  3. Is there anything that should be stopped in order to prevent an attacker from __________?

Let’s look at some examples of common findings and their action items. Notice that some action items combine categories, and some categories aren’t present.


Security Assessment: Web Server – Utilizing Plaintext Authentication

  • Technology: Change authentication method


Security Assessment: Local Windows Admin Account – No Password Rotation

  • Technology: Purchase password management software
  • Process: Institute manual change process


Incident Report: Attacker Guessed VPN Password

  • Technology: Institute lockout after three failed authentication attempts. Enforce stronger password requirements and more frequent rotation.
  • Technology + Process: Implement two-factor authentication.
  • People: Train users to use passwords that can’t be easily guessed


Incident Report: Workstation Compromised Because User Clicked Phishing Link

  • Technology: Install an e-mail threat protection appliance.
  • Process: Force users to use non-admin accounts and escalate privileges when administrative actions are needed.
  • People: Provide phishing awareness training to users.


Incident Report: Attacker Moved Laterally with Ease Due to Flat Network

  • Technology: Architect network for better segmentation.


These are just examples, but you can see where we started with technology and moved towards people. In most cases, the technology solution is going to be the easiest to implement in terms of labor hours. Of course, this doesn’t mean that a technology solution is always the best, but it is a step in the right direction. You want to give the reader the easy wins so they are more compelled to keep working towards the bigger wins. The first step is the hardest to take.

Detection Action Items

Detective controls are designed to detect when bad things happen. The vast majority of reports you’ve read probably don’t include them, which is a shame. Whenever you are making preventive recommendations, you should also make detective recommendations. There are a few reasons why:

  • Many organizations won’t be able to implement preventive changes in a timely manner, or at all, due to political or budgeting constraints.
  • Prevention eventually fails, and a key tenet of security is having multiple layers of controls.
  • The findings you’ve identified may have already been exploited, and an ability to retroactively detect this can help uncover a breach.

If you’re a consultant, writing detection action items can be difficult because there is a wide array of detection technologies. It’s hard to tailor detection content exclusively for a single customer without an intimate knowledge of their detection strategy. As a place to start, consider asking your client about their detection strategy and relevant technologies so that you can tailor your recommendations to them. This can be a part of the initial scoping call.

If your findings are related to your own network, or you’re writing a blog post, it’s a lot easier to provide detection action items based on the precise technologies you’re using, or at a minimum, prevailing open source standards. You can start with these questions:

  1. Are there any network-based indicators that can be used for detection?
  2. Are there any network-based behaviors that can be used for detection?
  3. Are there any host-based indicators that can be used for detection?
  4. Are there any host-based behaviors that can be used for detection?

This isn’t all-encompassing, but in a lot of cases you will be able to derive some type of host- or network-based indicators or behaviors. An indicator can be something simple like a list of MD5s or domain names, and will usually be representative of known evil. A behavior is usually more complex and will describe activity that is normally legitimate, but could be the result of an attacker’s actions in some cases. This might be an action like a user account being added to an administrative group, or the use of the command “ping -n 1”. Both are normal activities, but they might not happen very often and are worthy of investigation as they relate to the attacker or breach you’re describing.
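To illustrate the indicator side of that distinction, here’s a minimal Python sketch of matching log entries against a known-bad indicator list. The domains, hash, and log entries are entirely hypothetical, and a real deployment would feed this from your intelligence source rather than hardcoded sets:

```python
# Hypothetical known-bad indicators: phishing domains and an MD5 hash.
INDICATOR_DOMAINS = {"evil-401k-login.example.net", "badcdn.example.org"}
INDICATOR_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}

# Hypothetical log entries to sweep for indicator matches.
log_entries = [
    {"type": "dns", "value": "intranet.example.com"},
    {"type": "dns", "value": "evil-401k-login.example.net"},
    {"type": "file_hash", "value": "44d88612fea8a8f36de82e1278abb02f"},
]

def match_indicators(entries):
    """Return log entries that match a known-bad indicator."""
    hits = []
    for e in entries:
        if e["type"] == "dns" and e["value"] in INDICATOR_DOMAINS:
            hits.append(e)
        elif e["type"] == "file_hash" and e["value"] in INDICATOR_HASHES:
            hits.append(e)
    return hits

for hit in match_indicators(log_entries):
    print("Indicator match:", hit["type"], hit["value"])
```

Behavioral detection (like watching for additions to an administrative group) needs more context and stateful logic, which is why behaviors are usually harder to write and tune than simple indicator matches.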

In all of these cases, recommendations towards specific technologies are what will differentiate you. Don’t just give someone a list of domain names; also give them a Snort or Suricata rule that will detect them, including relevant context and information links. Don’t just give someone malware characteristics; give them the YARA rule to search for it. You might think that’s time consuming, and you’re right. Don’t be lazy! If you truly want to promote change and you expect your reader to go the extra mile, your writing has to do so as well. Something as small as creating a 10-line bash script to detect something will endear your reader/client to you forever, and will show your hands-on expertise.
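For example, if one of your findings includes a phishing domain, a deliverable Suricata rule might look something like the sketch below. The domain, reference URL, and SID are hypothetical placeholders, and the `dns.query` sticky buffer assumes a reasonably recent Suricata version:

```
alert dns $HOME_NET any -> any any (msg:"Phishing domain DNS lookup - evil-401k-login.example.net"; dns.query; content:"evil-401k-login.example.net"; nocase; reference:url,intel.example.com/report/1234; classtype:trojan-activity; sid:1000001; rev:1;)
```

Pairing each indicator with a ready-to-deploy rule like this is exactly the kind of extra mile that gets your recommendations acted on.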

More on Writing

If the things you write require the reader to take action, you’re going to have to work harder to get them to do it. Just because you’ve written a very clear and informative statement on what a problem is and how to fix it doesn’t necessarily mean someone will take the action you want. The easier you can make this for people, the more likely they will be to actually pursue your recommendations. Going the extra mile in your writing will be rewarded with action. If you can outline some prevention and detection action items, you’ll be writing content that gets people moving.

If you’re interested in learning more about my personal systems for better technical writing, I’ll be releasing more articles in that area soon, as well as a couple of videos. You can subscribe to the mailing list below to get access to that content first, along with a few exclusives that won’t be on the site.

Sign Up for the Mailing List Here

Writing for Security: Making People Give a Damn

If you really want your technical content to matter to people, you have to appeal to their needs. There are primary needs like food, water, sleep, or sex, but it’s difficult to tie those things to malware analysis or threat intelligence reports. If you look to secondary needs, you will find things like employment, resources, morality, family, self-esteem, confidence, achievement, and respect. Hopefully, a light bulb went off when you looked at this list. If you really want people to care about your content, you have to appeal to one or more of these things. Let’s dig into a few of them.

Employment, Achievement, and Respect

I want to lead with employment because it is the secondary need most closely tied to primary needs. Everyone needs to eat, and unless your Silicon Valley startup actually made it past the second round of funding, you probably need a job to buy food for yourself and your family. If your writing can appeal to someone’s need for employment, they are going to care about it.

Tangentially related are achievement and respect, because everyone wants to achieve success in the workplace and be respected while doing it. These are grouped together because most people believe that being well respected and achieving positive things will lead to further career success. In most places, this is definitely true.

When you’re writing something, ask yourself if it will help someone get a better job or a higher salary in their current one. You may think it’s much more complicated than that, but it really isn’t. You may be a person who says, “Chris, I’m not in this line of work for the money, so I can’t relate to that.” But if you were being completely honest with yourself, you certainly wouldn’t do your job for free, or probably even for half of your current salary. You have to eat and provide for your family, and so does everyone else. If you can write something that helps your reader do that, you are appealing to primal psychological needs, and people will gravitate toward it.

The best way to appeal to these needs is to provide an opportunity for meaningful action. That action will vary depending on what you’re writing, but here are a few examples:

Penetration Testing Report [You want the reader to fix a finding]:

  • An example of how a finding would be exploited so it can be independently validated and recreated.
  • A news story showing how a similar finding was attacked that can be used to justify the time/resources to fix it to management.
  • A detection signature that can be applied to a Snort/Suricata/Bro IDS so the user can detect exploitation if it can’t be fixed in a timely manner.
  • A list of log types that can be ingested by a SIEM if detective controls are a primary risk reduction strategy.

Threat Intel Blog Post [You want the reader to defend against this threat actor]:

  • A diagram showing the flow of the attack and where protective/detective controls could be applied.
  • Reference links to attacks conducted by this threat group that can be used to justify the time/resources to fix it to management.
  • A detection signature that can be applied to a Snort/Suricata/Bro IDS to detect actor activity.
  • A listing of network and host based artifacts that the user can build into their own detection infrastructure and SIEM.
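For the host-based artifacts in the last bullet, even a simple hash sweep gives the reader a starting point. This is a hypothetical sketch: the SHA-256 value is an obvious placeholder rather than a real indicator, and a production sweep would read the hashes straight out of your report.

```shell
#!/bin/sh
# Sketch: sweep a directory for files matching known-bad SHA-256 hashes.
# BAD_HASHES holds a placeholder value, not a real malware hash.
BAD_HASHES="0000000000000000000000000000000000000000000000000000000000000000"

scan_dir() {
  dir="$1"
  find "$dir" -type f 2>/dev/null | while read -r f; do
    hash=$(sha256sum "$f" | awk '{print $1}')
    for bad in $BAD_HASHES; do
      if [ "$hash" = "$bad" ]; then
        echo "MATCH: $f ($hash)"
      fi
    done
  done
}

# Demo run against a throwaway directory so the script is self-contained.
demo=$(mktemp -d)
echo "benign content" > "$demo/example.txt"
scan_dir "$demo"
rm -rf "$demo"
```

Handing the reader something this small to run against their own fileshares is what turns a list of artifacts into detection infrastructure.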

Alert Investigation Ticket [You want management to provide funding for bigger sensors]:

  • A timeline showing the flow of the investigation and areas where it was stalled due to lack of visibility to justify the ask to management.
  • A hypothetical description of how the investigation could have gone and how much time might have been saved if more data had been available.
  • A list of the exact type of sensor you need along with a broad cost estimate.
  • A success story from a colleague/peer who has the level of visibility you desire.

Forensic Report [You want the company to educate users on spear phishing]:

  • A diagram showing how an attacker was able to gain an initial foothold into the network by phishing a number of users.
  • Industry reporting on statistics of users who are susceptible to phishing.
  • Links to news articles of other breaches showing how phishing was a primary attack vector.
  • A guide explaining how the IT staff could conduct a phishing test with the user base to determine how vulnerable they truly are.
  • A list of vendors (or if you’re a vendor, a price quote) on performing an external phishing test.
  • Links to free or paid phishing awareness training programs.
  • A list of tips that can be e-mailed to all users within the company.

If you give the reader a chance to take action from your writing then you’re giving them the chance to achieve something and to gain respect from their peers and boss by doing it. Doing this in a way that truly empowers them is a bit of a balancing act, which we’ll talk about next.

Confidence and Self-Esteem

Nobody likes feeling stupid. Writing with a lot of technical detail is usually a good thing, but if it goes so aimlessly in-depth that it sails over the heads of most people reading it, they aren’t going to connect with you. Appealing to primary and secondary needs doesn’t matter if your reader walks away thinking they aren’t smart enough to do anything about the problem you present. That’s why it’s so crucial to go the extra mile. In infosec, your goal is usually to inform, but it’s frequently also to persuade. If you want someone to head down a path toward a goal, realize that the hardest step for them to take is the first one. The more work you can do for the reader up front, the more likely they are to take that first step. That means providing actionable examples and step-by-step guides that get them moving. It’s more work for you up front as the author, but readers don’t reward lazy writing.

If you provide a call to action that asks the reader to write 10,000 lines of code or change the entire culture of their corporation, they aren’t going to feel confident enough to act on it. There’s a place for that type of writing, but most of the time it shows laziness on your part for not going the extra mile to give them actionable techniques for getting started down whatever path you’re trying to get them to take.

Figuring out where to position your material can be tricky, but there are a few things to think about when writing it:

  • What’s the lowest common denominator you are trying to appeal to?
    You don’t have to dumb everything down so far that someone with no experience can get going, but you should assume that most of your readers don’t know as much about the topic as you do. If they did, why would they need to read what you’re writing?
  • What is something the reader can do today/tomorrow/next week?
    If you can phase your action items out over time, a larger task becomes less overwhelming. Even something as simple as downloading a tool or sending an e-mail is a step. If the reader can accomplish that step, they’ll build confidence and be more likely to accomplish the next one. It’s a snowball effect.
  • Where can the reader learn more about the concepts they need to make this actionable?
    If you are correctly assuming the reader isn’t as knowledgeable about the topic as you are, then you need to do whatever you can to minimize that gap. If you want them to take action on something they don’t know much about, you absolutely must provide references to resources where they can learn more. If you want a user to write a signature for a malware family, link or provide supporting information about the techniques the malware uses and the libraries it relies on. If you want a user to fix an XSS vulnerability in a piece of code, link or provide examples of different types of XSS protection and libraries that demonstrate different techniques.

If you read all of this and don’t think you need to go the extra mile because your writing is to inform and not to persuade, then I’d say you’re probably fooling yourself, or you’re a lazy writer. Both will result in content that isn’t appealing to your readers, and it will be forgotten.


Morality

One of the oldest debates in history is whether mankind is inherently good or evil. I’m certainly not going to solve that debate here, but I think it’s safe to say you probably got into information security because you have some sense of right vs. wrong. In most cases, the network you are protecting or assessing represents good, and the real or hypothetical bad guys who want to steal something from it represent evil.

Whether it’s nature or nurture, most humans have a sense of morality from a young age. Whether you realize it or not, you’ve built archetypes of the good guys and the bad guys, and in most cases you probably want to be the guy with the cape saving the day. This is important to consider when you write, because if you can tap into someone’s sense of morality, you are going to reach parts of the reader that most writing can’t touch.

I want to be clear: I’m not asking you to start making moral decisions for someone. In our field, it’s ridiculously easy to stumble into a debate about things like privacy vs. security, and you probably aren’t going to change anyone’s mind there. Furthermore, a lot of people arrive at a way of thinking in irrational ways, and cognitive psychology tells us that someone who enters a line of thought irrationally is not likely to leave that mindset because of rational thought. The goal isn’t to manipulate someone’s sense of morality; it is to appeal to it by causing the reader to ask questions.

So what if there is a new piece of malware being used to attack agriculture companies? These companies are targeted all the time, and nobody is really going to care unless they work at one of the companies affected. Now, what if you consider that the malware caused a significant financial loss that led to a Q2 earnings miss, resulting in layoffs of hundreds of people? That changes things a bit. Because someone used the malware to attack this organization, real people were hurt, and the reader will ask themselves whether this is morally wrong. Again, your job isn’t to tell people it’s wrong. Your job is to get them to ask themselves where this action points on their moral compass.

Getting people to ask questions about the moral disposition of something isn’t always easy, and it often requires some digging. One method for getting there is the Five Whys technique: take a fact you are writing about and ask yourself why it matters, then ask why that answer matters, and so on. For example:


Hypothetical Fact: A government contractor was the victim of an attack, resulting in the theft of intellectual property.

  1. Why does that matter? The attack on the government contractor was linked to Group X due to similar TTPs.
  2. Why does that matter? Group X is made up of operators believed to be North Korean.
  3. Why does that matter? North Korean threat actors have attacked a number of western media outlets and government contractors and are advancing their capabilities.
  4. Why does that matter? The North Korean government has expressed interest in harming western countries by advancing its weapons technology.
  5. Why does that matter? If North Korea succeeds, the consequences could include conflict or war.


Hypothetical Fact: A newly discovered piece of malware redirects users to a site that scrapes their social media profile if they are logged into Facebook and harvests personal information

  1. Why does that matter? An unknown attacker could gain access to your personal information.
  2. Why does that matter? The attacker could use this personal information to obtain more information about you through social engineering or password reset questions.
  3. Why does that matter? The attacker could collect enough information to steal your identity.
  4. Why does that matter? The attacker could cause significant financial loss or ruin your credit score, preventing you from being able to take out a loan on a car or home.


In both of these examples, I’ve presented scenarios that mirror things you’ve probably read at some point, and walked through a process to distill them to their core: things that should provoke questions of morality. Is it right or wrong for North Korea to start a conflict? Is it right or wrong for someone to steal your identity? In these cases, both answers are pretty clear-cut. In a lot of cases it won’t be so obvious. The important thing is to get people to ask the question.

More on Writing

Writing is a lot more enjoyable when people care about what you’ve written. In the current security landscape, you can’t go more than a couple of days without someone publishing a blog post detailing the latest threat actor campaign or newly discovered malware. If you’re responsible for writing content like this, whether internally or externally, appealing to primary and secondary needs will make people care far more about what you have to say.

If you’re interested in learning more about my personal systems for better technical writing, I’ll be releasing more articles in that area soon, as well as a couple of videos. You can subscribe to the mailing list below to get access to that content first, along with a few exclusives that won’t be on the site.

Sign Up for the Mailing List Here