BSides Augusta 2014 Slides and Video – Defeating Cognitive Bias and Developing Analytic Technique

I recently gave a presentation at BSides Augusta on the topic “Defeating Cognitive Bias and Developing Analytic Technique”.


At the center of many defensive processes is human analysis. While we spend a lot of time performing analysis, we don’t spend nearly enough time thinking about how we perform analysis. The human mind is poorly wired to deal with most complex analysis scenarios effectively. This can be attributed to the inherent complexity of solving technical issues where so many uncertainties exist, and also to the cognitive and unmotivated biases that humans unknowingly apply to their analysis. All of these things can diminish our ability to get from alert to diagnosis quickly and effectively.

In this presentation, I plan to discuss the mental challenges associated with technical defensive analysis by leveraging research associated with traditional intelligence analysis. I will discuss how complexity can overwhelm analysis, how cognitive bias can negatively influence analysis, and techniques for recognizing and overcoming these limiting factors. This will include a few fun mental exercises, as well as an overview of several strategic questioning techniques, including analysis of competing hypotheses, red cell analysis, and “what if” analysis. Finally, I will discuss several structured analysis techniques, including two different techniques that can be used specifically for NSM analysis: relational investigation and differential diagnosis.


The video for this presentation can be found here:

The slides for this presentation can be found here:

Evolving Towards an Era of Analysis

I’ve spent the majority of my career thinking about how to build a better mousetrap. More to the point, better methods to catch bad guys. This includes everything from writing simple IDS signatures, to developing detection systems for the US Department of Defense, to helping build commercial security software. In these roles I mostly focused on network security monitoring, but there are quite a few other facets of computer network defense. This includes malware reversing, incident response, web application analysis, and more. While these subspecialties are diverse and require highly disparate skill sets, they all rely on analysis.

Analysis is, more or less, the process of interpreting information in order to make a decision. For network defenders, these decisions usually revolve around whether or not something represents malicious activity, how impactful and widespread the malicious activity is, and what action should be taken to contain and remediate it. These are decisions that can literally cost companies millions of dollars, as we saw in the 2013 Target breach, or even result in a loss of human life, as something like Stuxnet in 2010 could have caused. Clearly, analysis is of incredible importance, as it is a determinative phase of the decision-making process. If that is the case, then why do we spend so little time thinking about analysis? Before we dive into that, let’s take a look at how we got here.

Evolutions Past

My experience is mostly grounded in network security monitoring, so while this article applies to many areas of computer network defense, I’m going to frame it through what I know. Network security monitoring can be broken into three distinct phases: collection, detection, and analysis. These take the form of something I refer to as the NSM Cycle.


Figure 1: The NSM Cycle

Collection is a function of hardware and software used to generate, organize, and store data to be used for detection and analysis. Detection is the process by which collected data is examined and alerts are generated based on observed events and data that are unexpected. This is typically accomplished through some form of signature, anomaly, or statistically based detection. Analysis occurs when a human interprets and investigates alert data to determine whether malicious activity has occurred. Each of these processes feeds into the next, with analysis feeding back into the collection strategy at the end of the cycle, which constantly repeats. This is what makes it a cycle. If that last part didn’t happen, it would simply be a linear process.

While the NSM cycle flows from collection to detection to analysis, the emphasis we as an industry have placed on these phases has not evolved in that order. Looking back, the industry began its foray into what is now known as network security monitoring with a focus on detection. In this era came the rise of intrusion detection systems such as Snort that are still in use today. Organizations began to recognize that the ability to detect the presence of intruders on their network, and to quickly respond to the intrusions, was just as important as trying to prevent the intruder from breaching the network perimeter in the first place. These organizations believed that you should attempt to collect all of the data you could so that you could perform robust detection across the network. Thus, detection went forth and prospered, for a while.


Figure 2: The Evolution of NSM Emphasis

As the size, speed, and function of computer networks grew, organizations on the leading edge began to recognize that it was no longer feasible to collect 100% of network data. Rather, effective detection relies on selectively gathering data relevant to your detection mission. This ushered in the era of collection, where organizations began to really assess the value received from ingesting certain types of data. For instance, while organizations had previously attempted to perform detection against full packet capture data for every network egress point, these same organizations began to selectively filter out traffic to and from specific protocols, ports, and services. In addition, these organizations began assessing the value of data types that come with a decreased resource requirement, such as network flow data. This all worked towards performing more efficient detection through smarter collection. This brings us up to speed on where we stand in the modern day.
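To make the idea of selective collection concrete, here is a minimal sketch of the kind of filtering decision described above. The field names, port numbers, and filtering policy are all hypothetical assumptions for illustration, not a real collection tool:

```python
# Hypothetical sketch of selective collection: rather than retaining every
# record, keep only those relevant to the detection mission. The record
# fields and filtered ports are illustrative assumptions.

ENCRYPTED_PORTS = {443, 993, 995}  # full capture of encrypted traffic has limited detection value

def worth_collecting(flow):
    """Return True if a flow record should be retained for detection."""
    if flow["dst_port"] in ENCRYPTED_PORTS:
        return False  # filter out traffic we cannot inspect anyway
    return True

flows = [
    {"src": "10.0.0.5", "dst": "203.0.113.9", "dst_port": 443},
    {"src": "10.0.0.5", "dst": "203.0.113.9", "dst_port": 6667},
]

# Only the port 6667 flow survives the filter.
retained = [f for f in flows if worth_collecting(f)]
print(len(retained))
```

In a real environment this policy would be expressed in the collection tooling itself (e.g., capture filters or flow exporter configuration) rather than after the fact, but the decision being made is the same.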

Era of Analysis

While some organizations are still stuck in the detection era (or worse yet, in the ancient period with a sole focus on prevention), I believe most organizations currently exist somewhere in the collection era. In my experience, the majority of organizations are just entering that era, while more mature organizations are in a more advanced stage where they’ve really developed a strong collection strategy. That raises the question: what’s next? Welcome to the analysis era.

Graduate anthropology students at Kansas State University recently began a study of the ethnography of a typical security operations center (SOC). Ethnography refers to a systematic study of people and culture from the viewpoint of the subject of the study. In this case, the people are the SOC analysts and the culture is how they interact with each other and the various other constituents of the SOC. This study had some really unique findings, but one of the most important to me was centered on the prevalence of tacit knowledge.

Tacit knowledge, by definition, is knowledge that cannot easily be translated into words. The KSU researchers were able to quickly identify that SOC analysts, while very skilled at finding and remediating malicious activity, were very rarely able to describe exactly how they went about conducting those actions.

“The tasks performed in a CSIRT job are sophisticated but there is no manual or textbook to explain them. Even an experienced analyst may find it hard to explain exactly how he discovers connections in an investigation. The fact that new analysts get little help in training is not surprising. The profession is so nascent that the how-tos have not been fully realized even by the people who have the knowledge.”

If you’ve ever worked in a SOC then you can likely relate to this. Most formal “training” that occurs for a new analyst is focused on how to use specific tools and access specific resources. For example, this might include how to make queries in a SIEM or how to interface with an incident tracking system. When it comes time to actually train people to perform analysis, they are often relegated to shoulder surfing, watching more experienced analysts perform their duties.

While this “on the job training” can be valuable, it is not sufficient in and of itself. By relying solely on this technique we are not properly considering how analysis works, what analytic techniques work best, and how to educate people to those things. Ultimately, we are doing an injustice to new analysts and to the constituents that the SOC serves.

Thinking about Thinking

One of the positive things about this analysis problem is that we are by no means the first industry to face it. As a matter of fact, many professions have gone through paradigm shifts where they were forced to look inward at their own thought processes to better the profession.

In the early-to-mid 1900s, the medical field transitioned from an era where a single physician could practice all facets of medicine to an era where specialization in areas such as internal medicine, neurology, and gastroenterology was required in order to keep up with the knowledge needed to treat more advanced afflictions.

Around the same time, the military intelligence profession underwent a revamp as well. Intelligence analysts realized that policy and battlefield disasters of the past could have been avoided with better intel-based decision making, and began to identify more structured analytic techniques and work towards their implementation. This was required in order to keep up with a changing battle space and an evolving threat.

Similar examples can be found in physics, chemistry, law, and so on. All around us, there are examples of professions who had to, as a whole, turn inwards and really think about how they think. As we enter the era of analysis, it is time that we do the same. In order to do this, I think there are a few critical things we need to begin to identify.

Developing Structured Analytic Techniques

The opposite of tacit knowledge is explicit knowledge. That is knowledge that has been articulated, codified, and stored. In order for the knowledge possessed by SOC analysts to transition from tacit to explicit, we must take a hard look at the way in which analysis is performed and derive analysis techniques. An analysis technique is a structured manner in which analysis is conducted. This centers on a structured way of thinking about an investigation, from the initial triage of an alert all the way to the point where a decision is made regarding whether malicious activity has occurred.

I’ve written about a few such techniques already that are derived from other professions. One such method is relational investigation, which is a technique taken from law enforcement. The relational method is based upon defining linear relationships between entities. If you’ve ever seen an episode of “CSI” or “NYPD Blue” where detectives stick pieces of paper to a corkboard and then connect those items with pieces of yarn, then you’ve seen an example of a relational investigation. This type of investigation relies on the relationships that exist between clues and individuals associated with the crime. A network of computers is not unlike a network of people. Everything is connected, and every action that is taken can result in another action occurring. This means that if we as analysts can identify the relationships between entities well enough, we should be able to create a web that allows us to see the full picture of what is occurring during the investigation of a potential incident.


Figure 3: Relational Investigation
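The web of relationships described above maps naturally onto a graph. Below is a minimal sketch of that idea, with entirely made-up entity names: alerts, hosts, users, and domains become nodes, observed relationships become edges, and walking the graph from one suspicious node reveals everything connected to it, much like following the yarn on a corkboard:

```python
# Sketch of relational investigation as a graph walk. The entities and
# relationships here are hypothetical examples, not real data.
from collections import deque

# Adjacency list: each entity maps to the entities it is related to.
edges = {
    "alert-1042": ["host-10.0.0.5"],
    "host-10.0.0.5": ["alert-1042", "user-jsmith", "domain-evil.example"],
    "user-jsmith": ["host-10.0.0.5", "host-10.0.0.9"],
    "host-10.0.0.9": ["user-jsmith"],
    "domain-evil.example": ["host-10.0.0.5"],
}

def related_entities(start):
    """Breadth-first walk returning every entity connected to `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in edges.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

# Starting from a single alert, the walk surfaces a second host touched
# by the same user account, which might otherwise have been missed.
print(sorted(related_entities("alert-1042")))
```

In practice the same idea scales up with a link-analysis tool or a proper graph database, but the underlying investigative logic is exactly this traversal.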

Another technique is borrowed from the medical profession, and is called differential diagnosis. If you’ve ever seen an episode of “House” then chances are you’ve seen this process in action. The group of doctors will be presented with a set of symptoms and they will create a list of potential diagnoses on a whiteboard. The remainder of the show is spent doing research and performing various tests to eliminate each of these potential conclusions until only one is left. Although the methods used in the show are often a bit unconventional, they still fit the bill of the differential diagnosis process.

The goal of an analyst is to digest the alerts generated by various detection mechanisms and investigate multiple data sources to perform relevant tests and research to see if a network security breach has happened. This is very similar to the goal of a physician, which is to digest the symptoms a patient presents with, investigate multiple data sources, and perform relevant tests and research to see if their findings represent a breach in the person’s immune system. Both practitioners share a similar goal of connecting the dots to find out if something bad has happened and/or is still happening.


Figure 4: Differential Diagnosis
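The elimination process at the heart of differential diagnosis can be sketched in a few lines. The hypotheses and investigative tests below are made up for illustration; the point is the structure: start with every plausible explanation for the alert, then run tests that rule candidates out until one remains:

```python
# Rough sketch of differential diagnosis applied to an NSM alert.
# Hypotheses and tests are hypothetical examples.

hypotheses = {
    "malware beaconing",
    "misconfigured update client",
    "user browsing activity",
}

def run_test(name, eliminates):
    """Each investigative test rules out the hypotheses it disproves."""
    hypotheses.difference_update(eliminates)
    print(f"after {name}: {sorted(hypotheses)}")

# The traffic persists when no user is logged in -> not browsing activity.
run_test("check user sessions", {"user browsing activity"})

# The destination is not a known update server -> not an update client.
run_test("check destination reputation", {"misconfigured update client"})

# The surviving hypothesis becomes the working diagnosis.
print(hypotheses)  # {'malware beaconing'}
```

The discipline this imposes is the valuable part: writing the candidate list down up front forces you to test against all of them, rather than fixating on the first explanation that comes to mind.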

I think that as we enter the era of analysis it will be crucial to continue to develop new analytic techniques, and for analysts to determine which techniques fit their strengths and are most appropriate in different scenarios.

Recognizing and Defeating Cognitive Biases

Even if we develop structured analytic techniques, we still have to deal with the human element in the analysis process. Unfortunately, humans are fallible due to the nature of the human mindset. A mindset is, more or less, how someone approaches something, or his or her attitude towards it. A mindset is neither a good thing nor a bad thing. It’s just a thing we all have that is shaped by our past, our upbringing, our friends, our family, our economic status, our geographic location, and many other factors that may or may not be within our control.

When dealing with our mindset, we have to consider the difference between perception and reality. Reality is grounded in a situation that truly exists, while perception is based on our own interpretation of a situation. Oftentimes, especially in analysis, a gap exists between perception and reality. The ability to move from perception to reality is a function of cognition, and cognition is subject to bias.

Cognitive bias is a pattern of deviation in judgment that results in analysts drawing inferences or conclusions in a manner that isn’t entirely logical. Whereas a concrete reality exists, an analyst may never discover it due to a flawed cognition process based on his or her own flawed subjective perception. Those are a lot of fancy psychology words, but the bottom line is that humans are flawed, and we have to recognize those flaws in our thought process in order to perform better analysis. In regards to cognitive bias, I believe this is accomplished through identifying the assumptions made during analysis, and conducting strategic questioning exercises with other analysts in order to identify biases that may have affected the analysis.

One manner in which to conduct this type of strategic questioning is through “Incident Morbidity and Mortality.” The concept of an M&M conference comes from the medical field, and is used by practitioners to discuss the analytic and diagnostic process that occurred during a case in which there was a bad outcome for the patient. This can be applied to security analysis in the same manner, but doesn’t necessarily have to be associated with an investigation where discrete failure occurred. This gives analysts an opportunity to present their findings and be positively and constructively questioned by their peers in order to identify and overcome biases.

The flawed nature of human thinking will ensure that we never overcome bias, but we can minimize its negative impact through some of the techniques mentioned here. As we enter the era of analysis, I think it will become crucial for analysts to begin looking inward at their own mindset so that they can identify how they might be biased in an investigation.


As an industry we have been pretty successful at automating many things, but analysis is something that will never be fully automated because it is dependent upon humans to provide the critical thinking that can’t be replicated by programming logic. While there is no computer that can match the power of the human brain, it is not without flaw. As we inevitably enter the era of analysis, we have to refine our processes and techniques and convert tacit knowledge into explicit knowledge so that the complex problems we will continue to face can be solved in a faster and more efficient manner. Ultimately, the collection and detection eras are something that we own, but it is entirely likely that a lot of the analysis era will be owned by our children, so the groundwork we lay now will have a dramatic impact on the shape of network security analysis moving forward.

I talk in much more detail about several of the things discussed here in Applied Network Security Monitoring, but I also have several blog posts and a recent presentation video on these topics as well:

Practical Packet Analysis 3rd Edition Research

After a lot of demand, I’ve started researching content for Practical Packet Analysis, 3rd edition. There is no timeline for release yet, but for those of you who have read either of the previous editions, what would you like to see in a third edition? Specific scenarios? Additional protocols? Let me know in the comments here, or e-mail me directly. Thanks!


So You Want To Write an Infosec Book?

While I don’t consider myself to be a prolific writer of the 21st century, I have been blessed to have the opportunity to write five different technical books over the past 12 years. I do a little bit of speaking here and there and am always blogging as well, so I frequently meet people or receive e-mails from folks who want to write an information security book. Because of that, and in light of recently finishing my last book project, I thought that now would be the perfect time to share some of my experiences in technical book writing.

*This post was originally written in 2014 but was updated in December 2017




Before I dive into my lessons learned, here is a brief summary of the books I’ve written to help frame the things I’m going to talk about.

  • “Saving Time and Money with Virtual Server” – Published by O’Reilly in 2005 as an e-book only. This sold very poorly and was my first foray into paid technical writing with a real publisher. Most people don’t even know that I wrote it.
  • “Practical Packet Analysis – 1st Edition” – Published by No Starch Press in 2007 in print. My first print book, released when I was 19. This sold very well but received mixed reviews early on due to some technical issues which were eventually rectified.
  • “Practical Packet Analysis – 2nd Edition” – Published by No Starch Press in 2011 in print. This has been my best selling book. It has been translated to half a dozen or so languages and is used as a textbook by many universities. It is also incredibly well reviewed, having an average rating of 4.5 stars with over 50 reviews on Amazon.
  • “Applied Network Security Monitoring” – Published by Syngress in late 2013 in print. So far, it has been very well reviewed. I was the lead author of this book but also had contributions from several friends as co-authors, with Jason Smith contributing a few chapters, David Bianco writing a chapter, and Liam Randall contributing in a couple of places.
  • “Practical Packet Analysis – 3rd Edition” – Published by No Starch Press in 2017 in print. This hasn’t been out for too long but is picking up where PPA 2 left off with great sales and multiple translations. It is also incredibly well reviewed, having an average rating of 4.6 stars with over 75 reviews on Amazon. I’ve also built an online class version of it.


As you can see, I have a pretty wide array of experience with several types of books, several publishers, and several models of book writing. I’m by no means an authority on the subject of the business of writing, the grammar/structure of writing (just ask my editors), or even the “best” way to go about getting your first book deal. However, I do have experience to share that I think is useful.


Lessons Learned


Writing a Book is Hard

Writing a book is probably one of the single hardest things you will ever do. If that isn’t the case, then you are probably doing something wrong, or simply not taking enough risk. When you estimate the amount of work that you think a book might take to complete, go ahead and multiply that by five.


The first edition of Practical Packet Analysis took a year to research and write, and that was a bit rushed. Because of this, the quality suffered. The second edition of Practical Packet Analysis took about two years to research and write, keeping in mind it still used 25% of the content from the first edition. Applied NSM took FOUR YEARS to research and write, and that was with the help of co-authors and even cutting some things out of the original table of contents.


If you aren’t strong-willed, dedicated, and goal-oriented, then you aren’t going to be able to successfully write a book. It is very easy to get excited about putting words on paper at the beginning of a project. However, this excitement can begin to wane several months into the project, when it seems like you are slogging through content at a snail’s pace and you can’t see the forest for the trees. This is the point at which most books flounder and never get finished.


As I’ve moved into the content creation business, where I work with individuals to develop online courses, I’ve learned that some people just don’t have what it takes to dedicate themselves to large projects like this, or they simply aren’t in a place in life where they can adequately prioritize it. There’s nothing wrong with that if you’re self-aware enough to realize you’re that person.


Don’t underestimate the difficulty of writing a book. It is a massive, consuming task that requires you to possess skills in technical writing, time management, research, and the technology you are writing about. It isn’t too hard to get a book writing contract. It is very hard to finish a book writing project, and it is incredibly hard to write a good information security book.


Assess Your Motivation

Because writing a book is so difficult, you have to possess the right motivation for it to be successful. So what does the “right” motivation look like? Well, ask yourself why you want to write the book. Some good reasons might include:

  • You are a natural teacher and like to share the knowledge you have with others.
  • You have a unique understanding of something technically complex and think others could benefit from your methods and approaches.
  • You have a plethora of experience and you think that you can use your advanced knowledge to better teach the fundamentals of a discipline.
  • You have a lot of knowledge in an area for which no formally written knowledge exists.


With that in mind, I usually hear more bad reasons for wanting to write a book than good ones. Some of these include:

  • “I want to be a big name in this industry.”
  • “I want to bring in some extra income.”
  • “I want to prove my skills so that I can get a better job.”


I could spend a lot of time ranting about each of these bad types of motivation, but I’ll keep it short and say that you should never write a book ONLY to get name recognition, to make money, or to get a better job. While it is possible that the book could result in those things, you should write a book because you care about the topic and you want to help people. It’s part of what some people call “servant leadership.” That is where you gain respect because you serve your constituency. In the case of book writing, this constituency is the information security community as a whole. If you are a good steward of that community, you will have the opportunity to prosper.


You are Responsible for Your Content

This is the most important lessons learned I can provide. One of the hardest lessons I’ve learned in my career is that you, as the lead author, are ultimately responsible for the content of your book. I learned this lesson because of a very big mix up that occurred when writing the first edition of Practical Packet Analysis. I was pretty young when I wrote this book (I started it when I was 18), and looking back, I probably could have used a few more years of experience before I wrote it. While writing the book, Gerald Combs (the creator of Wireshark) agreed to be the technical editor for the book. This was really helpful for me at the time because I knew that Gerald’s years of experience would certainly catch any technical errors I might make in my writing.


A couple months after the book was released, it received a very poor review from a very big name in the industry. This would eventually lead to a few more bad reviews right around that time. The reviews were centered on the fact that the book contained quite a few technical errors. Of course, the publisher and I went back to Gerald to see why they were missed. That is when we discovered that there was some miscommunication, and Gerald was under the impression that he was only supposed to perform a technical review of the content directly related to Wireshark, and not all of the protocol-specific information and other content. This wasn’t Gerald’s fault or the publisher’s fault. It was on me for not ensuring the expectations were communicated correctly. I take full responsibility.


Dealing with this was pretty rough. A book isn’t like a blog post where you can go back and make edits. Once it’s out there in print, it’s there forever. We were ultimately able to fix the issues and publish fixes in later print runs of the book and in an errata. Some of them were things that were inaccurately stated, others were facts that were just presented in a way that left too much room for incorrect interpretations, and a few were just production issues that didn’t get caught. However, at that point, the damage was done. It was very personally embarrassing, and I still consider it to be a dark stain on my career to this day. I didn’t truly consider the issue rectified until I was able to complete the second edition of the book. I’m incredibly thankful to Bill and the folks at No Starch Press for allowing me that opportunity because I’m not sure most publishers would have done so.


I’m now incredibly cognizant of the technical content of my books. I research to an extreme amount and I also rely on multiple technical editors. Applied NSM was edited for technical content by David Bianco, but I also had technical edits performed by a dozen or so other people based upon their expertise in certain content areas. For instance, several members of the SiLK team reviewed the sections about SiLK, and Joel Esler from Cisco/Sourcefire was kind enough to review the chapter on Snort. Not only did the multiple layers of technical editing catch things that were missed, it also helped to provide some additional unique perspective on the concepts presented in the book.


The key point here isn’t to be scared of technical errors. Every book will have some errors, and that is what an errata page is for. The takeaway here is that every word in your book is ultimately your responsibility. You can’t fully rely on co-authors, contributing authors, technical editors, copy editors, etc. There is no passing the buck in the book writing business. You have to own every word and you have to proofread and research until your eyes bleed.


Don’t Rely Solely on Your Own Expertise

One of the big mistakes I made early on in my writing career was thinking that it was 100% on me to generate all of the knowledge that was put into my book. If you really want to know the difference between the first and second editions of Practical Packet Analysis, this is one of the big ones. In the first edition, all of the content was straight from my head, using techniques that I used in my day-to-day job. While these were useful to me, I didn’t think about studying the techniques used by other people to see how they applied the same knowledge. Quality suffered as a result.


Fast-forward several years when I began researching content for the second edition. This time, I reached out to others to see how they did packet analysis. I asked what techniques they used, what their favorite Wireshark features were, and what additional tools they found useful. Because of this, I was able to incorporate additional perspective into the book, which made it applicable to a lot more people. Not only that, but I learned a lot and strengthened my own practices.


I continued this thread with Applied NSM, even bringing in co-authors with drastically varied experience. A lot of the time there is no “right way” and the “best way” will depend on the environment the knowledge is being applied to. Bringing in the expertise of others can really help the depth and usefulness of your content. This is a statement promoting collaboration above anything else.


You Won’t Make Money Writing Technical Books

If you want to write a technical book to make money then you are going to be in for a surprise. In general, technical books don’t generate a lot of revenue. While there are some exceptions with widely sold books that appeal to a broad mass of people, like “Windows 7 for Dummies”, titles like “Applied Cryptography” are going to have a limited audience. No matter how good your book is, the audience for it is going to be limited by the number of active practitioners.


People like to see numbers, so let’s do some simple math. My agreement with No Starch Press was for a 12% royalty on all copies of Practical Packet Analysis that were sold (with a higher percentage for subsidiary works and foreign translations). This is standard within their royalty structure menu and something they have publicized in the past, so I have no reservations in publishing that here.


Let’s say that you write a book that costs $30. This means that you see $3.60 from every copy sold (we won’t worry about subsidiary works at the moment; we are also assuming the book sells directly from the publisher and not through a book reseller, which would result in a lower rate based upon what the publisher sells to the reseller for). Now, let’s say the book sells extraordinarily well and you’ve sold 10,000 copies. That is a lot of copies for a technical book. If it is an information security book specifically, it’s an even more impressive number. That means you have made $36,000.


Now, let’s consider how long it took you to write the book. The breakdown for a smaller book that might sell for $30 could look like this:


  • 6 Months – Initial Research
  • 12 Months – Writing
  • 6 Months – Editing and Marketing


These are pretty fair estimates. Now, let’s say that you are working a full-time job, so you are doing all of this during your spare time, and that averages out to about 4 hours per day. You might skip a day here or there, but you will also probably be working more on the project on the weekends. This averages out to a total of 2920 hours. This sounds like a lot of hours, but if you are going to research and write a proper book, this isn’t too crazy. See the earlier section about how writing a book is hard. If we divide that $36,000 by 2920 hours, that comes out to a bit more than $12/hour. Again, this is if your book sells VERY well. If you write an information security book and it sells a more realistic number, like 5000 copies, then you are only making about $6/hour. That is less than the federal minimum wage. Want to get even more depressed? This money hasn’t been taxed yet. Go ahead and send a third to one half of it to Uncle Sam.
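The arithmetic above can be worked through in a few lines. The figures are the ones from this post (a 12% royalty on a $30 book, written over roughly two years at about 4 hours per day); only the small helper function is my own framing:

```python
# Reproducing the royalty math from the paragraphs above.

price = 30.00
royalty_rate = 0.12
per_copy = price * royalty_rate   # $3.60 to the author per copy sold

hours = 2 * 365 * 4               # two years of spare time at ~4 hrs/day = 2920 hours

def hourly_rate(copies_sold):
    """Pre-tax effective hourly wage for a given number of copies sold."""
    return copies_sold * per_copy / hours

print(f"{hourly_rate(10000):.2f}")  # a very successful book: about $12.33/hr
print(f"{hourly_rate(5000):.2f}")   # a more realistic one: about $6.16/hr
```

Run the numbers with your own estimates before signing a contract; even generous assumptions rarely push the effective wage above entry-level pay, which is the whole point of this section.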


I don’t really know anybody who has made a consistent living exclusively from writing information security books. The folks I do know who don’t have “day jobs” bolster this income with public speaking, training, and consulting. While writing a great book can certainly lead to these things, the royalty income from the book alone isn’t enough.


Personally, I’m a big advocate of donating author royalties to charitable organizations. 100% of the royalties from all of my books go to support a few different charitable organizations, including the Rural Technology Fund.


Have a Strong Stomach

When you write a book and put it out there to the world, you will invariably have to deal with book reviews. These reviews are very important to the success of the book, especially early on. By extension, these reviews are also important to your career, as a lot of people will use them to judge the quality of your work. Because of that, you should take reviews very seriously. However, with that comes the issue of bad reviews and bad reviewers.


No matter how good your book is, some people won’t like it. Practical Packet Analysis 2nd Edition has an average rating of 4.5 stars on Amazon with over 50 reviews and I know it’s a great book. However, it has gotten at least a couple of bad reviews. Some of these include:


  • A 3 star review from someone who was upset the book only focused on Wireshark, even though Wireshark is in the subtitle of the book and this is made very clear from the beginning.
  • A 2 star review where the reader is upset that I talk about outdated protocols like “Palm OS Protocol.” I’m not sure what he is reading, but I don’t even talk about Palm OS Protocol in the book.
  • A 1 star review because the reader was upset that Amazon didn’t ship the book to him fast enough, which had nothing to do with my writing. Fortunately, Amazon removed this review since it was completely unrelated.


Ultimately, you are going to get a few negative reviews no matter what you do. Some people like to use book reviews as an opportunity to bash people when they think they could have done better, or simply because they think it makes them look like an expert to harshly critique someone else’s work. There are also people who don’t read the book description before they buy it and are upset that the content wasn’t exactly what they were expecting. Sometimes you also have readers who are very skilled in a particular topic, buy an entry-level book, and are upset that the content is too rudimentary for them. These things can all lead to negative reviews. This was incredibly hard for me when I started writing and is still something I struggle with today. When you devote a lot of time and effort to something, you hate to see it torn down in just a few paragraphs. It’s something you just have to learn to stomach. I still get irked when someone raves about how much they love a book but then knocks it down to a 3-star rating because there were a few typos.


Write Content Before You Sign the Contract

In most cases, when you want to write a book you will write an abstract with a table of contents and then use that information to pitch the book to a publisher (along with whatever specifics they ask for). If it is accepted, the publisher and the author will agree to terms, contracts will be signed, and then the book actually gets written. While this can be effective, I think that you should start writing the book well before you even think about submitting it to a publisher. As a matter of fact, I wouldn’t sign a publishing contract now without having at least 20% of the book already written. Let me explain why…


When you sign a contract with a publisher, one thing they will want from you is a production schedule that details when you expect to complete certain portions of the book. This is important for the publisher for a variety of reasons, the most important of which is that the execution of a contract now means that they are investing money in you and your project. In addition to paying you for your work, they will also be paying project managers, copy editors, compositors, graphic artists, and marketing staff to ensure that your book is produced effectively and able to be sold. They are also fronting the cost of the initial printing of the book. It takes a lot of work to get the book from your computer to the shelves at Barnes and Noble. Now consider that publishers will have multiple book projects going on at once, and you can grasp how difficult their job is. They need to be able to effectively schedule the resources used to produce your book so that they are making efficient use of their time and money.


With that said, it is VERY hard to ascertain exactly how long it will take you to write a book until you are already a bit into it. This is hard to explain if you’ve never experienced it, but it holds true for a lot of authors I know for a few reasons. First of all, sometimes it can be very difficult to start a chapter. When I wrote the Snort/Suricata chapter of Applied NSM, it took me nearly a week to come up with the first few pages of introductory material. After I was finally happy with that text, I was able to produce the remaining 50 or so pages in relatively short order. Framing introductions and core concepts can be very difficult, and if you don’t do it correctly, the reader might get lost while trying to understand more advanced concepts.


Beyond this, I also know several authors who plan to write a book, only to get 50 pages into it and realize that the concept isn’t really going to work out. I can personally tell you that I’ve considered writing three additional books that I never finished because it took quite a bit of writing to realize that there wasn’t enough relevant content to make the book successful.


When you begin writing a book it is your project and you can call the shots. The second you sign a publishing contract it is no longer just your project. You are on the hook and your project has become an investment for other people. No publisher will ever fault you for having too much content already written before you sign the contract. As a matter of fact, it is likely that this additional content will help the publisher better understand your platform, which could lead to an increased chance of getting a writing contract.  If you spend a great deal of time writing content only to realize that the book isn’t going to pan out or that publishers aren’t interested, then that isn’t a total wash. As the late Randy Pausch said, the thing you get when you don’t get what you want is experience.



Have a Backup Plan

While writing Applied NSM, I was a bit shocked when my first chapter came back from copy edit with only one error marked on the manuscript. I’ve written enough to know where my weaknesses are, and I know that there are things editors will usually change in my writing (for better or for worse). So naturally, when the only thing that was brought up was a misspelled word, I was a bit concerned. I reread the manuscript and found a couple of things I had missed in the initial draft that the copy editors hadn’t caught. I was submitting the second chapter soon, so I intentionally placed several errors in the text to see if the copyediting group caught them, and to my dismay, they didn’t catch a single one.


I brought this to the attention of my project manager at Syngress, and was shocked to discover that Elsevier (the parent company of Syngress) had recently outsourced their copy editing to a division in India. They admitted that they had just made this switch and were still trying to sort out some quality issues, but that it would take quite a bit of time to do so.


At this point, I was in a bit of a bind because we were on a very tight schedule and I had promised readers a certain release date. Syngress had no ability at this point to provide an effective copy edit (although the PM offered to help where he could). Fortunately, I had a backup plan and utilized the services of my wife (who is now an MD, but originally majored in English and has quite a bit of editing experience) and a third party who will remain anonymous. Through the combined efforts of these two individuals, the book still received the copyedit it needed.


Surprise is a product of complexity. Writing a book is a very complex process, which means that surprise at any given point in the process is likely. This can take a lot of forms: copy editors could do a poor job, a co-author might not be able to complete his contribution, or the publisher might change your deadlines. Think ahead and try to have a backup plan for as many situations as you can.


Leave Wiggle Room

One of the hard things about technical writing is that there are so many “gotchas” in specific scenarios. While something might be true 99% of the time, that 1% can come back to haunt you in your book. For instance, you could write a book about TCP and definitively state how all of its associated concepts work, writing directly to the RFC specification. However, if you’ve looked at multiple examples of TCP in action, you will know that not every system implements it per specification, meaning that your text could be wrong in some scenarios.


Because of this, it is very important to avoid writing in a “matter of fact” style. You should always leave some wiggle room for interpretation, because it isn’t possible to explain every way in which something might be implemented. This means making sure your text highlights the difference between absolutes and indefinites, and that you preface descriptions with the assumptions you are making about operating environments. This will save your readers some potential headache when they go to repeat your techniques.


Don’t Sacrifice Your Tone

The thing that defines you as a writer isn’t your technical knowledge; it is your tone. No matter how much you know about a subject, you must be able to effectively relay that in the written word. Beyond that, it is how you deliver your message that will endear you to readers. I take great pride in the fact that people tell me I write in a way that makes complex subjects very accessible, and that I can do it in a manner that sounds like me. The people who know me personally will say that when they read my books, they can almost hear me saying the things in them. That is because I have my own unique tone.


At some point in the writing process, you will have to deal with editors. I love editors, and my writing wouldn’t be what it is without them. However, a lot of editors will try to change your tone, especially younger and less experienced ones. This isn’t too different from how programmers work. If you hand a programmer someone else’s code and tell them to work with it, they will probably first try to change it around so it fits their normal coding style. This might involve replacing a few functions, changing how variables are named, or changing how tabs are used. It’s one thing to replace a function with something that is better for reasons of performance or security, but to replace it just because you normally use another one is a different story. Likewise, an editor shouldn’t replace a word just because it’s one they would use; they should have a reason, such as making the sentence clearer or more grammatically correct.


I’ve had the chance to work with a lot of editors. Bill at No Starch is one of my favorites because he truly makes my writing better without changing my tone. They are still my words, but they are delivered more effectively because of his subtle changes. It may take a while, but learn what your tone is. Once you’ve got it locked down, defend it.


Don’t Self-Publish the First Time

Self-publishing has never been easier, so I often get asked whether you should self-publish or go with a traditional publisher. This is a trade-off. With a traditional publisher, you get the benefit of their experience and their marketing power. With self-publishing, you get more control. If this is your first book, I recommend you go with a publisher. There are a LOT of moving parts to a book, and while you might be able to do all of them, you won’t be able to do all of them well. You probably also won’t be able to market your book well to audiences outside your immediate circle, and many publishers are good at that. Ultimately, you need to learn the process and the industry, and your publisher will be your guide.


During this time, you’ll learn that your publisher’s goals don’t always match yours. They want to make money. That doesn’t always correlate to you making money or having the impact you want to have. I’ve had publishers ask me to sign new contracts because they wanted a bigger cut, outsource copyediting to poor English speakers (discussed above), and “encourage” me to add content promoting their other books when I had different material I wanted to recommend. You will inevitably see things your publisher does that you don’t like. I have good and bad things to say about every publisher I’ve worked with. Once you’ve gone through the process the first time, then maybe consider self-publishing for your second book if you decide to write one. It will be tremendously more work, but you will have more control. That’s the trade-off.




There are a lot of blog posts and websites that will tell you how to get a writing contract or how to write good technical content. In my opinion, doing those things is the easy part. The hard part of writing a book is all about being prepared, planning ahead, and having the right frame of mind before, during, and after the process. My hope is that this article provides some useful insight into some of these things. While the tone of this article may seem grim at times, I absolutely love writing and plan to continue doing so. If I didn’t scare you too badly and you plan to pursue writing an information security book, then I wish you the best of luck! If you have insight from the book writing process that you’d like to share, then I’d love to hear it, so please feel free to e-mail me or leave a comment.