Cuckoo’s Egg – Week 6 Notes

Session Recording: (Available 1/12-1/19)

Next Sessions Registration:

This week, we reviewed chapters 31-37.

Sventek comes back again, this time through another link traced back to Germany. He tries to copy the telnet and rlogin programs back to his computer, probably to introduce password-stealing functionality, so Cliff halts this by physically introducing noise on the line and corrupting the transfers. The attacker also continues to search for specific terms on Milnet.

Cliff makes his calls to let his stakeholders know what is going on. He reaches Greg Fennel at the CIA who tells him “Just tell me what happened. Don’t embellish, don’t interpret.”

Cognitive Bias and Estimative Probability

The statement from the CIA’s Greg Fennel is interesting and valuable because it elicits a neutral evidence-based response. This is something we should strive for in information security. After all, a conclusion without supporting evidence is an opinion. We have to inject opinions sometimes to fill in where evidence doesn’t exist, but it should be done sparingly and only when necessary.

In relation to this, I spent time discussing cognitive bias and how it can affect the interpretation and acquisition of facts. I listed and described a few of the more common biases that persist in security. I also discussed the importance of using the language of estimative probability, and the class went through an exercise to practice.
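Words of estimative probability trace back to Sherman Kent's work at the CIA, which mapped vague phrases onto rough probability bands. A small sketch of that idea (the exact ranges below are illustrative approximations, not Kent's canonical chart):

```python
# Estimative phrases mapped to approximate probability ranges (percent).
# The bands are illustrative; Kent's original chart uses similar values.
WEP_RANGES = {
    "almost certain": (93, 99),
    "probable": (75, 85),
    "chances about even": (45, 55),
    "probably not": (20, 40),
    "almost certainly not": (1, 7),
}

def describe(probability):
    """Return the estimative phrase whose band contains the given percent."""
    for phrase, (low, high) in WEP_RANGES.items():
        if low <= probability <= high:
            return phrase
    return "no standard phrase"

print(describe(80))  # probable
print(describe(50))  # chances about even
```

The gaps between bands are deliberate: if your confidence falls between phrases, that's a signal to state a number rather than reach for a word.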

More Reading:

Dig Deeper Exercises:

  • Level 1
    • Review the words of estimative probability. Look through the last few things you’ve written. Should you make any adjustments?
  • Level 2
    • Review the list of cognitive biases and research one of them. Can you think of a time you’ve been subject to that bias?
  • Level 3
    • Pair up with a friend and review the list of biases. Can you identify biases in each other?

Cliff spends time searching Usenet for news about hackers that might be related. He comes into contact with Bob at the University of Toronto. Bob tells Cliff that attackers from the German Chaos Computer Club broke into his network through CERN, and that they had been in the Fermilab computers as well. They went by the aliases Hagbard and Pengo. It turns out these same usernames were observed during a Stanford breach.

Open Source Intelligence and the Diamond Model

Cliff’s examination of Usenet threads related to the breach he was investigating is an example of open source intelligence (OSINT) investigation. The power of collective intelligence is vast and is something many security practitioners rely on when conducting investigations. I discussed sources of OSINT and demonstrated pivoting based on indicators from a real investigation. I also discussed the Diamond Model as a method of assimilating and characterizing collected information to form a clear picture of events that have transpired.

More Reading:

Dig Deeper Exercises:

  • Level 1
    • Sign up for an Alienvault OTX account and familiarize yourself with the interface. Read a few of the blog posts and explore the available information.
  • Level 2
    • Find one of the file hashes from and search for it on VirusTotal. Review the output.
  • Level 3
    • Go to and pick a blog post. Search for an IP and Domain on Alienvault OTX and see if you can find related malicious infrastructure.


Cliff discovers additional victims of the attackers. This includes the Ballistic Research Laboratory and TRW, a company developing US keyhole spy satellites.

Meanwhile, the Bundespost gets back in touch and shares that the source of the call is a VAX computer at the University of Bremen. They discovered an account that appears to be compromised and are going to start monitoring it for the next time the attacker comes back.

Cliff’s boss comes in and tells him that it is time to end the investigation. Cliff fails to convince him otherwise and begins a plan to change the passwords for all 1200 users in the network. Fortunately, the FBI gets involved and convinces his boss to keep the investigation open for a little while longer.


When Cliff believes the investigation is over, he starts to think about the incident response process. The standard incident response model is called PICERL: Preparation, Identification, Containment, Eradication, Recovery, and Lessons Learned. While the PICERL model didn’t exist in Cliff’s time, he was actually thinking about the transition from identification to containment. I briefly provided an overview of this process as an introduction to incident response.

More Reading:

Not long after this, Cliff hears that the DOE is filing a complaint against LBL for not reporting this incident when it happened. Of course, Cliff did do that! He has it recorded in his logbook. Good thing he saved that.

Meanwhile, the attacker comes back and pivots through LBL to access the Optimis Army Database to search for specific keywords related to military data. Cliff informs the network operator who plugs the hole.

After this, Cliff observes the attacker breaking into Space Command. Using a default password, the attacker is actually able to get SYSTEM privileges. However, he screws up. He loses his connection because he tried to list too much data at once. Then, not realizing the password on the account had expired and that he hadn’t set a new one, he finds he can’t get back into the account. He is locked out. Sadly, a system operator resets the account to the same password, and the attacker gets back in and creates his own account. Cliff informs them of the issue so they can remediate.

While all this is going on, the attacker is traced to a few different locations in Germany while the University of Bremen is closed for the holidays. Eventually, the call is traced to a local exchange in Hannover, where they finally believe the attacker to be. Two barriers now exist. First, to trace this any further Cliff needs a German warrant, and that can only be requested by a high-level government official. Second, the antiquated phone exchange requires someone physically present to trace the call while it is active.

Questions to Consider

Zeke at the NSA asks Cliff, “If [the attacker] is so methodological, how can you prove you’re not just following some computer program?”

What characteristics of an attacker can indicate a human at the other end instead of an automated process?


Next Session

January 25th 7:30PM ET

Read Chapters 38-46

Register/Attend Here:


Cuckoo’s Egg – Week 5 Notes

Session Recording: (Available 1/5-1/12)

Next Sessions Registration:

This week, we reviewed chapters 24-30.

Cliff arrives at the lab and talks with his boss, who wants him to discuss an ongoing attack being dealt with at Stanford. Dan from Stanford calls Cliff and mentions that he would have e-mailed Cliff the details, but he is concerned that someone else might be reading his e-mail, so he chose to discuss it over the phone. They had been relying on the phone more in light of this.

Stanford had a similar monitoring system and saw that the attacker (also traced to McLean, VA) uploaded a homework problem to their server, complete with his name, Knute Sears, and the name of his teacher, Mr. Maher. Due to the nature of the homework, they believed it to be associated with a high school kid. In an effort to help and potentially connect the LBL and Stanford breaches, Cliff worked with his sister to look for schools that might have a Mr. Maher. They only found one, but he was a history teacher, not math. There was also no Knute Sears enrolled there. This was a dead end.

Operational Security for Infosec Practitioners

We provide general security advice to users all the time, but we must also consider the security of our own operational tasking. OPSEC is unique to the operations of an individual role, and the security role is no different. We often research potentially malicious sites and files, so we have to protect ourselves from the inherent risks of that work using special precautions, just as someone dealing with biological weapons must take protective measures well beyond what the general public would.

To this end, we discussed OPSEC concerns related to browsing specifically. I discussed information available to the browser by just visiting a website and how people take advantage of that. I also discussed the modern advertising ecosystem and how it makes a perfect platform for the distribution of malicious code. We played a game where I showed ads and students guessed whether they were legit or led to malware. The conclusion? It’s often impossible to tell, even for those with a trained eye. This compounds the problem ad networks present. I provided several practical steps practitioners can take to strengthen their OPSEC including running ad and script blockers, disabling password manager autofill, disabling browser prefetch, and browsing from a VM to reduce attack surface.

More Reading:

Dig Deeper Exercises:

  • Level 1
    • Implement the “safify” alias and test it out a few times.
  • Level 2
    • Install an ad blocker like uBlock Origin or AdBlock Plus. Visit two sites you frequent and view the logs generated by your ad blocker. How many ad networks did you find?
  • Level 3
    • Take the same page and find the HTML/Javascript that delivered the ad. Analyze its function and consider what would need to be changed in order for malicious content to be delivered here.
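The “safify” alias from the Level 1 exercise isn’t reproduced in these notes, but defanging aliases like it usually rewrite a URL so it can’t be clicked accidentally. A sketch of that idea in Python (an assumption about what the alias does, not its actual definition):

```python
def safify(url):
    """Defang a URL so it can be shared in notes or chat without being
    clickable. This approximates what a shell alias like "safify" likely
    does; the real alias from class may differ."""
    return url.replace("http", "hxxp").replace(".", "[.]")

print(safify("http://evil.example.com/payload"))
# hxxp://evil[.]example[.]com/payload
```

The same transformation is common in threat intel reports, where indicators are shared as `hxxp://…` and `[.]` so a stray click can’t detonate anything.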

Cliff receives a call from Mike Gibbons at the Virginia FBI office. He is much more interested in the case than the California FBI and agrees with Cliff’s plan to have MITRE trace the call the next time the attacker shows up. This event soon took place and MITRE was able to confirm that someone was connecting to LBL from their network but they were unable to trace the source of the call due to the complexity of their network. 
Cliff formulates the hypothesis that MITRE might be serving as a hop point for many different attackers. For this to be true, three things would have to hold:

  1. There would have to be a way for anyone to connect to MITRE’s network
  2. A MITRE system would have to allow strangers to authenticate to it
  3. They would have to provide unaudited outgoing long distance telephone service

Cliff knew the third item was already true. He wanted to test the first two, but to do that he would have to assume the role of an attacker and conduct a pseudo-penetration test. He connects to Tymnet using a MITRE account and finds at least one system, called AEROVAX, left wide open that he can dial out from. This confirms his hypothesis.

While poking around the MITRE network he also discovers that the AEROVAX system has been infected for at least six months with a trojan horse that is stealing passwords at login. He informs MITRE about this issue and in exchange for this information they agree to send him a copy of their outgoing phone bill so he can assess the movement of the attacker.

Cliff receives the phone bill information and writes correlation software to analyze it. He highlights the calls he knows are from the attacker and flags the calls before and after them. Eventually, he comes up with a list of probable calls made by the attacker. It includes several familiar entities like Anniston, along with others like Oak Ridge, San Diego, and Norfolk. He also discovers a bunch of short one-minute phone calls to military bases and ponders the reason behind them.
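Cliff’s correlation approach can be sketched in a few lines: flag any call placed within some window of a session known to be the attacker’s. The data and window below are hypothetical; the book doesn’t describe his exact implementation.

```python
from datetime import datetime, timedelta

def flag_related_calls(call_log, known_times, window_minutes=30):
    """Return calls placed within `window_minutes` of a known attacker session.

    call_log: list of (datetime, destination) tuples from the phone bill.
    known_times: datetimes of sessions confirmed to be the attacker's.
    """
    window = timedelta(minutes=window_minutes)
    flagged = []
    for placed_at, destination in call_log:
        if any(abs(placed_at - t) <= window for t in known_times):
            flagged.append((placed_at, destination))
    return flagged

# Hypothetical phone bill entries and one confirmed attacker session.
calls = [
    (datetime(1986, 12, 1, 13, 5), "Anniston"),
    (datetime(1986, 12, 1, 18, 40), "Oak Ridge"),
]
sessions = [datetime(1986, 12, 1, 13, 0)]
print(flag_related_calls(calls, sessions))  # only the Anniston call
```

The same before/after-the-known-event correlation is still a staple of log analysis today.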

Attacker Pivoting

At this point we’ve seen the attacker pivot through all sorts of networks to reach their goal. This is done to keep people from finding the true identity of the attacker. It protects them from prosecution and retaliation while also providing resiliency to their attack infrastructure. I discussed and demonstrated some very simple techniques attackers can use for pivoting. This included a demonstration of “living off the land” with SSH and using netcat to shovel command line access back to an attacker through an intermediary host. I also discussed purpose-built malware like HTRAN. Finally, I discussed some realities of attribution in the modern landscape and its limitations.


More Reading:

Dig Deeper Exercises:

  • Level 1
    • Create a simple chat client by using a netcat relay between two hosts. Try implementing this in TCP and UDP.
  • Level 2
    • Expand your chat client to go through an intermediate jump host so as to conceal your originating IP from the victim.
  • Level 3
    • TCP Spoofing is a technique that attackers can theoretically use to send data to a network indirectly, but it is challenging in practice. Research this technique to understand how it works and its limitations.
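The netcat relay from the Level 2 exercise can also be sketched in Python. This is a minimal, single-connection relay for lab use only, not a hardened tool; hosts and ports are placeholders you would substitute:

```python
import socket
import threading

def relay(listen_port, target_host, target_port):
    """Accept one inbound TCP connection and forward traffic both ways to
    the target. The target only ever sees the relay's IP, which is the
    concealment property the exercise asks you to demonstrate."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", listen_port))
    server.listen(1)
    client, _ = server.accept()
    upstream = socket.create_connection((target_host, target_port))

    def pump(src, dst):
        # Copy bytes one direction until the source closes.
        while True:
            data = src.recv(4096)
            if not data:
                dst.close()
                return
            dst.sendall(data)

    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    pump(upstream, client)
```

Chaining two of these on different hosts gives you the jump-host setup from the Level 2 exercise; each hop only knows its immediate neighbors.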


MITRE decides to shut down its outbound modems, basically eliminating the pathway the attacker was taking into the LBL network. At this point Cliff thinks the investigation might be over. He ties up a few loose ends by notifying network owners at the Navy Regional Automated Data Center and an unnamed Georgia college about potential breaches on their networks. Through these discussions he confirms activity similar to what he has observed, as well as a similar compromise on the JPL network in California.

Cliff also refines his profile of the attacker he is tracing. The attacker is fluent in Unix and VMS, which means it is unlikely they are a high school student like the one the Stanford breach had suggested. Meanwhile, Teejay from the CIA calls and asks Cliff to send an updated copy of his logbook.

Cliff also builds another statistical analysis tool to calculate the attacker’s average login time. This turned out to be from 12-3 PM on weekdays, and as early as 6 AM on weekends. This supported the notion that if the attacker was in Europe, they would only break in during the evening on weekdays but were more flexible on weekends.
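The timezone reasoning behind this is simple arithmetic: Central European Time runs nine hours ahead of Pacific Standard Time, so Cliff’s midday sessions would be evening for a German attacker. A quick sketch with hypothetical session times:

```python
from datetime import datetime, timedelta

# Hypothetical session start times as observed at LBL (Pacific Standard Time).
pacific_sessions = [
    datetime(1986, 12, 1, 12, 30),
    datetime(1986, 12, 2, 14, 0),
]

# CET (UTC+1) is 9 hours ahead of PST (UTC-8).
CET_OFFSET = timedelta(hours=9)

for start in pacific_sessions:
    local = start + CET_OFFSET
    print(start.strftime("%H:%M"), "PST ->", local.strftime("%H:%M"), "CET")
# 12:30 PST -> 21:30 CET
# 14:00 PST -> 23:00 CET
```

Every 12-3 PM weekday session lands between 9 PM and midnight in Germany, exactly the after-hours pattern Cliff hypothesized.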

Sventek comes back, and this time Cliff initiates a trace via his Tymnet contacts. They find the call is coming from a new location: the International Telephone and Telegraph company (which means it is international), traced through the Westar 3 satellite. This means the call is coming from Spain, France, Germany, or Britain, but at first they can’t definitively say where until they get more information. Cliff gets a call from Ron Vivier later and finds that the call has been traced to the Datex network in West Germany.

There are now two possibilities. Either the hacker is indeed dialing in from Germany, or they are using the Datex network as a hop point, in a similar fashion to how someone would use Tymnet. Either way, the next step is to request information from the German Bundespost, the government monopoly that runs the communication network.

Cliff starts piecing together more of the puzzle and confirms that local times in Germany sync up with his weekday after-hours theory on call times. He also remembers that one username used by the attacker was Jaeger, which is German for hunter. Cliff isn’t ready to fully accept this conclusion, but some of his data points seem to fit.


Questions to Consider

What OPSEC failures by attackers can lead to attribution by defenders?


  • Use of public tools
  • Custom malware
  • Attack sourcing
  • Shared infrastructure
  • Multiple victims


Next Session

January 11th 7:30PM ET

Read Chapters 30-37

Register/Attend Here:


Source Code S2: Episode 6 – Jennifer Kolde

I’m joined by Jennifer Kolde of the Vertex project. Jen formerly served as an investigator for the federal government and was an analyst on Mandiant/FireEye’s intel team. Her background is interesting, as she actually came to investigative work from a technical writing background. We discussed her story, what it means for someone with technical skills to become a good intel analyst, and her experience testifying to Congress about structured threats.

Jen chose to support the Alzheimer’s Foundation with her appearance. These funds will go to support individuals and families living with Alzheimer’s, as well as research for treatment, earlier diagnosis, and a cure.

You can find the open source Synapse application Jen mentioned here:

Listen Now:

You can also subscribe to it using your favorite podcasting platform:


If you like what you hear, I’d sincerely appreciate you subscribing, “liking”, or giving a positive review of the podcast on whatever platform you use. As always, I love hearing your feedback as well and you can reach me @chrissanders88.

Special thanks to our title sponsor, Ninja Jobs!



Source Code S2: Episode 5 – Grady Summers

This week we’re joined by Grady Summers, CTO of FireEye, former CISO of General Electric, and my former boss. During our conversation, Grady discusses his rise through the ranks at one of the largest companies in the world and his decision to leave GE behind to join Mandiant. He talks about FireEye’s place in history and some of the unique challenges they face. We also discuss buzzword solutions and which products he thinks are overblown and which ones show real promise.

Grady chose to support Love and Grace Haiti with his appearance. These funds will go to support the care and education of 25 Haitian kids.

You can find Grady on Twitter @GradyS.

Listen Now:

You can also subscribe to it using your favorite podcasting platform:


If you like what you hear, I’d sincerely appreciate you subscribing, “liking”, or giving a positive review of the podcast on whatever platform you use. Make sure to let Grady know by tweeting at him @GradyS. As always, I love hearing your feedback as well, and you can reach me @chrissanders88.

Special thanks to our title sponsor, Ninja Jobs!



Cuckoo’s Egg – Week 4 Notes

Session Recording: (Available 12/8-1/4)

Next Sessions Registration:

This week, we reviewed chapters 15-23.

Cliff discovers the attacker attempting to find a pathway into the CIA system by querying the Milnet NIC. He doesn’t find any computers, but he does find the names of four people. Cliff calls these people and finally gets in touch with someone, letting him know that the attacker was searching for a CIA computer. The CIA takes interest and sends someone out the following Monday.

Cliff presents his findings to the CIA, including an agent named Teejay. He learns that DOCKMASTER isn’t a Navy shipyard, but actually an unclassified NSA system. The CIA lets Cliff know they can’t do much and it’s up to the FBI to pursue it. Teejay tells Cliff to keep monitoring and keep him informed regardless. He also shares a story about the zero trust model used at the CIA and a time when an insider intercepted agent data. He was caught when a secretary noticed the last login time on her terminal was something unexpected.

Most Security Practitioners are Choice Architects

The story Teejay shared about the CIA is interesting because of how they caught it. A secretary who was on vacation came back and logged in to her terminal. When a user there logs in, they see the output of their last successful login. The secretary noticed her last login occurred while she was on vacation and notified someone, which began the investigation that caught the inside attacker. The last login message is a trigger for a choice, and the people who implemented it are choice architects. All security people are, to some degree, choice architects.
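The detection logic behind that story is worth making explicit: the anomaly is a recorded login inside a window when the legitimate user knows they were away. A minimal sketch (the function and field names are illustrative, not from any real system):

```python
from datetime import datetime

def suspicious_last_login(last_login, away_start, away_end):
    """Return True when the previously recorded successful login falls
    inside a period the user knows they were away. This models the cue
    that let the secretary spot the insider."""
    return away_start <= last_login <= away_end

# The secretary returns from vacation and checks the login banner.
reported = datetime(1987, 1, 2, 10, 15)  # last login shown by the terminal
vacation = (datetime(1986, 12, 24), datetime(1987, 1, 5))
print(suspicious_last_login(reported, *vacation))  # True: someone used her account
```

Note that the system didn’t decide anything; it just surfaced the fact and let the human make the judgment, which is the choice-architecture point.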

The concept of libertarian paternalism (note: the term libertarian has nothing to do with politics) poses that it is possible and legitimate for someone to affect behavior while also respecting the freedom of choice. We have the ability to allow users to make their own choices while also “nudging” them towards choices that are in their best security interest. This is why default options exist, for example.

In class, we went through several examples of choice architecture that are less than desirable, including Facebook’s implementation of “Last Login”, how Word/Excel notify users about macros, and Outlook’s user experience for opening attachments.

More Reading:

Dig Deeper Exercises:

  • Level 1
    • Observe your daily work and note opportunities for security-based choice architecture.
  • Level 2
    • Choose one of the examples you found, or one I presented in class and come up with a way to better nudge users towards a more secure state.
    • Optional: E-mail/DM your idea to me for feedback.

The attacker logs back in and finds a password to the Livermore lab network. This lab does secret research, and those computers are supposed to be isolated. They have unclassified computers connected to the network, however. Cliff discovers this when he observes the attacker log into the LBL lab from Livermore. He wasn’t aware that was even possible, but as attackers often do, the intruder found a new pathway.

The attacker breaks into the MIT network from LBL. Cliff calls the network operator and discovers this was likely possible because a scientist who accessed Livermore’s computers also accessed MIT computers, and probably left his password lying around.

Network Architecture, Zero-Trust Networks, Beyond Corp, and Air Gaps

A network should be built with defensibility in mind. This means building a network assuming you will be attacked, and assuming at least some of those attacks will be successful. I discussed the components of a defensible network as defined by Richard Bejtlich. A defensible network must be: monitored, inventoried, controlled, claimed, minimized, assessed, and current.

Traditional networks are perimeter focused. Many call this the M&M model: a crunchy external shell and a soft interior. Things inside the network are trusted; things outside are not. However, the perimeter has shifted over time thanks to the heavy usage of cloud apps for critical services, the needs of remote or WFH employees, and bring your own device (BYOD).

Many people are now looking to zero trust network models like Google’s BeyondCorp. When you plug into a ZT network, you aren’t automatically afforded any trust. You have to gain trust through multiple factors. Your system has to authenticate via a certificate, the user has to authenticate in two ways, the user has to be enrolled in the proper job classification, and more. All assets are available over the Internet. There’s no VPN to access things anymore and no single point of trust assessment; it’s a combination of multiple rules and trust evaluations going on all the time. This is an oversimplification, but it changes how you might think of a traditional perimeter network.
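The core idea reduces to a policy check where every request is evaluated against several independent trust signals and no single factor grants access. A toy sketch (the field names are illustrative, not from any real BeyondCorp API):

```python
def access_decision(request):
    """Toy zero-trust policy check: access requires every independent
    trust signal to pass. Field names are illustrative assumptions."""
    checks = [
        request.get("device_cert_valid", False),  # machine identity
        request.get("user_mfa_passed", False),    # user identity, two factors
        request.get("role_authorized", False),    # job classification
        request.get("device_patched", False),     # device health
    ]
    return all(checks)

print(access_decision({
    "device_cert_valid": True,
    "user_mfa_passed": True,
    "role_authorized": True,
    "device_patched": True,
}))  # True
```

Contrast this with the perimeter model, where the only "check" is effectively which side of the firewall the request originated from.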

Air-gapped networks are those that are theoretically physically disconnected from public Internet-touching networks. I say theoretically because in practice many of them aren’t. Someone once said that an air-gapped network is really just a high latency network.


More Reading:

Dig Deeper Exercises:

  • Level 1
    • Research BeyondCorp and examples of real-world deployments outside Google. What were the challenges faced?


Cliff discusses the attack with friends and draws a link between some of the attacker’s activity. The passwords he’s chosen, jaeger and hunter, are German (Jaeger is German for hunter). Benson and hedges match Benson & Hedges, a specific brand of cigarettes.

The attacker breaks into an ELXSI supercomputer at LBL by guessing the password to a default SYSTEM-level account. Cliff discovers this and writes a program to slow the computer down to a crawl when the attacker dials into it, so as not to give away that the attacker has been discovered.

Cliff strengthens his monitoring system by purchasing a pager to notify him when a compromised account logs in. This keeps him from sleeping at the office.

Cliff calls the DOE about the Livermore break in. They tell him to keep it quiet, but to call the National Computer Security Center, which operates out of the NSA. The NCSC is receptive, but can’t do anything about it.

Cliff does some legal research and discovers a warrant isn’t legally required to do a phone trace (USCA § 3121). He looks over his notes and realizes he wrote down all the numbers the VA telco operator said during the trace. There are only a few possible permutations, so he social engineers the operator and has her check the registered owner of all of them, claiming he was erroneously charged for calls to these numbers. Only one is active, and it points to MITRE, a defense contractor in McLean, VA.

He calls the VA telco and asks them to confirm the number he found on his own. They aren’t supposed to do that, but they do it anyway. This is essentially a form of social engineering: getting someone to confirm a piece of information rather than asking them for it outright.

Social Engineering

Cliff used social engineering to extract information that he needed to further his investigation. Social engineering in security is an act that influences a person to take an action that may or may not be in their best interest. It usually takes the form of phishing (e-mail), vishing (phone), or impersonation (e-mail, phone, or in person). The human plays a significant role in many breaches. The success rate of external pen tests with humans out of scope is often fairly low (<20%). With humans in scope, it is usually near or at 100%. 

In class we examined a few different SE scenarios and debated which types of scenarios would be most effective. We discussed Maslow’s Hierarchy of Needs and how attackers will leverage primary and secondary needs to elicit action, suppress action, reveal information, or change information.


More Reading:

Dig Deeper Exercises:

  • Level 1
  • Level 3
    • Experiment with BeEF to get a sense of what control an attacker has simply by getting you to visit a link.

He speaks to a network operator at MITRE who says that it is impossible that his network has been hacked. He agrees to put a trace on the line and wait for Cliff to call him the next time the attacker logs in. This would validate the connection.


Questions to Consider

Are Zero Trust Networks inevitable for all modern networks?

  • Why or why not?
  • What current challenges exist for specific types of networks (see below) to move towards a ZT/BeyondCorp model?
    • Small networks
    • ICS network
    • International networks


Next Session

January 4th 7:30PM ET

Read Chapters 24-30

Register/Attend Here: