Case Study 35-1: The Morris Worm's Buffer Overflow (1988) — The First Famous Exploit

Introduction

November 2, 1988, was the day the internet first experienced a worm. Robert Morris, a 23-year-old Cornell graduate student, released a self-propagating program that would become the most studied piece of malicious software in computing history. Not because it was sophisticated — by modern standards it was remarkably simple — but because it was first. The Morris Worm established the template for network worms that still applies today.

The Worm exploited three vulnerabilities: a debug backdoor in sendmail, trust relationships in rsh, and a buffer overflow in fingerd. We focus here on the buffer overflow because it illustrates, in the clearest historical context, exactly what this chapter describes.

🔐 Security Note: This analysis is historical, educational, and fully documented in public academic literature. The fingerd vulnerability has been fixed for more than three decades; the code patterns involved are well-documented in computer security textbooks and research papers. Understanding this history is fundamental to understanding why modern security features exist.

The fingerd Service

In 1988, BSD Unix systems ran a service called fingerd that allowed users to query information about logged-in users on remote machines. You could type finger user@hostname and receive information about that user. The service was a social feature — people used it to see if friends were online.

The fingerd daemon listened on TCP port 79. When a connection arrived, it read a line of input and reported the requested information. The reading code was:

/* Original fingerd.c, BSD 4.3, simplified */
#include <stdio.h>

main() {
    char line[512];
    gets(line);            /* No bounds checking */
    /* ... process and respond ... */
}

gets() reads from stdin into the buffer until it encounters a newline or EOF. It does not know or care how large the buffer is. It will write as many bytes as the input provides.

The Equivalent Assembly

Compiled for the VAX architecture of the era (here translated to x86-64 for analysis):

; fingerd main() — the vulnerable read
main:
    push    rbp
    mov     rbp, rsp
    sub     rsp, 0x200          ; 512 bytes for 'line'

    ; gets(line):
    ; In C terms: reads from stdin into [rbp-0x200]
    ; In assembly:
    lea     rdi, [rbp-0x200]    ; arg1 = &line[0]
    call    gets                ; no bounds checking

    ; ... rest of fingerd processing ...
    leave
    ret                         ; returns to whatever is at rbp+8

The stack layout during execution:

High address
┌───────────────────────────────┐
│  Return Address (8 bytes)     │  ← target: overwrite this
├───────────────────────────────┤
│  Saved RBP (8 bytes)          │
├───────────────────────────────┤
│  line[511]                    │
│  line[510]                    │
│  ...                          │
│  line[1]                      │
│  line[0]                      │  ← gets() starts writing here
└───────────────────────────────┘
Low address

A payload of more than 520 bytes (512 buffer + 8 saved RBP) would overwrite the return address. Morris's exploit provided exactly that.

What the Worm Did

Morris crafted a 536-byte payload: mostly VAX NOP instructions, followed by shellcode, filling the 512-byte buffer, with the final bytes overwriting the saved frame pointer and the return address so that execution jumped back into the buffer. The VAX shellcode:

  1. Called execve("/bin/sh", ...) to spawn a shell
  2. The shell ran Morris's bootstrap loader
  3. The bootstrap loader connected back to the originating machine
  4. The originating machine sent the main worm body
  5. The main worm installed itself and started looking for new targets

In modern terms, this is a classic remote code execution → reverse shell → dropper chain.

Why the Exploit Worked

Three conditions needed to be true, all of which were:

  1. gets() had no bounds checking — by design. C's philosophy of trusting the programmer meant no safety check was inserted.

  2. The stack was executable — in 1988, there was no NX/DEP concept. Stack memory was writable AND executable. Shellcode written into the stack could run when the return address pointed to it.

  3. Addresses were predictable — there was no ASLR. The stack was at the same address on every execution of fingerd. Morris could determine the approximate stack address from his own VAX and use that address in the exploit payload.

Why It Would Not Work Today

On a modern Linux system with default security features:

  • Stack canary (-fstack-protector): detects the overwrite before ret executes, calling __stack_chk_fail to abort the process
  • NX/DEP: the stack is non-executable, so shellcode in the buffer faults on its first instruction
  • ASLR: the stack address is randomized, so guessing the shellcode address would require roughly 2^28 attempts
  • gets() removed in C11: modern toolchains reject its use with a warning or error
  • FORTIFY_SOURCE: the oversized read is detected at runtime and the process aborted

Any one of these would defeat the Morris Worm exploit. All five together make it effectively impossible.

The Aftermath

The Worm infected approximately 6,000 VAX and Sun workstations — roughly 10% of the internet in 1988. Systems were slowed to unusable states as multiple instances of the worm consumed resources. The damage:

  • Major research universities and military sites were affected
  • Systems had to be taken offline and cleaned manually
  • Estimated economic damage: $100,000 to $10 million (1988 dollars)
  • Robert Morris was convicted under the Computer Fraud and Abuse Act, sentenced to 3 years probation, 400 hours community service, and a $10,050 fine

The technical response established patterns that persist:

  • The CERT/CC (Computer Emergency Response Team) was founded at Carnegie Mellon University in response to the Morris Worm
  • Systematic vulnerability disclosure processes began
  • Network services began including input validation
  • gets() began to be discouraged (deprecated in C99, removed in C11)
  • Research into secure coding practices accelerated

The security community studies the Morris Worm not just as history, but as a clean, well-understood example of the vulnerability-exploit-mitigation cycle. Every mitigation in Chapter 36 can be traced, directly or indirectly, to exploits like this one.

Assembly-Level Lessons

The Morris Worm illustrates the core assembly-level insight of buffer overflow exploitation: the return address is on the stack, adjacent to local variables, and the CPU will jump wherever it points. The CPU does not know or care that the return address was overwritten. It follows it faithfully.

When you look at modern function prologues and see:

mov     rax, [fs:0x28]
mov     [rbp-8], rax

...you are seeing the direct descendant of the response to the Morris Worm and exploits like it. The 35 years of security engineering since 1988 are embedded in those two instructions.
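The matching epilogue (sketched here in the same style; the exact instruction sequence varies by compiler version) verifies that copy before ret can consume a corrupted return address:

```
mov     rax, [rbp-8]        ; reload the canary copy
xor     rax, [fs:0x28]      ; zero only if it still matches
je      .ok
call    __stack_chk_fail    ; overwrite detected: abort, never ret
.ok:
leave
ret
```

The overflow that defeated fingerd must now pass over the canary at [rbp-8] to reach the return address, and any change to it is caught here before control transfers.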