Part VII: Security and Reverse Engineering

The Union of Systems Knowledge and Security Thinking

There is a moment in every assembly programmer's education when two realizations arrive simultaneously, usually uninvited. The first: every security vulnerability is ultimately an assembly-level phenomenon. Stack overflows, use-after-free bugs, format string exploits — strip away the high-level abstractions and you find instructions, registers, and memory addresses behaving in ways the original programmer did not intend. The second: every security researcher who is genuinely good at their job reads assembly the way the rest of us read prose. They look at a binary and see the logic, the flaws, the intent, and the opportunities.

Part VII is the union of those two realizations.

If you have worked through Parts I through VI, you already possess an unusual set of skills. You understand the x86-64 instruction set at the encoding level. You know how ARM64 handles its register file differently. You can explain what happens at the microarchitecture level when a branch mispredicts. You have written syscall handlers and interrupt service routines. You understand the memory hierarchy from L1 cache to DRAM. Most people — including most programmers — cannot say any of that. Part VII puts those skills to work in the domain where they matter most: understanding how software breaks and how to protect it.

The Cat-and-Mouse Game

Modern exploit mitigations did not arrive fully formed. They emerged through a decades-long arms race, each generation of mitigations closing one attack vector and forcing attackers to find another. Understanding this history is not just interesting — it is essential for understanding why today's mitigations are designed the way they are.

The classic stack buffer overflow worked because local variables and the return address sat in the same writable, executable memory region: shellcode injected into a buffer would run when the overwritten return address pointed into it. The fix was NX/DEP: mark the stack non-executable. The response was Return-Oriented Programming: don't inject code; chain together existing code fragments called gadgets. The fixes for that were stack canaries (detect the overwrite before the return) and ASLR (randomize addresses so gadgets are hard to find). The responses were information-disclosure leaks and partial overwrites. The current-generation fix is Intel CET with its shadow stack: a hardware-protected copy of return addresses that no ordinary memory write can forge.

Every chapter in Part VII is one layer of that onion.

Chapter Previews

Chapter 34: Reverse Engineering begins with the practical skills: reading assembly you did not write. The tools — objdump, GDB, Ghidra, IDA Free — and the techniques for using them. Recognizing compiler patterns, reconstructing control flow, recovering data types. CTF-style challenges. By the end you will be comfortable opening an unknown binary and understanding what it does.

Chapter 35: Buffer Overflows and Memory Corruption is the first half of the anchor example that has been building since Chapter 11. We examine the classic stack buffer overflow in complete detail: the assembly-level mechanics of how a buffer overflow overwrites a return address, what shellcode is and why it is position-independent, and the modern heap corruption techniques that have largely replaced stack overflows. Every exploit mechanism is paired with its corresponding defense.

Chapter 36: Exploit Mitigations is the defender's chapter. Stack canaries, NX/DEP, ASLR, RELRO, and CFI — we examine each at the assembly level, seeing exactly what the compiler inserts into prologues and epilogues, how the hardware enforces the NX bit, and how ASLR entropy is calculated. We also examine how each was (or can be) bypassed, because knowing the bypass is what tells you why the next mitigation was needed.

Chapter 37: Return-Oriented Programming and Modern Exploitation completes the security arc. ROP is the technique that defeated NX/DEP and forced the development of CET. We trace through how a ROP chain works at the assembly level — how gadgets are chained, how the stack becomes a program, how ret2libc and SROP work — and then examine why Intel CET's shadow stack defeats it. The chapter closes with the current state of the art.

Why This Matters for Everyone

Security researchers need assembly because exploits live at the assembly level. But this part is not only for security researchers.

If you write C code, understanding buffer overflows at the assembly level makes you a fundamentally better C programmer. You understand concretely why gets() is dangerous, why strcpy() requires care, and why the compiler is inserting those extra instructions around your function. You stop treating memory safety as an abstract concern and start seeing it in the disassembly.

If you work on compilers or toolchains, understanding ROP and CFI tells you why your compiler needs -fcf-protection and what Intel CET changes about code generation.

If you work on embedded systems or kernels, understanding exploit mitigations tells you which protections you can rely on and which require explicit attention.

The skills in Part VII complete the picture that Part I began: there is no magic in software, only instructions. That is liberating for the programmer and sobering for the security engineer. Both perspectives are worth having.


Part VII assumes you have read Parts I through VI and are comfortable with the x86-64 instruction set, GDB, the C-assembly interface, syscalls, and memory management.

Chapters in This Part