Learning Objectives
- Implement a singly-linked list with insert, delete, search, and traverse operations
- Implement a doubly-linked list with bidirectional traversal
- Build a stack using a linked list (push, pop, peek, isEmpty)
- Build a queue using a linked list (enqueue, dequeue, front, isEmpty)
- Choose the right data structure for a given problem (array vs. linked list vs. stack vs. queue)
In This Chapter
- Spaced Review
- 15.1 Beyond Arrays: When You Need Dynamic Data Structures
- 15.2 Singly-Linked Lists
- 15.3 Doubly-Linked Lists
- 15.4 Stacks: Last In, First Out
- 15.5 Queues: First In, First Out
- 15.6 Circular Lists and Deques
- 15.7 Comparison: Arrays vs. Linked Lists
- 15.8 Memory Management for Linked Structures
- 15.9 Crypts of Pascalia: Inventory as a Linked List, Undo as a Stack
- 15.10 Project Checkpoint: PennyWise Undo Stack
- 15.11 Chapter Summary
Chapter 15: Dynamic Data Structures: Linked Lists, Stacks, and Queues
"Bad programmers worry about the code. Good programmers worry about data structures and their relationships." — Linus Torvalds
Spaced Review
Before we dive into dynamic data structures, let us revisit three concepts from earlier chapters that will prove essential today.
From Chapter 13: What is the difference between a typed file and a text file?
A typed file stores data in binary format — each record occupies the same number of bytes on disk, allowing random access via Seek. A text file stores human-readable characters organized into lines, supporting only sequential access. When we build linked structures in this chapter, we will eventually need to save and reload them — and typed files will be our tool for persisting pointer-based structures to disk.
From Chapter 11: When would you use a variant record?
A variant record is appropriate when different instances of a record type need to carry different sets of fields, selected by a tag field. In this chapter, we will see that nodes in a linked structure always carry the same fields (data plus pointer), but the concept of variant records will surface again when we build heterogeneous collections — imagine a linked list where each node holds a different kind of transaction.
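As a refresher, a variant record declaration might look like this sketch. The transaction kinds and field names here are hypothetical, invented purely for illustration; note how the tag field Kind selects which variant fields exist:

```pascal
type
  TKind = (Deposit, Withdrawal);
  TTransaction = record
    Amount: Real;
    case Kind: TKind of              { tag field selects the variant }
      Deposit:    (Source: string[20]);
      Withdrawal: (CheckNum: Integer);
  end;
```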
From Chapter 7: What is the difference between a var parameter and a value parameter?
A var parameter passes the actual memory location of the argument, so modifications inside the procedure affect the original variable. A value parameter passes a copy, leaving the original untouched. This distinction is critical for linked structures: when we insert at the head of a list, we must pass the head pointer as a var parameter — otherwise the caller's pointer would never update.
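The distinction is easy to see in miniature with integers before we apply it to pointers:

```pascal
procedure AddOneByValue(X: Integer);
begin
  X := X + 1;   { modifies only the local copy }
end;

procedure AddOneByVar(var X: Integer);
begin
  X := X + 1;   { modifies the caller's variable }
end;

var
  N: Integer;
begin
  N := 5;
  AddOneByValue(N);
  WriteLn(N);   { still 5 }
  AddOneByVar(N);
  WriteLn(N);   { now 6 }
end.
```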
15.1 Beyond Arrays: When You Need Dynamic Data Structures
Throughout Parts I and II we have relied heavily on arrays. Arrays are elegant when you know in advance how many elements you need, when you want instant access by index, and when insertions and deletions are rare. But real-world programs frequently violate all three assumptions.
Consider PennyWise, our personal finance application. A user might enter five expenses today and fifty tomorrow. We could declare an array of 10,000 records — but that wastes memory when the user has only a handful, and it fails silently when the user exceeds the limit. Worse, if we want to insert a new expense between two existing ones (say, to keep the list sorted by date), we must shift every subsequent element one position to the right — an operation that takes time proportional to the number of elements.
Dynamic data structures solve these problems by allocating memory for each element individually, at run-time, and linking elements together with pointers. The structures grow and shrink as needed; insertions and deletions at known positions take constant time; and we never waste memory on unused slots.
In Chapter 14 we learned how New allocates heap memory and how pointers reference that memory. Now we put those tools to work.
The Big Three Dynamic Structures
This chapter introduces three fundamental dynamic structures:
| Structure | Ordering Principle | Key Operations | Metaphor |
|---|---|---|---|
| Linked List | Sequence (position) | Insert, delete, search, traverse | A chain of paper clips |
| Stack | Last In, First Out (LIFO) | Push, pop, peek | A stack of plates |
| Queue | First In, First Out (FIFO) | Enqueue, dequeue, front | A line at the grocery store |
Each is built from nodes connected by pointers — the same New, Dispose, and ^ operators you mastered in Chapter 14. The difference lies in where we insert and remove elements, and what rules govern access.
💡 Theme 5 — Algorithms + Data Structures = Programs. Niklaus Wirth titled his famous book with this equation for a reason. The algorithm for evaluating an arithmetic expression becomes trivial once you choose the right data structure (a stack). The algorithm for simulating a printer becomes obvious once you choose a queue. The structure is the strategy.
15.2 Singly-Linked Lists
A singly-linked list is a sequence of nodes, where each node contains two things: a piece of data and a pointer to the next node in the sequence. The last node's pointer is nil, signaling the end of the list.
The Node Type
type
PNode = ^TNode;
TNode = record
Data: Integer;
Next: PNode;
end;
This is the same forward-declaration pattern from Chapter 14: we declare the pointer type PNode before the record type TNode, because TNode contains a field of type PNode. Pascal's compiler allows this specific exception to the "declare before use" rule.
A list is represented by a single pointer variable — the head — that points to the first node:
var
Head: PNode;
begin
Head := nil; { Empty list }
end.
When Head is nil, the list is empty. This is our base case for every operation.
Inserting at the Head
The simplest insertion places a new node at the front of the list:
procedure InsertAtHead(var Head: PNode; Value: Integer);
var
NewNode: PNode;
begin
New(NewNode);
NewNode^.Data := Value;
NewNode^.Next := Head;
Head := NewNode;
end;
Walk through this carefully. New(NewNode) allocates a fresh node on the heap. We store the value in NewNode^.Data and point NewNode^.Next at whatever Head currently points to — which might be nil (empty list) or another node (existing list). Finally, we update Head to point at our new node.
⚠️ Critical:
Head must be a var parameter. If we passed it by value, the assignment Head := NewNode would modify only the local copy inside the procedure. The caller's pointer would still point to the old first node (or nil), and the new node would be leaked.
This operation takes O(1) time — constant, regardless of list length. No shifting, no copying.
Visualizing Insertion at the Head
Let us trace InsertAtHead step by step, because understanding how pointers move is essential to everything in this chapter.
Starting state: Head -> [10] -> [20] -> nil
We call InsertAtHead(Head, 5):
1. New(NewNode): allocates a fresh node somewhere on the heap. NewNode now points to it. The node's Data and Next fields contain garbage.
2. NewNode^.Data := 5: the new node now holds the value 5.
3. NewNode^.Next := Head: the new node's Next pointer now points to node [10], which is what Head currently points to.

State: Head -> [10] -> [20] -> nil and NewNode -> [5] -> [10]

4. Head := NewNode: Head now points to the new node.
Final state: Head -> [5] -> [10] -> [20] -> nil
The old head ([10]) is now the second node. No existing nodes moved in memory — we only changed two pointer values. This is why insertion is O(1): the cost is the same whether the list has 3 nodes or 3 million.
Now consider what would happen if Head were not a var parameter. After step 4, the local copy of Head inside the procedure would point to [5], but the caller's Head variable would still point to [10]. The new node [5] would be unreachable from the caller's perspective — a classic memory leak. This is perhaps the single most common bug when working with linked lists in Pascal.
Inserting at the Tail
To append a node at the end, we must traverse the entire list to find the last node:
procedure InsertAtTail(var Head: PNode; Value: Integer);
var
NewNode, Current: PNode;
begin
New(NewNode);
NewNode^.Data := Value;
NewNode^.Next := nil;
if Head = nil then
Head := NewNode
else
begin
Current := Head;
while Current^.Next <> nil do
Current := Current^.Next;
Current^.Next := NewNode;
end;
end;
The while loop walks from the head to the last node (the one whose Next is nil), then attaches the new node there. This takes O(n) time, where n is the number of nodes. We can improve this to O(1) by maintaining a separate tail pointer, which we will see shortly.
Searching
To find whether a value exists in the list:
function Search(Head: PNode; Value: Integer): PNode;
var
Current: PNode;
begin
Current := Head;
while Current <> nil do
begin
if Current^.Data = Value then
begin
Search := Current;
Exit;
end;
Current := Current^.Next;
end;
Search := nil; { Not found }
end;
We walk the list node by node. If we find a match, we return the pointer to that node. If we reach nil without finding it, we return nil. This is a linear search — O(n) in the worst case. Unlike arrays, we cannot do binary search on a linked list because we have no random access by index.
Why not binary search? Binary search requires jumping to the middle element in O(1) time — A[(Low + High) div 2] for an array. But in a linked list, there is no way to reach the middle without walking there from the head, which already costs O(n/2). Without random access, binary search degrades to linear search, losing its entire advantage. This is the fundamental trade-off of linked lists: we gain flexibility in insertion and deletion, but we lose the ability to jump directly to any position.
Deleting a Node
Deletion is the trickiest operation because we must update the previous node's Next pointer to skip over the deleted node:
procedure DeleteNode(var Head: PNode; Value: Integer);
var
Current, Previous: PNode;
begin
if Head = nil then Exit;
{ Special case: deleting the head }
if Head^.Data = Value then
begin
Current := Head;
Head := Head^.Next;
Dispose(Current);
Exit;
end;
{ General case: find the node }
Previous := Head;
Current := Head^.Next;
while Current <> nil do
begin
if Current^.Data = Value then
begin
Previous^.Next := Current^.Next;
Dispose(Current);
Exit;
end;
Previous := Current;
Current := Current^.Next;
end;
{ If we reach here, Value was not in the list }
end;
We track two pointers: Current (the node we are examining) and Previous (the node just before it). When we find the target, we set Previous^.Next := Current^.Next to bridge over the doomed node, then Dispose(Current) to free its memory. The head node requires special treatment because there is no "previous" node — we update Head directly.
Let us trace deletion with the list 10 -> 20 -> 30 -> nil, deleting 20:
1. Head^.Data is 10, not 20, so we enter the general case.
2. Previous := Head (points to [10]), Current := Head^.Next (points to [20]).
3. Current^.Data is 20: found it!
4. Previous^.Next := Current^.Next: node [10]'s Next now points to [30], skipping [20].
5. Dispose(Current): the memory for node [20] is returned to the heap.
Final state: Head -> [10] -> [30] -> nil
The deletion had two phases: finding the node (which required traversal, O(n)) and unlinking it (which was a constant-time pointer update). If we already had a pointer to the node and its predecessor, the unlinking itself would be O(1). This observation motivates doubly-linked lists, where each node knows its predecessor.
⚠️ Memory discipline: Always Dispose a node after unlinking it. Failing to do so leaks memory. We will discuss systematic disposal strategies in Section 15.8.
Traversal
Traversal visits every node in the list, typically to print or process each element:
procedure PrintList(Head: PNode);
var
Current: PNode;
begin
Current := Head;
while Current <> nil do
begin
Write(Current^.Data);
Current := Current^.Next;
if Current <> nil then
Write(' -> ');
end;
WriteLn;
end;
Counting Nodes
function CountNodes(Head: PNode): Integer;
var
Current: PNode;
Count: Integer;
begin
Count := 0;
Current := Head;
while Current <> nil do
begin
Inc(Count);
Current := Current^.Next;
end;
CountNodes := Count;
end;
Maintaining a Tail Pointer
If we frequently insert at the tail, we can avoid the O(n) traversal by keeping a second pointer:
var
Head, Tail: PNode;
procedure InsertAtTailFast(var Head, Tail: PNode; Value: Integer);
var
NewNode: PNode;
begin
New(NewNode);
NewNode^.Data := Value;
NewNode^.Next := nil;
if Head = nil then
begin
Head := NewNode;
Tail := NewNode;
end
else
begin
Tail^.Next := NewNode;
Tail := NewNode;
end;
end;
Now insertion at the tail is O(1). The cost is one extra pointer to maintain and slightly more complex deletion logic (we must update Tail if we delete the last node).
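That deletion adjustment might look like the following sketch, which removes the first node while keeping Tail consistent (using the PNode type from earlier; the procedure name is our own):

```pascal
{ Sketch: delete the head node, keeping the Tail pointer valid }
procedure DeleteHeadWithTail(var Head, Tail: PNode);
var
  Temp: PNode;
begin
  if Head = nil then Exit;       { nothing to delete }
  Temp := Head;
  Head := Head^.Next;
  if Head = nil then
    Tail := nil;                 { we removed the only node }
  Dispose(Temp);
end;
```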
📊 Complexity summary for singly-linked lists:
| Operation | Time | Notes |
|---|---|---|
| Insert at head | O(1) | |
| Insert at tail | O(n), or O(1) with tail pointer | |
| Delete (known position) | O(1) | Given pointer to previous node |
| Delete (by value) | O(n) | Must search first |
| Search | O(n) | Linear scan |
| Access by index | O(n) | Must traverse from head |
| Traversal | O(n) | Visit all nodes |
15.3 Doubly-Linked Lists
A singly-linked list has one limitation that becomes painful in practice: you can only move forward. If you are at node 50 and need node 49, you must start over from the head. A doubly-linked list solves this by giving each node a pointer to both the next and the previous node.
The Node Type
type
PDNode = ^TDNode;
TDNode = record
Data: Integer;
Prev: PDNode;
Next: PDNode;
end;
Each node now carries two pointers. The cost is one extra pointer per node (typically 4 or 8 bytes), but the benefit is bidirectional traversal and simpler deletion.
Insertion
Inserting at the head of a doubly-linked list requires updating more pointers:
procedure DInsertAtHead(var Head: PDNode; Value: Integer);
var
NewNode: PDNode;
begin
New(NewNode);
NewNode^.Data := Value;
NewNode^.Prev := nil;
NewNode^.Next := Head;
if Head <> nil then
Head^.Prev := NewNode;
Head := NewNode;
end;
The key addition is Head^.Prev := NewNode — the old head must know about its new predecessor. And NewNode^.Prev := nil because the head has no predecessor.
Inserting at the tail follows the same pattern in reverse. With a tail pointer:
procedure DInsertAtTail(var Head, Tail: PDNode; Value: Integer);
var
NewNode: PDNode;
begin
New(NewNode);
NewNode^.Data := Value;
NewNode^.Next := nil;
NewNode^.Prev := Tail;
if Tail <> nil then
Tail^.Next := NewNode
else
Head := NewNode; { List was empty }
Tail := NewNode;
end;
Deletion
Deletion in a doubly-linked list is cleaner than in a singly-linked list because each node knows its predecessor:
procedure DDeleteNode(var Head, Tail: PDNode; Target: PDNode);
begin
if Target = nil then Exit;
if Target^.Prev <> nil then
Target^.Prev^.Next := Target^.Next
else
Head := Target^.Next; { Deleting the head }
if Target^.Next <> nil then
Target^.Next^.Prev := Target^.Prev
else
Tail := Target^.Prev; { Deleting the tail }
Dispose(Target);
end;
We do not need a separate Previous pointer — Target^.Prev gives us direct access. This makes deletion O(1) when we already have a pointer to the node. We still need O(n) to find the node by value, but the unlinking itself is constant-time.
Let us trace the deletion of node [20] from the doubly-linked list [10] <-> [20] <-> [30], where we already have Target pointing to [20]:
1. Target^.Prev is [10] (not nil), so Target^.Prev^.Next := Target^.Next: node [10]'s Next now points to [30].
2. Target^.Next is [30] (not nil), so Target^.Next^.Prev := Target^.Prev: node [30]'s Prev now points to [10].
3. Dispose(Target): node [20]'s memory is freed.
Result: [10] <-> [30]. Two pointer updates and we are done — no traversal needed because every node already knows its neighbors. Compare this with the singly-linked list, where we had to walk from the head to find the predecessor of the node we wanted to delete.
Inserting in the Middle
A powerful capability of doubly-linked lists is inserting a new node after any given node in O(1) time:
procedure DInsertAfter(var Tail: PDNode; Target: PDNode; Value: Integer);
var
NewNode: PDNode;
begin
if Target = nil then Exit;
New(NewNode);
NewNode^.Data := Value;
NewNode^.Prev := Target;
NewNode^.Next := Target^.Next;
if Target^.Next <> nil then
Target^.Next^.Prev := NewNode
else
Tail := NewNode; { Inserting after the current tail }
Target^.Next := NewNode;
end;
Four pointer updates, no traversal. In a singly-linked list, inserting after a known node is also O(1), but inserting before a known node requires finding its predecessor — which takes O(n). In a doubly-linked list, inserting before a known node is just as easy: use Target^.Prev to access the predecessor.
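A possible sketch of that operation, mirroring DInsertAfter (the procedure name is our own):

```pascal
{ Sketch: insert a new node before a known node in a doubly-linked list }
procedure DInsertBefore(var Head: PDNode; Target: PDNode; Value: Integer);
var
  NewNode: PDNode;
begin
  if Target = nil then Exit;
  New(NewNode);
  NewNode^.Data := Value;
  NewNode^.Next := Target;
  NewNode^.Prev := Target^.Prev;
  if Target^.Prev <> nil then
    Target^.Prev^.Next := NewNode
  else
    Head := NewNode;             { inserting before the head }
  Target^.Prev := NewNode;
end;
```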
Bidirectional Traversal
The real power of a doubly-linked list is traversal in both directions:
procedure PrintForward(Head: PDNode);
var
Current: PDNode;
begin
Current := Head;
while Current <> nil do
begin
Write(Current^.Data, ' ');
Current := Current^.Next;
end;
WriteLn;
end;
procedure PrintBackward(Tail: PDNode);
var
Current: PDNode;
begin
Current := Tail;
while Current <> nil do
begin
Write(Current^.Data, ' ');
Current := Current^.Prev;
end;
WriteLn;
end;
💡 When to choose doubly-linked over singly-linked: Use a doubly-linked list when you need backward traversal, when you frequently delete nodes you already have pointers to, or when you are implementing a deque (double-ended queue). The extra memory per node is usually a small price for the flexibility.
The Cost of the Extra Pointer
A doubly-linked list uses more memory per node. On a typical Free Pascal 32-bit system, each pointer is 4 bytes; on a 64-bit system, 8 bytes. For a list of integers:
- Singly-linked: 4 bytes (Integer) + 4 or 8 bytes (Next pointer) = 8–12 bytes/node
- Doubly-linked: 4 bytes (Integer) + 4 or 8 bytes (Next) + 4 or 8 bytes (Prev) = 12–20 bytes/node
The doubly-linked list uses 50–67% more memory per node. For a list of large records (say, 200 bytes per record), the overhead is negligible — an extra 4–8 bytes on top of 200 is barely 2–4%. For a list of small data types like integers or characters, the overhead is significant. As always, the right choice depends on your access patterns: if you never need to go backward or delete by pointer, the singly-linked list is simpler and leaner.
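You can check this arithmetic on your own system with SizeOf, using the TNode and TDNode types from this chapter. Keep in mind that the compiler may pad records for alignment, so the printed sizes can be slightly larger than the raw sums above:

```pascal
{ Compare actual per-node sizes; exact values depend on your
  platform and the compiler's record alignment settings }
WriteLn('Singly-linked node: ', SizeOf(TNode), ' bytes');
WriteLn('Doubly-linked node: ', SizeOf(TDNode), ' bytes');
```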
15.4 Stacks: Last In, First Out
A stack is a data structure where the last element added is the first one removed — Last In, First Out (LIFO). Think of a stack of plates in a cafeteria: you add plates on top and remove them from the top.
Stack Operations
Every stack supports four operations:
| Operation | Description | Time |
|---|---|---|
| Push(value) | Add an element to the top | O(1) |
| Pop | Remove and return the top element | O(1) |
| Peek (or Top) | Return the top element without removing it | O(1) |
| IsEmpty | Check whether the stack has no elements | O(1) |
All operations are O(1). This is not a coincidence — it is a consequence of the design constraint that we only ever interact with the top of the stack.
Implementation Using a Linked List
A stack maps naturally onto a singly-linked list where the head is the top of the stack:
type
PStackNode = ^TStackNode;
TStackNode = record
Data: Integer;
Next: PStackNode;
end;
TStack = record
Top: PStackNode;
Count: Integer;
end;
We wrap the top pointer and a count into a record for clean encapsulation.
procedure InitStack(var S: TStack);
begin
S.Top := nil;
S.Count := 0;
end;
function IsEmpty(const S: TStack): Boolean;
begin
IsEmpty := (S.Top = nil);
end;
procedure Push(var S: TStack; Value: Integer);
var
NewNode: PStackNode;
begin
New(NewNode);
NewNode^.Data := Value;
NewNode^.Next := S.Top;
S.Top := NewNode;
Inc(S.Count);
end;
function Pop(var S: TStack): Integer;
var
Temp: PStackNode;
begin
if IsEmpty(S) then
begin
WriteLn('Error: Stack underflow');
Halt(1);
end;
Pop := S.Top^.Data;
Temp := S.Top;
S.Top := S.Top^.Next;
Dispose(Temp);
Dec(S.Count);
end;
function Peek(const S: TStack): Integer;
begin
if IsEmpty(S) then
begin
WriteLn('Error: Stack is empty');
Halt(1);
end;
Peek := S.Top^.Data;
end;
Notice how Push is simply InsertAtHead by another name, and Pop is DeleteHead that returns the deleted value. The linked list is the stack — we simply restrict which operations we expose.
Stack Applications
Stacks appear everywhere in computer science:
- Expression evaluation. Converting infix expressions (like 3 + 4 * 2) to postfix (3 4 2 * +) uses a stack. Evaluating the postfix form also uses a stack. We explore this in Case Study 1.
- Undo functionality. Each action is pushed onto a stack. "Undo" pops the most recent action. This is exactly what PennyWise needs.
- Function call management. When your program calls a procedure, Pascal pushes the return address and local variables onto the call stack. When the procedure returns, it pops them off. This is why recursive functions can exhaust stack space: each recursive call pushes another frame.
- Bracket matching. To verify that every ( has a matching ) and every [ has a matching ], push each opening bracket onto a stack. When you encounter a closing bracket, pop the stack and check for a match.
- Backtracking algorithms. Maze solving, depth-first search, and game AI use stacks to remember where they have been so they can back up when they hit a dead end.
🔗 Theme 2 — Discipline Transfers. The stack is one of those data structures that transfers across every domain of computer science. Once you understand push/pop, you understand function calls, undo systems, expression parsers, and backtracking algorithms. This is a concept that pays compound interest.
Decimal to Binary Conversion
A stack provides an elegant solution to decimal-to-binary conversion. The algorithm repeatedly divides by 2, pushing each remainder. When we pop the remainders, they come out in the correct order (most significant bit first):
function DecimalToBinary(N: Integer): string;
var
S: TStack;
BinStr: string;
begin
InitStack(S);
if N = 0 then
begin
DecimalToBinary := '0';
Exit;
end;
while N > 0 do
begin
Push(S, N mod 2);
N := N div 2;
end;
BinStr := '';
while not IsEmpty(S) do
BinStr := BinStr + Chr(Ord('0') + Pop(S));
DecimalToBinary := BinStr;
end;
Trace it with N = 42:
- 42 mod 2 = 0, push 0, N = 21
- 21 mod 2 = 1, push 1, N = 10
- 10 mod 2 = 0, push 0, N = 5
- 5 mod 2 = 1, push 1, N = 2
- 2 mod 2 = 0, push 0, N = 1
- 1 mod 2 = 1, push 1, N = 0
Stack (top to bottom): 1, 0, 1, 0, 1, 0
Pop all: "101010" — which is indeed 42 in binary.
The remainders are produced in reverse order (least significant bit first), but the stack reverses them. This is a general pattern: whenever you need to reverse the natural order of production, a stack is your tool.
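The same pattern reverses a string. Here is a sketch using the TStack from this section; since our stack stores integers, we push character codes with Ord and recover them with Chr, as the bracket matcher below also does:

```pascal
{ Sketch: reverse a string by pushing all characters, then popping }
function ReverseString(const S: string): string;
var
  Stack: TStack;
  i: Integer;
  R: string;
begin
  InitStack(Stack);
  for i := 1 to Length(S) do
    Push(Stack, Ord(S[i]));      { push character codes in order }
  R := '';
  while not IsEmpty(Stack) do
    R := R + Chr(Pop(Stack));    { pop returns them in reverse }
  ReverseString := R;
end;
```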
A Bracket Matcher
Here is a practical example — a function that checks whether brackets in a string are balanced:
function BracketsBalanced(const S: string): Boolean;
var
  Stack: TStack;
  i: Integer;
  Ch, Top: Char;
  OK: Boolean;
begin
  InitStack(Stack);
  OK := True;
  i := 1;
  while OK and (i <= Length(S)) do
  begin
    Ch := S[i];
    if (Ch = '(') or (Ch = '[') or (Ch = '{') then
      Push(Stack, Ord(Ch))
    else if (Ch = ')') or (Ch = ']') or (Ch = '}') then
    begin
      if IsEmpty(Stack) then
        OK := False      { closing bracket with nothing to match }
      else
      begin
        Top := Chr(Pop(Stack));
        if not (((Top = '(') and (Ch = ')')) or
                ((Top = '[') and (Ch = ']')) or
                ((Top = '{') and (Ch = '}'))) then
          OK := False    { mismatched pair, e.g. ( closed by ] }
      end;
    end;
    Inc(i);
  end;
  BracketsBalanced := OK and IsEmpty(Stack);
  { Clean up any remaining nodes so no memory is leaked }
  while not IsEmpty(Stack) do
    Pop(Stack);
end;
We push the ASCII code of each opening bracket (since our stack stores integers). When we encounter a closing bracket, we pop and check that it matches. If the stack is empty at the end, all brackets were matched. If not, there were unmatched opening brackets.
15.5 Queues: First In, First Out
A queue is a data structure where the first element added is the first one removed — First In, First Out (FIFO). Think of a line at the post office: the person who arrived first gets served first.
Queue Operations
| Operation | Description | Time |
|---|---|---|
| Enqueue(value) | Add an element to the rear | O(1) |
| Dequeue | Remove and return the front element | O(1) |
| Front | Return the front element without removing it | O(1) |
| IsEmpty | Check whether the queue has no elements | O(1) |
Implementation Using a Linked List
A queue requires efficient access to both ends: we add at the rear and remove from the front. A singly-linked list with both head and tail pointers gives us O(1) for both:
type
PQueueNode = ^TQueueNode;
TQueueNode = record
Data: Integer;
Next: PQueueNode;
end;
TQueue = record
Front: PQueueNode;
Rear: PQueueNode;
Count: Integer;
end;
procedure InitQueue(var Q: TQueue);
begin
Q.Front := nil;
Q.Rear := nil;
Q.Count := 0;
end;
function QueueIsEmpty(const Q: TQueue): Boolean;
begin
QueueIsEmpty := (Q.Front = nil);
end;
procedure Enqueue(var Q: TQueue; Value: Integer);
var
NewNode: PQueueNode;
begin
New(NewNode);
NewNode^.Data := Value;
NewNode^.Next := nil;
if Q.Rear = nil then
begin
Q.Front := NewNode;
Q.Rear := NewNode;
end
else
begin
Q.Rear^.Next := NewNode;
Q.Rear := NewNode;
end;
Inc(Q.Count);
end;
function Dequeue(var Q: TQueue): Integer;
var
Temp: PQueueNode;
begin
if QueueIsEmpty(Q) then
begin
WriteLn('Error: Queue underflow');
Halt(1);
end;
Dequeue := Q.Front^.Data;
Temp := Q.Front;
Q.Front := Q.Front^.Next;
if Q.Front = nil then
Q.Rear := nil; { Queue is now empty }
Dispose(Temp);
Dec(Q.Count);
end;
function QueueFront(const Q: TQueue): Integer;
begin
if QueueIsEmpty(Q) then
begin
WriteLn('Error: Queue is empty');
Halt(1);
end;
QueueFront := Q.Front^.Data;
end;
The critical detail is in Dequeue: when we remove the last element, we must set Q.Rear := nil in addition to Q.Front becoming nil. Forgetting this leaves a dangling Rear pointer — it still points to the memory we just disposed, and the next Enqueue would write to freed memory.
Queue Applications
- Print spooling. Documents are printed in the order they arrive. Case Study 2 explores this in detail.
- Breadth-first search (BFS). Graph traversal algorithms use a queue to explore nodes level by level: first all neighbors, then all neighbors' neighbors, and so on.
- Task scheduling. Operating systems use queues to manage processes waiting for CPU time, disk I/O, or network access.
- Buffering. When data arrives faster than it can be processed (streaming audio, network packets), a queue buffers the excess until the consumer catches up.
- Message passing. In concurrent programming, threads communicate through message queues. Producer threads enqueue messages; consumer threads dequeue them.
A Simple Demonstration
var
Q: TQueue;
begin
InitQueue(Q);
Enqueue(Q, 10);
Enqueue(Q, 20);
Enqueue(Q, 30);
WriteLn('Front: ', QueueFront(Q)); { 10 }
WriteLn('Dequeued: ', Dequeue(Q)); { 10 }
WriteLn('Dequeued: ', Dequeue(Q)); { 20 }
WriteLn('Front: ', QueueFront(Q)); { 30 }
WriteLn('Count: ', Q.Count); { 1 }
end.
The output confirms FIFO ordering: 10 was enqueued first and dequeued first.
Why Not Use the Other End?
You might wonder: why not dequeue from the rear instead of the front? Or enqueue at the front instead of the rear? You can — but then you have a stack, not a queue. The queue's power lies in its discipline: by separating the enqueue end (rear) from the dequeue end (front), it guarantees fairness. The element that has been waiting the longest is always served next.
This FIFO discipline is so fundamental that many systems enforce it as a matter of law or ethics. Hospital emergency rooms triage patients (a priority queue, not a strict FIFO queue), but within each triage level, patients are seen in order of arrival. Bank queues, airline boarding, and customer service phone lines all follow FIFO. When you implement a queue in code, you are implementing a fairness guarantee.
Comparing Stack and Queue Implementations
Both the stack and queue are built from the same building block — a singly-linked list. The difference is purely in which operations we expose:
| Feature | Stack | Queue |
|---|---|---|
| Insert point | Head (top) | Tail (rear) |
| Remove point | Head (top) | Head (front) |
| Extra pointers needed | Head only | Head and Tail |
| Access pattern | LIFO | FIFO |
| Metaphor | Stack of plates | Line at post office |
This is an important software engineering lesson: two very different abstractions (stack and queue) can share the same underlying data structure (linked list). The abstraction — the restriction on how we interact with the data — is what gives each structure its power. Restricting access is not a weakness; it is a feature. A stack guarantees that you always get the most recent item. A queue guarantees that you always get the oldest item. These guarantees simplify reasoning about your program's behavior.
15.6 Circular Lists and Deques
Before we move to comparisons and memory management, let us briefly introduce two variations that extend the basic structures.
Circular Linked Lists
In a circular linked list, the last node's Next pointer points back to the first node instead of nil:
[10] -> [20] -> [30] -> [10] -> ...
This is useful for problems that cycle through elements repeatedly — round-robin scheduling, turn-based games, or the Josephus problem. The key implementation difference is that we never check for nil; instead, we check whether we have returned to the starting node:
procedure PrintCircular(Start: PNode);
var
Current: PNode;
begin
if Start = nil then Exit;
Current := Start;
repeat
Write(Current^.Data, ' ');
Current := Current^.Next;
until Current = Start;
WriteLn;
end;
⚠️ Infinite loop danger: If you accidentally traverse a circular list with a while Current <> nil loop, it will never terminate. Always use a repeat...until Current = Start pattern or maintain a counter.
Circular Doubly-Linked Lists
A circular doubly-linked list combines both extensions: each node has Prev and Next pointers, and the last node's Next points to the first while the first node's Prev points to the last. This gives O(1) access to both ends without maintaining separate head and tail pointers — one pointer suffices because the "tail" is just Head^.Prev.
Deques (Double-Ended Queues)
A deque (pronounced "deck") supports insertion and removal at both ends:
| Operation | Description |
|---|---|
| PushFront(value) | Insert at the front |
| PushBack(value) | Insert at the rear |
| PopFront | Remove from the front |
| PopBack | Remove from the rear |
A doubly-linked list with head and tail pointers implements a deque naturally — all four operations are O(1). A deque generalizes both stacks and queues: if you only use PushFront/PopFront, it behaves as a stack; if you only use PushBack/PopFront, it behaves as a queue.
We will not build a full deque implementation here, but you will have the opportunity to do so in the exercises.
15.7 Comparison: Arrays vs. Linked Lists
Now that we have seen linked lists in action, let us compare them systematically with arrays.
Access Patterns
| Operation | Array | Linked List |
|---|---|---|
| Access by index | O(1) | O(n) |
| Search (unsorted) | O(n) | O(n) |
| Search (sorted) | O(log n) — binary search | O(n) — no random access |
| Insert at beginning | O(n) — shift all elements | O(1) |
| Insert at end | O(1) — if space available | O(1) with tail pointer |
| Insert at position k | O(n) — shift elements | O(n) to find, O(1) to insert |
| Delete at beginning | O(n) — shift all elements | O(1) |
| Delete at position k | O(n) — shift elements | O(n) to find, O(1) to unlink |
Memory
Arrays allocate a contiguous block of memory. Each element occupies exactly the space its data type requires. Linked lists allocate each node separately, and each node carries the overhead of one pointer (singly-linked) or two pointers (doubly-linked). For small data types like Integer, the pointer overhead can double or triple the memory per element.
However, arrays must be sized at declaration time (in standard Pascal), which means you either waste memory by over-allocating or risk running out of space. Linked lists use exactly as much memory as needed — no more, no less (plus pointer overhead).
Cache Performance
Modern CPUs are optimized for accessing memory sequentially. Array elements sit next to each other in memory, so traversing an array benefits from cache locality — the CPU's cache prefetches nearby memory automatically. Linked list nodes can be scattered across the heap, leading to cache misses on each pointer dereference. In practice, this means array traversal is often significantly faster than linked list traversal, even when both are O(n).
Decision Guide
Use an array when:
- You know the maximum size at compile time
- You need frequent random access by index
- You need binary search on sorted data
- Cache performance matters (large datasets, tight loops)

Use a linked list when:
- The number of elements is unpredictable
- You frequently insert or delete at the beginning or middle
- You need the collection to grow and shrink dynamically
- You will traverse sequentially rather than accessing by index

Use a stack when:
- You need LIFO access (undo, backtracking, expression evaluation)
- You only ever need the most recently added element

Use a queue when:
- You need FIFO access (scheduling, buffering, BFS)
- You process elements in the order they arrive
💡 Choosing the right data structure is a design decision, not a coding one. It happens before you write a single line. The wrong choice can make an elegant algorithm painful and a simple task slow. When in doubt, consider: How will I access the data? How often will I insert and delete? Do I need random access or sequential?
A Concrete Example: Student Grades
Suppose you need to store student grades and perform these operations: (a) add a new grade, (b) look up the grade at position k, (c) compute the average, (d) insert a grade in sorted position.
With an array, operation (a) is O(1) at the end, (b) is O(1), (c) is O(n), and (d) is O(n) due to shifting. With a linked list, (a) is O(1) at the head, (b) is O(k), (c) is O(n), and (d) is O(n) to find the position plus O(1) to insert. The array wins on random access; the linked list wins on insertion. If you rarely look up by position but frequently insert in sorted order, the linked list is a better fit — not because it is faster overall, but because the O(1) insertion (once positioned) avoids the costly shifting that arrays require.
But there is a subtlety. If your list of grades is small (say, 30 students in a class), the performance difference between O(1) and O(n) is negligible — both complete in microseconds. For small collections, choose the simpler implementation (usually an array). The advantages of linked lists become significant only as the collection grows to hundreds or thousands of elements, or when insertions and deletions are very frequent.
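For reference, operation (d) on a linked list looks like this. This is a sketch assuming the singly-linked PNode/TNode node type with an Integer Data field and a Next pointer:

```pascal
type
  PNode = ^TNode;
  TNode = record
    Data: Integer;
    Next: PNode;
  end;

procedure InsertSorted(var Head: PNode; Value: Integer);
var
  NewNode, Current: PNode;
begin
  New(NewNode);
  NewNode^.Data := Value;
  { Insert at the head if the list is empty or Value is smallest }
  if (Head = nil) or (Value <= Head^.Data) then
  begin
    NewNode^.Next := Head;
    Head := NewNode;
    Exit;
  end;
  { Walk to the last node whose Data is still less than Value: O(n) }
  Current := Head;
  while (Current^.Next <> nil) and (Current^.Next^.Data < Value) do
    Current := Current^.Next;
  { Splice in after Current: O(1), no shifting }
  NewNode^.Next := Current^.Next;
  Current^.Next := NewNode;
end;
```

The O(n) cost is entirely in the walk; the splice itself moves no data, which is exactly the advantage over the array version.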
15.8 Memory Management for Linked Structures
Linked structures introduce a responsibility that arrays do not: you must explicitly free every node you allocate. Pascal has no garbage collector, so if you lose the last pointer to a node without calling `Dispose`, that memory is leaked — it remains allocated but unreachable for the rest of your program's lifetime.
Disposing an Entire List
When you are done with a linked list, you must walk through it and dispose every node:
procedure FreeList(var Head: PNode);
var
Temp: PNode;
begin
while Head <> nil do
begin
Temp := Head;
Head := Head^.Next;
Dispose(Temp);
end;
end;
The pattern is: save the current node in Temp, advance Head to the next node, then dispose Temp. We must advance before disposing, because after Dispose(Temp), the memory at Temp^.Next is no longer valid.
⚠️ Common bug: Disposing nodes in the wrong order. If you `Dispose(Head)` and then try to access `Head^.Next`, you are reading freed memory. The program might appear to work — the old data might still be there by coincidence — but it is undefined behavior and will eventually cause mysterious crashes.
Disposing a Stack or Queue
For a stack:
procedure FreeStack(var S: TStack);
begin
while not IsEmpty(S) do
Pop(S);
end;
Since Pop already calls Dispose on the removed node, we simply pop until empty.
For a queue:
procedure FreeQueue(var Q: TQueue);
begin
while not QueueIsEmpty(Q) do
Dequeue(Q);
end;
Memory Leak Checklist
To avoid leaks in programs with linked structures:
- **Always dispose nodes when deleting them.** Never just unlink a node without calling `Dispose`.
- **Free the entire structure** before the program exits or before the variable goes out of scope.
- **Set pointers to `nil` after disposing.** This prevents dangling pointer dereferences. The `FreeList` procedure above does this implicitly because `Head` ends up as `nil` after the loop.
- **Be careful with reassignment.** If you do `Head := Head^.Next` without first saving and disposing the old `Head`, you have leaked a node.
- **Test with boundary cases.** Empty lists, single-element lists, and two-element lists are where most pointer bugs hide.
Debugging Linked Structures
When your linked list program crashes or produces wrong results, here is a systematic debugging approach:
- **Draw it on paper.** Seriously. Draw boxes for nodes, arrows for pointers. Step through your code instruction by instruction, updating the diagram. Most pointer bugs become obvious when you see the arrows.
- **Add a diagnostic `PrintList` call** after every operation during development. Verify the list looks correct after each insert and delete.
- **Check for `nil` before every dereference.** If there is any possibility that a pointer might be `nil`, check it. Dereferencing `nil` is the most common crash in pointer-heavy code.
- **Track the count.** Maintain a `Count` field and verify it matches the actual number of nodes. If `Count` says 5 but traversal visits 4 nodes, you have a broken link.
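The count-tracking advice can be automated with a small consistency check. This is a sketch assuming a list record with Head and Count fields; the TIntList name is introduced here for illustration:

```pascal
type
  PNode = ^TNode;
  TNode = record
    Data: Integer;
    Next: PNode;
  end;

  TIntList = record
    Head: PNode;
    Count: Integer;
  end;

procedure CheckList(const L: TIntList; Context: string);
var
  Current: PNode;
  Actual: Integer;
begin
  { Count the nodes by traversal and compare against the stored Count }
  Actual := 0;
  Current := L.Head;
  while Current <> nil do
  begin
    Inc(Actual);
    Current := Current^.Next;
  end;
  if Actual <> L.Count then
    WriteLn('BUG after ', Context, ': Count=', L.Count,
            ' but traversal found ', Actual, ' nodes');
end;
```

Call it after every insert and delete during development, e.g. `CheckList(MyList, 'DeleteNode')`, and remove the calls once the structure is trusted.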
Using Free Pascal's heaptrc Unit
Free Pascal provides a built-in memory leak detector. Add `uses heaptrc;` to your program (or compile with `-gh`), and when your program exits, it will report any memory that was allocated but never freed:
Heap dump by heaptrc unit
2 unfreed memory blocks : 24
...
This tells you exactly how many nodes you forgot to dispose. During development, always compile with leak detection enabled. It is far easier to fix leaks as you write the code than to hunt them down after the fact.
A clean exit looks like:
Heap dump by heaptrc unit
0 unfreed memory blocks : 0
Train yourself to expect this message. If you see any unfreed blocks, track them down before committing your code.
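To see the report for yourself, compile a deliberately leaky program with `fpc -gh leaky.pas` and run it (the file name is arbitrary):

```pascal
program Leaky;

type
  PNode = ^TNode;
  TNode = record
    Data: Integer;
    Next: PNode;
  end;

var
  P: PNode;

begin
  New(P);        { allocated... }
  P^.Data := 42;
  P := nil;      { ...and now unreachable: a leak }
  { With -gh, the exit report will show an unfreed block. }
end.
```

Adding `Dispose(P)` before the `P := nil` line makes the leak report go away, which is a good way to convince yourself the tool works.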
Common Pointer Bugs: A Gallery
Here are the five most common pointer bugs in linked-list code, each of which we have seen in some form during this chapter:
1. **Forgetting `var` on the head parameter.** The caller's pointer never updates. The new node is leaked.
2. **Disposing before advancing.** `Dispose(Current); Current := Current^.Next;` accesses freed memory. Always save in `Temp`, advance, then dispose.
3. **Not handling the empty list.** If `Head` is `nil` and you try `Head^.Data`, the program crashes. Always check for `nil` first.
4. **Not handling the single-element list.** Deleting the only node must set `Head` to `nil`. If your code assumes there is always a "previous" node, it will fail on single-element lists.
5. **Losing the tail pointer.** When deleting the last node, the tail pointer must be updated. With a singly-linked list, this requires traversal to find the new tail (the second-to-last node).
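The tail-pointer bug is worth seeing in code. Deleting the last node of a singly-linked list forces a walk to the second-to-last node. This is a sketch assuming separate Head and Tail pointer variables:

```pascal
type
  PNode = ^TNode;
  TNode = record
    Data: Integer;
    Next: PNode;
  end;

procedure DeleteTail(var Head, Tail: PNode);
var
  Current: PNode;
begin
  if Head = nil then Exit;            { empty list: nothing to do }
  if Head = Tail then                 { single-element list }
  begin
    Dispose(Head);
    Head := nil;
    Tail := nil;
    Exit;
  end;
  Current := Head;                    { O(n) walk to the second-to-last node }
  while Current^.Next <> Tail do
    Current := Current^.Next;
  Dispose(Tail);
  Tail := Current;
  Tail^.Next := nil;                  { the new tail terminates the list }
end;
```

Notice that the empty and single-element cases each need their own branch, echoing bugs 3 and 4 above. In a doubly-linked list, the walk disappears: the new tail is simply `Tail^.Prev`.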
15.9 Crypts of Pascalia: Inventory as a Linked List, Undo as a Stack
In previous chapters, the Crypts of Pascalia text adventure stored inventory as an array. But our hero now explores deeper dungeons where items come and go frequently, and we want an undo feature so the player can reverse their last action.
Inventory as a Linked List
We replace the fixed-size array with a linked list:
type
PItem = ^TItem;
TItem = record
Name: string[30];
Weight: Integer;
Next: PItem;
end;
var
Inventory: PItem;
procedure PickUpItem(var Inv: PItem; ItemName: string; ItemWeight: Integer);
var
NewItem: PItem;
begin
New(NewItem);
NewItem^.Name := ItemName;
NewItem^.Weight := ItemWeight;
NewItem^.Next := Inv;
Inv := NewItem;
WriteLn('You pick up the ', ItemName, '.');
end;
procedure DropItem(var Inv: PItem; ItemName: string);
var
Current, Previous: PItem;
begin
if Inv = nil then
begin
WriteLn('Your inventory is empty.');
Exit;
end;
{ Dropping the first item }
if Inv^.Name = ItemName then
begin
Current := Inv;
Inv := Inv^.Next;
WriteLn('You drop the ', Current^.Name, '.');
Dispose(Current);
Exit;
end;
{ Search for the item }
Previous := Inv;
Current := Inv^.Next;
while Current <> nil do
begin
if Current^.Name = ItemName then
begin
Previous^.Next := Current^.Next;
WriteLn('You drop the ', Current^.Name, '.');
Dispose(Current);
Exit;
end;
Previous := Current;
Current := Current^.Next;
end;
WriteLn('You don''t have a ', ItemName, '.');
end;
procedure ShowInventory(Inv: PItem);
var
Current: PItem;
TotalWeight: Integer;
begin
if Inv = nil then
begin
WriteLn('Your inventory is empty.');
Exit;
end;
WriteLn('=== Inventory ===');
TotalWeight := 0;
Current := Inv;
while Current <> nil do
begin
WriteLn(' ', Current^.Name, ' (', Current^.Weight, ' lbs)');
TotalWeight := TotalWeight + Current^.Weight;
Current := Current^.Next;
end;
WriteLn('Total weight: ', TotalWeight, ' lbs');
end;
The linked list lets us carry any number of items without a fixed limit, and picking up or dropping items is clean and fast.
Undo Stack
Every action the player takes is recorded on a stack. When the player types "undo," we pop the most recent action and reverse it:
type
TActionKind = (akPickUp, akDrop, akMove);
PAction = ^TAction;
TAction = record
Kind: TActionKind;
ItemName: string[30];
ItemWeight: Integer;
PreviousRoom: Integer;
Next: PAction;
end;
var
UndoStack: PAction;
procedure RecordAction(var Stack: PAction; Kind: TActionKind;
Name: string; Weight, Room: Integer);
var
A: PAction;
begin
New(A);
A^.Kind := Kind;
A^.ItemName := Name;
A^.ItemWeight := Weight;
A^.PreviousRoom := Room;
A^.Next := Stack;
Stack := A;
end;
procedure UndoLastAction(var Stack: PAction; var Inv: PItem;
var CurrentRoom: Integer);
var
A: PAction;
begin
if Stack = nil then
begin
WriteLn('Nothing to undo.');
Exit;
end;
A := Stack;
Stack := Stack^.Next;
case A^.Kind of
akPickUp:
begin
WriteLn('Undo: You put back the ', A^.ItemName, '.');
DropItem(Inv, A^.ItemName);
end;
akDrop:
begin
WriteLn('Undo: You pick up the ', A^.ItemName, ' again.');
PickUpItem(Inv, A^.ItemName, A^.ItemWeight);
end;
akMove:
begin
WriteLn('Undo: You return to the previous room.');
CurrentRoom := A^.PreviousRoom;
end;
end;
Dispose(A);
end;
Now the game can support arbitrary undo depth, limited only by available memory. Each action costs one heap allocation — negligible for a text adventure.
💡 Design insight: Notice how the undo stack stores enough information to reverse each action. For `akPickUp`, reversing means dropping the item. For `akDrop`, reversing means picking it up again (so we need to remember the weight). For `akMove`, reversing means returning to the previous room. This pattern — recording the inverse of each operation — is the foundation of undo systems in professional software.
15.10 Project Checkpoint: PennyWise Undo Stack
It is time to apply what we have learned to PennyWise. Our personal finance tracker currently stores expenses in an array. We will migrate to a linked list and add undo functionality.
Requirements
- **Expenses as a linked list.** Each expense node stores a description, amount, category, and date. New expenses are inserted at the head (most recent first).
- **Undo stack.** Each expense addition is pushed onto an undo stack. The user can undo the last N additions, which removes the corresponding expenses from the list.
- **Traversal and totals.** The user can view all expenses and see totals by category.
Data Types
type
PExpense = ^TExpense;
TExpense = record
Description: string[50];
Amount: Real;
Category: string[20];
Date: string[10];
Next: PExpense;
end;
PUndoNode = ^TUndoNode;
TUndoNode = record
Description: string[50];
Amount: Real;
Category: string[20];
Date: string[10];
Next: PUndoNode;
end;
TExpenseList = record
Head: PExpense;
Count: Integer;
Total: Real;
end;
TUndoStack = record
Top: PUndoNode;
Count: Integer;
end;
Key Operations
Adding an expense inserts at the head of the expense list and pushes a record onto the undo stack:
procedure AddExpense(var List: TExpenseList; var Undo: TUndoStack;
Desc: string; Amt: Real; Cat, Dt: string);
var
E: PExpense;
U: PUndoNode;
begin
{ Add to expense list }
New(E);
E^.Description := Desc;
E^.Amount := Amt;
E^.Category := Cat;
E^.Date := Dt;
E^.Next := List.Head;
List.Head := E;
Inc(List.Count);
List.Total := List.Total + Amt;
{ Record on undo stack }
New(U);
U^.Description := Desc;
U^.Amount := Amt;
U^.Category := Cat;
U^.Date := Dt;
U^.Next := Undo.Top;
Undo.Top := U;
Inc(Undo.Count);
WriteLn('Added: ', Desc, ' ($', Amt:0:2, ')');
end;
Undoing pops the undo stack and removes the matching expense from the list:
procedure UndoLastExpense(var List: TExpenseList; var Undo: TUndoStack);
var
U: PUndoNode;
Current, Previous: PExpense;
begin
if Undo.Top = nil then
begin
WriteLn('Nothing to undo.');
Exit;
end;
U := Undo.Top;
Undo.Top := Undo.Top^.Next;
Dec(Undo.Count);
{ Find and remove the matching expense }
if List.Head = nil then
begin
Dispose(U);
Exit;
end;
{ Check the head }
if (List.Head^.Description = U^.Description) and
(Abs(List.Head^.Amount - U^.Amount) < 0.001) then
begin
Current := List.Head;
List.Head := List.Head^.Next;
List.Total := List.Total - Current^.Amount;
Dec(List.Count);
WriteLn('Undone: ', Current^.Description, ' ($', Current^.Amount:0:2, ')');
Dispose(Current);
Dispose(U);
Exit;
end;
{ Search the rest of the list }
Previous := List.Head;
Current := List.Head^.Next;
while Current <> nil do
begin
if (Current^.Description = U^.Description) and
(Abs(Current^.Amount - U^.Amount) < 0.001) then
begin
Previous^.Next := Current^.Next;
List.Total := List.Total - Current^.Amount;
Dec(List.Count);
WriteLn('Undone: ', Current^.Description, ' ($', Current^.Amount:0:2, ')');
Dispose(Current);
Dispose(U);
Exit;
end;
Previous := Current;
Current := Current^.Next;
end;
WriteLn('Warning: Matching expense not found.');
Dispose(U);
end;
Testing the Checkpoint
A sample session:
> add "Coffee" 4.50 "Food" "2026-03-23"
Added: Coffee ($4.50)
> add "Bus pass" 75.00 "Transport" "2026-03-23"
Added: Bus pass ($75.00)
> add "Textbook" 89.99 "Education" "2026-03-23"
Added: Textbook ($89.99)
> list
=== Expenses (3 items, Total: $169.49) ===
Textbook $89.99 Education 2026-03-23
Bus pass $75.00 Transport 2026-03-23
Coffee $4.50 Food 2026-03-23
> undo
Undone: Textbook ($89.99)
> list
=== Expenses (2 items, Total: $79.50) ===
Bus pass $75.00 Transport 2026-03-23
Coffee $4.50 Food 2026-03-23
> undo
Undone: Bus pass ($75.00)
> list
=== Expenses (1 items, Total: $4.50) ===
Coffee $4.50 Food 2026-03-23
Category Summaries
PennyWise can also display totals by category — traversing the linked list and accumulating amounts:
procedure PrintCategorySummary(const L: TExpenseList);
var
Current: PExpense;
{ ... category tracking arrays ... }
begin
Current := L.Head;
while Current <> nil do
begin
{ Accumulate into category arrays }
Current := Current^.Next;
end;
{ Print summary table }
end;
This is a standard traversal with accumulation — the same pattern we used for CountNodes and SumNodes, but collecting grouped totals instead of a single sum. The full implementation in code/project-checkpoint.pas uses small parallel arrays for category names, totals, and counts (limited to 20 categories, which is plenty for a personal finance tracker).
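A sketch of how that accumulation might look. The array bounds and local names here are illustrative; consult the shipped file for the exact version:

```pascal
procedure PrintCategorySummary(const L: TExpenseList);
const
  MaxCats = 20;
var
  Names: array[1..MaxCats] of string[20];
  Totals: array[1..MaxCats] of Real;
  NumCats, i, Found: Integer;
  Current: PExpense;
begin
  NumCats := 0;
  Current := L.Head;
  while Current <> nil do
  begin
    { Look for this expense's category among those seen so far }
    Found := 0;
    for i := 1 to NumCats do
      if Names[i] = Current^.Category then
        Found := i;
    if (Found = 0) and (NumCats < MaxCats) then
    begin
      Inc(NumCats);                   { new category: claim a slot }
      Names[NumCats] := Current^.Category;
      Totals[NumCats] := 0.0;
      Found := NumCats;
    end;
    if Found > 0 then
      Totals[Found] := Totals[Found] + Current^.Amount;
    Current := Current^.Next;
  end;
  WriteLn('=== Totals by Category ===');
  for i := 1 to NumCats do
    WriteLn('  ', Names[i], ': $', Totals[i]:0:2);
end;
```

The linear search through `Names` makes the whole procedure O(n × c) for c categories, which is perfectly acceptable at this scale; a later chapter's structures would let us do better.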
Design Decisions and Trade-offs
Several design decisions in this checkpoint are worth reflecting on:
Why insert at the head? We insert new expenses at the head of the list, which means the most recent expense appears first when listing. This is the natural "reverse chronological" order that most finance apps use. Inserting at the head is O(1) and does not require a tail pointer.
Why separate undo nodes from expense nodes? TUndoNode and TExpense have identical fields, which might seem redundant. But the undo stack and the expense list have different lifetimes and ownership semantics. When we undo, we dispose both the undo node and the matching expense node — they are independent heap allocations. If we tried to share nodes between the two structures, disposal would become a nightmare of figuring out who "owns" each node.
Why match by description and amount for undo? In a production system, we would give each expense a unique ID and match by that. Matching by description and amount is fragile — what if two expenses have the same description and amount? But for our checkpoint, it is simple and sufficient. Exercise M3 challenges you to add unique IDs.
Why not save the undo stack to disk? Undo history is session-specific. When you quit PennyWise and reopen it, the undo stack starts empty. This is the same behavior as most desktop applications — undo history does not survive across sessions. (Some modern apps do persist undo history, but that requires serializing the stack to a file, which adds complexity beyond our current scope.)
The full implementation is in `code/project-checkpoint.pas`. Study it, compile it with `fpc project-checkpoint.pas`, and experiment with adding, listing, undoing, and viewing category summaries.
✅ Checkpoint criteria: Your PennyWise now supports (a) dynamic expense storage with no fixed limit, (b) multi-level undo, and (c) clean memory management when the program exits.
15.11 Chapter Summary
This chapter introduced the three most fundamental dynamic data structures in computer science, all built on the pointer mechanics we learned in Chapter 14.
Singly-linked lists are chains of nodes connected by Next pointers. They support O(1) insertion at the head, O(n) search, and O(n) deletion by value. A tail pointer makes tail insertion O(1) as well.
Doubly-linked lists add a Prev pointer to each node, enabling bidirectional traversal and simpler deletion when you have a pointer to the target node.
Stacks enforce LIFO access through Push and Pop. They are implemented as linked lists where all operations happen at the head. Stacks power undo systems, expression evaluation, bracket matching, and function call management.
Queues enforce FIFO access through Enqueue and Dequeue. They use head and tail pointers to achieve O(1) at both ends. Queues power scheduling, buffering, and breadth-first search.
We compared linked lists with arrays across access patterns, memory usage, and cache behavior, and established guidelines for choosing between them. We emphasized memory management discipline — every New must have a matching Dispose, and we must free entire structures before they go out of scope.
In Crypts of Pascalia, we replaced the fixed inventory array with a linked list and added an undo stack. In PennyWise, we migrated expenses to a linked list and implemented multi-level undo.
📊 The big picture: Arrays give you speed of access. Linked lists give you flexibility of structure. Stacks and queues give you discipline of access. Understanding when to use each — and being able to implement any of them from scratch — is what separates someone who writes programs from someone who engineers solutions.
Key Terms
| Term | Definition |
|---|---|
| Linked list | A sequence of nodes, each containing data and a pointer to the next node |
| Head | A pointer to the first node of a linked list |
| Tail | A pointer to the last node of a linked list |
| Singly-linked list | A linked list where each node has one pointer (to the next node) |
| Doubly-linked list | A linked list where each node has two pointers (next and previous) |
| Stack | A LIFO data structure supporting push, pop, and peek |
| Queue | A FIFO data structure supporting enqueue, dequeue, and front |
| Circular list | A linked list where the last node points back to the first |
| Deque | A double-ended queue supporting insertion and removal at both ends |
| Memory leak | Memory that was allocated but never freed, becoming unreachable |
| Dangling pointer | A pointer that references memory that has already been freed |
| Cache locality | The tendency for data stored close together in memory to be accessed faster |
What's Next
In Chapter 16, we will build on these foundations to implement trees — hierarchical structures where each node can have multiple children. Trees combine the dynamic flexibility of linked lists with the fast search capabilities of sorted arrays, giving us the best of both worlds through binary search trees.