
Learning Objectives

  • Explain why database programming matters for desktop applications
  • Set up and use SQLite as an embedded database with no server required
  • Use SQLdb components: TSQLite3Connection, TSQLTransaction, TSQLQuery
  • Create tables, insert data, and query results using SQL within Pascal
  • Display database results in a TDBGrid using data-aware controls
  • Use TDBEdit, TDBMemo, TDBNavigator, and TDataSource for data binding
  • Write parameterized queries to prevent SQL injection
  • Connect to PostgreSQL and MySQL databases using SQLdb
  • Apply basic database design principles: normalization, primary keys, foreign keys

Chapter 31: Database Programming — Connecting to SQLite, PostgreSQL, and MySQL

"Data is a precious thing and will last longer than the systems themselves." — Tim Berners-Lee


Through the chapters on data structures and file I/O, PennyWise has stored its expenses in typed files — binary records written sequentially to disk. This worked. But as the application has grown, the limitations have become painful.

Want to find all expenses in the "Food" category from the last three months? You must load every expense into memory, iterate through them, check the category and date, and collect the matches. Want to sort expenses by amount? You must implement a sorting algorithm. Want to ensure no two expenses share the same ID? You must check every existing expense before inserting. Want to change the data format? You must write migration code that reads the old format and writes the new one.

Databases solve all of these problems. A database engine handles storage, retrieval, indexing, sorting, filtering, and integrity constraints. You describe what data you want using SQL (Structured Query Language), and the engine figures out how to get it efficiently. Moving PennyWise from flat files to a database is not just an upgrade — it is an architectural transformation.


31.1 Why Database Programming?

What a Database Gives You

A relational database provides:

  • Structured storage. Data is organized into tables with defined columns and types — similar to Pascal records, but managed by the database engine.
  • SQL queries. Instead of writing loops to search data, you write SELECT * FROM expenses WHERE category = 'Food' AND date > '2025-01-01'. The engine optimizes the query and returns results.
  • Indexing. The engine builds indexes on columns you specify, making searches fast even with millions of rows.
  • Transactions. Multiple operations can be grouped so they either all succeed or all fail — no corrupted half-written files.
  • Integrity constraints. Primary keys prevent duplicates. Foreign keys enforce relationships. NOT NULL prevents missing data. The database enforces your rules automatically.
  • Concurrency. Multiple users or processes can access the same database simultaneously without corrupting data.

The Typed File vs. Database Comparison

Let us compare the two approaches concretely with PennyWise operations:

  • Find expenses by category. Typed files: load all, loop, check each. Database: SELECT * FROM expenses WHERE category = 'Food'
  • Sort by amount. Typed files: load all, implement a sort algorithm. Database: SELECT * FROM expenses ORDER BY amount DESC
  • Total by category. Typed files: load all, iterate with running sums. Database: SELECT category, SUM(amount) FROM expenses GROUP BY category
  • Delete an expense. Typed files: read all to a temp file, skip one, write all back. Database: DELETE FROM expenses WHERE id = 42
  • Prevent duplicate IDs. Typed files: check every record before inserting. Database: the PRIMARY KEY constraint handles it
  • Add a new column. Typed files: write migration code to convert old files. Database: ALTER TABLE expenses ADD COLUMN notes TEXT
  • Backup. Typed files: copy the file. Database: copy the file (SQLite) or use database tools

Every operation that required dozens of lines of Pascal in the typed-file approach becomes a single SQL statement. The database engine handles the complexity.

When to Use a Database

Not every application needs a database. A simple configuration file (Chapter 29's TIniFile) does not need one. A small, read-only dataset can live in a text file. But when your application has:

  • Data that grows over time (transaction logs, expense records)
  • Need for complex queries (filtering, sorting, grouping, aggregation)
  • Multiple tables with relationships (expenses have categories; categories have budgets)
  • Requirements for data integrity (no duplicates, no orphaned records)
  • Potential for multiple simultaneous users

...then a database is the right choice. PennyWise checks all five boxes.


31.2 SQLite: The Embedded Database

SQLite is a self-contained, serverless, zero-configuration database engine. It stores the entire database in a single file. No installation, no server process, no administration. This makes it perfect for desktop applications like PennyWise.

Why SQLite for Desktop Apps

  • Server required: SQLite, no; PostgreSQL/MySQL, yes
  • Installation: SQLite, none (library included); PostgreSQL/MySQL, full server installation
  • Configuration: SQLite, none; PostgreSQL/MySQL, config files, users, permissions
  • Database location: SQLite, a single file; PostgreSQL/MySQL, a server-managed directory
  • Concurrency: SQLite, single writer with multiple readers; PostgreSQL/MySQL, full multi-user
  • Performance: SQLite, excellent for < 1M rows; PostgreSQL/MySQL, excellent at any size
  • Portability: SQLite, copy the file; PostgreSQL/MySQL, export/import or backup tools
  • Size on disk: SQLite, ~600 KB library; PostgreSQL/MySQL, 50+ MB server installation
  • Maintenance: SQLite, none; PostgreSQL/MySQL, regular backups, vacuuming, log rotation

For PennyWise — a single-user desktop application with thousands (not millions) of transactions — SQLite is the ideal choice. The database file can be copied, backed up, or transferred to another computer by simply copying a single file.

SQLite Characteristics

SQLite has a few characteristics that differ from server databases:

Dynamic typing. Unlike PostgreSQL, SQLite does not enforce column types strictly. You can insert a string into an INTEGER column. The declared type is a recommendation, not a constraint. In practice, this rarely matters if you use parameterized queries (which handle type conversion).
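A minimal sketch makes the point concrete (assuming an already-connected Query and Trans, as in the examples later in this chapter):

```pascal
{ SQLite accepts this without error, even though the declared type is INTEGER.
  The declared type is an "affinity" hint, not an enforced constraint. }
Query.SQL.Text := 'CREATE TABLE demo (n INTEGER)';
Query.ExecSQL;
Query.SQL.Text := 'INSERT INTO demo (n) VALUES (''not a number'')';
Query.ExecSQL;  { succeeds in SQLite; would fail in PostgreSQL }
Trans.Commit;
```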

Single-writer concurrency. SQLite allows multiple simultaneous readers but only one writer at a time. If two processes try to write simultaneously, one waits (or fails). For a single-user desktop application, this is a non-issue.

ACID compliance. SQLite is fully ACID-compliant (Atomic, Consistent, Isolated, Durable). Transactions either complete fully or not at all. A crash during a write does not corrupt the database — the journal file ensures recovery.

No user authentication. SQLite has no built-in user accounts or permissions. Anyone who can access the file can read and modify the database. For a personal finance tracker on a personal computer, this is fine. For a shared server database, you would use PostgreSQL or MySQL.

Setting Up SQLite in Lazarus

To use SQLite in Lazarus:

  1. Add the sqlite3laz package to your project's requirements (Project > Project Inspector > Add > New Requirement > sqlite3laz). If you create all the components in code, adding the units in step 3 is sufficient.
  2. Ensure the SQLite3 shared library is available: sqlite3.dll on Windows, libsqlite3.so on Linux, libsqlite3.dylib on macOS. On most systems, this is already installed. On Windows, you may need to download sqlite3.dll from the SQLite website and place it in your project's output directory or in the system PATH.
  3. Add sqlite3conn, sqldb, db to your unit's uses clause.

To verify that SQLite is working, try a simple connection:

uses sqlite3conn, sqldb, db;

procedure TestSQLiteConnection;
var
  Conn: TSQLite3Connection;
  Trans: TSQLTransaction;
  Query: TSQLQuery;
begin
  Conn := TSQLite3Connection.Create(nil);
  Trans := TSQLTransaction.Create(nil);
  Query := TSQLQuery.Create(nil);
  try
    Trans.Database := Conn;
    Conn.Transaction := Trans;
    Conn.DatabaseName := 'test.db';
    Conn.Open;
    Query.Database := Conn;
    Query.Transaction := Trans;
    Query.SQL.Text := 'SELECT sqlite_version() AS version';
    Query.Open;
    WriteLn('SQLite connected. Version: ',
      Query.FieldByName('version').AsString);
    Query.Close;
    Conn.Close;
  finally
    Query.Free;
    Trans.Free;
    Conn.Free;
  end;
end;

If this runs without errors, your SQLite setup is correct.


31.3 SQLdb Components in Lazarus

Lazarus provides the SQLdb framework — a set of components for database access. The key components form a chain:

TSQLite3Connection ──> TSQLTransaction ──> TSQLQuery ──> TDataSource ──> TDBGrid
    (database)             (transaction)      (query)       (bridge)      (display)

Each component has a specific role, and they must be connected in the correct order. Let us examine each one.

TSQLite3Connection

The connection component links your application to a SQLite database file:

var
  Conn: TSQLite3Connection;
begin
  Conn := TSQLite3Connection.Create(nil);
  Conn.DatabaseName := 'pennywise.db';  { file path }
  Conn.Open;
  { ... use the database ... }
  Conn.Close;
  Conn.Free;
end;

If the file does not exist, SQLite creates it automatically when you open the connection. This means your application can self-initialize — no separate database setup step required.

Key TSQLite3Connection properties:

  • DatabaseName (string): path to the SQLite database file
  • Connected (Boolean): True if the connection is open (read-only)
  • Transaction (TSQLTransaction): the default transaction
  • CharSet (string): character encoding (default UTF-8)

TSQLTransaction

Every database operation happens within a transaction. The transaction ensures that either all your changes are applied (committed) or none are (rolled back):

var
  Trans: TSQLTransaction;
begin
  Trans := TSQLTransaction.Create(nil);
  Trans.Database := Conn;
  Trans.StartTransaction;
  try
    { ... perform operations ... }
    Trans.Commit;
  except
    Trans.Rollback;
    raise;
  end;
end;

Why transactions matter: imagine PennyWise needs to transfer an expense from one category to another. This involves two operations: update the old category's total and update the new category's total. If the application crashes between the two updates, the totals would be inconsistent. With a transaction, both updates happen atomically — either both succeed or both are rolled back.

In SQLdb, transactions are started automatically when you execute a query. You typically need to call Commit explicitly after write operations (INSERT, UPDATE, DELETE) to persist the changes.

Error Handling with Database Operations

Database operations can fail for many reasons: the file might be locked by another process, the disk might be full, a constraint might be violated, or the SQL might contain a syntax error. Always wrap database operations in try-except blocks:

procedure TfrmMain.SafeInsertExpense(const ADate, ADesc: string;
  AAmount: Double);
begin
  try
    InsertExpense(FConn, FTrans, ADate, ADesc, 'Food', AAmount, '');
    StatusBar1.Panels[0].Text := 'Expense added successfully';
  except
    on E: EDatabaseError do
    begin
      MessageDlg('Database Error',
        'Could not save the expense: ' + E.Message,
        mtError, [mbOK], 0);
      if FTrans.Active then
        FTrans.Rollback;
    end;
    on E: Exception do
    begin
      MessageDlg('Unexpected Error',
        'An error occurred: ' + E.Message,
        mtError, [mbOK], 0);
      if FTrans.Active then
        FTrans.Rollback;
    end;
  end;
end;

The EDatabaseError exception class is the base for all SQLdb errors. Catching it specifically lets you provide database-related error messages while handling other exceptions differently.

Common database errors include: - Constraint violation — inserting a duplicate primary key or a NULL into a NOT NULL column. - Foreign key violation — inserting a category_id that does not exist in the categories table. - Locked database — another process has a write lock on the SQLite file. - Syntax error — malformed SQL statement.

Always rollback the transaction after an error. An uncommitted transaction with errors will block further operations until it is either committed or rolled back.

TSQLQuery

The query component executes SQL statements and holds the results:

var
  Query: TSQLQuery;
begin
  Query := TSQLQuery.Create(nil);
  try
    Query.Database := Conn;
    Query.Transaction := Trans;

    { Execute a SELECT and read results }
    Query.SQL.Text := 'SELECT * FROM expenses ORDER BY date DESC';
    Query.Open;
    while not Query.EOF do
    begin
      WriteLn(Query.FieldByName('description').AsString, ': $',
              Query.FieldByName('amount').AsFloat:0:2);
      Query.Next;
    end;
    Query.Close;

    { Execute an INSERT/UPDATE/DELETE }
    Query.SQL.Text := 'INSERT INTO expenses (date, description, category, amount) ' +
                      'VALUES (:date, :desc, :cat, :amt)';
    Query.Params.ParamByName('date').AsString := '2025-03-15';
    Query.Params.ParamByName('desc').AsString := 'Groceries';
    Query.Params.ParamByName('cat').AsString := 'Food';
    Query.Params.ParamByName('amt').AsFloat := 45.99;
    Query.ExecSQL;
    Trans.Commit;
  finally
    Query.Free;
  end;
end;

After calling Query.Open, the query behaves like a virtual table. The navigation methods mirror a cursor moving through rows:

  • First: move to the first row
  • Last: move to the last row
  • Next: move to the next row
  • Prior: move to the previous row
  • MoveBy(N): move N rows forward (or backward if N is negative)
  • EOF: True if past the last row
  • BOF: True if before the first row
  • RecordCount: total number of rows (may not be available for all queries)
  • IsEmpty: True if the query returned no rows
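Because RecordCount is not guaranteed to be accurate for every query, a reliable way to count rows is to ask the database directly. A sketch, using the same Query as above:

```pascal
{ Count rows with an aggregate query instead of relying on RecordCount }
Query.SQL.Text := 'SELECT COUNT(*) AS cnt FROM expenses';
Query.Open;
WriteLn('Expenses on file: ', Query.FieldByName('cnt').AsInteger);
Query.Close;
```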

Reading Field Values

Each column in the result set is accessed via FieldByName or Fields[index]:

{ By name (recommended — more readable, survives column reordering) }
Description := Query.FieldByName('description').AsString;
Amount := Query.FieldByName('amount').AsFloat;
ExpenseDate := Query.FieldByName('date').AsDateTime;
ID := Query.FieldByName('id').AsInteger;

{ By index (faster but fragile — breaks if columns are reordered) }
Description := Query.Fields[2].AsString;  { in this table: 0 = id, 1 = date, 2 = description }

{ Check for NULL values }
if Query.FieldByName('notes').IsNull then
  Notes := ''
else
  Notes := Query.FieldByName('notes').AsString;

💡 Intuition: The Query as a Virtual Table
After calling Query.Open, the query behaves like a virtual table. You can move through rows with First, Last, Next, Prior, and MoveBy. You can read field values with FieldByName('column').AsString, .AsInteger, .AsFloat, .AsDateTime. When you are done, call Close. For statements that do not return data (INSERT, UPDATE, DELETE, CREATE TABLE), use ExecSQL instead of Open.


31.4 Creating Tables and Inserting Data

Creating the Expenses Table

procedure CreateExpensesTable(Conn: TSQLite3Connection; Trans: TSQLTransaction);
var
  Query: TSQLQuery;
begin
  Query := TSQLQuery.Create(nil);
  try
    Query.Database := Conn;
    Query.Transaction := Trans;
    Query.SQL.Text :=
      'CREATE TABLE IF NOT EXISTS expenses (' +
      '  id INTEGER PRIMARY KEY AUTOINCREMENT,' +
      '  date TEXT NOT NULL,' +
      '  description TEXT NOT NULL,' +
      '  category TEXT NOT NULL,' +
      '  amount REAL NOT NULL,' +
      '  notes TEXT DEFAULT '''',' +
      '  created_at TEXT DEFAULT (datetime(''now''))' +
      ')';
    Query.ExecSQL;
    Trans.Commit;
  finally
    Query.Free;
  end;
end;

Key points: - INTEGER PRIMARY KEY AUTOINCREMENT creates a unique auto-incrementing ID. Every row gets a unique number automatically. - NOT NULL prevents null values — every expense must have a date, description, category, and amount. - DEFAULT provides a default value when one is not specified. DEFAULT (datetime('now')) uses a SQLite function to insert the current timestamp. - CREATE TABLE IF NOT EXISTS avoids errors when the table already exists. This is idempotent — you can call it every time the application starts.

Creating the Categories Table

Query.SQL.Text :=
  'CREATE TABLE IF NOT EXISTS categories (' +
  '  id INTEGER PRIMARY KEY AUTOINCREMENT,' +
  '  name TEXT NOT NULL UNIQUE,' +
  '  budget_limit REAL DEFAULT 0,' +
  '  color TEXT DEFAULT ''#808080''' +
  ')';
Query.ExecSQL;

The UNIQUE constraint on name ensures no two categories have the same name. If you try to insert a duplicate, the database raises an error.
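Attempting to violate the constraint raises an exception that you can catch. A sketch, assuming the Query and Trans variables from the earlier examples:

```pascal
Query.SQL.Text := 'INSERT INTO categories (name) VALUES (:name)';
Query.Params.ParamByName('name').AsString := 'Food & Dining';
try
  Query.ExecSQL;
  Trans.Commit;
except
  on E: EDatabaseError do
  begin
    Trans.Rollback;  { clear the failed transaction so later operations can proceed }
    WriteLn('Category already exists: ', E.Message);
  end;
end;
```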

Creating Indexes

Indexes speed up queries on specific columns. Without an index, finding all expenses for a category requires scanning every row (a "full table scan"). With an index, the database can jump directly to matching rows:

Query.SQL.Text :=
  'CREATE INDEX IF NOT EXISTS idx_expenses_category ON expenses(category)';
Query.ExecSQL;

Query.SQL.Text :=
  'CREATE INDEX IF NOT EXISTS idx_expenses_date ON expenses(date)';
Query.ExecSQL;

Index the columns you frequently search, filter, or sort by. Do not index every column — indexes consume storage and slow down INSERT operations.

Seeding Default Data

When PennyWise runs for the first time, the categories table is empty. Seed it with default categories:

procedure SeedDefaultCategories(Conn: TSQLite3Connection;
  Trans: TSQLTransaction);
var
  Query: TSQLQuery;
begin
  Query := TSQLQuery.Create(nil);
  try
    Query.Database := Conn;
    Query.Transaction := Trans;
    Query.SQL.Text :=
      'INSERT OR IGNORE INTO categories (name, budget_limit) VALUES (:name, :budget)';

    Query.Params.ParamByName('name').AsString := 'Food & Dining';
    Query.Params.ParamByName('budget').AsFloat := 500;
    Query.ExecSQL;

    Query.Params.ParamByName('name').AsString := 'Transportation';
    Query.Params.ParamByName('budget').AsFloat := 200;
    Query.ExecSQL;

    Query.Params.ParamByName('name').AsString := 'Housing';
    Query.Params.ParamByName('budget').AsFloat := 1500;
    Query.ExecSQL;

    Query.Params.ParamByName('name').AsString := 'Utilities';
    Query.Params.ParamByName('budget').AsFloat := 300;
    Query.ExecSQL;

    Query.Params.ParamByName('name').AsString := 'Entertainment';
    Query.Params.ParamByName('budget').AsFloat := 150;
    Query.ExecSQL;

    Query.Params.ParamByName('name').AsString := 'Health';
    Query.Params.ParamByName('budget').AsFloat := 200;
    Query.ExecSQL;

    Query.Params.ParamByName('name').AsString := 'Education';
    Query.Params.ParamByName('budget').AsFloat := 100;
    Query.ExecSQL;

    Query.Params.ParamByName('name').AsString := 'Other';
    Query.Params.ParamByName('budget').AsFloat := 100;
    Query.ExecSQL;

    Trans.Commit;
  finally
    Query.Free;
  end;
end;

INSERT OR IGNORE skips the insert if the row already exists (due to the UNIQUE constraint on name). This makes the seeding idempotent — it runs safely on both first launch and subsequent launches.
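The repetitive parameter blocks above can be collapsed into a loop over parallel constant arrays; a sketch of the same seeding logic:

```pascal
const
  CatNames: array[0..7] of string = (
    'Food & Dining', 'Transportation', 'Housing', 'Utilities',
    'Entertainment', 'Health', 'Education', 'Other');
  CatBudgets: array[0..7] of Double = (500, 200, 1500, 300, 150, 200, 100, 100);
var
  I: Integer;
begin
  Query.SQL.Text :=
    'INSERT OR IGNORE INTO categories (name, budget_limit) VALUES (:name, :budget)';
  for I := 0 to High(CatNames) do
  begin
    Query.Params.ParamByName('name').AsString := CatNames[I];
    Query.Params.ParamByName('budget').AsFloat := CatBudgets[I];
    Query.ExecSQL;
  end;
  Trans.Commit;
end;
```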

Inserting Data with Parameters

procedure InsertExpense(Conn: TSQLite3Connection; Trans: TSQLTransaction;
  const ADate, ADesc, ACategory: string; AAmount: Double;
  const ANotes: string);
var
  Query: TSQLQuery;
begin
  Query := TSQLQuery.Create(nil);
  try
    Query.Database := Conn;
    Query.Transaction := Trans;
    Query.SQL.Text :=
      'INSERT INTO expenses (date, description, category, amount, notes) ' +
      'VALUES (:date, :desc, :cat, :amt, :notes)';
    Query.Params.ParamByName('date').AsString := ADate;
    Query.Params.ParamByName('desc').AsString := ADesc;
    Query.Params.ParamByName('cat').AsString := ACategory;
    Query.Params.ParamByName('amt').AsFloat := AAmount;
    Query.Params.ParamByName('notes').AsString := ANotes;
    Query.ExecSQL;
    Trans.Commit;
  finally
    Query.Free;
  end;
end;

Notice the :date, :desc, :cat, :amt, :notes parameters. We will discuss why these are essential in section 31.7.


31.5 Querying and Displaying Data

Basic Queries

{ All expenses, newest first }
Query.SQL.Text := 'SELECT * FROM expenses ORDER BY date DESC';

{ Total by category }
Query.SQL.Text :=
  'SELECT category, SUM(amount) as total, COUNT(*) as count ' +
  'FROM expenses GROUP BY category ORDER BY total DESC';

{ Expenses in a date range }
Query.SQL.Text :=
  'SELECT * FROM expenses WHERE date BETWEEN :start AND :end';
Query.Params.ParamByName('start').AsString := '2025-01-01';
Query.Params.ParamByName('end').AsString := '2025-03-31';

{ Search by description }
Query.SQL.Text :=
  'SELECT * FROM expenses WHERE description LIKE :search';
Query.Params.ParamByName('search').AsString := '%grocery%';

{ Monthly totals for the current year }
Query.SQL.Text :=
  'SELECT strftime(''%m'', date) as month, SUM(amount) as total ' +
  'FROM expenses WHERE strftime(''%Y'', date) = :year ' +
  'GROUP BY strftime(''%m'', date) ORDER BY month';
Query.Params.ParamByName('year').AsString := '2025';

{ Top 5 most expensive categories }
Query.SQL.Text :=
  'SELECT category, SUM(amount) as total FROM expenses ' +
  'GROUP BY category ORDER BY total DESC LIMIT 5';

SQL Concepts: Aggregation, Grouping, and Subqueries

SQL provides powerful aggregation functions that replace dozens of lines of Pascal loop code:

  • SUM(column): total of all values. Example: SUM(amount) gives total spending
  • COUNT(*): number of rows. Example: COUNT(*) gives the number of expenses
  • AVG(column): average value. Example: AVG(amount) gives the average expense
  • MIN(column): smallest value. Example: MIN(amount) gives the cheapest expense
  • MAX(column): largest value. Example: MAX(amount) gives the most expensive expense

The GROUP BY clause splits the data into groups before applying the aggregation. Without GROUP BY, the aggregation applies to all rows. With GROUP BY category, you get one result row per category.

The HAVING clause filters groups after aggregation (whereas WHERE filters rows before aggregation):

{ Categories where total spending exceeds $500 }
Query.SQL.Text :=
  'SELECT category, SUM(amount) as total FROM expenses ' +
  'GROUP BY category HAVING total > 500';

Subqueries let you use the result of one query inside another:

{ Expenses that are above the average amount }
Query.SQL.Text :=
  'SELECT * FROM expenses WHERE amount > (SELECT AVG(amount) FROM expenses)';

{ The most recent expense in each category }
Query.SQL.Text :=
  'SELECT * FROM expenses e1 WHERE date = ' +
  '(SELECT MAX(date) FROM expenses e2 WHERE e2.category = e1.category)';

These constructs let you express complex questions in a single SQL statement that the database optimizes and executes efficiently. The equivalent Pascal code — loading all data, iterating, filtering, grouping, and aggregating — would be dozens of lines.

Updating and Deleting Data

{ Update an expense }
Query.SQL.Text :=
  'UPDATE expenses SET description = :desc, amount = :amt ' +
  'WHERE id = :id';
Query.Params.ParamByName('desc').AsString := 'Updated description';
Query.Params.ParamByName('amt').AsFloat := 55.00;
Query.Params.ParamByName('id').AsInteger := 42;
Query.ExecSQL;
Trans.Commit;

{ Delete an expense }
Query.SQL.Text := 'DELETE FROM expenses WHERE id = :id';
Query.Params.ParamByName('id').AsInteger := 42;
Query.ExecSQL;
Trans.Commit;

Connecting a Query to a TDBGrid

The power of data-aware controls: connect a TSQLQuery to a TDataSource to a TDBGrid, and the grid displays the query results automatically — with scrolling, column headers, and sorting.

{ In FormCreate or a setup method: }
DataSource1.DataSet := SQLQuery1;

DBGrid1.DataSource := DataSource1;
DBGrid1.Columns.Clear;
with DBGrid1.Columns.Add do
begin
  FieldName := 'date';
  Title.Caption := 'Date';
  Width := 100;
end;
with DBGrid1.Columns.Add do
begin
  FieldName := 'description';
  Title.Caption := 'Description';
  Width := 200;
end;
with DBGrid1.Columns.Add do
begin
  FieldName := 'category';
  Title.Caption := 'Category';
  Width := 120;
end;
with DBGrid1.Columns.Add do
begin
  FieldName := 'amount';
  Title.Caption := 'Amount';
  Width := 80;
  Alignment := taRightJustify;
end;

{ Open the query — the grid populates automatically }
SQLQuery1.SQL.Text := 'SELECT * FROM expenses ORDER BY date DESC';
SQLQuery1.Open;

When you scroll through the grid, the DataSource notifies all connected controls. When you edit a cell, the DataSource sends the changes back to the query. The grid, the query, and the database stay synchronized.

Refreshing the Display

After inserting, updating, or deleting records through a separate query, call Refresh on the main query to update the grid:

procedure TfrmMain.RefreshGrid;
begin
  FQuery.Close;
  FQuery.Open;  { re-executes the SQL and refreshes the grid }
end;

Or more simply:

FQuery.Refresh;  { re-fetches data without closing the query }

📊 Analogy: The Data Pipeline
Think of the database as a water treatment plant, the query as a pipe that selects and filters water, the DataSource as a junction, and the DBGrid as a faucet. The water (data) flows from the plant through the pipe, through the junction, and out the faucet. Multiple faucets (controls) can connect to the same junction. Change the pipe (query), and different water flows through.

Formatting Data in the Grid

The TDBGrid displays raw field values by default. To format dates, numbers, or other fields, use the column's DisplayFormat property or handle the OnGetText event of the underlying field:

{ Set display format for the amount column. DisplayFormat is declared on the
  numeric field classes, so cast the TField returned by FieldByName: }
(FQuery.FieldByName('amount') as TFloatField).DisplayFormat := '#,##0.00';

{ Or handle formatting programmatically }
procedure TfrmMain.QueryAmountGetText(Sender: TField; var aText: string;
  DisplayText: Boolean);
begin
  if not Sender.IsNull then
    aText := Format('$%s', [FormatFloat('#,##0.00', Sender.AsFloat)])
  else
    aText := '';
end;

Assign this handler in FormCreate:

FQuery.FieldByName('amount').OnGetText := @QueryAmountGetText;

Now the grid displays amounts as "$1,234.56" instead of "1234.56".

Performance Considerations

For a personal finance tracker with a few thousand expenses, performance is not a concern — SQLite can handle millions of rows. But good habits formed now will serve you in larger applications:

Use indexes wisely. Index columns that appear in WHERE, ORDER BY, and JOIN clauses. For PennyWise, indexing date and category_id covers the most common queries. Do not index columns you rarely search by — indexes consume disk space and slow down inserts.

Batch operations in transactions. If you need to insert 100 expenses (e.g., importing from a CSV file), wrap all inserts in a single transaction. Without a transaction, each INSERT commits individually, which involves writing to disk 100 times. With a transaction, all 100 inserts commit at once — one disk write:

Trans.StartTransaction;
try
  Query.SQL.Text :=
    'INSERT INTO expenses (date, description, category, amount) ' +
    'VALUES (:date, :desc, :cat, :amt)';
  for I := 0 to High(ImportedExpenses) do
  begin
    Query.Params.ParamByName('date').AsString := ImportedExpenses[I].Date;
    Query.Params.ParamByName('desc').AsString := ImportedExpenses[I].Description;
    Query.Params.ParamByName('cat').AsString := ImportedExpenses[I].Category;
    Query.Params.ParamByName('amt').AsFloat := ImportedExpenses[I].Amount;
    Query.ExecSQL;  { no per-row commit }
  end;
  Trans.Commit;  { one commit for all 100 inserts }
except
  Trans.Rollback;
  raise;
end;

This can be 10-100 times faster than individual commits.

Limit query results. If you only need the most recent 100 expenses, add LIMIT 100 to your query. Loading thousands of rows into a grid that can only display 20 at a time wastes memory and time.

Avoid SELECT *. Select only the columns you need. SELECT id, date, description, amount is more efficient than SELECT * when you do not need all columns.


31.6 Data-Aware Controls

Data-aware controls bind directly to a database field through a DataSource. When the current record changes, the control updates automatically. When the user edits the control, the database record updates automatically.

TDBGrid

We have already seen TDBGrid — it displays query results in a spreadsheet-like grid. Additional useful properties:

DBGrid1.ReadOnly := True;          { prevent direct editing }
DBGrid1.Options := DBGrid1.Options
  + [dgRowHighlight]               { highlight current row }
  + [dgRowSelect]                  { select entire rows }
  - [dgEditing];                   { disable cell editing }
DBGrid1.AlternateColor := $F0F0F0; { alternating row colors }

TDBEdit

A text field bound to a database field:

DBEditDescription.DataSource := DataSource1;
DBEditDescription.DataField := 'description';
{ When the user types, the 'description' field of the current record is updated }

TDBMemo

A multi-line text area bound to a text or blob field:

DBMemoNotes.DataSource := DataSource1;
DBMemoNotes.DataField := 'notes';

TDBComboBox

A dropdown list bound to a database field:

DBComboBoxCategory.DataSource := DataSource1;
DBComboBoxCategory.DataField := 'category';
DBComboBoxCategory.Items.Add('Food & Dining');
DBComboBoxCategory.Items.Add('Transportation');
{ ... etc. }
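Rather than hard-coding the list, you can fill Items from the categories table at startup. A sketch, assuming a second query component (here called CatQuery) on the same connection:

```pascal
{ Populate the combo box choices from the categories table }
CatQuery.SQL.Text := 'SELECT name FROM categories ORDER BY name';
CatQuery.Open;
DBComboBoxCategory.Items.Clear;
while not CatQuery.EOF do
begin
  DBComboBoxCategory.Items.Add(CatQuery.FieldByName('name').AsString);
  CatQuery.Next;
end;
CatQuery.Close;
```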

TDBNavigator

A toolbar with buttons for navigating and editing records: First, Prior, Next, Last, Insert, Delete, Edit, Post, Cancel, Refresh.

DBNavigator1.DataSource := DataSource1;
DBNavigator1.VisibleButtons := [nbFirst, nbPrior, nbNext, nbLast,
                                 nbInsert, nbDelete, nbEdit, nbPost,
                                 nbCancel, nbRefresh];

Drop it on the form, point it at a DataSource, and you have complete CRUD (Create, Read, Update, Delete) navigation. The Delete button prompts for confirmation. The Post button saves changes. The Cancel button reverts changes.

You can customize which buttons are visible using the VisibleButtons property. For a read-only grid, hide Insert, Delete, Edit, Post, and Cancel.
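For example, a browse-only navigator might show just the movement and refresh buttons:

```pascal
{ Navigation only — no editing buttons }
DBNavigator1.VisibleButtons := [nbFirst, nbPrior, nbNext, nbLast, nbRefresh];
```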

TDBText and TDBImage

TDBText displays a database field as read-only text (like a TLabel but data-aware):

DBText1.DataSource := DataSource1;
DBText1.DataField := 'amount';

TDBImage displays a BLOB field as an image — useful for storing photos, receipts, or icons in the database.

TDataSource

The bridge between a dataset (TSQLQuery) and the visual controls:

DataSource1.DataSet := SQLQuery1;
DataSource1.AutoEdit := True;  { controls can edit without explicitly calling Edit }

The DataSource also fires events that let you respond to data changes:

procedure TfrmMain.DataSource1DataChange(Sender: TObject; Field: TField);
begin
  { Called whenever the current record changes (navigation or editing) }
  if Field = nil then
    { Record changed (navigation) }
    UpdateStatusBar
  else
    { A specific field was edited }
    MarkModified;
end;

⚠️ Caution: AutoEdit
With AutoEdit := True, clicking in a DBEdit and typing immediately modifies the record. This is convenient but can lead to accidental changes. For forms where you want explicit "Edit" and "Save" buttons, set AutoEdit := False and call SQLQuery1.Edit and SQLQuery1.Post explicitly.

A Complete Data-Aware Form

Here is a complete example of a form that displays and edits expenses using only data-aware controls:

{ Form layout:
  - DBNavigator at top
  - DBGrid on the left (list)
  - Panel on the right (detail view):
    - DBEdit for description
    - DBComboBox for category
    - DBEdit for amount
    - DBMemo for notes }

procedure TfrmMain.FormCreate(Sender: TObject);
begin
  FConn := TSQLite3Connection.Create(Self);
  FConn.DatabaseName := 'pennywise.db';

  FTrans := TSQLTransaction.Create(Self);
  FTrans.Database := FConn;
  FConn.Transaction := FTrans;
  FConn.Open;

  FQuery := TSQLQuery.Create(Self);
  FQuery.Database := FConn;
  FQuery.Transaction := FTrans;
  FQuery.SQL.Text := 'SELECT * FROM expenses ORDER BY date DESC';

  FDataSource := TDataSource.Create(Self);
  FDataSource.DataSet := FQuery;
  FDataSource.OnDataChange := @DataSourceDataChange;

  DBGrid1.DataSource := FDataSource;
  DBNavigator1.DataSource := FDataSource;
  DBEditDescription.DataSource := FDataSource;
  DBEditDescription.DataField := 'description';
  DBEditAmount.DataSource := FDataSource;
  DBEditAmount.DataField := 'amount';
  DBMemoNotes.DataSource := FDataSource;
  DBMemoNotes.DataField := 'notes';

  FQuery.Open;
end;

When the user clicks a row in the grid, all the detail controls update automatically. When the user edits a detail control, the grid reflects the change. When the user clicks "Post" on the navigator, the change is written to the database. No manual data transfer code. No manual refresh code. The data-aware controls handle everything.
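One caveat worth knowing: TSQLQuery caches edits locally, so a posted change is not on disk until the cached updates are applied and the transaction is committed. A common pattern (a sketch, using the field names from the example above) is to do this in the dataset's AfterPost event:

```pascal
procedure TfrmMain.FQueryAfterPost(DataSet: TDataSet);
begin
  FQuery.ApplyUpdates;     { push cached INSERT/UPDATE/DELETE statements to SQLite }
  FTrans.CommitRetaining;  { commit without closing the open dataset }
end;

{ Hook it up in FormCreate, before FQuery.Open:
    FQuery.AfterPost := @FQueryAfterPost; }
```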


31.7 Parameterized Queries

This section may save you from a security catastrophe.

The SQL Injection Problem

Suppose you build a search query like this:

{ DANGEROUS — DO NOT DO THIS }
Query.SQL.Text := 'SELECT * FROM expenses WHERE description LIKE ''%' +
                  edtSearch.Text + '%''';

If the user types %'; DROP TABLE expenses; -- into the search box, the resulting SQL becomes:

SELECT * FROM expenses WHERE description LIKE '%%'; DROP TABLE expenses; --%'

The database executes two statements: the harmless SELECT, and then DROP TABLE expenses, which destroys all your data. This is SQL injection — one of the most common and dangerous vulnerabilities in software.

Even in a single-user desktop application, SQL injection is a threat. A user might paste malicious text from the clipboard, or another program might provide input through a command-line argument or file import.

The Solution: Parameters

Always use parameterized queries for user-supplied values:

{ SAFE — parameters prevent injection }
Query.SQL.Text := 'SELECT * FROM expenses WHERE description LIKE :search';
Query.Params.ParamByName('search').AsString := '%' + edtSearch.Text + '%';
Query.Open;

The :search parameter is a placeholder. The database engine treats the parameter value as data, not as SQL code. Even if the user types '; DROP TABLE expenses; --, it is treated as a literal string to search for, not as SQL to execute.

How Parameters Work Internally

When you use parameters, the database processes your query in two steps:

  1. Preparation. The database parses and compiles the SQL statement with placeholders. It builds a query plan — how to access the data, which indexes to use.
  2. Execution. The database binds the parameter values (as raw data, not SQL text) and executes the pre-compiled plan.

Because the SQL is compiled before the parameter values are supplied, there is no way for parameter values to change the structure of the query. The DROP TABLE attack becomes a harmless search for the literal string '; DROP TABLE expenses; --.
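
This split also enables reuse: prepare once, then bind and execute many times. A hedged sketch of a batch insert, assuming an Items array with the fields shown:

```pascal
Query.SQL.Text :=
  'INSERT INTO expenses (date, description, category_id, amount) ' +
  'VALUES (:date, :desc, :catid, :amt)';
Query.Prepare;  { parse and compile the statement once }
for I := 0 to High(Items) do
begin
  { each iteration only binds values and runs the compiled plan }
  Query.Params.ParamByName('date').AsString := Items[I].DateStr;
  Query.Params.ParamByName('desc').AsString := Items[I].Description;
  Query.Params.ParamByName('catid').AsInteger := Items[I].CategoryID;
  Query.Params.ParamByName('amt').AsFloat := Items[I].Amount;
  Query.ExecSQL;
end;
Query.UnPrepare;
Trans.Commit;
```

For large imports this is noticeably faster than rebuilding the SQL each time, because the parse-and-plan step runs only once.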

Parameter Types

Parameters are typed. Use the appropriate type for each value:

Query.Params.ParamByName('id').AsInteger := 42;
Query.Params.ParamByName('amount').AsFloat := 99.95;
Query.Params.ParamByName('name').AsString := 'Groceries';
Query.Params.ParamByName('date').AsDateTime := Date;
Query.Params.ParamByName('active').AsBoolean := True;

Using the correct type ensures proper formatting and comparison. Comparing a date parameter as a string might not sort correctly; comparing it as a DateTime ensures correct date ordering.

Rules for Safe SQL

  1. Never concatenate user input into SQL strings. Not once. Not ever.
  2. Always use :paramName placeholders and Params.ParamByName.
  3. This applies to INSERT, UPDATE, DELETE, and SELECT. All SQL statements.
  4. There are no exceptions to this rule. Even if you "know" the input is safe, use parameters. Even for internal values, use parameters. The habit must be absolute.

📊 The XKCD Rule The webcomic XKCD has a famous panel (#327) showing a mother who named her son "Robert'); DROP TABLE Students;--" — so that any school database that concatenates student names into SQL will destroy its student records. The joke is funny because the vulnerability is real and devastating. Use parameters.


31.8 Database Design Basics

Good database design makes queries simple and data integrity automatic. Poor design leads to redundancy, inconsistency, and complex workaround code.

Tables and Columns

A table is like a Pascal record type — it defines a structure. Each row is like an instance of that record. Each column is like a field.

SQL Type   Pascal Equivalent   Example
--------   -----------------   ----------------
INTEGER    Integer, Int64      id INTEGER
REAL       Double              amount REAL
TEXT       String              description TEXT
BLOB       array of Byte       photo BLOB

Primary Keys

Every table should have a primary key — a column (or set of columns) that uniquely identifies each row. In SQLite, INTEGER PRIMARY KEY AUTOINCREMENT creates an auto-incrementing ID.

Why is this important? Without a primary key, you cannot reliably identify a specific row. "Delete the expense for $45.99 on March 15" might match multiple rows. "Delete the expense with ID 42" matches exactly one.

Foreign Keys

A foreign key creates a relationship between tables. The expenses table references the categories table:

CREATE TABLE expenses (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  date TEXT NOT NULL,
  description TEXT NOT NULL,
  category_id INTEGER NOT NULL,
  amount REAL NOT NULL,
  FOREIGN KEY (category_id) REFERENCES categories(id)
);

Now each expense links to a category by ID. If you try to insert an expense with a category_id that does not exist in the categories table, the database rejects it. If you try to delete a category that is referenced by expenses, the database rejects that too (unless you specify ON DELETE CASCADE or ON DELETE SET NULL).

To enable foreign key enforcement in SQLite (it is disabled by default), execute this after opening the connection:

FConn.ExecuteDirect('PRAGMA foreign_keys = ON');
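
With enforcement on, a violating statement raises an exception on the Pascal side that you can catch and roll back. A sketch, assuming a connected Query and Trans as in the earlier examples:

```pascal
try
  Query.SQL.Text :=
    'INSERT INTO expenses (date, description, category_id, amount) ' +
    'VALUES (:d, :desc, :cat, :amt)';
  Query.Params.ParamByName('cat').AsInteger := 9999;  { no such category }
  { ...set the remaining parameters... }
  Query.ExecSQL;
  Trans.Commit;
except
  on E: ESQLDatabaseError do
  begin
    Trans.Rollback;  { undo any partial work }
    ShowMessage('Insert rejected: ' + E.Message);
  end;
end;
```

ESQLDatabaseError comes from the sqldb unit; catching it at the point of the write lets you keep the application running with the database in a consistent state.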

JOINs

With foreign keys, data is split across tables. To display expense data with category names, you need a JOIN:

SELECT e.id, e.date, e.description, c.name as category,
       e.amount, e.notes
FROM expenses e
JOIN categories c ON e.category_id = c.id
ORDER BY e.date DESC

The JOIN clause combines rows from expenses and categories where category_id matches id. The result includes columns from both tables.
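
Reading a joined result from Pascal works like any single-table query; the aliased category column behaves like an ordinary field. A sketch, assuming a connected Query:

```pascal
Query.SQL.Text :=
  'SELECT e.id, e.description, c.name AS category, e.amount ' +
  'FROM expenses e JOIN categories c ON e.category_id = c.id ' +
  'ORDER BY e.date DESC';
Query.Open;
while not Query.EOF do
begin
  WriteLn(Query.FieldByName('description').AsString,
    ' [', Query.FieldByName('category').AsString, '] ',
    Query.FieldByName('amount').AsFloat:0:2);
  Query.Next;
end;
Query.Close;
```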

Normalization

Normalization is the process of organizing tables to reduce redundancy. Instead of storing the category name in every expense row (which wastes space and creates consistency problems if a category is renamed), we store the category in a separate table and reference it by ID.

First Normal Form (1NF): Every column holds a single value (no lists, no repeating groups). Instead of a column tags containing "food,groceries,weekly", create a separate expense_tags table.
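
A minimal sketch of such a tag table (names are illustrative): each tag becomes its own row, and the composite primary key prevents duplicate tags on one expense.

```sql
CREATE TABLE expense_tags (
  expense_id INTEGER NOT NULL,
  tag        TEXT NOT NULL,
  PRIMARY KEY (expense_id, tag),
  FOREIGN KEY (expense_id) REFERENCES expenses(id)
);
```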

Second Normal Form (2NF): Every non-key column depends on the entire primary key. If a table has a composite primary key (two columns), every other column must depend on both columns, not just one.

Third Normal Form (3NF): Every non-key column depends only on the primary key, not on other non-key columns. If the expenses table had both category_name and category_budget, the budget depends on the category, not on the expense. Move category data to a separate table.

For PennyWise, 3NF means: expenses table has category_id (not the category name), categories table has the category name and budget limit, and there is no redundant data. When Rosa renames "Food & Dining" to "Food & Groceries", she updates one row in the categories table, and every expense automatically reflects the change.

Schema Migrations

As your application evolves, you will need to change the database schema — adding new columns, renaming tables, or creating new relationships. SQLite supports ALTER TABLE for adding columns:

{ Check if a column exists before adding it }
procedure MigrateToVersion2(Conn: TSQLite3Connection; Trans: TSQLTransaction);
var
  Query: TSQLQuery;
begin
  Query := TSQLQuery.Create(nil);
  try
    Query.Database := Conn;
    Query.Transaction := Trans;

    { Check current schema version }
    Query.SQL.Text :=
      'CREATE TABLE IF NOT EXISTS schema_info (' +
      '  key TEXT PRIMARY KEY, value TEXT)';
    Query.ExecSQL;
    Trans.Commit;

    Query.SQL.Text := 'SELECT value FROM schema_info WHERE key = ''version''';
    Query.Open;
    if Query.IsEmpty or (Query.FieldByName('value').AsInteger < 2) then
    begin
      Query.Close;
      { Add the new 'recurring' column }
      Query.SQL.Text := 'ALTER TABLE expenses ADD COLUMN recurring INTEGER DEFAULT 0';
      Query.ExecSQL;

      { Update version number }
      Query.SQL.Text :=
        'INSERT OR REPLACE INTO schema_info (key, value) VALUES (''version'', ''2'')';
      Query.ExecSQL;
      Trans.Commit;
    end
    else
      Query.Close;
  finally
    Query.Free;
  end;
end;

This migration pattern — checking the current version and applying changes incrementally — is how production applications handle evolving schemas. Run all migrations in order at application startup, and each migration only executes once.
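
One simple way to drive this at startup is a procedure that calls every migration in order; each migration checks the recorded version itself, so repeated calls are harmless. A sketch:

```pascal
procedure RunMigrations(Conn: TSQLite3Connection; Trans: TSQLTransaction);
begin
  MigrateToVersion2(Conn, Trans);
  { MigrateToVersion3(Conn, Trans);  future migrations are appended here, in order }
end;
```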

Migrating Data from Typed Files to SQLite

If PennyWise has existing data in typed files from earlier chapters, we need a one-time migration to import it into the database. Here is a procedure that reads the old typed file and inserts each record into SQLite:

procedure MigrateFromTypedFile(const OldFileName: string;
  Conn: TSQLite3Connection; Trans: TSQLTransaction);
var
  OldFile: file of TExpenseRecord;
  Rec: TExpenseRecord;
  CatQuery, Query: TSQLQuery;
  Count: Integer;
begin
  if not FileExists(OldFileName) then Exit;

  CatQuery := TSQLQuery.Create(nil);
  Query := TSQLQuery.Create(nil);
  try
    { The old records store the category as a name, but the new schema
      references categories by id. Create the category if it does not
      exist yet, then resolve the name to an id with a subquery. }
    CatQuery.Database := Conn;
    CatQuery.Transaction := Trans;
    CatQuery.SQL.Text := 'INSERT OR IGNORE INTO categories (name) VALUES (:cat)';

    Query.Database := Conn;
    Query.Transaction := Trans;
    Query.SQL.Text :=
      'INSERT INTO expenses (date, description, category_id, amount, notes) ' +
      'VALUES (:date, :desc, ' +
      '(SELECT id FROM categories WHERE name = :cat), :amt, :notes)';

    AssignFile(OldFile, OldFileName);
    Reset(OldFile);
    Count := 0;
    try
      while not Eof(OldFile) do
      begin
        Read(OldFile, Rec);
        CatQuery.Params.ParamByName('cat').AsString := Rec.Category;
        CatQuery.ExecSQL;

        Query.Params.ParamByName('date').AsString :=
          FormatDateTime('yyyy-mm-dd', Rec.Date);
        Query.Params.ParamByName('desc').AsString := Rec.Description;
        Query.Params.ParamByName('cat').AsString := Rec.Category;
        Query.Params.ParamByName('amt').AsFloat := Rec.Amount;
        Query.Params.ParamByName('notes').AsString := '';
        Query.ExecSQL;
        Inc(Count);
      end;
      Trans.Commit;
    finally
      CloseFile(OldFile);
    end;

    { Rename the old file so we do not migrate again }
    RenameFile(OldFileName, OldFileName + '.migrated');

    ShowMessage(Format('Migrated %d expenses from typed file to database.', [Count]));
  finally
    Query.Free;
    CatQuery.Free;
  end;
end;

Call this during initialization, before opening the main query. The file is renamed after migration so it does not run again on the next launch.


31.9 Connecting to PostgreSQL and MySQL

SQLite is perfect for single-user desktop applications. But what if PennyWise needs to share data between multiple users on a network? That requires a client-server database.

PostgreSQL

uses pqconnection;  { PostgreSQL connector unit from FPC's fcl-db package }

var
  Conn: TPQConnection;
begin
  Conn := TPQConnection.Create(nil);
  Conn.HostName := 'localhost';
  Conn.DatabaseName := 'pennywise';
  Conn.UserName := 'pennywise_app';
  Conn.Password := 'secret';
  Conn.Port := 5432;
  Conn.Open;
  { The rest is identical — TSQLTransaction, TSQLQuery, TDataSource, etc. }
end;

PostgreSQL is the recommended choice for server databases. It supports advanced data types (arrays, JSON, UUID), full-text search, and sophisticated query optimization. It is free and open source.

MySQL

uses mysql80conn;  { or mysql57conn, mysql56conn depending on version }

var
  Conn: TMySQL80Connection;
begin
  Conn := TMySQL80Connection.Create(nil);
  Conn.HostName := 'localhost';
  Conn.DatabaseName := 'pennywise';
  Conn.UserName := 'pennywise_app';
  Conn.Password := 'secret';
  Conn.Port := 3306;
  Conn.Open;
end;

The Power of the SQLdb Architecture

The beauty of the SQLdb architecture: the code after the connection is identical. TSQLTransaction, TSQLQuery, TDataSource, TDBGrid — they all work the same way regardless of the database backend. Switching from SQLite to PostgreSQL requires changing only the connection component.

This means you can develop and test with SQLite (zero configuration, single file) and deploy to PostgreSQL (multi-user, networked) with minimal code changes. Write a factory function that creates the appropriate connection based on configuration:

function CreateDatabaseConnection(const Config: TDBConfig): TSQLConnection;
begin
  case Config.Backend of
    dbSQLite:
    begin
      Result := TSQLite3Connection.Create(nil);
      (Result as TSQLite3Connection).DatabaseName := Config.DatabaseFile;
    end;
    dbPostgreSQL:
    begin
      Result := TPQConnection.Create(nil);
      with Result as TPQConnection do
      begin
        HostName := Config.Host;
        DatabaseName := Config.DatabaseName;
        UserName := Config.UserName;
        Password := Config.Password;
        Port := Config.Port;
      end;
    end;
  else
    raise Exception.Create('Unsupported database backend');
  end;
end;
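
Using the factory, the setup code never names a concrete backend. A sketch, assuming AppConfig holds a filled-in TDBConfig:

```pascal
var
  Conn: TSQLConnection;
  Trans: TSQLTransaction;
begin
  Conn := CreateDatabaseConnection(AppConfig);
  Trans := TSQLTransaction.Create(nil);
  Trans.Database := Conn;
  Conn.Transaction := Trans;
  Conn.Open;
  { everything from here on (queries, data sources, grids) is backend-agnostic }
end;
```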

🔗 Cross-Reference: This is the Strategy Pattern If you studied design patterns (or if you read Chapter 33), you will recognize the SQLdb architecture as the Strategy pattern. The database-specific code (the connection component) is isolated behind a common interface. The rest of the application works with the abstract interface, making the database backend interchangeable.


31.10 Project Checkpoint: PennyWise SQLite

We migrate PennyWise from flat-file storage to SQLite. The database file pennywise.db stores all expenses and categories.

Backup Strategy

Because SQLite stores everything in a single file, backups are simple: copy the file. But you should not copy the file while the connection is open and a transaction is in progress — the copy might capture an inconsistent state.

The safe approach is to use SQLite's backup API, which Lazarus exposes through a direct SQL command:

procedure TfrmMain.BackupDatabase;
var
  BackupPath: string;
begin
  BackupPath := GetAppConfigDir(False) +
    'pennywise_backup_' + FormatDateTime('yyyy-mm-dd_hhnnss', Now) + '.db';

  { Close the main query to release locks }
  FQuery.Close;
  try
    { VACUUM INTO takes no parameters, so the path is formatted directly;
      BackupPath is built by the application itself, not from user input }
    FConn.ExecuteDirect(Format('VACUUM INTO ''%s''', [BackupPath]));
    StatusBar1.Panels[0].Text := 'Backup created: ' +
      ExtractFileName(BackupPath);
  finally
    FQuery.Open;  { reopen the query }
  end;
end;

The VACUUM INTO command (SQLite 3.27+) creates a compacted copy of the database. It is safe to run while the database is open and ensures the backup is consistent.

For older SQLite versions, close the connection, copy the file with the CopyFile function from the LCL's FileUtil unit, and reopen the connection.
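
A sketch of that fallback, using CopyFile from the LCL's FileUtil unit:

```pascal
uses FileUtil;  { provides CopyFile }
...
FQuery.Close;
FConn.Close;  { release the database file completely }
try
  if not CopyFile(FConn.DatabaseName, BackupPath) then
    ShowMessage('Backup failed.');
finally
  FConn.Open;
  FQuery.Open;
end;
```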

Add a "File > Backup" menu item so Rosa can create backups before major data changes. And consider automatic daily backups in the FormCreate handler — if no backup exists for today, create one silently.

Database Initialization

procedure TfrmMain.InitializeDatabase;
begin
  FConn := TSQLite3Connection.Create(Self);
  FConn.DatabaseName := GetAppConfigDir(False) + 'pennywise.db';

  FTrans := TSQLTransaction.Create(Self);
  FTrans.Database := FConn;

  FConn.Transaction := FTrans;
  FConn.Open;

  { Enable foreign keys }
  FConn.ExecuteDirect('PRAGMA foreign_keys = ON');

  { Create tables if they do not exist }
  ExecuteSQL(
    'CREATE TABLE IF NOT EXISTS categories (' +
    '  id INTEGER PRIMARY KEY AUTOINCREMENT,' +
    '  name TEXT NOT NULL UNIQUE,' +
    '  budget_limit REAL DEFAULT 0)');

  ExecuteSQL(
    'CREATE TABLE IF NOT EXISTS expenses (' +
    '  id INTEGER PRIMARY KEY AUTOINCREMENT,' +
    '  date TEXT NOT NULL,' +
    '  description TEXT NOT NULL,' +
    '  category_id INTEGER NOT NULL,' +
    '  amount REAL NOT NULL,' +
    '  notes TEXT DEFAULT '''',' +
    '  created_at TEXT DEFAULT (datetime(''now'')),' +
    '  FOREIGN KEY (category_id) REFERENCES categories(id))');

  { Create indexes for frequent queries }
  ExecuteSQL('CREATE INDEX IF NOT EXISTS idx_exp_date ON expenses(date)');
  ExecuteSQL('CREATE INDEX IF NOT EXISTS idx_exp_cat ON expenses(category_id)');

  { Seed default categories }
  SeedDefaultCategories;

  { Set up the main query }
  FQuery := TSQLQuery.Create(Self);
  FQuery.Database := FConn;
  FQuery.Transaction := FTrans;
  FQuery.SQL.Text :=
    'SELECT e.id, e.date, e.description, c.name as category, e.amount, e.notes ' +
    'FROM expenses e JOIN categories c ON e.category_id = c.id ' +
    'ORDER BY e.date DESC';

  FDataSource := TDataSource.Create(Self);
  FDataSource.DataSet := FQuery;

  DBGrid1.DataSource := FDataSource;
  FQuery.Open;
end;

procedure TfrmMain.ExecuteSQL(const ASQL: string);
var
  Q: TSQLQuery;
begin
  Q := TSQLQuery.Create(nil);
  try
    Q.Database := FConn;
    Q.Transaction := FTrans;
    Q.SQL.Text := ASQL;
    Q.ExecSQL;
    FTrans.Commit;
  finally
    Q.Free;
  end;
end;
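
SeedDefaultCategories, referenced in InitializeDatabase, is not shown above; a sketch with illustrative category names, using INSERT OR IGNORE so existing rows are never disturbed:

```pascal
procedure TfrmMain.SeedDefaultCategories;
const
  Defaults: array[0..4] of string =
    ('Food', 'Transport', 'Housing', 'Entertainment', 'Other');
var
  I: Integer;
  Q: TSQLQuery;
begin
  Q := TSQLQuery.Create(nil);
  try
    Q.Database := FConn;
    Q.Transaction := FTrans;
    { INSERT OR IGNORE leaves existing rows untouched, so this is
      safe to run on every startup }
    Q.SQL.Text := 'INSERT OR IGNORE INTO categories (name) VALUES (:name)';
    for I := 0 to High(Defaults) do
    begin
      Q.Params.ParamByName('name').AsString := Defaults[I];
      Q.ExecSQL;
    end;
    FTrans.Commit;
  finally
    Q.Free;
  end;
end;
```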

Adding an Expense

procedure TfrmMain.AddExpense(const ADate, ADesc: string;
  ACategoryID: Integer; AAmount: Double; const ANotes: string);
var
  InsertQuery: TSQLQuery;
begin
  InsertQuery := TSQLQuery.Create(nil);
  try
    InsertQuery.Database := FConn;
    InsertQuery.Transaction := FTrans;
    InsertQuery.SQL.Text :=
      'INSERT INTO expenses (date, description, category_id, amount, notes) ' +
      'VALUES (:date, :desc, :catid, :amount, :notes)';
    InsertQuery.Params.ParamByName('date').AsString := ADate;
    InsertQuery.Params.ParamByName('desc').AsString := ADesc;
    InsertQuery.Params.ParamByName('catid').AsInteger := ACategoryID;
    InsertQuery.Params.ParamByName('amount').AsFloat := AAmount;
    InsertQuery.Params.ParamByName('notes').AsString := ANotes;
    InsertQuery.ExecSQL;
    FTrans.Commit;
    FQuery.Refresh;  { refresh the grid }
    UpdateCharts;
    UpdateStatusBar;
    MarkModified;
  finally
    InsertQuery.Free;
  end;
end;

Deleting an Expense

procedure TfrmMain.DeleteSelectedExpense;
var
  ExpenseID: Integer;
  DeleteQuery: TSQLQuery;
begin
  if FQuery.IsEmpty then Exit;

  ExpenseID := FQuery.FieldByName('id').AsInteger;

  if MessageDlg('Confirm Delete',
    Format('Delete expense "%s" ($%.2f)?',
      [FQuery.FieldByName('description').AsString,
       FQuery.FieldByName('amount').AsFloat]),
    mtConfirmation, [mbYes, mbNo], 0) <> mrYes then
    Exit;

  DeleteQuery := TSQLQuery.Create(nil);
  try
    DeleteQuery.Database := FConn;
    DeleteQuery.Transaction := FTrans;
    DeleteQuery.SQL.Text := 'DELETE FROM expenses WHERE id = :id';
    DeleteQuery.Params.ParamByName('id').AsInteger := ExpenseID;
    DeleteQuery.ExecSQL;
    FTrans.Commit;
    FQuery.Refresh;
    UpdateCharts;
    UpdateStatusBar;
    MarkModified;
  finally
    DeleteQuery.Free;
  end;
end;

Querying Totals

function TfrmMain.GetTotalExpenses: Double;
var
  Q: TSQLQuery;
begin
  Q := TSQLQuery.Create(nil);
  try
    Q.Database := FConn;
    Q.Transaction := FTrans;
    Q.SQL.Text := 'SELECT COALESCE(SUM(amount), 0) as total FROM expenses';
    Q.Open;
    Result := Q.FieldByName('total').AsFloat;
    Q.Close;
  finally
    Q.Free;
  end;
end;

function TfrmMain.GetCategoryTotals: TCategoryTotalArray;
var
  Q: TSQLQuery;
  I: Integer;
begin
  Q := TSQLQuery.Create(nil);
  try
    Q.Database := FConn;
    Q.Transaction := FTrans;
    Q.SQL.Text :=
      'SELECT c.name, COALESCE(SUM(e.amount), 0) as total ' +
      'FROM categories c LEFT JOIN expenses e ON c.id = e.category_id ' +
      'GROUP BY c.name HAVING total > 0 ORDER BY total DESC';
    Q.Open;
    I := 0;
    SetLength(Result, 0);
    while not Q.EOF do
    begin
      SetLength(Result, I + 1);
      Result[I].Name := Q.FieldByName('name').AsString;
      Result[I].Total := Q.FieldByName('total').AsFloat;
      Inc(I);
      Q.Next;
    end;
    Q.Close;
  finally
    Q.Free;
  end;
end;

The COALESCE(SUM(amount), 0) function returns 0 instead of NULL when there are no matching rows, and the LEFT JOIN keeps categories with no expenses in the join. The HAVING total > 0 clause then filters those empty categories out of the chart data; drop it if you want every category listed, including those with a zero total.

Filtering the Grid

Add a search box that filters the expense grid in real time:

procedure TfrmMain.edtSearchChange(Sender: TObject);
var
  SearchText: string;
begin
  SearchText := Trim(edtSearch.Text);
  FQuery.Close;

  if SearchText = '' then
    FQuery.SQL.Text :=
      'SELECT e.id, e.date, e.description, c.name as category, ' +
      'e.amount, e.notes FROM expenses e ' +
      'JOIN categories c ON e.category_id = c.id ORDER BY e.date DESC'
  else
  begin
    FQuery.SQL.Text :=
      'SELECT e.id, e.date, e.description, c.name as category, ' +
      'e.amount, e.notes FROM expenses e ' +
      'JOIN categories c ON e.category_id = c.id ' +
      'WHERE e.description LIKE :search OR c.name LIKE :search ' +
      'ORDER BY e.date DESC';
    FQuery.Params.ParamByName('search').AsString := '%' + SearchText + '%';
  end;

  FQuery.Open;
  UpdateStatusBar;
end;

This searches both the description and the category name. The LIKE operator with % wildcards performs a substring search; note that SQLite's LIKE is case-insensitive only for ASCII letters, so accented characters compare case-sensitively unless the ICU extension is loaded.

Date Range Filtering

PennyWise should let the user filter expenses by date range — showing only expenses from this month, this quarter, or a custom range. Here is the implementation with TDateTimePicker controls:

procedure TfrmMain.ApplyDateFilter;
var
  StartDate, EndDate: string;
begin
  FQuery.Close;

  if chkDateFilter.Checked then
  begin
    StartDate := FormatDateTime('yyyy-mm-dd', dtpStartDate.Date);
    EndDate := FormatDateTime('yyyy-mm-dd', dtpEndDate.Date);

    FQuery.SQL.Text :=
      'SELECT e.id, e.date, e.description, c.name as category, ' +
      'e.amount, e.notes FROM expenses e ' +
      'JOIN categories c ON e.category_id = c.id ' +
      'WHERE e.date BETWEEN :start AND :end ' +
      'ORDER BY e.date DESC';
    FQuery.Params.ParamByName('start').AsString := StartDate;
    FQuery.Params.ParamByName('end').AsString := EndDate;
  end
  else
  begin
    FQuery.SQL.Text :=
      'SELECT e.id, e.date, e.description, c.name as category, ' +
      'e.amount, e.notes FROM expenses e ' +
      'JOIN categories c ON e.category_id = c.id ' +
      'ORDER BY e.date DESC';
  end;

  FQuery.Open;
  UpdateStatusBar;
  UpdateCharts;
end;

Notice how the date filtering works with SQLite's text-based date storage. Because dates are stored in yyyy-mm-dd format, string comparison (BETWEEN) correctly orders them chronologically. This is why consistent date formatting matters — the ISO 8601 format (yyyy-mm-dd) sorts correctly as text.
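
You can convince yourself with plain string comparisons (dates are illustrative):

```pascal
{ ISO 8601 strings order chronologically as text: }
Assert(CompareStr('2024-03-15', '2024-11-02') < 0);  { March before November: correct }
{ A locale format such as dd/mm/yyyy does not: }
Assert(CompareStr('15/03/2024', '02/11/2024') > 0);  { March "after" November: wrong }
```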

The Complete Database Lifecycle

Let us summarize the complete database lifecycle in PennyWise:

  1. Application start — FormCreate calls InitializeDatabase, which opens (or creates) the database file, creates tables if they do not exist, runs schema migrations, seeds default categories, and opens the main query.

  2. Normal operation — The user adds, edits, deletes, and searches expenses. Each operation uses a parameterized query, commits the transaction, and refreshes the display.

  3. Data queries — Charts, totals, and reports use SQL aggregation queries (SUM, GROUP BY, COUNT) that the database executes far more efficiently than equivalent Pascal loops.

  4. Backup — The user can create a backup at any time, which copies the database file to a timestamped backup.

  5. Application shutdown — FormClose closes the query, closes the connection, and saves application settings to the INI file. The database file remains on disk, ready for the next launch.

This lifecycle is simple, robust, and handles failures gracefully. If the application crashes, the SQLite journal file ensures database integrity on the next launch. If the user's disk runs out of space, the transaction rollback prevents partial writes. The database is always in a consistent state.

Checkpoint Checklist

  - [ ] SQLite database created at application start if it does not exist
  - [ ] Expenses and categories tables with proper constraints and indexes
  - [ ] Foreign key enforcement enabled with PRAGMA
  - [ ] TDBGrid displays expenses from database query with JOIN
  - [ ] Add Expense uses parameterized INSERT query
  - [ ] Delete Expense uses parameterized DELETE query with confirmation
  - [ ] Search/filter uses parameterized WHERE clause, updates grid in real time
  - [ ] Date range filtering with TDateTimePicker controls
  - [ ] Total and category summaries use SQL aggregation (SUM, GROUP BY, COALESCE)
  - [ ] All user input goes through parameters — no string concatenation in SQL
  - [ ] Default categories seeded on first run with INSERT OR IGNORE
  - [ ] Database backup capability


31.11 Summary

This chapter transformed PennyWise from a file-based application to a database-driven one — a fundamental architectural upgrade.

What we covered:

  • Databases provide structured storage, SQL queries, indexing, transactions, integrity constraints, and concurrency — capabilities that flat files cannot match. We compared typed-file operations to SQL equivalents, showing the dramatic simplification that databases provide.
  • SQLite is an embedded, serverless, ACID-compliant database stored in a single file. It requires no installation, no configuration, and no maintenance. It supports dynamic typing and single-writer concurrency — perfect for single-user desktop applications.
  • SQLdb components form a chain: Connection (database file) → Transaction (safety net) → Query (SQL execution) → DataSource (bridge) → DBGrid (display). We covered each component's properties, methods, and role in the chain.
  • SQL basics — CREATE TABLE for schema with constraints (PRIMARY KEY, NOT NULL, UNIQUE, DEFAULT), CREATE INDEX for performance, INSERT/UPDATE/DELETE for modification, SELECT for retrieval with WHERE, ORDER BY, LIMIT, JOIN for relationships, GROUP BY and SUM/COUNT/COALESCE for aggregation.
  • Data-aware controls (TDBGrid, TDBEdit, TDBMemo, TDBComboBox, TDBNavigator, TDBText) bind directly to database fields through TDataSource, eliminating manual data transfer code. AutoEdit controls whether changes require explicit Edit/Post calls.
  • Parameterized queries are mandatory for security. Never concatenate user input into SQL strings. Always use :param placeholders. Parameters are processed as data, not code, making injection impossible. We explained the two-step process (prepare, then bind and execute) that makes this work.
  • Database design — primary keys for unique identification, foreign keys for relationships (with PRAGMA foreign_keys = ON), indexes for query performance, normalization (1NF through 3NF) to eliminate redundancy, JOINs to reconstruct related data.
  • PostgreSQL and MySQL connections use different connection components but the same TSQLTransaction, TSQLQuery, and TDataSource code. The SQLdb architecture is the Strategy pattern — swap the connection, keep everything else.
  • PennyWise now stores all data in pennywise.db, uses SQL for queries and aggregation, displays results through data-aware controls, supports real-time search filtering, and seeds default categories on first launch.

In Chapter 32, we package PennyWise for distribution — creating installers, handling cross-platform differences, and producing a professional, distributable application.