Case Study 1: Business Logic Layer Design -- Views and Procedures at Scale


Background

Greenfield Federal Credit Union (GFCU) is a mid-size credit union headquartered in Portland, Oregon, serving 280,000 members with $3.2 billion in assets. GFCU runs its core banking platform on DB2 for LUW 11.5, deployed across a three-node pureScale cluster.

In 2023, GFCU's technology team completed a major initiative: migrating from a legacy COBOL/CICS application that embedded SQL directly in program code to a modern Java-based service architecture. The migration exposed a fundamental design question that every organization faces when modernizing: where should business logic live?

The Legacy Architecture

The old system had 1,400 COBOL programs, each containing embedded SQL statements. Business rules were scattered across these programs:

| Rule Category                 | Location in Legacy System             | Count |
| ----------------------------- | ------------------------------------- | ----- |
| Balance validations           | Inline COBOL IF statements            | ~320  |
| Fee calculations              | COBOL PERFORM paragraphs              | ~85   |
| Regulatory compliance checks  | Scattered across 40+ programs         | ~150  |
| Audit trail generation        | COBOL WRITE statements to flat files  | ~200  |
| Cross-account validations     | Custom COBOL subroutines              | ~60   |

When two programs needed the same rule -- say, overdraft protection logic -- each had its own copy. Over 30 years, these copies had diverged. The overdraft logic in the ATM processing program was subtly different from the overdraft logic in the teller application, which was different again from the online banking system. Nobody could say with certainty which version was "correct."

The New Architecture Goals

GFCU's chief architect, Priya Chandrasekar, defined three non-negotiable goals for the new system:

  1. Single source of truth for business rules: Every rule exists in exactly one place.
  2. Defense in depth for data integrity: No single layer failure should allow invalid data.
  3. Measurable performance: No transaction exceeds 200ms end-to-end under peak load.

The Design Decision

Chandrasekar's team evaluated three architectures:

Option A: All Logic in the Application Layer

Business rules implemented in Java services. The database is a "dumb store" -- tables, indexes, constraints, nothing more. This is the dominant pattern in microservices architectures.

Pros: Easy to test (JUnit), easy to deploy (CI/CD), easy to hire developers, full debugging support.

Cons: Multiple applications accessing the same tables could bypass rules. A DBA running ad-hoc SQL to fix data could violate business rules. No defense in depth.

Option B: All Logic in the Database

Every business rule implemented as a stored procedure or trigger. Applications call procedures and nothing else. This is the traditional mainframe pattern.

Pros: True single source of truth. Impossible to bypass rules. Excellent for latency-sensitive operations.

Cons: Harder to test. SQL PL is less expressive than Java for complex workflows. Deploying procedure changes requires DBA coordination. Limits horizontal application scaling.

Option C: Hybrid -- Logic Where It Fits Best

Database layer handles:

  • Data integrity (constraints + triggers)
  • Audit trail (triggers)
  • Security boundaries (views + procedures for sensitive operations)
  • High-performance transactional operations (procedures)

Application layer handles:

  • Workflow orchestration
  • User interface logic
  • External API integrations
  • Business rules that change frequently

Pros: Each type of logic lives where it works best. Defense in depth without database overload.

Cons: Requires discipline to enforce the boundary. Developers must understand both layers.

The Choice

Chandrasekar chose Option C with strict governance rules.

Implementation

The View Layer

GFCU created 47 views organized into three tiers:

Tier 1 -- Security Views (12 views): These restrict data access by role. The v_member_teller view shows member data appropriate for tellers (name, contact, account balances) but hides SSN, credit score, and internal risk flags. The v_member_manager view adds those sensitive fields. Applications connect with role-specific credentials and query the appropriate view. No application has SELECT access to base tables.
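A minimal sketch of what such a security view might look like. Only the view name v_member_teller comes from the case study; the base-table and column names here are illustrative assumptions:

```sql
-- Hypothetical base tables (member, account) and columns; only the
-- view name is taken from the case study.
CREATE VIEW v_member_teller AS
  SELECT m.member_id,
         m.first_name,
         m.last_name,
         m.phone,
         m.email,
         a.account_id,
         a.current_balance
  FROM member m
  JOIN account a ON a.member_id = m.member_id;
  -- SSN, credit score, and internal risk flags are deliberately
  -- absent from the select list, so tellers can never see them.

-- Role-based access: the teller role sees only the view.
GRANT SELECT ON v_member_teller TO ROLE teller_role;
-- Base tables carry no application SELECT grants at all.
```

Because the sensitive columns never appear in the view definition, no query through the teller connection can reach them, regardless of what SQL the application sends.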

Tier 2 -- Abstraction Views (23 views): These encapsulate the normalized schema into business-meaningful "tables." The v_member_portfolio view joins 6 base tables to present a complete picture of a member's relationship with the credit union. Report developers query these views without needing to understand the physical schema.

Tier 3 -- Aggregation Views (12 views): These provide pre-defined summaries. The v_branch_daily_summary and v_product_performance views support management dashboards. Three of these are backed by MQTs (materialized query tables), refreshed hourly, for performance.
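An MQT-backed aggregation view might be defined along these lines. This is a sketch under assumed table and column names; the case study specifies only the view name and the hourly refresh:

```sql
-- Hypothetical MQT behind v_branch_daily_summary. A deferred-refresh
-- MQT stores the aggregate result and is refreshed on a schedule.
CREATE TABLE mqt_branch_daily_summary AS
  (SELECT branch_id,
          txn_date,
          COUNT(*)        AS txn_count,
          SUM(txn_amount) AS txn_total
   FROM account_txn
   GROUP BY branch_id, txn_date)
  DATA INITIALLY DEFERRED
  REFRESH DEFERRED
  MAINTAINED BY SYSTEM;

-- Hourly batch job re-materializes the aggregates:
REFRESH TABLE mqt_branch_daily_summary;

-- The dashboard view reads the pre-aggregated rows:
CREATE VIEW v_branch_daily_summary AS
  SELECT branch_id, txn_date, txn_count, txn_total
  FROM mqt_branch_daily_summary;
```

Dashboards then read a few pre-computed rows instead of scanning the full transaction history, which is the source of the 12-second-to-200ms improvement reported below.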

The Trigger Layer

GFCU implemented 34 triggers, strictly limited to three categories:

  1. Audit triggers (18): Every INSERT, UPDATE, and DELETE on the 6 core tables generates an audit record. These triggers are non-negotiable -- they fire regardless of how data is modified.

  2. Derivation triggers (9): BEFORE triggers that compute derived columns (full name from first + last, masked SSN, account age in months). These ensure derived data is always consistent.

  3. Integrity triggers (7): BEFORE triggers that enforce cross-table rules that cannot be expressed as CHECK constraints (e.g., "a member's total unsecured lending cannot exceed 4x their annual income").

Chandrasekar explicitly prohibited the following trigger uses:

  • No triggers for workflow orchestration (that belongs in the application).
  • No triggers that call external services.
  • No trigger chains deeper than 2 levels.
  • No triggers on temporary or staging tables.
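To make the categories concrete, here is a sketch of one audit trigger and one integrity trigger in DB2 SQL PL. All table, column, and SQLSTATE values are illustrative assumptions; the 4x-income rule itself comes from the case study:

```sql
-- Audit trigger: every UPDATE on account writes an audit row,
-- regardless of which application (or ad-hoc SQL) made the change.
CREATE TRIGGER trg_account_audit_upd
  AFTER UPDATE ON account
  REFERENCING OLD AS o NEW AS n
  FOR EACH ROW
BEGIN ATOMIC
  INSERT INTO account_audit
    (account_id, changed_at, changed_by, old_balance, new_balance)
  VALUES
    (n.account_id, CURRENT TIMESTAMP, SESSION_USER,
     o.current_balance, n.current_balance);
END

-- Integrity trigger: enforce a cross-table rule that a CHECK
-- constraint cannot express (total unsecured lending <= 4x income).
CREATE TRIGGER trg_unsecured_limit
  NO CASCADE BEFORE INSERT ON loan
  REFERENCING NEW AS n
  FOR EACH ROW
  WHEN (n.secured = 'N')
BEGIN ATOMIC
  IF ((SELECT COALESCE(SUM(principal), 0)
       FROM loan
       WHERE member_id = n.member_id AND secured = 'N') + n.principal
      > 4 * (SELECT annual_income
             FROM member
             WHERE member_id = n.member_id))
  THEN
    SIGNAL SQLSTATE '75001'
      SET MESSAGE_TEXT = 'Unsecured lending would exceed 4x annual income';
  END IF;
END
```

Note that the integrity trigger only reads and signals; it never modifies other tables, which keeps it within the "no trigger chains deeper than 2 levels" rule.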

The Procedure Layer

GFCU created 28 stored procedures for operations that met at least two of these criteria:

  • The operation involves 3+ SQL statements that must execute atomically.
  • The operation touches sensitive data that applications should not access directly.
  • The operation is latency-critical (must complete in under 50ms).

Key procedures included:

  • sp_transfer_funds: 12 SQL statements, 6 validations, 2 audit records. Executes in 8ms average (vs. 45ms when implemented as individual round trips from the application).
  • sp_open_account: Creates account, assigns to branch, applies opening promotion, generates welcome correspondence record. 9 SQL statements.
  • sp_calculate_dividends: Monthly batch procedure processing 280,000 accounts. Runs in 4.2 minutes using set-based operations inside the procedure (vs. estimated 35+ minutes with cursor-based application logic).
  • sp_apply_fees: Monthly fee assessment with waiver logic based on account tier, balance thresholds, and relationship depth.
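An abbreviated sketch of what sp_transfer_funds might look like in SQL PL. The real procedure has 12 statements and 6 validations; this version shows only the shape, with assumed table names, column names, and error codes:

```sql
-- Sketch only: one validation and the atomic debit/credit.
-- Audit rows are written by the audit triggers, not repeated here.
CREATE OR REPLACE PROCEDURE sp_transfer_funds
  (IN  p_from_account BIGINT,
   IN  p_to_account   BIGINT,
   IN  p_amount       DECIMAL(15,2),
   OUT p_confirmation BIGINT)
LANGUAGE SQL
BEGIN
  DECLARE v_balance DECIMAL(15,2);

  -- Validation: source account must cover the transfer.
  -- Lock the row so the balance cannot change under us.
  SELECT current_balance INTO v_balance
    FROM account
    WHERE account_id = p_from_account
    WITH RS USE AND KEEP UPDATE LOCKS;

  IF v_balance < p_amount THEN
    SIGNAL SQLSTATE '75002' SET MESSAGE_TEXT = 'Insufficient funds';
  END IF;

  -- Debit and credit execute in the same unit of work.
  UPDATE account
    SET current_balance = current_balance - p_amount
    WHERE account_id = p_from_account;
  UPDATE account
    SET current_balance = current_balance + p_amount
    WHERE account_id = p_to_account;

  VALUES NEXT VALUE FOR seq_confirmation INTO p_confirmation;
END
```

The application issues a single CALL (e.g. `CALL sp_transfer_funds(1001, 1002, 250.00, ?)`), which is why the 8-round-trip, 45ms application-side version collapses to one round trip at 8ms.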

Results After 18 Months

Data Integrity

Before migration: 340 data integrity incidents per year (divergent business rules, ad-hoc SQL bypassing validations).

After migration: 3 data integrity incidents in 18 months, all caused by bugs in the trigger logic that were identified during testing of new features and fixed before production impact.

Performance

The sp_transfer_funds procedure reduced fund transfer latency from 45ms (8 round trips) to 8ms (1 round trip). Under peak load (1,200 TPS), the database server CPU utilization increased by only 4% from the trigger and procedure overhead -- well within the performance budget.

The MQT-backed views reduced dashboard query time from 12 seconds (scanning transaction history) to 200ms (reading pre-aggregated data). The hourly refresh added 90 seconds of batch processing, which runs during off-peak hours.

Development Velocity

Initial development was 20% slower because developers had to learn the hybrid architecture and coordinate with the DBA team on trigger and procedure changes. After 6 months, velocity normalized as the team internalized the patterns. After 12 months, velocity exceeded the pre-migration baseline because developers stopped reimplementing common logic and instead called existing procedures.

Audit and Compliance

GFCU's annual regulatory audit was completed in 3 weeks (down from 7 weeks). Auditors could trace every data change through the trigger-generated audit trail. The single-source-of-truth design meant auditors only needed to review 28 procedures and 34 triggers to understand all business rules, instead of 1,400 COBOL programs.

Lessons Learned

  1. Governance is more important than technology. The hybrid architecture works only if someone enforces the boundary between database logic and application logic. GFCU created a "Logic Placement Review" step in their design process -- every new feature must document where each rule will be implemented and why.

  2. Trigger documentation is not optional. Every trigger has a one-page specification: what it does, why it exists, what tables it touches, and what its performance impact is. This documentation is stored in version control alongside the trigger DDL.

  3. Procedure testing requires infrastructure investment. GFCU built a dedicated test database that is refreshed nightly from production (with data masking). Procedure tests run as part of the CI pipeline, using a custom test harness that calls procedures and validates output parameters and result sets.

  4. MQTs require monitoring. One MQT's hourly refresh began taking 15 minutes instead of 90 seconds after a schema change added 2 million rows to a base table. Without monitoring, this would have gone undetected until it impacted the next refresh window.

  5. Start conservative with triggers. GFCU's initial design had 52 triggers. After performance testing, they eliminated 18 that provided marginal value at measurable cost. The principle: every trigger must justify its existence with a concrete business requirement.


Discussion Questions

  1. Chandrasekar prohibited triggers for workflow orchestration. What is the risk of using triggers for this purpose? Can you think of a scenario where it might be justified?

  2. The sp_transfer_funds procedure reduced latency from 45ms to 8ms. Would this improvement matter for a system processing 10 TPS? What about 5,000 TPS? At what throughput does the improvement become critical?

  3. GFCU's MQTs are refreshed hourly. A branch manager complains that her dashboard does not reflect a large deposit made 30 minutes ago. How would you address this concern? Should the MQT be changed to REFRESH IMMEDIATE?

  4. What would you add or change in GFCU's architecture if they expanded to offer real-time payment services (requiring sub-10ms transaction processing)?

