Chapter 10 Key Takeaways: Electronic Sound & Synthesis


Core Concepts

1. The Three Motivations for Synthesis

Electronic synthesis serves three distinct purposes:

  • Replication: Recreating acoustic instruments as accurately as possible
  • Extension: Implementing acoustic physics beyond physical constraints
  • Creation: Generating sounds with no acoustic referent whatsoever

Each motivation has led to distinct synthesis paradigms and cultural applications.

2. VCO-VCF-VCA: The Physical Model of Sound

The three fundamental synthesizer building blocks directly mirror acoustic physics:

Synthesizer Component | Acoustic Analog
VCO (oscillator)      | Vibrating element (string, reed, vocal fold)
VCF (filter)          | Resonating body (guitar top, vocal tract)
VCA × ADSR envelope   | Amplitude dynamics (bowing pressure, breath)

This mapping shows that synthesizer architecture is not arbitrary engineering — it implements the physics of acoustic instrument sound production.

3. Waveform Harmonic Content

  • Sine: Fundamental only — the "atom" of sound
  • Triangle: Odd harmonics with 1/n² amplitude — soft, gentle
  • Square: Odd harmonics with 1/n amplitude — buzzy, hollow (like clarinet)
  • Sawtooth: All harmonics with 1/n amplitude — bright, rich (like string/voice)

Subtractive synthesis starts with the sawtooth (maximally rich) and sculpts with filters.
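These harmonic recipes can be sketched directly as sine sums. A minimal sketch in plain Python; the 50-partial cutoff and the alternating sign on the triangle partials are standard Fourier-series details, not stated in the text:

```python
import math

def partial_sum(harmonics, t, f0=1.0):
    """Evaluate a sum of (harmonic number, amplitude) sine partials at time t."""
    return sum(a * math.sin(2 * math.pi * n * f0 * t) for n, a in harmonics)

N = 50  # number of partials to keep
saw      = [(n, 1 / n) for n in range(1, N + 1)]      # all harmonics, 1/n
square   = [(n, 1 / n) for n in range(1, N + 1, 2)]   # odd harmonics, 1/n
triangle = [(n, (-1) ** ((n - 1) // 2) / n**2)        # odd harmonics, 1/n^2,
            for n in range(1, N + 1, 2)]              # alternating in sign
```

Evaluating the square recipe a quarter period in recovers the Leibniz series, so `partial_sum(square, 0.25)` approaches π/4.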

4. ADSR Envelope

The ADSR (Attack, Decay, Sustain, Release) models the time-varying amplitude of any acoustic instrument. Piano: fast attack, long decay, no sustain. Flute: slow attack, long sustain. Percussion: instantaneous attack, fast decay, no sustain.
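A minimal piecewise-linear ADSR sketch; the specific times, levels, and the `note_off` parameter are illustrative assumptions, not values from the chapter:

```python
def adsr(t, attack=0.01, decay=0.1, sustain=0.7, release=0.2, note_off=0.5):
    """Piecewise-linear ADSR amplitude at time t (seconds).
    note_off marks key release and is assumed >= attack + decay."""
    if t < 0.0:
        return 0.0
    if t < attack:                    # Attack: ramp 0 -> 1
        return t / attack
    if t < attack + decay:            # Decay: ramp 1 -> sustain level
        return 1.0 - (t - attack) / decay * (1.0 - sustain)
    if t < note_off:                  # Sustain: hold while the key is down
        return sustain
    if t < note_off + release:        # Release: ramp sustain -> 0
        return sustain * (1.0 - (t - note_off) / release)
    return 0.0
```

A percussive patch would set `attack` near zero, a short `decay`, and `sustain=0`; a flute-like patch a long `attack` and high `sustain`.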

5. Subtractive Synthesis

Start rich (sawtooth source) → sculpt with filters → shape with envelope. This is the electronic implementation of the source-filter model. The VCF models the frequency-selective response of any resonating acoustic body — guitar body, vocal tract, instrument bore.
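The rich-source-then-filter chain can be sketched with a naive sawtooth and a one-pole lowpass. The 110 Hz pitch, 500 Hz cutoff, and the coefficient formula are illustrative assumptions:

```python
import math

def naive_saw(freq, sr, n):
    """Rich source: naive (non-band-limited) sawtooth samples in [-1, 1)."""
    return [2.0 * ((i * freq / sr) % 1.0) - 1.0 for i in range(n)]

def one_pole_lowpass(x, cutoff, sr):
    """Sculpt: y[i] = y[i-1] + a * (x[i] - y[i-1]), a set from the cutoff."""
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff / sr)
    out, y = [], 0.0
    for s in x:
        y += a * (s - y)
        out.append(y)
    return out

raw = naive_saw(110.0, 44100, 44100)        # one second of bright A2 sawtooth
dark = one_pole_lowpass(raw, 500.0, 44100)  # high harmonics rolled off
```

The filtered signal keeps the pitch but loses the sharp resets of the raw sawtooth, exactly what a closing VCF does to a bright oscillator.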

6. Additive Synthesis

Build from sine waves up, following Fourier's theorem. Any periodic sound can be synthesized by summing enough sine waves with appropriate frequencies, amplitudes, and phases. The Hammond organ's drawbars are additive synthesis in mechanical form.
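Drawbar-style additive synthesis reduces to one sine per drawbar. A sketch; the nine pitch ratios follow the standard Hammond drawbar footages, and levels run 0-8 as on the instrument:

```python
import math

# Pitch ratios of the nine Hammond drawbars relative to the 8' fundamental.
DRAWBAR_RATIOS = [0.5, 1.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0]

def organ_sample(t, f0, drawbars):
    """Additive synthesis: one sine partial per drawbar, summed."""
    return sum((level / 8.0) * math.sin(2.0 * math.pi * ratio * f0 * t)
               for ratio, level in zip(DRAWBAR_RATIOS, drawbars))
```

For example, `organ_sample(t, 220.0, [8, 0, 8, 8, 0, 0, 0, 0, 0])` mixes sub-octave, fundamental, and octave partials.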

7. FM Synthesis

FM formula: x(t) = A·sin(2π·fc·t + I·sin(2π·fm·t))

  • Sideband frequencies: fc ± n·fm for n = 0, 1, 2, ...
  • Sideband amplitudes: proportional to Bessel functions Jn(I)
  • C:M integer ratios → harmonic spectra; irrational ratios → inharmonic (bell-like)
  • Higher modulation index I → richer, more complex spectrum

Two oscillators + simple math → complex acoustic results (emergence).
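The FM formula and its sideband structure can be checked numerically. In this sketch the 8 kHz sample rate, the 1000 Hz/100 Hz carrier/modulator pair, and I = 1 are illustrative choices, and `dft_mag` is a hand-rolled single-bin DFT, not a library call:

```python
import math

def fm_sample(t, fc, fm, index, amp=1.0):
    """x(t) = A * sin(2*pi*fc*t + I * sin(2*pi*fm*t))"""
    return amp * math.sin(2.0 * math.pi * fc * t
                          + index * math.sin(2.0 * math.pi * fm * t))

def dft_mag(x, freq, sr):
    """Amplitude of the sinusoidal component at `freq` (naive single-bin DFT)."""
    re = sum(s * math.cos(2.0 * math.pi * freq * i / sr) for i, s in enumerate(x))
    im = sum(s * math.sin(2.0 * math.pi * freq * i / sr) for i, s in enumerate(x))
    return 2.0 * math.hypot(re, im) / len(x)

# One second at sr = 8000: fc = 1000 Hz, fm = 100 Hz, modulation index I = 1.
x = [fm_sample(i / 8000.0, 1000.0, 100.0, 1.0) for i in range(8000)]
```

Energy appears only at fc ± n·fm: the bin at 1100 Hz carries roughly |J1(1)| ≈ 0.44, while 1050 Hz, which lies between sidebands, is empty.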

8. Karplus-Strong Algorithm

Plucked string synthesis from a delay line of noise:

  • Delay line length = period of target pitch (simulates string length)
  • Two-sample averaging = frequency-dependent damping (high frequencies decay first)
  • Feedback = standing wave formation

Physical modeling from arithmetic: the physics of wave propagation emerges from a recurrence relation.

9. The Universal Oscillator Equation (Aiko's Insight)

The resonant synthesizer filter is governed by: m·ẍ + b·ẋ + k·x = F(t)

This same equation describes:

  • Mass-spring mechanical systems
  • RLC electrical circuits (VCF in analog synthesizers)
  • Vocal tract formants
  • Acoustic cavity resonances
  • Quantum harmonic oscillator (in classical limit)

The synthesizer filter and the quantum harmonic oscillator are the same physical system in different materials.
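The equation can be integrated directly. A minimal sketch using semi-implicit Euler with F(t) = 0 (the free damped oscillator); the mass, damping, stiffness, and step size are illustrative values:

```python
import math

def simulate_oscillator(m, b, k, x0=1.0, v0=0.0, dt=1e-4, steps=200_000):
    """Semi-implicit Euler for m*x'' + b*x' + k*x = 0 (unforced case)."""
    x, v = x0, v0
    xs = []
    for _ in range(steps):
        a = (-b * v - k * x) / m   # acceleration from the oscillator equation
        v += a * dt
        x += v * dt
        xs.append(x)
    return xs

# Tune k so that omega_0 = sqrt(k/m) corresponds to 5 Hz, with light damping.
xs = simulate_oscillator(m=1.0, b=0.5, k=(2.0 * math.pi * 5.0) ** 2)
```

The same code models a VCF's ringing, a mass on a spring, or an RLC circuit; only the physical meaning of m, b, and k changes. Here Q = √(km)/b ≈ 63, so the ring-out lasts many cycles.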

10. Technology as Mediator → Technology as Revealer

Electronic synthesis doesn't mediate between physics and music — it makes the underlying physics audible. The synthesizer reveals that acoustic instruments, electronic filters, and quantum systems are all instances of the same differential equation family, realized at different scales and in different materials.


Key Equations

Concept | Equation/Value
FM output | x(t) = A·sin(2π·fc·t + I·sin(2π·fm·t))
FM sidebands | fc ± n·fm, amplitudes ∝ |Jn(I)|
Universal oscillator | m·ẍ + b·ẋ + k·x = F(t)
Resonant frequency | ω₀ = √(k/m)
Q factor | Q = √(km)/b
Nyquist frequency | fmax = fs/2
Karplus-Strong frequency | f = sample_rate / delay_length
Sawtooth harmonics | nf₀ with amplitude 1/n
Square harmonics | (2n−1)f₀ with amplitude 1/(2n−1)

Synthesis Paradigm Comparison

Paradigm | Physical Model | Strength | Limitation
Subtractive | Source-filter | Intuitive, expressive | Limited to filtering
Additive | Fourier reconstruction | Theoretically complete | Too many parameters
FM | Coupled oscillators | Complex spectra from simplicity | Programming difficulty
Wavetable | Sampled physics | Realistic at snapshot | Doesn't respond dynamically
Physical modeling | Differential equations | Dynamic realism | Computationally expensive
Neural audio | Statistical approximation | Perceptual realism | No physical understanding

Historical Timeline

  • 1897: Telharmonium — first electronic instrument
  • 1920: Theremin — touchless electronic instrument
  • 1957: RCA Mark II — programmable electronic synthesis
  • 1964: Moog modular synthesizer — voltage control of physics
  • 1967: Chowning discovers FM synthesis at Stanford
  • 1970: Minimoog — synthesis for everyone
  • 1973: Chowning publishes FM synthesis paper
  • 1975: Yamaha licenses FM synthesis from Stanford
  • 1983: Yamaha DX7 — FM synthesis in every studio
  • 1983: Karplus-Strong algorithm published
  • 1990s: Waveguide synthesis — physical modeling goes real-time
  • 2000s: Software synthesizers — physics in code, free
  • 2019+: Neural audio synthesis — machine learning meets synthesis

Big Picture Connections

  • Reductionism vs. Emergence: FM synthesis shows how simple mathematical rules (the FM formula) produce emergent acoustic complexity (Bessel-function sideband spectra). The synthesizer architecture reduces acoustic instruments to three components (oscillator, filter, amplifier) while the sounds that emerge from combining these components defy reduction.
  • Technology as Mediator: The Moog synthesizer translated source-filter physics into a playable instrument; the DX7 translated FM mathematics into pop music; physical modeling translates wave equations into expressive instruments. Technology mediates between mathematical physics and human musical experience.
  • Universal Structures: The universal oscillator equation connects synthesizer filters to quantum mechanics to vocal acoustics — demonstrating that the mathematical structures underlying music are not music-specific but universal.
  • Constraint and Creativity: The Minimoog's fixed architecture (constraint) enabled widespread musical adoption (creativity); the DX7's difficult programming (constraint) produced a consistent era-defining aesthetic (cultural creativity through constraint).

Bridge to Part III

Part III — "Perception: How the Brain Hears Music" — will ask: given all this physics (vibrating strings, formants, FM sidebands, Karplus-Strong delay lines), how does the brain make sense of it? How do acoustic waves become pitch, timbre, consonance, dissonance, emotion? How does the brain's auditory processing interact with the physics to create the experience of music?

The key preparation from this chapter: the brain is itself a physical system — neurons are electrical oscillators governed by differential equations similar to the ones we've been studying. The physics of sound and the physics of perception are not separate domains; they are connected by the same mathematics.