
What -14 LUFS Actually Means: The Signal Processing Behind Streaming Loudness

Every mastering engineer knows they should target around -14 LUFS for Spotify. Far fewer can explain what that number actually measures, why the loudness wars ended, or why a file that looks safe at -0.3 dBFS can cause clipping after upload. The answer is in the algorithm — and it's more interesting than the rule of thumb.

The Problem LUFS Was Built to Solve

Before loudness normalization, delivery levels were measured in dBFS (decibels relative to full scale), the amplitude of the loudest sample. This created an arms race: labels compressed and limited their masters to be perceptibly louder than competitors while keeping peaks pinned near full scale. A song at -10 dBFS peaks but -14 LUFS integrated would play louder on air than one at -9 dBFS peaks but -18 LUFS.

The fundamental issue: dBFS measures peaks, not loudness. Human perception of loudness integrates over time and weighs different frequencies differently. A 100 Hz tone and a 3 kHz tone at identical dBFS levels don't sound equally loud. Any useful loudness standard had to account for both.

The ITU-R BS.1770 standard (first published 2006, now on revision 5 as of 2023) defines a measurement algorithm that models perceived loudness rather than signal peaks. LUFS, Loudness Units relative to Full Scale, is the output unit; the ITU documents call the identical unit LKFS. EBU R128 took the same algorithm and standardized broadcast targets around it (-23 LUFS for European broadcasting).

The K-Weighting Filter

The core of the BS.1770 algorithm is a two-stage filter applied before any loudness calculation. This filter is called K-weighting, and it shapes the signal to approximate the way the human auditory system weights different frequencies.

Stage 1: High-Shelf Pre-Filter

The first stage is a high-frequency shelving filter with a corner around 1.68 kHz that boosts the upper spectrum by roughly +4 dB. This accounts for the acoustic effect of the listener's head, which the standard models as a rigid sphere: diffraction around the head amplifies high-frequency content arriving at the ear, so the meter boosts those frequencies to match what the ear actually receives.

Stage 2: High-Pass Filter

The second stage is a high-pass filter at approximately 38 Hz with a Q of 0.5. Its job is simpler: human hearing rolls off sharply below about 50 Hz, so deep bass contributes very little to perceived loudness. Attenuating it prevents a kick drum's subsonic content from artificially inflating loudness readings.

The combined K-weighting filter doesn't flatten the spectrum — it tilts it. A signal with heavy low-end content will measure quieter in LUFS than its peak suggests, and a signal rich in the 2–5 kHz presence range will measure louder. This is why a dense hip-hop mix with heavy 808s can sit at -14 LUFS integrated while still sounding subjectively loud, and a bright pop vocal mix might hit the same LUFS target but feel piercing at the same gain.
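At 48 kHz, the spec gives both stages as biquad coefficient tables, so the whole K-weighting chain is two difference equations. A minimal sketch in Python, where the coefficients are BS.1770's tabulated 48 kHz values and the `biquad` helper is my own, unoptimized:

```python
import numpy as np

# BS.1770 K-weighting as two cascaded biquads.
# These coefficients are the spec's tabulated values, valid at fs = 48 kHz only.
SHELF_B = (1.53512485958697, -2.69169618940638, 1.19839281085285)
SHELF_A = (1.0, -1.69065929318241, 0.73248077421585)
HPF_B = (1.0, -2.0, 1.0)
HPF_A = (1.0, -1.99004745483398, 0.99007225036621)

def biquad(b, a, x):
    """Direct Form I: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""
    y = np.zeros(len(x))
    x1 = x2 = y1 = y2 = 0.0
    for n, xn in enumerate(x):
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        y[n] = yn
    return y

def k_weight(x):
    """High-shelf pre-filter, then the low-frequency high-pass."""
    return biquad(HPF_B, HPF_A, biquad(SHELF_B, SHELF_A, x))
```

Around 1 kHz the cascade has a gain of roughly +0.69 dB, which is exactly what the -0.691 dB offset in BS.1770's loudness formula compensates for.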

The Gating Algorithm

Once the K-weighted signal is computed, the next step is integration — calculating the mean square (power, not amplitude) of the filtered signal over time. This is where gating becomes essential.

If you integrated over the entire program without gating, silence, room ambience, and quiet intros would drag the integrated LUFS reading down, making the track measure quieter than it sounds during its main content. To prevent this, BS.1770 defines a two-stage gate.

Stage 1 — Absolute Gate: -70 LUFS

Anything measuring below -70 LUFS absolute is excluded entirely from the integrated calculation. This eliminates silence, fade-outs, and noise floors that are perceptually irrelevant.

Stage 2 — Relative Gate: -10 LU

Once the algorithm has an initial estimate of integrated loudness, it re-calculates, this time excluding any block that measures more than 10 Loudness Units below that estimate. This floating gate excludes quiet dynamic passages — soft intros, sparse verses, sustained silences between movements — that would otherwise pull the integrated value down from where the actual "body" of the program sits.

Both gates operate on overlapping 400 ms blocks with 75% overlap, meaning each new block starts 100 ms after the previous one. That 100 ms stride is what gives loudness meters their responsive, near-real-time character: ten fresh readings per second.
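Put together, the block structure and both gates fit in a dozen lines. A sketch, assuming a mono signal that has already been K-weighted; the function name and the small epsilon guard against log(0) are mine, not the spec's:

```python
import numpy as np

def integrated_lufs(kw, fs):
    """Gated integrated loudness of a mono, already K-weighted signal.

    Blocks are 400 ms long with 75% overlap (100 ms hop), per BS.1770.
    """
    size, hop = int(0.400 * fs), int(0.100 * fs)
    starts = range(0, len(kw) - size + 1, hop)
    z = np.array([np.mean(kw[i:i + size] ** 2) for i in starts])  # mean square per block
    lb = -0.691 + 10 * np.log10(z + 1e-12)                        # per-block loudness

    keep = lb > -70.0                         # stage 1: absolute gate at -70 LUFS
    ref = -0.691 + 10 * np.log10(np.mean(z[keep]))
    keep &= lb > ref - 10.0                   # stage 2: relative gate at -10 LU
    return -0.691 + 10 * np.log10(np.mean(z[keep]))
```

Fed a full-scale 997 Hz sine (pretending it is already K-weighted), this reads about -3.70 LUFS: 10·log10(0.5) for the sine's average power, plus the -0.691 offset. Appending pure silence barely moves the reading, because the absolute gate discards it.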

Three Measurements, Three Jobs

This architecture produces three measurement modes used differently in practice:

Momentary (M): The loudness of a 400 ms window in real time. Highly variable. Use it to catch transient loudness spikes that aren't obvious on a standard peak meter, especially in dialogue, drums, or sharp percussive elements.

Short-term (S): Loudness over a sliding 3-second window, ungated. This is the measurement most useful during the actual mixing and mastering process. It smooths out transients while still reacting to the structure of the music — verses, choruses, builds. When a mastering engineer talks about where a track "sits" loudness-wise, short-term LUFS is usually what they're watching.

Integrated (I): Gated loudness over the entire program. This is the number that matters for streaming delivery. Platforms measure your upload using integrated LUFS to decide how much gain adjustment to apply.
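Momentary and short-term fall out of the same machinery: ungated loudness over a sliding window of the K-weighted signal. A sketch, where the 100 ms update stride is an assumption about a typical meter rather than a requirement of the standard:

```python
import numpy as np

def sliding_loudness(kw, fs, window_s):
    """Ungated loudness over a sliding window of a K-weighted mono signal.

    window_s = 0.4 gives momentary (M); window_s = 3.0 gives short-term (S).
    Readings advance every 100 ms, as on a typical meter.
    """
    size, hop = int(window_s * fs), int(0.100 * fs)
    return np.array([
        -0.691 + 10 * np.log10(np.mean(kw[i:i + size] ** 2) + 1e-12)
        for i in range(0, len(kw) - size + 1, hop)
    ])
```

On real program material the momentary trace jumps around far more than the short-term one; on a steady test tone both settle to the same value.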

What the Platforms Actually Do

The rule "master to -14 LUFS" is a simplification. Different platforms behave differently:

| Platform | Target | Direction | Notes |
|---|---|---|---|
| Spotify | -14 LUFS | Both (up and down) | Users can select -11, -14, or -19 LUFS in settings |
| Apple Music | -16 LUFS | Both (up and down) | Via Sound Check algorithm |
| YouTube Music | -13 LUFS | Reduce only | Never boosts below target |
| Amazon Music | -14 LUFS | Reduce only | No upward normalization |
| Tidal | -14 LUFS | Reduce only | No upward normalization |

The direction matters. Spotify and Apple Music will boost a quiet master to reach their target. YouTube and Amazon will only reduce. If you submit a master at -18 LUFS to YouTube, it plays at -18 LUFS, no boost applied. On Spotify, the same master gets +4 dB of gain to reach -14 LUFS, with a limiter engaged so the added gain cannot push peaks into clipping.

When a track exceeds the target, platforms apply clean gain reduction — a mathematically precise attenuation. They don't re-limit or re-process your audio. This is why there's no incentive to submit a loud, crushed master: Spotify will simply turn it down. Your carefully destroyed transients stay destroyed, but quieter.
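The table's Direction column reduces to one comparison. A sketch; the function and its parameters are illustrative, not any platform's published API:

```python
def normalization_gain_db(integrated_lufs, target_lufs, boosts_quiet_tracks):
    """Gain in dB a platform applies at playback: always attenuates a loud
    master down to target; raises a quiet one only on platforms that boost."""
    gain = target_lufs - integrated_lufs
    if gain > 0 and not boosts_quiet_tracks:
        return 0.0  # reduce-only platforms leave quiet masters untouched
    return gain

# A crushed -9 LUFS master is turned down 5 dB everywhere:
loud = normalization_gain_db(-9.0, -14.0, boosts_quiet_tracks=False)      # -5.0
# A quiet -18 LUFS master is boosted 4 dB on Spotify...
spotify = normalization_gain_db(-18.0, -14.0, boosts_quiet_tracks=True)   # 4.0
# ...but plays as-is on a reduce-only platform:
youtube = normalization_gain_db(-18.0, -14.0, boosts_quiet_tracks=False)  # 0.0
```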

True Peak and the Inter-Sample Problem

Here's the part that catches engineers off-guard even after they understand LUFS.

When audio is stored digitally, samples represent discrete moments in time. When a DAC reconstructs the analog waveform, it interpolates between those samples. If consecutive samples are both near 0 dBFS, the reconstructed waveform between them can exceed 0 dBFS — a peak that never appears in your DAW's meter because it only exists in the reconstructed analog domain (or in the oversampled representation of it).

These are inter-sample peaks (ISPs), and they become a problem during lossy encoding. AAC and MP3 compression can generate additional inter-sample peaks during encoding, even in files that had none. A master with 0 dBFS sample peaks and +0.2 dBTP inter-sample peaks can come out of AAC encoding at +1.0 dBTP or higher, causing clipping in the DAC.

The -1 dBTP true peak limit exists to absorb this. A master limited to -1 dBTP has 1 dB of headroom for the codec to add ISPs before actual clipping occurs. Platforms specify -1 dBTP for delivery precisely because they know they're going to re-encode your file.

To measure true peak correctly, your meter needs to oversample the signal (typically 4x) and look at the interpolated values between samples, not just the sample values. Most modern loudness meters (iZotope Insight, FabFilter Pro-L 2, Nugen VisLM) do this automatically. Many standard peak meters don't.
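Both halves of the problem are easy to demonstrate. The sketch below uses FFT zero-padding as a stand-in for the polyphase FIR interpolator the standard actually specifies, and constructs the classic worst case: a sine at a quarter of the sample rate, sampled 45° away from its crests, so every stored sample sits at -3 dBFS while the underlying waveform touches full scale:

```python
import numpy as np

def true_peak_dbtp(x, oversample=4):
    """Estimate true peak via band-limited upsampling (FFT zero-padding;
    assumes len(x) is even). BS.1770 specifies a polyphase FIR instead."""
    n = len(x)
    spec = np.fft.rfft(x)
    padded = np.zeros(n * oversample // 2 + 1, dtype=complex)
    padded[:len(spec)] = spec
    up = np.fft.irfft(padded, n * oversample) * oversample
    return 20 * np.log10(np.max(np.abs(up)))

fs = 48000
t = np.arange(4800) / fs
x = np.sin(2 * np.pi * (fs / 4) * t + np.pi / 4)  # samples miss every crest
sample_peak = 20 * np.log10(np.max(np.abs(x)))    # about -3.01 dBFS
true_peak = true_peak_dbtp(x)                     # about 0.0 dBTP
```

A sample-peak meter reports -3 dBFS and calls this file safe; the oversampled measurement shows it already at 0 dBTP, with no headroom left for a codec.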

The Practical Picture

The LUFS system largely solved the loudness war at the distribution stage: louder masters don't play louder, they play at the same volume with worse dynamics. But the K-weighting filter means that two masters at identical integrated LUFS can still differ in perceived loudness if their spectral balance differs significantly. A bass-heavy master tends to sound louder than its reading suggests, because K-weighting discounts low-frequency energy your ear still registers; a presence-heavy master measures comparatively hot, because the filter boosts the 2–5 kHz region before integrating.

The gating algorithm means that integrated LUFS reflects the body of your program, not its quietest moments. A track with a 2-minute ambient intro followed by a dense 3-minute arrangement will have an integrated LUFS much closer to that arrangement than to the intro.

And the true peak limit isn't just a safety checkbox — it's a specification that accounts for downstream processing you don't control. Meet it, and your audio survives the platform pipeline intact. Ignore it, and you're relying on luck in the codec.

The number on the meter is a single output. The machinery behind it is doing significantly more work than the number implies.