The Mystery of Gamma Correction
The Hidden Curve
When you read an RGB value of (128, 128, 128) from an image file, you might assume it represents exactly half the brightness of (255, 255, 255). It doesn't. It represents roughly 21.5% of the physical light intensity — less than a quarter. This discrepancy is gamma encoding, and ignoring it is one of the most common mistakes in color computation.
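You can verify this number directly from the sRGB transfer function. A minimal sketch (the function name `srgb_to_linear` is illustrative, not from any particular library):

```python
def srgb_to_linear(v8):
    """Decode an 8-bit sRGB channel value to linear light intensity (0.0-1.0)."""
    c = v8 / 255.0
    # Piecewise sRGB curve: a small linear toe near black, then a power segment
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

print(srgb_to_linear(128))  # ~0.216, i.e. roughly 21.5% of full intensity
```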
Why Gamma Exists
Gamma encoding has historical roots in CRT monitors, whose phosphors had a non-linear response to voltage — roughly a power curve of γ ≈ 2.2. Image files are encoded with the inverse curve (γ ≈ 1/2.2 ≈ 0.4545) so that the CRT's non-linearity cancels out and the image appears correct.
But there's a deeper reason gamma persists in the LCD era: it matches human vision. We're far more sensitive to differences in dark tones than bright tones. Gamma encoding allocates more of the 0–255 range to darks, where we can see the finest distinctions, and compresses the brights, where we're less discerning. It's a form of perceptual compression.
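The "more codes for the darks" claim can be checked by counting. Using the standard piecewise sRGB decode (the helper name here is illustrative), we can ask how many of the 256 code values fall in the darker half of the linear intensity range:

```python
def srgb_to_linear(v8):
    """Decode an 8-bit sRGB channel value to linear light intensity (0.0-1.0)."""
    c = v8 / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

# How many of the 256 codes map below 50% linear intensity?
dark_codes = sum(1 for v in range(256) if srgb_to_linear(v) < 0.5)
print(dark_codes)  # 188 -- about 73% of the codes cover the darker half
```

Nearly three quarters of the 8-bit range is spent on the darker half of physical intensity, which is exactly the perceptual allocation described above.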
The Math Problem
When you calculate the average of two colors — say, blending (255, 0, 0) (red) and (0, 0, 0) (black) — doing it directly in gamma-encoded sRGB gives you (128, 0, 0). But physically, the correct midpoint requires linearizing first, averaging in linear space, then re-encoding:
Linear red = 1.0, linear black = 0.0; the average is 0.5, which re-encodes to sRGB ≈ (188, 0, 0). The difference is dramatic: the gamma-naive blend of (128, 0, 0) is far darker than it should be.
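The full round trip above can be sketched in a few lines. This is a minimal illustration, not a production color library; the function names are my own, and the constants are the standard piecewise sRGB transfer function:

```python
def srgb_to_linear(v8):
    """Decode an 8-bit sRGB channel value to linear light intensity (0.0-1.0)."""
    c = v8 / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(lin):
    """Encode a linear intensity (0.0-1.0) back to an 8-bit sRGB value."""
    c = 12.92 * lin if lin <= 0.0031308 else 1.055 * lin ** (1 / 2.4) - 0.055
    return round(c * 255.0)

def blend(a, b):
    """Average two sRGB colors channel-by-channel in linear-light space."""
    return tuple(linear_to_srgb((srgb_to_linear(x) + srgb_to_linear(y)) / 2)
                 for x, y in zip(a, b))

print(blend((255, 0, 0), (0, 0, 0)))  # (188, 0, 0), not the naive (128, 0, 0)
```

The same linearize-first rule applies to any intensity arithmetic on sRGB values: resizing, alpha compositing, and gradient generation all produce visibly wrong (too dark) results when done directly on the encoded bytes.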