Perfecting Sound Forever: An Aural History of Recorded Music by Greg Milner
Reviewed by Brian Hayes
American Scientist
"The story goes that, late in his life, Guglielmo Marconi had an epiphany. The godfather of radio technology decided that no sound ever dies. It just decays beyond the point that we can detect it with our ears. Any sound was forever recoverable, he believed, with the right device. His dream was to build one powerful enough to pick up Christ's Sermon on the Mount."
Thus begins Perfecting Sound Forever, Greg Milner's cultural and technological history of the sound-recording industry. As far as I know, the original-cast album of the Sermon on the Mount has not yet been released on CD, but plenty of acoustic waves emitted in our own era have been captured and preserved, to become the golden oldies of future generations. Neil Young said it: Rock and roll will never die.
For those of us living in an age of ubiquitous recorded audio, it can be hard to appreciate that sound was once the most evanescent of sensory experiences. Faces could live on in portraiture (even before photography), and words could be written down, but until Edison dreamed up his phonograph, the human voice never survived except in memory and imagination.
Edison thought he had invented a dictation machine; his business model was to sell recording equipment and blank media on which people would make spoken memos to themselves or perhaps to posterity. The recording of music was an afterthought; almost 25 years passed between the first version of the phonograph and the release of the first commercial music recordings. After that, though, it wasn't long before the "phonograph" became the "record player." This was not to be an instrument with which we would record our own voices; instead, a few star performers -- from Enrico Caruso to Hannah Montana -- would sell millions of copies of recordings, which the rest of us would listen to over and over. The process of creating those sound recordings became an art, a science and an engineering profession.
Edison's early phonographs recorded on wax-coated cylinders; the rival gramophone machines of the Victor Company played shellac-coated discs. The competition between these two recording formats was the first of many contests for market share that occupy much of Milner's history. Over the years, consumers of recorded music have been confronted with a long series of choices: 78s versus 45s versus 33s, mono versus stereo, tubes versus transistors, tapes versus discs, cassettes versus eight-tracks, CDs versus vinyl, analog versus digital, and now MP3s versus WAVs and a dozen other digital file formats. Behind the scenes, equally contentious issues have divided the community of producers and sound engineers. Should the studio be a performance hall that contributes ambience to the sound, or an anechoic chamber? Do microphones belong out in the auditorium where a listener would sit or close to the voices and instruments? Should a performance be recorded all in one take or assembled from bits and pieces?
My own exposure to recorded music began around the time that the "record player" turned into the "hi-fi." That term "high fidelity" made the aims of the enterprise seem simple and obvious: A recording should capture the sound of the original performance and reproduce it faithfully in the listening room. When you closed your eyes, the cabinet full of glowing and blinking equipment was supposed to disappear, to be replaced by Leonard Bernstein and the New York Philharmonic, or by Buddy Holly and the Crickets. If this illusion was hard to achieve, that meant you needed to work on your turntable's rumble, wow and flutter, or suppress your amplifier's intermodulation distortion, or get yourself some better woofers and tweeters.
Milner traces the idea of the hi-fi illusion back to an invitation-only performance in Montclair, New Jersey, in 1915. Three musicians, including contralto Christine Miller, shared the stage with a new Edison Diamond Disc Phonograph. At one point Miller sang a duet with her own recording of an aria from Mendelssohn's Elijah.
The record began, and Miller let it play for a while. She began singing along with it, and then stopped. There were audible gasps from the audience. It was uncanny how closely Miller's recorded voice mirrored the sounds coming from her mouth onstage.
The stunt was so effective that such "tone tests" became a popular road show, with Miller and others playing hundreds of towns across the country over the next 10 years. And the notion has been revived many times since then, as in the long-running advertising slogan, "Is it live, or is it Memorex?"
Can we really believe that a phonograph in 1915 reproduced sound so accurately that it fooled a theater audience? Milner points out that if the tone tests were not exactly fraudulent, they were very carefully staged. The record always played continuously; it was the singer who stopped and started. In effect, what was being tested was not the ability of the phonograph to mimic the live human voice but the ability of the singer to imitate the tonal characteristics of the recording device, "such as the 'pinched' quality it lent to voices."
Today, ironically, musicians are again struggling to imitate their own recordings, often with less success. In modern studio practice, a piece of music is sliced into separate tracks for each instrument and diced into multiple takes for each phrase or even each note. The sounds are digitally processed and enhanced. Software can rescue a vocalist who wanders off key or a drummer who can't keep the beat. The final product is a seemingly flawless performance, which the musicians may be hard pressed to duplicate on stage without all the technological aids. Hence the recent flurry of controversies over lip-synching -- or, in the case of Yo-Yo Ma at the Obama inauguration, bow-synching.
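How does software rescue a singer who wanders off key? The essential move is simple enough to sketch. The toy Python program below is my own illustration, not a description of Auto-Tune or any commercial product: estimate the fundamental frequency the singer actually produced, then resample the waveform so that frequency lands on the nearest note of the equal-tempered scale. All the function names and parameters are invented for the example, and the naive resampling here also changes the note's duration; real pitch correctors use phase-vocoder or PSOLA methods to fix the pitch while keeping the timing intact.

    import numpy as np

    def detect_pitch(signal, sr, frame=4096):
        # Estimate the fundamental frequency (Hz) of one frame
        # by finding the strongest autocorrelation lag.
        sig = signal[:frame]
        sig = sig - sig.mean()
        n = len(sig)
        corr = np.correlate(sig, sig, mode="full")[n - 1:]
        lo, hi = int(sr / 1000), int(sr / 50)   # search 50-1000 Hz
        lag = lo + int(np.argmax(corr[lo:hi]))
        return sr / lag

    def snap_to_scale(freq, a4=440.0):
        # Nearest equal-tempered pitch: a4 * 2**(n/12) for integer n.
        n = round(12 * np.log2(freq / a4))
        return a4 * 2 ** (n / 12)

    def correct_pitch(signal, sr):
        # Resample so the fundamental lands exactly on the scale.
        # (Naive: this also shortens or stretches the note.)
        f0 = detect_pitch(signal, sr)
        ratio = snap_to_scale(f0) / f0
        n_out = int(len(signal) / ratio)
        return np.interp(np.arange(n_out) * ratio,
                         np.arange(len(signal)), signal)

    sr = 44100
    t = np.arange(sr) / sr
    flat_singer = np.sin(2 * np.pi * 430.0 * t)   # 10 Hz flat of A440
    fixed = correct_pitch(flat_singer, sr)
    print(detect_pitch(flat_singer, sr))   # roughly 430 Hz
    print(detect_pitch(fixed, sr))         # roughly 440 Hz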
The 1950s quest for perfect audio fidelity -- for the illusion of the concert hall in the living room -- was doubtless naive, but there are worse alternatives. Milner discusses several of them at length. He gives the overall impression that records have been getting worse and worse even though the tools for making them have become steadily more powerful and more widely available.
The most egregious example is a pursuit of loudness at any cost that overtook some genres of pop music in the 1990s. Ultimately, of course, loudness is still under the control of the listener, who can turn the volume up or down at will. But record producers found that they could make some songs seem louder than others, even at the same playback setting, by compressing the dynamic range. In essence, if you can't boost the peaks of the signal, you can boost everything else until it's as loud as the peaks. And then some bands went further still, boosting the peaks even though there was no room left for more signal. The result is "clipping" -- sine waves with their heads cut off -- which creates audible distortion. An engineer remarks: "The music we listen to today is nothing more than distortion with a beat."
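The arithmetic of the loudness war is easy to demonstrate. The short Python sketch below is my own toy example, not anything drawn from Milner's book: it applies downward compression and then hard clipping to a test tone with loud and quiet passages, and reports peak and RMS levels. The peak never exceeds full scale, but the average level -- which is closer to what the ear reads as loudness -- goes up, and the clipped version gets there by lopping the tops off the waves.

    import numpy as np

    def compress(signal, threshold=0.3, ratio=4.0):
        # Downward compression: squash everything above the threshold,
        # then apply "make-up gain" so the new peak sits at full scale.
        mag = np.abs(signal)
        out = np.where(mag > threshold,
                       np.sign(signal) * (threshold + (mag - threshold) / ratio),
                       signal)
        return out / np.max(np.abs(out))

    def hard_clip(signal, gain=2.5):
        # Boost past full scale and truncate the overshoot: flat-topped
        # "sine waves with their heads cut off."
        return np.clip(signal * gain, -1.0, 1.0)

    def rms(x):
        return np.sqrt(np.mean(x ** 2))

    sr = 44100
    t = np.arange(sr) / sr
    # One second of tone, loud for the first half, quiet for the second,
    # standing in for a track with real dynamics.
    x = np.sin(2 * np.pi * 220.0 * t) * np.where(t < 0.5, 1.0, 0.25)

    for name, y in [("original", x), ("compressed", compress(x)),
                    ("clipped", hard_clip(x))]:
        print(f"{name:>10}  peak = {np.max(np.abs(y)):.2f}  rms = {rms(y):.3f}")
    # All three peak at 1.0, but the processed versions have a higher RMS:
    # they sound "louder" at the same volume setting.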
Milner is inclined to blame the march of technology for many of the excesses and deficiencies of current recorded music. Almost every step along the way, he suggests, took us in the wrong direction. The tape recorder allowed sound to be edited and spliced but sacrificed the immediacy of live performances. Multitrack recorders allowed each element of the music to be adjusted independently but "ended the idea that the sound of recordings bore any relation to a real-world event." And then came all things digital, as CDs replaced vinyl LPs and as computer programs called digital audio workstations replaced both the tape recorders and the giant mixing boards that were once the heart of a recording studio. Milner notes, "Many pros . . . feel that something ineffable has been lost to a generation that both makes and consumes music that has never been outside the digital domain."
"Something ineffable": These debates always seem to come down to factors that can't be measured or clearly defined. Milner writes about CDs:
As for their sound, the problem for me isn't so much the harshness, a common complaint. It's more the sensation of distance I feel between me and the music. There's a disagreeable, frictionless quality to the sound that may be the downside of substituting a phantom laser for the diamond stylus.
Milner also gives a fair amount of space to the loony views of John Diamond, a psychiatrist who maintains that something in digital audio induces "a state of hatred" in the listener and "is increasingly killing our society." It's all those digital signals we hear that cause events such as the shootings at Columbine High School, he says.
I don't mean to suggest that the resistance to digital audio is nothing but nostalgia, mysticism and nuttery. Milner cites some plausible speculations that the noise and distortion of some analog recordings may create effects that listeners find pleasing; one musician remarked that the soft background hiss of vinyl "has the tendency to feather the edges of things." In other words, vinyl records are the heirloom tomatoes and organic apples of the audio world, whose very blemishes bear witness to their wholesomeness. CDs are the pesticide-laden produce of industrial-scale farming; because they are cosmetically perfect, everyone knows they must be lacking in flavor and nutrients.
Milner's dour and sometimes dire view of the state of the recording industry is backed up by considerable technical expertise and many hours of attentive listening. His assessment is to be taken seriously. Nevertheless, I want to end on the upbeat. It seems to me there are at least two important trends of recent years that give cause for encouragement. First, as Milner concedes, the digital audio workstation has democratized the production of recorded music. We have moved a few steps back toward Edison's vision of a recording device that would allow anyone to be a performer as well as a listener. Second, we are in the middle of a great renaissance in live performance in many genres, from baroque to bluegrass. To hear someone actually strumming on a banjo or sawing on a cello is the ultimate corrective to whatever distortions might lie in the grooves of the recording.
Brian Hayes is Senior Writer for American Scientist. He is the author most recently of Group Theory in the Bedroom, and Other Mathematical Diversions (Hill and Wang, 2008).