General histories of the relationship between sound and image in cinema tend to perpetuate an ocular-centricity (emphasizing vision over the other senses) that dates back to the very earliest experiments in “moving pictures”—a term which itself serves to confuse historians. The vast majority of cinema histories tend to relegate the subject of sound in cinema to a subordinate position by studying the transition to the sound period in the late 1920s and early 1930s, only to drop the subject of sound and to emphasize the visual nature of film.
Yet as long as cinema has existed, sound has been a part of it—both in its presence and in its absence. A generation of new film historians has revealed that the interplay between sound and image in cinema is quite complex and a number of presiding assumptions about film sound need to be re-examined. For example, “detonating celluloid” was a popular slang term dating from a 1930 industry guidebook for “talking cinema,” and it encapsulated a crucial misconception in the history of early sound film (“Studio Slanguage,” 1930, p. 25). As an expression, it emphasized the radically transformative effect that sound was perceived to have had on the film industry in the late 1920s.
However, the term obscured the fact that sound films had been produced in small numbers since the advent of cinema, while it supported the popular myth that sound cinema emerged fully grown from the mouth of Al Jolson in The Jazz Singer (1927) when he uttered the now-immortal expression, “You ain’t heard nuthin’ yet!” In fact, the transition to sound in cinema was quite orderly and not nearly as explosive as the term implies, and the history of film sound follows a winding path from the earliest experiments in sound and image synchronization to today’s digital cinema systems. The function of this chapter is to apply a corrective filter to film history and to amplify how film sound has aided the development of modern cinema.

Sound and image relations during the “silent era”: 1895–1926

Perhaps the greatest misnomer in the history of sound and image relations is the term “silent cinema.” During the period commonly referred to as the “silent era” (roughly from 1895 to the end of the 1920s), films were never called “silent,” nor were they even called “cinema” for the first decade of their existence. In fact, from the very earliest experiments in the 1890s—such as Thomas Edison’s Kinetoscope and the Lumiere brothers’ Cinematographe—through the rise of Nickelodeons in 1904–5, films were always part of other mixed-entertainment forms such as vaudeville shows, traveling lectures, magic lantern presentations, song–slide performances, phantasmagorias, and even circuses.
Despite these numerous divergent practices, there is general consensus that the earliest moving pictures were accompanied by some form of acoustic presentation. In the case of the first projected films in the United States, at Koster and Bial’s Music Hall in New York City on April 23, 1896, Thomas Edison’s films were shown with accompaniment by Dr. Leo Sommer’s Blue Hungarian Band. And two months later when the Cinematographe made its American debut at Keith’s Union Square Theater, it was accompanied by lecturer Lew Shaw and the use of live sound effects.
In later presentations of the Cinematographe, this was expanded to include pre-recorded sound effects, such as the sounds of a train engine starting, played back via a phonograph in the auditorium. These models seem diverse from a twenty-first-century perspective because films were interpolated into pre-existing entertainment forms as part of a variety bill. Therefore, much of the early history of cinema can be understood as multiple attempts at establishing a sense of accord between image and sound.
After the turn of the century, as moving pictures began to be used more regularly in vaudeville shows, their musical accompaniment grew from the lone pianist to the inclusion of small orchestras. The most versatile member of these orchestras was the trap drummer, who specialized in “catching” the pratfalls of stage comedians with a well-timed cymbal crash or kettledrum hit. As an extension of his art, the trap drummer also was responsible for providing synchronous sound effects for films.
Because musical accompaniments were often done without the benefit of previewing the films, they would regularly vary from show to show and theater to theater. Therefore, visual cues within the films would often trigger related sound effects (see a cow, rattle a cowbell) or musical passages (generally relying on the recognition of lyrical passages from popular songs) that would either augment or hinder an audience’s experience of a film.
Although films were just one part of an evening’s entertainment, in their first ten years they grew from 1- to 2-minute actualities and trick films to 5- to 10-minute narrative pieces. In early cinema, traveling lecturers like Lyman Howe used moving pictures to augment their slide show presentations, and to add an extra dimension, a team of sound-effects men—often stationed behind the screen—was employed to bring the image to life through the addition of synchronous sounds.
This led to vocal “impersonators” like LeRoy Carleton, who would use his voice to provide sound effects during his lectures, and immersive sensorial experiences like Hale’s Tours. Utilizing a trailer that had been converted to look like a railroad car, Hale’s Tours would project “phantom” train films (shot from the perspective of the front of the engine) on the front wall as live sound effects were played and the trailer was jostled to mimic the experience of movement.
Intriguingly, these formats did not emphasize the now-common narrative function of the film; instead they used both image and sound to convey a sensorial experience. Cinemas, based around the exclusive viewing of moving pictures, did not arrive until the Nickelodeon boom between 1904 and 1908. And with the fundamental rules of editing working to construct narrative trajectories in the films, the lecturer’s role of explaining the content of the films became less necessary.
Thus, with the Nickelodeon’s piano accompanist providing music for the illustrated song slides and sing-a-longs, the films were shown in silence as often as they were shown with some form of acoustic accompaniment. Despite this, Nickelodeons were far from silent. Projectors were noisy mechanical devices that were located at the back of the room. Audiences were encouraged to sing along with the slides and were not discouraged from conversing during the film presentations.
In addition, ballyhoo music was often played on phonographs directed out into the street to draw patrons into the theater. The result is that early cinema was a heterogeneous experience where events within (and outside of) the space of the theater demanded the audience’s attention as much as the films being projected. Even though live musical accompaniment became the main form of sound in early cinema, the idea of synchronous sound and image correlation was a concern from the earliest days in the development of cinema technology.
Even in the first Kinetograph experiments, Edison sought to combine sound and image by using musical recordings on cylinder to accompany film shorts. Although most historians are quick to invoke Edison’s words about the Kinetoscope as “an instrument which should do for the eye what the phonograph does for the ear,” few bothered to include the full quote, which explains “and that by a combination of the two all motion and sound could be recorded and reproduced simultaneously” (quoted in Dickson and Dickson, 2000, p. 14).
The synchronous reproduction of sound and image had been a goal of Edison, as well as most of the early inventors, from the very start of moving images. However, the maintenance of synchronization while using disks or cylinders, which could easily get scratched and skip, with hand-cranked cameras and projectors proved to be very difficult. In addition, recordings had to be amplified acoustically, which meant that they could be heard clearly in a small room but did not provide sufficient volume for larger auditoriums.
Despite numerous attempts at synchronous sound and image devices—like Edison’s Kinetophone (1895), Gaumont’s Chronophone (1902), Messter’s Kosmograph (1903), Norton’s Cameraphone (1908), and the modified Kinetophone (1913)—both the difficulties surrounding synchronization and the lack of amplification stood in the way of success for any one system. This difficulty in providing synchronization between sound and image recording devices led to the development of varying strategies for combining sound with the projected image.
As the presentational methods for early cinema changed—from Nickelodeons to movie palaces—so too did the interplay between the image and accompanying music and sound effects. With the emergence of stable systems of cinema production and distribution, and the development of visual techniques such as editing and intertitles, came new sound and image relationships: from the orchestral accompaniments in large movie palaces and scores specifically composed for feature films, to the continued use of narrators to enhance the narratives with dialogue and sound effects in the tradition of the Japanese benshi and Quebecquois bonimenteur.
The transition to sound: 1926–35

The earliest experiments to add synchronous sound to films had failed outright or met with little success with exhibitors; however, in the second half of the 1920s, an alignment of specific determinants allowed sound film to take root and flourish. Film sound became a practical reality only with the invention of electrical amplification in the 1920s, and the development of new technologies in the radio and telephone industries had a direct impact on cinema. Lee De Forest, who revolutionized early radio with his
Audion vacuum tube, was one of the first individuals to apply the notion of electrical amplification to film sound when he developed the Phonofilm system for recording sound-on-film in 1922. Unlike most preceding film sound systems, the Phonofilm did not rely on a synchronized disk recording; instead, the sound waves were recorded on film as oscillating light and dark wave patterns. The Phonofilm boasted perfect synchronization, even if the film were to break and be repaired, but it initially suffered from poor sound quality.
After a less than successful debut in April 1923, De Forest started filming synchronized film performances by several of Broadway’s brightest stars and he signed up engineers Theodore Case and Earl Sponable to fix the problems with the system’s sound. With the addition of Case’s light-valve technology, the sound quality of the Phonofilm was improved and De Forest started marketing his films directly to theater owners. By 1924 the Phonofilm system was installed in over 50 theaters and De Forest was regularly producing a number of shorts that highlighted Broadway performers and vaudeville routines.
In contrast with the advanced editing patterns and complicated camera movements of “silent” cinema, these sound shorts were static single-takes shot with a fixed camera. Although the shorts were successful in presenting synchronized sound and image, De Forest was unable to interest any of the major film studios in using the technology. By 1926, Case and Sponable had left the company and Phonofilm had been relegated to a novelty. Even though De Forest’s system had failed to provide a model for the successful representation of sound and image, it was not due to the technology.
Concurrent with De Forest’s work in sound-on-film, AT&T’s Bell Laboratories was developing a sound-on-disk system called Vitaphone, and in April 1925, this new system was introduced to the Warner brothers. Despite a long-standing wariness of “talking cinema,” they were overwhelmed by the impact of synchronized voice and music. After expressing his reservations about using the system for the reproduction of voice, Harry Warner noted: “We can use it for musical accompaniment to our pictures! We can film and record vaudeville and musical acts, and make up programs for houses that can’t afford the real thing or can’t get big-time acts” (Green, 1929, p. 50; emphasis in original). Their goal was not to make talking pictures, but to obviate the need for theater orchestras. After the June 1925 merger between Warner Bros. and AT&T, two divergent paths of sound usage were followed: the first concentrated on recording sound and images live for the shorts, while the second confined itself to adding semi-synchronized musical scores to already completed feature films.
Through this two-pronged approach, Warner Bros. was able to test their new equipment in the lower-priced shorts, incorporate technological advances into their feature films, and market the Vitaphone equipment by ensuring smaller theater owners the same program quality and content as higher-priced venues. The new Vitaphone team conducted several tests to perfect the art of live recording in which, unlike the films of De Forest, they understood that cinematic elements were equally as valuable as the sound recording.
They used multiple cameras, electrically synchronized, to capture different angles on the action being recorded. The takes were then edited together into one master film that preserved synchronization with the disk recording while allowing for changes in visual perspective. However, most of the shorts filmed were either musical or theatrical performances, which created an aesthetic that foregrounded sound synchronization often at the expense of visual expressiveness.
Vitaphone made its public debut on August 6, 1926, when a program of Vitaphone shorts preceded Don Juan with its recorded score by the New York Philharmonic Orchestra. Although the presentation was deemed a success, it is interesting to note that, aside from an introductory message, neither the shorts nor the feature film emphasized the synchronization of the spoken word. Instead, over the next year Warner Bros. and Vitaphone continued to release films with recorded scores and dozens of musical shorts, yet dialogue was rarely emphasized.
By the time The Jazz Singer debuted on October 6, 1927, audiences had been conditioned by nearly a dozen sound features that contained semi-synchronous sound effects and occasional dialogue. While there is no contesting the fact that Al Jolson’s improvised dialogue sequences were impressive to audiences, it should be noted that when the film was released only a few theaters had been wired for sound and in the course of the film’s run more people saw it silent than with sound.
Furthermore, after the success of the dialogue sequences in The Jazz Singer it took nearly a year of hybrid “part-talkies” before Lights of New York (1928), the first “100 percent talkie,” was released. The result is that, even though Warner Bros. and AT&T had made a carefully calculated business investment in the transition to sound films, the actual creation of the films introduced a host of technical and stylistic challenges. The presentational aesthetics of the single-take sound shorts and hybrid “part-talkies” contrasted with the highly developed narrative logic and editing strategies used in the late silent period.
For example, the length of the disks dictated that feature-length films had to be shot in 10-minute increments, where all dialogue, music, and sound effects were performed live on the set. This required actors from the stage to be brought in to accommodate the long dialogue sequences. In addition, with cameras in soundproof boxes and microphones fixed in place, the actors were very limited in their movements and nearly all action was restricted to shooting in studio. The ultimate effect was a cinematic “staginess” that was in great contrast to the fluidity of the films of the late silent period.
Even though Warner Bros. was the first studio to introduce sound into cinema, the Vitaphone’s exclusivity was short-lived. After leaving Phonofilm, Case and Sponable shopped their sound-on-film technology to other studios, and Fox Film Corporation decided to buy the rights to the sound system under the name Movietone. At the end of 1926, the studio signed a cross-licensing agreement with Vitaphone which gave Fox the use of amplification technology and AT&T the rights to market sound-on-film.
Like Warner Bros., Fox started releasing its feature films with “canned” musical scores that were synchronized to the films but featured no dialogue. However, instead of providing a bill of synchronized entertainment shorts, in April 1927 Fox started to release synchronous sound newsreels called Fox Movietone News. These not only featured important figures of the day recorded speaking in the studio, but sound trucks were also outfitted to record newsreels on location.
As a form of product differentiation, the Movietone newsreels stood in stark contrast to the staged performances of the Vitaphone shorts, yet each was providing the same function: to use sound as a form of spectacle to attract cinema audiences. Despite their differences in shorts, both Warner Bros. and Fox made an orderly transition to sound feature films by adhering to the prior codes of silent cinema, first dabbling in hybrid “part-talkies” before embarking on all-talking pictures in 1928. The five major studios initially adopted a wait-and-see strategy toward film sound and signed an agreement in February 1927 that they would all act together in any transition. To make matters more complicated, in 1927 the research division of the Radio Corporation of America started shopping its own Photophone sound-on-film system to studios. By May 1928 the major studios decided to use Movietone sound-on-film technology, and in October 1928, RCA merged with the Film Booking Office and the Keith–Orpheum theater chain to create the Radio–Keith–Orpheum studio to use its Photophone technology.
In the midst of the confusion regarding varying sound technologies, the idea of talking pictures rapidly caught on with audiences. Although The Jazz Singer may have introduced the idea of talking pictures to the public, it wasn’t until September 1928 with The Singing Fool that Warner Bros., and the film industry, had their first bona fide sound-film hit. In the months and years that followed, the American studios worked at refining the awkward style of the early talkies to an aesthetic that was in line with the demands of narrative.
Whereas early sound cameras had to be contained in stationary padded booths to limit their noise, innovative filmmakers equipped the booths with wheels so it was possible to have moving camera shots. In addition, the fixed carbon microphones that were attached to the ceiling of sets were replaced with lighter and more sensitive condenser microphones mounted on moveable boom poles. And with the industry’s preference for sound-on-film came the capability of recording and editing the soundtrack separate from the image track, something that was not possible with the Vitaphone disks.
The result is that, after a period of awkward transition from 1926 to 1929, by the 1930s the American cinema entered into a new era of film sound that served the narrative and representational demands of filmmakers and audiences. In the rest of the world the transition to sound was dominated by the American sound systems and the development of competing sound systems like the Tri-Ergon sound-on-film system owned by German company Tobis-Klangfilm. In a July 1930 agreement, both AT&T and Tobis-Klangfilm decided to split up the remainder of the global cinema market between the two systems.
As a result the transition to sound occurred rapidly around the world in the early 1930s, yet many international filmmakers resisted the models of film sound being exported from the American studios. In particular, a number of directors like Alfred Hitchcock in the United Kingdom, Rene Clair in France, and Dziga Vertov in the Soviet Union resisted the redundancy of synchronous sound by experimenting with the use of subjective sound, off-screen sounds, the “doubling” of voices, and manufactured sound effects.
Due to the inability to dub the language tracks in early sound films, several European studios started producing “multilingual” versions. These were made by utilizing the same sets and stories, but casting actors of different languages to play the same roles. Often, an English-language cast would shoot the scenes in the morning, a French cast in the evening, and a German cast overnight. This ensured a steady stream of film releases, but it also meant that they were “fixed” in certain languages without the ability to dub them into others.
The practice of filming multilinguals continued in Europe until the development of efficient dubbing technology in the mid-1930s.

Classical film sound and the rise and fall of multichannel: 1935–70

With the end of Vitaphone disk recordings in 1933 and the adoption of 35-mm monophonic optical sound as a global standard, the period from the mid-1930s to the early 1950s saw sound film find its equilibrium. Even though international standards meant that sound film would be distributable and compatible around the world, the era also saw gradual changes in sound and image relationships.
In the hands of innovative filmmakers (such as Mamoulian, Lubitsch, Hawks, and Welles in the United States; Hitchcock in the United Kingdom; Clair and Vigo in France; Lang and Pabst in Germany; Eisenstein, Pudovkin, and Vertov in the Soviet Union; and Ozu and Mizoguchi in Japan), sound film was able to evolve from its fragmented origins into a stable narrative system. Filmmakers moved away from a logic that demanded a match between sound scale and image scale to a more flexible system of sound recording where dialogue intelligibility was more important than spatial fidelity.
Additional aesthetic changes occurred with the creation of new industrial developments such as Foley (a technique for creating synchronous sound effects), sound-effects libraries, magnetic recording, and dubbing techniques. However, the most significant change occurred in the early 1950s with the introduction of widescreen cinema with multichannel film sound and its effect on cinematic presentations. Widescreen formats such as Cinerama, VistaVision, and Todd-AO created complications for framing, composition, and editing, in addition to narrative.
The inclusion of multichannel sound with each of these systems allowed for new aesthetic possibilities, but it also meant an increase in cost and labor. As a result, the period for multichannel film releases was relatively short during the 1950s, yet it offered a model for future sound and image relations. Although experiments in multichannel sound had been conducted since the advent of sound recording, it wasn’t until the 1950s that the advent of magnetic recording allowed for multichannel sound in cinema.
Not only did magnetic recording allow sounds to be recorded, erased, and rerecorded on the same tape (optical sound could only be recorded once) but it also allowed for instantaneous playback. This revolutionized the recording of sounds during production and their rerecording and editing in postproduction, but the introduction of magnetic playback in theaters did not meet with as much success. In 1953, Twentieth Century Fox introduced their own widescreen format, CinemaScope, which utilized four-track release prints (Left, Center, Right, and a fourth “Surround” channel) with magnetic stripes running along the side of the filmstrip.
CinemaScope sound recording was done by placing three fixed microphones on the set at a distance that would roughly correspond to the spatial limits of the image. This left the camera free to move around while the microphones constructed an acoustic plane across the screen. In order to mix the fourth surround channel, separate effects tracks were combined and added in postproduction. The trend in many early CinemaScope films was to favor long take, single-camera set-ups over multiple cameras and shorter shots.
Films like Henry Koster’s The Robe (1953) and Jean Negulesco’s How to Marry a Millionaire (1953) suffer from a stylistic stiffness imposed by the technical demands of the widescreen framing and the sound-recording apparatus. Because the CinemaScope sound system patterned its technology after a model of “realistic” matching of sound and image scale, it came into conflict with the highly constructed mode of representation established in monophonic narrative cinema.
Thus, with the development of CinemaScope and the use of multichannel sound in narrative feature films, the spectacular demands came into discord with the need for dialogue comprehension. A common complaint about the CinemaScope system concerned sounds and voices “traveling” across the screen. Because of the previously established modes of close-miked dialogue and monophonic theater sound, audiences in the 1950s found it distracting if sounds, especially dialogue, moved across the screen or extended beyond the frame line. However, the multichannel reproduction of a musical performance’s spatial qualities was not considered distracting, but preferable to monophonic reproduction. It is, therefore, not surprising that a third form of multichannel sound recording emerged using “pseudo-stereophonic” sound. Perspecta Sound was a system that recorded all of the individual audio tracks monophonically and mixed them into a stereo field during postproduction. It was developed as a response to the other multichannel sound systems’ inability to reproduce the established function of close-miked monophonic cinema.
When the Perspecta Sound system was engaged, the control signals sent the monophonic sound information to any or all of the speakers behind the screen while also raising or lowering their volume levels. In this way Perspecta Sound could provide “pseudo-stereo” with an expanded dynamic range from a standard monophonic optical track. Moreover, the US$900 installation cost and backward compatibility were a major appeal to exhibitors who wanted stereophonic sound without the price or complexity of the Fox system.
But the major advantage of the Perspecta Sound system was an unstated one. Because the system was designed to re-channel the information on a monophonic soundtrack into a stereophonic presentation, the “effect” of multichannel sound needed to be considered only in the very last phase of soundtrack construction: the mix. This meant that all other aspects of sound recording and rerecording carried on as before: dialogue tracks were close-miked and mixed in mono, rather than “fixed” in stereo during the recording process.
Consequently, the Perspecta Sound system delivered the multichannel “effect” while adhering to the narrative demands of a pre-existing monophonic code of representation. It is for this reason, more than any economic advantage, that Perspecta Sound rapidly became the exhibitor’s stereophonic format of choice in the last half of the 1950s. Thus, the contradictory modes of multichannel sound achieved equilibrium with the general acceptance of this constructed method of soundtrack creation.
By 1958 all production sound was recorded monophonically, and the industry moved back to single-channel film sound, which had taken on a code of realism on the basis of its previously privileged narrative status. Further codes of realism were explored with the development of lightweight 16-mm cameras and portable quarter-inch magnetic tape recorders. Starting with the growth of Cinéma Vérité and Direct Cinema—two models of documentary that moved away from the standard voice-over to let the subjects and events narrate the stories themselves—there was a move toward the use of direct sound in cinema. In the documentaries, sound recording was as important as the image for fully chronicling events and for creating a sense of verisimilitude. The 1962 introduction of the Nagra III recorder, which featured a crystal-driven synchronization unit, meant that sound and image could be recorded by separate devices while perfect synchronization was achieved in postproduction.
This gave documentarians a freedom to follow their subjects and to film in ways that had never been possible when using 35-mm cameras and bulky sound recorders. The result was an enhanced sense of realism in documentaries that rapidly made its way into feature film techniques. Coincident experiments with film sound were occurring in the narrative feature films of “new wave” directors from France, Cuba, Czechoslovakia, and Japan, where the filmmakers sought to manipulate and reconfigure standard sound practices to provide new models of film style.
During the late 1960s in the United States, sound practitioners also started to explore new methods of constructing film soundtracks in an attempt to rethink regimes of seeing and hearing in narrative cinema. Formal alterations appeared in multi-microphone mixing, the use of radio microphone transmitters, location sound use, and the dismantling of hierarchically structured systems of film sound editing and mixing.
Filmmakers like Robert Altman, Francis Ford Coppola, Arthur Penn, and George Lucas resisted models that dictated certain accepted structural aspects of how to correctly make a film and proceeded to challenge audiences with films that required spectator/auditors to engage with the cinematic action on new, visceral levels. These changes made the gap between the uniform address of classical Hollywood and the emergent cinematic forms of the period more palpable.
However, changes that took hold in the latter half of the 1970s—specifically the introduction of Dolby Stereo—introduced a new Classicism into cinema and derailed many of the formal and aesthetic experiments initiated in the late 1960s.

Modern sound practices and the return of multichannel: 1970–92

In the 1960s, road show pictures, like The Sound of Music (1965) or My Fair Lady (1964), regularly used multichannel sound as part of their presentation; yet the majority of theaters still relied on optical sound reproduction that sounded scarcely better than it did in the 1930s.
After leading the sound industries in terms of new technologies and new presentational styles in the 1950s, by the 1970s commercial cinema had fallen woefully behind the times. Although some experiments in “spectacular” sound like Sensurround—a low-frequency sound system that would literally shake the viewers in their seats—met with modest success on films like Earthquake (1974), it was not until the introduction of Dolby Stereo that the basic nature of film sound was changed.
Delivering high-quality multichannel sound in the same space as the standard 35-mm monophonic optical soundtrack, the Dolby Stereo system consisted of a stereophonic optical recorder, dual noise-reduction units, and a processor that derived a third “center” channel of sound from the left and right optical tracks. The result was a realistic field of sound that matched the wide-screen visual field of the film where sounds could be placed precisely in relation to their location on the screen. Moreover, the use of noise reduction circuitry resulted in a greater frequency range as well as improved dynamic range over regular monophonic optical sound.
Dolby Stereo made its commercial debut in 1975 with the releases of Ken Russell’s Tommy and Lisztomania and the three channels provided adequate coverage of the screen space. However, at the request of director Frank Pierson and producer Jon Peters, the next Dolby Stereo film, A Star Is Born (1976), also included a fourth “surround” channel. At the heart of the system was a “matrix” converter, a remnant of the quadraphonic music boom, which made it possible to mix left, center, right, and surround channel information onto two optical tracks.
This allowed for four channels of sound to be encoded into the space of the optical soundtrack, obviating the need for the more expensive magnetic “road show” formats and allowing for backward compatibility of the prints. Its effect was a constant field of sound that surrounded the audience from all sides, similar to the experience of attending a live musical performance. This was crucial to a film like A Star Is Born, which wanted to immerse the audience in concert ambience for the majority of the film.
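The matrix converter's fold-down can be illustrated with a short sketch. The real Dolby Stereo encoder band-limits the surround feed and applies phase shifts before summing it into the two optical tracks; the simplified version below omits those steps and uses only the widely cited −3 dB (0.707) mixing coefficients, so it is a schematic of the principle rather than the actual circuit.

```python
import math

K = 1 / math.sqrt(2)  # -3 dB attenuation applied to center and surround

def matrix_encode(left, center, right, surround):
    """Fold four channels (L, C, R, S) into a two-track pair (Lt, Rt).

    Simplified sketch: the real encoder also phase-shifts and
    band-limits the surround signal before summing it in.
    """
    lt = left + K * center + K * surround
    rt = right + K * center - K * surround
    return lt, rt

def matrix_decode(lt, rt):
    """Derive approximate L, C, R, S back from the two-track pair.

    The sum of the two tracks yields the derived center channel;
    their difference yields the derived surround channel.
    """
    return lt, K * (lt + rt), rt, K * (lt - rt)
```

In this simplified model, a pure center signal encodes and decodes cleanly, but a hard-panned front signal decodes with leakage into both the center and surround outputs—one way to picture why the matrix could be “confused” by certain material.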
However, when dialogue and sound effects were recorded in Dolby Stereo, the system did not function as well. Certain frequencies and moving sounds would confuse the matrix, which would then send these erroneous sounds into the surround speakers. This required that all dialogue be mixed into the central channel to ensure comprehension. Sound effects could be positioned anywhere across the space of the screen, but moving effects needed to be monitored carefully so that their acoustic motion matched the on-screen motion. Music, however, provided few problems because it was rarely anchored to an on-screen image.
While music and effects were occasionally deployed to the surround speakers, dialogue was strictly avoided. Moving away from the earlier emphasis on music-based films, the first major successes for Dolby Stereo were in a number of late-1970s science fiction films: Star Wars (1977), Close Encounters of the Third Kind (1977), Invasion of the Body Snatchers (1978), and Altered States (1980). In the hyperbolic worlds of science fiction, manufactured sound effects and acoustic ephemera could be assimilated into the genre without disrupting the narrative.
Specifically, the use of the surround channel to introduce narrative elements and the ability to envelop an audience in ambient sound proved very popular with filmmakers and audiences alike. But just as the genre assimilated the use of Dolby Stereo, so too did it rapidly impress its own representational codes onto the technology. After the inspired use of the surround channel to convey the horror of the pods in Invasion of the Body Snatchers or the alien drag races in Close Encounters, subsequent uses were regularly entangled with either the portent of the sinister or the uncanny.
Conversely, the surround channel was rarely used to give a sense of acoustic realism. Despite these advances in 35-mm optical sound, Dolby Laboratories recognized that the six-channel 70-mm magnetic format still provided the best sound quality available for theaters. Starting with Star Wars in 1977, they improved the sound in the 70-mm format by rechanneling two of the tracks to provide enhanced low-frequency “baby-boom” tracks. (This was the first example of low-frequency enhancement in cinema, a technique still in use today with the addition of subwoofers to most theater sound systems.) Unlike the 35-mm system, 70-mm Dolby Stereo utilized six discrete channels and followed the same mixing patterns as the road-show films of the 1950s and 1960s. Although marketed simultaneously as Dolby Stereo alongside the 35-mm system, it was actually the 70-mm system that was responsible for many of the earliest successes of Dolby Stereo. The Dolby Stereo systems were praised in part due to the popularity of Star Wars and Close Encounters, and the “baby-boom” sound created a young audience base for the films; however, this aesthetic had far more to do with emulating the volume and dynamic range of home stereo systems and rock concerts than it did with accurate sound representation and verisimilitude. Like the magnetic sound systems of the 1950s, the primary attraction of Dolby Stereo was acoustic “spectacle.”

A final change in Dolby Stereo occurred in 1979, when director Francis Ford Coppola decided that he would use 70-mm Dolby Stereo as the primary release format for Apocalypse Now (1979). Coppola wanted a sound mix that would emulate the quadraphonic musical recordings of the early 1970s, in both the multichannel placement of the speakers and their psychedelic sounds and style.
According to sound editor Walter Murch, this required a greater measure of control over the soundscape, as well as the ability to position sounds anywhere within a 360-degree area. To achieve this, Murch placed all the low-frequency enhancement on one track, thereby freeing up another for use as a second surround channel. This made it possible to position a sound anywhere in a 360-degree field around the auditorium. The result was a truly immersive model that could be expanded or contracted according to the needs of the narrative.
Its effect was completely original, and this model of five-channel discrete sound with low-frequency enhancement became the template for 5.1 sound in the digital era. However, due to its lower cost and “spectacular” nature, the 35-mm Dolby Stereo system was adopted as an industry standard for multichannel optical presentations by the 1980s and remained the dominant format until the introduction of DTS and Dolby Digital sound in 1992 and SDDS (Sony Dynamic Digital Sound) in 1993.
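The "5.1" template that descended from this mix can be spelled out explicitly. The channel labels below follow common industry shorthand and are an illustrative assumption, not a layout specified in the text:

```python
# The 5.1 convention: five discrete full-range channels plus one
# band-limited low-frequency effects (LFE) channel -- the ".1".
CHANNELS_5_1 = {
    "L": "front left",
    "C": "front center (where dialogue is typically anchored)",
    "R": "front right",
    "Ls": "left surround",
    "Rs": "right surround",
    "LFE": "low-frequency effects (subwoofer)",
}

# The "5" counts only the full-range channels.
full_range = [name for name in CHANNELS_5_1 if name != "LFE"]
assert len(full_range) == 5
```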
Digital technologies and contemporary sound design: 1992–present

With the development of new film sound technologies and their acceptance around the globe, the 1980s and 1990s represented a period of both standardization and a reconsideration of the significance of sound and image relationships. Specifically, the transitions in the film industry brought on by the advent of video and eventually digitization meant that films could circulate further than ever before and have a greater influence on emergent filmmakers.
The result is that the aesthetic experiments inaugurated in the 1960s and 1970s finally came to fruition. Even though the use of Dolby Stereo as a standard for optical sound in the 1980s meant that filmmakers had to adapt to a new technology, by the end of the decade directors such as David Lynch, Jean-Luc Godard, Jonathan Demme, and Steven Spielberg were each using the technology to better serve their films. In conjunction with the standardization of multichannel sound came a shift in the conception of sound, one that put sound on equal footing with the image. Drawing on his experiences making Apocalypse Now, Walter Murch described his role of placing sounds throughout the three dimensions of the theater as analogous to what the production designer does when dressing a set. Hence, he called this art of integrating the sound of a film with its dramatic demands “sound design.” Conceptually, the sound designer’s role was to work with the director to see how the sound of the film could best help to tell the story.
In a way this was a radical concept, since sound had traditionally been broken into two entirely separate domains—production sound and postproduction sound—and there was little, if any, communication between the two. The sound designer was a bridge between these two domains and would work with the director in both the shooting of the film and afterward as it was being edited. With the development of low-cost digital editing and mixing equipment, the full potential of sound design is now coming to fruition and a generation of filmmakers now thinks of cinema in terms of its visual as well as its acoustic capabilities.
In addition, it has meant that creative sound work has been decentered from Hollywood by global directors such as Aleksandr Sokurov (Russia), Carlos Reygadas (Mexico), Isabel Coixet (Spain), Takeshi Kitano (Japan), Lynne Ramsay (Scotland), Tsai Ming-Liang (Taiwan), and Lucrecia Martel (Argentina), a trend which points toward the use of film sound in the varied expressions of cultural identities. On the home front, the development of digital theaters and DVD technology has allowed the living room to mimic the theater.
While this is a vast improvement over prior forms of home viewing, especially the limitations of videotape, it means that paradigms for theatrical sound are dictating the aesthetics of personal listening practices. In a way, DVD sound has become a kind of “options package” for the home film viewer. In addition to the standard two-channel Dolby Stereo and 5.1 mixes available on most DVDs, home theater sound systems also offer a variety of adjustable listening environments. Indeed, most home theater listeners are now able to replicate the low-frequency enhancement and moving stereophonic effects of early Dolby Stereo in their own living room.
And with the application of multichannel sound to television broadcasts, musical recordings, and video games, the home listening environment has now become the prime platform for sound and image interplay. Finally, not only are digital technologies transforming sound and image relations in contemporary cinema and home viewing but media convergence is also changing the very nature of cinema itself, making it necessary to consider the future of moving image technologies and the ongoing evolution of sound and image.