
Making Meaning With Sound

Composer Aaron Copland insisted that film music be subordinate to the narrative. He wrote, “no matter how good, distinguished, or successful, the music must be secondary in importance to the story being told on the screen” (Buhler 151). Copland likely intended to remind composers that their music is almost never the most important thing happening in a film, but in doing so he also inadvertently diminished the importance of music in film. While he was writing specifically about music, this attitude reflects how sound is generally regarded on a larger scale (Gorbman). This line of thinking leaves sound’s narrative potential underutilized. Rather than being treated as secondary to the narrative, sound, used intentionally, can serve as an integral part of the narrative itself, powerfully conveying emotion and ideas and even creating meaning. Sound thus becomes a fundamental part of the story. The following section explores both theoretical and practical approaches to making meaning through the soundtrack.

A Theoretical Approach

While the power of sound in film may be underappreciated in academia, there is certainly no absence of literature on the subject. Notable theorists include Michel Chion, Mary Ann Doane, Rick Altman, Béla Balázs, and Christian Metz. A number of highly successful sound practitioners have also delved into the world of sound theory, including Walter Murch, Randy Thom, and Tomlinson Holman.

 

There is nearly a century’s worth of literature on sound theory and the derivation of meaning from sound. While these theories are diverse and plentiful, they are, like much academic writing, often inaccessible to the average student. They can become so complex and wordy that they are no longer understandable or practical for established filmmakers, let alone student filmmakers.

 

For example, William Johnson devised a system for examining the relationship between image and sound during every moment of a film. Johnson argues that “the basis of film is a continuous interaction between sound and image” (Johnson). To analyze these interactions, he developed a system of what he calls “relations,” creating categories for every sound-image relation regardless of its significance to the film. While Johnson’s approach certainly has its merits and use cases, it simply is not a feasible avenue for a filmmaker looking to use sound more intentionally. On this same point, typical sound theories (and most film theories in general) tend to focus on analyzing finished films rather than providing filmmakers with practical tools and advice.

 

For both theoretical and practical purposes, Leo Murray provides us with the most universal framework for film sound. In his book Sound Design Theory and Practice, Murray examines sound design through the lens of semiotics. Applying semiotics to film was first pioneered by French film theorist Christian Metz and has been adopted by a number of theorists since. Murray’s application is particularly useful to filmmakers because of its modernity, its focus on connecting theory with practice, and its use of the Peircean model rather than the Saussurean model traditionally used to analyze film.

 

Semiotics is the study of signs and how we make meaning from them. Charles S. Peirce’s model is a three-way structure describing the relationship between a sign, an object, and an interpretant (Murray 65). The sign (or signifier) is “an object which stands for another,” an object is “anything that can be thought,” and the interpretant is the conclusion drawn from the relationship (Peirce). For example, consider the sound of thunder. The thunder is the sign; it stands for a storm, the object; and the interpretant is the conclusion that a storm is approaching.

 

Signs can further be broken down into what Peirce calls classes. There are three: icon, index, and symbol. For the sake of simplicity, we will speak only in terms of sound. A sound sign is iconic when it has recognizable auditory qualities, indexical when it points to an object of origin, and symbolic when it carries a meaning understood by the listener. Almost all sounds fall into all three classes at once. Consider the sound of a clock tower ringing. It is iconic in that it can be acoustically recognized as a clock tower bell. It is indexical in that it indicates the presence of a clock tower in the area. And it is symbolic because the listener understands that the ringing marks the top of the hour.

 

Additionally, symbols are largely context dependent. Murray gives an excellent example using the sound of a ticking clock. Through life experience, we have learned that a clock’s tick represents a second going by and is therefore “symbolically linked with the idea of time” (Murray 76). That symbolic link can expand depending on the surrounding context. If the sound accompanies a man sitting in a dark room staring at the ceiling, the ticking may represent time passing slowly; if it accompanies an unattended package at a train station, it could suggest a possible bomb (Murray 77). This is a perfect example of how a director or sound designer can apply semiotics to directly influence the meaning of a film sequence.

 

While this model may initially seem confusing or impractical, its primary purpose is to help guide meaning making through the use of sound. It fosters intentionality by encouraging the director or sound designer to focus on the messages their soundtrack choices convey. The model is also beneficial because, as Murray points out, “its strength lies in its flexibility and universality” (Murray 76). It can be applied as broadly or as specifically as the director or analyst desires, and one major advantage over other theories is that it is equally useful in pre-production, production, and post-production. Moreover, this theory does not seek to replace the plethora of theories previously devised; semiotics is a broader, more generalized framework into which any other theory of meaning making can fit.

 

For example, in his book Sound Technology and the American Cinema, James Lastra explores sound theories from the early days of sound film, including the frequent debate between “phonographic” and “telephonic” models of sound design. A phonographic approach attempts to create a “perfectly faithful reproduction of a spatiotemporally specific… performance” (Lastra 139). In other words, it creates a soundscape that is acoustically accurate to the position of the camera: if a character turns their head away from the camera while speaking, their voice gets quieter. A telephonic model “assumes that sound possesses an intrinsic hierarchy that renders some aspects essential and others not” (Lastra 139). Using the same example, the dialogue would not get quieter, since this model emphasizes fidelity and intelligibility. Lastra goes on at length about the benefits and drawbacks of each model. His ideas are certainly interesting and useful, but the debate as a whole can be folded into the larger umbrella of semiotics: the choice of a telephonic or phonographic model is itself a sign that conveys meaning, and further signs then emerge through the use of the chosen model. This demonstrates the universality and flexibility of semiotics mentioned above.

 

In applying semiotics, it is likely most useful to work in reverse: figure out the meaning or interpretation that you desire your audience to understand, and make a sonic decision that best achieves that goal. As explored in the section “In Practice”, this is applicable to all stages of production and all branches of the soundtrack. 

 

Of course, the meaning derived from sound largely depends on the audience. A 1960s love song featured in a film will likely carry a vastly different meaning for Baby Boomers than for Generation Z. Beyond groups, the interpretation of every sign will vary from individual to individual based on life experience. When using semiotics, a director must attempt to know their target audience in order to best anticipate how their sonic decisions will be interpreted.

 

Overall, semiotics is a robust, comprehensive, and practicable theoretical framework for creating meaning with sound. The following section will provide practical advice for applying semiotics in order to successfully make meaning with sound.

In Practice

With the theoretical groundwork laid, we can now discuss more practical ways to apply these concepts. For many student filmmakers, sound only becomes a concern in post-production. However, meaning is most effectively made with sound when it is considered from the very beginning of the filmmaking process. Leo Murray notes, “Where a director or producer values sound and sees its importance in the finished product, the impact is felt throughout the entire production” (Murray 8). In his influential paper Designing a Movie for Sound, Randy Thom walks through the ways sound can be used to tell the story effectively at every stage of filmmaking.

 

Thom begins with the writing process. On a broad level, he encourages writers to tell their story through the point of view of one or more characters, which allows for a more immersive soundscape that reflects those characters’ experiences. More specifically, Thom highlights how wall-to-wall dialogue prevents any sound other than dialogue from playing a meaningful role. Breaks in the dialogue open up creative choices that let sound tell the story without everything being spelled out in dialogue. This is akin to the idea of visual storytelling often taught and encouraged in film school (which should perhaps be referred to as visual/aural storytelling).

 

Thom then discusses the crucial relationship between sound and image, the point at which consideration of sound becomes especially important during production. He stresses the importance of choosing locations that are likely to be sonically interesting environments. In the essay, he gives a hypothetical example set in an apartment with exposed pipes: adding the sound of the pipes running makes the location feel alive, but without a clear shot of the pipes, the audience may not know what the sound they are hearing in the background actually is.

 

This concept can be demonstrated with Wake Up Call, a student film written and directed by Hannah Otos. During a tense interior scene, an ongoing storm can be heard as background ambience. Without a shot establishing the rain, however, this sound could easily be mistaken for white noise or static. The confusion is especially concerning because it could break the audience’s suspension of disbelief during a pivotal scene in the film. With the inclusion of a shot of the rain, the soundscape is no longer distracting and instead becomes more dynamic and interesting.

The primary visual concept Thom proposes is “starving the eye”: creating visual ambiguity by withholding information from the audience. This ambiguity creates intrigue and baits the audience into staying engaged with the story. The techniques he highlights include moving cameras, darkness in the frame, extreme close-ups, slow motion, and black-and-white images; I highly encourage reading Thom’s entire essay for a full explanation of each visual technique. Where such ambiguity is used, it is most effective to give the audience sonic pieces of the puzzle that can be combined with the visual information, a combination that then hints at the progression of the overall narrative.

 

The final aspect of production that Thom discusses is editing. He notes a tendency for editors to cut out any dead space surrounding dialogue without leaving room for sound design. To combat this, legendary sound designer and editor Walter Murch takes a unique approach: ironically, he likes to spend time editing a film without sound. Free from the distraction of rough, edit-stage audio, Murch instead imagines the sound design in his head. This provides breathing room for sound design and helps him resist the urge to cut prematurely.

 

To conclude his essay, Thom offers a comprehensive list of the jobs a film’s soundtrack can serve:

  • Suggest a mood, evoke a feeling

  • Set a pace

  • Indicate a geographical locale

  • Indicate a historical period

  • Clarify the plot

  • Define a character

  • Connect otherwise unconnected ideas, characters, places, images, or moments

  • Heighten realism or diminish it

  • Heighten ambiguity or diminish it

  • Draw attention to a detail, or away from it

  • Indicate changes in time

  • Smooth otherwise abrupt changes between shots or scenes

  • Emphasize transition for dramatic effect

  • Describe an acoustic space

  • Startle or soothe

  • Exaggerate action or mediate it

 

This list is an excellent starting point for a filmmaker in the early stages of developing a story idea. To stress the point made at the beginning of this section, a glance at the list above makes clear that fully utilizing all of these functions of sound requires a great deal of intentional planning. Waiting until the film is picture-locked to integrate these concepts will likely rob them of their full impact on the audience.

 

Beyond Randy Thom’s ideas on using sound to create meaning, another practical theory worth noting is that of sound designer Tomlinson Holman. In his book Sound for Film and Television, Holman provides a unique, simple, and practical framework for creating hierarchy in sound design. He groups aspects of the soundtrack into three roles: direct narrative, subliminal narrative, and grammatical. Direct narrative sounds are those that have a direct influence on telling the story. Holman considers these to be dialogue and narration as well as sound effects that have specific narrative consequences (these are often written into the script). Subliminal narrative sounds are those that the audience takes in but may not pay direct attention to. These are the sounds that have unconscious emotional impact and “tell” the audience how to feel. The most common example of this is musical underscore. Finally, grammatical sounds are those that provide “connective tissue” for films and help keep the audience immersed. These are typically beds of background noise that help to de-emphasize visual edits (Holman). Under the broader umbrella of semiotics, Holman’s framework is a practical way to think through narrative importance once meaningful sound signs have been decided on. This theory can be especially helpful when working with a sonically busy scene where auditory hierarchy must be established to best tell the story.

 

In order to fully adopt effective sound design into our filmmaking toolkit, we may have to reconsider how we generally think of film. Rather than considering film a visual medium, think of it as an immersive one. By utilizing sound in tandem with visuals, you can create an experience that fully engages the audience’s senses and sustains their suspension of disbelief.

 

While the preceding information provides numerous ways to make meaning with sound throughout production, it all boils down to one idea: storytelling. Through intentional use of sound, we are able to support, strengthen, and even alter our narratives. As filmmakers, our primary goal is to share a compelling story with our audience, and sound design, an integral part of filmmaking equal to any other, is a necessary tool for achieving that goal.

A Note on Dialogue

When considering all of the sounds in a soundtrack, it might seem obvious that dialogue generally carries the most direct meaning. Obvious as it is, it bears noting because dialogue intelligibility is often one of the weakest points of student films. Poor intelligibility is not only a common giveaway that a film was produced by a student; more detrimentally, it prevents your audience from understanding your story. No matter how great a film looks visually, if the audience cannot understand the dialogue, they will not be able to follow the film.

© 2023 by Nick Asprea
