Q&A With Tod Machover

Sara Heaton as Miranda and Hal Cazalet (’96, voice/opera) as Nicholas—and robots—are seen in a 2011 Chicago Opera Theater performance of Tod Machover’s Death and the Powers. It will be presented by the Dallas Opera this month.

(Photo by Jonathan Williams)

Tod Machover (BM ’73, MM ’75, composition) is one of the world’s pre-eminent practitioners of and spokesmen for the intersections of music and technology. But he first wanted to learn to program a computer soon after he arrived at Juilliard to study with Elliott Carter (faculty 1966-84). “One of the reasons I was interested in studying with Carter was that I was really interested in complexity in my music,” says Machover, who recalls writing a string trio in which each instrument slowed down or sped up independently of the others. It was so complicated that he couldn’t convince anyone to play it, and “a sort of lightbulb went off,” he says, adding that he thought, “computers are out there, and if you have an idea and can learn how to program, you should be able to model it.”

Machover has been on the faculty of the vaunted Media Lab at the Massachusetts Institute of Technology since 1985. He currently heads the lab’s Opera of the Future group, and his projects often bridge the fields of composition, programming, and invention. Among his most notable endeavors is the development of hyperinstruments, which augment elements of both acoustic sound and performance gestures. His fifth opera, Death and the Powers, a 2012 Pulitzer Prize finalist, is scored for hyperinstruments and involves human characters interacting with robotic ones; it’s being performed by the Dallas Opera February 12 to 16.

Machover has also been working on a series of crowd-sourced symphonies for which the people of a given city submit compositional elements in the form of sampled sounds. The next one was commissioned by the Perth (Australia) Arts Festival. Called Between the City and the Deep Blue Sea: A City Symphony for Perth, it will be premiered by the West Australian Symphony Orchestra, conducted by Carolyn Kuan, on March 1 (see video below). For this special technology-and-alums issue of The Journal, Machover recently chatted by phone with composition doctoral candidate Evan Fein (MM ’09, composition) about his student days, multifaceted career, and vision for the future of music’s relationship with technology.

Evan Fein: What happened when you started thinking about learning to program?
Tod Machover: I got to Juilliard in 1973, and computer music was quite new then—and didn’t really exist at Juilliard. So I asked [Milton] Babbitt [faculty 1971-2008] what I should do, and he said there was a guy named Hubert Howe [faculty 1974-94] teaching electronic music. There was no studio and I don’t think anybody was taking the course, but he was a really terrific computer-music guy, so I took a few years of independent study with him, and he taught me to program.

EF: How did you end up being the director of musical research at IRCAM (the Institute for the Research and Coordination of Acoustics and Music)?
TM: I had met [Pierre] Boulez when I was studying in Italy—actually he was the one who suggested I go to Juilliard to study with Carter—and he had sort of kept in touch with me. When IRCAM was supposed to open, in the fall of 1978, Boulez wanted to know if I wanted to come for a year, basically because they were launching this place and they realized they didn’t have anybody young who knew anything about computers in music. After a little while, Boulez asked me to be in charge of music research there. That shows you how desperate they were, but for me it was kind of like grad school, and I [ended up staying] for seven years. 

EF: What was most exciting about your time there?
TM: When I got to IRCAM, there was this new machine that allowed you to make your own studio. In terms of computing power, it wasn’t as powerful as a laptop is now, but conceptually, it was amazing to have a general programming environment where you could literally sculpt the materials that you wanted to play with. All of a sudden, a world was open in which you could think of the computer not only as an instrument, but also as a kind of composing environment.

EF: What has kept you at M.I.T. for so long?
TM: Part of it is that although making the quality of people’s lives better with technology has always been the core goal here, the medium you do that through keeps changing. Music is always part of that, as are visual arts, medicine, education, prosthetics. It’s been a very interesting place to think about what the potential of music is and what composition can be.

EF: What might a composer who has experience only in writing acoustic music gain through working with technology? 
TM: Maybe the most interesting thing about technology is how open-ended and general a tool it is for realizing something you have in your imagination. You really can extend your imagination and make something happen that doesn’t already exist in the world. The first thing that got me interested in learning to program was complexity, but I learned pretty quickly that what interested me more were textures and timbres that include counterpoint, polyphony, harmony, and color. And I found that to make that kind of language, even with the best tricks of orchestration, acoustic instruments are too distinct from one another. So one way I often use electronics is as a kind of structural and timbral “glue” between instruments. I almost think of it as the air in which the instruments breathe.

EF: What are some of the challenges that face artists working in technology? Do the rapid improvements in technologies pose a threat to the durability of works?
TM: It varies, depending on the project. I remember in the late ’80s and early ’90s, you’d be programming something, and they’d change the operating system six months after some piece got premiered, and literally nothing would work anymore, even though you were using generally available tools. It’s stabilized a bit now, and most technology companies are pretty careful. Things have to be easily compatible in the future, and chances are that if you use something that’s commercially available from a manufacturer and it breaks, someone out there can fix it. The big problem now is that if you build a physical thing, like instruments or sensors, finding the time and money to rebuild it 5 or 10 years from now will be even harder than doing it the first time.

I love acoustic music, though, and I think even just the sound of amplified music or the sound of technology is still in its adolescence. There’s so much work to do to make an amplified sound that has the same presence in a room as a real vibrating instrument. Speakers are still primitive; they still sound like they’re pushing air at you rather than making things vibrate. Composers should be interested in blending acoustic and electronic or amplified sound in a way that in 20 or 30 years is going to be as beautiful as acoustic sound. Part of it is that there are still very few halls in the world where this is possible.

EF: Do you have a pie-in-the-sky project? 
TM: Imagine making a piece that is partly your piece, but built into it is the possibility of its being modified or shaped in different ways. There are apps for commissioning pieces; they come on your mobile device, and each one is really a piece but will play out differently every time you listen to it. You can consciously change the density or instrumentation, and it can also take in the sound around you. For instance, if you’re listening to it while you’re walking on the street, it will take in the audio and tune it to the piece. It’s music that adapts to something about you, and it’s something we’re poking around with. One reason that music, whether it’s Beethoven or the Beatles, becomes really popular is that a composer finds some element that is going to be powerful and meaningful to the largest number of people possible. But imagine if we could do the opposite—not just allow people to personalize a piece, but make a piece that adapts to have its peak power for you alone. There’s really interesting research about what happens in our minds and bodies when we listen to a piece of music—everybody listens to music differently depending on biology, experience, and psychology. I bet in maybe 100 years, we’ll be able to have a pretty good way to get a reading of what’s happening to somebody when they hear a piece.
