Laurie Spiegel

"When you're writing the software for yourself, you get something that really thinks like you."
Portrait by Peter Schmideg
Photography by Laurie Spiegel
Laurie Spiegel is a composer and a humanist software developer. Composing musical works as well as the programs used to create them, Spiegel is celebrated for her seminal albums “Unseen Worlds” and “The Expanding Universe,” and her music has reached arenas as varied as the 1977 Voyager spacecraft and the recent Hunger Games soundtrack. Her work on the GROOVE system at Bell Labs and the countless sounds she developed at the Experimental Television Center have made her elemental in both the history of computer programming and New York's experimental music scene. She is a feminist, animal rehabilitator, programmer, and writer whose presence and work over the past forty years have charted a seismic shift in technology, in our relationships to the tools we use, and in how these tools have reshaped the lines of creative production.
How did you start working at Bell Labs?
I had been working with analog synthesizers—with the Buchla system in particular, an instrument by Don Buchla—and I was frustrated with it for a number of reasons. Analog synthesizers had no memory, so you couldn’t save your work. You’d use these machines oftentimes in a shared studio. You’d be working for four hours and then you'd have to leave as the next person came in. You'd never get your setup or music back exactly the way you had it, because with that kind of synthesizer nothing could ever be reconstructed quite the same way again. There was also the issue of analog noise, as well as the lack of more exacting precision and control. Anyway, in the early '70s, Rhys Chatham invited Emmanuel Ghent and Max Mathews to do a concert at The Kitchen, here in New York. Instead of playing analog synths directly, Ghent and Mathews were using computers to control analog equipment. I remember seeing this and thinking that was exactly what I needed.
So did you reach out to those guys? Did you know them?
It took me a while to work up the guts, but I called up Emmanuel Ghent and asked if I could be his apprentice and learn to do what he did. He was working at Bell Labs. I asked, "Can I study with you?" but he said he didn't teach. So I asked if I could volunteer to be his assistant. He was willing, and so I started going out to Bell Labs with him. Max Mathews would look over my shoulder every now and then while I was working there. After about three months he decided I knew what I was doing technically, and he gave me clearance and a badge so I could start working at Bell Labs on my own.
What were you doing there?
I wanted access to GROOVE, which was a new hybrid computer-controlled analog system. The few people who were doing computer music back then were doing non-real-time synthesis. You would put a bunch of instructions into the computer describing how you wanted the sounds to be synthesized, and then the computer processed them over a weekend. After that you’d have 30 seconds’ worth of digital signal computed, waiting for you to play it out of the buffer and record it to tape. It was really slow and completely non-interactive. But with computer-controlled analog synthesizers, there wasn’t as much processing involved. All the computer did was update the values of the voltages going out to control the synth modules, around a hundred times per second. It was like a computer playing an instrument—rather than a computer being the instrument. To synthesize CD-quality audio in real-time, the computer would have to calculate two channels of over 44 thousand numbers every second. That was too taxing for the technology back then. But when a computer was controlling an analog system, all it had to process was a hundred little control points per second that might move a sound parameter up a hair, move to some note, or fade a sound down. The demands on the computer were small enough to allow for real-time interaction with the sound at that rate, the way you had with a conventional acoustic instrument.
Were any other big companies bringing in artists to do this kind of work?
Bell Labs was really special. For one, it was a regulated monopoly. Before the government broke up the Bell system in 1984, there was only one phone company in the United States: Bell Telephone. Everybody still thought you needed a single company to connect everyone, and a unified network does work well. It was complicated and took years to figure out how to break the company into smaller parts to conform with the antitrust laws. So “Ma Bell” was given an exemption to those laws. But in exchange, the government mandated that it act sort of like a non-profit company.
So what was the company culture like?
The atmosphere at the lab was very non-commercial. There was a lot of pure research. The dozen departments that Max Mathews ran researched things like speech synthesis and speech recognition, the structure of human memory, and how our minds do depth perception. There were studies on nonverbal interaction. They were interested in the picture phone, and wanted to see if people talked differently on the phone if they could or couldn't see each other. They were doing a lot of experimental research. It wasn't at all related to making new products. It was for understanding how communication works and how to make it better. It wasn’t product-oriented or commercial; they weren't selling telephones. The philosophy of product-making, meanwhile, was different back then, too. Stuff was built to last longer.
It sounds almost like a university.
It was an atmosphere of people who were really serious about the research they were doing, and they weren't concerned with commercial stuff. They wanted to understand things, create things, and create knowledge. They also recognized that if you're designing something like switching circuits, nobody is going to test them better than a real-time musician. The number of operations and kinds of information that a musician puts out during a live performance is way more than any other kind of interactive user. I had a chance to do those kinds of tests when I was working on another synthesizer project at Bell, built by someone named Hal Alles.
Tell me about that project.
I wrote for Hal’s synth in one of the early versions of the C programming language, while they were still developing it. I had to program the instrument remotely, from another computer in another part of the building. If the program didn't run, I'd have to walk all the way down to this other lab, and might discover that it wasn’t running because Hal had actually taken the whole machine apart and was changing some components. I was also getting memos at the lab saying, "We have just changed the equals-plus sign in C to plus-equals and installed a new compiler. Please revise all your software." The whole operating system, language, and hardware were under development as I was working. When something didn’t run, you couldn't tell whether you had a bug in your program, or there was a bug in the compiler, or they had changed the compiler or syntax but not put out a notice yet, or the hardware or operating system had been modified. All of it was new and still in flux.
That sounds crazy.
The Lab was asked to do a performance for the Motion Picture Association in Hollywood to celebrate “50 Years of Talking Pictures,” so we had this deadline for getting Hal’s machine up and running and out to Hollywood for a performance. It was around-the-clock trying to get that thing going. And when we finally got it out to LA, it was full of condensation from being cold in the cargo hold of an airplane. We had to take it apart and dry off the circuit cards with hair dryers. During the performance, the Motion Picture people put the synthesizer on this rotating platform that no one had bothered to tell us about, so the cables were gradually getting entangled—pulled tighter and tighter as the thing rotated on stage! They stopped the rotation just in the nick of time and nothing blew up.
So when did you leave Bell Labs?
I left in 1979, mostly because they got rid of these wonderful old computers I had been working on. I was working on these bulky DDP-224 systems. These were dedicated systems, which meant only one user at a time. At the end of January 1979, they replaced these dedicated systems with more modern machines, or modern for the time anyway, called timesharing systems. Timesharing was a whole new model for using computers. These new systems were running Unix, which was also brand-new. With Unix you could have multiple terminals and users on one computer, all sharing processing power and memory at the same time, dividing computing power into tiny slices of time. Each concurrent job or process would get a little bit of the processor’s time, and then it was on to the next one. Because of the extremely limited processing power of computers at that time, if you were doing music in real-time or interactively, that kind of architecture was not going to work for you. There wasn’t enough processing power to share! You really needed all the power you could get, the whole computer to yourself, with complete control of its timing, to do music in real-time. So if you were sharing resources on a multiuser, multiprocess system, real-time music became impossible. Computers were a lot slower then.
So what happened when they junked these old systems?
We lost everything. All of a sudden we had no computers to run all of the software we were using and developing for the DDP-224 throughout the '70s. The DDP-224 was old, obsolete technology to the Labs at that point, but to me it was my instrument, my musical voice. I tried to make do by writing music on paper for instruments for a while. Then I was given a prototype Apple II, which was great. But there was still a real limit to what you could do on an Apple II. So when this thing called the McLeyvier came along, a much more powerful instrument, I wanted to be involved with it, and I went to work for the company that was developing it so that I could use it.
Yes, Toronto. It was the early '80s, and I was working on the McLeyvier [synthesizer], developed by David McLey. The McLeyvier was a music processor that included an LSI 11/23 computer, an analog synthesizer, and various input and output devices as subsystems. It also had faders and lots of audio and other I/O connectors. I worked on its software up in Toronto from '82 to '85.
So you took this programming job to continue doing computer music?
It wasn't so much programming as being put in charge of software design—the McLeyvier project had a staff of programmers.
You mentioned getting an Apple II before that. What was your relationship with the Mac people?
Jef Raskin was my main contact and friend at Apple. He originated the Macintosh project and led it until Steve Jobs took it over from him. I had met Jef back in the ‘70s, and we had become friends. He was a really good musician.
Oh, who knew!
Yeah, two of the best musicians I've ever known aren't even thought of as musicians. Jef Raskin and Marvin Minsky are the two people that I know who can, at the drop of a hat, sit down and improvise a fugue on piano in a million different classical styles.
So Jef was the one who gave you the Apple II?
Yes. One day Jef shows up at my loft and says, "Laurie, I think you're really going to like this. You don’t have a computer. You're miserable. I'm going to plug this computer into your TV then I’m going to take a nap in your back room. And while I'm taking a nap, you're going to write your first program on this computer." And I was doubtful, since I had never seen this computer before. And I certainly had never seen this programming language. "You'll figure it out,” Jef reassured me, “It's BASIC. It's like a subset of FORTRAN. It won’t be a problem for you."
Was he right?
He went and took a nap and by the time he woke up, I had written a little visual mandala generator. I was hooked. It was a prototype 48K Apple II. Jef saw what I made and said, "The computer is yours, you know, keep it."
Were you writing the same kind of algorithms to make visual work that you were writing to compose music?
That’s what I wanted to do. I wanted to try to make a visual version of music: a non-referential visual art made of structures of change over time, the way music is made of structures of changing sound over time. I wanted an art that was self-referential, using shapes and colors and textures similar to the way that music uses pitch and loudness and timbre—art that doesn't refer to anything outside of itself, that creates emotion by manipulating our expectations, using structures of repetition and change.
That reminds me, you were involved in that Ursula Le Guin movie, The Lathe of Heaven. How did that happen?
It was a project at the Experimental TV Lab at WNET [the PBS TV station in New York City]. I’d been a video artist in residence there around that time, trying to make visual music, though I ended up mostly doing music for everybody else's videos because they all really needed music. I was interested in the co-generation of image and sound, composing them together at the same time.