The Roger Reynolds Collection

The Work of a Composer

SS: There are a couple of issues that we really should come back to or important issues that we haven't covered yet. First of all, on the composer, the issue of self-awareness. Are there additional comments that you could make in this area?

RR: You quoted to me on another occasion an anecdote that I had relayed about a time in the early 70s when I received an award from the National Institute of Arts and Letters. Everybody who was receiving some kind of recognition that day was seated on the stage, and we were seated in order so we knew that we were going to go forward and be dealt with one at a time. And when the time came for me to rise, I did so and began to make my way forward on the stage. And I heard the person who was giving the awards read the citation. The citation said something about ... I don't remember exactly the words ... but the idea was a "dark and conflicting world of turbulent emotions." And I thought, "That can't be me. I'm a sunny person." And so I went back to sit down, and then he said my name. So I had to come forward. And your question was, "Has there been any change in the degree to which you, let's say, were aware of how ... well, to be more objective about it, how your music is perceived by others?" And I imagine that the issue of self-awareness is one of the most problematic things that any artist, perhaps any individual, encounters in his or her life. I would imagine that if you asked me, do I know myself better now than I did then, the truth is probably "no." What has changed is that [laughing] I care less about whether the connection between what I am and ...

SS: [Laughing] How far do you want this honesty to go?

RR: You realize little by little that - and this was basically contained in the comment from Karen that you read to start the whole process of this taping - when you are working in art, you have a certain role, but it is folly to imagine that you are in complete control of what is happening. People use different metaphors, and the most common one in my experience is how composers talk about "the work taking over." Of course, that's nonsense. The piece doesn't take over. But what does happen is that your ability to move things or to alter assumptions diminishes as the amount of material that already establishes normatives grows. Exactly how this relates to the question of self-awareness, I'm not entirely clear.

But I could say something else about the issue of self-awareness which is important, because in our previous talks I mentioned a number of times the notion of the balance between the rational and intuitive capacities and the importance of getting all of yourself engaged in the making of the art. When I was a student at the University of Michigan - an engineering student - I nevertheless had "illicit" relationships with musicians ... I mean, of course, only on the highest level. But one of these relationships involved a musicologist who lived in the same dormitory that I did and was already at a very advanced level. His name was Sherman Van Solkema; he is, at this point, I believe, a faculty member at Brooklyn College. He was an art enthusiast, and he went occasionally to New York on weekends to see art shows. I knew nothing about art. And so one week he asked me if I would be interested in going with him. I said yes. We drove to New York and went to the Whitney Museum which had a group show. And one of the artists whose works I, at this point, knew nothing about was Baziotes - William Baziotes. Sherman was taking me around the show, and we arrived in front of a Baziotes painting, and he said, "What do you see?" And I said, "What do you mean?" He said, "Tell me what you see." And I said, "Well, I don't know anything about art." He said, "Just tell me what you see and what you make of it." So I started saying what I saw. And I talked for quite a while and realized that I actually could say quite a bit. And when I finished he said, "Exactly. That's the way it is. That's what the picture is about. That's what it's dealing with." And I was dumbfounded.

What this experience said to me was that one needs to listen to all those whisperings that go on in your mind which you normally dismiss as arbitrary or, let's say, insufficiently meritorious or founded in objective fact or whatever you will. And this was a landmark moment for me because it showed me in a very objective fashion that I was seeing and hearing more than I was using, and that what I needed to do was to trust more these aspects of my mental life which were elusive and furtive. And so in that way there has been, I think, quite a change over the years in that I have trusted more and more, not only the parts of my mental activities that are logical and open to immediate introspection, but also those parts that nudge me one way or another for reasons that are not at all clear. But I listen now very carefully to those strange nudgings and urgings, so in that way perhaps it's true that I am somewhat more self-aware than I was originally. That does not, however, solve the question which always perplexes everyone, I think, which is: how do others see me or, in my case, how will others hear me? I don't know. I would like to know, but I think it's probably not knowable.

SS: So the words you used when we were talking about this before were "authority of intuition" I believe. And, as I understand it, what you're saying is that it's not just the composer who has the authority of intuition.

RR: Oh, absolutely.

SS: Just as you came to the painting, others come to your music?

RR: Absolutely, and that's something that I think is an important obligation of anyone who is in a mentorial relationship - that you need to impress on people that the ultimate thing that they must do is to find the resource that they need within; because to borrow it from elsewhere, in the end, will not serve ... will not do, because it isn't yours. And, of course, you may wish to borrow from time to time as a way of initiating something. But in the end it has to be you. And so that means we need to get more and more - to derive more - out of what we already are, and we need to extend that which we are by listening to these "outposts" that whisper, "You could go there." And so you try to do that.

SS: I also wanted to ask you about the role of the composer in the rehearsal situation.

RR: Ah ... well, yes, that requires a different kind of awareness, because when you are in the early stages in your life as a composer, probably the most excruciatingly painful experiences you ever have are those which occur at a first rehearsal, when the musicians are first engaging with the music you wrote, and it all sounds wrong, and there is terror - "I can't have done this! I have to get out of here! We have to cancel the performance!" - on the one hand. There is also anger about: "Why aren't they playing it the way it's supposed to be?"

SS: "... it was perfectly clear when I wrote it."

RR: Right, "... it was perfectly clear in my mind - how come they're having to struggle?" But of course, little by little, through an accumulation of experiences, you understand that a performer has a certain order of information that he or she needs to become clear about. The first thing is probably, "What do I do?" The next is, "When do I do it?" The next is, "Am I with anyone else?" And these are very, very basic things. But until they're understood by the player, there is no possibility of any music happening - because the first task is orientation. Now, once you've understood that, of course, you can relax a little bit more in these situations, these rehearsal situations.

Further, at least for me, for many years the premiere performance of the piece found me in some kind of a psychological state that was very unpleasant. I only hear ... used to only hear ... the aspects of the performance which were wrong - I mean, literally, almost as though the sound field was a little bit dim, and only the things which were inappropriate to the music came forward and would be very clear and very loud and very disturbing. And I had no perspective whatever on the piece. All I experienced in the first performance was the sense of all the things that I wish had been otherwise - either from my perspective or the performers' or whatever. And I am greatly relieved to say that over time that all went away, and at this point I can sit in a performance, a rehearsal, a premiere, and be pretty relaxed knowing that if it doesn't happen this time it will happen another time. But, on the other hand, I also recognized at a fairly early stage that there was - in the field of music, and this is, of course, not true of the other performance arts like dance and theater - there was a grotesque assumption of parsimony, of the economy of the process. So the ideal is this: the musician says, "You give me a part, I'll play it." The assumption is that you give each individual musician all that that person needs to know to do that person's part in the whole; and if each does his or her job, it will happen. That's unfairly simplistic, but, nonetheless, that's the basic assumption.

What you realize next is that there is a dynamic according to which each work is likely to receive something on the order of three rehearsals of slightly varying duration - perhaps depending on the extent of the piece - five or six is extraordinary ... frequently there are three or four rehearsals. So the question - if you're being very, very pragmatic about it - the question is: what are you going to use the rehearsal time for? And if the rehearsal time is used for decoding what it is that you intend, then there's no time for music. And the performance reflects that. Whereas if the way you ask the musicians to confront it allows them to get immediately to musical issues, then more music comes out. When I say musical issues, I mean subtlety, nuance, the blending, and the passing of the ball from one instrument to another - these kinds of things. So when you realize that, you take a very stern view suddenly towards the issue of experimentalism; and you decide, I think, that you want to be very, very thoughtful about any situation in which you violate normatives. And I don't mean that that ends up making you timid or conservative. It's a strategic matter. And you need to figure out how to get the result you want in a way that is productive of the best use of the time and energies and capacities of the musicians. I decided at a certain point ... let's say, early/mid-career ... that I might want to separate experimental activities from "normal" activities such that I could do - in the context of computer music or a solo or a duo, where the rehearsal constraints were less - rather more explorative things; to be more respectful, on the other hand, of the social categories that were inherent, for example, in an orchestra or a string quartet context. Now I trust that that has not made me a less inventive composer; but it certainly has made me a composer who has fewer experiences that are unpleasant ...

SS: It's realistic.

RR: Yes. I think that the composer in a rehearsal, first of all, is obligated to have prepared extremely well for that. One thing that is frequently taken for granted, and which should not be, is that the ensemble musicians see the score; in fact, they see individual parts, not a score. If those parts do not accurately and effectively reflect what the score says, clearly the musicians and the conductor are going to founder. I don't know how many times I, and others, have seen the conductor and the musicians on stage, the composer in the auditorium (in the dark usually) with the score, literally tearing at his or her hair because everything is going wrong. And why aren't they playing what I wrote? Well, in 99 out of 100 cases they're not playing it because it's not what's in the parts. So that's the first thing.

The second thing is: you have to recognize the dynamic of a rehearsal, and you have to understand that the musicians need to do a great many bookkeeping things before they get to the point where they start to actually listen. I don't mean to demean them in any way, it's just the fact. So you have to keep quiet, and you have to understand what's going on ... to try to optimize your input by saving it for those moments when what you have to say can really make a difference. And I suppose in a slightly lighthearted way, I would end this comment by saying that another thing you learn, which is a little bit hard to accept, is that, in general, musicians are not interested in why they are doing anything. All they want to know is: When do I do it? and What do I do? And ... I regret that. But it's been reinforced for me time and time and time again. If you start to try to explain the "larger meaning" or "intent" of the music, their eyes glaze over. As I've said to you in some of our conversations, it's interesting that theater is probably the precise opposite - that an actor or actress will not do anything without understanding everything ... about motivation, the connection of the lines, the characters, etc. - whereas the musician looks at the situation in a different way. There are different economics at work. A composer who does not understand that suffers a lot.

SS: Well, there is a lot going on there. And this isn't a defense of that, because I agree with you. But there's the old Hindemith story about when he played viola in an orchestra and commenting, "We've played the Brahms First under four or five conductors now, and every single time each of them has stopped to talk about the sunrise as soon as we get to the famous horn solo in the last movement."

RR: [Laughs] That may be another reason ... they've heard it before. But in the case of a new piece it's less likely they've heard the patter before.

SS: Could we go on now to talking about things that we didn't cover before regarding the overall plan that you devise?

RR: Well, we did talk about, and I did show, the schematic or overall plan in relation to Transfigured Wind. What would be good would be to talk for a moment about one of the things that underlies the plans that we didn't speak of, and that is the idea of a particular and characteristic attitude towards proportion. First of all, I would emphasize at the beginning that proportion is not to be thought of as something that is strict, that is to say 3-to-5-to-11 or 3-to-.488-to-11.026 ... these are, of course, absurd levels of specificity. What we're basically talking about is nuances of long, short, and medium, about discriminable things - something which seems short, something which seems long, something which seems normative. You could think of it in all kinds of ways, but that's the basic idea. So proportion has to do with general shape, and there are various ways in which we might think about shaping proportion.

Traditionally, western music has been largely focused on the idea of binary subdivision. So you have two-bar phrases, four-bar phrases, eight-bar, sixteen, thirty-two, etc. There's a normative pattern to the temporal span of events and how they fit into larger units, assuming a consistent tempo. Of course not all musical cultures deal with time in a similar way. But the idea of basic binary subdivisions is widespread. One of the things that I discovered in reading about the psychology of time perception at the time that I was trying to break away from - or actually not trying to break away from, but trying to find alternatives to - what I had been taught by Roberto Gerhard ... I was reading, in particular, a wonderful book by the French psychologist Paul Fraisse called The Psychology of Time, and in reading it I realized that the human nervous system is essentially interested in change, and that constancy - this is at a perceptual or neurological level - constancy tends to be ignored, in fact, to the point that an organism will, little by little, no longer recognize something which doesn't change. All you need to do is think about sleeping in a room on a summer day with the air conditioner on. If you just thought of the sound of the air conditioner coming on and off, you can't imagine sleeping through it; but when it's on, and it's on for hours, we disregard it after a while and just sleep through it. So constancy is less attuned to by the nervous system than change. It occurred to me to think: why should the musical normative be regularity of pulse? Obviously we have heartbeat, we have respiration. But it might be interesting to play with the idea of time fluctuating so that it converged to more rapid or shorter values or expanded to longer and more generous values.
And it even seemed to me that it might be that to organize the structure of a piece of music in relation to converging and diverging time periods might go a certain way toward offsetting the loss of tonal function and of "arrival" - that there could be a "temporal arrival" in the sense that the units of meaning in the music would become more and more easily grasped - shorter and shorter.

I looked into that and started working with this idea in the early 70s. And one numerical series which is frequent, not only in music but in visual art, is the so-called Fibonacci series. It has the character that each of its terms is the sum of the two preceding ones. I wanted a more general set of numerically varying values which nonetheless were related to each other in meaningful ways, and predictable ways, and perhaps experienceable ways. And what I decided to do was to work with logarithmic series. We have an example here. This is log graph paper: if we read across the horizontal axis, it's linear - so we have 1,2,3,4,5,6,7,8,9,10. But if we read up the vertical axis, we have 1,2,3,4,5,6,7,8,9,10,20,30,40,50,60, etc. - so each of these is an order of magnitude larger in value ... these are the ones, these are the tens, these are the hundreds, and so on. So what happens is that if you draw a line, a straight line, across this, you get a series of numbers which is not additive - like 2,4,6,8,10,12 - but much more complexly expanding or contracting numerically.

SS: Depending on where and how you draw the line.

RR: Right. Depending upon the slope of the line, and whether it's all within the "ones values," or in the "tens values," or whatever. So this became an extremely straightforward and simple way of getting custom-made sets of numbers that have the character of growing either larger or smaller, as I chose, and that would fill in the space in time that I needed. So this is one of the things that I used to assist me in shaping the overall design of my musical works.
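Read numerically, values taken off a straight line on semi-log paper have equally spaced logarithms - that is, a constant ratio between successive terms. A minimal sketch of that procedure follows; the particular numbers (seven sections, a 1:4 spread, a 26-minute total) are illustrative only, not drawn from any of Reynolds's actual plans:

```python
def log_series(n, first, last, total=None):
    """Durations whose logarithms are equally spaced - the discrete
    analogue of reading values off a straight line drawn on semi-log
    graph paper. Successive terms share a constant ratio, so the
    series expands (or contracts, if last < first) at a steady pace.
    """
    ratio = (last / first) ** (1.0 / (n - 1))
    values = [first * ratio ** i for i in range(n)]
    if total is not None:  # rescale so the series fills a required span
        scale = total / sum(values)
        values = [v * scale for v in values]
    return values

# Hypothetical example: seven sections growing from short to long,
# together filling 26 minutes (1560 seconds).
sections = log_series(7, first=1.0, last=4.0, total=1560.0)
```

Changing the endpoints `first` and `last` plays the role of changing the slope of the drawn line; supplying `total` fills the "space in time" the plan requires.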

Personae (1990) - Overall Plan
Written in 1990 for Reynolds' UCSD colleague, violinist János Négyesy, Personae is a chamber violin concerto that extends the devices of Transfigured Wind. Based upon four solos (The Conjurer, The Dancer, The Meditator, The Advocate) from which both the ensemble accompaniment and the computer-processed sounds are derived, the piece occupies a continuous 26 minutes.

Now here is the overall plan for the violin concerto Personae. At the top, the solo part is represented by a series of four boxes; and the ensemble is underneath - one, two, three, and then, displaced down here, four; and then the computer - one, two, three, four. And the idea here is as with Transfigured Wind. There's a solo set of materials which were written first and recorded as performed - the same thing as with Transfigured Wind; and then four ensemble responses; and then four computer responses. And what you can see here is that there is a tendency for these solo numbers to grow longer, so there's an expansive trend; then these ensemble values grow longer; the computer values grow longer. So there is an expanding-in-time character which has a logarithmic pace. Now there's also a hierarchic situation where the - one-, two-, three-, four-proportion - is mirrored again inside this last value - one, two, three, four - and mirrored again inside that by these even smaller values - one, two, three, four. The point here is that we have two kinds of things happening. There are similar, but not identical, expanding series of proportions controlling the three components of the work - solo, ensemble, and computer - and, within this, there also is a nested, hierarchical "self-similarity" element. That self-similarity element is constrained quite a bit in this piece. I find it's a problematic thing to do. It tends to generate what I hear as repetitious and unpleasing things musically, even though visually it's perhaps less disturbing that way. But in any case there is a kind of familial proportionality here, and there is an hierarchical suggestion, not only in terms of the one, two, three, four, but then its mirroring at different hierarchical levels elsewhere. Because, you see, each of these elements is echoed by a component of another quartet of statements. 
And you can see it first in the overall plan serially, in order: the soloist plays, the ensemble plays, the tape plays, and the soloist plays, but now the relationship between solo and ensemble and tape starts shifting and gets much more complicated. So it's a ... I don't know exactly how to say this ... but it's an "extrapolated hierarchy" with some flexibility built in.
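The nested aspect - the whole set of proportions mirrored again inside its own last member, and again inside that - can be sketched as a small recursive procedure. Everything here (the 1:2:3:4 proportions, the 26-minute total, the depth of nesting) is a hypothetical illustration of the general idea, not Personae's actual scheme:

```python
def nested_plan(proportions, total, depth):
    """Divide `total` according to `proportions`, then subdivide the
    final section by the same proportions, and so on, `depth` times -
    a nested, self-similar scheme of the general kind described above.
    Returns a flat list of section durations.
    """
    s = float(sum(proportions))
    parts = [total * p / s for p in proportions]
    if depth <= 1:
        return parts
    # Replace the last section with its own internally mirrored version.
    return parts[:-1] + nested_plan(proportions, parts[-1], depth - 1)

# Hypothetical example: four expanding sections over 26 minutes,
# with the last section mirrored at two further levels.
plan = nested_plan([1, 2, 3, 4], total=26.0, depth=3)
```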

Variation (1988) - Overall Plan
This 1988 work is one in a series written during the same period in which Reynolds utilized the editorial algorithm SPLITZ as a primary methodological strategy. Three thematic elements (linear, chordal, and figurative) are algorithmically transformed in relation to a pre-existent overall plan, and the composite of the computer's algorithmic output is then "mediated" by the composer so as to arrive at a final text.

Here is an example of another overall plan. Again, there are strata. They are called, I think, linear, chordal, and figurative ....

SS: This is for Variation, right?

RR: Yes, this is for the piano work, Variation. This is a plan, in time; and these layers are the appearances in time of what you could think of, if you were thinking of drama, as a character. The character "Linear" appears first and then returns from time to time. The character "Chordal" has a kind of "ghost" at the beginning but then comes in and then really occurs .... The character "Figurative" doesn't actually appear on stage until the last part of the whole piece. So you're constantly hearing about him, but you don't actually get to experience him directly until the very end. So this is not a plan of three strata of resources - like solo, ensemble, computer - but of three kinds of musical material. In doing this, you'll notice that three of these boxes have heavy lines around them. I refer to them as the primary or core versions of my thematic sources for this piece. "Chordal" appears slightly after the beginning, and the "figurative" core doesn't occur until the very end. So this is a part of the way I "populate" and utilize the opportunity presented by one of these overall plans - such an architectural scheme. I don't just wade in, start composing at the beginning, and do the whole thing. I designate certain places as something that I start out with a lot of knowledge about.

And in thinking about this I realized that it's not different from the traditional way of thinking about sonata form as it exists, let's say, in the later work of Beethoven, where you have what we call, not a first theme and a second theme, but a first tonal group and a second tonal group. Why was Beethoven using "groups"? Clearly, because a small idea is much more mutable in terms of its recombination and its identity than a full theme. So he has a group of ideas with a certain assertive character that occurs early in the exposition, then he has a group of more moderate, and frequently triplet-based ideas, that occur secondly - and then those are ramified in different ways. It's not the same thing, but it's a parallel thing. There are certain areas in which Beethoven says: this is material; and at other points he's saying: now I'm moving somewhere - I'm preparing you for something, I'm closing something, I'm pausing, I'm taking a breath here. And these are all the things that I think about in relationship to my overall plans.

SS: What you were saying before about the hierarchy ... this might take us into some of the problems or rewards of working with computers. Could we talk about the technology a little bit more?

RR: Yes. I think there are two other factors behind the idea of using overall plans like this. One is that I'm attracted to the idea of a music in which there is more than one thread of argument or presence at a time. I think this is the way life is, and I think that it adds a lot of very interesting texture to musical experience. So by creating these kinds of plans, what I do, in effect, is to give myself an advantage in terms of planning what it is that's going to be necessary in order to make the large-scale formal ideal of interacting musics work out. And - as with the architect who needs to know which way a door opens and whether a hall is ramping down ... whether there are steps or not, where is the light going to come in, all these things - you need to have that kind of information here if you're wanting to accomplish a more complicated purpose. And I'm not suggesting, of course, that there's more value to this ... simply that if you want to do something that involves several things going on at once you have to work out how they're going to interact. I remember Cage talking about working with Merce Cunningham in the context of chance, and saying that chance is fine with sounds that can't hurt each other, but when you've got dancers moving rapidly chance is not an acceptable principle because it could cause injury or loss of life. So it may be that's relevant here.

One thing that causes me to want to use these overall plans - has caused me to develop more of a reliance on them - is this idea of multiplicity of layers going on at the same time. The other certainly is the fact that, as I began to work with computers in the late 70s, I quickly understood that the issue of storage and of the length of computations (because computers were very much slower then) was a critical one. And if you could work in terms of smaller modules it was an enormous, not only efficiency, but an enormous relief in every way. It allowed the computer to run faster, it kept you out of the hair of the other people who were using the same machine and were angry because you were using all the space ... there were lots of reasons for this. So having a modular plan meant that you could address a particular section, a particular idea, and finish it and put it aside, wipe the computer memory, insert new basic material ... create that, and so on. So there was an advantage at that time that I think is still strategically useful - although of course computers have gotten so much faster and storage has gotten so much cheaper - that, were I to start now, that factor would probably be much less crucial.

SS: You probably wouldn't think of it.

RR: Not as much. Anyway, that leads us to talking a little bit about technology. First of all I'd say that technology, like any resource, should never be embraced for itself, but rather because there is some creative need for it. And so I would always emphasize to younger composers or artists that they go to technology (and I'm thinking now of computers) only when, and in the sense that, it can give them something that their art needs. Now, I've mentioned the idea of multiple strata going on. One of the ways in which you can assure the utility and the effect of such a situation is if the strata that are moving at the same time are clearly and usefully distinct from one another. So what limits what you can do? Well, you can do different tempos. You can do as Ives did - and Ives is clearly the master at very complex and subtle manipulations of multiple streams of music going on at the same time. You can also place things in different positions in space. But also you can give them an extra-human or extra-instrumental character. (The computer allows you to extend certain aspects of the sound in such a fashion that it is not un-instrumental, but is not like any instrument you've heard.) And so, for me, the use of the computer allowed access to several fields of exploration that had always fascinated me. First, space - that is, the use of illusory acoustic space to move and reposition sounds so that they are mobile as they are in real life. And second, the transformation of language and vocal behavior, which I think, of course, is the source of all music and is an extraordinarily rich resource. The difference between vocalization - spoken words - and song is a powerful and very mysterious one. And the interfaces between - declamation, which is primarily language-based but has the aura of song, and singing, which is primarily sonorous but has an echo of or flavor of words - these boundary conditions are complex and interesting.
And you can get into them with computers in a way that's quite unique and would not have been possible before. And, of course, the origins of computer music - the earliest efforts to do this - were at Bell Telephone Labs, and they were originally focused on the idea of having synthesized operator voices, which we now have. I'm not sure it's all that wonderful, but it probably saves a certain amount in the salary component.

The idea of the computer as the creator of related but distinct sonic behaviors was necessary to me because it allowed me to have more things going on at the same time which were nevertheless separable. So, I got involved in technology - actually, of course, I was an engineer before studying music formally at the University of Michigan - and I became peripherally involved with it rather early as a result of the ONCE Group and my compatriots there - particularly Gordon Mumma, to a degree Bob Ashley, but mostly Gordon, who was heavily into technology and very expert at it at an early stage. I found it interesting but, I have to say, I was bothered by what I thought were the limitations of sound quality. I did a little bit more with technology when I arrived in Japan in the late 60s because, at that point, Takemitsu was working with a marvelous engineer named Okuyama. And Okuyama was capable of building boxes that did, basically, anything you wanted. So Takemitsu introduced me to him, and I said I would like a box that does this, and so he made me a box. And I had a collection of boxes, and I made a number of real-time electronic pieces that involved ring modulation and filtering and that sort of thing. But again, although sometimes the sounds were intriguing, I had the feeling that there was just too narrow a band. I wasn't interested enough to pursue it.

SS: You were working in analog at that point.

... the serpent-snapping eye...
When, in 1977, Reynolds was invited to work at CCRMA, he undertook two projects, one (The Palace) requiring the signal processing of recorded sounds, the other (...serpent...) the synthesis of complex tones using John Chowning's recently discovered "frequency modulation" technique, which utilized mathematical functions previously described by Bessel to economically achieve rich and euphonious timbres with a minimum of specification and computational expenditure.

RR: Oh yes. Yes, everything was analog. But, on the other hand, it was all ... most of it at that time was real time - that is, I was sitting in a performance context, turning on devices and allowing them to interact with what the performers were doing at the moment. That may sound a little odd to people who aren't involved with computers - the idea of "real time." What other time could we have? But real time in the context of technology and music means: was it done beforehand - prepared beforehand, stored somehow and played back during the performance - or does it actually happen during the performance? The root instantiations of electronic music - or, what is called, I think somewhat more appropriately, electro-acoustic music - are two. One was in Germany - mainly in Cologne, mainly formed around Stockhausen - and that group was interested in the idea that sound could be generated according to abstract mathematical, mainly Helmholtzian, concepts of sound. It was elektronische Musik. You would take simple components and aggregate them into a sound which you had designed in some intellectually abstract way. On the other hand, the French were working in musique concrète. They were recording natural sounds and disturbing their integrity in some way. So when I first went to Stanford in the late 70s I decided that I was going to make a piece in each domain to figure out what they had for me. And I did a certain amount of synthesis, that is to say, the generation of sound according to, in this case, John Chowning's frequency modulation algorithm. And I also did a piece which involved recording my friend and colleague Philip Larson's voice and modifying it in a variety of ways. One became central to a piece called ...the serpent-snapping eye... which is for trumpet and percussion and piano and quadraphonic tape - it uses an oceanic, rich, frequency-modulated sound synthesis.
And the other became a piece called The Palace, and it involves a live singer singing into the resonances of his own speech. That may sound counterintuitive, but it turns out that each of us, when we speak, has, by and large, a "tonality." And if you record your own voice, for example, and slow it down in time without altering its pitch, you'll hear that it tends to be at a very specific pitch most of the time; we each speak at a particular pitch level. Women are, more or less, an octave above men, and there are all kinds of things that you notice when speech is slowed down.

The Palace (Voice Space IV)
In the Summer of 1977, John Chowning invited Reynolds to Stanford University's Center for Computer Research in Music and Acoustics. He used this opportunity to assess the two primary directions then offered by digital resources, doing works in both the realms of sound synthesis (... the serpent-snapping eye ...) and also the digital signal processing of recorded sound (The Palace). Based on a poem by Jorge Luis Borges, The Palace uses a computer-processed reading of the Borges text by singer Philip Larson to create an accompanimental structure against which a live vocalist sings.

SS: Would this be the same with, say, a Chinese speaker?

RR: I'm not sure. It's more complicated because, of course, Chinese languages and dialects are "tonal" languages. I would guess, however, that there would be a normative ... let's say that low tone would always be, more or less, in the same place. And Philip, in fact, turns out to be in E or E flat - something like that. And so what I realized was that it would be interesting to create an imaginary space ... a palace (this work is on a poem by Jorge Luis Borges) - the palace is a metaphor for the mind. I thought it would be interesting to create a space that reverberated in such a way that everything that was around E flat or its harmonics was very powerfully reinforced, and that, in fact, the decay time11 in this palace was not, as is normal in a cathedral, six or seven seconds in the most opulent circumstances, but sixty seconds. And then I created another voice - or asked him to create - the voice of an old, haggard man. The normal voice was of a deep and ... active person; the other was that of a wizened, old person and the latter had a very high fundamental. And we created an inverse virtual acoustical space to reverberate this. It was a way of taking the computer and a very straightforward subject (the recorded speaking of a person) and ramifying it, ratifying it, enlarging it. I found this enormously interesting. And so my commitment very specifically and immediately went to, not synthesis, but processing. It has stayed there.
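The sixty-second decay Reynolds describes can be made concrete with a little arithmetic. In this sketch (the sample rate and all names are illustrative assumptions, not details of The Palace), reverberant decay time is modeled in the standard way: the per-sample gain that loses 60 dB over the stated duration.

```python
def decay_gain_per_sample(rt60_s, sr):
    """Per-sample amplitude gain g such that after rt60_s seconds
    (i.e. rt60_s * sr samples) the level has fallen 60 dB, to 1/1000
    of its starting amplitude -- the usual definition of decay time."""
    return 10.0 ** (-3.0 / (rt60_s * sr))

sr = 1000                                      # illustrative sample rate
g_cathedral = decay_gain_per_sample(7.0, sr)   # opulent cathedral: ~7 s
g_palace = decay_gain_per_sample(60.0, sr)     # the imagined palace: 60 s
```

The palace's per-sample gain sits much closer to 1.0, which is what lets energy near E flat pile up when that band is also selectively reinforced.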

So the next question was: what kind of processing - what kinds of operations that a computer could perform on a sound - were most interesting? And there are a relatively small number that I have found that absorbed me. This does not mean that that's all there are. It just means that, from my point of view, these have been sufficient. And the first set of these operations was the editorial algorithms (that we spoke about briefly earlier) which have the character of taking an initial sound, of whatever nature, and cutting it up into pieces and rearranging those pieces, but in a particular principled way so that there is a characteristic imprint - the formal impact of the algorithm - on the form of the thing being processed. And the interplay there becomes potentially rich and interesting.

A second thing that you can do with computers is to analyze a sound into various components. You can describe the sound along several dimensions. For example, you take the sonic space from the lowest pitch that we can hear to the highest, plot it over time, and cut it into narrow bands. The computer looks at each of these bands and says: at such and such time, is there any energy there? If there's energy there, at what frequency is it within this narrow band, and does that frequency change? So the analysis gives you back a picture in this stratum of pitch space - whether there's anything there at any moment and, if so, how stable it is. At the same time, each of these bands has another eye that is looking at it which says: if there is something here, how loud is it? So we know certain things about every band: whether or not there is energy there, how much energy there is, at what frequency, and how those measures change over time. Once you have that picture of a sound, you can go in and you can change one set of values without affecting the other. So you can elongate a sound radically without changing the relative relationships of all of its parts. You can move it up or down in pitch without making it shorter or longer. Of course, in the old situation with phonograph records or tapes, you could play the recording at a different speed and the pitch would change, but the change of time and pitch were ineluctably related - you couldn't separate them. With a computer you can separate them.
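The per-band "eyes" Reynolds describes can be sketched as a single-frame Fourier analysis. This toy Python version (the names are invented; a real phase vocoder analyzes many overlapping frames to track change over time) reports, for each narrow band, whether energy is present, at what frequency, and how much:

```python
import math
import cmath

def band_picture(signal, sr):
    """Single-frame analysis: split the spectrum into DFT bins (the
    'narrow bands') and report, for each, whether energy is present,
    at what frequency, and how much. A real phase vocoder repeats this
    frame by frame to track how each band changes over time."""
    n = len(signal)
    bands = []
    for k in range(n // 2):
        z = sum(signal[i] * cmath.exp(-2j * math.pi * k * i / n)
                for i in range(n))
        mag = abs(z) / n
        bands.append({"freq_hz": k * sr / n,
                      "level": mag,
                      "active": mag > 1e-6})
    return bands

sr = 800
sig = [math.sin(2 * math.pi * 100 * i / sr) for i in range(80)]  # 100 Hz tone
bands = band_picture(sig, sr)
active = [b["freq_hz"] for b in bands if b["active"]]
```

Running it on a pure 100 Hz tone, only the band centered on 100 Hz reports activity; every other band stays silent. That per-band picture is exactly what later resynthesis works from.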

Earlier we were talking about the idea of separating out parts of a complex spectrum. The advantages of this may not be immediately obvious to lay people or even to musicians. But let's look at it from the point of view of the orchestra as we already know it, or jazz. You know what a trumpet sounds like played without any interference. You also know that in the 20s - I think about the 20s - players started using a plunger to open and close the front of the bell. Later there began to be mutes of different character. A mute is basically a filter which removes certain aspects of the sound. So what you are actually getting is the sound of the trumpet, but with some of its components taken away. A mute is a filter. You could think of these phase vocoding strategies (the representation of sounds by bands, or strata) as extremely sophisticated and variable filters or mutes. They take away some components and leave others. As you know, you hear a trumpet, you know it's a trumpet whether it's muted or not, but its character is rather radically different.

I became interested in the idea of stretching because, when you slow a sound down, you hear all of the "choreography" of the physical system changing as the vibratory mode changes. If you're recording, let's say, a flute, and it is producing one pitch so that the air in the tube is moving at a certain rate and you then change the venting and thereby alter the pitch which the instrument wants to resonate at, you get a moment where there's a confusion between two modes of vibration. And when you slow that moment down you can hear the confusion. If you listen to a cellist playing a low note and then sliding up the string to a high note, in the course of Dvorak's Cello Concerto, you just hear a low note and then a high note. When you slow it down you hear the tension of the string changing - you hear the "aspiration" of the low sound for that higher place. It's very moving. It's similar, for me, to seeing slow motion pictures of dancers or gymnasts. The place we first saw it, in athletic contests, was in a replay. You see the athlete reaching ... there's something to be understood about the dynamics of the body and of motion.

SS: The computer is allowing you to focus on this moment which you could never do before.

RR: Right. So time distortion is one facet of what is allowed by phase vocoding analysis and re-synthesis12. Another is the idea of the separation of a spectrum into its components. So you could think of it as the disassembly of a sound into its parts, and the reorienting of those disassembled parts relative to each other.
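The separation of time from pitch can be sketched in the time domain with a naive granular stretch. This is not phase vocoding proper (which operates on the analysis bands described above, with phase continuity between frames), but it demonstrates the same decoupling; all names and parameter values are invented for illustration:

```python
import math

def stretch(signal, factor, grain=256, hop=64):
    """Naive granular time-stretch: read short overlapping Hann-windowed
    grains from the input at one rate and write them out at another, so
    the duration changes while the local waveform -- and hence the
    pitch -- is preserved."""
    out_len = int(len(signal) * factor)
    out = [0.0] * (out_len + grain)
    norm = [0.0] * (out_len + grain)
    write = 0
    while write < out_len:
        read = int(write / factor)    # input position advances at its own rate
        for i in range(grain):
            if read + i >= len(signal):
                break
            w = 0.5 - 0.5 * math.cos(2 * math.pi * i / grain)  # Hann window
            out[write + i] += w * signal[read + i]
            norm[write + i] += w
        write += hop
    return [o / m if m > 1e-9 else 0.0
            for o, m in zip(out[:out_len], norm[:out_len])]

sig = [math.sin(2 * math.pi * 440 * i / 8000) for i in range(2000)]
slow = stretch(sig, 2.0)   # twice the duration, same 440 Hz pitch
```

Reading grains at half the writing rate doubles the duration; because each grain is an unaltered snippet of waveform, the local period, and hence the pitch, is untouched.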

Now, one thing we haven't talked about at all yet is spatialization. And I want to spend just a few moments on that. I think that something that we would all immediately understand intuitively is the idea that, if you are in a primitive environment and there are many sources of danger - predators - the thing that you need most to know is: where are they? Because if you know where it is you know the direction of flight you must take to escape it. And so the fact is that the human nervous system is most temporally acute in the realm of audition. And it's obvious why that is, because we need to know first where the threat is if we are going to survive. And what that means is that our nervous system is deeply sensitive to, and ingrained to, issues of spatial location. Music, by and large, makes no formal use of this at all. There are, of course, antiphonal elements in certain music; and we do make use of spatial location in music. Anyone who has heard the Ives Fourth Symphony live rather than on a recording knows that the difference between live and recorded is "inexplicable" - but it is explicable. And the reason that it's different when you're there live is that your capacities to localize allow you to hear each of the sources as significantly distinct from one another because they come from different locations.

The Emperor of Ice Cream
Emperor (1961-62) was Reynolds' first theatrical work. Written for the Bob James Trio and the ONCE Festivals in Ann Arbor, Michigan, its score specifies not only non-traditional vocal behaviors with a novel and evocative notation, but also the performers' positions on the stage (left <-> right) so as to control the spatial effects of performer movement and repositioning. The title is taken from and the work shaped by the Wallace Stevens poem of the same name.

I have no idea where the need came from - but in the earliest stages of my work I was interested in spatialization. One of the first pieces that I did was called The Emperor of Ice Cream. And The Emperor of Ice Cream has three instrumentalists and eight singers. The score shows how, over time, these singers are to reconfigure their positions on stage in order to alter the "spatialization" of their sounds. And at one place, where the Wallace Stevens poem has the line, "and spread it so as to cover her face," I have the singers - it sounds rudimentary, but it was actually quite interesting - the eight singers are spread across the stage, and they pass the sound back and forth. I have no idea where that interest came from, but it was there at the very beginning, and it has continued as something that I care a great deal about. I have worked a lot, for example, to create modular units - gestures of spatial position - such that out of such small units I could form larger, more complex spatial paths. And then I have worked also with the idea of discrete positions. I'm gesturing now, but ... Let's look at a specific situation here. [Personae diagram on stand -- here shown in box on right.]

This is a little difficult to see, but this is graph paper, and down here is a square, and down there is another square. That square represents the performance space, or it represents this room. What this represents - that curve, that curve, or this one - is a potential path that a sound can take. So let's say that I say, "Abracadabra, abracadabra, abracadabra," that I say that continually, and that sound, "abracadabra," starts here and moves closer to you and then moves away. You hear it - you're facing in one direction - you hear it come in from one direction, you hear it come very close, you hear it go back out again - just as though a mosquito flew by. We're very good at knowing [claps hands to kill imaginary mosquito] where to hit something moving by us. And what I've done is to design a variety of paths that aren't just abstractly attractive, but that have actual, visceral effects on one. A fly-by is a very effective gesture in sound space. So I've become involved with the idea of using the spatial acuity of the human perceptual system as a component of my music. It involves, like everything else, a lot of planning and informed intuition.
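A path like the fly-by can be reduced, in the simplest case, to a gain trajectory across the four speakers of a quadraphonic setup. The sketch below is not Reynolds' algorithm; the speaker layout, the distance law, and every name in it are illustrative assumptions. It pans a virtual source along a straight front-left to rear-right path at constant total power:

```python
import math

# Hypothetical quadraphonic layout: four speakers at the corners of a
# unit-square room (all positions and constants here are invented).
SPEAKERS = {"FL": (0.0, 1.0), "FR": (1.0, 1.0),
            "RL": (0.0, 0.0), "RR": (1.0, 0.0)}

def gains_at(point):
    """Distance-based amplitude panning: each speaker's raw gain falls off
    with its distance from the virtual source, and the set is normalized
    to constant total power so loudness stays steady along the path."""
    raw = {name: 1.0 / (0.05 + math.dist(point, pos))
           for name, pos in SPEAKERS.items()}
    power = math.sqrt(sum(g * g for g in raw.values()))
    return {name: g / power for name, g in raw.items()}

def fly_by(steps=5):
    """Gains sampled along a straight 'mosquito' path that enters at the
    front-left corner and exits at the rear-right corner."""
    return [gains_at((t / (steps - 1), 1.0 - t / (steps - 1)))
            for t in range(steps)]

path_gains = fly_by()
```

At the path's start the front-left speaker dominates; by its end the rear-right one does, and the constant-power normalization keeps loudness steady while the apparent location sweeps past the listener.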

Personae - Open and Closed Path Segments for Spatialization

The thing that I wanted to end with here, is the idea that all of these strategies imply, not the fact that by using an algorithm or spatializing or stretching you necessarily get anything interesting, but rather that these processes are useful in direct relationship to the material upon which they're exercised. So, just as you cannot make a fugue out of anything, or surely not a canon out of just any kind of musical idea, you can't use these strategies on just any material with interesting results. If you're going to spatialize something, you want to think about what the aspects of the human nervous system are that cue us effectively in regard to spatial location. And, for example, one of the things that would be foolish to try to do is to spatialize a very low sound which doesn't have many high partials. It's not going to work. These things are known. There are things that are known about spatialization that can increase the palpability of this effect - or virtually negate it. And this is something that I am interested in at this point because I hear what is coming out of Hollywood on the new DVDs. And the number of DVDs which use 5.1 in an imaginative and effective way is minuscule. The number of them that just thrash around arbitrarily is alarmingly large. It alarms me because for the first time we have a commercially viable medium in which multi-channel music can really be heard as it was intended; and I think this is an important horizon for music in all of its forms - certainly serious music as well as entertainment music. But, if this opportunity is consistently used in a mundane and banal way, one worries a little that it will atrophy - it will go back to simple surround-sound or something which essentially puts you in an echo chamber.

What I'm interested in about 5.1 is that it allows discrete discrimination of sounds in the space that you exist in as you watch. And this can be very powerful and very interesting, but it mustn't be done in a formulaic way. It should be done as a result of a combination of sensitivity to the way hearing operates, the way the central nervous system processes things, the way room acoustics work, the way particular kinds of sounds behave in an acoustic space. It involves a huge number of factors which somehow have to alchemically come together into a situation where that dimension - space - really makes its impact. It doesn't need to be, as it frequently is, a simple "effect." It can be. But the example that I would cite is - and I don't mean this to be out of reach or out-sized as a comment - that if you look back at the history of the use of dynamic variation in Western music and the Mannheim rocket or the idea of the Rossini crescendo - the idea is that the use of dynamic intensity could become not only a general arousal strategy but, in late Beethoven, a structural element. To become very, very soft would not be just an effect, but had a specific relationship to the formal design of the music - similarly with the use of sforzandos, the use of sudden dynamic changes. In Beethoven, dynamics become, not just expressive, but structural. And I think - there's no question in my mind - that the spatial aspects of music to which the human nervous system is acutely attuned, can take on similar power. But, of course, we have to have a medium in which this is reproducible and shareable.

SS: One additional thing that I wanted you to say just a few words about because it's happening right now, I think (and it's due to several problems that you and other composers have experienced in the past, and also it has to do with the problems of rehearsals that you've discussed): that is, the idea of what you call the "technical score."

RR: We've only gotten so far as the fact that a resource exists. A resource can be brought into fulfillment as potential, but then we still have the issue of using this resource. And this is complicated by a number of factors. Firstly, technology, as everybody is aware, changes at an extraordinarily rapid rate. And if you are intimately dependent upon technology, it becomes a much more problematic reality that you are constantly getting into a situation where the hardware, or the software, or both, have altered. And it becomes impossible to do the thing that you have previously done in the same way - perhaps not at all. This changeability means that, not only does change occur over time rapidly, but distinctions exist at every moment, so that the technical environments here at the Library of Congress and at the University of California in San Diego; at Ircam in Paris; in Helsinki; in Porto Alegre, Brazil; in Kyoto, Japan ... they're all distinct from one another. And technology will continue to change in different ways in different societies depending upon which manufacturers are powerful, which consumer goods are more popular, what kind of support there is for research - there are all kinds of intangibles. So the basic fact is that technology is an extremely unstable factor. Now, we can figure out ways of using it; and we can get, in our controlled environment, a product that we feel very happy with. The issue then is how you transport that effect, that impact, to another technical environment reliably. This is very problematic.

SS: It's almost easier to move a whole orchestra around the world.

RR: Well, if you happen to be funded the way Ircam is, you take your own loud speakers, you take your amplifiers, you take your tape playback machines or your computers, and your technicians all with you when you go anywhere. It's the only way that you know it will be right. Most of us can't afford that kind of lavish treatment. So, in any case, what one has to decide is either that one will be everywhere, at every performance (which a number of composers who depend on technology now must do) because he or she is the only one who knows how it is supposed to sound, or one can attempt to devise a new solution to the problem.

Watershed I
As with the Transfigured Wind series, Watershed (1995) involves a set of parallel works featuring a central solo which can be performed with or without technological extension. This composition is the first of Reynolds' to use real-time computer spatialization (although real-time analog technologies were involved in many works of his from the 60s). Composed for four families of instruments (skins, metals, wooden snare drums, and "oddities" or noisemakers), Watershed addresses the topographical and metaphorical implications of its title.

You have a new resource, but utilizing it is not trivial. So I've been thinking about this a lot, and what I have decided to do now - and I started this summer - is to make what I call a "technical score" for each piece that involves technology. And that technical score does not tell anyone how to do it - what it says is that I want certain things to happen. I describe them generically. I describe the kind of effect that they are supposed to produce on the listener. And then I say, in the score, where each of these effects is to begin and end, so it is known on what material they are to be applied. And then ... this is in a real time situation, of course, not the playback of pre-computed materials ... what the technical score does, then, is explain, in the best way and the fullest way that I know, my aspirations for that aspect of the piece. I say, in effect, to a particular technical environment, use your equipment and your expertise and your insights to emulate this result. And this was tested this year, for me for the first time in the case of my percussion solo, Watershed IV, which was done in Belgium. The technologist, Michael Clarke, did a realization for percussionist Geert Verschraegen, entirely on the basis of my technical score. I am convinced that this is really the only viable way to go.

If you think about a traditional musical score, you realize that a violin part to a Mozart sonata does not tell the violinist which way to move the bow, where to put his finger on the strings, how to hold the violin, how hard to push, how fast to pull the bow. It doesn't tell him any of those things. It just says: I want this note here, and then this one and then this one, and this should be louder than that one, and so on. What I'm trying to see, with a technical score, is whether there are parallel ways of addressing technological issues which are perhaps much more variable even than such a phenomenon as the technique of an individual instrumentalist. It's very clear that if we continue to pretend that technology is a fixed thing, a box on a shelf, we're sure to be endlessly frustrated. I think we have to treat technology as a medium, and we have to explain how we want it to work, and then assume that a sufficient number of sensitive and competent people can be found in different places who will rise to this challenge. We'll then have a situation in which technology is used as a medium interpretatively, perhaps rather like lighting is used in a theater.


Notes

  1. Overall Plan - Over decades of compositional experience, Reynolds has evolved a strategy to aid in imagining, planning, and later realizing the overall form of his musical works. A proportional plan or diagrammatic representation of the large shape of a proposed work is made, becoming a provocation to thought and a repository for an evolving level of clarity about how to proceed and why. [Return to text]
  2. The Psychology of Time - The eminent French psychologist, Paul Fraisse, wrote this, for its time (1963), definitive study of the experimental work in time perception. Reynolds came upon an English translation in a Hong Kong bookshop in 1966, and it affected his attitudes towards musical structure and rhythmic practice. Fraisse proposed, in this book, the concept of the "perceptual present," a duration (usually less than 7 seconds) during which an individual can remain unaware of the past and not anticipate the future. [Return to text]
  3. Fibonacci Series - Composers have long been interested in numerical games as organizational stimulants (one thinks of elaborate canonic reversals and of Mozart's dice-tossing). As the largely duple and triple rhythmic conventions of Western music (controlling not only local metrical and rhythmic behaviors but also the larger phrase structurings) began to be questioned, composers found irregular, sometimes geometric sources of numerical authority interesting. Primary among them has been the Fibonacci series (0, 1, 1, 2, 3, 5, 8, 13, 21, ...), in which each number is the sum of the preceding two. [Return to text]
  4. Logarithmic Series - Number series provide a basis for grouping and ordering elements in music. In many traditions these are simple (2, 4, 8, 16, ...). In more recent times, the Fibonacci series (0, 1, 1, 2, 3, 5, 8, 13, 21, ...) has attracted artists. Logarithmic series can vary widely in their specifics, but are used by Reynolds, in integer approximations, to create the effect of accelerating or ritarding formal shapes. [Return to text]
  5. Cunningham, Merce - In the 1940s, Cage encountered choreographer Cunningham (b. 1922) at the Cornish School in Washington State, and this began their lifelong partnership. Cunningham's ideas about dance (He had been a principal with the fabled Martha Graham Company.) were especially radical structurally, allowing the fruitful co-existence of Cage's music with his sometimes chance-determined dances. Cunningham himself was an idiosyncratically flexible and mesmerizing performer. [Return to text]
  6. Mumma, Gordon - Mumma (b. 1939) was one of the co-organizers of the ONCE Festivals in Ann Arbor, Michigan during the 60s. An early innovator in relation to the custom design and imaginative real-time use of electronics, Mumma then became one of the musicians of the Merce Cunningham Dance Company, and was a longtime collaborator of John Cage in this capacity. He continued his explorations while Professor at the University of California, Santa Cruz. [Return to text]
  7. Ashley, Robert - A co-founder of the Ann Arbor ONCE Festivals during the 60s, Ashley (b. 1930) became an activist educator at Mills College in Oakland, then settled into the New York scene, where he concentrated on video and theater. His unique sensibility is characteristically exercised through original, witty, and elegant texts and by an experimental, eclectic, and pop-influenced musical perspective. His work is widely recorded (both music and media compositions) and published (cf., his book, Music with Roots in the Aether). [Return to text]
  8. Ring Modulation - In the 1960s, as the electronic manipulation of synthesized sounds and those captured with microphones increased in interest and frequency, a variety of convenient and attractive strategies were developed. Favored means were likely to be both easy to accomplish and broadly applicable. Filtering and modulation were among them. Modulation involves the systematic and continuous influence of one stream of sounds over the way that we hear another. Ring modulation, in particular, combines two sounds so that the signal and the carrier together create a new family of "sideband" sounds by the addition and subtraction of their component frequencies. Because ring modulation proliferates such products, simpler inputs result in more easily characterized and usable results. [Return to text]
  9. Filtering - When sound materials are electronically represented, the frequency response of the recorded form is an important feature of its fidelity. Ideally, a recording represents the full range of frequencies associated with the source. When one listens to orchestral music over a small portable radio, however, one notices that, although the source is recognizable, there is a marked change in the character of the sound itself (far less strength in the low frequencies, somewhat less in the higher frequencies). Filtering is the process by which one can alter the nature of a sound by removing specified bands of frequencies from the composite present in the original. [Return to text]
  10. Synthesis - In the 50s and 60s, as interest in composition with electroacoustic means increased, there were two schools of thought in Europe. The French subscribed to musique concrète, which took natural recorded sound as its source and concentrated on modes of alteration and montaging. The Germans, however, were interested in music comprised of artificially generated, or "synthesized" sounds built up of individual simple (or sine) tones. Later, particularly at Stanford's CCRMA, more complex "physical models" were proposed as a result of the mathematical analyses of real musical instruments. In any case, the aim of synthesis is to create interesting musical materials that are not directly dependent upon the actual behavior of physical systems, and are, in that sense, "synthetic." [Return to text]
  11. Decay Time - The advent of musical applications for electronic devices in the 50s and 60s involved the need to describe musical sounds in objective terms. As a shorthand approximation, tones were said to have an initial transient phase, a steady-state period, and a decay time -- roughly corresponding to the initial attack, the portion of the sound during which the pitch and timbral character were established, and then a phase when the sound died. In describing the phenomenon of acoustical reverberation, the "decay time" of a performance space refers to the duration in time, after the cessation of a sound source, that it will typically persist through the medium of reflected energy. [Return to text]
  12. Phase Vocoding Analysis and Resynthesis - As computer processing of sound became more common, it was discovered that even very complex sounds can be adequately represented by a sufficiently large number of "bands" that described the fluctuation over time of all the frequencies involved, as well as their time-varying intensities. Once this information has been analytically derived from a sound, it may be re-synthesized, now altered in various ways (length, pitch, etc.). These changes can be dramatic and (within limits) without appreciable effect on the other apparent characteristics of the source sound. [Return to text]
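The sum-and-difference behavior described in note 8 is easy to verify numerically. In this sketch (all function names are invented for illustration), ring modulation is literal sample-by-sample multiplication, and a DFT probe confirms that sine inputs at 300 Hz and 100 Hz leave only 400 Hz and 200 Hz products:

```python
import math
import cmath

def ring_mod(signal, carrier):
    """Ring modulation is sample-by-sample multiplication. By the identity
    sin(a)sin(b) = 0.5*(cos(a-b) - cos(a+b)), two sine inputs produce only
    their sum and difference frequencies; the inputs themselves vanish."""
    return [s * c for s, c in zip(signal, carrier)]

def bin_level(x, f, sr):
    """Magnitude of the DFT bin at frequency f, used to probe the spectrum."""
    n = len(x)
    return abs(sum(x[i] * cmath.exp(-2j * math.pi * f * i / sr)
                   for i in range(n))) / n

sr, n = 1000, 1000
sig = [math.sin(2 * math.pi * 300 * i / sr) for i in range(n)]  # 300 Hz signal
car = [math.sin(2 * math.pi * 100 * i / sr) for i in range(n)]  # 100 Hz carrier
out = ring_mod(sig, car)
```

Probing the output shows strong energy at 200 and 400 Hz and essentially none at the original 100 and 300 Hz: the "new family of 'sideband' sounds" the note describes.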