(Jazz and Multimedia) DAN TEPFER
This is August’s Featured Interview
Dan Tepfer is a Brooklyn-based jazz pianist and composer. He has performed with some of the leading lights in jazz, including extensively with veteran saxophone luminary Lee Konitz. As a leader, Tepfer has crafted a discography already striking for its breadth and depth, ranging from probing solo improvisation and intimate duets to richly layered trio albums of original compositions. In this interview, he discusses his groundbreaking multimedia project Natural Machines, which integrates computer-driven algorithms into his improvisational process.
Interview by Tyler Nesler
You've been interested in programming, physics, and playing music from an early age. Did all of these pursuits develop more or less simultaneously? What do you think are some unique ways that these interests overlapped and helped you develop early on as a musician?
I grew up with a biologist father and an opera-singer mother. And going another generation back, my mom’s dad was a jazz pianist, and my dad’s dad was a biologist. So I was bathed in music and science from age zero. I’m not sure that I could tell you how this helped me develop early on as a musician, but what it did do was establish in my mind that art and science are two equally valid, and in many ways separate, ways of seeing the world. Art is about subjective reality; science is about objective reality. These things don’t really have a bearing on one another. Objective things we can establish rationally and hope to agree on, because they should look the same no matter who we are. They’re worth arguing about; science is about arriving at a universal truth. In contrast, subjective experiences and beliefs are valid in themselves. There’s simply no point in arguing about them, and that’s where art is so wonderful, because for me to express an idea through art, there’s no need for it to be objectively true; it just needs to resonate somehow. The art that I love leaves a huge amount of room for mystery. I feel that if people only realized that objective reality and subjective reality are separate, and that just because I believe something, subjectively, I shouldn’t expect anyone else to believe it, a lot of the strife and violence in the world would disappear.
The analytical and the artistic are sometimes thought of as opposing forces. How do you believe your Natural Machines project explores the ways in which the core scientific elements of music complement and reinforce its more spiritual aspects?
I like to say that the music I love lives at the intersection of the algorithmic and the spiritual. The music of Bach, or the music of Coltrane, is supported equally by structural rules on the one hand and by emotion / intuition on the other. I want to be careful about distinguishing this from science, though. Just because people have come up with algorithmic systems, such as the rules of counterpoint, that reliably result in music achieving a certain sound that people have found desirable, that doesn’t mean that there’s any kind of objective scientific “truth” about it. It’s just a structural system that works, somehow. Certainly there’s a basis in physics for some of the rules of harmony — for example, we find major triads, as well as diminished and augmented ones, right there in the lower parts of the harmonic series — but there’s a huge part of the common practice of music that can’t be explained by anything objective. Just like language, these are things that have evolved slowly over time in human cultures; in many ways there might not be an objective reason for them. And that’s 100% okay.
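The harmonic-series point above can be made concrete with a few lines of arithmetic (my own illustration, not something from the interview): partials 4, 5, and 6 of any fundamental are in the ratio 4:5:6, which is a root, a just major third (5/4), and a perfect fifth (3/2) — a major triad.

```python
# Minimal sketch (my own illustration, not from the interview) showing
# why the major triad sits in the lower harmonic series: partials 4, 5,
# and 6 of any fundamental are in the ratio 4:5:6 — a root, a major
# third (5/4), and a perfect fifth (6/4 = 3/2).
import math

def cents(ratio):
    """Interval size in cents (100 cents = one equal-tempered semitone)."""
    return 1200 * math.log2(ratio)

fundamental = 110.0  # A2, an arbitrary choice
partials = {n: n * fundamental for n in (4, 5, 6)}

print(partials)             # {4: 440.0, 5: 550.0, 6: 660.0}
print(round(cents(5 / 4)))  # 386 cents: a just major third
print(round(cents(6 / 4)))  # 702 cents: a just perfect fifth
```

For comparison, the equal-tempered major third is 400 cents and the fifth 700 cents — close to, but not identical with, the ratios the harmonic series provides.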
The name Natural Machines is meant to convey that this music lies right at the intersection of organic and mechanical processes. I’m the organic process — I improvise, and try hard to let my intuition guide me. The computer is the mechanical process. By explicitly separating the algorithmic and the spiritual like this, by physically separating the roles, I hope to make it clear that there’s nothing to be feared from structure in music; that, indeed, music IS structure in tandem with intuition. It’s objective reality and subjective reality, together. And in this age of anti-rationality sentiment, I hope that this can be a useful message.
What has the difference of feeling been for you between playing with a machine versus playing with a human virtuoso like Lee Konitz? They're both working from logical structures and reacting to your playing. But how do you think the emotional nature of your own musical communication changes when you're playing with a person versus a program?
It’s surprising to me how similar they are, actually. Because my algorithms are being calculated faster than I could calculate them myself, the results genuinely surprise me. I’ve found that for this music to work, I need to listen to what the computer plays as intently and carefully as I would to another person. And in both cases, the music that comes out of it is a result of each participant willfully finding common ground with the other — just as we do when we have a conversation with someone. In a good conversation, we don’t just blab on about whatever we feel like talking about. Both participants move together through subjects that are mutually interesting, in a language and register that’s comfortable to both. It’s the same here, whether I’m playing with another person or with an algorithm — I’m improvising, hearing how my counterpart reacts, and finding ways to make what I’m doing work with what they’re doing. It’s a positive feedback loop.
The biggest difference between playing with my algorithms and with someone like Lee Konitz is that the machine has no artistic vision. It’s just establishing formal constraints around what I’m doing. It’s like going to a fun house of mirrors — it takes whatever you give it, and transforms it. So if I don’t give it anything, it gives nothing back. This is very different from playing with Lee, where one of my favorite things to do is not play at all for a while and just listen to him!
Who are some people who have also done music and machine experiments that inspired your own work in the field?
Natural Machines grew in a very organic way for me, basically through tinkering and experimenting. I’ve never taken a course in music technology, or worked with anyone in that field. I’ve written every line of code myself. So it’s really homegrown. In many ways my biggest inspirations in this project are Bach, Ligeti, and Steve Coleman, all three of whom have reinforced the idea in me that meaningful music can be made from algorithms. I’ve also loved the music of Aphex Twin since I was a teenager. His recent record Computer Controlled Acoustic Instruments pt2 EP has been an inspiration to me, although I was already working on Natural Machines when it came out. In terms of the visualizations I’ve been making, it all started with seeing the work Stephen Malinowski did for Björk’s Vulnicura, which I saw live in NYC. I loved the radical authenticity of showing each and every note in the music as clearly as possible. I took that idea — of a live graphical score — and then the challenge was how to make it happen in real time, with the computer having no a priori knowledge of what was going to come next. I then went deeper and made the visualizations as specific as possible to each algorithm, with the intent of showing its underlying structure.
Your live performances of the Natural Machines project include accompanying visual geometric representations of the music being played. In what key ways do you think the visual aspects of this project complement or enhance the musical elements?
What I like about there being a visual aspect to the show is that it’s fundamentally interactive — the audience can choose what experience they have. They can close their eyes, they can focus on me at the keyboard (I always have a camera on the keyboard, too, so that everyone can see the keys moving by themselves in response to what I play), or they can choose to get lost in the visual representation of the music. I think the visuals enhance the experience in two ways: first of all, in our intensely visual culture, I think they can amplify the hypnotic aspect of the music, keeping the audience not only sonically, but also visually engaged. Second, the visuals I’ve created aren’t random — I’ve tried in each one to convey what’s happening structurally in each algorithm. So they have the potential to make the music more legible to the audience. And I’ve had several people tell me that the visuals enabled them to understand what was happening below the surface, which I was glad to hear. It made the music less opaque, opened it up for them. This seems to be particularly true in the Inversion algorithm.
The immersive combination of visuals and music in Natural Machines allows an audience to experience a form of synesthesia. How much have you studied this as a physical perceptual phenomenon? And have you actually heard from anyone with natural inherent synesthesia who has attended a Natural Machines performance (or even simply watched the videos)? It seems like it could all be quite an intense experience for someone with those abilities.
I’ve heard a few people say that the visualizations I’ve made match up pretty closely with what they see in their mind when they hear music. But synesthesia is a very personal phenomenon — everyone experiences it differently. And I’d go even a little further: one of the reasons I wanted to put out a strictly audio version of the project (the Natural Machines record) is that I find that the visuals can actually be limiting in some way. Seeing them tells you what to imagine. But hearing the music without any visuals imposed on you allows you to construct your own visual world, which is always going to be more personal, and sometimes richer, even if less specific, the way dreams can sometimes be much more intense than reality. And I think it’s really interesting to have both experiences. I can’t say that this is something I’ve studied, but I will say that I’ve been very deliberate to make sure the visuals are strictly a representation of what’s happening in the music, and nothing else. The reason for this is that I’ve found that music, when paired with visuals that have their own narrative, tends to immediately become background music in our perception, a sort of accompaniment to the action on screen. But I believe that if the visuals are strictly correlated to the music, then this problem mostly goes away.
Have you ever considered experimenting with psychoacoustical scales that aren't normally used in music composition, such as the Bark scale and the mel scale? What about the possibilities of composing for sounds which are out of the normal hearing range for humans, but may be perceived via the somatosensory system?
These are things I’ve been interested in, yes. In fact, the second episode of Natural Machines deals with the difference between just intonation and equal temperament. It’s the only track in which the computer produces sound on its own, without activating the piano. When I play a three-note chord in my left hand, my program calculates the harmonic ratios between the tones of that chord in just intonation, creates a visual representation of them, and plays the chord in just intonation with simple sine waves. So the chord the computer plays is tuned quite differently from what I’m playing on the piano — and I love the friction between these two microtonally related worlds. I should note that, for variety, I’ve chosen to have some chords represented and played in equal temperament — so only some of them, the ones that look the simplest, with whole-number ratios between the tones, are in just intonation.
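The retuning idea described here can be sketched in a few lines. This is a hypothetical reconstruction, not Tepfer’s actual program (he wrote his own code, which isn’t public in the interview): approximate each interval above the chord’s root by a simple whole-number ratio, then derive the just-intonation frequencies a sine-wave voice would play.

```python
# Hypothetical sketch (not Tepfer's actual program) of retuning a
# three-note chord from equal temperament to just intonation:
# approximate each interval from the root by a simple whole-number
# ratio, then compute the frequencies for the sine-wave voices.
from fractions import Fraction

def et_freq(midi_note, a4=440.0):
    """Frequency of a MIDI note in 12-tone equal temperament."""
    return a4 * 2 ** ((midi_note - 69) / 12)

def just_ratio(semitones, max_denominator=8):
    """Simplest whole-number ratio near an equal-tempered interval."""
    return Fraction(2 ** (semitones / 12)).limit_denominator(max_denominator)

def retune_chord(midi_notes):
    """Keep the root at its ET pitch; tune the upper notes by just ratios."""
    root = midi_notes[0]
    root_freq = et_freq(root)
    return [root_freq * float(just_ratio(n - root)) for n in midi_notes]

# C major triad (C4, E4, G4): the ratios come out as 1, 5/4, 3/2 —
# i.e. the 4:5:6 chord found in the harmonic series.
chord = [60, 64, 67]
ratios = [just_ratio(n - chord[0]) for n in chord]
freqs = retune_chord(chord)
```

The “friction” in the track comes from the fact that the piano’s E4 and G4 stay at their equal-tempered pitches while the sine waves sound about 14 cents flat (the third) and 2 cents sharp (the fifth) relative to them.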
I’ve also experimented with creating music from the orbital ratios of planets, which likewise leads to microtonally tuned harmonies outside of our normal equal temperament. With respect to the range of human hearing, one of the nice coincidences I’ve observed is that, with the closest planet to the Sun, Mercury, orbiting in 88 days, and the furthest, Neptune, orbiting in about 60,000 days, there’s an approximately 1 : 1000 ratio between the shortest and longest period in the planetary system. This is just about the same as the range of human hearing, which goes from around 20 to 20,000 Hz. So we, humans, are actually able to hear the entire Solar System at once, within the range of our hearing — provided we’re young, because by age 40 or so, not very many people can still hear past 16,000 Hz.
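A back-of-the-envelope check of this claim (my own arithmetic, not Tepfer’s code; the orbital periods are approximate and the choice of a 37-octave shift is mine): convert each planet’s orbital period to a frequency, then transpose the whole set up by the same number of octaves. A power-of-two shift preserves every ratio between planets, and the entire system does indeed fit inside the audible band.

```python
# Back-of-the-envelope check (my own arithmetic, not Tepfer's code):
# transpose each planet's orbital frequency up by the same number of
# octaves — a power of 2, which preserves every inter-planet ratio —
# and the whole Solar System lands in the audible band.
periods_days = {           # approximate orbital periods in Earth days
    "Mercury": 88.0,
    "Venus": 224.7,
    "Earth": 365.25,
    "Mars": 687.0,
    "Jupiter": 4333.0,
    "Saturn": 10759.0,
    "Uranus": 30687.0,
    "Neptune": 60190.0,
}

SECONDS_PER_DAY = 86400.0

def audible_freq(period_days, octaves_up=37):
    """Orbital frequency in Hz, shifted up by whole octaves."""
    return (1.0 / (period_days * SECONDS_PER_DAY)) * 2 ** octaves_up

freqs = {p: audible_freq(d) for p, d in periods_days.items()}
# With a 37-octave shift, every planet sits between 20 Hz and 20 kHz:
# Neptune near the bottom (~26 Hz), Mercury near the top (~18 kHz).
```

The Mercury-to-Neptune frequency ratio works out to roughly 1 : 700 — the same order of magnitude as the 1 : 1000 span of human hearing, which is the coincidence described above.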
What do you think of the possibilities of incorporating other human musicians into playing along with you and your programs, such as adding a bass player and a drummer who also interact with the programs and vice versa?
I’ve experimented with that — I’ve done algorithmic duets with drummer Arthur Hnatek, singer Claudia Solal, and bassist François Moutin, and there’s even a track on my recent record with Lee Konitz, Decade, where I use my tremolo algorithm as we improvise. The track is called “Through the Tunnel.” In all these situations, though, I’m the only one directly interacting with the algorithm. I plan to explore having other people interact with it as well.
The Natural Machines project also includes 3D-printed earrings that are elegant little renderings of musical structures. Do you have any interest in taking musical structures even further into the visual art realm by depicting them on a much larger scale, such as turning them into large-scale sculptures, or even using them as the basis of an architectural design for a building?
Yes! I have a dream of making big steel sculptures of the major, minor, diminished and augmented triad shapes — the building blocks of western harmony — and displaying them on the plaza at Lincoln Center.
Do you have any plans in place yet to further your experimentation with programming and live interactive playing through new projects, such as creating another album or a multimedia production?
In my mind, yes — first I want to put out an app for the current album that allows people to see the whole thing in VR, using Google Cardboard or a dedicated VR device. Then there will be another album, exploring — I think — the more percussive possibilities of algorithmic improvisation. I’ve been tinkering with solenoids…