Holly Herndon: Epiphany Epiphany Epiphany
On machine learning, Opus Dei, and Dolly Parton.
“The voice is a collective instrument,” the artist Holly Herndon observes in this conversation with Hans Ulrich Obrist. “It’s formed through everyone that it comes in contact with.” Herndon’s voice is central throughout her work, whether in music—she has made three full-length albums, along with numerous installations and exhibitions querying the intimacy between humans and our tools—or on the podcast Interdependence, where she and Mat Dryhurst interview guests on topics spanning art, music, philosophy, and tech. Through these endeavors, she forges networks for exploration and knowledge exchange that have shaped the public discourse around emerging technologies like blockchain and AI and their impact on creative production.
Broad, networked, and generative dialogue is also central for Obrist, a curator, critic, and art historian who currently helms London’s Serpentine Galleries as artistic director. Beyond his writing and exhibition-making, Obrist has conversed with all kinds of cultural figures for his Interview Project, generating an archive totaling over 2,000 hours of talks. This “endless conversation,” as he describes it, forms a colossal web of perspectives on the history and role of art institutions, as well as different approaches to curatorial practice.
Herndon and Obrist are both, in this sense, paradigmatic of a certain tendency of Web3: value production through networked community. Both represent highly connected and dynamic nodes within their respective social networks, forging links between artists across generations, media, practices, and time.
In the years since Obrist began his tenure at Serpentine in 2006, the art world has been transformed, prodded, and propelled by the growing influence of new technologies like the ones that Herndon explores. And although he comes from the institutional—some may say “trad”—art world, Obrist has been paying attention to developments like blockchain for some time. In 2015, we did a show together at Serpentine titled Products for Organising, which centered on the emergence of organizational norms in businesses and governments and included work about Bitcoin and its attendant notions of decentralization. Obrist also interviewed Vitalik Buterin for TANK magazine in 2018, when the ERC-721 token standard was just being adopted. Buterin refers, in that conversation, to Ethereum’s plan to eventually transition from its original consensus algorithm, proof-of-work, to the more secure and less energy-intensive proof-of-stake.
The day Buterin spoke of has now arrived: the Ethereum “merge” took place in September of 2022. To mark the occasion, ETHBerlin hosted a hackathon, along with an exhibition tracing the history of Berlin’s blockchain art scene across a number of events and gallery shows, some of which I had been involved in myself. During one roundtable discussion, visitors to the show, which was curated by María Paula Fernández and Stina Gustafsson, were invited to contribute to a crowdsourced timeline. Herndon and Dryhurst, her partner and longtime collaborator, recalled that her 2015 record Platform dealt with a number of topics that would go on to occupy mindshare in the crypto scene and the legacy art world alike. Among these moments of prescience was a song titled “DAO,” which included samples from a spatially distributed performance. It was simultaneously computer-mediated and intimate, reflective of trends that had begun to emerge and predictive of those yet to come.
The exhibition also occasioned a phone call to Obrist. I rang him to invite him to come see the show, unwittingly interrupting the interview you’re about to read. He answered and put me on the spot, asking if there was anything I wanted to ask Herndon. I went with: “How can you retroactively apply today’s most exciting new AI tools to the past?”
She replied by observing that today’s models are informed by the past anyway. You don’t need to be deliberately retroactive about it; they’re already working this way by design. This resonates with dynamics I have explored in my own work, including, most recently, Dotcom Séance, a collaboration with CryptoKitties illustrator Guile Twardowski that resurrects companies that failed during the dotcom crash as NFTs, with new, AI-generated logos based on text inputs. Like Herndon, we were interested in exploring how, through machine learning and other tools, the internet can be made to reimagine itself.
There is an aspect of timelessness in much of the technology that is commonly lauded as new—of uncanny ideas that don’t go away, but go latent, only to get resurrected later on. Even blockchain and AI can be forces of re-evaluation and revival, resurfacing forgotten moments of culture and provoking new modes of engagement with the past. Just as Herndon and Obrist use their voices to do, these tools can engender novel forms of cross-generational dialogue—and create new openings, new value, in the process.—Simon Denny
Hans Ulrich Obrist: How did it all begin? How did you come to art and music? Was there an epiphany? Were new technologies always part of it, or did it start in a more analog way?
Holly Herndon: No, it wasn’t an immediate childhood fascination with technology. I grew up in Appalachia, in the shadow of Dolly Parton and her legacy. I started making art and music in the church and you can certainly still hear that in the work that I make with choirs. I was scared of computers until I was an adult, and then I realized that they were actually a really interesting new instrument that I could do more with than with the analog instruments that I was using at the time.
HUO: The choir and church thing is interesting, because I’m obsessed with Hildegard von Bingen, who I think is one of the most contemporary of our composers.
HH: That time period, and of course her music, you could say, was an influence. She was a beast. It’s funny how music aesthetics get really baroque and complicated, and then we rediscover minimalism and history falls in on itself.
HUO: What about other inspirations at the beginning? I recently spoke with Sylvia Wynter. Alexis Pauline Gumbs did this great book on her called Dub (2020). Gumbs says she’s obviously inspired by Sylvia Wynter, but it’s not like a genealogy, where you are inspired by the past. She said, “I’m working with Sylvia Wynter.” The question is not so much whom you’re inspired by, but whom are you working with from the past?
HH: I’m trying to think about technological systems dealing with AI at the moment, thinking about them as a new internet and thinking about the evolution of the internet as something that came out of humanity and human collaboration. I like to go back and think about early tool development and the early choreography that impacted our brain development, early ancestors we’re still collaborating with today. I don’t know their names, but they’re the Neanderthals and early humans that are still in our blood.
HUO: Because we’re talking about beginnings, what is your number one in your catalogue raisonné? Visual artists usually make a catalogue raisonné at some point in their lives; some do it towards the end of their lives, and some never do it themselves, in which case it’s done posthumously. But I always think it’s interesting. What’s the first work you did which was no longer student work, where you feel the language set in?
HH: I think it’s when I started to use my name. Before, I was always using monikers with collaborative music projects. But I would say my first mature work was Movement (2012), which I released as an album.
HUO: Can you talk about that?
HH: I was coming out of Mills College at the time and really embracing the laptop as my primary instrument. I was learning programming and I was using my voice as a data input stream or controller for performance. I wasn’t thinking of myself as a vocalist in the classical sense. I was trying to make laptop performance embodied and physical. I could do all kinds of things with my voice if I put it into this digital space that I couldn’t do in the acoustic space, essentially exploring this intimate relationship that I was having with the laptop.
HUO: Because before we talk about the “Extreme Present,” to quote the book Shumon Basar, Douglas Coupland, and I did, I just want to fast-forward through your main epiphanies. After Movement, what would be the next epiphany?
HH: Probably Platform (2015). It wasn’t just an album. It came out of a series of different projects. I was really coming to terms with online life and the politics of the platforms we were all using, like Instagram, where we had really hyper-intimate relationships in spaces that were not actually public but private and for-profit. I was also trying to uncover some of the more intimate and real relationships there, trying to see my digital self through that album as part of myself, not as this other. That album actually has a track on it called “DAO,” which was made in 2014. I think it was even before Ethereum was released.
HUO: A DAO epiphany.
HH: A lot of this should be credited to my partner, Mat Dryhurst. Mat was at UCLA researching decentralized website-specific posting in this project he had called Saga (2015). He was trying to make smart contracts without knowing what smart contracts were. That’s how we found out about Ethereum. There’s also a collaboration with Amnesia Scanner on that album called “An Exit,” because “exit” was a really big topic at that time. We were thinking, how do you exit from the contemporary art world? How do you exit from the current economic paradigm that we’re all in? It was very critical of platform capitalism.
HUO: With DAOs, it’s this idea of participation. How do you think your listeners can participate in and co-decide what you do next?
HH: That fast-forwards us way into the present, where we’re dealing with machine learning models, and I think that’s very much an interactive and collaborative experience, specifically with the voice. My voice is a product of me growing up in East Tennessee, then moving to California, and then moving to Berlin; that’s a collective instrument that was formed through everyone it came in contact with, and yet I as an individual perform this instrument with agency. I think that’s one of the reasons why I’m so obsessed with the voice: it’s always in this gray area between the collective and the individual. I think things are about to get really cool and psychedelic with how people can interact with each other’s identities and artwork and catalogue and archive—everything.
HUO: How did you bring machine learning into your album, Proto (2019)?
HH: Mat and I teamed up with Jules LaPlace, an American software engineer, and we just started downloading software off of GitHub, reading white papers from academia, and playing around. It was really important for me to understand the fundamentals and what was possible. For example, style transfer was originally developed for images—like Hans Ulrich Obrist in the style of Van Gogh, and you could get whatever image of that—but nothing comparable existed in the audio space to the same degree. We were making spectrograms, images of sound, doing style transfer on them, and then transferring the spectrogram back into audio, which sounds terrible because spectrogram-to-audio is not a high-fidelity situation. But now we have all of these really high-fidelity systems for things like timbre transfer. It’s been really fun to see that unfold and develop. We started that project around 2016, and it was released in 2019. It takes a long time to make these things. We knew from the beginning that what we fed our AI baby, Spawn, was the only world that AI knew. So the data is really important.
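The workflow Herndon describes can be sketched roughly in code. This is a minimal, hypothetical illustration (not the actual Proto pipeline), assuming the librosa library: audio is turned into a magnitude spectrogram, an image-domain style transfer is applied to it, and the result is inverted back to audio with Griffin-Lim, which has to estimate the phase information that was thrown away; that estimation is the source of the low fidelity she mentions.

```python
# A minimal sketch of the audio -> spectrogram -> audio round trip described
# above, assuming librosa is installed. The image-domain style-transfer step
# is left as a placeholder; the point is that reconstructing sound from a
# magnitude-only spectrogram requires guessing the missing phase.
import numpy as np
import librosa

def audio_to_spectrogram(path, sr=22050, n_fft=2048, hop_length=512):
    """Load audio and return a magnitude spectrogram: an 'image of sound'."""
    y, sr = librosa.load(path, sr=sr)
    stft = librosa.stft(y, n_fft=n_fft, hop_length=hop_length)
    return np.abs(stft)  # the phase is discarded here

def spectrogram_to_audio(magnitude, hop_length=512, n_iter=32):
    """Reconstruct audio from a magnitude-only spectrogram with Griffin-Lim,
    which estimates the phase iteratively and sounds audibly degraded."""
    return librosa.griffinlim(magnitude, n_iter=n_iter, hop_length=hop_length)

# Hypothetical usage: the file name and style-transfer function are stand-ins.
# magnitude = audio_to_spectrogram("voice.wav")
# stylized = run_image_style_transfer(magnitude)  # any image style-transfer model
# audio_out = spectrogram_to_audio(stylized)
```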
HUO: Is the child here?
HH: Well, the child is here. I’m pregnant.
HUO: Congratulations.
HH: The child evolved. Spawn the noun has evolved into Spawn the verb. We use the term “spawning” for when you’re able to create new media trained on a canon of existing media. It’s different from sampling. Sampling is a one-to-one mechanical reproduction, and spawning is a generative ability to make entirely new works in the logic of something that came before. It’s a really important distinction that is difficult for people to grasp.
HUO: There’s a lot of discussion about AI. We had this conference hosted by Google Cultural Institute [now Google Arts & Culture] with Ian Cheng, Rachel Rose, and Hito Steyerl, and the conversation revolved around whether AI would replace human labor. I think what the artists all said is that it’s more like a collaboration. AI becomes a collaborator. Is Spawn a collaborator of yours?
HH: Spawn was definitely a collaborator on Proto, and now Spawn has evolved into a larger framework. Essentially, you could think about machine learning as a new internet. The impact is that profound. Just as the internet replaced a lot of jobs, it also created new jobs, and I’m hoping that machine learning will do the same. But we have to deal with it now and not just wait for it to unfold.
HUO: Can you tell me about your NFTs?
HH: The first one we did was Crossing the Interface (2021), and that was based on the 2013 libretto we commissioned from Reza Negarestani, who’s a philosopher of artificial intelligence and technology. We wanted to revive that text, and I think those are some of the first NFTs and some of the first works made using text-to-animation. It’s a 13-part narrative series that we released through Foundation.
We also released an NFT series called CLASSIFIED (2021); it’s really important for me, and for the epiphany conversation we’re having. It’s a series of self-portraits. Think about it as a corollary to the camera, capturing the reality of that latent space. I was exploring what’s known as an embedding within that latent space. My embedding often exhibits qualities of a red braid or blue eyes. You can actually check it out with a tool that Spawning [an organization established by Herndon and Dryhurst alongside Jordan Meyer and Patrick Hoepner that is building AI tools for artists] just released this week called HaveIBeenTrained.com. You can put in your name, which would be your embedding keyword, and all of the images that show up are attached to your name in the LAION system. This is where things get really exciting and psychedelic. This is the public understanding of you. It’s an exercise in self-portraiture. You don’t have control over that at the moment. That’s one of the things that Spawning is trying to do: allow individuals and artists to actually tweak what their embedding is so that when somebody types in your name, you can actually feed it with the data that you want, or you can opt out of the system.
Say you’re trying to build a travel agency on top of this AI substrate—it’s important that if you put in “airplane,” the collective public understanding of airplane is correct, that it’s a one-to-one understanding. But maybe you’re an artist who wants to be more creative with your reality. The term we’re using right now is “reality engineering,” where, in your own personal model, you can actually have airplane be whatever you want and have whatever properties you want it to have. For game development, or for playing with your identity, the latent space in the public data is built from these public images, but you might want to artificially inseminate it with your own idea of who you are, or even a collective idea. That’s what Holly+ plays with a little.
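As a rough illustration of what an “embedding” means here: text and images can be mapped into a shared latent space, and a name then retrieves whatever images sit near it. The sketch below uses OpenAI’s open-source CLIP model purely as an assumed stand-in; it is not how HaveIBeenTrained.com is implemented, and the name and file in the usage comment are hypothetical.

```python
# A hedged sketch of a name-as-embedding lookup, assuming OpenAI's open-source
# CLIP package (pip install git+https://github.com/openai/CLIP) and torch.
# Illustrative only; this is not the HaveIBeenTrained.com backend.
import torch
import clip
from PIL import Image

model, preprocess = clip.load("ViT-B/32", device="cpu")

def text_embedding(name: str) -> torch.Tensor:
    """Embed a text prompt (e.g. a person's name) into the shared latent space."""
    tokens = clip.tokenize([name])
    with torch.no_grad():
        emb = model.encode_text(tokens)
    return emb / emb.norm(dim=-1, keepdim=True)

def image_embedding(path: str) -> torch.Tensor:
    """Embed an image into the same latent space."""
    image = preprocess(Image.open(path)).unsqueeze(0)
    with torch.no_grad():
        emb = model.encode_image(image)
    return emb / emb.norm(dim=-1, keepdim=True)

# Cosine similarity scores how strongly an image is "attached to" a name;
# ranking a public image collection by this score is, loosely, the kind of
# lookup described above. The name and file here are hypothetical examples.
# score = (text_embedding("Holly Herndon") @ image_embedding("portrait.jpg").T).item()
```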
HUO: So you can edit. It’s also an exercise in portraiture and self-portraiture. A lot of portraiture in relation to NFTs has been the simplest PFPs. With Ian Cheng’s 3FACE (2022)—where your portrait changes as your wallet changes—and with your self-portraits, they’re the opposite of PFPs now.
HH: The way I like to think about it is it’s almost like iconography in a really classical sense. We were in the Pyrenees a couple weeks ago and we went to this Opus Dei church. They have a giant collection of Virgin Marys from around the world. The Virgin Mary embedding is a mother often with a shroud, and with a child. But then you have people from around the world contributing their idea of what the Virgin Mary is to that collective understanding. Holly+ was built on a public documentation of my personal image, but where it could go and where we want to take it and where it’s interesting is having people submit, like the Virgin Mary, their idea of what Holly+ could be or should be, and then that goes into a collective understanding of what that character is. Then we can explore the latent space of what Holly+ is as a collective hallucination.
HUO: Can you tell me about the podcast? Because I’m a fan of your podcast.
HH: That started during the pandemic. Podcasting is one of these socially acceptable vehicles to just write to someone you find interesting out of the blue and talk to them for two hours. It’s artists, it’s engineers, it’s people from the private and public sector, academics—anyone who’s doing interesting work. Shumon Basar’s episode was really fun. He defines a lot of really interesting terminology and operates at this really wonderful place where it’s academic in nature, but approachable. We try to shine light on really interesting projects in the Web3 space, to explain to people who are new to the space what’s actually interesting about it—not just people trying to get rich. It’s actually people criticizing the current economic paradigm of the internet and trying to come up with a new paradigm.
HUO: That of course brings us to the metaverse, the current buzzword, a term Neal Stephenson coined back in 1992. There are all these different definitions around. What’s your view on the metaverse? Gaming companies are very well-positioned in terms of building it.
HH: Meta might have killed the term metaverse when they rebranded themselves because nobody wants to associate with that name now. It’s this idea that we would all be living in this one world that’s controlled by one company. It’s straight out of an 80s sci-fi dystopia movie. Of course it should be a polyphony. I think that’s one of the core ideals of the Web3 space that was lost in the cacophony of money. It certainly should be a plural space and it should be a space where you can port your identities, your connections, and your communities.
HUO: So interoperability is the key to everything.
HH: 100%. That’s really important. You can spend years building a community on Instagram and then they can take it away from you like that. You have no power in that decision-making. That was one of the core ideas behind Channel, which is another side project that we did during the pandemic.
HUO: Let’s talk about unrealized projects. There are writers, novelists, poets, visual artists, and designers—practitioners of all artistic forms—with unrealized projects, some because they’re too expensive to realize and some because of censorship.
HH: Social norms are just a series of self-censorships.
HUO: My friend, the late Doris Lessing, said: the most important projects we should ask about, or should ask for, are actually the projects we haven’t done. I always wanted to write a novel, but I think it’d probably be embarrassing so I haven’t completed it yet.
HH: Oh, you should do it though, even if it is embarrassing.
HUO: Right? And that’s a form of self-censorship. What are Holly Herndon’s unrealized projects?
HH: They’re … being realized.
HUO: If we can’t yet speak about them because they’re still under wraps, we can talk a little bit about the music you’re doing right now. What’s next?
HH: We finally finished “Jolene.” We’re having it mastered.
HUO: Our conversation today began with Dolly Parton.
HH: She’s the patron saint of northeast Tennessee. Where I came from, she’s a really important figure. There’s a complex feeling when you come from a place like Appalachia. There can be a lot of shame, because it’s not a part of the world that’s necessarily celebrated, so having someone like her who embraces where she comes from was helpful for me as a child. I wanted to do a homage to her.
We have a new model of Holly+. The first model we released made never-before-heard sounds; that’s a polyphonic, aestheticized version trained on my processed vocal stems. You can throw any audio file onto the interface and you get a choir singing back to you. The new version that we have was made with Voctro Labs. That’s a very naturalistic version trained on my natural singing voice, which is something that I don’t often release. Dealing with my natural singing voice was something that was complicated for me at the beginning.
HUO: I found an interview saying you knew of Voctro Labs from YouTube and then approached them, which led to this interesting collaboration.
HH: Jordi Bonada was actually involved in Vocaloid, the software behind Hatsune Miku. That’s a sample-based system. It’s not machine learning, but he’s been doing vocal tech for decades, and honestly, I think he’s the world’s leading person on vocal tech.
HUO: Jordi Bonada from Barcelona?
HH: From Barcelona. He’s a researcher. He’s not a very public person, but I’ve been watching his lectures. He’s been releasing his research to the public for years, and I think he’s a hero. I always wanted to work with him, so I contacted him and his collaborator, Jordi Janer. They have this company called Voctro Labs. They were doing all this choral modeling for software that helps people practice choir singing. I said, “Can we make a custom Holly model?” And they said, “Of course,” so we used that to make the “Jolene” track. We fed that Holly+ system “Jolene” by Dolly Parton, and then my friend, Ryan Norris, who’s a really beautiful guitar player, did some great finger-picking along with it.
We’re releasing a video in the next couple of weeks with Sam Rolfes. He’s very connected to gaming. He built a gaming world essentially based on a pastiche of my childhood. He put himself in a motion capture suit that’s attached to a Holly avatar, and he performs the Dolly Parton song in this gaming world for the video. We just did a performance in Helsinki in August called Identity Play. That was the first large-scale performance I had done using this vocal timbre transfer technique. The new naturalistic version of Holly+ has a real-time system, so you can sing into a microphone and my voice will come out in real time. It’s really wild. I did a TED Talk on this several months ago.
HUO: Can you tell us, in a short synthesis, the message of your TED Talk?
HH: Well, it is essentially explaining how some of these personal model systems work. It explains the concept of spawning and the idea of identity play: training a voice model and an image likeness model on an individual. It unlocks the ability for other people to perform through that individual—in this case, me, because I own my own IP. For the TED Talk, my friend Pher performs first as himself and then as me, so that you can see it in real time.
For Identity Play, I made an autobiographical play where I’m dealing with this idea of embedding a pastiche of my own childhood. Sam Rolfes and his collaborator Alexander Bowman created a virtual world of their idea of an East Tennessee childhood. They made an avatar version of me, played by my friend Ryan Norris (who plays guitar on “Jolene”) and Sam Fuller-Smith (a vocalist from North Carolina), both performing as me. We also have a choir, Philomela Choir from Helsinki, and they’re all wearing Holly embeddings, which means that they all have a Holly-like presence.
HUO: It’s brilliant. Sam Rolfes brings us to the question of gaming. Do you have any projects with games, and is there a Holly Herndon game in the making?
HH: The container of the game, not so much, but I like the ideas from gaming very much—that you can engineer your own reality.
HUO: Worldbuilding.
HH: The way that I’m trying to think about it is a little bit less of a separate container and more of something that bleeds into reality. I think gaming is a huge influence on that. A lot of the work that people have done with worldbuilding—Ian Cheng and other artists—is very interesting. David Kanaga—we just did an episode with him on Interdependence—is an independent game developer and composer, and he does really psychedelic games. He did a dog opera about universal basic income several years ago that was very good. His most recent project is a restaging of Wagner, and it’s narrated by Stephen Fry.
This is what artists are doing all the time anyway: we’re trying to create a reality that otherwise doesn’t exist. An in-person IRL game—there’s a lot to be done there—especially with this reality-modeling stuff. For me, it’s a question of: where’s the border of the game?
HUO: Do you want to create a missing infrastructure for artists to participate in? Can one say that you’re building an infrastructure?
HH: Definitely, we’re doing that actively with Spawning. We are working at the forefront of industry and trying to put together an infrastructure and a framework for artists to be able to have some control over their models, their own likeness. Not just artists: individuals, people. Imagine a Disney film that comes out in five years, and someone’s kid wants to be the princess in the movie. This will be possible. You can be the character.
There has to be a privacy component to it, but the cool thing is that even the more cavalier developer communities want artists to feel good about their data being used. They want an infrastructure in place. No one has built it yet. So that is what Mat and I are trying to do: create a permissive IP environment where people can give consent, be remunerated, or set rules around their own likeness. If they choose to opt out, or if they choose certain ways that they want their model to interact, it’s important that there’s a consent mechanism in place. There is not one at the moment.
The good news is that we’ve spoken to almost all of the top studios and companies that are building these models and they all want a consent infrastructure. That’s what we’re trying to build. People can opt in or opt out. More people will opt in than out because it is the new internet.
Most people opted in to participate in the internet, and once opted in, they should have some control over who they are in that space. You should be able to control what the collective understanding of you is online.
HUO: There’s all these people who impersonate me on Instagram and on social media, and I never know if I should stop it or not. People all over the world are pretending to be me. It’s very confusing.
HH: Authentication...
HUO: Becomes key?
HH: Authentication is key. Consent is key.