How technological capitalism shapes & exploits human existence: Vauhini Vara on her new book SEARCHES

Photo by Brigid McAuliffe

Vauhini Vara is a writer and editor in Colorado. She began her journalism career as a technology reporter at the Wall Street Journal and later launched, edited, and wrote for the business section of the New Yorker’s website. Since then, she has also written and edited for The New York Times Magazine, The Atlantic, and other publications, including Businessweek, where she is a contributing writer. Her journalism has been honored by the Asian American Journalists Association, the International Center for Journalists, the McGraw Center for Business Journalism, and others.

Her latest book is Searches (Pantheon, 2025), a work of journalism and memoir about how big technology companies are changing our understanding of ourselves and our communities; The New York Times, Publishers Weekly, Esquire, and others have named it a most anticipated book.

Her debut novel, The Immortal King Rao (Norton, 2022), was a finalist for the Pulitzer Prize, the National Book Critics Circle’s John Leonard Prize, and the Center for Fiction’s First Novel Prize. Her story collection, This is Salvaged (Norton, 2023), was longlisted for The Story Prize and the Mark Twain American Voice in Literature Award. For her body of work, she has been longlisted for the Joyce Carol Oates Award.

Interview by Nirica Srinivasan

How did Searches come together? I’m particularly interested in the experimental chapters that you include—a chapter of your Google searches, a chapter of your Amazon reviews. They were so fun and covered such a wide range of things. How did you think of them?

In 2019, I wrote this essay made up entirely of my Google searches, and it was published in The New York Times opinion section. And then about a year after that, I published this piece called “Ghosts,” which was an experiment in which I used an AI model, GPT-3, a predecessor to ChatGPT, to try to write about my grief over my sister’s death. When I wrote each of those two pieces, I didn't think of them as necessarily having anything to do with each other, but once I wrote them, I started realizing that I was doing something similar in those pieces. I was engaging with a product of a big technology company in a way that revealed something about myself, the product, and the company behind it at the same time.

Realizing that opened me up to the possibility that there might be all kinds of ways to do that, too, so that's when I started thinking about other experimental pieces I might write. I sold this book on proposal, and at the time I actually just proposed writing a book that was entirely those experimental pieces. When I turned the book in to my editor at the time, Lisa Lucas, that's all it was—it was just these experimental pieces. And she said to me, “I think you understand what it is you're trying to convey with these, but a reader might not—a reader might need some more hand-holding.” She proposed that I add an introduction and a conclusion. But when I heard her, I realized that she was right and that I needed to take that idea of being quite a bit more explicit on the page even further. And that’s when I created the kind of narrative throughline that carries a reader from the beginning of the book to the end.

Then there’s the third part, which are these little interstitial conversations with ChatGPT. Those actually were also Lisa's idea. She said to me, “I wonder what would happen if you shared parts of your book with ChatGPT?” And when she initially said that, I didn't have a positive immediate reaction—I felt like, ugh. Like, “That sounds terrible!” But then I played around just to see what would happen, and I quickly realized that those interactions with ChatGPT were actually doing something similar to my interactions with Google, my interactions with GPT-3, and some of the other experiments I'd undertaken. I realized that it was both revealing something about me and revealing something about the product in a way that I found compelling, and I hope other people will find it compelling, too.

Pantheon

Was there anything that surprised you in writing the book, but also specifically with those interactions with ChatGPT?

I wouldn't have published those sections of the book if ChatGPT hadn't sort of revealed something on the page about how the technology works. And how the corporation, the company—I guess at the time an organization, but the entity behind the technology—might play a role in shaping how the technology functions. I expected that I might see that, but I was also prepared to not see that. I was curious about what might happen. And I think what surprised me was how explicitly both inaccuracies and biases showed up in that ChatGPT conversation. Those were some of the things I thought I might see, but it was pretty intense how it showed up, I thought.

Yeah, I found some of it—I wouldn’t say surprising, because I also kind of expected some of it to show up, but it was shocking. I also think it was really impactful that you don't necessarily comment on those bits of the book as we’re reading it. It's kind of just there for us to take in.

I think a lot about tension, even though I sometimes write experimentally—you know, narrative tension and what it takes to move from one page to the next in any text. And so in those conversations, I was trying to build tension in some ways, through the withholding of reaction on my part. I expected a reader to read those things and think, “Isn't she gonna say something?” Ultimately I do directly address some of those things that have come up earlier. But I thought doing it in real-time on the page would in some ways defuse the tension and make it less interesting.

I think this book was written pre-election, right? I am personally removed from the US geographically so a lot of what I hear is just what hits the news or what I see people react to, but there is this sense that these tech giants have amped up their game, but also that they've sort of given up on any secrecy with their political allegiances. A lot of it has been signposted, but a lot of it has happened quite openly post your book being written. I was wondering what it's been like these last few months for you to know your book is coming out and to have all of these things happen.

Honestly, I wish that I had more time to incorporate some of what's happened more recently! Because I think what it is is an extension of some of the things that I write about in the book. But it's gone further than I ever expected that it would, even when I was finishing writing the book and the election was approaching. I think, hopefully, this book is a nice setup for people to understand what led to this current moment. But I so wish that I could have included this current moment in the book itself.

This is something that I just generally really love across your books, so I’d love it if you talked about it not just for Searches but for The Immortal King Rao and This is Salvaged as well, which is how you approach your epigraphs!

Thank you for asking about my epigraphs, I love my epigraphs! I think of epigraphs as an opportunity to have a little mini-narrative right there on the page before the book even starts. In this book, I have two epigraphs. I like the way in which epigraphs can sort of speak to each other across time. I imagine in both the case of this book and my novel [which both have two epigraphs], the writers of those epigraphs are reaching across time somehow to speak to each other. And the epigraph from my story collection is from a Lorrie Moore story that I love.

I like for the epigraphs to open up or point to possible areas where readers might focus their attention. In this book, both of the epigraphs are about the relationship between language and power. I think a lot of people might read this book and say, “This is a book about big tech,” or “This is a book about the internet.” But to me, it's ultimately a book about language and power, and the way in which we individuals and communities use language and how powerful institutions—in this case, big technology companies—are constantly interested in appropriating that language of ours. I saw those epigraphs in this book as being a way to point to that without necessarily having some subtitle saying “How Big Technology Companies are Appropriating Language,” you know.

It's one of those situations, just like with the ChatGPT chapters, where you don't have to spell it out. And I think that's true of epigraphs everywhere. It's offering up that interpretation or perspective to the reader, which I love.

So, the actual subtitle to the book is Selfhood in the Digital Age. Something that I found both really relatable and impactful in this book is the idea of archiving or memory. I'm someone who really likes to collect things. I really like to have records of things. But I have been thinking over the last few years that if I had a photo album, and a photo faded, it wouldn't matter in a way, because the album is mine. But right now, because I rely on digital memorabilia so much, I'm basically handing over my memory to these tech overlords who don't care about me at all. I think that idea of memory in the digital age specifically is so interesting, and your book is so much about that digital archive or construction of memory that the internet offers (or forces). I was just wondering if you want to talk about that a bit.

Yeah. I agree with you that it's really fraught to know that these companies that we're entrusting with our information don't care about us. And actually, they do care about us to the extent that they can monetize the information we're providing to them. And yet, I continue to use them. I continue to let Google store my searches. I continue to let Amazon keep a record of my purchases. The list goes on and on. And the reason I do those things is because it's convenient for me. It's efficient. It provides me with a small benefit.

I think something I was really interested in with this book was acknowledging that there is this kind of transaction that takes place here. There is something that these companies are offering us that we are choosing to accept. The problem with that is that, of course, when we all do that, the companies benefit disproportionately. Yet we keep doing it. And in talking about the ways that I keep doing it, my point isn't for people to understand that these tech companies are providing a wonderful service. Rather, my point is to acknowledge our own complicity in the rising wealth and power of these big technology companies.

After I read the chapter on your searches, I looked up mine, and it only loaded the last five years or so, but it was still so surprising and fascinating to me, this version of myself through time that I hadn’t thought of before. Some of this is stuff that we don't know that they have—or rather, that we don't think about them having. What was it like to look at that data and understand something about yourself via it?

It surprised me. I used to keep a diary a long time ago—I haven't in a long time. I used to send my friends emails all the time about what was happening in my life. I don't necessarily do that anymore either because we're so constantly texting in shorthand, you know? So all of these documents that I might otherwise have, that I could go back to if I were to write my memoir in twenty years to understand what happened in the past, just don't exist. What does exist is my daily, hourly use of Google and other platforms, recording in real time what I'm thinking about and what I'm doing.

The last search that I could find in my own records was an image search for “world’s ugliest dog.” Which kind of surprised me, because I like to think of myself as somebody who doesn't get stuck in internet rabbit holes looking at ridiculous things! But clearly I am more of that kind of person than I had realized, you know? And there were poignant memory moments too. There are searches that are included in that essay that I don't remember ever having done, but I remember the context for them. And in some cases, the context is very personal and emotionally intense, right? It has to do with loss, with my own insecurities, with my own relationships. So, it was personally meaningful to me that that exists. And at the same time, it's frightening to me that it exists in the records of this big technology company.

I really love that when you write about “Ghosts,” you acknowledge that the AI-generated writing moved you. I think there's a tendency either to say, “This is soulless, I can always identify AI writing, I can always identify AI art,” or, on the other hand, to say, “This is amazing. This is fantastic. We should run towards this technology with open arms.” You maintain this really balanced view of it, which does include the acknowledgment that the writing can actually be moving. I would love to hear more of what you think about that acknowledgment—especially seeing as AI is just gonna get better and better (although trained on other people's work, and also apparently millions of pirated books!).

I'm glad you asked that question, because it points to what I think is kind of a self-defeating way of critiquing AI, and AI that generates language or images specifically. I think sometimes when people criticize these models that produce language and images, the easiest, most obvious critique is, “Well, it's no good,” right? “It's so easy to distinguish between this and human-generated art, because human art is so much better.”

But I agree with you—I think you're saying this—that there is a decent chance that the technology will improve, such that the distinction between AI-produced “content,” let's call it, and human art, will be harder to discern. And I am interested in looking ahead to that moment and asking what the possible critiques will be then. I think there are all kinds of possible critiques, and I wonder if bringing those critiques into the forefront now is maybe a better rhetorical move than focusing on whether the art is any good.

I think some of those critiques have to do with the cost of these technologies, by which I mean the environmental cost, as well as the labor costs—the way in which these technologies depend on the labor of poorly paid and exploited people in countries outside of the United States and Europe. And there’s also the labor of those who created the art—the literature, the visual art—that is being used to train these models. I think if critics focus on those aspects of how AI models are built, those criticisms will remain valid as long as these technologies are built the way that they're built.

On a more philosophical level, I'm interested in questions about what art even is. I have to give my husband, the writer Andrew Altschul, credit for this—Andrew always says, “What if we were to just decide among us that the definition of art is material that is created by humans as a form of human expression?” If we give art that definition, then it's a non-starter. Then AI-produced content does not qualify as art.

There’s an anecdote in your book I keep thinking about. It’s where Sam Altman [CEO of OpenAI] proposes an idea about AI, and everyone laughs, and then he says, “I know it sounds like the setup to a Silicon Valley episode.” But he means it. And in that section you also discuss his view of what the world could look like, a system where everyone would become minor shareholders, which is kind of what happens in your novel, The Immortal King Rao.

I know it's trite to say we live in fictional times, or dystopian times—people say it all the time—but it does feel uniquely weird right now. Like, a reporter being added to a war group chat? None of it feels real in a way. I was just wondering how you feel about that kind of idea, given so many years working on technology in your journalism, but also as a fiction writer.

I think what's happening is that these individuals and institutions with a lot of wealth and power are making decisions about what the world should be like, in one month, and one year, and five years, and ten years. And because they have a lot of resources, they are able to put those resources toward creating that world. But then also, interestingly, as you point out, they have developed this habit of writing essays or making speeches in which they describe that future world as if it's inevitable. They use language like, “This will happen. Here's what's going to take place.”

I think that rhetoric should not just be read as just some more rich people mouthing off about what they think the world should be like. I think what they intend to do—or, I can't speak to their intent, but what this rhetoric does—is it creates a narrative that becomes self-reinforcing because they put this narrative into the world. The narrative is backed up by capital and resources. And then that narrative takes on a life of its own. That narrative ends up in the halls of Congress. That narrative ends up in the White House. That narrative ends up in the UN. And next thing we know, policies are being built around this narrative.

I hear all the time from people working in government, for example, in education, that AI is definitely the way of the future, and we just need to figure out what its role will be in our institutions. And I think even claiming that AI is the way of the future is a way in which we've adopted this rhetoric that these companies have created, and that serves the companies and doesn't necessarily serve us.

There are these alternate narratives that we can put forth. That's part of what I'm interested in doing in the book. Not necessarily to say, “here is the alternate narrative we should put forth,” but to open a space for alternate narratives. Hopefully, the ending of the book, rather than saying, “Here's the future we should strive for,” instead says something like, “Maybe there are all kinds of different futures.” What are some that we might imagine?

I love the last chapter [“What Is It Like to Be Alive?”, which consists of responses to a survey you put out in 2023]. I actually did respond to the survey!

Did you find something of yours in there?

Yeah! It was actually really, really interesting, because there were times when I couldn't remember what I’d written, but I’d see a line or a paragraph and go, “Wait—that's me.” And there were others where I couldn't remember what I’d said and couldn’t recognize what I’d written, so I might be in those responses or I might not. I have no idea.

Wow. Can I ask you a question? Because I was wondering about this—I'm so curious about how it felt to encounter your own words in that context.

It was really, really surprising. Mostly because, for some reason, I had no memory of what I had written, and then I’d read a line and realize it was something about me that I haven't told anybody. But it was just this anonymous form, so I was like, “I can tell you!” And I also was struck by how people answered questions so differently—there were times I wrote a paragraph, and others wrote two words, and vice versa.

That was actually going to be my next question, to ask you about this chapter—I'm assuming it must've been really moving for you, because it was really moving for me to read it, to read everyone’s responses!

Thank you for sharing that with me. That's really meaningful for me, to know that.

There were so many beautiful [responses]. When I sent this to my editor, it was so much longer! I had a back-and-forth with my first editor, Lisa, and then when she left Pantheon, I got a new editor, Denise Oswald. And with both of them, there was so much back-and-forth just to cut it down a little more. I could have published a whole book of those answers.

In some ways, all the different parts of the book make a rhetorical argument. I think if I can say that the last chapter contains a rhetorical argument, it has something to do with the value of human communication, including from people who don't necessarily identify as writers or storytellers in any way. Because I think the assumption behind saying that we should use ChatGPT to help us edit our text, for example, is that there's a certain “correct” way to communicate—and that ChatGPT has the key to that “correct way.” The reality is that there's no correct way to communicate, and what makes communication beautiful is all the different ways we communicate. Which means that ChatGPT, by hewing to a kind of flat, conformist, somewhat conservative writing style that subtly reinforces white American English norms—it claims to be making our communication better, but I would argue that it's actually making it worse. It felt like that last chapter was helping to make that statement.

It was amazing to read. I wonder what it was like to put it together—how did you approach arranging the responses? There was one section I really loved where there were about four or five answers in a row that were all about sisters.

I did [arrange] each of those answers, and then within the essay, the chapter as a whole. I started to discern ways to kind of create what felt like a narrative arc. What felt moving to me about that was that in an American and, to an extent, European conception of narrative, we have this idea that narrative must involve one storyteller with a single perspective telling a story in a certain way for a reason. And I really liked the way in which a kind of community narrative could emerge from all these different things that people said.

But then, to further complicate the idea that this is a beautiful alternative to ChatGPT: I posted the survey in all kinds of different venues, but ultimately, I think all of the people who responded were responding in English. I would assume a majority of the people who responded were in the US, based on the answers. So there's a way, too, in which even the answers to these survey questions are biased. It's too easy to say, “Oh, this represents all of us, a kind of global community, whereas ChatGPT doesn't.” I think my essay itself actually reinscribes some of the things that I'm interested in moving away from. And by virtue of being the person curating these, choosing what to include, choosing which order to put them in, I'm putting my finger on the scale, too, right? I'm exerting my own power there.


Nirica Srinivasan is a writer and illustrator from India. She likes stories with ambiguous endings and unreliable narrators.

