Film Features

Use of Deepfake Anthony Bourdain Voice in New Documentary Sparks Outrage—But Is It Immoral?

What autonomy should artists and beloved cultural figures have over their legacy after they pass?

If Anthony Bourdain wanted to be anything, it was to be genuine.

All you have to do is watch a few minutes of the famous chef onscreen to recognize that a desire for authenticity was at the core of his life, a quality reflected in every interaction he had and every painstakingly written voiceover he uttered. Of course, the always wry chef never would have used that word, though in an NPR interview he once described his favorite kind of restaurant as just that, saying, "It's, for lack of a better word - I hate this word, but I'll use it anyway - authentic."

Perhaps that's why fans of Bourdain have expressed outrage that documentary filmmaker Morgan Neville fabricated the chef's voice in his recent documentary, Roadrunner: A Film About Anthony Bourdain.

Neville told the New Yorker's Helen Rosner that while many of the moments that use Bourdain's voice are stitched together from various podcasts, TV shows, and recorded conversations Neville had access to, a few lines in the film are total fabrications. "I created an A.I. model of his voice," Neville told Rosner. "There were three quotes there I wanted his voice for that there were no recordings of," he explained.

Neville worked with a software company to turn about 12 hours of recordings of Bourdain's voice into a program that, essentially, allows the user to make Bourdain say anything they want. "You probably don't know what the other lines are that were spoken by the A.I., and you're not going to know," Neville said. "We can have a documentary-ethics panel about it later."

While most can agree that a program capable of convincingly mimicking a voice as distinctive as Bourdain's is certainly eerie, whether or not it's outright immoral became a fierce topic of debate across the Internet.

Twitter user @afroelven noted, "anthony bourdain was very intentional about the words he chose and very particular about how he said them so for a documentary to artificially use his voice is deeply fucked." User @tribelaw, responding to a New Yorker article that asked how we should feel about the simulated voice, tweeted, "Easy question: We should feel shitty about being deceived with an AI-aided, deepfake version of Bourdain's actual voice. Shitty. Period."

But others seemed unbothered by the morality of the technology, taking the opportunity to joke about what they would have the voice of Bourdain say if they had access to the technology.

Adding fuel to the fire, Neville responded to the controversy in an interview with GQ, saying, "I checked, you know, with his widow and his literary executor, just to make sure people were cool with that."

But Bourdain's ex-wife, Ottavia Busia, soon joined the conversation, tweeting in response to Neville's claim, "I certainly was NOT the one who said Tony would have been cool with that."

Of course, this isn't the first time technology of this nature has attempted to bring back a dead icon for the pleasure of an audience. You might remember the now famous moment in 2012, when a Coachella crowd believed for a brief moment that Tupac Shakur was alive and well, performing on stage alongside Snoop Dogg. Or perhaps you caught wind of Whitney Houston's upcoming hologram tour. Even more eerily, images from Audrey Hepburn's film catalogue were harvested to create a one-minute ad for Galaxy chocolate bars starring the actress in 2013, long after her death.

In regard to these kinds of practices — the Whitney Houston hologram, in particular — journalist Simon Reynolds put it succinctly: "On an ethical and economic level, I would liken it to a form of 'ghost slavery'."

Of course, in the case of Anthony Bourdain's deepfaked voice, fans can take some comfort in the fact that the AI voice was made to read words Bourdain had actually written in life. Then again, almost more disconcerting than Neville's use of the AI voice is his use of the material it was made to read: a personal email that we have no reason to believe Bourdain ever intended to be read by anyone but the recipient.

It all raises the questions: What autonomy should artists and beloved cultural figures have over their legacy after they pass? How much of them belongs to us, and how much do they get to keep private and untouched?

While the morality of these digital resurrections remains firmly in a grey area, one can't help but feel pangs of renewed grief for someone who would have surely been a lively participant in this discussion of cinematic ethics — Bourdain himself. Bourdain was a documentary maker, of course, but he was also a storyteller who understood that sometimes a complex narrative has to be crafted rather than simply told.

When asked about his show Parts Unknown in a 2013 interview, Bourdain said, "We try to get the audience to a particular place. Our shows are very subjective, with no attempt of being fair."

So maybe Bourdain would have been cool with the AI recreation of his voice; maybe he would have recognized that the directorial decision served the film and ultimately given it his blessing. Or maybe he would have seen it as the complete antithesis of the genuine connections he strove to cultivate in his life and work. While there's no way to know, what seems to honor him best is continued discourse that freely questions art's authenticity and integrity in this glitchy modernity.