Errol Morris on whether you should be afraid of generative AI in documentaries

Errol Morris is used to being called a liar, a trickster, and a manipulator. Over his storied four-decade career, Morris told me, these insults have been hurled at him again and again — often because of how he chooses to illustrate real-life events.

Today, The Thin Blue Line, Morris’ investigation into the shooting of a Dallas police officer, is regarded as a seminal documentary film. The 1988 feature led to the release of Randall Dale Adams, who had been wrongfully serving a murder sentence in a Texas prison. At the time, though, The Thin Blue Line was a lightning rod among film critics for its extensive use of reenactment. Instead of relying solely on archival footage, talking-head shots, and cinéma vérité footage filmed with available light, Morris recreated the scene of the crime on a set and hired actors to play real victims and alleged perpetrators. The film was barred from the Academy Awards documentary category.

These days, this kind of reenactment has been folded into the documentary filmmaking vocabulary, and Morris continues to champion the form in his more recent films. He’s since won an Academy Award, in 2003, for his portrait of Robert McNamara in The Fog of War. But Morris says the accusation that he deceives audiences with reenactments has followed him.

Recently, a similar debate has played out over generative AI in documentary film, with some alleging it’s the latest way filmmakers are fooling their audiences. Last November, the Archival Producers Alliance, an association of some 300 documentary filmmakers, published an open letter in The Hollywood Reporter alleging that its members had been pressured to produce AI-generated newspaper clippings and fake “historical” images to save time and money for documentary productions. The group has since drafted a set of ethical guidelines for generative AI adoption in the industry, which are set to be released in September.

In May, the debate boiled over when a Netflix true crime docuseries called “What Jennifer Did” was accused of using AI-generated photos of its main subject, Jennifer Pan, without disclosure. A lead producer on the docuseries denied those allegations, but the backlash against even the potential use of AI was swift and loud.

Is AI imagery at inherent odds with documentary filmmaking? Can it be used, like reenactment, to depict historical events? And if it can, can it be done ethically? I recently spoke with Errol Morris to talk through the parallel debates over generative AI and reenactment, to reflect on the legacy of his documentary work, and to find out whether he’d ever use an AI-generated image in one of his films.

This interview has been lightly edited for length and clarity.

Andrew Deck: I went back to some of the reported essays you wrote for The New York Times in 2008, where you look back on The Thin Blue Line. In one essay, you said you met audience members, and even a journalist, who believed the reenactments in the film were genuine footage shot at the scene of the crime that night in 1976. Walking away from that, did you feel the onus was on those audience members to discern which footage was reenacted, or did you feel a responsibility going forward to share that information more directly?

Errol Morris: Film isn’t reality, no matter how it’s shot. You could follow some strict set of documentary rules…it’s still a film. It’s not reality. I have this problem endlessly with Richard Brody, who writes reviews for The New Yorker, and who is a kind of documentary purist. I guess the idea is that if you follow certain rules, the veridical nature of what you’re shooting will be guaranteed. But that’s nonsense, total nonsense. Truth, I like to remind people — whether we’re talking about filmmaking, or film journalism, or journalism, whatever — it’s a quest.

You’re trying to establish something that represents the real world. And that’s not something that is handed over to you by style, whatever the style might be. It’s a pursuit. But I guess people are so afraid of being tricked or manipulated that they feel if they impose a set of rules, somehow they don’t have to be afraid anymore. I would like to assure them that they still need to be afraid.

Deck: This is a somewhat tongue-in-cheek question, but I’m genuinely curious to hear the logic behind your answer. One reenacted scene in The Thin Blue Line — one you’ve revisited in your own writing, and that critics and scholars have revisited as well — is the spilled milkshake.

Morris: It was not the actual milkshake [from the crime scene], let me assure you.

Deck: Right, to my point. You’ve said that it was designed to make the audience think about where the police officer’s partner was at the time of the shooting — was she in the car or out of it when she dropped the milkshake? It was less about representing the actual milkshake and more about posing a question to the audience.

Today, would you ever use generative AI to create a reenacted shot like that? And if you did, do you think it could accomplish the same goal of posing a question to the audience?

Morris: Interesting question. Say I answer your question: Oh sure, I’d use generative AI. And then people would say, well, you’re such a trickster, you’re a liar.

I would prefer to shoot it for real, and not to have those questions intrude on what I’m doing. I’d rather [the shot] be like I hoped it would be, a way of thinking about one of the things that I was interested in examining — the testimony of Teresa Turco, the partner of police officer Robert Wood, who was killed.

They had stopped at a fast food restaurant. She had ordered a milkshake. And one of the questions that arose during the trial was at what point did Teresa Turco get out of the car. Now, the milkshake was there to examine that question.

Deck: It seems like a question at the heart of the film. I wonder — entertaining the hypothetical again — do you feel like using an AI-generated shot could be a distraction from that question?

Morris: If people knew that it was generated by AI they would get their hackles up, because then the focus would be not on what the shot is trying to examine, but whether we were being tricked. And I don’t think we were being tricked in either instance, by the way. But would the opportunity arise to be accused of falsifying evidence and trickery? Of course it would.

People are looking for things to attack, and they were certainly looking for things to attack in The Thin Blue Line. I mean, maybe it’s my stupidity. Ultimately, you’re asking people to be in some ways sophisticated about how they look at imagery, and what they derive from imagery. But I have always liked what I’ve done — self-serving of me to say so — because it makes you think about the [connection between] film and reality. And people would prefer that they not have to think about that.

There’s no set of rules that says: okay, this eliminates all the epistemic problems you might have with the nature of imagery. That’s nonsense thinking, nonsense talk. There’s no end run around epistemology. Although I sometimes joke about taking the piss out of epistemology, I don’t think anything really does take the piss out of epistemology.

Deck: There’s a sentiment I often come across when reporting on AI-generated imagery: that we as humans have a good enough eye to discern what is a human-made image and what is not…

Morris: Nah. Nahhh.

Deck: And there’s this assumption that we can still tell when something’s AI-generated, like spotting the contorted hands or the distorted backgrounds or some nonsensical details…

Morris: Nahhh.

So for my next film, Separated [about the Trump-era border policy of family separation], we shot a scene with this fabulous art director, Eugenio [Caballero], who was the art director on Roma. We shot in and around Mexico City, and we shot in a detention center where there were kids sleeping on floors covered by aluminum foil, in a caged area, in cyclone fencing, etc., etc. He did a really good job — so good a job that people thought it was documentary footage.

Now you can ask me, was it my intention to deceive people? I can tell you no, it wasn’t. Did I want it to look believable and real? Yeah. But I wanted the actors to be seen as actors and the scenes as sets. It wasn’t an attempted subterfuge, but people kept telling me: It looks so real, it must have been real. No, it wasn’t real.

Say it had been the real detention center. Say I had been there. And certainly there was footage shot in real detention centers, with real kids and real guards. But I wasn’t telling the story of “did this really happen?” It’s assumed that it really happened. I was examining the nightmare that these policies produced.

I don’t know what to tell you, these things are so knotty and so complicated, but just because you follow a set of rules doesn’t mean that it’s any more real. It ultimately will come down to whether you feel you’re being tricked. I don’t think in my film, Separated, that there’s any trickery going on at all. It’s there to make you think about the nature of what might have transpired.

Deck: One conversation that has surfaced around AI-generated imagery in documentaries is disclosure. Some documentary filmmakers, including the APA, have advocated for AI-generated images to carry title cards or other visual cues, like frames or filters, to mark which parts of a film are AI-generated. You were just telling me that there are no rules that can solve that epistemic issue of whether an image is real or not, but how do you feel…

Morris: Well, of course, all images are unreal. They all are. That’s an important thing to remember. In reality, they’re images of reality taken by people under certain circumstances with certain equipment, and on and on and on and on and on. To think that there’s some way of showing material as being more veridical than others, that just doesn’t make any sense to me.

Deck: I do some reporting on disinformation and political deepfakes…

Morris: I would say that all information is disinformation, but go on.

Deck: Something researchers say is that AI imagery is now often sophisticated enough that only an algorithmic model can parse whether an image or audio clip has been generated or altered by AI. Do you think that generative AI will make it harder for us to determine what is a true image, or has it always been as hard as it is today?

Morris: Now, we’re surrounded by falsehood everywhere. I used to say that I always knew people were morons, but Trump taught me just how stupid people really are. I mean, just look at what’s going on in the world around us, where things are being endlessly falsified, and lied about.

This is not, you know, Errol Morris trying to tell you that there’s no such thing as truth, and that everything is subjective, because I don’t believe that for a second. I believe that Biden won the [2020] election, and Trump did not. But we live in a world where people are constantly trying to tell you that black is white and white is black, and falsifying what they’re looking at. The whole fear about ChatGPT and the creation of imagery is, I think, fear based on the fact that we live in a world where everything is up for grabs. Everything is manipulated, everything is challenged, everything is endlessly falsified. I don’t know what the solution to that is. Hopefully, Trump will be defeated. But if he isn’t, you know, we’ll be living in an increasingly Orwellian world. I mean, Orwell was right about 1984. He was just a little early.

Deck: I wonder, though, about the archive that documentary filmmakers pull from. If a flood of deceptive images is created — because it’s cheaper and more convenient to fabricate things than it was 20 years ago — I wonder whether that could actually pollute the archive, even more than it already is. Do you think it will make it harder for documentary filmmakers to do their job if that happens?

Morris: Well, attempts at lying and falsification have existed since the beginning of history. In fact, history may depend on them. Part of our job is to sort through error, and try to find truth, if possible. I think people want it made easy: well, if we can guarantee how images are going to be used and produced, then I’m not going to have to worry so much about whether an image is actually taken from the real world or just manufactured out of whole cloth. But one of my great joys is trying to figure out what’s true and false.

I believe we have a job to try to understand what’s out there in the world; it’s not going to be just handed over to us on a silver platter. I think The Thin Blue Line is a great example. I mean, certainly my own research on that case, which went on for close to three years, had the amazing result that I got this guy out of prison. It’s just a lot of hard work, crazy hard work.

Deck: Well, here’s one case study of sorts. It was reported, but not confirmed, that the Netflix docuseries “What Jennifer Did” used AI-generated images of its subject, Jennifer Pan, and presented them as archival photos. Bracketing the question of whether the photos were actually AI-generated or not, the incident was instructive to me in terms of how quick audiences were to criticize even the potential use of generative AI in a documentary.

Morris: Well, people love to criticize everything.

Deck: That’s a fact. But to me, it felt like there was a particular raw nerve here for audiences, maybe sensitized by the current fear cycle around AI images.

Morris: I wouldn’t use AI and I’ll tell you why — because I wouldn’t want to be open to those kinds of criticisms. I’ve taken enough abuse of one kind or another for reenactments. It doesn’t matter how many times I say that, if anything, I’m reenacting various falsehoods, so you can think about the nature of reality. But to offer these kinds of sophisticated arguments isn’t going to make people happy — they’ll turn the film off.

Deck: Another type of generative AI has come up in my reporting, and I’m curious if you feel any differently about it: using AI to protect the anonymity of interview subjects. There have been a couple of examples. In 2023, there was a documentary called Another Body that used deepfake technology — basically, AI-generated face swaps — to mask the identity of victims of revenge porn. A few years earlier, in 2020, there was a feature called Welcome to Chechnya that similarly used face swaps to mask the identities of LGBTQ+ activists…

Morris: That seems to be okay.

Deck: What makes it different to you?

Morris: If I’m making a movie about families that are illegal immigrants to the United States, do I want to use real people and show their faces? I don’t think so. Should you hide the identity of people that could be prosecuted or targeted for violence? Of course you should.

Deck: You’ve talked a lot about taking punches over the years for reenactment. I would argue that, in large part due to your body of work and your advocacy, reenactment has become a more accepted form in documentaries. Looking forward, do you think generative AI tools in the hands of the right filmmakers could also become more accepted in the industry?

Morris: Probably, yes. I mean, sometimes when I read a book now, I wonder if it was created by ChatGPT, or whether ChatGPT played some kind of role in the formation of the book…I won’t mention names…but I get that feeling every now and then. Maybe all of these tools and techniques are going to further erode our belief in what we see and what we read, I don’t know.

Deck: It sounds like the question is seeded now, though, in a way that it maybe wasn’t several years ago, in the back of your mind: Could this have been made with ChatGPT?

Morris: Once you know that it exists, it’s always there. It’s not going to suddenly vanish.

All I know is that imagery isn’t reality, and all of these attempts to ensure that images are not seen as images but as reality are corrosive. Ultimately, I think we should own up to the fact that we’re creating something, that it’s not reality itself. Our task is to get back — through films, books, magazines, articles, whatever — to the real world, to the extent that it is recoverable, to figure out what really is out there.

People still react to what I do negatively. If I was just using a handheld camera with available light, they would feel comforted. Somehow that’s the standard of what’s real, or what’s believably real, but my job is not to make people feel comfortable. Fuck that.

Deck: Well, considering that, using AI might be a fast track to discomfort for a lot of people.

Morris: I have never used it in a film and really don’t want to. Maybe because I’m old-fashioned, I’m too old. Maybe if I was a different generation, I’d feel completely differently about it. I like to be in a position where I can examine and think about the relationship between film and reality, and not to obscure that distinction, but to enhance it and make people think about it. If OpenAI is a way to do that, well, fine.

Deck: It sounds almost like you feel you’ve fought enough battles over form in your career. Does it feel daunting to take up a new one?

Morris: I have. I think I’d like to give it a rest and make some movies for the time being.

Deck: Is there anything else that you wanted to say that I didn’t ask directly?

Morris: No. Any last words before sentencing? No. I’ll take my punishment like a man.

Photo by Nafis Azad, courtesy of Errol Morris.