Are humans the only ones that can be creative?
What is the relationship between creativity and intelligence?
That’s a fundamental, perhaps unanswerable, question. Is it also an obsolete one? The question today seems to be: What is the relationship between creativity and artificial intelligence? We tend to think of artistic creativity as a uniquely human endeavor, but what if it can be much more?
Philosophers, artists, and scientists are already debating whether the art and writing generated by Midjourney and ChatGPT are evidence of machines being creative. But should the focus be on the output — the art that’s generated? Or the input — the inspiration?
And what about the other, smaller ways in which we use our creativity, like through a prank on a friend or in a note to a loved one? Does the value of those communications change if AI creates them?
Meghan O’Gieblyn is an essayist and the author of God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning. She’s been thinking about our relationship with technology for a long time. Her book, originally published in 2021, made a convincing case that we’re going to have to reimagine what it means to be human in the age of artificial intelligence.
I invited O’Gieblyn on The Gray Area to explore how AI might force us to also reconsider the meaning — and importance — of creativity. As always, you can hear the full conversation on The Gray Area on Apple Podcasts, Spotify, Pandora, or wherever you find podcasts. New episodes drop every Monday.
This conversation has been edited for length and clarity.
Sean Illing
How did you start studying and writing and thinking about the relationship between humans and computers?
Meghan O’Gieblyn
I came to my interest in technology in a very idiosyncratic way. I grew up in a very religious family. There was a lot of fear about technology when I was growing up, during the Y2K crisis, for example, and a lot of focus on end-times prophecies, which were often filtered through the lens of emerging technologies and fears about them.
I studied theology for two years at Moody Bible Institute, a very conservative, old Christian institution in Chicago, and ended up having a faith crisis while I was there. I left that belief system behind and just happened to read Ray Kurzweil and some other transhumanists in the years after that deconversion experience. I became kind of obsessed with the relationship between spiritual traditions and the larger philosophy of human nature that I had grown up with, this idea that humans are made in the image of God, that we’ve been given these divine capacities for reason and creativity.
Sean Illing
Since you brought it up, I should ask for a thumbnail definition of transhumanism.
Meghan O’Gieblyn
Transhumanism is a movement that emerged primarily in Silicon Valley in the ’80s and ’90s. Followers believed that humans could use technology to evolve into a higher form of intelligence.
At the time, the conversations about those possibilities were very speculative. But I think the things that were being discussed at that time are very much being implemented now into technologies that we’re using every day.
Sean Illing
You once asked a computer scientist what he thought creativity meant and he told you, “Well that’s easy, it’s just randomness.” What do you make of that view of creativity?
Meghan O’Gieblyn
It’s no coincidence that a computer scientist came up with this definition. If you’re thinking about creativity, or what we call creativity, in large language models (LLMs), you can play around with the temperature setting. You can basically turn up the temperature and turn up the amount of randomness in the output that you get. So if you ask ChatGPT to give you a list of animals at a low temperature, it’ll say something very basic like a dog, a cat, a horse. And if you turn up the temperature, it’ll give you more unusual responses, more statistically unlikely responses like an anteater. Or if you turn it way up, it’ll make up an animal like a whistledy-woo or some Seussian creature that doesn’t exist. So there is some element of randomness.
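For readers curious about the mechanic O’Gieblyn is describing, here is a minimal sketch of temperature-scaled sampling. The toy vocabulary and model scores below are invented for illustration; real models sample from tens of thousands of tokens, but the effect is the same: a low temperature keeps returning the obvious animals, while a high one surfaces the unlikely or invented ones.

```python
# Minimal illustration of temperature in next-token sampling.
# The vocabulary and logits are made up for this example.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["dog", "cat", "horse", "anteater", "axolotl", "whistledy-woo"]
logits = np.array([4.0, 3.8, 3.5, 1.0, 0.5, -2.0])  # hypothetical model scores

def sample(logits, temperature):
    """Draw one token after dividing the logits by the temperature."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

for t in (0.2, 1.0, 2.0):
    picks = [vocab[sample(logits, t)] for _ in range(8)]
    print(f"temperature {t}: {picks}")
```

At a temperature of 0.2 the samples are almost all dogs and cats; at 2.0 the made-up whistledy-woo starts showing up, which is the randomness the computer scientist was pointing to.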
I’m inclined to think that creativity is not just randomness because we also appreciate order and meaning.
The things that I appreciate in art have a lot to do with vision, with point of view, with the sense that you’re seeing something that’s been filtered through an autobiography, through a life story. And I think it’s really difficult to talk about how that’s happening in AI models.
Sean Illing
We have these large language models, things like ChatGPT and Midjourney, and they produce language, but they do it without anything that I’d call consciousness. Consciousness is something that’s notoriously hard to define, but let’s just call it the sensation of being an agent in the world. LLMs don’t have that, but is there any way you could call what they’re doing creative?
Meghan O’Gieblyn
The difficult thing is that creativity is a concept that is, like all human concepts, intrinsically anthropocentric. We created the term “creativity” to describe what we do as humans. We have this bad habit of changing the definition of words to suit our opinion of ourselves, especially when machines turn out to be able to do tasks that we previously thought were limited to us.
Inspiration has this almost metaphysical or divine undertone to it. And now that we see a lot of that work done by automated processes, it becomes more difficult to say what creativity really is. I think there’s already an effort, and I sense it myself too, to cordon off this more special island of human exceptionalism and say, “No, what I’m doing is actually different.”
Sean Illing
Do you think a machine or an AI could ever really communicate in any meaningful way?
Meghan O’Gieblyn
There are things you can say in an essay or a book that you can’t say in normal social conversation, just because of the form.
I love seeing the way that other people see the world. When people ask, “Do you think an AI could write the next great American novel?” we’re talking a lot in those hypotheticals about technical skill.
And to me, even if it were virtuosic on the sentence level, or even on the level of concepts and ideas, just the fact that it came from a machine changes the way that we experience it. When I’m reading something online and I start to suspect that it was generated by AI, it changes the way I’m reading. I think that there’s always that larger context of how we experience things, and intent and consciousness are a big part of it.
Sean Illing
There’s something about the intentionality behind artistic creations that really matters to us. It’s not like when I consume a piece of art, I’m asking myself, how long did it take to make this? But I know subconsciously there was a lot of thought and energy put into it, that there was a creator with experiences and feelings that I can relate to who’s communicating something in a way they couldn’t if they weren’t a fellow human being. That matters, right?
Meghan O’Gieblyn
I think that the effort we have to put into making things is part of what gives them meaning, both for the audience and for the person producing them. The actual sacrifices and the difficulty of making something are what make it feel really satisfying when you finally get it right. And it’s also true for the person experiencing it.
I think about this a lot even with things that we might not consider works of genius. Everyday people have always been creative — like my grandfather, who would occasionally write poetry or make up funny poems for different occasions. He didn’t have a college education, but he was creative and the poems were personalized for the person or for the occasion. And that’s precisely the kind of thing that an LLM couldn’t do very well, right? It could write a simple poem and you could prompt it to do that. But what would that mean to us if it was just produced by a prompt? I think that really does change how you experience something like that.
Sean Illing
Do you remember that controversy over the Google Gemini commercial? Gemini is Google’s competitor to OpenAI’s ChatGPT. The commercial has a young girl who wants to write a fan letter to her hero, who’s an Olympic gold medalist or something like that. Her dad says something like, “I’m okay with words, but this letter has to be perfect.” And so he’s just going to let the AI write it for them.
It is horrifying to me because it shows that AI isn’t just coming for our art and entertainment, it’s not just going to be writing sitcoms or doing podcasts, it’s going to supplant sincere authentic human-to-human communication. It’s going to automate our emotional lives. And I don’t know what to call that potential world other than a machine world populated by machine-like people and maybe eventually just machine people. And that’s a world I desperately, desperately want to avoid.
Meghan O’Gieblyn
For a long time I wrote an advice column for Wired magazine where people could write in with questions about technology in their everyday lives. And one of the questions I got very shortly after ChatGPT was released was from somebody who was going to be the best man at their friend’s wedding. And he said, “Can I use ChatGPT, ethically, to do a best man’s speech for me?”
There are cases of people doing this. People use it to write their wedding vows. And my first instinct was, well, you’re robbing yourself of the ability to actually try to put into words what you are feeling for your friend and what that relationship means to you. And it’s not as though those feelings just exist in you already.
I think anyone who’s written something very personal like this realizes that you actually start to feel the emotions as you’re putting it into language and trying to articulate it. I think about the same thing with this hypothetical fan letter that the girl is writing in the commercial, right? It’s like you’re stealing from your child the opportunity to actually try to access her emotions through language.
Sean Illing
Do you think that AI will make radically new kinds of art possible?
Meghan O’Gieblyn
Any of us who are daring to speak about this topic right now are really putting ourselves out there and risking looking stupid two or five years down the road. But it is true that AI is often called an alien form of intelligence, and the fact is that it reasons very differently than we do. It doesn’t intuitively understand what’s relevant in a dataset the way that we do, because we as humans have evolved together to value the same things.
Sean Illing
This is a big question, but I’m comfortable asking you because of your theological background. Do you think we have any real sense of the spiritual impact of AI?
Meghan O’Gieblyn
It’s a paradox in some way, right? Technologies are very anti-spiritual in the sense that they usually represent a very reductive and materialist understanding of human nature. But with every new technological development, there’s also been this tendency to spiritualize it or think of it in superstitious ways.
I think about the emergence of photography during the Civil War and how people believed that you could see dead people in the background. Or the idea that radio could transmit voices from the spiritual world. It’s not as though technology is going to rob us of a spiritual life. I think that technological progress competes with the type of transcendence that spiritual and religious traditions talk about, in the sense that it is a way to push beyond our current existence and get in touch with something that’s bigger than the human. I think a very deep human instinct is to try to get in touch with something that’s bigger than us.
And I think that there’s a trace of that in the effort to build AGI. This idea that we’re going to create something that is going to be able to see the world from a higher perspective, right? And that’s going to be able to give our lives meaning in a new way.
If you look at most spiritual traditions and wisdom literature from around the world, it usually involves this paradox where if you want to transcend yourself, you also have to acknowledge your limitations. You have to acknowledge that the ego is an illusion, you have to admit that you’re a sinner, you have to humble yourself in order to access that higher reality. And I think technology is a sort of transcendence without the work and the suffering that that entails for us in a more spiritual sense.
Sean Illing
What I’m always thinking about in these sorts of conversations is this long-term question of what we are as human beings, what we’re doing to ourselves, and what we’re evolving into. Nietzsche loved the distinction between being and becoming. Humanity is not some fixed thing. We’re not a static being. Like everything in nature, we’re in this process of becoming. So what are we becoming?
Meghan O’Gieblyn
At some point, I think, a threshold is crossed, right? Where is that? If we’re becoming something, we’ve already been becoming something different with the technologies that we’re using right now. And is there some hard line where we’ll become post-human or another species? I don’t know. My instinct is to think that there’s going to be more pushback against that future as we approach it than it might seem right now in the abstract.
I think that it’s difficult to articulate exactly what we value about the human experience until we are confronted with technologies that are threatening it in some way.
Some of the really great writing and conversations happening right now are about trying to actually put into words what we value about being human. And I think these technologies might actually help clarify that conversation in a way we haven’t been forced to articulate before. They can help us think about what our values are and how we can create technology that is actually going to serve those values, as opposed to making us the subjects of what these machines happen to be good at doing.
Listen to the rest of the conversation and be sure to follow The Gray Area on Apple Podcasts, Spotify, Pandora, or wherever you listen to podcasts.