Abstract

This conversation is excerpted and edited from a transcript of a live event, “Wishful Thinking and AI: An Evening with Ted Chiang and Dr. Emily M. Bender,” which took place in Seattle, Washington, on November 10, 2023. The conversation was moderated by Tom Nissley and edited for Critical AI by Lauren M. E. Goodlad and Kelsey Keyes. Questions from the audience have been edited for concision.

Tom Nissley (TN): I want to talk about intelligence and start by reading a little bit of Ted Chiang's “Story of Your Life.” I've been immersing myself in some of Emily's work, and then I went back to reread “Story of Your Life” and it just hit the things that we want to talk about so beautifully. This is from early in the story when Louise, the linguist, is first invited to talk to Colonel Weber, the military man who's in charge of the encounter with the aliens.

We entered my office. I moved a couple stacks of books off the second guest chair and we all sat down. “You said you wanted me to listen to a recording. I presume this has something to do with the aliens.”

“All I can offer is the recording,” said Colonel Weber.

“Okay. Let's hear it.”

Colonel Weber took a tape machine out of his briefcase and pressed play. The recording sounded vaguely like that of a wet dog shaking the water out of its fur.

“What do you make of that?” he asked.

I withheld my comparison to a wet dog. “What was the context in which this recording was made?”

“I'm not at liberty to say.”

“It would help me interpret those sounds. Could you see the alien while it was speaking? Was it doing anything at the time?”

“The recording is all I can offer.”

“You won't be giving anything away if you tell me you've seen the alien. The public assumes you have.”

Colonel Weber wasn't budging. “Do you have any opinion about its linguistic properties?” he asked.

“Well, it's clear their vocal tract is substantially different than a human's. I assume these aliens don't look like humans?”

And it goes on from there. So my first question is: Linguists must love this story.

Emily M. Bender (EMB): Yes. All my linguist friends are jealous that I get to meet Ted and do this event. I'm going to be speaking at the Linguistic Society of America in January, and I'm already arranging my talk so I can brag about this.

It is lovely to see the representation of linguists and how linguists view the world and do our work. I do have to mention that the kernel of the story, this idea that the alien language allows her to do different kinds of thinking and to experience time differently, is kind of a third rail in linguistics.

TN: The thing he's getting at there—that language doesn't just exist as a recording, that it depends on the physical presence of the other person, of the other being; that Louise demands that, if she's going to learn this language, she has to be with the beings who are speaking it—that seems, in my layperson's understanding, to get at a basic question in linguistics. Does language depend on context? Can it exist without context? Without physical embodiment?

EMB: Yes, but also what is meant when we say that language depends on context. There are words that obviously depend on context, like here and I and you. But there are also subtle ways that meaning depends on who's speaking, whom they're speaking to, and when they're speaking.

I like the example of the word phone. Think about what the word phone means now versus what it meant forty years ago versus what it meant eighty years ago. There's the general context, there's our shared knowledge, but there's also what's going on in the moment in which the communication takes place. There are lots of different kinds of dependencies.

Ted Chiang (TC): Your example of the word phone reminds me of an example I remember from an article on writing science fiction about how words evolve and change their meanings. The example was “let's charge the battery.” You could have said that sentence hundreds of years ago, but it would have meant something entirely different from what it came to mean in the twentieth century.

TN: I'm curious about when what we call AI entered your life, and the field of linguistics. When did it become a big deal for the work that you do?

EMB: I came into linguistics and then computational linguistics in the thick of the AI winter. When I was in college, the computer scientists around me would joke that AI was anything that a computer couldn't do yet. And as soon as you built the program to do it, then it wasn't AI anymore. And very few people said they were working on AI. It didn't attract VC funding.

Sometime in the last five to ten years, when deep learning became practical and people started throwing money at it, AI suddenly became the buzzword that made everything worth millions of dollars.

But it's only in the past four years or so that it really started impinging on my field of computational linguistics, with the introduction of these large language models. And that was the point at which I realized that it's the linguists who are in the position to see through this, and we have some work to do to speak back to the rest of the world.

I encourage people to talk about automation instead of “AI” and to compare what “AI” is in the real world to what “AI” means in speculative fiction.

TN: Ted, you started writing about AI in the real world. What has drawn you—as an essayist—to write about it?

TC: The recent hype started, maybe, with GPT-2, and I started becoming more curious about what these large language models were doing. And I think everyone can admit they are surprising. A few years ago, no one expected that we would have programs that could do what these programs are doing. Even with the most conservative interpretation of what they're doing, we did not expect what we are seeing.

So I wanted to know, how are they doing these things? Could they actually understand things? Are they actually thinking? And as I read more about them, I realized, no, they don't understand things and aren't thinking. But what exactly are they doing?

We need another, nonanthropomorphic way to make sense of what they are doing.

TN: This is for both of you: Why do we think they're thinking? What is it about the way that they work that taps into something that we are built to recognize?

EMB: It comes back to what you were saying about Louise in “Story of Your Life,” when she asks, “What's the context?” In order to interpret these sounds, I need to be able to be copresent with the entity that may be expressing something so that I can attempt to interpret intersubjectively. That's the term we use for when I join my attention with that other entity to try to get a sense of what might be going on for that other entity internally—so that I can make sense of what I'm hearing.

The difference with ChatGPT is twofold: First, it's already using a language we understand. Second, there's no entity behind it. There's no thinking mind there. But we reflexively interpret language that we understand by assuming that second part: it's what we're accustomed to doing. We really can't help ourselves.

TC: Yes, the fact that it is producing coherent sentences totally throws us off. But a point that is often lost is that these coherent sentences aren't actually language in the way linguists use that term. When a linguist uses the word “language,” they are talking about an act of communication. But there's no communication happening in the case of large language models.

I think it would be very easy to get ChatGPT to emit a series of words such as “I am happy to see you.” But one thing we can be sure of is that ChatGPT is not happy to see you. [audience laughter]

Think about it: a dog can be happy to see you, and a dog will demonstrate that it's happy to see you. A child who is prelinguistic can be happy to see you and they will demonstrate that they are happy to see you. Both actually feel when they're happy to see you and may want you to know that they are happy to see you. Those are fundamental underpinnings of, or prerequisites for, actually saying “I'm happy to see you.”

But ChatGPT doesn't have those prerequisites. Because language is reflexive for us, we tend to forget that it lies on top of these other phenomena of feeling a thing and wanting to communicate that thing. So we're tempted to project those prerequisites when ChatGPT emits these coherent sentences.

TN: Emily, what do you see as the risk of imagining that there is someone who likes us behind those texts?

EMB: Unfortunately, there's a lot of risk, which all stems from the fact that it seems to be thinking. But it's a computer, and computers are supposed to be objective and to have a lot of data inside. So you get people using it to provide precedents in case law—not realizing that a chatbot isn't a search engine.

Or people constantly suggest that we should use these models for talk therapy. Or the National Eating Disorders Association disbands their helpline and replaces it with a chatbot that promptly provides inappropriate dieting advice to the people dialing in.

Because ChatGPT can reproduce the form of a precedent in case law or use the kind of words that therapists use, it is easy to believe that it actually is those things. But that's mistaking the form of the artifact for the actual experience and substantive content that you would need to cite the right legal precedent appropriately or practice therapy.

The danger comes when people begin to rely on the information, or when the noninformation ends up on the internet. And then some people begin to use it as an excuse to pretend that this is a solution for social problems—as if it can take the place of insufficient resources for education, or mental health care.

TC: I think there are a lot of problems with the way AI is being used. This tendency of people to personify AI has specific issues and problems because that is something that corporations are going to capitalize on, and they're going to use that against us. They will take advantage of this tendency so that we treat objects with more respect, as if they're people, and give them the same deference that we owe to people, when they are, in fact, just objects. And when those objects are corporate products, that works to the corporation's advantage. So this is a vulnerability in human psychology that corporations are trying to exploit.

TN: That connects to something the media loves to talk about: that AI is going to replace creative writers. What are your thoughts about that threat or about the possibility of using it creatively?

EMB: People might say: my AI wrote a beautiful poem, but what I learned from my mom, the poet Sheila Bender, is that poetry and creative writing are about sharing an experience that you had so that other people can experience it with you or feel the resonance of it. And the papier-mâché language that comes out of these systems isn't representing the experience of any entity, so I don't think it can be creative writing.

I do think there's a risk that it is going to be harder to make a living as a writer, as corporations try to say, well, we can get the writing or illustration done much more cheaply by taking the output of this system that was built with stolen art, visual or linguistic, and just repurposing that.

There's a risk that if we don't stand up and defend the rights of labor—and shout out to the writers (WGA), who did the right thing and won!—then we could harm humanity's ability to nurture, appreciate, and benefit from creative writers.

TC: One aspect of this topic that I'd like to discuss is the market economics of it. It's not at all clear to me that AI-generated text is a game-changer for the prose fiction market because, frankly, the amount that you pay the author is only a tiny fraction of the cost of publishing a book. And so you are not actually saving all that much by replacing the author.

The apt comparison, I think, is to Kindle Direct Publishing. When Kindle Direct Publishing started and anyone was able to publish e-books and sell them, some people thought it was going to be the death of traditional publishing. And Kindle Direct Publishing is definitely thriving now; there's a whole ecosystem there. But it has not become a big threat. Traditional book publishing is under threat from a lot of different directions, but Kindle Direct Publishing is not the biggest one. I think AI-generated prose occupies a similar niche in terms of the market. There's an almost infinite supply of self-published books. But the problem of book publishing has never been that we don't have enough submissions or we don't have enough books or manuscripts. In market terms the problem of publishing is really a scarcity of attention: how to get people to pay attention to a given book. It's really a competition for promotion and publicity.

TN: We need more readers, not more writers.

TC: Exactly. If AI could create more readers, that would fundamentally change things. We already have almost an infinite amount of text.

EMB: I think there are some risks posed, though, to the ability of authentic authors to catch attention. For example, the author Jane Friedman found that Amazon was selling LLM-generated books that had her name on them; and these books ended up on her Goodreads profile and for a while she couldn't get either Amazon or Goodreads to take these books down. So she appealed to her following and it got fixed.

But then she said, “This is a problem. I have enough clout I can get them to listen, but what about other authors who don't?” This is how synthetic text can hurt authors.

TC: And that is absolutely an issue, but it speaks more to the way AI-generated text is a boon for scammers, and the internet is an environment very conducive to scamming. It's an example of a broader problem of valuable human-generated text being drowned out in a sea of AI-generated nonsense, which is happening at every level. Websites, news sites, all sorts of sites. But I think that is a different problem than the threat to creativity.

TN: As a bookstore owner, I find the idea of Amazon being flooded with noise and trash—bring it on, because more people are gonna come to your little neighborhood bookstore where we're a trusted vendor!

TC: I was at an event earlier this year with a computer scientist named Anna Rogers, and she phrased this distinction in an interesting way. She said there's writing-as-thinking and there is writing-as-nuisance. And AI-generated text could be a boon for writing-as-nuisance because the world calls upon us to generate a lot of bullshit text. A tool that handles that nuisance aspect of writing for you is, arguably, of some utility. But as readers, you're interested in writing-as-thinking, and AI-generated text is not going to fill that need.

But the problem is that, for an undergraduate, their essay assignments, they—

TN: —seem like a nuisance.

TC: They think writing is a nuisance. What we are trying to get them to understand is that writing is thinking, and we are trying to give them practice in thinking. But students mistake this as nuisance, and that's why they are turning to these tools. It's a gigantic problem that might be unsolvable. I don't know. We will have to see.

TN: There was an article by Vauhini Vara that got a lot of attention. It was a piece called “Ghosts.” It was about her sister's death, and she said: “I've never been able to find the words for this.” And so she had GPT-3 generate some words and she found it useful. Since then she's written a pretty ambivalent piece about it. There was one line, she said, that people always commented on as the best part, and that she loved as well—about holding her sister's hand, the hand that's writing this story—and that was actually a line the robot wrote. Which I found both creepy and fascinating: that the thing that was most compelling was the falsest thing. There was no hand writing that story, but that was the part that felt the truest. It was a very creepy piece to read, partly because it's memoir, it's not fiction, and the computer was making up things about her life that it assumed were real. It was a very ambiguous and interesting piece.

TC: I'm reminded of Philip K. Dick, who wrote The Man in the High Castle using the I Ching. As I recall, he would throw coins and consult the appropriate part of the I Ching to plot the novel. And that is widely regarded as one of his better novels.

So yes, good work can come from things that are not people, and that is a good line, but it's only because of how she was able to make use of it in that text.

EMB: I think there's an important difference between using ChatGPT and the I Ching that way. If you're using the I Ching or some other process where you are randomizing something, it's truly random. If you are reading lots of books and being inspired by the authors you've read, there's an organic sort of natural connection between the authors you read and what you wrote. But ChatGPT is a randomized recombination of other people's writing. In some instances that beautiful idea that you found from the machine might have belonged to some other author. It can be okay to repeat the ideas of other authors—if you've read them. But is it okay without even having read them?

TN: Ted, you've written a lot about artificial intelligence. Can you talk about how that writing has affected your views on this technology?

TC: One thing I think is interesting about the idea of artificial intelligence is, will it (as many allege) help us understand human cognition? I'd say large language models will not. Or to take a different example, is this a research effort that will give rise to a machine that has subjective experience? That's something which I think is interesting from a philosophical perspective. But no, I don't think LLMs will do that, either.

I think LLMs are mostly interesting as a way of studying large bodies of text. It's amazing what can be gleaned from a statistical analysis of a large body of text. So, if that's your goal, then sure, LLMs are great.

EMB: Bear in mind, though, if we want to find out what can be gleaned from large bodies of text, that's only scientifically valid if we know what the text was. With these closed models, we don't know that. And we are probably getting close to grabbing all of the accessible text—but that's still not a scientifically coherent body of text to be studying.

TN: Emily, as you train people going into this field, what conversations do you have with young people to help them make life decisions about what to do with their work?

EMB: Well, linguists do a range of things. Right now I'm teaching a class on the societal impacts of language technology, including what has gone wrong so far, what kinds of regulation and corporate best practices are needed, and how we negotiate those regulations. And throughout the curriculum there are opportunities to look at the question of what could go wrong.

I really appreciate the body of work that gets called value sensitive design. It's a school of thought that's identifying who's impacted by the use of technology. Obviously, the person using it, but who else is having their life experience shaped by somebody using this technology? What design choices do we have? And how do we work with the people who are impacted to make sure their values are represented in those design choices?

And we also think about how you share what you learned. I tell the people in my program: You have some knowledge to share with the broader world. Let's talk about how to do that. How do you take what you've learned and make it accessible to somebody else? How do you find moments to enter into that conversation and feel comfortable doing it? Social media isn't for everybody. What other ways can you think about sharing?

TN: Do you have any hopes for sensible regulation of AI? You talk to lawmakers. Do you think it's at all possible?

EMB: What's giving me hope is that, where in the initial consultations with the broader public I saw lawmakers talking to corporations first and foremost, there now seems to be more representation of civil society.

TN: We're now turning to questions from the audience.

AUDIENCE QUESTION: I was a little bit disappointed to only hear about the risks and negative sides of this technology. . . . Arabic is my native language, and without large language models back in 2007 I wouldn't have been able to understand a lot of English text. So even back then I was using large language models [and working on improvements for] Arabic translation. And as with every technology there are some negative and positive sides, and I think it's important to highlight the nuances of both.

EMB: So I think that there's a lot of benefit to be had in specific technologies that are built for a specific purpose. Machine translation can be very valuable. It's the sort of thing you can build and test. Is this working? How well is it working? Does this technique work better? And it needs to be transparent to the users; a user of the machine translation system needs to know this is not a direct representation of what was said but a statistical approximation that might be wrong. And I think we have some way to go in making that uncertainty apparent to the user. Automatic transcription is another very useful technology. I really appreciate having a spellchecker. So language technology can certainly be valuable. Where I get off the bandwagon is “AI.” If you're calling it AI but it's actually something else, then call it what it is. If you're trying to build general-purpose AI, my question is: What's that for?

AUDIENCE QUESTION: When I read a story, I have a reader-to-author relationship, I trust you, I know you're a human being who wrote this story, and there's a whole lot of value in that. How do writers think about what they bring to writing as a human that is of value?

TC: For most of my writing career, I did not make a living. [laughter]

But I continued to write because, like with most people who write, there's something you need to get out. There's something you're trying to say, and you hope someone will read it.

You cast that out there and hope it connects with someone. The reality of being a writer is that you don't know that you'll ever connect with someone, and you do it anyway. That urge to get out the thing that you need to get out, that will always be there. And so even if in some weird way the market changes—if it turns out that AI-generated fiction actually does sell a lot, well, human beings are still going to want to write. They have never made much money at it, but they hope somebody will pick it up and get something out of it. I don't think that will change.

TN: A question from our Zoom audience. “The death of the author is a way of reading and analyzing literary works as if there were no author, examining just the text. One interpretation of this is that meaning is only interpreted into a work; it is never implicit in the work itself. It means nothing without the interpreter. Does this mean that there is no distinction between authorless works and AI works? How does one distinguish between works when you don't know who or what wrote them?”

EMB: I think there is a distinction because—prior to large language models—we still would assume that anonymous work was written by some person or group of people. So we would attribute a mind and a set of experiences to that person, knowing that we could well be wrong about the specifics. We're doing our best to understand based on the artifact we're looking at. But in the case of a large language model, there's no there there. There's no experience there. I think you could still run the literary analysis. But I honestly think it would be a waste of time.

TC: We are never presented with a book without any contextual information; we always have some context. At minimum, we have an author's name. Publishers and booksellers usually provide much more context. You would never go into a bookstore where all the covers were blank and just start reading books at random—that's not a reading experience anyone wants. We want a highly contextualized reading experience. We don't want to read under double-blind conditions.

AUDIENCE QUESTION: Imagining that in the next couple of years we are probably going to see some fairly widespread societal adoption of artificial intelligence, or the use of large language models as simulacra for intelligence, what kind of impacts do you think that we're going to see on a societal level when intelligence becomes—in a very American term—cheap?

EMB: I reject the inevitability argument. Some people are speculating that essays will become obsolete because students will perceive them as a nuisance and use this tool to write them. Certainly we can implement policies and try to design assignments around it. But we also have the choice to say collectively that's actually not the path we're going down. The price of computing chips is going down, but the price of energy and the consequences of using it are not going down.

AUDIENCE QUESTION: In thinking about AI, where do you draw the line of what it means to be human?

TC: Rather than address the question of what does it mean to be human, let me give you another reason why I think this technology is not intelligent. There's a computer scientist named François Chollet who makes a distinction between skill and intelligence. Skill is how well you perform at a task, and intelligence is how rapidly you gain new skills. I'm not claiming it's a perfect definition, but I find it a really interesting one. There was an experiment a few years ago where they taught mice how to drive. They put mice in these little motorized carts, with contacts to make the cart move forward and steer left and right. They put the mice in these carts three times a week, for eight weeks, and the mice became really good at driving.

So in twenty-four trials, mice were able to become skilled at a task they had never encountered in all of evolutionary history. I think that's a pretty good demonstration of skill acquisition.

Now compare that to [DeepMind's 2016 program] AlphaGo. AlphaGo is obviously super skilled. But if you look at how long AlphaGo or AlphaZero took to reach grandmaster performance, those programs had to play millions of games to do that. Human grandmasters reach that status playing thousands of games. So we might say that AlphaZero is incredibly skilled, but not that great at skill acquisition.

Now let's veer back to the question of what makes human intelligence interesting: humans are really good at picking up new skills, whereas ChatGPT is terrible at gaining new skills. We currently have no idea how to build a program that could learn a new skill in twenty-four trials like those mice did. Suppose you ask a developer: build a program that will learn a task in twenty-four trials, and I'm not gonna tell you anything about the task beforehand. No programmer has a clue how to do that. So I think there is a really long way to go before we can say that these programs are as good at skill acquisition as mice are—let alone as good as humans.

AUDIENCE QUESTION: In what capacity do either of you see yourselves using generative AI in your work, assuming it gets better at accuracy and representation, and what guardrails would you want to see on it to engage with it?

EMB: I don't use it, and I won't use it. And I don't want to read what other people do using it.

The guardrails I'd like to see are around transparency. I think we should all know whenever we've encountered synthetic media: it should be immediately apparent to the human eye. It should also be mechanistically encoded so you can filter it out and not see it. I think we need transparency about training data and energy use. And on top of that I would love to see accountability. I would love to see a world where OpenAI is responsible for everything ChatGPT outputs.

TC: Yeah, I don't have anything to add to that. [laughter]

AUDIENCE QUESTION: What do you think the role of error is in both creative writing and computational linguistics?

EMB: I think the role of error as a human experiencing it is a chance to learn something. Right? If you do something that is wrong in some way, and you have a chance to learn from that and learn what the impacts are and what you can do better, that's really valuable. Errors are valuable because they're authentic errors. So if a student turns in something with an error, and you provide some information to help them learn, that's good. If they turned in some synthetic text that had that error in it, they wouldn't learn anything from it, because it wasn't their error.

TC: I'm not sure if the error is the most salient thing here? Humans acquire skills through practice: through trying and failing and gradually getting better. In that sense, not being good at things is something that students need to be willing to experience before they can get good at things. Using these tools as a way of avoiding that—or trying to get around that discomfort—is not going to give any student the practice that they need to actually get better at anything.

AUDIENCE QUESTION: I also teach undergraduates and they're doing okay. They're not using AI writers. And they're really curious. And so it gives me a lot of hope. And it just made me wonder what brings you hope in these times, in conversations like this?

EMB: What brings me hope is when people value authenticity, and find ways to live that value and express that value.

TC: Broadly speaking I think young people like your students give me hope. The younger generation constantly surprises us with how well they actually cope with these things and understand them, usually much better than we give them credit for. There are definitely a lot of ways in which young people see the world differently than we do, but in general, I think the kids are all right. They are figuring things out. They are up to the task.