John Warner's latest book, More Than Words, is available through Hachette Book Group. Join the conversation on www.perusallexchange.com!
Eric Mazur:
Welcome to a special episode of Social Learning Amplified, live at Perusall Exchange. I'm your host, Eric Mazur, and joining us today is John Warner, author, editor, and nationally recognized expert on writing and education with eight books to his name, including the bestselling The Writer's Practice and Why They Can't Write. John has spent over two decades helping students, teachers, and readers rethink what writing can be. His latest book, More Than Words: How to Think About Writing in the Age of AI, challenges us to see AI not as a threat, but as a chance to discover, or rediscover, what makes writing deeply human. John, thank you so much for joining us today.
John Warner:
Oh, very much my pleasure.
Eric Mazur:
This podcast is not only part of the Perusall Exchange, but also connected to a Perusall Engage event that is running until June 8th. The Perusall Engage event is an interactive, author-facilitated communal reading event. Think of it as a virtual, asynchronous book club where you not only get access to John's book, but also have an opportunity to directly engage with John and brainstorm with like-minded educators about leveraging AI in teaching reading and writing skills. John, you have almost 400 participants in the Engage event and close to 800 people registered for this recording. You're popular. What are some of your takeaways from participating in the Engage event?
John Warner:
It's stuff that I've been hearing pretty frequently as I travel around talking about the book or being invited into institutions to discuss how we're going to respond to the challenges of teaching in a world where this technology exists: there's a whole host of different viewpoints on this technology and how we can or should or shouldn't use it, and the differences in those viewpoints tend to rest in the root values that you bring to the educational experience. One of the things I urge people to do when I give my talks and presentations or all-day seminars is to first work from those values, the things that we think are most important. This is one of the reasons why I talk about large language models, or specifically the initial appearance of ChatGPT, as an opportunity: it allows us to see that here is a machine that can create an output that looks a lot like what we ask students to do.
We obviously don't think students are the same as generative AI, a large language model. So if we think students still doing this thing is important, remains important, how are we going to ask them to do it? What are we going to value in what students do as they do it? These are all questions about education that I've been interested in for a long time, pre-ChatGPT; it now forces all of us to confront these things. So while I have my share of worries about where we're going and what we're doing, I'm glad that the conversation is happening, and it's been fascinating to see the different perspectives in the event.
Eric Mazur:
I do at some point want to touch upon your worries, but let's talk about the opportunities first, because you challenge us in your book to think about generative AI as an opportunity rather than a threat. And I know we have quite a few listeners who are teaching writing and who are really eager to know how to leverage ChatGPT and other large language models effectively. So can you elaborate for listeners on the opportunity afforded by generative AI?
John Warner:
In my view, the most important thing it does for us is provide a lens into the work we do teaching writing. Again, this is an issue that I've been concerned with, and writing about, for years, particularly in my book Why They Can't Write: Killing the Five-Paragraph Essay and Other Necessities, where I looked at the kinds of writing experiences students were having, primarily prescriptive, in order to satisfy very limited and uninspiring tests of proficiency, and tried to urge us to move toward an approach to teaching writing that is more rooted in a genuine rhetorical situation, where we see writing as an experience of thinking, feeling, and communicating. We now have this new technology that can't think or feel or communicate, but it can generate simulations of those things, and it can help shed light on what happens when we are writing versus what I call the technology's automatic syntax generation.
And quite honestly, I am on the skeptical side in terms of what I see this technology being used for in education out of the gate, but I've seen significant evidence, particularly recently, of really thoughtful, creative folks who are using the tool as a way for humans to collaborate with each other while being able to reflect on what we're doing through interacting with the technology. And I think that's where the most promise is: not one-on-one, a student with a chatbot, but groups of students, two or three or a classroom, who are engaging with this kind of technology in a way that allows them to think both critically, as we often do in academia, and reflectively about how they view their own work and what they're doing when they write. I think it can be very useful for that, as long as we keep that human need at the center of the equation.
Eric Mazur:
In educational settings, whenever new technologies appear, they're often seen as a shortcut to learning. Just think of, say, the graphing calculator: the initial reaction was that we should ban them because students would use them to cheat. Then came search engines, and banning students from accessing Google during homework or exams, and in the end people came around and realized that the problem is not the technology, but really the way we assess our students. And I sort of expect the same to be true for large language models. So where do you stand on this issue? Is ChatGPT just another word for "cheat"?
John Warner:
No. Leading with the sort of academic-integrity approach to the challenge of ChatGPT, I understand why, absolutely understand why, because the stuff appeared for most people out of nowhere in November of 2022 and immediately looked like something that students would use to do what I call an end run around learning, which is not new behavior among students. Students doing something other than what you want them to do is not new, but the automation of this became troubling. So it's not like I gainsay those choices, but to your point, I agree that in the long run, simply treating this from a cheating lens, asking how we stop them from using it, is not really going to get us all that far if we don't want students to use this technology in ways that we think are counterproductive.
I see it very much as a demand-side problem. We have to help students see why using it is not in the interests of their own learning or their engagement with life. So the challenge really is probably primarily around assessment. For sure, the kinds of things we ask students to do as they learn, the experiences of learning, particularly in writing, matter, but ultimately it really is probably more important to figure out how we are assessing learning, how we are assessing the human aspect of this. The calculator was a tool of automation; ChatGPT is a tool of automation. There are lots of tools of automation that we have successfully integrated into writing. I couldn't have written More Than Words without being able to search information on the internet. It simply couldn't have happened under the timeline in which I produced the book.
I probably wouldn't be a writer without the tools of automation of a typewriter and then word processing, because my handwriting was always so poor in school that I could not write in a way that allowed me to express my thoughts. So automation is not de facto bad in and of itself, but where we inject this technology into a learning context, that becomes a real challenge. It's different from a business setting, where we can say efficiency and speed are values that we can associate with a more successful outcome for a company. Speed and efficiency are not educational values, so if we're going to use this technology, we have to figure out how it gives rise to other values as we do this work.
Eric Mazur:
And maybe the problem lies in the fact that it's difficult right now to establish the values, because you don't yet know how, in society outside of an educational setting, large language models will help us be more efficient and better at our jobs, or how they will change even the job market. We can only hypothesize about that. Let's say that in certain professions ChatGPT becomes not the end result, but the starting point from which people develop their work. Then I think it's really important to teach people how to use large language models to become more efficient. I don't see that happening yet in education.
John Warner:
It's a real challenge because we know what the technology does, but we have very little idea of how the technology can be useful to us, unlike most other highly tested, developed consumer technology. I think this is very important. I have a past, which I talk about in More Than Words, as a market research consultant and researcher, and when companies create a new consumer technology, they've done lots and lots of work to determine the market and use for that product before anybody uses it. None of that happened before the introduction of ChatGPT, because my guess is OpenAI didn't really anticipate the kind of response that the technology got. So in terms of preparing students for a future, or the cliché that runs around, "AI isn't going to take your job, but somebody using AI might," I think these are unknowns. For me, I want to go back to the roots of what allows people to make use of technology and other tools, which is the ability to think critically, the ability to read, the ability to communicate.
Obviously I'm a writing guy; these are all skills we get to practice with writing. But I think it's true that the people who are using generative AI technology productively in their work now, the vast majority of them, had never used it before November of 2022, and they had a whole base of what I call a practice. I use my lens of the writer's practice: the skills, knowledge, attitudes, and habits of mind of the practitioner. And if you have those things, you can look at a new tool and say, ah, this is how this thing can be of use to me. So it's not that we all of a sudden upend our practice because technology has arrived. I have this thing that's solid, that works, and if I can be mindful about what I do and I understand what I do, I can make use of the technology.
It's one of the reasons why, despite significant experimenting, I barely use this stuff: the way I write is so ingrained after all these years that it has marginal utility for me. I use it as a sort of low-level clerk to alphabetize things and sort things and go through large reams of data and surface interesting things. That's what I think we should be training students to do. I read somewhere this morning, I can't remember where, that art history majors had better employment last year than finance majors, and I think it may increasingly be true that people who are conditioned to think broadly and critically across different domains have an advantage in a world where these tools exist and are also evolving, right? This technology is not fixed. It continues to evolve and change, and even the ways you prompt a model this week may stop working when they release a new version. So if you're going to keep prompting them, you have to know how to think in ways that allow you to adjust. So it's a big challenge. I think you're a hundred percent right that part of the challenge is we don't actually know how businesses are going to use it, or whether it will have different impacts on different businesses, but I do have a strong sense that education, that learning, is never going to go out of style.
Eric Mazur:
Yes, I think that is so true. So out of curiosity, as both a writer and someone who teaches writing, what is the most innovative way in which you've seen large language models being used, or used them yourself in your classes, to teach writing?
John Warner:
I haven't seen anything that I consider truly innovative in teaching writing. What I have seen that is interesting to me, and this is for younger grades also, I should say, is experiments in using AI as a sort of background device that helps teachers sort through a large array of real-time information coming from their students, either through writing or, since this can also be applied to other subjects, in something like math. I've seen experiments that strike me as useful where, again, it's two or three students interacting with an AI tool that creates an experience allowing them to think critically, collectively, about what the AI is producing. Unfortunately, I've seen much more counterproductive deployment of this technology, which I think is primarily the byproduct of...
Eric Mazur:
"Write an essay for me," or...
John Warner:
"Write an essay for me," or a chatbot that coaches the kind of prescriptive process that I've been fighting against for years.
Eric Mazur:
The five-paragraph essay.
John Warner:
Exactly. So the distinction I make between what I think will be productive versus unproductive uses is the difference between schooling and learning. Those applications which help students just do school better, get a better grade on an assignment, or march them more efficiently through a particular sequence of events, I think ultimately are not going to do much for them. Those that create a unique opportunity to engage with and reflect on something interesting might stoke student curiosity, might keep them more engaged, may trigger some creative notion in them that they might not have had before, as we've seen with some experiments around young kids in creative writing, where the AI can give just a little, it's not even a nudge, it's less than a nudge, a tiny little thing that gets the student to just keep going. I think that has real benefits. The idea that this stuff is going to teach anybody anything as a teacher would is, I think, probably a fantasy, and it's a fantasy being furthered by people who have designs on automation in ways that I think those of us who have taught would find objectionable. But as technology that could help do that work, I think that is real. I do have worries about the market incentives to use this technology to replace important human labor with inferior automated versions rooted in this technology.
Eric Mazur:
You mentioned counterproductive use. I think part of the reason for that counterproductive use is really our assessment practices. I mean, there's no question that it's innate in human nature to want to learn; we're essentially wired to want to learn, and in a sense our assessment practices beat it out of students. There's a wonderful quote Maria Lim pasted in the chat, from Mark Twain: "I've never let my schooling interfere with my education." I think it's a great quote; I hadn't heard it before. Shifting gears maybe a tiny little bit: throughout history, human skills have come and gone, right? We know that, from making arrowheads from flint to speaking Latin to long division, and typically we complain, and then we adapt, and then we move on. You write that writing is a skill that's essential, really, to our sense of selves. But consider how recent writing is in the grand scheme of human history. Yes, we have written texts that date back thousands of years, but it was just a tiny fraction of society that was actually engaged in writing, and it's really only maybe the past few hundred years that writing has become a societal enterprise, so to speak.
So given how recent writing is and how fast AI is evolving, do you imagine a future where we don't write ideas but generate them, with AI handling the articulation of those ideas? In other words, we accept what technology can do and focus our efforts on what technology can't do that well? I'm just brainstorming.
John Warner:
Yeah, no, anything's possible. One of the things I have to remind myself, well, I don't have to remind myself; I have no trouble maintaining my humility. I am Midwestern by birth and nature, so I've been trained not to think that I'm that special or that my ideas are so great. But you make an excellent point in that the advent of writing as a mass activity is relatively new. The centering of writing as an indicator of learning or intelligence or something like that is even newer, and I'm very open to the notion that we may have allowed what I would argue is a sort of simulated writing to carry too much weight in the kinds of assessment we want to do of students in school. There are other ways than writing to assess the things we want students to learn and know.
It became a kind of mass belief that writing is the best way to assess learning, and that's not always the case, depending on what we want to assess. We could be looking at a shift toward the end of writing as exchange, where ideas are transmitted through mediums like podcasts or video from now on, and writing as a medium for the exchange of ideas becomes outdated. I shudder at that thought, right? Because reading and writing have been so central to my life. But it's possible. There are two things I cling to about writing. One is writing as thinking, as an experience of thinking, and it works in two dimensions. We have a notion in our head that we're trying to put on the page, trying to capture with words, with language. But all of us have also experienced a second dimension of writing as thinking, which is that while we're writing, the thought changes shape, and something surfaces from our subconscious or unconscious, or just comes from who knows where, and we have a new idea that was not there when we started writing. That process feels to me inherently human and inherently important.
Part of my belief is because of how important it is to me. I say some days I stay sane because I can think something, I can write about it, and I can better understand it on the other side of that, and I find a certain reassurance in that. Writing is not the only way to think; there are lots of ways to think, but I think it's a great way to think.
And the second thing: I don't just think, I know, having worked with many students over the years, that achieving the sense of agency and self-belief that comes through believing in their own expression and writing to audiences has a powerful effect on their self-esteem and their self-image. It's not a panacea, it's not for everybody, but one of my goals over the years has been to make sure that every student has an opportunity to at least try it, the same way that when I was a little kid, they still made every student try an instrument. In grade school, in fourth grade, we all had to try a clarinet or a saxophone or a bass drum or a flute or a piccolo or whatever, and I was horrible at it. I had the clarinet, and I never got past making a honking noise with that thing, but it did introduce me to music in a way that resonates today. I still play guitar, I play the drums; my Allman Brothers cover band has a gig tonight. The chance to be introduced to music was important that way, and I feel that way about writing still. Those potential changes you talk about will come well after my lifespan, so I don't worry about them too much, but I don't dismiss them. Anything is possible.
Human development, societal evolution, is amazing if you take a long enough view, and I find a certain amount of comfort in that, honestly: the struggles that are real today are, in the grand scheme of things, just a mere blip in the timeline.
Eric Mazur:
Right. While you were speaking, two things came to mind. The first one is that I've been in education for over four decades. I started the whole flipped-classroom movement some thirty years ago, and I always had this motto: the person who learns the most in any classroom is the teacher. That was the underlying assumption for peer instruction and students helping each other. Then later I started writing a textbook, and what you said resonated extremely strongly with me, because while writing the textbook, I discovered that the person who learns the most is the author of the book. You learn so much by reflecting on your writing. But now, in terms of skills atrophying, I'm reminded of this because we're at the end of the spring semester, and what happens at the end of the semester? Dozens and dozens and dozens of requests for letters of recommendation.
It's a chore, and if I have 20 students from the same class asking me for letters of recommendation, it's hard for me to bring in any variability. So I could imagine a future where I just make a bullet list of items, feed it to ChatGPT, tell it to write me a letter of recommendation, and send the letter off to somebody else, who then takes the letter, uploads it to ChatGPT, and says, "Generate for me a bullet list of the important items." You mentioned the word communication, so maybe this is just another way; maybe ChatGPT is transforming how we communicate as humans.
John Warner:
This is another wonderful example of how I like to think of the technology as a lens, right? Thinking about those recommendation letters, and I'm with you, they all cluster at certain times of the year, often the worst times of the year for people who are teaching, and on many occasions they feel sort of pro forma: this is just something I need to do to prove to another human being that the student could get somebody to recommend them for something. And so the temptation to not spend a lot of time on them, or to not feel that they're important enough to spend a lot of time on, is real. And the temptation isn't unreasonable: the actual instrumental ends of that piece of writing do not demand that somebody sit down and think and feel and communicate about that student to fulfill the occasion. What I would say, though, is a couple of things.
One is that, as a lens, we should be using this as an opportunity to reflect on the writing we may do in our lives that I call BS writing. The first draft of the book had a chapter, which I dropped, specifically called BS Writing, except I did not censor the profanity. It was based on David Graeber's book Bullshit Jobs, in which he essentially theorized that a lot of the work that happens in the world could just go away and nothing would change. And there probably is a lot of writing like that in the world right now. If we think of an agentic AI world where my AI email can write the letter and somebody else's AI email can respond to it, my question is: does that exchange ever need to exist at all? And so for some of these recommendation letters, we could just think of a different or better way of fulfilling the actual underlying need of that exchange, rather than writing. If we do want to write those things, then maybe it's an assessment problem. The things that we want from these letters...
Eric Mazur:
I was just going to say, why do we have letters of recommendation? Because the assessment doesn't really tell what we...
John Warner:
Know, so we should rethink what we ask for in these
Eric Mazur:
...letters.
John Warner:
Right? Absolutely. Or what I did many years ago, when I was overwhelmed by these things before ChatGPT could take them off my hands: I gave myself an assignment to make it interesting to me. I was going to tell a story, an anecdote, about every student I was writing a recommendation for: a specific anecdote about something they worked on, something that happened in class, something they said. And this required me to engage in a reflective process about the student and my own teaching. And to your example of writing the textbook being where you learn everything: writing those recommendation letters is ultimately what kicked off what became my first books about writing, which was reflecting on the teaching I had done at the time I had those students. There were times when I would call up the syllabus I had that semester, and I'd look at all the comments I'd written on my students' writing, and I'd be like, oh, geez, you need to apologize when you send this, because I'm better now.
Right? I don't do that anymore. And it helped me see the challenge of teaching, solving the problem of teaching and learning, as this iterative process at which I had been working for many years, in which I was getting better, and through which I had learned something I could share with others. So I wouldn't want to get rid of recommendation letters entirely, because it would alienate me from my own work; it's a way to get in touch with my work. On the other hand, we may have a system that is way too burdensome without a lot of reward, because we're not assessing them correctly. What do we truly want from a recommendation letter? This is a good question to ask anytime we ask for them. And I would say this about institutions: if a university is hiring somebody, we say, we need an application letter, we need recommendation letters. What do you really want from that? Not the performance, right? These letters we write to apply for jobs are often a sort of positioning and performance, so you appear to be who you think they want you to be. Is that what we generally want from folks when they write? And the answer is, maybe not, depending on the circumstance.
Eric Mazur:
Yeah. I so agree with every word you say, and I think we could go on about that, but there are quite a few great questions in the chat. So let's hear from the audience. Samira Mai writes, and I'll read, she actually has three questions in one, but I'll read the first two and postpone the last one, because I have a connected question: What skills should we focus on developing in students that AI can't replicate? In other words, as a writer and as an educator, if you were to evaluate ChatGPT as a writer, what shortcomings have you found?
John Warner:
Those are...
Eric Mazur:
So you can address them together, let me ask you the second question too. Do you believe AI will change the definition of what it means to be a writer?
John Warner:
Let me take the second question first. I actually do think so. Some of it depends on the domain we're talking about, but let's say in publishing, writing for audiences in a commercial marketplace: we will see, over time, AI-generated books, books that were a combination of AI and human, and books that are entirely human. And I think consumers will begin to expect truth in labeling of those things. I know which of those books I'll be reading: the human-generated ones. I think that's sort of inevitable. As to whether we will call the people in that middle ground writers, maybe the semantics of the labels don't matter that much to me; what matters more is what we think of the activity, what's meaningful about what we're doing. In terms of the skills:
I have chapters in More Than Words: writing is thinking, writing is feeling, writing is communicating. Those are the skills that I think are most important, and the way I want us to be sensitive to those things is through the lens of experience, how we experience these things as humans: that thinking where an idea comes to you that you didn't have when you started writing. This is something I assess in my students when they write; it's part of the assessment in the class. I want my students to be able to concentrate for extended periods of time, to focus while they're writing. I want them to be able to read critically. You had Maryanne Wolf as part of this larger event; I highly recommend her work on deep reading and the meaningfulness of deep reading, and even on the ways we've lost that skill a little bit, both among those of us who grew up at a time when we developed it, and in the ways it's been deprioritized in school, particularly in K-12 settings.
Again, that's because of assessments, because of the things we ask students to do for the purpose of assessment. I think thinking, feeling, and communicating will not go out of style. It's part of what holds the human fabric together, and we will be better off if we continue to nurture these things. And as part of that, Eric, you asked what's not good about ChatGPT's writing. For the most part, it's boring. You can prompt it to have a little more personality, depending on the model and the prompts and this kind of stuff, but it does not truly have that. I put it this way in the book: ChatGPT is a sort of flattener. It has a flat affect; ultimately it's a flattened intelligence. Humans have very, very spiky intelligences. We have these very specific experiences of life that, if we can draw them out, become interesting to others.
So I think part of what we have to help students develop is what I call, in More Than Words, taste. Taste is not what is good and what is bad in an objective sense; taste is what I respond to as a person, and then understanding why, so that we become knowledgeable about and attuned to our own tastes. If we are fans of opera, we know why, when we see different operas, one moves us and the other does not, why one performance is meaningful and another is not; we are attuned to why that might be the case. I think that's what we want to try to achieve in writing. It's a kind of radical departure from what we ask students to do when they read and respond to texts in school, but I think that's a way of keeping us human.
Eric Mazur:
Something that you said prompted this question in my mind: how good do you find ChatGPT is at evaluating writing?
John Warner:
Again, it depends. You can feed it a rubric and criteria, and it will produce feedback that looks very much like what a human applying that same rubric would produce. My question and caution is to examine that rubric and ask if those are the things we care about, if that's what we want to be assessing. The other caveat I have to give is that when I am using this technology, I cannot shake my knowledge that what I am experiencing is a simulation, that this is something generating words just on the basis of probabilities and adjacencies. And don't get me wrong, that's amazing. The fact that this technology works routinely blows my mind, even after having thought deeply about this stuff for a couple of years plus now. But I know what it's doing, and I know that that is not a unique human intelligence responding to me.
So my interest in its feedback is zero; I cannot muster it. It really depends on what kind of feedback we're asking for. Is it good at saying this could be more concise? Absolutely, because ultimately that's a kind of pattern-based analysis of expression and syntax. Do I care? I don't care about that. And again, part of this is the nature of my professional work: to the extent that I can get paid for my writing, it's because I sound like me. If I were in other contexts, I would make different use of this technology. If I were still in market research and I had 500 open-ended responses that needed to be looked at and coded, you bet I would be using a large language model to do the initial lifting of that for me, because it would greatly shorten the duration of that work without compromising what ultimately happens to that data, which is to be critically thought about by me.
I don't want to give the impression that this stuff is garbage, that it's not useful for anything; it's useful when we're very careful about the context in which we're using it. So when we say feedback: I would never assign students a piece of writing that would only be given feedback by an AI. I just think that's a betrayal of the compact we make with students: if I give you writing to do, an audience is going to read it. That doesn't have to be a teacher; it could be a peer, it could be another audience, it could be the public, it doesn't matter. But you can't communicate with a large language model the way you can communicate with a human, because it can't think, feel, or respond with intention. It's responding with probabilities, and I think that's important to keep in mind.
Eric Mazur:
Of course, we don't know exactly how the neurons in our brain connect and work. That might also, deep down, be probabilities. And...
John Warner:
This is a good point; I do like to address this. It's true that our cognition may be more mechanistic than we know, or will ever know, but what I would say is that we do not experience our cognition as mechanistic. In the same way, we might all be living in a simulation right now and not know it, or we might not have free will, or something like that. I can really mess my head up if I think enough about the fact that I can't prove I have free will, but I can't live my life day to day unless I believe, have faith, that I have free will. So I find those questions about the ultimate nature of the engineering, if you will, of our cognition really interesting, and I'm glad people are investigating them, but I think they have ultimately limited utility for the day-to-day, moment-to-moment lives we lead. I experience the world through my senses, through reflection, through heuristics, all these things my brain is doing that will remain mysterious to me. And part of me is happy to let it remain mysterious, so that I don't drive myself batty wondering about it.
Eric Mazur:
It all ends up being a question about consciousness and sense of self.
John Warner:
Well, and your background is in physics, right? Ultimately, we know a lot about physics. We've answered so many questions about it, but I assume some of it still remains mysterious.
Eric Mazur:
Oh, absolutely.
John Warner:
To this day, yeah.
Eric Mazur:
I mean, things like consciousness are never addressed by physics. In fact, last year we had Stephen Wolfram as one of our keynote speakers, and I posed to him the question about large language models and consciousness. Of course, you can speculate about that and never reach any agreement, but it was a very fascinating and interesting discussion. I see that we only have about 15 minutes left, and I want to touch on two things. One is a question that Maryanne Wolf posted in the Q&A, but I also want to come back to something you said all the way at the beginning of our podcast. You were talking about concerns, and I noted that in your book you talk about the fact that just a very small number of companies control the technology, and about the potential for the technology to be used to produce more and more online content. Right now it's being trained on human-generated content, but soon it will be trained on content generated by large language models, which will sort of crowd out any human-generated writing. So what keeps you up at night nowadays regarding LLMs?
John Warner:
Yeah, so we had a great example of this yesterday that a lot of folks might have seen, where the Chicago Sun-Times newspaper published a summer supplement with something like 15 new books you should read this summer that was LLM-generated and hallucinated most of the books. And for the books that were real, it had descriptions that were not accurate. They found the person who had done this, and he confessed, though it was obvious, there was no confession to be had; he was dead to rights. And what has subsequently happened? That thing is out in the world. You can go to Google search and ask about a novel called The Rainmakers by Percival Everett, which does not exist, and the Google AI will generate a response suggesting this novel does exist, right? So here's an almost instantaneous example of information pollution predicated on AI slop that was produced for commercial purposes to sell into a newspaper.
And really, nobody ever looks at this stuff in reality, but it allows the Chicago Sun-Times to look like they have this big, robust summer package that they're putting into the world. This worries me. The fact that we may begin to accept AI slop or content in place of human-generated writing is, I think, a potential danger. I'm concerned about the political and social designs of both the people who lead the companies developing this technology and the companies themselves. Again, I don't think of myself as a conspiracy theorist, but it's not a huge stretch to be concerned about a sort of tech oligopoly in a world where they have access to this technology, or are putting it out there. I opened my Microsoft Office suite, updated to the newest version, and it told me just this morning that now, anytime I open a document in Microsoft Word, it's going to give me, unbidden, an AI summary of my own document.
I don't want this. I have no need for this. This is my own writing; why do I need an AI summary of it? But the idea that we're going to habituate to it and normalize it without human agency or regulation or this kind of stuff, that worries me. AI FOMO in institutions, fear of missing out, where institutions think they have to leap on something because it's the future, and where they'll spend a lot of money on a lot of stuff that isn't going to work, worries me too, just because I think it's a waste of time and resources. What really worries me is our own natural, understandable human tendency to do things without proper foresight, to leap on these bandwagons. The very first thing I wrote about this technology, about a week after ChatGPT came out, was at my newsletter, and I said something like, and I might be paraphrasing myself, ChatGPT can't destroy anything worth preserving, not by itself, but humans sure can, as we've shown repeatedly. I just think we have to keep ourselves from holding a deterministic view of this technology. That's really the thing that worries me: if we give in to a deterministic view, we will cease to have our human agency over it, and that's where we get ourselves in trouble.
Eric Mazur:
The FOMO that you mentioned connects to a question in the Q&A box from Maryanne Wolf, not to be confused with the Maryanne Wolf we mentioned earlier: How do you recommend we start conversations with faculty and administrators who seem to be uncritically embracing AI in the classroom and recommending its use to students, usually as a way to be more efficient? How can we push back against this attitude at an institutional level?
John Warner:
Yeah, this is that values discussion, where it really is: what are we trying to do here? What are the goals and aims of the work we do at this particular institution? If we can start there, even before we start talking about AI, and if we all agree on, say, learning, to give it a very broad term to begin with: we want students to learn. Okay, well, what does learning mean? How is a student who has learned something different from one who has not? How do we measure that? How do we value that? How do we track it across students and over time? And, you know this better than anybody, when we look at many of the measurements we use to track student progress, they come up empty. So the notion that getting through this process more quickly means more learning, or that using the technology in a way that allows them to get a higher score on something means more learning, I just think has to be probed and questioned and understood. And then there's the cross-pressure on administrations. I wrote about this in another book of mine from several years ago called Sustainable. Resilient. Free.: The Future of Public Higher Education, which I wrote during COVID, when I thought higher education was going to collapse.
And I wrote it from the perspective of somebody who's taught in institutions for many years, but always as a non-tenure-track lecturer, who has a particular kind of status in these things. And I said there's a disconnect between what we say the mission is, teaching, learning, research, and the operations of the institution, which in many cases primarily appear to be: how do we get as much money as possible so we can do this stuff that we say is part of the mission? It's not like we can separate those things, because we need money to operate these institutions; we're discovering now what happens when some of this money is threatened in different ways. But when we allow operations to have near-total primacy over the mission, which created things like the adjunct underclass that I was a member of, we cannot say that the best teaching happens when it's done by people who are precariously employed.
We simply know that's not the case. So if we say we believe in teaching, but we have to pay these people less in order to maintain our operations, we're making a compromise between mission and operations. Maybe that compromise is necessary, but if we never examine the compromise, we don't really know what we're doing. And I think something similar is happening with this technology. If we embrace this technology because we think it's necessary to our institutional operations, we may wake up one day and realize we've left the mission way behind, that the mission is way over there and we've lost sight of it. What that's going to look like will be different at every kind of institution; no two institutions are identical. This is a problem that is best discussed and understood at the individual institutional level.
Eric Mazur:
So true, also, what you said about teaching and the perceived value of teaching at the administrative level of institutions. It's a problem I've been trying to fight for a long time in my career. Before we wrap up, and to end on a positive note: what is one actionable takeaway you hope our listeners get from More Than Words, something they can try in their classrooms tomorrow?
John Warner:
So the last three chapters of the book are titled Resist, Renew, and Explore, where Resist covers the things I think we need to avoid, and Renew the things I think we need to sort of revivify in order to make space for what's at the end, which is Explore. So that's what I really encourage people to do: truly explore, and explore in a way that hits the three Cs, communication, community, and collaboration. If we can do these things together, and we see it fundamentally as something we can investigate as a collective, as a community of shared values and shared goals, I have sort of no doubt that we can find a way to use this technology, given its apparent power. It's where we decide that these things are a solo pursuit, or that we don't have time for the very real challenges of collaboration and community, that I think we'll get ourselves in trouble.
So I think that's the thing. Everywhere I go to talk, when I go to a symposium and I see other people speak, every success story is multiple people working together in conjunction with the technology, as opposed to interacting solo with a large language model. That's what I encourage people to do. I didn't really intend this answer as a plug for the Engage event for More Than Words, but it's one of the things I've been cheered by: all of these people, several hundred people, together reading and commenting and thinking about this stuff. I've learned a ton already, as somebody who's thought about these issues as much as anyone. I've had to do some rethinking based on what people have said, and I think that's kind of our collective goal over time.
Eric Mazur:
I mean, I think you pointed out another irony of the educational system, right? It's completely focused on the individual, and then we deliver the educated people to society, where they discover: I'm not working alone, I have to work with others. John, thank you so much for this incredibly thought-provoking discussion. It was just a delight to have you here.
John Warner:
Oh, I had a great time. I'm happy to talk about this stuff anytime anyone will have me.
Eric Mazur:
Thank you, audience, for listening, and I invite everyone to return for our next episode of the Social Learning Amplified podcast. On behalf of all of our listeners, John, thank you again for joining us today. John's latest book, More Than Words, is available on the Hachette Book Group website, as well as on Amazon and at other bookstores. And until June the eighth, so you have a little bit more than two weeks left, you can join John in a reading of the book at perusall.com/engage. You can find my Social Learning Amplified podcast series at perusall.com/sociallearningamplified (all one word), as well as on Apple, Spotify, and other podcast distributors; subscribe to find out about other episodes. Social Learning Amplified is sponsored by Perusall, the social learning platform that motivates students by increasing engagement, driving collaboration, and building community through your favorite course content. To learn more, join us at one of our introductory webinars; visit perusall.com to learn more and register.