Eric Mazur (00:00)
Thank you for joining us today for this episode on empowering active learning in the Social Learning Amplified podcast series. I'm your host, Eric Mazur, and our guest today is Marc Watkins. Marc is Assistant Director of Academic Innovation and a lecturer in Writing and Rhetoric at the University of Mississippi, where he also directs the AI Institute for Teachers of Writing. He co-chairs the AI working group within his department
and serves as liaison with other departments on campus exploring generative AI's impact on teaching and learning. He blogs at Rhetorica, a Substack of notes on culture, AI, and education. On it, he states that these are notes from a non-traditional student turned educator. Marc, thank you for being here today.
Marc Watkins (00:54)
Thank you for having me, Eric. That's very kind of you.
Eric Mazur (00:57)
That quote from your Substack piqued my interest. In what way were you a non-traditional student?
Marc Watkins (01:05)
Well, I went to college quite a while after high school. I worked and went to night school at a community college in central Missouri, so my college pathway was not the standard one of going to a traditional residential campus and living on campus. I worked and went to school at night. And when I got done with community college, I went to a commuter school.
That's where I got my undergraduate degree before I went on to grad school. So in terms of being a non-traditional student, it's just a bit of a different pathway to arriving at a traditional college degree. And now, when I sit down and talk with my students, they're sort of like: hey, so you didn't actually live on campus? You've never been in a dorm? And it's like: no, I never lived on campus or in a dorm. I lived off campus my whole life. And that's
something we talk about as an interesting icebreaker on the first day of class.
Eric Mazur (02:05)
I can imagine, because I assume that some of your students are like that too. And how has that non-traditional pathway shaped your current views on education?
Marc Watkins (02:11)
Yes.
Well, I think it shows, especially to a lot of our students who are coming here from the workforce, who may have had children and then decided to pursue an education, that there isn't just one pathway. You can actually get there in many different ways. And there isn't just one way of becoming an effective learner, either.
It takes lots of different people and lots of different timelines, and we bring that into our in-person teaching, our online teaching, and our hybrid modalities as well.
Eric Mazur (02:54)
Turning to your Substack: in May this year, you posted a blog article entitled No One is Talking About AI's Impact on Reading. I recently saw a posting that pointed out something I did not know, although I suspected it. Gallup did a poll, even before the advent of ChatGPT and AI, finding that Americans on average read two to three fewer books per year than they did just 10 years ago. So it seems there were already other forces at play before AI that impacted reading. I presume the internet, the overflow of information, social media... we can speculate. But in that blog post, you emphasize the importance of reading as a foundational skill. And you suggest that we should introduce, and I quote you here, friction into the reading process to ensure that students engage with the text. Can you elaborate for our listeners on what you mean by that friction and how we could best implement it?
Marc Watkins (04:11)
Yeah, and to emphasize your point, reading as a cultural practice has really changed over the last 20 years. I do think technology before AI definitely impacted that. That particular post is directed at these new generative technologies like ChatGPT that are coming on board, which can not only summarize a text but also adjust its reading level to whatever you desire.
The newest feature that just shipped from ChatGPT this week, now publicly available to all users, is called Canvas. It has a little dial that changes the reading level of whatever generative output you're working with. You can go from an eighth-grade reading level to a kindergarten reading level. You just click a little button; you don't even have to prompt it. So when we talk about friction, what we mean is this:
when I assign a text for a student to read, and I'm just giving you my own instance as an instructor, a lecturer in writing and rhetoric, when I assign a student what might be a challenging reading, it's not to dissuade them from learning. It's to invite them into practicing their close reading skills so that they develop and become better. That's generally what we mean by friction in the process of learning: we scaffold difficulties
in ways that actually promote students retaining and retrieving knowledge rather than simply offloading that to generative AI. That's really the big thing I wanted to put on people's radar back in May. And now, of course, everyone is aware of generative AI's ability to impact reading. We've got Google's NotebookLM, which lets you upload up to 50 sources, or I think 4 million words per notebook, for free, and it will produce automatic summaries and syntheses of that material.
The big question I have isn't about cheating, it's not about academic misconduct; it's about what that kind of shortcut does to a person's critical thinking and close reading skills. That's why I think it could be valuable to add some of what we would call friction to the process, friction that keeps
those skills present and on the surface for students to practice with each other.
Eric Mazur (06:28)
And so what kinds of ways do you produce that friction, to use that word?
Marc Watkins (06:36)
Well, yeah, it's a great question. Friction is one term. There's another term we use: desirable difficulties. Yeah.
Eric Mazur (06:43)
Well, I've also seen that term of yours, desirable difficulty. I don't know where it comes from.
Marc Watkins (06:48)
Yeah, yeah. That's from Robert Bjork. He's a learning theorist in California. He coined that term 30 years ago, well before AI, probably well before the internet really took off. He was talking about how different types of desirable difficulties occur in learning, difficulties that are there by design to help students learn. So we talk about reading as a desirable difficulty, or friction. In an in-person class,
we would assign close reading or annotation. I'm teaching online this semester, so if I wanted to teach that same skill set, we could use different types of social annotation. You can do that through Google Docs, you can do that through Perusall, you can do that through lots of other tools as well. The whole idea is that it's designed to be an invitation for the student to slow down and take some time to explore the reading more closely
and give their directed thoughts to specific passages. We don't want them to just copy and paste the text into ChatGPT or whatever AI interface they're using and get an automatic summary. The thinking now, and again this is all so new, we're only two years out from the public release of ChatGPT, is that if this does become a standard method that students use, it
could very much de-skill some of the skills we talk about with reading and critical thinking, and make it much more difficult for them to navigate the world around them. And we don't want that to happen.
Eric Mazur (08:31)
Of course. I mean, throughout history, technologies have been introduced to make us more efficient, and at the same time certain skills that were present before atrophied completely. In the Stone Age it was important to know how to work stones; now hardly anybody could do that. Closer to my own STEM field:
when I was in high school, which wasn't that long ago on the timescale of human history, you had to learn how to use logarithms and log tables in order to do calculations. And the slide rule. Now nobody owns a slide rule; most people in the street probably don't even know what a slide rule is anymore. And people's understanding of logarithms is probably not what it was 50 years ago. So the question I have been struggling with is:
how much should we regret that? Let's take another example that's closer in time: the calculator, and the graphing calculator. Thirty years ago, the mathematics community erupted in a fight, so to speak, because there were people who were opposed to the introduction of graphing calculators, because they thought
Marc Watkins (10:05)
Mm-hmm.
Eric Mazur (10:07)
an important skill is to know how to plot data points on a graph and how to graph functions. Whereas others said, well, we have to adopt this technology because people are ultimately going to be working with graphing calculators, and maybe the skill of making graphs by hand is not that important. A fight erupted in the mathematics community about this. Now, 30 years later, I think everybody realizes that
graphing calculators are here to stay, and nobody will tell their students: you're not allowed to use a graphing calculator here, you should do this without a calculator. So I have sort of learned to see both sides of the story. Where do you stand on that? Could you imagine, for example, that 30 years from now we will have evolved a way of working with AI, and I'm not talking just about in education,
Marc Watkins (10:38)
Right?
Eric Mazur (11:05)
but in society as a whole, and that we look back on this the same way we look back on the graphing calculator?
Marc Watkins (11:16)
Well, I think it's very possible for us to use generative AI ethically and openly, in ways that do not de-skill us but actually support us. The challenge of these past two years, and again, 30 years from now it could be very different, is that we haven't really had the time to come to terms with this technology before it changes rapidly again and some upgrade or new use case gets shipped. And that makes it very difficult, not just for
education but for society as a whole, to really think about how to use this. So I do see both sides. If you go back to the example of AI and reading, it can really be helpful for students who are non-native speakers, people working on second, third, or fourth language acquisition; for them, the ability to level a text to their reading level could be very helpful. If a student is neurodiverse, having that ability could be game changing. It could be life altering.
Eric Mazur (12:14)
Okay.
Marc Watkins (12:15)
We just want to start thinking about, and advocating around, the fact that this technology exists. You can use it; we can't stop it. I can't even imagine how you would ever get into a situation where you could ban reading assistants, let alone AI writing assistants. The question is how this is changing the way we look at our world. It's interesting that you mentioned the graphing calculator.
AI right now is not just a technology; it's a specific type of technology. It's not just a generative technology, it's a cultural technology: it's changing how we think about and see the world. Graphing calculators did the same thing for math. Now with generative tools, you're looking at this not just for reading; you're looking at it for note taking, for writing, as we've talked about before, for coding, for image generation, and for the new multimodal models.
It can make avatars of you and me within a few minutes that look and sound like we do. So this technology is transforming the way we are going to interact and connect with each other. We really just want to start thinking about: okay, how can I use this effectively to help me? How is this going to be beneficial to me? So what we're really talking about here, I think, is navigating ways you can integrate this into your life. That's going to include times when you want to adopt a technology like AI, but it's also going to include knowing when you want to resist it. So it's a little bit complicated, to say the least. And I think that has turned off a lot of people, especially in education, because they like to use policy to decide how these tools get used. When you have a pro-AI policy, it doesn't get into any of the issues this could pose at scale;
when you have a ban or a resistant AI policy, you're not really engaging with how this can actually help students learn. So I think what we really need, more than anything else, is the time to have these conversations, to be fully aware of what AI can do for us and what it could take from us if it's not adopted carefully and thoughtfully.
Eric Mazur (14:27)
I think that's precisely the problem, right? We don't know yet how AI will be used, not just in educational settings but in work settings. And ultimately that is going to decide what about AI needs to be taught in academia, and how. At this point, we're just guessing. And I guess the same was probably true for the calculator: while the discussions went on, ban or include, to name the extremes, people didn't know yet how graphing calculators were going to be used, or later the internet, or Google, or whatever.
Marc Watkins (15:05)
Right?
Eric Mazur (15:11)
So I tell my colleagues: rather than being so focused on how to use AI in your course or in your classroom, let's first really think about how AI is going to be used by us and by people in their daily jobs. And we're far from knowing that right now. I want to go back for a moment to this idea of friction and desirable difficulties, which you mentioned in a recent article you wrote for the Chronicle of Higher Education. I think the title was Make AI Part of the Assignment. In that article, you talk about how misuse of artificial intelligence can remove some of the desirable difficulties of education. Maybe you should elaborate, because many people listening to us may not have read it. But you also mention social learning, which you alluded to earlier, and I'd like to talk about that. As you know, I've been pushing for social learning for over 30 years, first with peer instruction and then later with the development of social annotation tools. So that's very much music to my ears. And lately, very much in line with some of your thinking, I've been giving
Marc Watkins (16:08)
Sure.
Eric Mazur (16:38)
more and more thought to the importance of human relationships. Let's save human relationships for the next question. But if you could say a little bit about how AI misuse can remove desirable difficulties, and about the importance of social learning, I'd love to hear your views.
Marc Watkins (16:46)
Yeah. So we really started working with AI tools pretty early on here at the University of Mississippi, maybe six months before ChatGPT was released, and we started deploying them in writing courses. We had a research tool; we had a counterargument tool. And one thing we asked our students to do, if they used these tools, was to reflect on them.
What became very clear is that the actual usage of the tool kind of obscured what they learned or didn't learn, and the only way we could really suss out what learning was going on was through the reflection itself. So if we do want our students to use these tools, which I think is perfectly fine in different use cases, we want them to be able to acknowledge and articulate how the tool was helpful for them, at least for this time period. We're only two years out from ChatGPT's release.
We still don't know which aspects of this technology are helpful or harmful for learning, and we want students to tell us how they used it and how it actually functioned. That's never going to happen if we just sit behind a wall of saying that all generative AI, all uses of ChatGPT, are academically dishonest. That's not going to work. So the idea is to give students a clear mechanism to report how they used the tool and how it actually impacted their learning. I do this with a Google or Microsoft form with a list of reflective questions, where they get to pick and show me how they learned with it. Adding that little bit of friction to the process puts the accountability on them: you used this tool, great; okay, what did you actually learn with it? Tell me about it.
They have to sit down and think: did this do more for me than simply save me time on the assignment? I'm not taking off points for it, either; I'm more interested in just hearing that human feedback about how they've used it. And the results have really helped disclose how students are using these tools. It's not to write the entire essay. It really is to use AI as a sort of working copilot: bouncing ideas off it back and forth, working from skeleton outlines, getting some feedback, not liking it and disregarding it, and using their own critical thinking skills. So it's complicated, it's complex, and that's exactly what I hoped to find, because we want students to be thoughtful adopters of new technology, and having a mechanism where they can do that is very important.
Eric Mazur (19:35)
Moving on to the social part, maybe I should tell you a little anecdote about something that surprised me in the development of Perusall. The annotation engine that we made could be applied equally to text and video. But initially I resisted video, because I saw video as a way of transferring information in which the viewer, the receiver, is extremely passive. With reading, you have to engage quite a few parts of your brain: to pick the words out of the image, to get meaning out of words and sentences, and so on. Very different from just listening to somebody talk. Also, when you listen to a lecture or watch a video, you don't control the delivery rate. In a sense, your brain is held captive, whereas when you read, you can pause and think and reread. You're in complete control of the flow of information.
With video, yes, you are nominally in control, you could pause it. But if you look at students' viewing habits for video on Perusall or other platforms, they're atrocious: they put the playback speed at 1.5 or 2.0. So I resisted having video on Perusall precisely for that reason. And then the pandemic broke out, and I realized that a lot of people were going to simply record their lectures and
Marc Watkins (20:55)
Right.
Eric Mazur (21:11)
have students watch recorded lectures at a distance. I thought, you know what, we should adopt video, so there would at least be a social layer on top of the video. And much to my surprise, when we looked at the time students spent watching a 20-minute video on Perusall, instead of it being 10 minutes, from not watching the end, or just watching the beginning, or setting the playback speed higher, it was longer than the actual duration of the video, because they engaged in social activities. That was an eye-opener for me; I had not expected that. I think that's very much in line with your thinking about the importance of social learning. So where do you see the interplay between AI and social learning?
Marc Watkins (22:05)
I think that's a great question. For people who really want to resist a lot of AI's impact on learning, social learning is a great resource to go to, because it invites students into the assignment but requires them to slow down, to interact with their peers, and to start thinking about not just the content, whether that's a text or a video or some other artifact they're viewing, but also to navigate the
feedback left by their fellow students and engage with them. So from that angle, it's really interesting. I do think there's probably a place for social learning alongside AI in some ways, too. We are seeing the start of new feedback bots that can react to you in real time. That is fascinating in some ways.
I'm not sure how I feel about that as a teacher, though, because again, we're getting into that relationship dynamic and what it means. One thing I've told my students is that I don't want to use AI in a way that comes between me and you, between our relationship as teacher and student. Some examples I give them: I'm not going to use AI for feedback, though I know other teachers are doing that. I'm not going to use it for letters of recommendation. And I'm not going to use it to answer emails. Sarah Campbell, who teaches here on campus as well, talks about this with her students too. So I think there are some frameworks we want to think through: how this technology, and how we're using it, interacts with the relationship we have with each other.
So I'm not opposed to the idea of using AI within social learning. I just think we have to be very careful and frame it so that it's always very clear what the AI is, what is actually giving that feedback or that analysis, and make that as transparent as possible.
Eric Mazur (24:09)
Of course, there's a whole gray area between the two extremes, right? I write an email and send it to you; or I feed a bullet list to ChatGPT,
ask it to turn that into an email, and send it to you. In between the two are things like spell checkers, grammar checkers, predictive typing algorithms that suggest the next word before you can even think of it. Where do we draw the line between human and non-human?
Marc Watkins (24:29)
Right, right.
Well, I'm not going to be upset if you send me an email, Eric, and use a spell checker. No one's going to be upset with that. If I write you an email about how my grandmother passed away or how my dog is sick, and I need some emotional support from you as a teacher or a student, and you use AI to respond, then that could be something that would be upsetting. So it really does invite the human users on both ends to be reflective: how is this actually going to impact that relationship?
I don't think it's as simple, as black and white, as a policy. We're really going to have to think about it, use our best judgment, and really encourage students to disclose when they're using this and just talk to us about it. Because again, these are still very much the early days of this technology.
Eric Mazur (25:33)
I'm thinking about the scenario you put in front of me: your grandmother died, I have to write you an email, I'm at a loss for words, so I ask ChatGPT to write me a draft, but then I make sure it is something I would actually say, that it's in my words, and I send it to you.
But I have to put myself at the receiving end to see whether I would be offended by that. I don't know. I could probably not even tell. Anyway, it's an interesting time we live in. So let's go back again to the human part, because that's where we really are; you focus a great deal in your writing on the importance and the power of human relationships.
Marc Watkins (26:12)
It is indeed.
Eric Mazur (26:28)
And if I listen to my colleagues at Harvard, so many are engaged in the task of developing AI tutors, and I've yet to see a tutor that I find even remotely useful. But in your view, how does generative AI impact the relationship between students and teachers? Do we now have a triangle of teacher, student, and AI? Or is AI a medium that helps students and teachers communicate with each other?
Marc Watkins (27:07)
Well, I think we're just now starting that discussion. Ideally, if we are going to use an AI tutor, I would hope we'd have a triangle with the teacher, the student, and the bot in the mix. What concerns me is that there have been a few deployments where the teacher has either been removed from the process or just sits there to monitor the chat from the chatbot.
And that does concern me, because when you start talking about relationships, that's really what teaching is about: trying to develop them. I do think other people would say that education is more about getting your degree, and that is certainly a valid standpoint. But if you're in a situation where you're getting the majority of your feedback from a bot, I think some students are going to raise questions: why are we paying to go here? What are we paying for? Mike Sharples, whom I followed well before ChatGPT was released, works at the Open University in the UK. He was almost ready to retire right before ChatGPT launched, and then he came back into the mix. He says the existential question we're going to ask ourselves, in education at least, is this: students are going to be writing with these generative tools, like ChatGPT, and faculty are eventually going to be responding with a tutor bot or some sort of feedback tool. How can we ensure that real human learning and connection take place in between? That's the dynamic we're going to see. And that's why I think going back to some baseline principles, being open and disclosing when you're using this technology, talking about it and asking about it, is going to be really important going forward. And of course, we're just discussing text-based tools; a lot of new multimodal ones are coming too.
They can mimic your voice. They can be anything you want them to be. They can also look at your screen and act on it. Microsoft's new Copilot will literally follow your mouse pointer around the screen, with a generative voice talking to you about what you're clicking on. It's almost like having someone looking over your shoulder, making judgments, good or bad, over your head. It's influencing how we are going to act in digital spaces.
Eric Mazur (29:26)
So we've talked about ethical use of AI. One point that I'd like to discuss with you is assessment. When the pandemic broke out and we all had to go remote, I thought: this is going to be a turning point in education. And it was, of course, a blow to education, but not a turning point, because the pandemic ended, and I would say 95 to 99 percent of people immediately went back to what they did before. The whole pandemic had been a difficulty to overcome rather than an opportunity to innovate, which it really should have been. And now we have generative AI. By the way, one thing that became very clear during the pandemic, it was already clear to me before, but it became much clearer to many people, is that our current assessment practices are actually broken. They don't work. And they really didn't work during the pandemic. Unfortunately,
the pandemic was over after two years or so. ChatGPT, or AI more generally, is not only not going to go away, it's only going to get better. And it is really calling our assessment practices into question. Where do you see assessment heading? How do you assess your students in the age of AI?
Marc Watkins (31:11)
Well, that's a great question, and we've been working it out this semester. Reflections are one tool, but as I found out this semester, and other people have too, if students are determined not to be present in the classroom, they can use AI to write their reflections. That has happened. It has made me think about what we're assessing and why we're assessing it. I do think that in education that's going to be a major challenge.
I also think, just from talking with faculty, that a lot of them don't have the bandwidth to keep up with all these different technological features. A lot of them don't even have the bandwidth to keep up with resisting AI, if that's what they want, because they don't know what the actual capabilities are. So I think it is something we want to think about institutionally, across the board: how much we can fund professional development for teachers just to keep up with this, to ask how you can revise your assessments and be innovative in this new design space. And we've seen a lot of faculty try things that are unique and interesting. Some of my faculty are working on ways to incorporate speech more into the process of communication. Others are looking at, if students are going to use generative AI, having them analyze that output, improve it, and then reflect on how they improved it. So there are lots of things people are trying. More than anything else in higher education, though, it comes down to whether the capacity is there for us to do this, because our material conditions really do impact our ability to engage. We have so many contingent faculty, so many faculty in adjunct status, that we're setting ourselves up to a point where they don't really have the time, space, or resources to be innovative. And that's been the main argument I've been having with a lot of people who want to help and want to do things. They say, well, why aren't faculty doing this or that? Well, you have to go down to the basics of what's going on in that faculty member's life. If they're teaching four or five sections when they should only be scheduled for three, if their course cap should be 75 or 80 students and they're teaching 105 or 125, those are all factors indicating that we need to have some much deeper conversations about what's going on before we can move on to the big question of AI and assessment.
Eric Mazur (33:47)
We're going to run out of time, Marc. We'll need a second podcast here.
Marc Watkins (33:52)
We'd need eons to talk about all this.
Eric Mazur (33:53)
Yes, yes, but I still have a couple of really burning questions I want to ask you, because we've mostly talked about reading, but you teach writing. So the question that comes to mind is: what about writing? And more broadly, what's the point of writing if people read less and less? It's sort of a vicious cycle. I just finished writing a proposal, and I did not use ChatGPT.
Marc Watkins (34:02)
Yes.
Eric Mazur (34:22)
In fact, I played with ChatGPT, and it was so bad that I trashed it and started from scratch. I didn't even use what it generated. But I could imagine that with improvements in AI, you could say: okay, here's a bullet list of what I want to do; produce me a proposal that is 15 pages long and adheres to the following guidelines dictated by the call for proposals.
And here's the work I have done; I'm feeding you some papers in the field; produce me that proposal. I send in that proposal, and it goes to reviewers. They don't have time to read it, so they go to ChatGPT to turn the proposal back into a bullet list that hopefully comes close to the bullet list I fed in. I could almost imagine a future like that, where AI is used as a sort of operating system between humans, as a way of communication.
What are your thoughts on writing, given that you teach writing? And what role does AI play in your students' writing?
Marc Watkins (35:23)
Well, yeah. I think what you're describing is what a lot of people assume will be the one pathway forward with this technology. What I'll say, though, is that your experience mirrors my own when I've tried to use this for a professional task: first drafts matter so much, because it takes time for us to get our ideas in order and put them in a form people can understand.
And I have no problem with using AI after we're done with the first draft, because it's actually really helpful for ordering and organizing things further. But AI is never going to know what's inside your brain if you can't effectively communicate what you want. And you're probably like me: I don't necessarily know exactly what I want when I start writing, and that usually changes in the process of writing itself. So I think we're going to see an emphasis on, and a power in, at
least the first draft of the writing process: sitting down, writing a little bit, taking some feedback. After that first draft is done, though, I think generative AI can be a really powerful tool, beyond just editing. It can help you synthesize your ideas and organize them; it can help you rework them for different audiences in different ways; you can use it for different types of feedback, to think about different audience members. But you only get that if you have that first instance of your idea drafted on the page.
And I've been really successful talking with my students about this. One of the ways we do it is kind of funny, because they're really impressed with what ChatGPT can do. And I say: well, okay, you went on a date with your girlfriend or boyfriend last night; have ChatGPT write what that date was like. They're like, what do you mean? How would it know that? And that's exactly the point: it doesn't know anything about what's inside your head, about your experience. And without that, there's no point in using a chatbot for it.
So writing, in its most idealized form, is going to be personal, and hopefully more personal going forward. I really hate these standardized essays we've been making our students write. The five-paragraph model just isn't working; it hasn't for years, and there are lots of reasons why. Again, that would take eons more of podcast conversations. But I want to get back to more personal writing, personal communication, and why that matters so much now.
And there is a place for AI within that. It just only comes after you have your idea down on the page or recorded in some form.
Eric Mazur (37:57)
I should have asked you this right at the beginning, but after our conversation I have a pretty good feeling for what you will say. We're clearly at the beginning of a time of great change in human interaction and human communication. And as you've pointed out on your blog, there are plenty of pretty dark sides to this technology. You recently had a post about the new, completely uncensored Perplexity model; I had no idea. If you're listening and wondering, just go to Marc's Substack, Rhetorica, and you'll find it. But overall, are you optimistic and excited about the future, or are you very concerned?
Marc Watkins (38:57)
That's a great question. I'm more concerned than optimistic at this point. And the reason is that we're seeing a frenetic pace of development, and we're all part of a giant public experiment, using these generative tools without much understanding of whether they are safe or not. And when developers are asked those questions, whether it's Sam Altman from OpenAI or others from Google or Microsoft,
they come back and say they need to run this as a public experiment to learn what's safe and what's not, that they can't do it in closed trials. I think that's a bit of a cop-out. I think they know that releasing this technology, especially into education without frameworks or guidelines, was a huge mistake. Only now, two years after the public release of ChatGPT, do we finally have guidelines for writing, and many teachers of writing are very critical of those guidelines, not only because of the timing but because the developers didn't actually talk with teachers of writing about how students are using ChatGPT or what they should be using it for. There are ways you can modify the output: Ethan Mollick, who teaches at the Wharton Business School, is open-sourcing tons of prompts that force ChatGPT to slow down and act like a tutor, going back and forth with you rather than just giving you the answer right away, because that's not actually helpful. And something he has said in many of his public talks on AI is that a lot of the research indicates that students who are heavy users of ChatGPT and other platforms raise their hands less in class, meaning they're not willing to admit openly when they don't know something. That's a big concern to me, because they're just going to turn to ChatGPT to get the answer.
We want students to raise their hands. We want them to be open about what they don't know. We want them to communicate in class and be open in those relationships.
Eric Mazur (40:58)
To end on a more positive note, though: what excites you?
Marc Watkins (41:02)
Well, what excites me about the technology is that I do think there are use cases for healthcare, use cases for actually solving some of the bigger problems we have in our lives. There are other instances of this technology, too. Some people are talking about putting it behind traffic cameras so that cars don't idle at red lights. Think about what that looks like around the world if most Western countries adopt it: it could save lots and lots of gasoline and CO2 emissions, and actually help with global warming. That's just one example of it being used as a useful technology. I also think the genetic breakthroughs using AlphaFold, which is not a generative technology but a different type of AI, can be really helpful for us going forward.
Eric Mazur (41:47)
Marc, thank you so much for an amazing discussion. I would like to conclude by thanking our audience for listening and inviting everybody to return for our next episode. But above all, on behalf of our audience and myself, thank you, Marc. It was really a fantastic discussion.
Marc Watkins (42:06)
Thank you so much, Eric. Those were really thoughtful questions, and I think we'll be spending a lot more time on this going forward.
Eric Mazur (42:12)
Absolutely. You can find our Social Learning Amplified podcast and more at perusall.com/sociallearningamplified, all one word. Subscribe to find out about other episodes. I hope to welcome you back on a future episode.