029 – AI Transforming Education

The IDEMS Podcast

Description

How can students be assessed effectively if large language models can be used to generate good answers to examination questions? Lily and David look at the effects of AI on education, in both the present and the future. How must pedagogy change in order to embrace the latest technological advances, and do we need to change which skills we value as a society?

[00:00:00] Lily: Hello and welcome to the IDEMS Responsible AI podcast, our special series of the IDEMS podcast. I’m Lily Clements, an Impact Activation Fellow, and I’m here with David Stern, a founding director of IDEMS. Hi David.

[00:00:19] David: Hi Lily. Looking forward to another interesting discussion.

[00:00:24] Lily: Yes, I thought today we’ll touch slightly on your and Santiago’s end and talk about how AI can transform education.

[00:00:32] David: Yeah, we keep sort of skirting around this in other discussions, so it’s nice to actually dig into this one.

[00:00:38] Lily: Yeah. I think you’ve mentioned to me outside of the podcast before, a story which I love, the story of the calculator, when that came in and this kind of panic for the mathematical educators on, how can we teach if our students can now cheat with a calculator?

[00:00:55] David: Yes. This is actually where, if you think about AI entering into education, mathematicians as a whole have been through the mill on this a few times before. There was an original panic around the wind-up calculators.

[00:01:10] Lily: Oh wow, okay.

[00:01:11] David: And then of course you’ve got handheld calculators, and that was sort of, oh, that’s really widely available now and it’s going to stop anyone ever doing mental arithmetic. Which, unfortunately, it did in certain ways, and there are all sorts of reasons why that actually had negative educational outcomes. That’s led to mathematical educators being traumatized by some of the attempts at innovating maths education in the past. Whole different story, I don’t want to go down there.

[00:01:39] Lily: Sure, sure.

[00:01:39] David: But anyway mathematics has gone through this before, and they’ve gone through it again when computers came in and people had access to computers in different ways, which are like really, really powerful calculators that can not only do number calculations, but you can also use computer algebra systems, to actually differentiate, to integrate, to do all the things which you’re supposed to learn how to do by hand.

What are we going to teach people now? And so in some sense for mathematicians, this is just a continuation of the nightmare they’ve been going through. Everything they thought they wanted students to learn, the students don’t really need to learn anymore.

However, for essay subjects, man, it’s hitting them hard. And I really do feel sorry for them. They’ve never had this before: a technology which is coming and which is just totally sweeping your curriculum away. Your whole curriculum before was about being able to write essays, that was the big thing that you did. You write essays, you communicate what you’ve learned by writing it, as opposed to communicating what AI knows by doing a few things and then getting an essay out. There’s an essay. You want an essay? I’ll give you an essay. In the past, if you didn’t write it yourself, it was obviously cheating because you’d probably paid someone else to write it, or asked somebody else to write it. And that’s obviously cheating, and that should be recognised as cheating, and that’s bad.

Asking AI to help you write an essay? That’s just smart. You know, that’s not bad. That’s exactly what, in the real world, you should be doing. You should be learning how to use AI to write your document that you need. You do this for reports.

[00:03:27] Lily: Yep. No, absolutely, yep.

[00:03:29] David: And then you read through and you critique it and you improve it and you make it better and you make it your own so you’re willing to take responsibility for it. That’s exactly what we should be doing with AI. So, getting AI to write it for you, and then you edit it, and you take ownership, and you take responsibility, and you own it. Perfect.

But that’s a whole different set of skills that we’re not necessarily teaching, that are totally new. They’re higher level skills. Not everyone likes Bloom’s Taxonomy, but in this context, thinking of critiquing as a higher-level skill than writing is a really useful one.

To think about that as something which is more intellectual, I think is really great. And AI can help us to engage students in those more intellectual skills. Fantastic. Using AI well in education should add value.

[00:04:26] Lily: So you’re saying if we use AI well in education, then we can improve on those higher order skills.

[00:04:34] David: We should be able to. No, let me rephrase, let me phrase this negatively.

[00:04:38] Lily: Okay.

[00:04:39] David: If in five years time, the quality of our education systems has gone down because of AI, that’s our incompetence. We cannot blame AI for that. We have to take responsibility. There is no way an additional new technology like this, used well, should not add value to the education system.

So our students coming out in five years time from now should be better than the students that came out five years ago in terms of their intellectual development. I believe that very strongly, and if we don’t have better intellectual development coming because of how we’re using these technologies in education, we have failed.

It’s that simple. It’s not AI has messed things up. No! AI’s a tool, just like the calculator was. We may fail. But we shouldn’t be aiming to fail. We shouldn’t be trying to stay where we were in the past. We should be embracing that this gives us opportunities to do better.

[00:05:44] Lily: And this is why I loved it when you mentioned the calculator to me before, and why I wanted to start on it. Because for me, as someone that grew up with a calculator always there, it’s like, yeah, of course you’re going to use the calculator in your education system. So I kind of hope that in 30 years’ time it’s, yeah, of course we’re going to… This is just seamlessly in. Well, maybe not seamlessly, I don’t know.

[00:06:06] David: In 30 years’ time, I hope it will be, for people who have gone through an education system. I think they will have had a seamless experience with it. There will be bumps along the road. In 10 years’ time it won’t be seamless, is my unfortunate prediction. I hope I’m wrong. I hope it’ll be seamless in 10 years’ time. But I don’t know. I would argue, well, just in terms of our own work, we work with lecturers at different universities to help with this. And it’s really interesting that when we come in to discuss Responsible AI with lecturers, we’re not really getting people to discuss something they’re not already discussing. We’re just helping to structure their conversation. We’re not often bringing that much new information; they’re already pretty well informed.

A lot of people care about this, a lot of people are thinking about this. What we find we are able to do is to help focus on that positive element. Because the natural instinct is to be afraid of change. You need to get into that mindset of accepting, you know, yes, this change is happening and you may as well embrace it. And once you’ve embraced it, it’s sort of liberating.

But it’s only possible to be liberated like that if you actually have the confidence that, yes, you’re right to embrace it. And that’s where, in some sense, we’ve sometimes been able to help with that. But often even, the lecturers we’re training, they’re already convinced.

The second area where we can often add value is actually helping people to discuss. So we’ve gone in and actually had schools where we’ve discussed with the school. They’re trying to get a strategy. We don’t give them a strategy. We can’t give them a strategy. That’s for them to determine. What we can do is help structure that discussion.

We can help that discussion to be constructive because often when you’re stuck in the middle of the discussion, you’re blinded by your perspective. Whereas somebody outside the discussion who has extra information, who’s seen this from other contexts and so on, can sometimes help to find consensus where actually there seems to be divergence.

And that’s what we found actually is often really useful here. That really we’re not telling people things they don’t really know, we’re just helping them to recognise that actually people do agree on quite a lot of this. There’s elements of disagreement, but once you get rid of the fact that there are disagreements, you accept there are certain things people disagree on, you actually find they’re relatively minor compared to the things which people agree on. That’s where we’ve often been able to help. And that’s really interesting in education, I feel, at the moment. I feel, over the next year or two, there will be a growing consensus, if it can be articulated, on how embracing AI is the right choice. Not necessarily in consensus on how to do it.

[00:09:16] Lily: Well, so that was what I was going to ask. Because you said that we should use AI to add value in education, but how can we do that? So if, like you said at the start, in your essay-based subjects you asked them to write an essay, is it okay if they now write an essay using AI? So then I guess it comes down to: what skill are we trying to get out of the student?

[00:09:37] David: Absolutely. It comes back to, well, why are you wanting them to write an essay? What do you want them to write an essay on and why? Okay, you want them to write an essay to demonstrate they have understood a particular topic.

Well, now ChatGPT can write an essay for them, so getting an essay from them about a topic doesn’t demonstrate they’ve understood it. So, if your goal is to know if they’ve understood a particular topic, maybe an essay isn’t the right tool to ensure that. Maybe there are other tools which are better, which could be used to help ensure that.

Maybe one of the things that could come in much more, and I love that this came up in some of our discussions, is actually oral examinations. Actually being able to talk well about a subject: even if you get trained in how to do that, a conversation is something we want people to be able to have, and it demonstrates understanding better. Ironically, of course, this is how universities used to examine.

[00:10:51] Lily: Well, a few people that I know who are at university now are given oral examinations, from this year, and they weren’t before. And I know when we spoke to one of the lecturers at AIMS Ghana, she said that they’d had to introduce oral examinations last year, towards the end of it, to help detect…

[00:11:09] David: So let’s just clarify, AIMS is the African Institute of Mathematical Sciences. It’s a fantastic mathematical initiative. It’s one of my favorite African mathematical initiatives. I’m slightly biased, I did work with AIMS in the past. I continue to support them as much as I can. I love them as an initiative.

And they are doing in particular, these master’s programmes with scholarships from students across the continent where they’re given these really engaging experiences. And they, as you said, in Ghana recently introduced oral exams as part of what they were doing to make sure that the students were engaging in the right ways with the topics that they were engaging with.

And it seems to me, this is not a step backwards. This is a step to the past because, as I say, universities used to all be about oral exams. And as they went to scale, oral exams were in certain ways a lot harder to make fair than written exams. And so there was this big push to written exams. Which, to be honest, as someone who’s dyslexic, I’d be quite happy with people going back to oral exams; I’m much happier talking than I am writing. But a lot of people aren’t.

[00:12:22] Lily: You’ve said before to me about when you were in Germany doing your diploma, that you performed a bit better in the ones where you were examined in English than in German.

[00:12:32] David: There is that as well. So there was a language issue. I accidentally did a German diploma. Very busy year, but quite fun. And that had these oral exams. It was an incredible experience for me. I mean, I really appreciated that experience. Of course, it was very low stakes because I was going back to complete my degree and I accidentally turned an Erasmus year into getting another degree. So it didn’t matter to me. I didn’t need that other degree. I just happened to do it. Anyway, that’s not, that’s a long story.

But I think the thing which I gained from this, and actually one of the reasons I went through it, is I’d never had that experience of oral exams before, and that was something which was fascinating to me, just with somebody who was interested in education, even at that age, to recognise that there were these different ways to examine and to think about how you award degrees, and I really appreciated what I learned from that.

I mean, part of what I learned was that it wasn’t fair. Making oral exams equitable is almost impossible. I did really, really well on some exams and really badly on others because of how the professor asked the first question.

[00:13:50] Lily: Ah.

[00:13:51] David: And they were both topics which I knew really well. And in one case, the professor said, to put me at my ease, you know, start by telling me something about the course. And so I chose the hardest topic I knew really well and told him about that, told him about that very well. And from there on, I was already on a good mark and I just got better and better and better and got a very good mark.

And I went into another exam. Okay, there was a language issue: one was in English, one was in German. And the professor started to put me at ease by asking me the easiest definition for a simple part of the course. And I thought to myself, of course I know that. How should I phrase this? How should I say this? And so after stumbling my way through the easiest definition in the course, I was already on for a really bad mark, and it only got slightly better from there. I’d understood the heart of the course, and of course I knew the basic definitions, but I wasn’t prepared for that initial question in that way.

And so just that initial question set the tone for the whole oral exam. I didn’t get a very good mark on that one, but that came from that first starting point. Doing a good oral examination is really, really hard because different people are put at ease in different ways.

For another student, they would have had the opposite experience to me. They’d have started off with that open question, just tell me something, and it wouldn’t have helped them. Whereas actually having an early definition where they can build their confidence, that would have helped. And this is, I think, more extreme than with a written exam.

So there is that element of that personal experience of how you go through it. So is it fair compared to written exams? Really difficult question. But what I do believe is that as a way to engage with content, with subject material, that conversation can be much more constructive than a written exam.

And no examination system is perfect. Writing an essay is not perfect. But if you have a discussion about the topic of the essay, you can probably get interesting things in different ways. Maybe you can’t get to the same depth as you can get in a good essay. And that structure and that time somebody spends to really carefully articulate what they’re trying to say is a really valuable skill that was valued in essays, but of course it’s totally superseded because that structuring is exactly what ChatGPT or other large language models can do well.

Now, you don’t want to just take something written there and throw it in, as you know. This doesn’t give great results. You will always do better if you then edit it and you improve it from there, but having that as a first draft rather than a blank sheet of paper, great.

[00:16:54] Lily: Very interesting. And I guess with oral examinations that that could be one really valuable avenue that we start to go into now that, again, now that AI is out there. But, does this mean that we will gain a different skill set? I suppose we…

[00:17:12] David: I hope so.

[00:17:14] Lily: But then, we still need to be able to write.

[00:17:18] David: Well, I’m dyslexic, you’re talking to the wrong person. No, we still need people who are good at writing. I absolutely agree with that. There is a question about whether that is a universal skill or a niche skill. We definitely want people who are really good at writing.

[00:17:38] Lily: Just like we want people who are good at mathematics.

[00:17:41] David: Exactly. Or, rather than mathematics: arithmetic, mental arithmetic. So if we think about using the calculator, we still have people who get amazingly good at mental arithmetic, and that has value in different ways and is used in different ways.

So even though the calculator exists, for some people the skill of mental arithmetic is still of value and still useful. Just like now, maybe we don’t want everybody to be really good at writing essays. Maybe that’s not the skill which is needed universally.

Maybe that becomes a niche skill. But what I would argue we do want everyone to be able to do is edit, critique, improve. So, we’ve discussed the oral examination because I love that and it’s fun, sorry, we got distracted by that. But more concretely, what about giving people the output of ChatGPT and then asking them to improve it? How would you improve it?

[00:18:46] Lily: Feed back in.

[00:18:46] David: Yes, feed it back in. Ask ChatGPT to improve it. But you still need to make that judgment call: has it improved it or not? That judgment call is the same one we discussed when we were discussing work. And we want people to be able to say that it is better now, or it’s not.

[00:19:04] Lily: And what I did one time was I took a simple sentence, gave it to ChatGPT and said, improve this. It gave me the output. I then put that into ChatGPT, said improve this, and I repeated this 10 times. By the end of it, my simple sentence of hi, my name is Lily and I studied statistics, or whatever, had become something like, greetings, this is excellent to meet you, I’m a master of… You know. So, well, that’s always a very fun game to play, and I recommend it if you want to see that kind of evolution. But absolutely, yeah, I guess in that case, at least at the moment, you shouldn’t feed it back into ChatGPT to improve.
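The experiment Lily describes, repeatedly feeding a text back through an "improve this" step and then judging which draft to keep, can be sketched as a short loop. This is purely illustrative: `iterated_improve` and `ornate` are made-up names, and `ornate` is a stand-in for a real model call, not the ChatGPT API.

```python
def iterated_improve(text, improve, rounds=3):
    """Repeatedly feed a text through an 'improve' step, keeping every
    intermediate draft so a human can judge which version is best."""
    drafts = [text]
    for _ in range(rounds):
        drafts.append(improve(drafts[-1]))
    return drafts

# A made-up stand-in for a model call: it just makes the text more
# ornate each round, mimicking the drift Lily describes.
def ornate(text):
    return text.replace("Hi", "Greetings").replace("my name is", "it is I,") + " Indeed."

drafts = iterated_improve("Hi, my name is Lily and I study statistics.", ornate)
# drafts[0] is the original; the human judgment call is which draft to submit.
```

Keeping every intermediate draft, rather than only the last one, is the point: as David argues next, the skill being exercised is the judgment call over which version is actually better.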

[00:19:41] David: Well, I disagree. Which one of those would you want to submit?

[00:19:45] Lily: Ah, okay. Probably the first one that came out.

[00:19:48] David: This is a really interesting point. You know, the first one that came out, in your opinion, might be better. That’s a judgment call. This is what we want to teach people how to do. They could keep feeding it in forever, and now they’ve got an infinite sequence of potential things they could submit. Which one do they choose to submit?

Now, of course, I know you well enough to know that most of the time when you get something and you stick it into ChatGPT, what comes out, you want to edit yourself. You are not happy, admittedly, you are a little bit of a perfectionist, I’ve seen you sort of complaining that something is a pixel out. So, you know, I expect that of you. This is how you work. But I believe that’s the skill that we want to be there much more widely. We want everybody in the population to be able to take a text, to be able to take a document and improve it, make it better. Now there’s a question of make it better for who and for what, but that’s exactly where I want that human judgment.

[00:20:50] Lily: Well, then that can be the question as well. The question is make it better for this audience.

[00:20:56] David: Absolutely, or maybe…

[00:20:57] Lily: That’s a skill I don’t have.

[00:20:58] David: I would argue, what about instead of actually taking this through, so to say, this is the output of ChatGPT for this topic, can you now make it better in three different ways for these three different audiences?

[00:21:12] Lily: Sure, yeah, yeah, and then you’re testing… I really like that.

[00:21:17] David: And the point is, you can still ask ChatGPT to do that, but you have to make the judgment calls. Is it better for that audience? What does it mean to be better for that audience? Why is it better for that audience? Yeah, you can ask ChatGPT all these questions, but I come back to the fact that, okay, you get an answer, it might be a good answer, but it might also be a bad answer. Can you make the judgment call? Yes, this is a good answer. Now, if I come back to the thing which people are really worried about in education, it’s what if somebody just doesn’t put the intellectual effort in? They just use ChatGPT, they take the output, they cheat.

Well, I would argue, if you ask a question in such a way that a student can get a good answer by using ChatGPT, that’s not their fault, that’s your fault. That’s not a good pedagogical question. So, actually getting lecturers or teachers to be able to understand how to ask good questions. I don’t know how to do this yet. I recognise the tools are so new, you know, I’ve got ideas, but I don’t know that they’d work. That’s what we need to learn as a society. We need to learn how to ask these questions in ways where it requires that intellectual effort and we can then actually see that intellectual effort developing and I am convinced we can give a better education. That’s what I hope.

[00:22:50] Lily: Yes. Yeah. It’s been a really good discussion and we’ve not even touched on half of the things I want to do. So I’m sure that we’ll discuss this a bit more again. But do you have any kind of final remarks?

[00:23:01] David: It just comes back to this fact that we cannot think of these advances in technology, be it AI or any other technological advance, as being something which education should be afraid of. Education has to embrace the technological advances as quickly as possible towards deeper, better learning and better outcomes. Whatever that means, even if that means your measure of an outcome changes. In many cases, that’s changing the examination system. We’ve already discussed that a bit.

We could discuss it much more. I look forward to other discussions.

[00:23:45] Lily: Sure, we can. Great. Thank you very much, David. It’s been an insightful discussion and really, really useful.

[00:23:51] David: It’s been great fun. Thank you.