
Description
Lily Clements and David Stern continue their discussion on the role of AI in international development, focusing on the evolution of AI in education, particularly in low-resource environments. From providing automated feedback on assessments to supporting personal tutors, they stress the importance of collaboration in building effective feedback systems and consider how AI can enhance rather than replace human interactions in education.
[00:00:00] Lily: Hello and welcome to the IDEMS podcast. I’m Lily Clements, a Data Scientist, and I’m here with David Stern, a founding director of IDEMS. Hi, David.
[00:00:14] David: Hi Lily, are we going to carry on with our discussion around AI and international development, maybe even going towards education?
[00:00:23] Lily: I would really enjoy it if we did. I know that there were some really interesting points that came out of our last podcast on AI in international development, particularly given the amount of work that we do in education in low resource environments.
[00:00:36] David: Yeah, and it’s really interesting that if you go back, wow, a long time, probably over 50 years now, and you look at when people were first building computer programs which were supposed to be semi-intelligent, they, of course, weren’t using what we would now call machine learning or AI. They were just a set of choices. Think about an automated system where you have a fixed set of choices which have been authored, and you can choose between them.
[00:01:11] Lily: Is this when it was more like a flowchart, where depending on which category you fall into, it allocates you to a group?
[00:01:18] David: Yeah, exactly. You go through, you answer questions. It’s like a choose your own adventure.
[00:01:22] Lily: Yes. Yes.
[00:01:24] David: And that sort of interaction, this is what people knew for a long time and it definitely didn’t feel intelligent compared to, of course, the generative AI we now have where it feels like a natural conversation. But even those sorts of technologies, I would argue, are not fully exploited within education.
[00:01:50] Lily: Wow. Okay.
[00:01:52] David: Even before you get to the classification issue around this, and after that you’ve got the generative AI. But even if you take that very early interaction, this idea of being able to have these deterministic trees of responses, I would argue that there’s still a lot of potential, and it’s really not used as well as it should be. We do this within automated feedback in STACK, for example; that is an example of such a decision tree.
Sure, you’re given a question, like a maths question, and if you give a particular answer, we can identify it and say, okay, we think you must have done this specific thing wrong to give that answer. So if you gave a five, we think you didn’t use the brackets, to use a simple example with BIDMAS, say.
Yeah, and this is the thing: these systems exist, there’s lots of them. STACK is one that I particularly like; it has a whole computer algebra system, so it’s not just fives you can handle, you can have whole equations in there, and it can read the equations and recognize them, it can differentiate them, it can integrate them. You can actually do maths on the answer to figure out where someone might have gone wrong.
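A minimal sketch of the kind of authored decision tree being described here, using a made-up BIDMAS question. STACK itself is built on a computer algebra system, so this Python toy only illustrates the principle: each anticipated wrong answer is mapped to targeted feedback.

```python
# Illustrative only: a tiny authored feedback tree for the hypothetical
# question "What is 2 * (1 + 3)?". An author anticipates common wrong
# answers and attaches targeted feedback to each, as in systems like STACK.

def feedback(answer: float) -> str:
    if answer == 8:
        return "Correct: 2 * (1 + 3) = 2 * 4 = 8."
    if answer == 5:
        # 2 * 1 + 3 = 5, so the learner likely ignored the brackets.
        return "It looks like you skipped the brackets: evaluate (1 + 3) first."
    if answer == 6:
        # 2 + 1 + 3 = 6, so the learner likely added everything together.
        return "Check the operations: the 2 multiplies the bracket, it isn't added."
    # Fallback for answers the author didn't anticipate.
    return "Not quite. Try working out the bracket first, then multiplying."

print(feedback(5))  # targeted feedback about the missing brackets
```

The collaboration David argues for would be over exactly these authored branches: which wrong answers to anticipate, and what feedback actually helps.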
[00:03:22] David: But there’s loads and loads of these systems. And even if you take those deterministic systems, I would argue that we’re not yet at a stage where we’re even collaborating on understanding how to give good feedback. Every commercial system that has this inbuilt has their own set of answers and their own set of experts who author them.
And there’s a finite number of things that can go wrong in certain ways once you have a deterministic system like this. And so actually even just knowing how to have those discussions is something we don’t really have yet. People don’t have visibility on it. This isn’t discussed much; there are little bits of research happening, but they’re kind of missing the bigger questions which are actually coming out of all these different software systems that all have some form of this.
[00:04:18] Lily: So do you still mean like these feedback…
[00:04:21] David: Yeah, anything with automated feedback in different ways where you’ve got an open question rather than a closed question. A closed question of course, there’s only a fixed amount of feedback you need and that is what it is. But an open question, these are things which can and should be collaborated on, I believe, to be able to advance societal knowledge on how to help learning.
Now, of course, that’s assuming your open question can be fully interpreted. So this is where a system like STACK can actually interpret more than many other systems, because it has the computer algebra system behind it. But whether you’re just recognizing a number or working with a more complicated or a simpler system, the principle is the same.
The fact is that as soon as you’ve got an open question which is recognizable by a computer, then those decision trees on what sort of feedback to give should be something where a lot of collaboration comes in to actually improve what you can do. Because if you do this well, and this was the original insight behind the MOOCs years and years ago…
[00:05:45] Lily: MOOCs, which is…
[00:05:46] David: So MOOCs being massive open online courses. The idea is that if you get loads and loads of people using assessment in these ways, then if 2 percent of people are getting it wrong and you’ve got 100,000 students, that’s 2,000 students. You can now figure out how to give them better feedback, how to help them on that learning journey, even though it’s only 2 percent of people. And the more people you have, the more refined you can get with that feedback.
[00:06:15] Lily: Yeah.
[00:06:15] David: And therefore the deeper you can get in terms of the personalised learning. And that’s without even machine learning and AI coming into play.
Now, of course, machine learning and AI add layers and layers on top of what you can now do. And the first layer they add is about identification. What if, instead of using your AI to give you feedback, you simply use the AI to interpret what you wrote in a way which is machine interpretable?
You write as you want to write, maybe even in words, and it then interprets it, and it checks with you, is this what you intended to say, and if so, it can then give you the automated feedback as we’ve just described.
What I’m trying to say is that if you don’t forget about the deterministic part, the bit that you can do in a fixed way and build on, then the potential of AI to improve the user experience is immense, without compromising the control you have over the feedback that’s given, so the responses stay accurate.
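The split described here, a small model for interpretation feeding a deterministic feedback tree, could be sketched roughly as follows. Everything in this sketch is hypothetical: a regex stands in for the small interpretation model, and `grade` stands in for any authored feedback function like the one discussed earlier.

```python
import re

def interpret(free_text: str):
    """Stand-in for a small interpretation model: pull a machine-readable
    number out of free text. A real system would use a compact language
    model and confirm with the learner ("Did you mean 8?")."""
    match = re.search(r"-?\d+(?:\.\d+)?", free_text)
    return float(match.group()) if match else None

def respond(free_text: str, grade) -> str:
    """Route the interpreted answer into a deterministic feedback tree."""
    value = interpret(free_text)
    if value is None:
        return "Sorry, I couldn't find an answer in that. Could you rephrase?"
    return grade(value)

# Hypothetical usage, with any authored feedback function passed as `grade`:
print(respond("I think the answer is 8",
              lambda v: "Correct!" if v == 8 else "Not quite."))
```

The point of the design is that only the interpretation step needs a model at all, and it is a much smaller problem than generating feedback from scratch, which is what makes an on-device, low-bandwidth version plausible.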
[00:07:48] Lily: Yeah, and this is what kind of pricks my ears up, as someone that likes data skills and education: so much of that is down to interpretation and context. So then you can have these questions which really blow it open. And then we can check, what is their interpretation, how is the context applied?
[00:08:10] David: Yes,
[00:08:10] Lily: An open text field for the user.
[00:08:13] David: Absolutely. So there’s some amazing things which could become possible based on this. And at the heart of this is this element that if you take this in a low resource environment, then if you want to use the machine learning or the AI to be able to do these complex, well, complex processes to generate answers on the fly, then you either need big bandwidth, big access to servers, it becomes expensive.
But if you take this from a, you know, just the identification of the interpretation of the language which is written, that is a much smaller problem. You can use a much smaller model. You could possibly even get that down to a sort of problem which can be put on a phone.
So you can now have something which is much less resource intensive to use on the fly and so on. Your deterministic responses, they can certainly be put on a phone in theory. You could possibly even have the algorithms for the text interpretation if it’s simple enough.
[00:09:28] David: And so this means that the nature of, how can I phrase this? The nature of the aspiration of what you’re trying to do is in some sense much less. It’s not that you’re not using the great power of AI, which has come out quite recently, but you’re tying it in to simpler technologies, which are more established. And the key is building the human structures around those things. If we can build human structures that make them work, that make them effective, then that’s really powerful.
That’s what we want to get towards. That we want to get to this sort of element of, in my mind, in education, yes, there’s real hope that AI will be able to give a personalised education experience for any learner everywhere, which will not replace human interaction, but enhance the learning and the opportunity to learn wherever you may be.
And that’s a really exciting possibility. But the thing which I’m most excited about is that if you actually want to implement this at scale in low resource environments, then we may, again as we discussed in the previous episode, we may come to solutions to do so, which involve humans in a different way, which are more interactive in certain ways where the AI is there, but it’s playing a different role. In my mind, the role it could play could be a much simpler role, and that would be enhancing, let’s say, student teacher interaction.
These are things where I think if you look at it on the constraints of low resource, we may actually find that that leads us to more human centred solutions or ways of using AI. And that could be very attractive, especially in education.
[00:11:49] Lily: Yeah. And I think just one other point which maybe you’ve touched on, but not dug into, but definitely one that we’ve discussed in previous podcasts is in places where we work, like in Kenya, and you also obviously taught in Kenya before. So most definitely you know this more than anyone, but where you have a thousand students. And being able to have that automated feedback to those students, I mean, I’d argue that this would be more valued. I don’t mean valued, but more…
[00:12:18] David: That specific example, I don’t know that better than anyone because I managed to avoid classes that big, partly because I wrote the degree programs that then led to there being a thousand students. So there were only a thousand students after I left. So it was my former students who had to deal with that, not me.
I never had more than about 150. It was small classes in comparison. And I actually not only did that, but I had a Fulbright scholar helping me with 150, I had it easy.
[00:12:45] Lily: Sure.
[00:12:46] David: However, I do understand that context quite well because I’ve supported a number of my former students through some of those challenges.
And you’re right, in that context this automated feedback has a disproportionate advantage over other contexts. But what’s interesting is that, in other ways, the availability of manpower in low resource environments is greater than it is elsewhere. What my former students are now doing is recruiting recent graduates as interns who are helping them write textbooks and things like this, and actually getting them engaged in this process in a way which is both good for their capacity building, but also means they’re able to do things at a scale they couldn’t as individuals.
And there is just a sort of abundance of talent which is coming through these systems, which are looking for opportunities. And so the human resource in a lot of these environments is one of the great strengths. So if you think about building solutions, which, yes, you can have this element of being able to deal with a large class.
But you can also have elements where you’re able to use that manpower in different ways or tap into that manpower in different ways, creating opportunities. I would love, for example, if instead of thinking of an AI tutor to replace a human tutor, what about thinking about an AI system to support human tutors?
Now, in Kenya, this could be huge because there’s a growing market in tutors. There’s a middle class population who are able and want to get their children the opportunity and they can hire somebody, a student from a local university or something to come in and help be a tutor. What if we had an AI assistant for tutors?
Now that’s something which in a high resource environment that’s maybe not as effective. But in a low resource environment, this could be huge. I’m not saying it wouldn’t be useful everywhere, but I’m saying that this could actually be helping create those jobs. It could be helping formalise the tutor jobs in ways which are really powerful, creating those opportunities, putting people in touch, as well as actually being able to ensure certain standards and so on.
It’s just very exciting to imagine the impact that you could have in this context, where you’re not replacing the human interaction, but enhancing it.
[00:15:50] Lily: Yeah.
[00:15:51] David: And creating those opportunities more than they were there before. I think that’s something where, somehow, the technologies that have really taken off in Kenya, most of them aren’t taking humans out of the loop, they’re creating opportunities. The classic example is M-Pesa.
[00:16:13] Lily: Yeah.
[00:16:14] David: It’s mobile money, which has created more employment than almost anything else I know. Everywhere in Kenya, you now see an M-Pesa shop and somebody is making a living partly out of just providing that service to society.
[00:16:32] Lily: And so then using AI can help enhance or create these jobs, and improve how you’re doing your job. But what I’m interested in is, as you said, it could be disproportionate, more of an advantage in some contexts than others.
And I know my context. Where would the advantages be for someone in Kenya, say, with having an AI assistant, versus the UK or in Europe?
[00:17:06] David: Yeah, I guess part of it is that manpower is very expensive in Europe.
[00:17:14] Lily: Okay.
[00:17:15] David: And so, the way in which AI is sold as a commercial value is often through the reduction in manpower cost.
[00:17:28] Lily: Yes.
[00:17:28] David: This is where it brings value to organisations. If you reduce the need for manpower, there is commercial value which is gained when manpower is your most expensive thing.
[00:17:40] Lily: Yeah. Whereas in contexts like Kenya, you’re finding that actually these kind of technologies are used alongside to help.
[00:17:49] David: Yeah, manpower is not your most expensive piece. There are a lot of other things driving costs in different ways, and manpower is not necessarily the most expensive component. And so technology can be used to create opportunity, rather than remove it. And this is a really interesting dynamic, which can be fundamentally different. When you’re in a society where manpower is abundant, the dynamics can play out differently.
And I guess this comes back to what we were discussing before: the incentives change once your primary driving factor is not commercial, and therefore not extraction of value. If your primary driver is other things, then you get to a very different place.
And in education in particular, I’m so excited about the potential of this, because again, education is universal in certain ways. It’s not the same everywhere, but there are elements of education which are just, how we want the next generation to acquire skills. And those are the same problems wherever you are in the world.
[00:19:17] Lily: And I guess, in our previous podcast, when we were discussing AI and international development, something that strikes me is that you said a lot of that came from the AI tools of 30 years ago. And then we started this podcast with you talking about the developments that came out 50 years ago.
[00:19:37] David: But, also tapping into the more recent ones. So education is particularly affected by the recent advances in generative AI.
Anyway, I think we’ve probably done enough for this episode. We could carry on this topic forever because it’s one I’m passionate about and very interested in. But yeah, this has been good.
[00:19:59] Lily: No, absolutely. Thank you very much.
[00:20:01] David: Thank you. I’ve enjoyed these discussions.