196 – Scalable AI Tools for Farmers

The IDEMS Podcast

Description

David and Lucie discuss ongoing efforts to responsibly use AI to assist smallholder farmers in the Sahel region. They discuss the challenges faced by farmers in identifying pests and diseases and the shortcomings of commercial AI tools. They highlight the work of Digital Green, a non-profit organisation developing an AI tool for farmers that emphasises relevance, local language integration, and responsible AI practices. They explore the potential for collaboration and future advancements in AI tools tailored for low-resource environments.

[00:00:07] David: Hello and welcome to the IDEMS Podcast. I’m David Stern, a founding director of IDEMS, and it’s my pleasure today to be here with Lucie Hazelgrove Planel, anthropologist and Social Impact Scientist at IDEMS. Normally she’s the one interviewing me, so it’s my pleasure to be reversing the roles today.

[00:00:25] Lucie: Ah, you’re meant to be interviewing me then.

[00:00:26] David: Sort of, well, it’s my topic for once. It’s a topic that we’ve been working on a bit over the last few years, looking at how to responsibly use AI related to smallholder farmers in low resource contexts, such as the Sahel region. And we’ve had lots of interactions on this over the years. And over the last two, three years, I suppose, there’s a whole group there, which you’ve been interacting with and supporting, and I’ve finally come across an organisation who I think are nailing it.

Digital Green, they’re doing some amazing work on this, they’re getting, I believe, a lot of the details right. And so I do want to talk about this a little bit in general, but also get some of your experience working with the partners on why is it the current products aren’t serving them, and what have they been discussing?

[00:01:19] Lucie: Yeah. Okay. Yeah, if I start talking a bit about then the project that we’ve been supporting. So there’s a few different researchers in West Africa, and I’m not specific there because they are coming from different countries, from Mali, Burkina Faso…

[00:01:33] David: Niger, Alagah.

[00:01:36] Lucie: True. And they all were really motivated by this idea that farmers have difficulty identifying the pests that are attacking their plants, whether it’s pests or diseases. And while some of the researchers have different, I’m gonna call them solutions, they have different options that farmers could use in order to help combat these pests and prevent them causing damage, they have difficulty getting that information out in a way which is really fast and easy to use.

And so of course when everyone started talking about AI, then these researchers came along and were like, well, you know, we want to use it too, because a lot of them already knew and have tried out commercial versions, but they just, as I think you mentioned, they’re just not serving their purposes, they’re not serving the farmers’ purposes, they’re not even accessible to the farmers.

So we’ve been working together to try to support them in developing tools and finding ways to develop tools for farmers to be able to access the research that’s coming out. So access, different options, which are adapted to them, adapted to their context. It’s a long work in progress.

[00:02:39] David: Absolutely. And I want to pick up on this commercial tools. Not all of them are commercial, but the tools that have been developed, none of them has been Sahel focused. A lot of them have been trained, particularly in India, where there’s been a lot of work which has happened. Some of them are more commercial tools, others more research focused tools.

Tailoring them to the Sahel context is necessary and hasn’t yet been a big focus, and this is one of the things they were particularly interested in. But part of the discussions and part of the problems was that, well, who is choosing where to develop these tools, how to create the data banks, and so on. And the word commercial I think was very appropriate because one of the best such tools ended up getting bought out by a farm input company. What do I mean by a farm input company? It’s sort of a producer of pesticides.

[00:03:40] Lucie: Yeah.

[00:03:40] David: And so that now ties in with the identification tool being owned by someone who is interested in the sales that come through it, and therefore it’s skewing the options being presented in ways which are sometimes at odds with our researchers’ focus on agroecological solutions, which the farmers could do themselves and where they wouldn’t need to buy the inputs.

[00:04:06] Lucie: There were two issues there. There is the issue that not only are the solutions often not agroecological, but also they’re financially not available to many farmers in the Sahel.

[00:04:16] David: Absolutely, both financially and physically. You know, this is one of the things which has protected the Sahel in some sense from certain inputs, because it is physically difficult to get products there. So there’s all sorts of reasons why that model of a tool owned by the company that sells the products is not the model which I believe is needed to really serve the smallholders in this way. So this has been a lot of our discussions, and we’ve been looking at it in different ways.

[00:04:53] Lucie: And a lot of the researchers we worked with too, they’re very interested in farmer developed things. They don’t want to develop something for farmers, they want to develop something with farmers. ’cause they know in their own experience as well, and they’re convinced that that’s how to create something which will be used, which will scale.

[00:05:12] David: Absolutely. And this leads to a different need for a business model of the artificial intelligence tool, which is then used to support them. And so we’ve been working with them, we’ve had a number of these hackathons, which they’ve now contributed to, and got some interesting ideas out of.

So it’s been, you know, two to three years now that we’ve been working with this team, investigating this. And the thing which I’m really excited about is that I believe Digital Green, they’re not solving exactly that problem now, but they are building an AI tool for smallholder farmers, which has, I believe, all the characteristics that we need to solve this problem.

[00:05:57] Lucie: And what characteristics are those then?

[00:06:00] David: I’ll try and talk through some of them. One of them is that it is being developed by a not-for-profit. Digital Green is a not-for-profit organisation whose main motive and motivation is to scale impact in different ways. Their starting point wasn’t the AI product. You know, they started also in India, working with lots of farmers, and their starting point was they had 10,000 farmer-to-farmer peer learning videos.

[00:06:27] Lucie: Yeah.

[00:06:28] David: Which were, if you want these ideas where farmers were sharing different practices amongst themselves, which then other farmers could take up and use. And their AI product started as saying, well, you know, it’s really effective when farmers watch the right video, but it’s very difficult to set up in advance which videos they should watch.

Their program before that was really about getting farmers together to watch videos in different ways and they had a lot to do. But, a big part of it was that, well, if the video wasn’t actually relevant to those farmers, of course it wasn’t effective. And they had 10,000 videos now.

And so actually the problem became identifying which video would be a good one for a certain farmer or farmer group to watch. They knew they had good evidence, they’re a group which has produced some really interesting evidence behind this, they have good evidence that these farmer-to-farmer videos are effective at supporting farmers to take up new ideas and share and exchange these ideas.

But the AI product chatbot they’ve been building is really, it is solving this problem of relevance, which is a really good one. It’s exactly the same problem that we are talking about with the relevance of trying to identify a pest or a disease. Are you getting the relevant information that you need? Which is very interesting.

[00:07:57] Lucie: Yeah.

[00:07:58] David: And then I get into the details. So I don’t know them, and I should be absolutely clear to anyone listening that my knowledge of Digital Green is very limited, I’ve had the privilege of meeting with them a couple of times and of looking into what they’re doing. So I do not consider myself an expert on what they are or what they do. But I do consider myself quite knowledgeable about responsible AI in low resource environments, and I’ve never been so excited as I was looking at what they’re doing.

They’re nailing details from responsible AI. They’ve got humans in the loop. At the center of their approach is a review process with humans in the loop. The AI is playing a very important and particular role, but the role of humans in the loop is central to what they’re building. And they’re building a platform which enables humans in the loop to do review, to guide the advice that the AI gives, and to train the AI in ways which correctly put as much of the power as possible with the human input, where the AI’s role is the scalability of this, not the source of the information.

It’s great the way they’re designing it with humans in the loop, which is one of the principles of responsible AI, it’s something which we’ve talked about a lot in other contexts, maybe more with Lily than with yourself, but it’s central to one of the things that we advocate for.

And other elements that they’re doing, which I think are really interesting are, when we look at the model they’re building in terms of how this is going, language is central. So they’re actually looking at the language of interaction. Now, of course this is hard, but this means that where they have successfully got good enough language in terms of the language models…

[00:10:01] Lucie: Here, do you mean language in terms of the local languages?

[00:10:04] David: Yes. So for example, in Northern Nigeria, Hausa is a big language, which is also spoken in places like Maradi, where we work in Niger, so they have already got a Hausa version of the chatbot. In Kenya, they’re looking at a number of local languages, unfortunately not yet the areas where we’re working in Western Kenya, but they are focusing on saying we need this interaction to be able to happen in local languages. There are limitations to this approach, but I love it, it’s exactly the right approach.

[00:10:37] Lucie: And I think I’ve seen that in the language, trying to make the tools available in the correct language for the farmers. In doing that, they’re sort of having the same difficulty as us with developing a tool which is specific for the Sahel agricultural environment, in that, well, there’s just not the data there to build the models from.

[00:10:56] David: Absolutely, and this is the thing. So there are certain languages that they have successfully done. There are others which could be relatively easy to integrate. And there are others where that would be a huge amount of work. But they are not shying away from that. This is a hard problem, and they’re not shying away from it, they’re actually embracing it.

You know, this is a long-term project. To reach different people in different contexts, you’ve gotta meet them where they are, and part of that is the language. And that means that they are having to be at the forefront in the ways of what languages do we have large language models for? Maybe we don’t need large language models. Maybe small language models are enough, but that’s a whole different discussion.

[00:11:36] Lucie: I was wondering that. Because you know, if you’re working in just one specific field, basically, then you don’t necessarily need all of the capabilities.

[00:11:43] David: Exactly. And this is something where there’s very recent research about the fact that, now we’ve got the advances with large language models, actually cutting back in specific niche areas to use small language models rather than large language models, this has huge implications if we can get to it.

It’s relatively new research, but it’s really important and it does, I think, relate to the approach that they’ve taken, which is to say, okay, you know, your Hausa model is almost certainly not as good as your English model, but that doesn’t matter, it’s good enough. You don’t need to have the best large language model to be able to successfully play this role.

[00:12:30] Lucie: And especially, as you say, about having the feedback loop with the human who tries to improve the model. A lot of these farmers don’t have much access to reliable information anyway, so having something, even if it’s a bit out of the way, it’s going to be intriguing to them.

[00:12:48] David: Yeah. There are areas where I still feel, and you know, if people from Digital Green get to hear this, I still feel there’s areas where I would like to see it potentially behave slightly differently from what they’ve currently got. So let me just give you an example of that.

I could imagine that there’s some questions that when the AI algorithm answers them, it doesn’t have a human input related to the answer as well. So it’s actually just drawing, you know, it could measure how confident it should be that the answer it’s giving is something which has been approved. And if that was below a certain threshold, they could put in a time delay where you could actually have humans then going through and looking at the proposed answers and saying, yes, this is a sensible answer or not.
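The idea David sketches here, routing an answer either straight out or into a time-delayed human review queue depending on a confidence score, can be illustrated with a short sketch. This is purely hypothetical: the threshold value, the `Answer` and `ReviewQueue` types, and the sample answers are all invented for illustration and are not part of Digital Green’s actual system.

```python
# Illustrative sketch of confidence-gated routing with a human review queue.
# All names and the threshold are assumptions for the purpose of the example.
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff; a real system would tune this

@dataclass
class Answer:
    text: str
    confidence: float  # how sure the model is that its answer matches approved content

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def submit(self, answer: Answer) -> None:
        self.pending.append(answer)

def route(answer: Answer, queue: ReviewQueue):
    """Deliver high-confidence answers immediately; hold low-confidence ones
    behind a time delay so a human can approve them before they go out."""
    if answer.confidence >= CONFIDENCE_THRESHOLD:
        return answer.text        # delivered straight to the farmer
    queue.submit(answer)          # held for human review first
    return None

queue = ReviewQueue()
print(route(Answer("Confident answer, delivered immediately.", 0.93), queue))
print(route(Answer("Uncertain answer, held for review.", 0.40), queue))
print(len(queue.pending))  # one answer waiting for a human
```

The point of the sketch is only the routing decision itself; the hard part David goes on to describe is getting trustworthy confidence measures out of the underlying models.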

Now this is a more complicated AI system with humans more deeply in the loop. But it’s what I believe is really responsible. You know, this is where, if we think about this for the parenting work that we do, where I think it’s even more critical or high stakes that you don’t get wrong advice going out, that’s what I would recommend. Whereas in this context, I don’t think it’s critical enough that it matters if some wrong advice goes out, it then gets corrected later, whatever.

[00:14:22] Lucie: It depends how wrong the advice is.

[00:14:24] David: Well, this is the thing. Exactly, and this is what we don’t know, and just in terms of a responsible AI chatbot model for what they’re doing, building that in is the extra component, which I don’t think is easy. This is not a criticism that they haven’t done it, this is more a sort of extension, this is potentially what could make what they’re doing even more responsible.

And that would be really deep work on the AI models behind it, because you’d need to be able to have these measures of certainty coming out, which are possible. The maths behind that means that this can be done. But how this would actually be built into systems such as theirs almost certainly would be non-trivial, which is a very mathematical way of saying this could be quite hard.

But I think that it’s, I’m not saying this to criticise what they’ve got, on the contrary, as I say, what they’ve got is the best I’ve seen by far anywhere in terms of really responsible business models, implementation models for low resource environments of AI. So congratulations for what they’ve got, it’s exciting that this exists.

[00:15:38] Lucie: Yes. Yeah.

[00:15:39] David: And maybe we can come back to our colleagues who have been interested in this. I am hoping over the next six months, a year, whatever the timeline is, that we can see about actually tying together these two things. I think the approach and the platform that they’re building for these review processes, this is exactly what our team, or not our team, sort of our collaborators, have reached as a conclusion as to what they need.

They need a similar platform where they can have students doing one level of this and lecturers doing another level of review, that is exactly what’s implemented already in their processes for these models. So I feel that there’s real potential to tie together the fantastic work that Digital Green are doing and the efforts that we’re seeing in the Sahel region specifically trying to get these tools that can do identification of pests and diseases and so on.

[00:16:39] Lucie: Yeah. Yeah.

[00:16:40] David: Tying these approaches together would be really exciting.

[00:16:43] Lucie: Yeah, I’m sure the researchers would be very excited.

[00:16:46] David: And this is something which, you know, watch this space, my hope is over the next six months or a year we can find ways to collaborate with them, to engage, and to start at least piloting some of the things that could lead to these, well, further advances of the systems.

[00:17:02] Lucie: Okay. Well, great. Thank you very much, David, for letting us know about these positive advances, as you say.

[00:17:09] David: Yeah, I’m excited about this, this is genuinely, it’s rare for me to see that people are just getting the details right in this, because it’s hard. It’s not a criticism of other people who are doing it with other details, they’re serving different objectives, they’ve got different constraints. But to actually find a group that are, in my mind, getting so many details right, oh, it’s great.

[00:17:34] Lucie: Thanks.

[00:17:35] David: Cheers.