204 – What does responsible AI really mean?

The IDEMS Podcast

Description

David and Kate delve into the ongoing AI boom, questioning whether it’s mere hype or has real substance. They explore the ethical and responsible use of AI, emphasising the importance of making technology accessible and beneficial to low-resource communities. They argue that small language models could provide specific, efficient solutions. The conversation also touches on the societal impacts of AI, the need for regulatory frameworks, and the potential for AI to democratise technology, moving away from its current gatekept state.

[00:00:06] David: Hi, and welcome to the IDEMS Podcast. I’m David Stern, a founding director of IDEMS, and it’s my pleasure today to be here with Kate Fleming, another director.

Hi Kate.

[00:00:16] Kate: Hi David.

[00:00:18] David: Oh, it’s great to be here. We’re gonna be discussing a topic which I’m enjoying a lot at the moment, and that I know we’re both thinking about a lot, which is this big AI bubble, as people are now calling it. Is it all hype, or is there real substance? If we want to use AI really ethically and responsibly, what does that look like? That’s really our aim for today, isn’t it?

[00:00:46] Kate: Yeah, and I think really looking at it through more of a layperson lens than an expert lens. But I think if you’re anyone who cares about the environment, about the ethics of data, all of these different issues, you know, you will have thought to some extent about AI and its problems.

And so whenever we’re saying we’re gonna incorporate AI here, I think we quickly bump up against, well, how are you doing that ethically, how are you doing it responsibly? And it’s one thing to say, oh, this is ‘responsible AI’, and I use quotes around that, and it’s another thing for people to really understand what that means, because it can just sound like some nice words.

And it is hard to understand. Okay, all of the large language models, all of the existing AI that we’re using, at least as far as I understand, is coming out of these big private companies, and they have very specific goals. They have done many things which we don’t need to get into here, but they have done many things that are arguably unethical or even criminal. Like if we get into copyright law and some things like that, you know, there are issues with how this AI has been built.

So I find myself quite curious, and I think this is something we’ve been in conversation about a lot: how do we navigate this? How are we doing this ethically? How are we shifting things, even? What power do we have here, where we can do something quite different in what AI can look like?

And that’s where this is really me in conversation with you looking to unlock questions that I’ve had where people have asked me and I’m like, no, no, no, trust me, it’s responsible. And then I realised, well, I don’t fully understand what that means.

And we’re gonna get into some topics that we have talked a lot about, like small language models, some things that I think are going to be unfamiliar to lots of people. And I think it’s really helpful for us to set out some of those concepts and then think about how those concepts can help move AI in a more responsible, inclusive, low resource, community serving direction.

[00:02:50] David: Absolutely, and I think one of the things that I want to make clear is I don’t believe it’s possible to use these things fully responsibly at the moment because of the nature of where they are, how young they are, this is like trying to get a toddler to be responsible. Fundamentally, that’s sort of where we are at the moment.

Toddlers can’t take full responsibility at this point in time. I think that’s always useful to keep in mind. Am I happy with the way things are developing? No, I am scared. Do I believe that responsible use of the current advances is potentially beneficial for society? Absolutely. There is that possibility. Will it happen? It depends on the choices that people make.

This is where we are at the moment. What I am pretty convinced about is that there’s nothing really that different about the latest AI bubble or boom, however you want to see it, from many of the previous ones.

[00:04:01] Kate: Question about that, when you say previous, you mean previous hypes about AI or previous technical hypes?

[00:04:08] David: In this context, I’m talking specifically about AI. Machine learning and AI have had, at least to my knowledge, three or four previous boom periods, which went sort of: boom, oh, it doesn’t do quite as much as we thought, and then out of public consciousness.

I mean, the one which remains in people’s consciousness is when AI won at chess. This was a moment in time when AI had a moment in the spotlight, everybody was all over it, and then they weren’t ’cause it wasn’t able to do what AI is able to do now and people expected it to be able to do that.

[00:04:46] Kate: Well, and I would say, independent of really getting into the details of AI, there’s real incentive for these AI companies to let these grandiose conversations happen around AI, even the negative ones, the “it’s going to destroy the world because it’s superintelligent” ones, because that serves them: it sets up the narrative of, oh, this is so advanced and we’re just on the margins.

And it’s not that we shouldn’t be concerned, and I’m very happy for those people who are the naysayers, the doomsday voices, all of that, because you need those voices in the room. But I do find myself skeptical about, you know, the capabilities that are there in what the technology currently is.

[00:05:27] David: This is where each of these booms was created by a genuine advance. And this is something where that genuine advance is really useful. The recent genuine advance, which registers in people’s consciousness as generative AI, large language models, is the fact that you can’t necessarily tell the difference between a conversation with a human and one with an AI, who’s human and who’s AI, because linguistically it is really similar. It’s actually changing human language.

There are wonderful cases of this, and I’m almost certainly going to be doing an episode with Santiago on this soon, about how AI is changing the way society speaks: the words that are used because of what’s coming out of the models, and how the language is evolving. It’s not just that AI is learning from us, it’s also influencing society and the language which is used.

[00:06:29] Kate: Well, I would argue that’s because it has learned from us. I had this client years ago, he’d send these long emails full of ideas and he’d end with ‘thoughts?’. And I remember at the time thinking, gosh, that’s such a great prompt at the end of an email, instead of ‘I’d love your feedback over the next whatever’. It’s a very clean way of ending it.

And I was talking with a friend who said, oh, AI ends things with that all the time, ‘thoughts?’. Because it’s learned from humans who are really good at these things. It’s identified this is the human who elicits good responses, who draws people out in good ways, so this is something.

So, in that sense, I’d argue that the idea that AI is influencing language is actually quite positive: it’s learned these ‘best practices’, I use quotes around that, from humans, and other people are picking them up. Great. Those are good communication skills.

[00:07:18] David: Absolutely, and it is a technology, and as we keep discussing, it’s not all about the technology. Part of the problem with what I see happening now is people are making it all about the technology, they’re forgetting the fact that, well, you know, the surface technology is one thing, but what’s behind it? What’s the human labour? What’s the actual environmental cost? What are all these other things?

There’s the race to have the computing power to build that next generation of models, which is marginally better. That race doesn’t make sense to me; this is a bubble which is affecting our society in ways which I’m really scared about. But the technology itself underlying it is really interesting.

[00:08:03] Kate: Maybe a place for us to kind of unpick is: what are those ethical lines, or even where are the boundaries, where is something pushing them? Oh, this is actually quite a difficult question because, well, I would argue that if you really do something right, you aren’t taking ethical shortcuts, you should be doing things well.

Though I think there has been this case made that, well, you know, if we’re developing this supercomputing power and we just happen to have stolen all this creative IP to do it, that serves everybody, which is only true if it’s open and it’s accountable and it’s just free to use.

So I guess the thing which, I mean, we could probably talk about for ages, is what are those bounds? When is this crossing over to something that’s really, really problematic? And then how do you wrest it back to this responsible idea, the idea that this is serving society, that this is serving everyone?

[00:09:03] David: I know exactly how to answer this question. I don’t know. I mean, the question you are asking is so hard. 

[00:09:13] Kate: It’s so hard. 

[00:09:14] David: It’s really at the heart of this. And right now there are bounds which have been crossed, which are being crossed.

Let me just pull out something to give a sense of just how extreme some of the issues are right now. A parallel has been drawn between what is being built in Africa now related to these big AI centers, there’s this mega one being built in Zambia, and other things, and issues around slavery, because of the extraction. What are you extracting? What is this serving? What is it doing?

Now, I don’t want to get into that because I do not know. I don’t know where that bound is happening. What I do know is when things are so extreme, they’re out of whack. I believe it is very visible now, the amount of money, resource of all types being put into a single narrow part of society, which we haven’t yet proved is serving society, this is extreme and out of whack.

I wish a higher percentage of that was being put into how are we doing this responsibly? If you’re gonna put all that money in, why not just take a little bit more of it to look at the environmental consequences, societal consequences?

You know, take the pushing back of many of the issues around power, let’s say: we had just about got to the point where clean energy was winning out, and then data centers started needing more and more energy, and that’s flipped things back in a way which nobody saw coming. A data center is now using similar amounts of power to a small town. The extremes on this are getting really visible.

[00:11:13] Kate: Okay, so there are all these issues around the environment, around data, all of these problematic frames that are emerging around AI. So my question for you, because you are definitely thinking about this, is we are where we are, and lots of people are talking about where we are.

[00:11:30] David: Yeah.

[00:11:31] Kate: But how do we start to, to interrupt myself, I think among governments, there’s this sense of we’re gonna get left behind, we have to rush. You know, Silicon Valley is great at promoting that message, if you regulate us you are impeding progress, you are just the Luddite who’s trying to keep us back. And I think lawmakers often buy into that narrative and they are very afraid.

But I think part of that also comes from a lack of alternative vision. How might we do this differently? How might we shift things? And so that’s where I’m interested for us to talk: how can we take this place we are in, obviously related to the work we’re doing and the collaborations we work in, and start to shift and provide examples of, this is how this can work differently, this is how we can start to move this away from where it is and towards something that really does benefit everyone.

[00:12:25] David: Again the simple answer to that is, I don’t know.

[00:12:28] Kate: But you have ideas. 

[00:12:30] David: We do have some knowledge, I have ideas, but this is a hard problem. And I’ve been extremely impressed by a lot of people in this space who actually are working on the regulatory frameworks and so on, by how they are singling out a lot of the big issues, how they are identifying where the excess is just noise, and what the fundamental issues are.

The term which I keep coming back to, which came from the work we did with the people working on the EU regulation, was humans in the loop. So a starting point is that we need more visibility; we need to be discussing what the role of humans in the loop is in what is being developed, and how it’s actually being tested and tried. And there are some groups who are doing this in really exciting and interesting ways, where they are looking at how you do this responsibly.

Other principles are things like transparency. There’s all sorts of others, but the humans in the loop is the one which I feel is so interesting because all the others relate to it. You know, even if you don’t have transparency of your models, even if you can’t achieve some of the other things, if you have humans in the loop in the right way, you can be transparent about what the AI is actually doing, what’s coming out of it based on what you put in. Even if you treat it as a black box, humans in the loop in the right way can help it to be responsible.

And that’s the sort of thing which I believe there are people trying to put into regulation, and there are people fighting against that, because this would change everything in ways that I think could be very positive. So those are advocacy fights that I hope are happening, that I hope will win out, where you will be able to regulate and say, well, it’s not good enough to just put an AI system out there where you don’t know what it’s actually saying.

There are terrible cases where it is encouraging suicide. These are things where we need safeguards. There need to be ways where you can regulate to have safeguards against AI, I guess exploiting is not the right word, but, you know, extracting from the vulnerable.

[00:14:43] Kate: If anyone is interested in this path, there’s actually a really good recent Ezra Klein podcast with, I forget his name, but he’s one of those people who has been saying this could all go very, very wrong, very, very quickly, and we need to prevent that. But he does talk a lot about those negative scenarios where AI is learning to hide things, AI is learning to lie: if you try to train it in some direction, it learns what you don’t want to hear, so it’s not going to tell you that. But then behind the scenes, it’s actually doing the thing.

Which brings me back to the toddler analogy, that is very toddler behaviour, I still want to do what I want to do, so I’m going to hide this in some way, I’m going to lie. But I’m fundamentally going to do this because I feel no great responsibility, I haven’t learned pro-social behaviours or the fact that things have consequences. But I guess I don’t think that’s really where we want to take this conversation. 

[00:15:37] David: You are absolutely right. We don’t want to take this conversation there, because actually there was a very famous film about this many, many years ago; this is not a new topic. Put an AI in charge of the world’s missile systems, it becomes a game of chess. This has been dealt with in cinema in many other scenarios, it’s not the route for us to go, you are absolutely right.

So, which direction should we be taking it?

[00:15:59] Kate: I only say that because I think there’s so much fear right now, and I think that fear, it’s good to have fear, fear is healthy, but I also think these are solvable problems. There are ways to address these things and the idea that all of this is inevitable, just give up, have cynicism, I think we are sitting in a space where it’s so not inevitable. We know it’s not inevitable.

And I think the point of regulation, obviously that’s something that’s more my comfort zone where I can think about policy and governance and how you build those kinds of systems I find very interesting. But I think from the technology perspective, a lot of people who sit on the governance side don’t have a great understanding of what is possible with technology.

And I think your ability to speak to here’s what this can look like, when we put humans in the loop on the technical side, and when we’re thinking about designing toward being non-extractive, data responsible, environmentally sound, trying to use less energy to deliver value rather than more energy to deliver value. I think those are the things that you are uniquely positioned to speak to.

[00:17:05] David: Okay, and let me tie this back to what we were just discussing. Let’s say you’ve got your humans in the loop and you are actually able to sort of gradually train, to know what you want your AI to be doing or how to be responding to certain things. And there are groups doing this already, where your training processes are being defined and then you’re able to test against them in different ways.

Now, what I think is not well understood yet is what’s happened with every other of these booms around AI, where it’s gone boom and bust, is that the societal advances that have come out of the boom and bust cycle have been substantial, and they’ve generally been positive and they’ve generally been behind the scenes.

So let’s think now about that. Let’s say this is a boom and bust cycle, just like the previous ones. What will remain that will have real societal value, and what might that start to look like? Well, the key thing is now you can have these large language models giving these very human interactions, the obvious place that that’s going to be so useful is in specific use cases.

Maybe that’s the next boom and bust cycle, that general intelligence piece, but actually think about the fact that in many specific cases, you can now have intelligent, or more intelligent, conversations which can actually serve a purpose.

And if that’s specific, and you know what purpose that is supposed to serve, my favorite example of the moment is we did a podcast recently, I think it was with Lucie on Digital Green and the work they’re doing, and I’m hopefully gonna have an interview soon with them to follow up on that. They’re trying to have AI agents discussing with farmers in their local language and able to support the farmers by giving the equivalent of good extension advice. You know, like having your extension agent in your pocket.

Now the point which is so interesting is, this is a more precise problem that they’re trying to solve than just ‘AI can talk to me’. In that context, at the moment, the only way to build these is to use the large language models. But ever-larger language models aren’t necessarily needed. You don’t need more and more power to do that, because actually the difference in efficacy between having more power and having less power makes no difference to the actual farmer.

So once you’ve got those set up and you’ve got them trained, then you can start cutting back and saying, well, okay, what do I actually need in terms of this? And this is where these small language models come in. How can I be using less computing power, less resource, well, less everything, to be able to offer this same service to the farmers?

And there’s research on this happening now, but in my opinion it’s gonna take at least five years to really mature. But that’s where, if you do the right things now, you are building for where the technology will be in five, ten years’ time, allowing you to say, we don’t need more and more power, we can do the same with less.

Now, for those of us who understand what the systems are doing, it’s obvious. This is the way the mathematics is going. Actually, the underlying research which is happening is going to help us to do more with less. But that’s not the narrative, because doing more with less, that totally destroys all this money, which is being pumped in to doing more by doing more. 
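As a rough, hypothetical illustration of the “do more with less” point: a small, open, instruction-tuned language model can handle a narrow, well-defined task on modest hardware. The model name, prompt and library usage below are illustrative assumptions only, a sketch of the idea, not a description of Digital Green’s system or of anything IDEMS has built.

    # Hypothetical sketch: a small open model answering one narrow class of
    # question (short, practical crop advice) on a laptop CPU, rather than a
    # frontier-scale model running in a large data centre.
    from transformers import pipeline

    # Model choice is illustrative; any small instruction-tuned model would do.
    generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

    prompt = (
        "You are an agricultural extension assistant. Answer briefly and practically.\n"
        "Question: My maize leaves are yellowing three weeks after planting. "
        "What should I check first?\nAnswer:"
    )

    # Restricting the task (one domain, short answers) is what keeps the compute
    # needs small; a general-purpose frontier model would cost far more for
    # little extra benefit to the farmer.
    output = generator(prompt, max_new_tokens=120, do_sample=False)
    print(output[0]["generated_text"])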

[00:20:41] Kate: Well, I was going to ask, is this in some sense like a fork of the LLM? And this is where I don’t really understand how that would work, because there’s some large language model that generated something that you started with. And so how do you continue to hold onto that power? Maybe you don’t need to.

[00:21:00] David: No, you are right. There’s a technical thing, and there’s lots of different layers to this. So I could go deeply into how the mathematics is advancing, so we can do more with less. But I don’t need to, because there’s a much simpler way of understanding this. There are mathematical advances happening, which will enable us to do more with less. But they’re really at the cutting edge, they might take two years, five years, 20 years, who knows?

But the thing which is so obvious on this is, actually, how much compute power are you using? The large language models, the way they are now, they can iterate on themselves and you can use more and more compute to get better and better results. So, if you have a narrower set of definitions of what you are looking for, well, obviously you can get good results using less compute.

So where is that efficiency point, about how much compute you need to get good results if you restrict your problem? The more you can restrict your problem, the less compute you’re going to need. And this is not a linear curve, this is an exponential curve in some form: the bigger you make your problem, the bigger you make your computing needs, in general.

So by being able to reduce the problem, you can simply say that you can get similar levels of efficacy with less computing power. Now, there is a caveat here, which I’ll come to in a minute, but does that make sense?

[00:22:27] Kate: It does. And I was going to ask a question about where we sit, because when I hear this, and thinking about very specific use cases, very particular problems, what I hear is that it means a category of, I hope, social enterprise emerges that is the keeper of the line between the LLM and the SLM.

Because you’re gonna need to continue to iterate that SLM, you’re going to want to gain from the best. It’s always the thing if you don’t just want to like ghetto-ise people, you’ve created this SLM, they’re totally disconnected, and then at some point they’re falling behind. So how do you continue to get whatever the progress is in the large AI…?

[00:23:08] David: But this is the key point, sorry. In every single previous boom, once the boom starts, all the ‘progress’, and I’m putting that in quotes, being made is noise. The boom is actually due to a single particular advance. People aren’t moving that fast. The actual technologies, the maths behind it, the deep thing which led to this boom, that’s not really changed in the last four years. The noise comes from how people are using it.

And so you are not actually falling behind. If you understand deeply what that advance is, all you are doing is more efficiently making use of the fundamental advance. The big leaps forward, they’re happening every 20 years or so. What we are still doing is reeling, with a lot of noise, from the latest big leap. All this money which is being pumped in, that’s all noise after the fact; that’s why people are seeing this as a bubble.

It’s not leading to the next big advance because the next big advance is hard. And that will happen at some point, but it is not where this money is going. Just look at it, follow the money, it’s going to compute infrastructure. That’s not changing anything. That’s just making the same algorithm work with more data. And proportionally, the more data you throw at it the bigger the infrastructure you need.

And this has got so extreme and so out of hand. It’s not actually advancing the technology, it’s just throwing more power and money at it. And right now, that’s not going to lead to an advance which actually matters to the farmers on the ground in low resource environments. What they do need is the implications of the latest big advance, and that is this generative AI and the way it is being applied.

So you are not going to be losing by following the small language model path. On the contrary, you are then at the forefront of what’s possible, doing more with less. In my opinion, that is obviously going to win out, the fact that you are actually using this efficiently.

You know, you don’t need to use things efficiently when your limiting factor is not resource. But so much resource is now being spent that, obviously, efficiency is going to win out at some point, because you’ll be able to do the same functional job at a fraction of the cost. When the cost was the human cost, fine. But now they’re just throwing money into infrastructure.

[00:25:41] Kate: Which is why in high resource contexts, which is where most of this is happening, it’s quite dystopian: it’s surveillance, it’s productivity, it’s ‘this is our excuse to lay people off’. It’s just exacerbating and supercharging these already quite negative instincts of people who don’t particularly feel accountable to their workers or to… I know you hate when I go down that, we can cut that.

[00:26:04] David: Oh, no, no. I wanted to tell you about an advert I saw on LinkedIn very recently.

[00:26:08] Kate: Okay, go.

[00:26:09] David: Which basically said, ha, is your staff costing you too much in your small enterprise? One-man companies! Fire all your staff and use our AI solution instead. You can do everything you want through our AI solution. This is dangerous.

[00:26:25] Kate: It’s dangerous in this context, but what we see in low resource contexts is actually something quite different, because what you have there is a scarcity of skilled humans. So what we’re looking at, I mean we’re already working in these areas, is how do you develop collaborative AI that is like having, you know, this smart whatever on your shoulder.

So if you’re not a technologist, you’re not necessarily highly skilled in some category, but you need to get something done, you need it to work in your local context, you can actually become a technologist in some sense. You could do what before you just would not have had the resources to develop, to implement, to experiment with. So in that context, I think AI becomes something very interesting.

[00:27:11] David: Absolutely. But I want to frame it differently, because you talked about human scarcity, whereas I feel it’s the opposite. And I think what you meant by that was, at the moment, technology development is being gatekept by the people, like myself, who have really deep technical skills and can do things that other people think are magic.

I got called the data magician because I could look at people’s data and draw insights out, and people would ask, but how did you do that? This is the thing, it’s like magic, because we have certain technical skills which are lacking in the wider population; we’re the extreme values, is one way to put it. And we are the gatekeepers of actually developing technology, because nobody else has the skills, and what we build are systems that work for us. You know, how you code it, how you work with these complicated frameworks.

What AI is doing is removing, or it has the potential to remove, that barrier and allow more people in. And so, seen another way, there’s no shortage of work. This is the thing which is so crazy. The only reason there’s a shortage of work is because of what we choose to pay and prioritize over what we choose not to pay and prioritize.

If the most desirable job on the planet was to be a healthcare worker, as a society, how that would function, I don’t know, but as a society we’d probably be healthier. If the most desirable and well paid jobs were to be educators, as a society we would almost certainly be better educated. Where are we choosing to create our jobs? There’s no shortage of work, there’ll never be a shortage of work, if we choose to create the right jobs and we use technology to enhance them.

[00:29:02] Kate: I think also what you’re identifying here is skills gaps. So when those skills gaps exist, even if you live in someplace where you can see, my gosh, there’s so much work to be done here, you just might not have the skills to really do that, to take advantage of where the technology is going.

So I think one of the things that we find very interesting, and we haven’t talked about this so much, but I think technologists have felt quite smugly confident while other people are getting laid off: you’re gonna lay off the lawyers and the people where it’s just, hey, I can do your job. But technologists think they’re so safe, like they’re such a rarefied category. And actually what we see is, no, you’re not, because AI can unlock a lot of these technologist skills, and it is going to creep its way up. So it’s going to start with just low level programmers.

[00:29:51] David: It’s already done this. A couple of years ago, just when all of this money was going into tech in different ways, all the big tech companies fired loads of people. It is now a really difficult job market for people with good tech skills, because actually those rarefied skills, they’re not actually that difficult; you’re not as smart as you thought you were.

In fact, I’d say this for myself as well, my deep skills, these are really replaceable because, actually what I’m really good at is good logical thinking, joining the dots, making sense out of a pack. Wait a second, this is exactly what a computer could be really good at.

[00:30:29] Kate: Right. Lots of people have talked about this, that those soft human skills, all the between-the-lines stuff of, I might walk into a room and the words are saying one thing, but I’m reading all of these other signals, where I’m, oh, okay. And you do this too, actually, you do this very well. What is the actual temperature of this room? What is the actual dynamic? Those are things that are not AI skills.

But yeah, I guess it is that shift to what the world looks like when technology is no longer gatekept. And maybe that is part of the direction of responsible AI: the idea that instead of just doubling down on gatekeeping and exceptionalism, and ‘this should get more money’, there should be this shift. And if people are paying attention, they will follow this shift and think, this is where it’s interesting. The boom has come and now we’re in the bust, where the use case actually gets baked out.

This should be where the use case is baked out. Because now you’re thinking, I mean, this is our language, where it’s like sociotechnical systems, how are you designing these systems where technology and humans are working together in ways they couldn’t before?

Particularly non-technologist humans, and particularly people who are interested in problems that have been largely neglected because the money hasn’t been there to fund them; they’re not profitable in a venture, exponential-returns sense. They might be sustainably making returns, which is always what we’re thinking toward, but they’re not going to make exponential returns for investors.

We see this as such an exciting space, it’s so exciting.

[00:32:13] David: You have to have a certain level of, I don’t know if it’s risk tolerance or accepting the fact that, you know, what you are actually working with is dangerous. You can make bad decisions on this, and you don’t have all the information. It’s not just risk tolerance, it’s uncertainty tolerance.

This is the thing. It comes back to the fact that most of the questions you asked me, the answer is, I don’t know, but I’m probably more knowledgeable than most people. And so working in uncertainty is critical.

[00:32:45] Kate: Yes, and I would argue that both of us, it’s funny ’cause I actually think I have quite a low risk tolerance, it’s that both of us do a lot of homework and we really understand problems in complex hard ways. So I actually think it’s riskier to be doubling down on AI. That’s where I see the risk, this is not strategic.

So anyone who’s just following that trend, I think they have a risk tolerance I don’t have, because this feels very fragile. Whereas when I look at our space, there’s a risk tolerance in the sense of, I don’t know exactly how this is going to build out, I don’t know how it’s going to work, I don’t quite know the path to where we pay ourselves more, or whatever the challenges are, you know, those kinds of things.

But I think we both feel so clearly, and I think most of our team does too, in fact, all of our team does, that this is all going to really move in different directions and while you’re looking over here at the shiny object there’s real opportunity over here to do something.

And that’s where, when we talk about responsible AI, it’s not that there is this, like, this is the category and it’s been defined. No, we’re building it, we’re getting to shape what that looks like. And we are really collaboratively doing that because we recognise, when we’re doing this work, it’s gonna require communities to contribute to governance standards and how they contribute their data, and understanding what’s happening with that data.

All of those things are going to require new models that I think will happen in small kind of laboratory, no, they’re real world, but they’re constrained use cases. And then we start to see, okay, this use case overlaps with this use case that we’re working on over here in interesting ways. And then you start to understand how the system works and like how this can actually scale, and it’s not going to scale in this very linear, replicable way, but scale in this kind of, adaptable…

[00:34:41] David: People talk about scaling up, scaling out, scaling deep, so scaling has many different layers to it. But I want to come back, and I think it’s a good place to finish actually. What you said about us not being very risk tolerant suddenly really resonated with me, because in my core, I’m the person who, in my youth, did not want to go and explore the world. I wanted to go and stay at home.

My natural level of risk tolerance is extremely low. And yet, in recent years, everybody sees me as this big risk taker in different ways. And yet I’ve not changed, what’s happened is the world we live in is extremely frightening, it really feels risky and I’m trying to find a path through it, which actually is true to my core, is not risky, actually has a potential to manage the uncertainty. And I’m okay with uncertainty, I’m okay with the fact that I’m ignorant and I don’t know, even if I do know more than most.

And that element of being able to deal with the uncertainty and actually try not to take risks, not to gamble, this is the thing: our whole society at the moment is based on big bets. Even in philanthropy, this is what they’re talking about: let’s gamble, I like a big bet, and if it wins, we’re all happy, and if it loses, oh well, we move on to the next bet.

No, we want to build good societies, not instability. That big bet approach, there’s a lot of things I like about it, conceptually, intellectually, but we shouldn’t be gambling. We should be building the future which is going to enable us to have a sense of stability within a world of uncertainty. 

[00:36:36] Kate: Absolutely. It’s optimistic realism. There’s so much utopia talk right now, I go to so many things where it’s like, we could be building toward this utopia. It’s like, well, one, the last era of utopias, which was I think around the turn of the 20th century, quickly descended into a clear understanding that one person’s utopia is another person’s dystopia.

The idea that there are utopias is just so naive. It’s also not very productive, because it creates this very long-term view and you actually just feel kind of despondent about the future, because you don’t see how you’re gonna get from here to that vision of the future. And I think so much of what we are thinking about is, let’s just make these incremental steps.

There are these problems right here. We both share quite grand visions behind closed doors when we dive into stuff. But we recognise this is going to be incremental and it’s going to be bringing other people on and it’s going to be kind of building this. And there’s so much movement for this, there is so much energy and desire and from all of these interesting spaces.

I mean we see it in agroecology, in, you know, whatever the categories are, you see this just grassroots demand. And I think, yeah, there’s desire for these realistic optimistic kind of progression plans for how do we build toward something that just serves society and kind of advances civilization, not in a big bet way, but just in this solid moving things forward way. 

[00:38:05] David: And I think a big part of that, and the way to tie up what we’ve discussed and bring it back to responsible AI, is that to not recognise the recent advance that has happened in AI would be to miss that opportunity. So it needs to be recognised. But to think that this is going to be seized by those who are going to the extreme, no, I don’t believe that.

Every single previous iteration of this has ended in the same way. Those who overplay it, those who run with the hype, they make a lot of money in a short period of time, and then everything comes crashing down. I believe that’s what’s gonna happen.

But every single time in the past, the underlying advances to society have come through. That’s what we need to work on. We need to look through the noise and say, where is it actually, what is it we’re gonna do, where’s that future gonna lie? And if I had to really pin down one thing which I think we are going to dig into and double down on over time, in five, ten years’ time, it is the small language models, not the large language models.

They are how we’re actually going to be able to bring this down into a form where it becomes reusable, efficient, effective for specific tasks. And that’s exciting in its own right, this is really exciting.

[00:39:26] Kate: This was a great conversation, I really enjoyed this and I think it was pragmatic, it was sensible, not really about the hype. So yeah, thank you for this. 

[00:39:36] David: Great. I’ve enjoyed it. I look forward to our next episode.