Description
In the first episode of a series on Responsible AI, Dr Lily Clements and David Stern discuss the “AI Safety Summit” that took place at Bletchley Park in November 2023. Are we listening to the right experts on AI? Should we be worried about killer robots? And will regulation stifle innovation?
Further information about some of the topics covered:
Official introduction of the summit: https://www.gov.uk/government/publications/ai-safety-summit-introduction
Dutch childcare benefit scandal: https://www.amnesty.org/en/latest/news/2021/10/xenophobic-machines-dutch-child-benefit-scandal/
Amazon’s AI hiring scandal: https://www.bbc.co.uk/news/technology-45809919
[00:00:00] Lily: Hello, and welcome to the IDEMS podcast. I’m Lily Clements, an Impact Activation Fellow, and I’m here with David Stern, a founding director of IDEMS. Hi, David.
[00:00:15] David: Hi, Lily. What’s the plan for today?
[00:00:17] Lily: Well, this is likely to be the first episode of an IDEMS Responsible AI podcast series.
[00:00:22] David: Absolutely.
[00:00:23] Lily: Today I’d like to discuss the first world summit on artificial intelligence safety, which took place recently at Bletchley Park in the UK. I was just wondering what your thoughts are on it and on the outcomes that have come from it. I guess I have several questions. So let’s start with number one: what were your thoughts on there being a summit in the first place?
[00:00:43] David: I mean, this is a topic on everybody’s minds. AI has exploded, as we know, over the last 18 months or so, and we’ve got sucked into this as well. So having something like this was sort of inevitable. It was a question of when, not if, really, that something like this would happen. So I think that this is important. It wasn’t necessarily the most constructive people in the room, but there were very interesting people in the room who were discussing a topic which needs to be discussed.
[00:01:22] Lily: Sure, and so who would you say were those interesting people in the room?
[00:01:27] David: Well, governments, for one. The number of governments represented there… This is important, they are taking this seriously, they are discussing regulation. The EU representatives were there, they were discussing regulation. The Vice President of the US was there discussing regulation and putting forward what the US is doing, positioning itself as actually being at the forefront of this.
That was part of the political manoeuvring that I don’t fully understand and I don’t really need to understand.
[00:01:57] Lily: Yes.
[00:01:57] David: So, but there was also big tech. Not all big tech, but enough big tech that there was credibility.
There were academics. Their voice wasn’t quite as loud as I would have liked it to be. You know, there was a bit too much discussion about killer robots and not quite enough discussion about, let’s say, the AI scandals where governments have used AI and it has led to suicides and other harms. So actually, just worrying about bias, that’s where the real risks of AI and AI use are right now.
[00:02:36] Lily: I guess there are a lot of ways that I want to go with this conversation at the moment. There were a lot of different people in the room. There seem to be a lot of different governments, or unions such as the European Union, who are kind of, in a way, racing to get their policy out. But, maybe not racing, sorry, but…
[00:02:54] David: No, this is an interesting one. I mean, I’m really intrigued by that choice of word. They’re rushing, certainly. There’s a question about whether they’re racing or not. And to me, I think the big point is that haste is needed because…
[00:03:07] Lily: Yes.
[00:03:08] David: Things are moving extremely fast and people need to get things out. And arguably none of them are getting it out fast enough.
[00:03:14] Lily: OK.
[00:03:15] David: However, racing is an interesting word because it implies they’re racing each other or racing someone. There’s an element of competition there. And actually, I honestly don’t know. This is a really interesting one. I don’t know whether, from the governmental perspective, from the European Union and others, there is any value or any desire to be the first to get theirs over the post and out.
There are at least as many potential disadvantages in this context to being first as there are advantages. So it’s really not clear to me. The EU regulation led to a number of big tech companies, including OpenAI, saying, oh, if you’re going to put out regulation like that, then you won’t have access to our AI. Racing is an interesting choice of word; I’m sorry I interrupted on that. I’m happy to say when I don’t know. And the racing, I don’t know. Rushing, certainly.
[00:04:20] Lily: Okay… By racing I meant it in the sense of against each other, but also against AI, because it’s moving so fast and now we need to kind of keep up.
[00:04:31] David: But they’re certainly not racing against AI, because they’re losing that race. They’re trying to catch up. They’re playing catch-up now. And this is a really important point. The regulations that were being put forward and thought through three years ago were happening at a sensible pace. And what’s happened in the last 18 months to two years has been a tipping point, where suddenly AI has overtaken any efforts on this.
There was a question about whether it was too slow before, but now it’s clear: the safety regulations are playing catch-up, and everybody knows this. Go back a little bit further, and you had big tech talking, you know, in the US about the fact that they accept that AI needs regulation. This is unheard of, and it really speaks to the moment we’re in right now.
There is a need to get some form of control on this.
[00:05:30] Lily: Yeah, absolutely. And then I guess you also mentioned the people in the room who were more the academics, or, I assume when you say academics, people like the philosophers we had met before, people who were working actively on these responsible AI systems, or safe AI, or however we’re going to term it, because I think safety encapsulates a lot more than responsible.
[00:05:56] David: Yes, so, I would hope that responsible AI is safe, but that’s sort of arguable. I think that there are a number of hypothetical scenarios being put forward and discussed which I think are not helping the discourse. I come back to the killer robots, which was shocking to me, just how much space was accorded to that as a discussion point. Because, you know, nuclear bombs have been around for an awful long time. Controlling weapons of mass destruction, whatever they may be, is really important, and AI really shouldn’t be any different, if it’s put into weapons in different ways, from any other serious weapons.
That’s a discussion, I would argue, that is really the same as for any other arms. I’m not saying this isn’t important, but the difference between a nuclear arsenal and killer robots is sort of… The only reason you’d really be worried about the killer robot scenario is because you’re worried that it goes out of control. There’s no difficulty in designing systems where that doesn’t happen, or putting regulations in place for that not to happen in some systems. This isn’t the thing that we need to be worrying about right now.
[00:07:25] Lily: But I guess… Sorry, no, no…
[00:07:27] David: Go on.
[00:07:28] Lily: No, I was going to say, well, killer robots have this grab to them, this kind of, okay, let’s talk about that.
[00:07:34] David: Yeah, really nice movies about it. We’re actually entering the Terminator age.
[00:07:39] Lily: Yes.
[00:07:40] David: This is something I know something about, because I’ve watched all the films. That sort of misses the point of what we really can and should be worrying about at this point in time. Right now, I’m really worried about announcements that have been made just recently about the UK government using AI in policing, about AI coming into services. We know what happened, we’ve talked about it before, with the childcare benefit services in the Netherlands. Government services taking up AI, that is the real safety issue, because this is going to have consequences for discrimination, consequences for minorities, consequences for safety in all sorts of ways. The things that we’ve built as societies to put safeguards in place for societies, these are at risk. We are risking these by using AI irresponsibly.
[00:08:40] Lily: And these risks, we’ve previously broken things down to those three components of the data, the algorithm, and then how you kind of interpret the output? Is that the right way to say it?
[00:08:51] David: No, when we broke it up before, in the actual demystifying of AI…
[00:08:54] Lily: Yeah.
[00:08:55] David: About data, algorithm, learning. Then of course there are the layers of interpretation on top, there’s how you actually manage this, there are the human elements around it, and all the rest of it. So it’s not just those three things, but in terms of actually demystifying an AI process, that’s all you really need to worry about, and there are things you can do there.
[00:09:18] Lily: These risks to government or these risks of using these things in governmental services, are you seeing them occurring in the data phase of what’s being read in, or the learning phase? Or the algorithm?
[00:09:31] David: Everywhere.
[00:09:32] Lily: Okay.
[00:09:33] David: Honestly, I know enough to know that… if somebody tells you they know how to do this safely, they’re not thinking hard enough. Actually, it depends on the problem. There are ways… I’ll give you one of my favourite examples.
[00:09:56] Lily: Ok.
[00:09:57] David: A chatbot. Now, we had discussions related to one of our projects, Parenting for Lifelong Health, on how you would actually do a responsible AI-driven chatbot. And the key point is that if you have the AI interacting directly with the users, I don’t see how we could possibly make that responsible. I don’t see how we could control the learning to make sure that it is a responsible thing. However, I do see how we could use AI to generate a responsible chatbot.
And this is really important, that you could actually predefine it. And the point is, if you think about that responsible chatbot that you could build in that way, it’s actually more cost-effective, it can be delivered offline so it can reach more people, it can be delivered at scale, it’s cheaper to get out to people, and it’s so much better. So the point is, once you actually dig into what it is you’re wanting to do, and you think through how to do this responsibly, you get cost savings, you get better accuracy, you get all sorts of things.
And so, using AI in the right way to develop something, you can get similar outcomes to what you would get… not the same outcomes, but similar outcomes to what you would get with a fully AI-integrated system. You can use AI so that people can use natural language to interact with it, and the AI tries to interpret that language into a large list of things which it understands.
[00:11:34] Lily: Sure
[00:11:34] David: So there are lots of different ways to actually do this. And that could mean that, over time, you could adapt it to minority use cases in ways which would be ethical and impactful. So there are really concrete ways in which, if you tweak how you use AI… And we could, on another occasion, dig into that one example.
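As a minimal sketch of the kind of design David describes, the following hypothetical Python example assumes every user-facing reply is predefined and vetted in advance, and only the choice of which vetted reply to send is automated. Here a plain keyword matcher stands in for whatever offline AI step might build or expand the mapping from natural language to the fixed list of topics; the parenting topics and responses are purely illustrative, not taken from Parenting for Lifelong Health.

```python
# Hypothetical sketch of a "responsible" chatbot: all replies are predefined
# and human-vetted; the system only selects WHICH vetted reply to send, so
# nothing the user sees is generated live by a model.

# Vetted, predefined content (illustrative examples only).
RESPONSES = {
    "bedtime": "Try keeping a consistent bedtime routine: same time, same steps, every night.",
    "tantrum": "Stay calm, name the feeling, and wait until your child settles before problem-solving.",
    "praise": "Catch your child being good and praise the specific behaviour straight away.",
}

# Keyword lists per topic. In a fuller system, an offline AI step could help
# build or expand these mappings, but the user-facing replies stay fixed.
KEYWORDS = {
    "bedtime": ["sleep", "bedtime", "night", "tired"],
    "tantrum": ["tantrum", "screaming", "angry", "meltdown"],
    "praise": ["praise", "reward", "well done"],
}

FALLBACK = "Sorry, I don't have advice on that yet. Could you ask about sleep, tantrums, or praise?"


def reply(message: str) -> str:
    """Return the vetted response whose keywords best match the message."""
    text = message.lower()
    scores = {
        topic: sum(word in text for word in words)
        for topic, words in KEYWORDS.items()
    }
    best_topic, best_score = max(scores.items(), key=lambda item: item[1])
    return RESPONSES[best_topic] if best_score > 0 else FALLBACK


if __name__ == "__main__":
    print(reply("My toddler keeps screaming at the supermarket"))  # tantrum advice
    print(reply("How do I get him to sleep at night?"))            # bedtime advice
```

Because every possible reply is fixed in advance, this kind of chatbot can be reviewed, translated, and delivered offline, which is the trade-off David points to: less flexible than a generative system, but auditable and controllable.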
[00:11:58] Lily: I was gonna say, I feel like I’ve taken us a little bit off track here. But…
[00:12:01] David: Yeah, and digging into that example would be really interesting. But the key point is, if you start from the premise of not just jumping in and using it to do what you thought you were going to do, but stepping back and saying, okay, what can I do responsibly? Then often you end up doing something slightly different. And that’s the key point. There are many things where, if you just say, you know, we need an AI chatbot or an AI solution to do this, then you’re possibly going to be going down the wrong path. The main difficulty is the definition of the problem.
Actually, how to define problems responsibly. And this is hard, but it is feasible. Within the current technology scenarios, within what’s possible now, defining the problems which can be tackled responsibly, that to me is the bit where the work is needed.
We cannot solve all problems. So we understand there are technologies and advances that will happen in the future which will enable us to do more. But right now, with AI technology as it is, I think the limiting factor is defining the right questions to answer. Defining the right problems.
[00:13:28] Lily: Well, you’re saying responsibly here, but actually the summit was on AI safety.
[00:13:34] David: Yes.
[00:13:34] Lily: Which encapsulates responsibly, I assume.
[00:13:38] David: I would hope that if you are responsible, then you are safe.
[00:13:41] Lily: Sure.
[00:13:42] David: I think there are safety concerns beyond just being responsible. I honestly feel that part of the difficulty with the safety discussion for me, and I understand it from the perspective of the summit, is that this was sort of framed as a human existential threat, as a way to think about it in terms of safety.
But that distracts from the fact that, well, yes, we do need to deal with the existential threat bits, but the whole discussion around consciousness and AI and those sorts of problems, and the thinking about AI taking over the world, as we have in science fiction and as is imaginable, as has been depicted very nicely in a number of movies and books and articles.
I think that’s something I’m… less concerned about. Partly because we don’t understand consciousness in the first place. The research on consciousness, I actually looked into this fairly recently because of AI, to say, well, actually, you know… if AI is potentially becoming conscious, what do we mean? What do we know about consciousness?
And there are big debates going on within the research on consciousness at the moment, which broadly say to me that we still don’t know what consciousness is. And there are definitions of consciousness which maybe we could then get AI systems to satisfy. But would they be conscious in a way that we still feel is correct? And there are really big debates about consciousness with different life forms and different animals, in different ways. What consciousness means is a really difficult question. So, I’m not personally worried about that question. I think whether AI is deemed conscious in the next five or ten years will depend on the definition of consciousness.
[00:15:44] Lily: So…
[00:15:44] David: There could be definitions where it gets to consciousness or not, but that’s not what I’m worried about.
[00:15:50] Lily: Well, I guess then, to avoid going over time, and there are so many things we could discuss here, I guess you’re saying that a lot of the time was spent on these discussions of killer robots and consciousness. Is this how you envisaged this sort of meeting would go?
[00:16:03] David: It didn’t surprise me, it did disappoint me.
[00:16:06] Lily: Ok.
[00:16:07] David: You know, one way of describing this summit, sort of a dream, is that this becomes like the COPs, the climate summits. And the point is that, well, the climate summits are a bit disappointing as well, because they haven’t led to the right actions. I really hope it doesn’t follow the same sort of route, but it does look like it is trying to follow the same model. There’s a lot of talking going on about things which are sensible to talk about. I’m not saying the COPs aren’t useful, they are useful. There are agreements that come out of them in different ways. It takes time.
There is a question, just like with the climate situation, the climate crisis, where some people, and I have to confess I’m one of them, would like things to move a little faster than they seem to be moving, and would like these sorts of discussions to actually be moving on further. And I feel the same is true with these summits: there are really important things that should be done really fast for safety around AI that aren’t going to happen fast enough because of the distractions caused by these other discussions.
The regulation to make sure that companies and governments don’t cause harm, or rather don’t cause too much harm, because you can’t achieve not causing harm, it’s going to happen. We’re already too far along for that. I don’t believe we as a society can move into the next phase of our AI journey without harm being caused. So it’s really about mitigation at this point. And I would argue the parallel with climate is quite good. You know, we’ve reached a point where the harm due to climate change is, I believe, irrefutable, though others might disagree. It’s the same sort of situation, where harm is unavoidable at this point.
Mitigating that harm, minimizing it, that’s what we should be focusing on. And it is frustrating that the voices that were trying to bring up the key issues, which are going to be the main sources of harm in the near future, were not listened to enough. And that’s frustrating, but it’s expected. This is not a surprise.
[00:18:40] Lily: Yep, that’s very interesting. Well, thank you very much for your thoughts on, on the summit. Do you have any final things to say? Although we are very short on time.
[00:18:48] David: I will finish this by just saying that despite the disappointments, I think it’s a good thing that it happened. I think it’s a good thing that there was an agreement that another such summit will happen, that this is going to carry on. Despite my reluctance and hesitations around it, I think these discussions have to be happening. I would hope that in future events there is more space and time put to the more immediate dangers, the real dangers, because they are life threatening as well.
We know that people have lost their lives as a result of AI systems working badly. That’s a fact. And it’s not just the case that we talked about. There’s the automated car, the first woman who was killed by an automated car. The ethical implications of that were huge. So AI systems are going to cause harm.
Getting regulations around them, understanding how to actually make sure that that harm is, I don’t want to say minimized, because I feel that’s unachievable, but at least bounded. And I think this is an important thing. This is where we’re at at the moment. We have an opportunity to bound the harm that such systems can cause.
We might fail to bound it, but I think that… I think we can. Right now, the regulations that are being proposed are really rather sensible. There’s a question about whether they would stifle innovation. And that is a concern and a worry, but not a big one. The regulations we’re talking about…
What does it mean to stifle innovation? Let’s just take the example from the Netherlands with the childcare benefit system. You know, stifling that innovation might have meant that instead of getting it out when they got it out, it took an extra year or two to do so, as they jumped through some hoops to go through the regulatory processes. And in doing so, they improved the system and therefore it worked better.
Well, stifling innovation in that way actually sounds positive. Now, that’s the government side. What about the tech side? Well, maybe on the tech side this is a bit harder, because the competition between big tech and the start-up culture which is growing around these AI tools is moving so fast that stifling innovation there could kill it off. And then that just means that your competitors, who aren’t going through the regulatory processes, or who take shortcuts, benefit at your expense.
That’s an interesting and difficult balance. But I do believe that, despite the speed, despite the approach, there are opportunities around collaboration, which good regulation might actually enforce. One of the things that the regulatory frameworks are looking for is transparency.
Actually, transparency in there might incentivise collaboration over competition. Because if you’ve got to be transparent about your processes, then you can’t be so precious about your IP [intellectual property] and what you’re doing. And therefore, you may as well be more collaborative.
[00:22:24] Lily: Yeah.
[00:22:24] David: And therefore, that might lead to things moving just as fast across good actors who are trying to move fast, but in a collaborative way. So I believe, you know, what if big tech weren’t competing with one another on this quite as much, but were more collaborative? What if the regulatory framework essentially provided incentives which meant that collaboration was more profitable than competition in some of this?
And this is the key thing. You know, big tech has, due to its nature, a competitive approach.
[00:22:57] Lily: Yeah.
[00:22:57] David: So if being collaborative is good for their bottom line, they’ll probably adopt that approach. And so it could be that good regulation will help them do better by being more collaborative, and it will help them and it will help society.
And this is why they are willing to be regulated.
[00:23:20] Lily: Yeah.
[00:23:20] David: They’re not saying they need to be regulated because they’re worried about killer robots. They’re saying they need to be regulated because they recognize that actually good regulation could help them.
Good regulation could be good for business. Or good for good businesses. Because they’ve got a lot to lose reputationally if things go wrong. So it would be good for good businesses to have regulation which helps them, and which stops bad actors from taking shortcuts and not doing things properly.
We’re going to have another occasion where we’ll talk about this, but the Amazon example, you know, they must have invested hugely in that recruitment tool. And all that investment got thrown away because they couldn’t get it to work without being sexist.
[00:24:16] Lily: Yeah.
[00:24:16] David: And the bad press that was coming out of it meant that it was better for them to kill that investment than it was for them to pursue it. But other people have carried on pursuing it who didn’t have the reputational risk. And have they done better than Amazon? Probably not, because Amazon had more resources to put behind it to try and solve those problems. But they have less reputational risk for doing it badly. And that’s why good regulation could help businesses who want to build AI that is responsible, which big tech has to do because of the reputational risk.
So regulation done well could help AI to develop in good ways and that’s what I hope. And hopefully these summits will help us move in that direction.
[00:25:08] Lily: Thank you very much. It’s always very interesting talking, and 20 to 30 minutes is never quite enough, but thank you very much and I look forward to continuing our discussions.