
Description
In this episode, data scientist Lily Clements and co-founding director David Stern discuss the AI Summit held in France in February 2025. The newly released declaration, signed by over 60 nations, promotes inclusive, sustainable, and responsible AI practices. They consider how, despite the notable absence of the UK and US, the declaration signals a potential shift towards societal benefit over commercial interest in AI development.
[00:00:00] Lily: Hello, and welcome to the IDEMS podcast. I’m Lily Clements, a Data Scientist, and I’m here with David Stern, a founding director of IDEMS. Hi, David.
[00:00:14] David: Hi Lily, looking forward to another discussion. We’re talking about the AI Summit, aren’t we?
[00:00:20] Lily: Yes, the one that was held in France in February 2025, I mean at time of recording, yesterday.
[00:00:27] David: It’s one which, there’s a news article which has just come out about signatories to this, do they call it a treaty?
[00:00:37] Lily: Declaration?
[00:00:38] David: Yes, it’s a document, and it’s a powerful document.
[00:00:43] Lily: Yeah, ‘inclusive and sustainable’ is, I think, the idea behind the document. And over 60 nations have signed, and they expect more to sign over the coming days.
[00:00:53] David: Yeah.
[00:00:54] Lily: With the exception of two nations that haven’t signed as of yet: the U.K. and the U.S.
[00:01:00] David: Yes. I don’t want to get political about the U.S. at the moment, so let’s put that one aside for now.
[00:01:07] Lily: Sure.
[00:01:08] David: It is interesting, and I didn’t expect the U.K. not to get on board with this. So that is an interesting anomaly that I don’t quite understand. But more important are the 60 nations that have signed this. This is a big, important step forward in terms of what we’ve kept on talking about, which is responsible AI. These documents don’t actually mean anything unless they’re implemented, the laws are implemented around them and so on. But as an intention, this is huge.
[00:01:42] Lily: And it’s hopefully showing that people are, we discussed before, one of our first podcasts actually, was on the AI Summit that was held at Bletchley Park back in 2023. October? November, 2023. And I remember you saying about that, that your thoughts on it were that it was focused too much on killer robots.
[00:02:02] David: Yeah.
[00:02:03] Lily: Or there was a little bit of focus there, we could have discussed a bit more on the more responsible aspects of AI.
[00:02:08] David: Exactly, whereas this summit, I would argue this declaration, I’m not saying they’re getting everything right, but this is powerful stuff. The people are thinking about responsible AI, and I suppose it’s how the world or parts of the world have moved forward over the last couple of years to take seriously some of these things.
And we have on, I think, previous episodes also discussed the sort of EU framework they’re building around AI, and I suppose it’s these philosophers who are really getting their teeth into what ethical AI is and this sort of thing. And I can see the influence of the partners that we’d worked with on that, who had introduced us to that thinking. Whether it’s them directly or the thinking that they were part of, you can see that hand in this declaration. You can see how that deep thought into the ethics of it is really coming out in ways which I think are very important.
[00:03:10] Lily: Absolutely, and well, this is, I guess, the area that I can’t see as much, so I’ll have to take your word on that. But it’s very exciting to hear this kind of opinion and very exciting to hear that it looks like it’s more positive or heading in that kind of direction this time versus a year and a half ago.
[00:03:29] David: Absolutely. And I think one of the things which I’d like to explicitly draw out, and I haven’t been following it fully, is the recognition that AI should be for societal benefit.
[00:03:48] Lily: Yeah.
[00:03:49] David: And the importance this summit placed on that specific element of AI for societal benefit, in some sense, rather than the commercial potential of AI, is really exciting to me. And I’m really impressed, you know, places like China are amongst the signatories. We promised listeners an episode on DeepSeek at some point soon, and that will come.
This took precedence, but, there are really exciting innovations coming out of multiple places. I would argue that you feel the EU’s hand on this summit very strongly. And what we know about the EU work, which is going on in the legal frameworks around what we call responsible AI or ethical AI. We know that they are very strong and they are thinking about this well, and you can feel that influence.
But it’s so exciting to me that places like China are on board with this, that they’re actually stepping in, and the innovations coming out of China are a really exciting counterbalance, looking at using AI technologies in a way which I believe is more, I think the key word here is, sustainable. If you think of the amount of money being sucked up by AI in the U.S. and the U.K. particularly, it’s sort of taking the oxygen out of the room from all other businesses.
Whereas if you take the approach that’s coming out of China, it’s not; it’s supporting innovation elsewhere. It’s more balanced with everything else. It’s positioning itself as being leaner and open. These are really big deals, and it’s outcompeting. The shock waves that were sent through the markets because of these innovations, and the potential market shock waves that will come from the purpose set by this summit: these are huge. This is great.
[00:06:04] Lily: So are you hoping that this kind of helps shift the weight a little from the commercialisation of AI and more towards AI for the people?
As we’ve discussed before, and as we will continue to discuss, I’m sure, if we think about AI for social impact, where the primary motive is to have positive impact on society, then those applications of AI may not need the absolute cutting edge. We might not need the extreme race which is happening in certain areas of AI to get to the cutting edge, the best algorithms working in the best ways, because actually the tools that now exist haven’t been fully exploited for social benefit.
[00:07:02] David: And there hasn’t actually been that much money or opportunity pumped into the real socially oriented business models, which lead to sustainable, competitive AI solutions. These are expensive, but they’re not that expensive. If you’re trying to extract huge amounts of profit from it, it’s very different from if you’re trying to build things which, once they exist, are sustainable and really good, world class in what they do.
I believe what we’re seeing is the starting of a different approach to thinking about the sorts of AI solutions that could be competing on the market. And the fact that not all of them need to be about extracting commercial benefit, and some of them can be focused primarily on social impact funded in different ways.
[00:08:06] Lily: That’s a really nice and very positive way to look at it. And I think something else that stands out to me is we talk about responsible AI and ethical AI but, actually, this kind of document, communique, I don’t know French, but…
[00:08:21] David: I don’t know, does it have an accent on the end? I’ve forgotten.
[00:08:24] Lily: I don’t know. It doesn’t. I don’t think.
[00:08:26] David: But this kind of declaration has said that it’s to ensure AI is open, inclusive, transparent, ethical, safe, secure, trustworthy, sustainable…
These are in the framework for the ethical AI that we were discussing, these are the big lines. They’ve taken them across. They’ve put them in. This is years of work leading to this. Congratulations to the teams who have been working behind the scenes to actually create these ideas, to solidify them. Oh it’s really impressive.
[00:09:01] Lily: Yeah, and it’s nice that we’re looking beyond just ethical. I think people will say ethical and it’s just nice to have these other ways to look at it.
[00:09:10] David: Yeah, and they’re hard. I mean, each one of these that they’ve articulated, these are hard to achieve. But because they exist in this sort of communique, or whatever it is, the document, the declaration, they can now be integrated into, let’s say, EU funding streams.
So I would expect, following this, there to be grant mechanisms which come out aimed at implementing these sorts of AI systems, which is transformative, because there needs to be funding to support this sort of innovation, and it will have to be substantial.
But what I believe to be the case is, done right, it doesn’t have to be of the order of magnitude that competes with the commercial sort of investments because it doesn’t need that return on investment. So I think that this is going to be a really exciting landscape shift in how people think about and how they talk about AI.
And I think it’s going to be central to, well, I hope to a lot of innovation, which will come out over the next 10, 15 years, which would be derived from the intention put forward by this document. It’s very exciting.
[00:10:35] Lily: That is, yeah. I guess I wonder, myself, how much the document can do. What does it mean to sign the document?
[00:10:44] David: For some countries, signing the document won’t mean anything.
[00:10:47] Lily: Okay.
[00:10:48] David: But for other countries, signing this document will now influence, and it will give people the ammunition, if you like, to be able to influence funding streams, be they research, be they grants, be they government contracts. We know of some examples of government organisations that have used AI, and there have been scandals around this. But if you were now a government organisation in a signatory country that was looking to use AI in your systems, you may now have to refer to this document for the suppliers. So the suppliers may need to say, in the procurement process, how is the AI that you’re using for this related to the categories in this document?
So there’s all sorts of ways this could come into the procurement process. So this is huge. And it’s the extent of the ambition of the document, which I think is really powerful in that at the moment a lot of those funding streams go to organisations where they feel they’re above having to do some of these things.
Whereas this would now mean, for procurement processes and others, that if you’re not interested in doing these things, you’re not eligible for the contract. It’s that sort of thing which could be happening. So, even if it isn’t grants or research, even in the procurement of AI by governments in the future, in the signatory countries, I would expect this to now be part of the requirements that come up. And organisations that don’t take it seriously will start being eliminated from the procurement process.
[00:12:39] Lily: Yeah, that is excellent, and that would be such a positive thing to happen. And it’s very exciting how things can change from here. I just wonder, I wonder your thoughts on these sort of documents. I know that you like options by context, say.
[00:12:58] David: Yeah.
[00:12:59] Lily: You don’t like blanket rules.
[00:13:02] David: No.
[00:13:03] Lily: I guess these aren’t blanket. Is there a context where you don’t want this document? I guess that’s the question.
[00:13:08] David: And I guess this is why it doesn’t surprise me that the likes of the U.S. is not a signatory.
[00:13:14] Lily: Okay.
[00:13:14] David: Because I think, well, even without the current political situation in the U.S., this document would potentially be putting the big U.S. tech companies in a very tight spot. Because actually these requirements are not easy, and they’re not the priority that the current Silicon Valley tech leaders, I would argue, are pushing.
And so in terms of the lobbying power that part of the country has, it would potentially curtail their vision for innovation. Now, some might argue that’s a good thing. The amounts of money that they’re sucking up, and this has been documented in a number of different ways, in terms of looking at money flows on different stock markets and so on, show that there has been a huge accumulation of investment into these tech approaches. There are multiple of them, but they’re U.S. based and they’re based on particular visions of technology development.
And what I think is quite interesting here is that this document, as I understand it is a potential threat to that vision of AI development, and it’s a potential threat, partly because it now gives power to another vision of AI development, which could undermine the, I don’t want to say burst the bubble, because that’s a very overused term.
But if big progress was made on open source models, it’s been shown that open source approaches can commercially outcompete. The problem is the business models to prioritise them, which now maybe exist because of this legislation in other countries.
And so this could have real implications, not necessarily for creating competitors to those industries in terms of tech powerhouses, but in terms of finding alternatives to the technology they’re building. That could create this balance of power, this options by context.
I like the fact that, if I understand it correctly, what this document is doing is broadly creating an environment change for tech development, which might mean that other forms of tech development have a chance; it’s creating a different environment. We know options by context is all about the context. And so this is potentially creating fundamentally distinct contexts, where you could actually get different forms of innovation with different sources of financing and so on.
Is the amount of money going to be pumped into that the same? No, it’s not. But it might not need to be the same. There’s real inefficiency in some of the approaches driven by the premise that commercial extraction through technological innovation is going to be bigger and bigger all the time.
[00:16:52] Lily: That’s incredibly exciting. I mean, of course you always phrase things in a way that makes it sound exciting, these kinds of opportunities that are coming.
[00:17:01] David: I’m genuinely surprised. I did not expect this from the summit. I don’t know why. I was aware of the summit and the run up to it and it was happening, but my expectation was low.
[00:17:13] Lily: Sure.
[00:17:13] David: And there’s been lots of other pieces of paper on climate and other things where the paper exists and you don’t get the consequences. And this could just be another one of those. But I don’t think so, I think there’s something very specific about this document, which can create opportunity in a way that will lead to innovation that I’m excited by. So I’m excited by this. I think it will matter. I think it’s something to take note of.
[00:17:57] Lily: Great. And so the hope is that this kind of leads to innovation, which is not necessarily about that kind of more commercial benefits.
[00:18:04] David: Yeah, and it can genuinely, I think, also mean that the commercial approaches, the purely commercial approaches, are held to account, and they’re actually forced to innovate and compete better because of this. We all know that forms of monopoly aren’t actually particularly effective or efficient or good innovators.
And so I think this is going to create that competition in certain ways. Those commercial routes may still win out in terms of the finances, the use, the global impact, but my expectation is that they will be better, more ethical, and more responsible as a consequence of the innovation that comes out of this.
[00:19:00] Lily: Wonderful. It sounds incredibly exciting. Let’s keep our eyes open on this over the…
[00:19:04] David: Next few years. Yes.
[00:19:07] Lily: Thank you very much.
[00:19:08] David: Thank you.