Description
Lily and David discuss consciousness in relation to AI. Whilst there is a lot of popular debate in this area, can we really have a meaningful discussion without a clear definition of consciousness? They consider advancements in AI, ethical dilemmas surrounding sentient machines, and the potential future of human-AI relationships.
[00:00:00] Lily: Hello and welcome to the IDEMS Responsible AI Podcast, our special series of the IDEMS podcast. I’m Lily Clements, an Impact Activation Fellow, and I’m here with David Stern, a founding director of IDEMS. Hi, David.
[00:00:18] David: Hi Lily, I’m really looking forward to a discussion today. What’s it on?
[00:00:23] Lily: Consciousness and AI.
[00:00:25] David: Oh yes, okay, this is a nice topic.
[00:00:28] Lily: Well, I feel that this topic already gets too much attention in the media and in conversations, but I still feel it’s one that has come up in previous discussions, so it’s one that we should dig into.
[00:00:43] David: It’s a very attractive subject in all sorts of different ways. I can understand why in science fiction this has become, or has been for a long time, a real nugget to aim for. But in real life, the real world, the main reason this is a pretty pointless topic is that there’s no global definition of consciousness that everybody agrees with, that everybody knows, and that could actually help determine exactly what it means to be conscious. And so, given you don’t actually know what consciousness is, discussing consciousness and AI is kind of pointless. Because it all depends on your definition.
You can have definitions of consciousness where many animals might satisfy them as well. So getting a definition of consciousness where AI satisfies it isn’t that difficult and therefore you could then say AI is now conscious.
[00:01:47] Lily: Sure, well then I suppose we need different words. If we’ve all got a different understanding, or if we’ve all got a different definition of consciousness.
[00:01:57] David: It’s not that we’ve all got a different definition or a different understanding, it’s that it is just ill-defined. Nobody knows what it is. I mean, there’s research on this, there’s great research on this, and there have been some fantastic ideas and thinking around this. And I wouldn’t do justice to that without doing quite a lot of preparation. It might be worth having another podcast in six months’ time, once we’ve gone through a literature list and are able to quote the right things in the right way.
But there is fantastic literature out there on consciousness and what this means in different ways from a scientific perspective. But there is, to my knowledge, no universally accepted definition of consciousness. And therefore, it is meaningless to discuss whether AI could ever become conscious or not.
Because there’s no way of knowing; you could just change the definition if ever AI jumps past that particular bar. And what would it mean to have AI thinking as we think of thinking? We can certainly have AI replicating a lot of what we imagine and associate with human intelligence. That is absolutely clear already, and in many ways it can do some of those things much better than we can imagine.
I love this aspect and we’ve discussed this before when it comes back to creativity. You know, creative tasks, that’s something which is thought of as being related to our consciousness, but actually a lot of those creative tasks are just about taking what we’ve actually experienced before and putting things together in a different way.
That’s exactly what AI can do really well: taking data it has from the past and putting things together in a different way. Yeah, AI can do that, not a problem. So things that we would consider highly creative and original thinking are not necessarily good indicators of consciousness with respect to AI.
[00:04:07] Lily: But then AI can’t… can’t? I don’t know, but well, no, it can’t. It can’t have opinions.
[00:04:16] David: What do you mean by that?
[00:04:17] Lily: Well, yeah.
[00:04:18] David: AI can express opinions easily. You know, ask ChatGPT for an opinion and it’ll give you an opinion, almost certainly.
[00:04:24] Lily: No. ChatGPT will say I can’t have opinions.
[00:04:27] David: Oh, ChatGPT. Okay, well done, ChatGPT, that’s a responsible choice. Yeah, ChatGPT has been programmed so that it can’t have opinions.
[00:04:38] Lily: But I’m sure it can still have biases in what it says.
[00:04:42] David: Absolutely.
[00:04:43] Lily: That’s just changing the question of what you ask ChatGPT.
[00:04:46] David: Yes, and this is the thing: if you programmed it differently, having an AI mechanism which voiced opinions would be really easy. Expressing opinions is not a difficult thing to ask an AI system to do. And I really appreciate the fact that they’ve been careful and decided not to do that. As you can tell, I’ve never bothered to ask ChatGPT for its opinion.
[00:05:10] Lily: I’m sure it’s how you phrase it. I can ask ChatGPT for pros of this and cons of that and it will give it. And to some extent, maybe that would be an opinion, depending what I ask it, if it’s not something…
[00:05:20] David: Well, as you say, it’s not voicing an opinion, expressing an opinion, but that is a choice of how it’s coded up.
[00:05:26] Lily: Yeah.
[00:05:27] David: Whereas, I think the key point there is that, what do we actually mean by consciousness? What would it mean to be getting towards consciousness? And again it comes back to actually thinking about how have people been trying to think about consciousness in the past? And a lot of this has been around animals in different ways. In certain ways it’s this question of, well, what levels of intelligence and what type of thinking do they have? What emotions do they have?
And so where you set the bar on that for different levels of consciousness in different ways is very interesting and really, really fascinating work. And there’s been some amazing work around that. But it’s totally different for artificial intelligence. You’re not starting in the same place. So in some sense, the work on consciousness related to distinguishing between human intelligence and artificial intelligence maybe needs a whole different way of thinking about measures: how you identify it and how you differentiate it.
And that work, I would argue, is very much in its infancy. And that’s absolutely fine and proper given the rate at which artificial intelligence is exploding. There’s no problem with that being in its infancy. But it does mean that these sorts of narratives often capture headlines. You know, any headline I see with artificial intelligence and consciousness together, in any shape or form, just makes me laugh. Because, of course, if you don’t have a concrete scientific definition that you’re aiming for, it is meaningless to talk about this.
[00:07:12] Lily: Well, that’s generally rule number one when you’re answering a question: okay, let’s define this.
[00:07:17] David: Rule number one to who? To a mathematician, certainly.
[00:07:21] Lily: So I was wondering if that was to a mathematician or not, which is why I left it open.
[00:07:26] David: This is the thing, and many other people, many other domains, they are used to dealing with questions which have ambiguity and which are ill defined, and they’re happy to talk about them and talk around them and have debates and discussions around that, whereas for a mathematician, that’s totally alien.
Actually, I believe that in this particular case it’s really useful being a mathematician. I can’t have an opinion on artificial intelligence and consciousness unless you tell me what consciousness actually is. You know?
[00:07:56] Lily: Yes.
[00:07:56] David: I know quite a lot about artificial intelligence, so I could do quite a lot, but I don’t know what consciousness is. So how am I supposed to have an opinion? How can I actually know? It’s a concept where, unless it’s clearly defined, what I’m going to say is going to be meaningless. I would argue this is a real power of mathematicians, our superpower in many ways: in mathematics, in general, you can’t talk about things unless you know what they mean. It would be meaningless to have a debate or a discussion about something when you don’t know what it means.
So, you have mathematicians who talk to each other all the time about different areas, and you can have the most senior mathematician in the world talking to a relatively junior mathematician, and they’ll say, well, sorry, I don’t understand that, can you give me the definition? You know, your senior mathematician is absolutely happy saying, I’m sorry, I don’t understand that, can you give me a definition so I can understand what you mean, so I can know if I have an opinion or not? Because if I don’t know exactly what you’re meaning, I can’t know what my opinion is. And this, in the context of something like AI and consciousness, is such a healthy attitude to have. Because in absence of that, it is meaningless.
Because without understanding what consciousness is, your perspective on consciousness versus my perspective could be totally different. So I say, ha ha, we’ve got a machine learning algorithm which has now demonstrated signs of consciousness. And you say, wow, that’s incredible. But actually, what you understand by that is totally different from what was actually developed, so you might interpret it as meaning something totally different.
[00:09:52] Lily: I would, if someone told me that, well, anyway.
[00:09:55] David: You can’t know that you would, because that’s the whole point.
[00:09:59] Lily: Yes, yes.
[00:10:00] David: You might, here’s the key point, you might. There’s no way of knowing that you wouldn’t, because unless you have that clear definition, then you can’t know: what do those signs of consciousness actually mean? Would you consider those as signs of consciousness? Do they correspond to what you understand as consciousness? That’s where these things escalate very, very quickly: you have people having a discussion, and they say it’s really exciting, the algorithm’s showing signs of consciousness.
Which means, for example, that it’s able to be creative as we described before: taking different things which have happened in the past and putting them together in new ways which haven’t necessarily been seen before. And yes, I’d expect an AI to be able to do that. And I would absolutely accept that that sort of creativity could be a sensible indicator of consciousness.
Because this is a higher order skill in certain ways. So there you have creativity as a sort of indicator of consciousness, and your AI demonstrating its ability to be creative and to learn over time or to adapt its creativity and develop new things. All of those are things I’d expect the AI to be able to do and all of those are sensible indicators of some form of consciousness.
So there’s no problem imagining AI systems which are able to show signs of consciousness, depending on your definitions. But if you take those signs and reinterpret them as being something else, well, that’s a whole different matter. I don’t have answers to this, of course, but I do, as we discussed at the beginning, think in many ways these debates and discussions are a bit of a waste of time.
If you want to think about AI being conscious, you know, Number Five’s alive. You remember Short Circuit, a very old film about a robot that got struck by lightning and gained consciousness, in a form which most people would associate with consciousness. That’s what’s in most people’s minds when they think of AIs gaining consciousness.
[00:12:27] Lily: Interesting. Well, in my mind, I don’t know.
[00:12:29] David: Have you seen Short Circuit? Or are you too young? You’re probably too young.
[00:12:32] Lily: No, no, I haven’t. So, let’s…
[00:12:34] David: Oh! Oh, well, there you go. In science fiction, these are old, old concepts; we’ve been dealing with them for years in many different ways, and there were some really good debates. And if you think about when Short Circuit was made and the expectations at that time, it just shows you how slow we’ve been. People think of AI as being fast. But the whole point is actually we thought we’d get there much faster than we did.
You know, where we’ve got now has been a huge amount of work and a lot of progress, which has been made over a long period of time with consistent progress in the background happening all the time to get to where we are now. But actually, there have been times in the past where the expectation was we would get there much faster than we have.
And I think that’s important to bear in mind. I’m not trying to advertise movies, but if you do want to think about consciousness and AI, Short Circuit is a nice movie to watch. And there were a lot of these interesting debates. I can’t do better than they did in the movie, in some ways, presenting these in a popular fashion, in ways where it can, you know, touch the heartstrings while you actually consider what it means to be conscious.
[00:13:54] Lily: It’s interesting that those ideas or those discussions from, I’m going to assume, the 80s?
[00:14:00] David: I’m not giving anything away! I’m as young as a spring chicken!
[00:14:08] Lily: Well, it’s interesting that those discussions and those kind of films are still relevant.
[00:14:13] David: Absolutely. It’s not just that they’re relevant now; it’s that they weren’t really relevant then, and they are finally relevant now.
[00:14:20] Lily: We’ve made it.
[00:14:23] David: We’ve made it to the point where now we can and should be having that discussion. And actually, what was considered plausible then could become, in some sense, a reality now. But it still doesn’t answer the key questions. We’re not going to answer that question because we don’t know how to formulate it. It’s not a problem on the AI side; that side is really well understood now, it’s really progressed. The consciousness side has seen much slower progress. Maybe AI could help us to understand, define, and get to the heart and the root of what it means to be conscious.
And that’s an exciting thought in all sorts of different ways, because it might be that what we find, as we keep delving into developing AI further and further, is that there’s so much that we associate with consciousness which we can replicate without achieving consciousness. And so most of the indicators of consciousness that people have thought of may not actually be related to consciousness, but there may be some which emerge as being fundamental, and which might end up being simpler than we ever imagined. I don’t know. I have no idea. I’m not a consciousness expert, but I’ve read enough to know that there is good research happening in this direction.
[00:16:11] Lily: No, that’s enough to know, I think, that there is research going on in this area. But for me, a question I have is this: we’ve discussed in previous podcasts where the use of AI has gone wrong, and it’s when we haven’t had that human at the other end to interpret it.
[00:16:33] David: Or the way the humans have interpreted it has been different from what it’s actually doing. And so they have misunderstood what the algorithm is able to tell them and how to interpret it. You’re right.
[00:16:48] Lily: Sure. So for me, again, it comes back to what consciousness means, I suppose. It comes back to that definition, but we’re always going to need that human at the other end.
[00:17:01] David: I believe, and this is one of the ideas that are essential now in thinking about AI development responsibly, that humans in the loop, and their role, really matter, and thinking that through is really important. And don’t get me wrong on this: I don’t believe that humans are going to be necessarily morally better than AI. I think in many ways, AI can and should help us to be morally better. Actually, we’ve discussed this example before, but let me dig into it a bit here with respect to what you’ve just said, and this idea of conscious choice.
Cast your mind back to when COVID hit and the government decided to use a machine learning algorithm to grade students.
[00:17:53] Lily: This is in the UK: when COVID hit, students couldn’t sit their exams. I’m sure that was the case pretty much worldwide, but in the UK this was A levels and GCSEs, and so initially a machine learning algorithm was used to assign grades instead.
[00:18:12] David: And then there was a big public uproar, and for the right reasons this was thrown out and teacher grades were used instead.
[00:18:20] Lily: Yeah.
[00:18:21] David: So imagine instead of trying to do something which I think, and we’ve discussed this elsewhere, was fundamentally unethical. You cannot and should not have a machine learning algorithm assigning grades to students because, for all sorts of different reasons, that’s not an effective approach.
The responsible thing to do might have been to reconsider the task itself. Instead of having the AI determine the grade, have the teacher’s grade be informed by AI: the AI helps the teachers provide the grade. And where a teacher’s grade differs significantly from the AI’s suggestion, there is an expectation for the teacher to provide a justification, to actually dig in and play that role, to make those conscious choices as to why their judgement of the student significantly differs from the prediction that the AI algorithm is making.
Well, I think of that as being something where there’s a conscious choice being made there. Now, I don’t believe that teacher choices are necessarily better because there were all sorts of biases that could come in. Imagine you have a teacher who happened to have a good relationship or a bad relationship with a particular student and those biases would then come out in the grading, it makes teachers very powerful in different ways.
But now you put in place this AI system which structures the process and reduces the ability of the teacher to deviate significantly in that way. And where your teachers do deviate significantly from what is predicted, one way or another, those could be the special cases that an inspector follows up on, making an independent choice: another layer where they could go in, re-evaluate, and see whether they agree with the teacher’s assessment.
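The review workflow described here can be sketched in a few lines. This is a hypothetical illustration, not how the actual UK system worked: the grade-point mapping, the threshold, and all names are assumptions for the sake of the example.

```python
# Hypothetical sketch: teachers grade as usual, an AI prediction runs
# alongside, and large disagreements are flagged for justification
# and possible inspector review. Points and threshold are illustrative.

GRADE_POINTS = {"A*": 6, "A": 5, "B": 4, "C": 3, "D": 2, "E": 1, "U": 0}

def review_grades(records, threshold=2):
    """Flag students whose teacher grade differs from the AI
    prediction by `threshold` or more grade points."""
    flagged = []
    for student, teacher_grade, ai_grade in records:
        gap = abs(GRADE_POINTS[teacher_grade] - GRADE_POINTS[ai_grade])
        if gap >= threshold:
            # Teacher must justify; an inspector may re-evaluate.
            flagged.append((student, teacher_grade, ai_grade, gap))
    return flagged

cases = review_grades([
    ("S1", "A", "B"),   # small gap: teacher grade stands
    ("S2", "A*", "D"),  # large gap: flagged for justification
])
```

The point of the design is that the AI never assigns the grade; it only determines which human decisions get a second look.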
That element of enabling teachers, enabling that human decision, those conscious decisions: what I’m saying is that this could actually help remove some of the biases that may exist with teachers. My experience with teachers in the UK and elsewhere is that they are, in general, incredibly responsible and make good decisions.
But there are always these extreme cases and you see them in the news. And this is exactly about building systems which support your good, responsible teachers and may, if designed well, make it harder for your irresponsible teachers to actually be irresponsible, to single out and bully, as happens. I’m afraid it’s true, these things do happen. So actually being able to allow teachers to be more powerful without giving them that much more power could be really beneficial.
So to come back to the key point, and this is, I think, where we started in some sense. Well, I suppose before I finish with that key point: we’re going to have to close up soon, so are there any thoughts you want to bring in, and then I’ll summarise?
[00:22:16] Lily: No, no, I think that that’s a really interesting point there. So summarise away, please.
[00:22:21] David: The key point I want to make is simply that if you’re thinking about consciousness and AI, we shouldn’t really be worried about this. If we build the right systems, it doesn’t matter, it’s not in competition with human consciousness.
It should be in collaboration with human consciousness. And so it doesn’t really matter whether there are definitions or metrics of consciousness which AI satisfies. What really matters is how we build systems which enable us as humans to do more, to be better, rather than focusing on replacing our decision making with an AI system, which may or may not have a similar form of consciousness to our own.
[00:23:17] Lily: Very interesting. Thank you very much, David.

