049 – Artificial Words: The Unseen Bias of AI in Everyday Language

The IDEMS Podcast

Description

This episode delves into the nuances of language discrimination, specifically focusing on the impact of language models like ChatGPT on linguistic perceptions and the inadvertent biases they might reinforce. Hosts Lily Clements and David Stern discuss a controversy that arose in Nigeria over the use of the word “delve,” which some attributed to AI-generated content, highlighting broader issues of language discrimination and the unintended consequences of AI tools in communication.

[00:00:00] Lily: Hello, and welcome to the IDEMS podcast. I’m Lily Clements, a data scientist, and I’m here today with David Stern, a founding director of IDEMS.

Hi, David.

[00:00:17] David: Hi, Lily. What are we going to delve into today? I already know the answer to that.

[00:00:22] Lily: You clearly already know the answer to that, because of the use of the word delve.

[00:00:26] David: Exactly. I’m sorry, I’m not ChatGPT, I’m not a…

[00:00:32] Lily: Maybe you are, maybe you’ve got it in front of you and you’re already planning out what to say to me based on, I don’t know, certain discussions. But no, yes, that’s today. So there was an article that came out a couple of weeks ago, which at this stage was in early April 2024, where it was talking about language. I don’t think it was an article, I think it was a paper that they saw…

[00:00:55] David: They received an email or something.

[00:00:57] Lily: That was it. That was it. They received an email, and they said that they will automatically ignore an email that contains certain terms, such as the word delve, because to them that says that this person’s used ChatGPT.

[00:01:11] David: Yeah. But this then caused this huge uproar in Nigeria in particular, which was what was so fun about this, because they were saying, well, how linguistically impaired are you, that you think delve is a ChatGPT term? We use this all the time. Here’s a WhatsApp screenshot of where I used it in my conversations with my friends just recently, here and here and here.

[00:01:33] Lily: Absolutely, absolutely. And they kind of flagged up all these other words they use all the time that, again, you know, people now in certain circles, I guess, are assuming are ChatGPT terms, which could just result in some, well, biases.

[00:01:51] David: And discrimination.

[00:01:53] Lily: And discrimination.

[00:01:54] David: Let’s be clear because of course the thing which is powerful about that is that when you actually point it out as discrimination, there are laws which protect you against that. This is what is so central. This is why thinking responsibly about AI and how people use it to interpret things related to it is a really difficult issue.

I’m not suggesting this, I’m not litigious in any shape or form, but I would argue, no, I don’t argue, I have colleagues who have argued to me that the human rights legislation already covers people’s right not to be discriminated against. And therefore, for people who are using AI, or interpreting things as being AI, in a way which does cause discrimination, people’s rights not to be subjected to that are already covered by their human rights.

[00:02:50] Lily: Absolutely. Well, that’s a completely interesting point that I didn’t arrive at myself. For me, it’s also, and there’s so much to say about it, but this kind of more immediate point of people looking at different articles coming out, papers, or even at, you know, school level, university level, trying to work out, okay, has someone cheated on this assignment, and now certain languages, or certain individuals, are being discriminated against.

[00:03:23] David: Absolutely. And we have actually mentioned this in other cases, I think there was an episode where we touched upon this a bit. But the importance of actually not trying to fight it, this is what people are doing, they’re trying to say, oh, somebody used AI, therefore they were cheating or they were of less value or whatever.

And that battle is a battle you’re going to lose, in my opinion. For two reasons. The first reason is that that is going to be discriminatory against certain groups whose language, for whatever reasons, could get confused with AI. And that has been known since, you know, we’ve been discussing that as long as we’ve been doing these episodes.

And we are not the first people to recognize this. This is well established and well recognized. But more importantly, looking to the future, actually, people using AI well to be able to communicate better shouldn’t be discriminated against either. This is a tool. This is like saying you should be typing everything on a typewriter rather than using a computer, because using a computer is cheating.

And there was a period at which, you know, good books were written on typewriters. Because, you know, with computers you can go and delete things, whereas when you use a typewriter, any mistake is difficult to get rid of. There are reasons why, at those transition points, the new technology was seen as cheating compared to the old technology. This is an old tale.

[00:05:02] Lily: Yeah.

[00:05:03] David: And there’s a number of disadvantaged groups that would benefit from AI to help in their communication, and that’s nothing but a good thing.

[00:05:13] Lily: That’s kind of the second part of this, for me anyway. I’ve just found the original tweet that they said, and it’s “Someone sent me a cold email proposing a novel project. Then I noticed it used the word delve. My point here is not that I dislike delve, though I do, but it’s a sign that the text was written by ChatGPT.” And to me, it’s like, well, so they used ChatGPT to help them, so what?

[00:05:35] David: Well, there’s two things which are fundamentally wrong with that. The first is, so, let’s say they did use ChatGPT to help them. Well, if that’s helped them to express their idea better, so that you can understand it, so what? Great. But the more important one is, and this is the one where the big uproar happened, using delve to say it was written by ChatGPT? That’s insane!

I mean, that just shows what incredible ignorance so many people have about, you know, the implications of this beyond their own narrow worldview. Yes, statistically, ChatGPT might use delve more often than most people in your circles do. But that doesn’t mean that in other circles people don’t use it, and therefore you’re excluding people outside your circle, people outside whatever circle was studied by the research showing that ChatGPT used it more often, you know?
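The statistical failure mode being described here can be sketched in a few lines. The word list and threshold below are invented for illustration only, they are not taken from any real detector, but the core weakness is the same: a detector that flags text purely on the frequency of “AI words” will flag any human writer whose dialect happens to use those words often.

```python
# Toy illustration of why word-frequency "AI detection" discriminates.
# The flag list and threshold are invented for this sketch; real
# detectors are more complex, but share the same underlying weakness.

FLAG_WORDS = {"delve", "crucial", "demystify", "robust", "foster"}
THRESHOLD = 0.02  # flag if more than 2% of tokens are "AI words"

def naive_ai_score(text: str) -> float:
    """Fraction of tokens that appear in the flag list."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in FLAG_WORDS)
    return hits / len(tokens)

def flagged_as_ai(text: str) -> bool:
    return naive_ai_score(text) > THRESHOLD

# A perfectly human sentence, from a dialect where these words are common,
# is flagged anyway -- the detector cannot tell dialect from ChatGPT:
human_text = "Let us delve into this crucial proposal and foster a robust plan."
print(flagged_as_ai(human_text))  # → True
```

The detector has no notion of who wrote the text, only of how often certain words occur, which is exactly why it penalises whole linguistic communities.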

This is so incredible to me, that there isn’t this awareness about responsibility when making these claims. And the uproar I saw on this, I felt, was merited. Because the problem is so much more serious than I think people give it credit for. All this effort and hype which is sort of focusing on these different things, it’s just creating turmoil in a way which is very unhealthy. And this is, I think, the really serious point here.

[00:07:08] Lily: I agree. I agree. I’ve got some articles in front of me of different feedback, I guess, and people saying that “Nigerian English has evolved far above what the British handed over. In many cases, it’s a standalone dialect. Possibly superior to what the colonials buried over”. You know, people…

[00:07:26] David: I saw that one! I love it! Because the point is, actually, there is a richness to the language.

[00:07:35] Lily: Yeah.

[00:07:36] David: Which therefore is, and I know this from having grown up, not in Nigeria, but north of the border in Niger, but visited since I was very young, and having lived across Africa in different places. And there is an appropriation of language which is different, which is rich, which is vibrant.

[00:07:56] Lily: Yes, well I only see it through emails, which fortunately I’ve been receiving these emails since before ChatGPT, so I’ve never quite spotted… It does make you wonder, if I joined this job now, if I would have automatically gone, okay, clearly this group of people are using ChatGPT or a large language model, but no, that’s just, that is just their language. When you speak to them, that’s how they talk. In fact, someone else said on their tweet that Nigerians use specific vocabularies, whereas in America, you write as you speak.

[00:08:28] David: Yes, yeah.

[00:08:29] Lily: It just reinforces that of course there’s different ways that people use language.

[00:08:36] David: And there’s great richness to language, and language evolves and it has evolved over time. There has been a standardization of language, which has happened more recently, and this has been related to globalization. But it’s not a simple trend in a single direction. And it’s not that standardization of language is a good thing. I think one of the things which was pointed out somewhere else in that discussion was the fact that the richness of the language, as in the sort of total vocabulary used in what is the standard English, that being relatively small is seen as a positive thing by some.

Whereas many people who enjoy language, who have grown up reading books, you know, for them, richness of language and the complexities and the subtleties of language, this is something to be valued and enjoyed. And you don’t want a simple functional purpose for that. You actually want a rich, diverse language, which enables you to communicate subtleties.

[00:09:41] Lily: I completely agree. I completely agree. And reading this, other people were chiming in with words that kind of, I guess, send alarm bells in their head. I’m not going to sit here and pretend that I don’t do the same. The word crucial for me, when I see the word crucial, something goes off in my head going, that’s a ChatGPT word.

And I suppose it’s recognising that kind of potential, well, that unconscious bias that could be forming there that actually, this might not be a ChatGPT word, but even if it is, so what? In fact, we got an email recently from a colleague, from Tim, and I don’t know if you noticed, but he signed it off from Tim and ChatGPT.

[00:10:16] David: Yes, I’m well aware of that, we have colleagues who use ChatGPT to help them. Now, I’m dyslexic, I’ve been diagnosed as dyslexic since I was at primary school, and it’s something where I was very grateful, actually, when I moved out of the UK that this was no longer something that was recognised, everyone ignored it. And so I just had to get on with my life. And I’m not saying that it’s bad, and it’s very, very good in the UK and in other places where these issues are recognised, they’re diagnosed early and support is given or offered, but sometimes support isn’t what you need.

Sometimes you just need to get on and deal with them. But something like ChatGPT would have been a game changer for me, you know, growing up. Being able to express myself better, and to understand how to express myself, and to learn to express myself better, would have been fantastic, and tools like this would have helped me. Now, I’m not saying that therefore everybody should use ChatGPT. That’s a whole different question. Responsible AI is hard. I don’t have answers. What I do know is that anyone jumping to conclusions quickly like this hasn’t understood the nature of the technology, the way it’s changing, and what’s happening right now: the moment we’re in with generative AI, and the impact it’s going to have on our ability to communicate, for better and for worse.

And it’s not as simple as just saying, can you identify it or not? You know, there are telltale signs, but those telltale signs might also just be that it’s somebody other than you, or somebody other than the norm within your context. These are contexts which I think are so important for us to understand because by and large, and this is what still really gets to me, is that however powerful these AI systems are and they’re becoming more and more powerful, what they’re still doing is data analysis at its heart.

And part of data analysis is finding trends. Part of finding those trends, and producing them in different ways, it depends on the algorithms you’re putting in, and it depends on the data you’re using. So, at the heart of it, that’s what it’s about.

The richness of the world is much more than either the algorithm or the data. And so, to recognize these complexities, and the limitations and the power of the tools, is so hard. As a society, we don’t know how to do this yet. I don’t claim to know how to do it, and though I do claim to know more than most, I know that I don’t know what to do about these things.

This is a hard problem. And what I’m really concerned about in some sense with this sort of uproar which came, I’m delighted there was an uproar.

[00:13:20] Lily: Yes.

[00:13:20] David: It concerns me that this uproar is not better understood, and that more people don’t understand how important that uproar is and why there are real problems at stake here. Some people do, but fewer than I would like. And the other thing which I think really comes out of this is that if we’re looking at actually building responsible AI, the scale of that challenge, I hope, becomes clear. You know, how do you enable a minority group? And don’t get me wrong, there are a lot of people in Nigeria, this is not a small minority, but when it comes to the internet and ChatGPT and these sorts of things, it is still a small group.

But how do you represent the value, the richness, of minorities within the whole, within globalization? These are hard, hard problems, and we’re not going to get it right even if we do have mechanisms to do things responsibly. I would love to understand how we can do better, but I don’t believe anyone really has those technologies.

And the big money which is going into it is not going into doing this responsibly. I mean that’s the simple truth at the heart of the problem at the moment. So these things are not going to diminish, they are going to increase.

[00:14:49] Lily: But how do we do it responsibly here? Because in my, in my opinion, this is now more about that education that actually some people just have a different way of speaking to you, and that doesn’t necessarily mean ChatGPT.

I guess it also, the other side is what we already spoke about of so what if they are using it? As long as they understand what they’re saying.

[00:15:11] David: But there are issues where it’s not about them understanding what they’re saying. I heard about someone who recently wrote sort of five or six books very quickly.

[00:15:19] Lily: Yes.

[00:15:20] David: So, you know, is this the sort of thing we want people to be spending time reading? You know, writing a book is therefore a trivial thing. So authoring is not important. You just get ChatGPT to write it. All you need to do is market it.

That’s not quite the world I want to live in either. I value authors. I value people who put time and effort into things. The fact that a good author may use ChatGPT to help them write better, if they’re still putting the time and effort in, in ways which are meaningful and adding value, well, that’s probably worth reading. Or it’s possibly worth reading.

So, I don’t have the answers to how to do this responsibly. You know, do you then need to credit ChatGPT as people have done in their emails? Or, you know, what are the right ways of doing this? These things are not going to be resolved in a short space of time.

And somebody who doesn’t use ChatGPT, should they be disadvantaged? They are being disadvantaged now in certain ways, if it’s not done right, because the people who are using it are maybe more efficient and more effective. But how do we actually get those things ironed out into a well-functioning society? I’ve no idea yet.

[00:16:35] Lily: In a way, it’s the people that are using ChatGPT, but in a way that it’s not obvious that they’re using ChatGPT. Because when other people realise, like this tweet that this conversation started with, when they realise, or when they suspect that someone’s using ChatGPT is when they then disregard.

[00:16:53] David: Yeah.

[00:16:53] Lily: So you’re now going to have these different levels of using it. And in a way, I think what that kind of initial email was saying was, when they read someone using these kind of really lovely terms, like you say, like delve, and there were other words in it, like demystify, which I find funny because we use those words all the time, crucial, you know, words like this, that they then just disregard it.

[00:17:20] David: I think it’s absolutely crucial to demystify this aspect. Delving into good ideas in AI is important for us to do. And that’s not ChatGPT, that’s me.

[00:17:32] Lily: Yes, yeah. It’s at least ChatGPT in a way, it’s ChatGPT influencing your language now.

[00:17:37] David: Yeah, you’re right.

[00:17:38] Lily: Although…

[00:17:38] David: In this particular case, it was people highlighting that ChatGPT uses these, which influenced my language in this particular case. But I’d be very happy to say that sentence, and I’d probably have said that sentence in some other context without thinking about it.

[00:17:52] Lily: Well, and I think about kind of proposals that we’ve written. This one in particular that we wrote a while ago, a couple of years ago, for the Turing Institute, about a responsible AI course. And we wrote that before we had ChatGPT.

[00:18:05] David: Yeah.

[00:18:05] Lily: That could have helped me a lot in writing that, I’m sure. Or it would these days, to help me in writing that. And then me and you would look over it together and we would amend it when we were working on that proposal. But, you know, the words that someone else has brought up that they said are other AI words that they see: delve, safeguard, robust, demystify in this digital world, foster, embark, empower, harness.

[00:18:29] David: Empower, I remember seeing empower and just being blown away. This is one of the words which we feel is so important because we want to empower others.

[00:18:39] Lily: Yes.

[00:18:39] David: This is a big part of our role, a big part of what IDEMS as an organisation is set up to do, is to empower others. And so by doing so, we are now reduced to actually being irrelevant and just ChatGPT. I wonder what we should use instead. I mean, what is the correct, what’s the correct non ChatGPT term to give power to others, to empower others? I have no idea.

[00:19:02] Lily: We’ll ask ChatGPT.

[00:19:04] David: That’s a good idea. At some point we should do that. That could be another episode. If I’m not going to get mistaken for ChatGPT by trying to empower others, what word should I use?

[00:19:13] Lily: Well yes, and so I think of proposals, well that one in particular because that’s one that I worked on, but I’m sure that there’s countless ones that you’ve worked on, and all those words I’ve said there, I would bet that all of those words are in there, and we did not use ChatGPT because that was before ChatGPT was as popular.

[00:19:28] David: But more importantly, actually going forward, actually should we be using ChatGPT to help us with that first draft? And this is actually the important question, that’s the second side of this. You know, you coming to me with the first draft, having written it yourself, for us to then go through, dissect, redo and so on. Do I care whether you use ChatGPT for the first draft? No.

[00:19:50] Lily: Well, probably you care a little bit, right? Because you don’t want me just to give you a piece of ChatGPT that I’ve not read either.

[00:19:55] David: But I know you wouldn’t do that, because I know you well enough to know that if you did do that, the way I would tear it to pieces, you know, you would be feeling slightly uncomfortable justifying some of it. And so you would be coming to me and presenting me with something which you’ve maybe done the first draft from ChatGPT, you’ve now made sure that you’re happy that it represents your ideas and the rest of it. And that it’s just a tool to help you to communicate those ideas in a way which then helps us move forward more quickly, more efficiently, more effectively.

And that’s exactly where, you know, you have been using this in report writing at times, and this has been helpful. And others have as well within our team, and I would encourage them to do so. Because it’s a fantastic tool for that.

So there’s definitely something there where there’s so many aspects of this which is so wrong that it’s difficult to know where to start.

[00:20:52] Lily: What then makes me question it is, there’s just so many different avenues that we could go down with this because there is this much bigger problem, as you’ve said, about I mean, I call it bias, you call it a lot stronger than that, I can’t remember what word.

[00:21:05] David: Discrimination.

[00:21:06] Lily: Discrimination. There we go. Yeah, that’s the statistician…

[00:21:08] David: Those two, no, no, those two are slightly different. The fact that there’s bias is one thing. The fact that people are using some of these biases to discriminate, that’s the second thing, and that’s a human choice.

[00:21:23] Lily: Okay.

[00:21:23] David: The bias is the bias within the algorithms in different ways, and this can be a bias towards using language that people are less familiar with, or this can be biases against certain types of languages and certain groups in different ways. And those are biases within the algorithm. The way people interpret them, that’s discrimination.

And this is the point which is so, so important and it’s where I think there are going to be lawsuits about this in cases that really matter, where the human interpretation of those biases leads to discrimination.

[00:22:00] Lily: Sure.

[00:22:00] David: And I think there already are lawsuits. I was reading something about this recently. And this also came out of these articles, where people were saying that in their assignments, their language was being confused with having been authored by ChatGPT.

I had this wonderful example of somebody who was actually sat down in a room, I think, and asked to write it there and then. And they did that, and then it was passed through and identified as having been written by ChatGPT, when it blatantly wasn’t. And this is the sort of thing, this is the discrimination aspect, and this is why people need to be so careful about how they do this, because it is discriminatory. And that is a lawsuit in the making, maybe not here in the UK, but maybe elsewhere.

[00:22:44] Lily: And maybe people that then get wise to, I guess, this idea that, oh, my language is being confused with ChatGPT. Well, you can then just give ChatGPT some examples of your work and say, okay, write it like this. Actually then maybe some people that are using it irresponsibly are going to be the ones that…

[00:22:59] David: Get away.

[00:23:01] Lily: Get away, yeah.

[00:23:02] David: This is the whole point, that actually if you’re really good at using these AI models, then there are all sorts of other tools out there to help you not get caught. If you are not good at it, but your language is getting confused with it, there’s nothing you can do, because you’re genuinely not using it and you’re getting mistaken for it. This is discrimination in its utmost form, and it’s so dangerous. And this is something which is only going to come out more and more over the next few years, because a lot of people are just going about this the wrong way.

And I don’t say that we have the answer of how people should do this, but I do think that more people should be putting time and effort into understanding the responsible aspects of this. And I’m afraid part of that comes back to funding flows. You know, if all the funding, and, you know, this seven trillion trying to be raised by OpenAI.

[00:23:57] Lily: I haven’t heard that.

[00:23:59] David: Oh, you haven’t heard about the seven trillion dollars? This is what you need to build an AI model these days. You know, just do the seven trillion and then they’ll be able to build a better AI system. And isn’t that great value for money? Well, how much of that has been put into doing it responsibly?

I would argue, you know, certainly not one percent, because if it were one percent, that would be billions of dollars. Imagine if one percent of that was being put into the responsible efforts. That would be game changing. The world doesn’t see so much money going into actually doing things responsibly.

So, you definitely wouldn’t put 1 percent in. You know, even 0.1 percent is still billions of dollars, so that’s too much to hope for. Maybe they’d be willing to put in a few million, and so therefore that’s 0.00001 percent or something like that.
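The orders of magnitude being discussed can be checked with a quick calculation, taking the seven trillion dollar figure quoted in the conversation at face value:

```python
# Back-of-envelope check of the percentages discussed above.
total = 7_000_000_000_000  # the $7 trillion figure under discussion

one_percent = total * 0.01     # 1% of the total
tenth_percent = total * 0.001  # 0.1% of the total
few_million = 7_000_000        # "a few million" dollars, taken as $7M

share = few_million / total * 100  # that few million as a percentage

print(f"1%   = ${one_percent:,.0f}")    # tens of billions
print(f"0.1% = ${tenth_percent:,.0f}")  # still billions
print(f"$7M  = {share:.4f}% of the total")
```

So 1 percent really is $70 billion, 0.1 percent is $7 billion, and a few million dollars is around 0.0001 percent, the same vanishing order of magnitude as the “0.00001 percent or something like that” estimated in the conversation.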

[00:24:49] Lily: Yeah.

[00:24:49] David: That’s the sort of orders of magnitude that we’re talking about maybe people putting in to actually thinking about these things being responsible. What if instead of actually people investing their money in OpenAI and putting seven trillion dollars towards that, I don’t think they’ve succeeded at raising it yet as far as I know, but maybe if instead of investing in that people actually chose to invest in responsible AI initiatives, maybe the world would be a slightly better place. Not that I want to tell people what to do with their money. But I do think that the seven trillion ask is a bit steep.

[00:25:26] Lily: But then, and maybe this has to be a whole new conversation by itself, but then why don’t people want to invest in responsible AI? Is it because the people that have the money are going to be the people less likely to be hurt by using it irresponsibly, or have I just simplified it too much there? I most definitely have simplified it.

[00:25:45] David: You’ve assumed that that’s part of the decision making process.

[00:25:48] Lily: Okay.

[00:25:49] David: And that’s not part of the decision making process. I mean, OpenAI is a really attractive thing for investors right now. Because it’s building the best-known models in generative AI. It’s on the up. You know, all they need is something which is going to be on the up. Now, how far up there is from seven trillion, I don’t know, because this is actually rather large. My guess is the returns on that investment are not going to be as great as some people imagine. But, you know, there is hope for them that this is the technology of the future.

AI is going to be the big thing, it’s going to change everything. I should invest in it because that’s where the money of the future is going to be. And I simply hope at that level that’s not true. My hope is that actually, and I’m going to say this explicitly, even though I think it’s a dangerous thing to say. My hope is that actually the investors, if people do invest in that seven trillion, over the next few years it will be shown to be a bad investment. Because I think that is overestimating the value that AI will bring to our society from a single organisation, any single organisation at this point in time.

And that’s, that’s my simple assessment of where it is. I do not believe that actually that seven trillion, anyone could get a good return on investment on that. Now, of course, this isn’t about getting a good return on investment because that’s not exactly how that sort of investment works. But at some point there is going to be the question of, is it actually worth 7 trillion? And my expectation is no. And so eventually someone will have a loss. Who? I don’t know, but that would be my expectation.

[00:27:36] Lily: Interesting.

[00:27:38] David: This is not quite where I expected this particular episode to go because we were supposed to be speaking about language.

But it’s all tied in together, this is the thing. These issues about language and actually how to deal with these things, these are only going to increase unless we think about this from a very different angle than just the profit driven development of AI processes.

There was somebody who asked me at a conference fairly recently, what if academic groups got more involved in building these? But they are involved. The amounts of money they’re getting in comparison to seven trillion dollars is rather small. And so they can’t compete with open models.

But academic communities do have something which I think could potentially outcompete, which is access to really good minds. And so, you might find that there are ways in which, working together, working collaboratively, academic groups, with open models, could outcompete some of these. You know, it’s ironic that OpenAI is not an open model. And Elon Musk did happen to point this out rather constructively, and then made his own models open. So there are interesting things happening.

We have no idea how the world’s going to go over the next couple of years.

[00:28:53] Lily: Oh, it’s very interesting to watch anyway, to watch it unfold. Do you have any final bits that you want to say before we finish this? I know that you like to end on a positive.

[00:29:03] David: Well, I guess before we end on a positive, and I do want to end on a positive on this. I want to come back to the original…

[00:29:11] Lily: Sure.

[00:29:11] David: …sort of question and really put to you, you have your own terms, which you tend to have noticed, as you said.

[00:29:20] Lily: Crucial.

[00:29:21] David: Crucial. What is it that you want to do now? Having read this, having thought about this, what does this mean to you?

[00:29:30] Lily: I guess to me this highlighted my own unconscious bias that I’ve had, that I’ve not recognized until seeing the article. And it’s also for me highlighted something that I already knew from working with people from, you know, we had an intern working with us recently from Nigeria. You know, from working in these different parts of the world, something that I already knew: there are language differences. And it still came as a shock to me that people are disregarding emails that contain certain terms. So when I read that list of terms, I was like, I say these terms normally, and some of them I don’t say, but, like crucial, they stand out to me.

And it really showed to me, okay, I’m kind of part of the problem there, if I do continue on this path of seeing the word crucial and having those alarm bells go off in my head. It’s not bad if someone’s using ChatGPT, and I know that, and people have different languages, and I know that.

And so I guess, going forward, just trying to find my own biases, find my own unconscious biases, make sure that they don’t turn into discrimination, absolutely, and to continue these discussions, highlighting where discrimination can arise, and admitting when I find myself thinking, oh, I could see how I could fall into that. Not discrimination.

[00:30:51] David: Well, no, I mean, let’s be clear. It is not others who fall into discrimination, we all do, we all can be guilty of that, and it can happen without us recognising it, you know, this is one of the really important things. And what I love about your response, and what I feel is exactly the point of hope I was wanting to finish on, is, well, my hope is that discussions around this, and awareness of this, are going to lead to tolerance.

This is the thing. What you are describing, what you are going to be doing more consciously than you were maybe doing before, because it was unconscious, is you’re going to be more consciously tolerant in different ways. That’s essentially what you’re describing. I think that’s a wonderful thing.

And if AI, through highlighting some of these sorts of issues, can help us in society, as individuals, to be more tolerant, ooh, what a wonderful thing that could be. Tolerance is a very tricky thing, and it’s interesting that some awareness around it can come out of these discussions.

That’s my hope. My hope is that actually, by trying to think about this, by people becoming aware of this, rather than driving us apart, maybe it can help us to be more tolerant. That could be a whole other episode. But tolerance is the positive piece that I want to take from this. And that’s what I heard from you when you were describing this.

Your reaction is an increased awareness of your own tolerance.

[00:32:25] Lily: Right. Well, thank you very much, David. It’s been a very insightful discussion as always.

[00:32:30] David: This has been good. And there will be others I’m sure to follow on.

[00:32:33] Lily: Yeah, absolutely. Thank you.