Description
AI tools can be helpful in analysing quantitative data, but their potential utility in qualitative analysis might be less obvious, or even concerning. In this episode of the IDEMS Podcast, data scientist Lily Clements and social impact scientist Lucie Hazelgrove Planel discuss the use of generative AI in analysing qualitative data. They explore the ethical implications, the importance of context, and the challenge of addressing biases in AI-generated analyses, as well as the evolving intersection of AI with traditional qualitative research methods.
[00:00:07] Lily: Hello and welcome to the IDEMS Podcast. I’m Lily Clements, a data scientist, and I’m here with Lucie Hazelgrove Planel. I probably butchered that with my English accent.
[00:00:16] Lucie: No, Planel is nice; in French you put the emphasis on the two vowels.
[00:00:19] Lily: Ah, very good to know. A Social Impact Scientist at IDEMS. Hi, Lucie.
[00:00:25] Lucie: Hi Lily.
[00:00:26] Lily: So I thought today we could discuss using generative AI and AI for analysing qualitative data.
[00:00:33] Lucie: Yeah, it’s an interesting topic. At first I was terrified of it. I remember having a conversation with David a year perhaps, or perhaps even two years ago now, where he was suggesting that there’d be a world where everyone just uses AI to analyse all of their survey data.
[00:00:50] Lily: Oh gosh.
[00:00:51] Lucie: Yeah, that was a bit my response. But I think there’s a lot that can be said, a lot to think through, because there are different tools available and there are different processes within data analysis for which different tools can be used in a useful way, and in a responsible way, for some people and for some things. Especially, I think there’s lots of... what’s the word? When you have to qualify something.
[00:01:17] Lily: Like categorise.
[00:01:19] Lucie: No, like in the sense that it’s not useful for everyone and for everything, so you have to sort of limit down what you’re saying to be more specific, while at the same time being generalisable.
[00:01:30] Lily: I see. So I guess we should start with what we can use it for, ’cause you said you were terrified before. But there are loads of places where it can be really useful, such as transcribing. I’m sure that comes up with qualitative work, which I believe you’re pretty pro.
[00:01:48] Lucie: So in general, if we start off with data basically, in qualitative research you can have data in many different forms. It can come from audio, that is, what people say, what people sing. It can be audio extracted from a video, or audio that you’ve recorded yourself. You can have written notes, both handwritten notes and typed notes. And they don’t have to be notes, it can be books, it can be any kind of written document. You can have photos, you can have videos. What else? I’m surely missing things out.
So basically, anyway, there’s a whole range of different sources of information and types of formats of data that you might want to analyse.
[00:02:30] Lily: Interesting.
[00:02:31] Lucie: So you mentioned transcripts. So this is what you do when, you know, if you have an interview, if you have any audio recording, it’s often easier to analyse it if you have a written format, ’cause you can go back over things, you can read over it quicker, you can take notes next to it. It’s not very easy otherwise to take notes next to an audio recording.
Historically, let’s say, before AI came around, most social scientists, no, I’m not even gonna say most, I’m gonna say a lot of social science disciplines encouraged researchers to take or to make transcriptions, full and complete transcriptions of all of their material.
This, as an anthropologist, is a bit of a strange concept anyway, because for anthropologists, a lot of your data is through observations and interviews are perhaps less common, especially in the sense of a formal interview where you have an audio recording and which you will want to transcribe.
In many cases you do not actually need a full transcript because you learn so many other things from that recording. However, full transcripts can be incredibly useful if you’re not working on a research project on your own, but are actually working in a team. If you’re working in a team, there’s often the problem of sharing the information within the team and of analysing the data not just on your own but with other people in the team. And to do that, it’s a lot easier if you have a transcript, because it’s faster, quite simply, and you can write notes on it.
Sorry, coming back to what you mentioned, there are lots of AI tools nowadays, which can create rough transcripts for many different types of audio recordings, whether it’s from video or just voice recordings. Some of those are free, some of them aren’t.
[00:04:15] Lily: And do you see a difference between the free and non-free ones?
[00:04:19] Lucie: It’s in terms of format and accessibility, I think. You can get transcripts from YouTube videos, and it doesn’t even have to be a video; I think you can upload an audio file into YouTube and get the transcript. I’ve been using ClipChamp, and there the format it comes out in is designed so that, instead of long phrases going all the way across the document, it sort of makes a short... what’s the word? Like the width of the column is very short.
[00:04:46] Lily: Narrow?
[00:04:47] Lucie: Yes, narrow, thank you.
So it doesn’t look tidy, but it is very much workable. But if you are in a hurry or something, there are problems even just using AI to create transcripts. Yes, it is great because you can get a rough transcript and you can work with it. But if you’re not taking the time to listen to what was said and how it was said, then you often miss a lot.
[00:05:12] Lily: Okay. Like you can miss certain inflections, things that are key in people’s tone.
[00:05:19] Lucie: Exactly, in taking the time to actually write something down and think about what was said. Perhaps it depends on the person, but personally, I think while I’m doing something, so if I am writing something, then my head is at the time sort of thinking what is being said and why is it being said, and why did they say it like that?
Otherwise, if you’re just verifying or checking a transcript that someone else has created, then it takes a bit more effort to put yourself into that mode of thinking to ask yourself those questions. So there’s something to be aware of.
[00:05:52] Lily: No, that’s really interesting, and I think that’s a pretty good point around using AI in general for analysis: you’re not going through it, you’re just given everything at once. You’re not going through that process where you can kind of understand it as you’re doing it, where you know why you’re doing this. You’re not able to have that, it’s too quick. You’re not able to sit and think while it’s doing it.
[00:06:19] Lucie: This is what I was talking about right at the beginning: it can be useful for some things. So depending on what you’re wanting to transcribe, if you yourself were present in the meeting, whether it’s a focus group discussion, whether it’s an interview, whatever sort of meeting it is that you are recording, if you as a researcher were present there, so you knew what was happening and you have a good memory of what happened, and if you don’t need to share it with a team, then sure, use AI to transcribe it.
There is a question there as to why are you wanting to transcribe it? So as I mentioned, I think with lots of anthropologists, you don’t tend to need to always record too many conversations. However, in many cases it is useful to have recordings of people actually expressing themselves.
There, personally, one technique that I use is, when you start the recording device, to write down what the time is. Then, when you are in place making that recording, as soon as somebody says something interesting that you want to go back to, you write down the time at which it was said.
[00:07:18] Lily: Okay
[00:07:19] Lucie: So when you have that audio recording afterwards, you only go back to that point, say 15 minutes into the recording, and you just identify that bit where you told yourself, ah, this was something interesting, I need to take some more time to go back over it, perhaps to identify exactly the phrase that was said and how it was said.
To me, using audio recordings in that way can be more interesting and more efficient than using an AI tool to transcribe a full recording, especially if you’re working on your own. There’s a difference though, again, as I say, if you’re working in a team and you want to share your data and keep your data for perhaps the next generation or the next cycle of your research.
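[Aside: for readers who want to try the kind of rough transcription Lucie describes, here is a minimal sketch using the open-source Whisper speech-to-text model. The model size and the file name "interview.m4a" are placeholders, accuracy varies a lot by language and audio quality, and this is an illustration rather than a tool the speakers endorse.]

```python
# Minimal sketch: rough, timestamped transcription with the open-source Whisper model.
# Assumes `pip install openai-whisper` and that ffmpeg is installed.
import whisper

model = whisper.load_model("base")           # small, fast model; larger models are more accurate
result = model.transcribe("interview.m4a")   # placeholder audio file

# Write a rough transcript with segment start times, so you can jump back to the audio
# at the points you marked as interesting while recording.
with open("interview_transcript.txt", "w", encoding="utf-8") as out:
    for seg in result["segments"]:
        out.write(f"[{seg['start']:6.1f}s] {seg['text'].strip()}\n")
```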
[00:07:59] Lily: Yeah, absolutely. So then when you have your transcribed interview, or when you have your kind of, I assume imagery data, ’cause you said that qualitative data could be lots of types of data, is there anything you could do in terms of AI to like interpret an image?
[00:08:16] Lucie: Yeah. But I mean, I was trying to think of examples of when you might want to analyse hundreds of photos that someone has sent. Actually, the team at IDEMS has been supporting a group of researchers in doing just that, so in trying to develop AI models to identify pests and diseases on crops in order to support farmers in better knowing how to manage the pests and diseases they’re finding.
[00:08:40] Lily: Yeah, just as some background, I’ve had some conversations with David about this in the past, and there are good apps that already do this, but they do it for the very well-known cases; AI is very good at it if you have lots of data on it.
[00:08:52] Lucie: Exactly. Yeah. So the apps are very good for regions which are not West Africa, basically because there’s less data coming out of that region. But then also they’re usually commercially driven products, and most of the researchers we work with are interested in agroecology and not in promoting chemical fertilisers, which are too expensive for smallholder farmers and also not really appropriate for the environment.
[00:09:17] Lily: Interesting.
[00:09:18] Lucie: But in that project, that’s a clear example of how people are analysing or want to use AI to analyse huge quantities of visual data.
[00:09:27] Lily: I see.
[00:09:27] Lucie: I think there’s another project that IDEMS works on, the Global Parenting Initiative. There they’ve been exploring how to use videos of parents interacting with their children to produce recommendations for what could be better. If a parent starts yelling at a child just because they didn’t understand something, an AI system, if it is carefully designed, might be able to support them and make sensible recommendations for the parent to behave in a different manner, or to breathe first.
[00:09:56] Lily: Interesting. I didn’t know about that.
[00:09:59] Lucie: Yeah, and that’s only within IDEMS that I’m aware of. Those are large research projects, obviously, which have a lot of funding to try and develop their own models and their own software in order to do it responsibly, which is the key bit here.
One thing, though, I think is coming up, which is quite interesting to me. So to me, again, I always take lots of written notes. And nowadays I think, is it AI actually? It can convert your written notes into text.
[00:10:29] Lily: Like generative AI?
[00:10:31] Lucie: I don’t know if it’s generative or not. I don’t even know if it’s AI.
[00:10:34] Lily: Well, something like, you know, ChatGPT or Bard or Gemini and things.
[00:10:40] Lucie: Well, it’s actually even on my phone. Sometimes, I can’t remember what I was doing, but I was trying to take a photo of something and it asked me if I wanted to convert my written notes into a file.
[00:10:51] Lily: Wow, that’s pretty useful.
[00:10:53] Lucie: Yeah. I’m not sure. I think it has its uses.
[00:10:56] Lily: I see what you mean. Sorry, I thought you meant converting, like you had some notes, you had bullet points and you wanted that converted into a paragraph. But you mean that’s why you said written? Of course.
[00:11:06] Lucie: Exactly. I meant literally handwritten notes that it can then turn into a typed document, basically. The same content, but as a typed document. Here’s an example. In our research methods support work for the Global Collaboration for Resilient Food Systems, which is a community of practice, the programme uses lots of methods in its workshops to get everybody involved. One of the key priorities is to be very inclusive. When we have meetings with both farmers and very experienced researchers, it’s often quite difficult to get all involved at the same level.
And one technique is to use post-it notes or any other bits, small bits of paper, to get everybody writing their ideas and then contributing in that way. So contributing individually into a larger group, which can then discuss. Sometimes it’s actually really useful to collect all of those ideas afterwards, and if someone has to write all of that up, it’s a bit of a pain and it’s a bit of a hassle.
So if there are AI systems, AI or any other sorts of systems, computer systems, which are able to do that, then I think that is a time efficient thing. Except that, as we’ve mentioned before, you are not then having that processing work going on in your head of actually what is being said on these cards.
[00:12:21] Lily: Yes.
[00:12:22] Lucie: So the intonation issue doesn’t come up there, but there is still that mental processing and analysis that goes on in your head. And there’s having an awareness already, building up an understanding of the data, as you are writing things up.
[00:12:42] Lily: Yeah, and in a way also, I might be completely wrong here, but as you are writing it up, you might go, okay, this person has clearly written this post-it and this post-it. Like I can tell that they’ve written both of these and you might be able to make some connections without really thinking about it.
So like while it’s a lot quicker and less tedious, you actually do lose a lot of that, you’re not able to process as much about what’s happening.
[00:13:07] Lucie: Again, though it depends on basically the quality of the data.
[00:13:12] Lily: Yeah, okay.
[00:13:12] Lucie: If it’s a meeting where you basically know what sorts of answers are going to come out of it, then yeah, go for something that’s more time efficient. If you really don’t know what’s gonna come out of it, then no, you want that time to actually explore it all and to read it all and find out. And as you said, find out who’s been saying what. And if you are the organiser of a workshop, then you perhaps want to make sure that everybody did actually contribute, which you would not be able to do if you already had that typed version.
[00:13:40] Lily: That’s very interesting. I guess I shouldn’t be too surprised it comes down to the context. It always comes back to context at IDEMS, I know, but, I’m still surprised.
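[Aside: a minimal sketch of the post-it digitisation Lucie mentions, using classical OCR. The folder name is a placeholder, and Tesseract handles printed text far better than handwriting, so results on handwritten cards may be rough; vision-capable AI models may do better, with the data-sharing caveats discussed later in the episode.]

```python
# Minimal sketch: turning photographed post-it notes or workshop cards into typed text.
# Assumes `pip install pytesseract pillow` and that the Tesseract OCR engine is installed.
from pathlib import Path

import pytesseract
from PIL import Image

notes_dir = Path("workshop_photos")  # placeholder folder of photos, one card per image
for photo in sorted(notes_dir.glob("*.jpg")):
    text = pytesseract.image_to_string(Image.open(photo))
    print(f"--- {photo.name} ---")
    print(text.strip())
```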
[00:13:48] Lucie: And so far we’ve only talked about getting data ready to analyse.
[00:13:53] Lily: Yeah, which I didn’t expect us to talk about for so long. But this is so interesting, because on my side I often just have the data. I guess that’s a lot of what machine learning is now, sorry, not machine learning, data science now: you don’t do the design as much, you’re not designing it, you’re given the data.
[00:14:11] Lucie: Yeah, I’ve been discussing it with Roger, actually. And he said that this was a problem with stats, with statisticians, back in the sort of eighties and things: researchers weren’t aware that they should actually involve the data scientists, the statisticians, at the planning stage too, to help them understand how to do it effectively.
[00:14:29] Lily: Oh, nice. And presumably the same for qualitative, like deciding in advance, okay, we should have interviews, we should have…
[00:14:36] Lucie: Yeah, that’s an odd, or interesting, question. So in qualitative research, often you get people who do the data collection who are not the researchers. That happens quite often. And there, what’s important is to try and encourage them, or try and train them, to do the data collection in the way you want, which will get the best quality data.
So it’s more about building trust with the people that you are talking to, not just going straight in with a questionnaire and asking your questions, but perhaps using a questionnaire to actually have a discussion. Then it’s not about asking exactly those questions directly, but using the questionnaire as a basis for a discussion, and according to what the other person answers, you might want to note things down. And it might not be in the order of the actual questionnaire, for example.
[00:15:29] Lily: I see.
[00:15:30] Lucie: And in terms of who you are involving in the design, to me that would be more the, there’s never a good word for this, but I think historically people were called informants, now they’re often called participants, it depends on how you’re designing the study. Qualitative research can be done in a very participative way.
[00:15:49] Lily: Interesting.
[00:15:50] Lucie: So going on to how to actually start doing the analysis, so much qualitative data is analysed through coding. So looking at themes perhaps or looking at different ways people are actually expressing themselves. And I’ve seen that you can use AI tools to identify like themes, as I say, in a text, in a textual document or in another type of document.
[00:16:12] Lily: Have you ever used it? Because I know I’ve used it for quantitative, just for fun, to see how what they’ve done differs from what I’ve done, and I’ve always been like, why have you done this?
[00:16:22] Lucie: Yes.
[00:16:23] Lily: Have you ever looked at what they…
[00:16:26] Lucie: I have. Yeah. So I first tried using it for an informal group meeting. So nothing to do with research. I was meant to be writing up notes about a community garden meeting, basically. And it’s a very informal group, not organised, let’s say. And so the stakes are very low.
There was no way I was going to take notes during the meeting, so I just recorded the conversation, made it into a transcript, and then asked, I think, one of the AI chatbots to identify the main themes, basically to summarise the recording, and it made a summary. It wasn’t wonderful, but it worked, it was functional. It made recommendations which didn’t really have that much use.
[00:17:08] Lily: Okay.
[00:17:09] Lucie: But for a low-stakes meeting like that, which has no relation to research, then yeah, it’s fine. If you’re actually wanting to do research, though, asking someone else to identify the codes seems really strange, because if you are given a set of codes, let’s say you’ve got 10 themes that have come out, do you actually know what they mean?
[00:17:33] Lily: Yeah, okay.
I just wanna go back a step here, I don’t do qualitative analysis, I can barely even say the word. But my understanding is that it will take all of this kind of data and group it into different, like 10 themes.
[00:17:47] Lucie: Yeah, exactly. So if you’ve got, for example, let’s say feedback data on a course.
[00:17:52] Lily: Yeah.
[00:17:53] Lucie: So you’ve got an open-ended question saying, how did you find the course? I’m sure there’s a better question to ask than that, but it will do for now. And so you’ve got 10 responses, and then normally you will try and identify themes. So you’ll try and say, well, there were a couple of people who said that they really enjoyed the course, it was really fun. You might have another person who said that they weren’t able to access, let’s say physically access, the course, there was something missing, which meant that perhaps they couldn’t see the resources or something like that.
So those are two types of themes. So fun course or difficult accessibility.
[00:18:29] Lily: I see. Yeah. Okay. This makes sense.
[00:18:32] Lucie: Now if your AI has just given you those themes, you don’t know what is behind the theme, if that makes sense. And difficult accessibility, in different contexts from different people, it can mean completely different things. So you then need to go back and actually try and understand from your data: how did the AI come up with this theme?
[00:18:54] Lily: Yeah.
[00:18:55] Lucie: So you might as well just do it yourself.
[00:18:57] Lily: Yeah, sure. And then, while you’re doing it yourself, you get to have that processing and understanding again. And I guess it’s a little bit like, you get an answer here through AI, but it’s not the answer.
[00:19:09] Lucie: And one of the most important things with qualitative work is that you’re not only looking for the most general things that people are saying. With these themes, often a machine learning system will try and identify the most common things. But in qualitative research you are often also interested in the exceptions.
[00:19:29] Lily: Yes.
[00:19:29] Lucie: And not because you want to get rid of them, but because actually those can tell you a lot more about something.
So this one person, in my example, this one person who wasn’t able to see the resources, actually, that’s a really important point, and it’s a really important learning point in how you can redevelop your course in order to make it more accessible for other people with that same difficulty. Just because it was one person, it doesn’t mean it’s less important in this context.
[00:19:55] Lily: No, absolutely.
[00:19:55] Lucie: And I’m not sure if I would trust the AI system to understand that.
[00:20:00] Lily: I think that’s fair.
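[Aside: a minimal sketch of the theme-suggestion step discussed above, asking a general-purpose chat model to propose themes from open-ended feedback. The model name, the example responses, and the use of the OpenAI client are illustrative assumptions; as Lucie stresses, the output is only a starting point, and the researcher still needs to go back to the raw responses, check what sits behind each theme, and look for the exceptions the model may gloss over.]

```python
# Minimal sketch: asking a chat model to *suggest* themes in open-ended feedback.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment;
# the model name and the feedback responses below are placeholders.
from openai import OpenAI

client = OpenAI()

responses = [
    "Really enjoyed the course, it was fun.",
    "I couldn't access some of the resources.",
    # ... the remaining responses would go here
]

prompt = (
    "Here are open-ended answers to the question 'How did you find the course?'.\n"
    "Suggest a short list of themes, and under each theme quote every answer that supports it:\n\n"
    + "\n".join(f"- {r}" for r in responses)
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

# The suggested themes still need checking against the raw data, including the
# one-off exceptions that can matter most in qualitative analysis.
print(reply.choices[0].message.content)
```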
I know from things I’ve been reading about, like classification, which is a bit more at the machine learning level, that companies take data on all sorts of people, like, you know, third-party cookie data and places, so they know all of this stuff about you. They might not necessarily know your name, but they know things you like, that you’ve been Googling this, that and the other. And so they get an idea of, okay, how much money you’re earning. And then they know, okay, what targeted adverts can we give you?
[00:20:30] Lucie: Exactly like with politics and sort of upcoming elections I was thinking.
[00:20:34] Lily: Absolutely. Yeah.
But with the kind of groupings it puts us in, people don’t really understand what those groupings are, yet it works. People don’t understand why the machine learning, the AI, has decided to put you in these groupings, what it has done behind the scenes to group this set of people together and that set of people together, but it’s done something and it generally works.
[00:21:02] Lucie: And this is interesting, yeah. One of the key things, if you are uploading transcripts or uploading audio recordings into an AI system, is that you’re not really sure how it’s going to use that data. Or I’m not, I haven’t looked into it. And I have a feeling that most people probably wouldn’t look into it enough to properly inform participants in order to get informed consent.
So that’s just a side note, that our researchers, you know, aren’t fully informed enough themselves to actually use these tools in an ethical way, and using them ethically means being careful with other people’s data and letting them know what you’re going to do with that data.
[00:21:41] Lily: We’re just gonna give it to some robots. We don’t really know specifically what the robots are gonna do, but…
[00:21:46] Lucie: Exactly, so you don’t… If it’s you yourself who created the code, then you can say why you created the code, and you can explain that afterwards to somebody, whether it’s the participants in the study or whether it’s other researchers. If it’s another machine that created it then you would need to do a lot of work to understand how and why it created those codes. So again, you might as well just do it yourself.
[00:22:08] Lily: Yeah. And I guess another kind of point leading from that, which is not something I’d thought of before, I think it’s something that you pointed out to me, I feel like it was you, is: if you have confidential data, are you even allowed to use it in these cases? Are they gonna store your data? You know, generative AI could be like, okay, just upload your file to me and I’ll do some analysis for you. What happens when you upload the file? Do they keep the file? Do they now have this confidential data on their side?
[00:22:41] Lucie: It sounds like there are some similarities there between open research and uploading your data into AI tools. But open research is a little bit different.
[00:22:51] Lily: Oh, okay.
[00:22:52] Lucie: Because if you’re uploading your data into ChatGPT or something, it isn’t open to other people, it’s only open to ChatGPT. Whereas if you’ve uploaded it to an open research platform, then there it’s open, especially to other researchers.
[00:23:08] Lily: I see.
[00:23:09] Lucie: So you can have a bit more trust or faith in how it will be used.
[00:23:13] Lily: Yeah, I see what you’re saying. Very interesting.
[00:23:16] Lucie: There’s one other final thing I would like to mention.
[00:23:19] Lily: Yes, please.
[00:23:21] Lucie: I’m aware we’ve been talking for a long time.
But yeah, so in terms of not really understanding how these machines identify their themes, I know we’ve talked a lot, and society in general has talked a lot, about the issues of bias in AI. And this is a question I don’t actually know the answer to. A, it’ll probably have a bias in how it’s interpreting the data, and B, actually, how does it treat the bias in your data?
If a participant says something which is, I don’t know, clearly sexist or something, then how will the AI treat that? Will it try and correct it to not be biased, or will it stay true to the actual text or whatever form…
[00:24:05] Lily: Interesting.
[00:24:06] Lucie: It is, isn’t it?
[00:24:07] Lily: Yeah, I’m not sure how it treats it for B. For A, I know what is said is, okay, well, someone’s coded biases in: if you’ve created this machine learning model and you’ve coded it in a certain way, like, humans have biases and they might not be aware of them, but you might have coded it in a way that you’ve now accidentally made the machine biased.
Or when we look at generative AI, it’s using the whole, well, maybe not the whole internet, but you know what I mean, it’s using a lot of data behind it, which will have inevitable biases, and that will show in one way or another if you use it for your analysis.
[00:24:46] Lucie: Exactly. And in qualitative research, you normally always try and state all of your biases first, you try and be open about it, you say this is who I am, this is the sort of worldview I have. This is how it might have influenced it.
[00:24:56] Lily: Interesting.
[00:24:56] Lucie: But if you’re using a generative AI tool, then you don’t know what its biases are. You can’t explain what they might be.
[00:25:04] Lily: That’s really interesting. I didn’t even know in qualitative data analysis that you stated your biases, but I guess that makes sense. That makes it much more transparent. And then when we come to these generative AI tools, which are pretty non-transparent, they’ve just got all this information, people don’t really understand how they’ve come to conclusions a lot of the time, but they have.
[00:25:27] Lucie: Although I think, didn’t you say that Gemini is better with that, that it tells you which websites it uses if you ask it?
[00:25:33] Lily: Yes, with Gemini, it might just be if you use it for research, but it might be in general. I think it is in general, but anyway, I can check. It shows you, like, this is what I’m going to do, 1, 2, 3, 4, and it tells you what it’s going to do. It gives you a step-by-step of what it’s going to do, and at the end it says where it’s got all of it from and gives its sources and stuff.
But the issue, again, is that it gives you so much output. I’ve used it just out of interest, I’ve not used it for anything with work, but just out of interest I wanted to test it out, and I was like, okay, generate me a report based on this study in Bristol, which my sisters were involved in. And so I said, you know, read this report, and it kind of spat out this 50-page report on it.
And it’s like, okay, well, this is too long for me to now be able to absorb what you’re telling me. If I were a researcher in this case and I was given this big report, I wouldn’t be able to process it.
[00:26:30] Lucie: As in, because you want the depth of it?
[00:26:32] Lily: Yeah.
[00:26:33] Lucie: You want all of that understanding, but you actually also, if you’re trying to use AI, then you’re trying to use it to help you go faster.
[00:26:39] Lily: Yeah, yeah. It was like, I want it summarised for me. And it was great that it did that, but then at the same time, it’s like, well, I guess this is what I asked for. I asked for a summary report on this, and you created this report for me, and you’ve given me your citations, but it’s not quite the same as if I got a report from a researcher, where I know it’s just a bit more trusted.
[00:26:58] Lucie: Yeah, exactly. If you’re gonna spend the time to read 50 pages, then you want to know where it’s coming from, and you want to know that you can trust it.
[00:27:04] Lily: Yeah, exactly. So it was nice that it was transparent about where it’s getting it from, but at the same time it’s like, okay, if I was to use this report for me to read, I don’t really trust the source. And if I was to use this report because I wanted to submit a report, actually I should be the researcher that understands and can justify the entire report. And I can’t do that here, because I’ve kind of used AI for it.
[00:27:28] Lucie: Yeah.
[00:27:29] Lily: Not that that was my intention, of course. I don’t plan to produce a report on this study from Bristol that my sisters have been in their whole lives.
[00:27:39] Lucie: And to be fair, this is probably a bit similar to what happened when the internet came about. Suddenly, I’m thinking of, like, literature researchers.
[00:27:50] Lily: Yeah.
[00:27:51] Lucie: Where people do write a lot, they write blogs, they review different texts and things, the internet suddenly gave people access to a lot more text, quite simply, and quite a lot more opinions. But actually whether it’s useful to use in research or not is a whole different question.
But yeah, there are some interesting opportunities, I think, if you use it responsibly. And we never even mentioned the difficulties: obviously transcribing is not possible in many languages. I only do it for English or French; I wouldn’t even bother trying for the language where I did my PhD work, and I wouldn’t even bother trying in West Africa.
All right. Thank you very much, Lily. It’s been a pleasure.
[00:28:28] Lily: It’s been very good. Thank you very much.

