Description
Lily and George discuss their personal experiences of using generative AI in their work. They explore how AI assists in course development, coding, and writing tasks, sharing insights on how these tools can enhance productivity and creativity.
[00:00:07] Lily: Hello and welcome to the IDEMS Podcast. I’m Lily Clements, a Data Scientist, and I’m here today with George Simmons. George, what is your role now?
[00:00:13] George: Well, it’s interesting you ask. I think it’s up to me to define my title. I have to have that conversation with Kate. But yeah, I’m in the process of making the transition to a role with slightly more responsibility, which is really exciting. I suppose the easy title would be Mathematical Scientist.
[00:00:36] Lily: A mathematical scientist?
[00:00:37] George: Yeah.
[00:00:38] Lily: Okay, interesting. Well, that sounds quite fun for you to explore that avenue of what is your role?
[00:00:44] George: Yeah, exactly. Yeah, it’s really exciting to be moving to a place where there’s a little bit more, not just freedom, but responsibility to have that exploration. Really looking forward to it. I loved my time as an Impact Activation Fellow and I think this is just a natural next step.
[00:01:03] Lily: Exciting. I guess this could lead into what I thought we could talk about today, which is just on the different areas of where we work and how we kind of use generative AI and other tools in those areas.
[00:01:16] George: Yeah, I think that’s a really interesting thing to talk about ’cause I get the feeling that you’ve used generative AI, or robots as you call them, for a lot longer than I have. It’s maybe only in the past year, even just this calendar year, that I’ve really started to make use of them.
[00:01:34] Lily: It’s hard to stop when you start using them, it changes your workflow a lot, doesn’t it?
[00:01:38] George: Yeah. And I feel like we use them in quite different ways as well. So this could be quite interesting.
[00:01:44] Lily: Okay, so, let’s talk about this Masters in Mathematical Innovation course that we’re both developing, or we’re both working on. I’m working more on the statistics modules in it, and you’re obviously more on these mathematical modules on it. So how do you use the robots in course development?
[00:01:59] George: So, definitely two areas I found really helpful. The first is helping me generate kind of refresher materials. For example, I wanted to start one of my courses, Introduction to Modelling Complex Systems, with essentially a refresher on differential and difference equations in the context of modelling.
And the assumption is that people coming into this course will have learned about differential equations and difference equations at some point in their undergrad or even before, but maybe there’s been a few years where they’ve not used them at all.
So what I really needed was just a two or three page refresher course on how to interpret these equations, how to analyse or understand what each term means, how to understand what the parameters are doing, that kind of stuff. And this kind of thing is very hard to come by on the internet, little refreshers like this.
When they exist, they tend to be maybe not posted openly, or attached as a foreword or appendix to much larger sets of lecture notes. And it’s very difficult to then signpost students to those kinds of things. So, long story short, I can go to the AI, describe my audience, which is people who I assume have come across these things before but haven’t seen them for a while, and outline the kind of things I want from this document: recaps, but with an emphasis on discussion points. As you know, with the MMI courses we’re putting quite a big emphasis on student or learner-led discussion.
[00:03:54] Lily: Yeah, so just to quickly mention, this is an online course, so we want to have discussion boards on there so that there can be a level of interaction between students.
[00:04:04] George: Thank you.
Yeah, so describing to the AI that I want some of these discussion points in it, things they can talk about, or giving them an equation to analyse as a series of exercises, then allows me to not just direct them to read through this document, but to use those exercises in discussion boards to, from our perspective, verify that they have worked through that material. That is really useful. And it kind of gives you free rein, not just using someone else’s material, but actually being able to build something that works quite dynamically within the course.
One of the really great things about ChatGPT, which I use for this, is its ability to write in LaTeX as well. So I can ask it not just for a document, but to write it in LaTeX, and it performs that task very well. It gives me compilable LaTeX code, I can put that into my editor and build the sheet, and from there just make the edits that I want, or ask it to make them.
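The kind of compilable refresher sheet George describes might come back as a skeleton like this (an illustrative sketch, not actual model output; the example equation and discussion point are made up here):

```latex
\documentclass{article}
\usepackage{amsmath}

\begin{document}

\section*{Refresher: Differential Equations in Modelling}

% One worked model, with the meaning of each term spelled out.
The logistic growth equation
\[
  \frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right)
\]
describes a population $N$ growing at intrinsic rate $r$ towards a
carrying capacity $K$.

\paragraph{Discussion point.} What does each term contribute to the
dynamics when $N \ll K$, and when $N$ approaches $K$?

\end{document}
```

Because the output compiles as-is, the editing work shifts to refining the content rather than typesetting it.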
So, that’s been a really helpful thing. And I don’t like framing it as a time saver even, but for me it’s an easy way just to gather a lot of the right ideas.
[00:05:20] Lily: The way that I see it is, yeah, you are able to now put in your time on the bit where you can really add the value yourself.
[00:05:27] George: Exactly. Yeah.
[00:05:28] Lily: The stuff that the robots can do for you, the stuff that they can generate for you, putting it into LaTeX format for example, that’s what I used to refer to as an evening task. You know, morning tasks are my brain time, afternoon tasks are less brain time, and evening tasks are when I’m in front of the TV and I just wanna do something while I’m watching. But what I mean is, I think it’s allowing that kind of brain power, allowing you to concentrate on the more important aspects.
[00:05:58] George: Yeah, exactly.
[00:05:59] Lily: And it’s interesting that you’re using it that way ’cause I’ve not thought of using it for creating these kinds of sheets, but I could. And this is why these conversations are really nice.
[00:06:10] George: Yes and it’s your turn to tell me how you are using it for these courses.
[00:06:15] Lily: Yeah. So I guess I’ve used it in so many different ways, but for example, I use it at points for generating ideas. So say I’m doing a course on something and I’m like, I do not know how to write a course that’s gonna be 10 topics on this subject. Then: help me come up with an idea for the course design.
And obviously it’s never what is actually used. Particularly towards the end: maybe with the first topic or two I’m like, okay, something could work with this, and then I refine those heavily. The topics 5, 6, 7, 8, 9, 10 that it gives me, I just completely discard, because I’ve refined the first two ideas and then fed that back into it, focusing on that area for 10 topics, and so forth.
So I guess in course design, I get it to quickly generate lots and lots of ideas, like lots of different mind map ideas, and I can just brainstorm with it: okay, how are we actually going to start this? It’s a really useful starting point.
[00:07:18] George: I like what you said about this being kind of iterative as well. You can really use it as something to talk to. We work remotely quite a lot of the time, so it’s really nice to just have something you can go back and forth with. Like, oh, I like that idea, can we work with that?
And then you can kind of have this two-way conversation, which I think a lot of people don’t get to so much when they use these tools. They ask it the question, it produces something, and that’s kind of the end. But you can really dive into having that kind of discussion with it, like you say: I really like those first two ideas, can we…?
[00:07:57] Lily: Yeah. So I can take those first two ideas and then I will spend some time myself to dig into them and be like, how can this take shape? And I go, oh, we want a middle idea, and I want it related to this, but I can’t yet refine it, I can’t yet nail it down myself. Whereas with the robots, I feel like they can really help get it nailed down.
[00:08:16] George: That’s really interesting, that high level structure. I suppose I haven’t actually written a course yet where the structure wasn’t quite so clear, so I’m really interested in using that when I come across something like that.
[00:08:31] Lily: With some of the courses, it was like, I don’t know, David kind of defined the course. He said, okay, yes, we’ll have a course on this, and I’m like, okay, how are we meant to make that a 10 module course? How is that meant to be a 60 hour course? And then I did a podcast with David about it, literally took the transcript of the podcast, gave that to the robots and said, you know, help me.
[00:08:54] George: Extract the points from it.
[00:08:56] Lily: And maybe there’ll be points in there where I’ve also kind of highlighted certain bits, like I really like this idea and I can see one topic being, and so like having a conversation with them and saying to them like, I wanna go in this direction. And then they go in the wrong one. And I’m like, no, this direction.
When I have done it like that, for that course in particular, talking to the robots led me to buying and reading a book, which then really helped me structure the course. What was the book? Oh, Weapons of Math Destruction. The robots came back at me saying, you know, this would be a good resource to look at for this course.
And then I’ve gone, oh, tell me more about this resource. And then I’ve looked into it, and then I’ve gone, you know, what this book actually looks like it could really help me, help inform me about the course. So I’ve then gone on holiday and read the book, and then come back from holiday and worked on the course.
[00:09:47] George: That’s really interesting ’cause a lot of times I’ve used it, I tend not to trust, or kind of just ignore the references or materials it suggests because they can be quite heavily hallucinated. It’s pretty interesting to hear that it’s actually come up with something helpful.
[00:10:02] Lily: It pointed me to a review of the book. I think it was like a four page review, which outlined the key points from the book. And I was like, oh, this sounds very, very relevant and very interesting. And so from there, I got the book. But the resources it suggests, I wouldn’t say they’re always good, I wouldn’t even say they usually are, but they’re getting better.
[00:10:25] George: Yeah. That’s really good to hear.
[00:10:27] Lily: I guess there are other areas in that kind of idea generation as well, beyond course structure. If I have to create a question or discussion points, it helps me iterate ideas for those discussion points, or for, okay, what should this question be on, or this STACK question; you know, I wanna have it interactive, and it helps with that.
[00:10:43] George: Yeah, exactly, I find it’s really good at coming up with points for discussion, and questions as well. And that’s mainly because it’s really good at just reframing things that you give it. And sometimes discussion just comes from having a good reframing or a slightly different angle on the topic, that kind of thing.
[00:11:05] Lily: Yeah.
[00:11:06] George: Or like, I want to explore this idea with this equation, but that equation isn’t quite right, and it’s like, oh no, that’s fine, here’s some other equations that might be a lot better because they have this, this, and this feature. And that’s suddenly then fantastic discussion.
[00:11:21] Lily: Yeah, nice.
I guess just one final one off the top of my head, though it does help me in more areas, I just can’t think of them all. It’s when I’ve got the courses, I’ve got the ideas, I’ve got the plans down, okay, we’re gonna use these materials, we’re gonna use this video, we’re gonna have these discussion points, and I’ve got this all down and now I have to put it into the correct structure.
I mean, I’ve got the structure decided, I just now need to physically pick the content up and…
[00:11:53] George: Write it in the…
[00:11:54] Lily: Yeah.
And this is where, again, I guess it’s like you and your LaTeX example, maybe. I can say to it, look, here’s an example of a course, here’s an example of a topic, this is the structure I want. Now here’s mine, could you put it into that structure? And then it, you know, goes, okay, you had discussion here before, now it’s there.
[00:12:14] George: Yeah. And the more times that you correct it or guide it towards that structure, the better it gets at doing it as well, I find.
[00:12:22] Lily: Yes. Yeah.
[00:12:24] George: No, that’s some, yeah, really good ideas for next time I do a course.
I suppose, from my perspective, there are a lot more places I’ve started to use these generative AIs as well. One of those is actually my coding work. I think this is, again, something that we both use for different purposes within coding. And something I’ve started using very recently is GitHub Copilot.
[00:12:51] Lily: Oh, yeah?
[00:12:52] George: Which I can use through my developer environment in VS Code. And it essentially works as, they call it an agent, where it’s literally a window next to your coding screen where you can ask it questions, you can ask it to do stuff, to write stuff, to help structure your work. And it makes the changes for you as if there is someone else working on your screen.
[00:13:23] Lily: That is nice. I’ve not used it.
[00:13:26] George: No, I thought you might find this interesting, because I know for a long time there was this process of: I want some help starting an app or writing some kind of analysis code, and you go to ChatGPT, it produces it, you copy it in, maybe it works, maybe it doesn’t. And you had to have this back and forth between different windows. Using this kind of Copilot within one developer environment, I have found really, really useful.
And it comes across as being a lot more accurate when it is suggesting changes or writing things for you.
[00:14:08] Lily: That’s definitely something I need to explore and try.
[00:14:12] George: So an example is I was starting to make some Shiny apps, which we both do, although I’m using it primarily as a survey kind of interface. So asking questions and gathering data from users.
[00:14:29] Lily: Yeah.
[00:14:30] George: And I spent a lot of time describing what I wanted and iterating on the design brief until I was comfortable that it was understanding what I wanted. Because this is not a usual use case for Shiny, although there are lots of reasons why I wanted to use Shiny for this.
And after some time back and forth, I can then say, okay, I think we’re ready to actually get a draft of this. And you can see it knows how to make a Shiny app, it knows that I want to build it module based, and it will go through and actually start creating not just code in a file, but file structures, populating different files in different places, utilities, scripts, whatever you want, in a way that gives me a really nice template to start working from.
And I think, again, this goes back to this kind of evening task almost, you know, the process of just getting something bare bones that needs to be populated can be quite arduous. And this kind of removed that obstacle.
[00:15:38] Lily: Yeah, nice. And presumably you then test it a fair amount.
[00:15:42] George: But this is what I’m saying, I was very impressed with the accuracy of what it can produce when it’s, in quotes, thinking.
[00:15:50] Lily: Yeah.
[00:15:51] George: And it seems to be working much more closely with you than just ChatGPT in a different window. In its chat window, it says, okay, going through your code, these lines are doing this, you want to remove that, so I’m gonna remove those lines. It’s telling you the steps of how it’s responding to your queries.
[00:16:10] Lily: Yes, this is definitely something I should try. I’ve used generative AI for coding in different ways. I mean, I’ve used it to help me with functions and whatnot. Particularly if I’m like, what’s that bit of grep code, you know, the regex, for me to get just that bit; it’s nice that it contextualises things for you and gets it for you.
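The kind of one-off pattern Lily describes asking for might look like this (an illustrative sketch; the log line and the pattern are made up for the example):

```python
import re

# A made-up log line: we just want the version number out of it.
line = "R-Instat build 0.8.4 released 2024-05-01"

# \d+\.\d+\.\d+ matches three dot-separated runs of digits,
# and \b keeps the match anchored to whole tokens.
match = re.search(r"\b(\d+\.\d+\.\d+)\b", line)
print(match.group(1))  # 0.8.4
```

The value of asking the model, as Lily says, is that it writes the pattern in context, so you get the extraction you meant rather than having to piece the syntax together yourself.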
Or if I’m like, oh, I need a sample script. You mentioned Shiny; say I need a sample Shiny script that just does one tiny thing, like creates a box plot, because I just need to check something in that code. So I can just say to it, okay, write me some quick Shiny code that has a table and a box plot in it. And then I can see, oh, that’s how you do that. Okay, got it. Now I can use that in my context.
I’ve found it’s also really good, particularly when writing R packages, at writing test files and documentation, which, as you say, are those arduous evening tasks otherwise. It completely streamlines them. It’s just a different ball game.
So take R-Instat, which is this open front end to R that we’ve been developing for 10 years. When David and Danny, the two founding directors of IDEMS, were writing the back end code for this, particularly Danny, it wasn’t being documented at the time. And that was 10 years ago, so it’s hard for them to remember and hard for me to decipher. We’re also actually going to get to the point where R code has changed so much in 10 years that we need to update it.
I’m kind of hoping that the robots will help with that. I’ll give them a few more years and then maybe they could update it. That will be obviously a huge process if we’re updating the kind of base code in this software to check that we’re not breaking anything.
But the point is that this code was written 10 years ago and there was not much documentation on it. There were a few comments here and there, but not much documentation. And documenting it could have been a job that took weeks. It did take hours, but it did not take the weeks it could have taken.
Because we’re talking 20,000 lines, about 400 functions, which needed documentation, some of them built 10 years ago, some built over the course of the last 10 years. And I could just feed them to the robots, get the documentation, read it to check, and then put it in. I can’t remember exactly how long it took, because I did it over a few evenings, but it was a couple of evenings as opposed to a couple of months.
[00:18:45] George: Yeah, I can imagine. And it’s kind of incredible when you put it like that. And I don’t think we should understate the ability to document and test because these are, when you want to produce kind of actual public facing software, that is a task which takes significantly longer than actually producing the code.
[00:19:06] Lily: Yes. And can very much be your second thought.
[00:19:09] George: And I find, particularly with testing, that it’s very good at considering edge cases or weird combinations of things that you don’t get to. And again, it’s this kind of iterative back and forth, like, I want to test this for this, and then it does this, and I’ve also done this and, oh, that’s good. Or how about this?
And you keep going back and forth. I’ve found that really quite helpful. And again, maybe at my stage, it’s not a time saver still, because I still want to learn how to test and document properly, manually so that I know what I’m looking for. But it’s really useful as that almost extra set of eyes, in a way.
[00:19:55] Lily: Yeah, absolutely. And then to catch things. I mean, through writing these tests for these 400 functions, some of them written, you know, 10 years ago, it found ways that we can improve them, it found like edge cases that we’re not actually covering that could one day cause a bug and things like that, ’cause it would test things and I’ll be like, why are you testing this? Your test fails. And then I’m like, oh no, it’s my function.
So it then helps actually improve the functions through testing, and through getting them to test because they’re testing additional things that I’ve not even thought of.
[00:20:28] George: On that note, the other thing it has brought to my attention, I don’t know if this exists in R, but in Python there’s a testing framework called Hypothesis, which is essentially designed to hunt down those edge cases in your tests. It will generate an assortment of completely randomised inputs to your functions, things you could never even dream of: strings with Unicode characters you’ve never seen, absurdly big numbers, negatives, decimals.
Things that you didn’t think could be entered by the user, because you’ve written the function and you have an idea of what it’s for. And it will track down, in a very systematic way, the inputs which actually cause issues or make your code fail.
And that’s not itself an AI thing, but it was brought to my attention because I was trying to write tests to make sure I could deal with any user string as inputs, and I wanted to find break points. So again, just having this extra set of eyes, this extra mind, was really useful for that.
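The idea George describes can be sketched with just the standard library. Hypothesis itself does this far more systematically, but the flavour is this (the `truncate_label` helper is hypothetical, made up for the illustration):

```python
import random

def truncate_label(s: str, width: int = 10) -> str:
    # Hypothetical helper under test: cap a label at `width` characters,
    # replacing the overflow with an ellipsis.
    if len(s) <= width:
        return s
    return s[: width - 1] + "…"

def fuzz_truncate(trials: int = 500, seed: int = 0) -> list:
    # A hand-rolled sketch of what Hypothesis automates: throw randomised
    # strings at the function, including Unicode you would never type,
    # and record any input that breaks the property len(result) <= width.
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        length = rng.randint(0, 40)
        s = "".join(chr(rng.randint(0, 0x10FFFF)) for _ in range(length))
        if len(truncate_label(s)) > 10:
            failures.append(s)
    return failures

print(len(fuzz_truncate()))  # 0: the property held for every random input
```

In real use, Hypothesis replaces the loop with a `@given(st.text())` decorator and, crucially, shrinks any failing input down to a minimal counterexample.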
[00:21:40] Lily: Nice. I don’t know if there’s an equivalent in R, but I should look to see if there is.
Another area, beyond coding and writing courses, where I find I use the robots is in writing. At the moment I’m writing reports on some work we’ve done. I’ve got a lot of GitHub issues related to the things to put in the report, say, so I give it the GitHub issues: this is stuff that we did, this is what we wanted to do, this is the discussion about it, and this is what we did.
And then they help me turn this information that we already have there, or a lot of it, into something that reports on what we did and why. Some of it we had in more verbal conversations, so I have to dig back into my memory.
[00:22:30] George: Yeah, I have the same thing with grant writing or coming up with product ideas, which is something I do with Kate. And I find there are two things: there’s all the information, which, as you’ve mentioned, can be in lots of different places, but also a lot of points that you need to satisfy. If you’re writing a grant, you know, you’ll have this massive document that goes through, okay, your methodology needs to deal with your assumptions, your models, your risk mitigation strategy, your inclusion policy, and so on.
And actually transforming your information into that format, to make sure you’ve ticked off all the points, is something I’ve found it really useful for. Because doing it manually, you’re liable to miss something, or underdevelop something, or not link something to something else. And when you’re writing things like methodologies or ideas, and I assume even reports, there is a structure that you need to satisfy, which, even going back to our courses conversation, is something these AI models are very good at helping you conform to.
[00:23:39] Lily: Yes, yeah, absolutely. And it’s not just there that it helps in my writing; it helps in other places, such as, okay, I know what I want to say and I know how I want to lay it out, but I can’t think of the right word, I need to be able to say it better. It’s very good at making my rambles a lot more succinct.
[00:24:04] George: Yeah. I guess the final thing for me is actually, everything we’ve described is kind of quite big tasks. And sometimes I actually find myself just, you know, something I asked yesterday, what’s the difference between a deliverable and a milestone in grant writing?
These are obviously things you can Google, and you can probably find a good resource, but I find these models are great at gathering all that information together for you and just presenting it: a deliverable is this, a milestone is this, and then it’ll do a comparison, the differences. For really small questions like that, you can get really clear answers very quickly.
It’s something I do a lot, related to work and not related to work: just ask it to explain something, explain an idea, things that you maybe used to Google. What is this, what is that? Just being inquisitive about something. And now my default is switching to actually opening the ChatGPT app and asking the question there.
[00:25:12] Lily: And I guess that’s ’cause it can contextualise it for you in the app. But then, as long as you’re aware of the dangers, I guess: hallucinations.
[00:25:20] George: But a lot of this stuff I’m not doing to be accurate; I just want to find something out.
[00:25:26] Lily: That is fair. And in your example of, you know, what’s the difference between this and that, it can then help your writing, or at least show you how it sees the difference in what it writes, if you ask it to write a grant versus a report.
[00:25:43] George: One of them was synonyms for “building” beginning with “s”; this is something I’ve asked it. It doesn’t just give the answer, it knows the context I was working in. So it knew that I meant “building” as a verb, not a noun, for example.
[00:26:01] Lily: Yeah.
[00:26:02] George: And this was actually, in the context of constructing models. And it’s like, okay, then I can give these synonyms.
[00:26:09] Lily: I mean, yeah, now that you say that, I remember when I was on holiday in Canada, I told it, this is where I am, write me an itinerary for the day, just for fun. It wasn’t the smartest itinerary, it wasn’t the most logical itinerary in terms of where to go, but it was quite a fun challenge to then follow the robot’s suggestion for the day and be like, hmm, I don’t know about this.
[00:26:33] George: You tell me, so I’m gonna blindly follow.
[00:26:35] Lily: Exactly. No, that was part of the game, to blindly follow the robot.
Anyway. This has been a really interesting discussion, so thank you very much, George. And it’s really nice to see how you use the robots.
[00:26:46] George: Yeah, and I have to thank you for, I guess, coercing me into starting to use them. You were very keen, and I was like, no, I want to keep using my own brain, no, no, no. But there are certainly things where I have found it improves my workflow in ways that don’t make me feel like I’m not in control.
[00:27:06] Lily: Nice.
[00:27:07] George: So, yeah, thank you for that. And yeah, I’ve really enjoyed our conversation today.
[00:27:10] Lily: Yes. Yes, me too. Thank you very much. I’m sorry for coercing you. Now you’re one of us, the people that can’t use their minds. You should have listened to the podcast I did with David a while ago; it wasn’t quite on “I’m not even using my brain anymore”, it was on feeling like my brain isn’t being used in the same way.
[00:27:34] George: But I think it’s just not used in the same way, or the way that you expect it to be. You’re still doing, or should be doing, a lot of judgment and iteration and those kinds of things.
[00:27:45] Lily: Yeah. Which to me is why it’s important to get these ideas used in education, so that children can use it, try it out in their work, play with it, and see, okay, where does it help them, where doesn’t it? And kind of use it in a low cost setting.
[00:28:04] George: Low stakes.
[00:28:04] Lily: Low stakes, yes.
[00:28:05] George: Maybe that’s a next conversation for us.
[00:28:09] Lily: Yes. Absolutely.
[00:28:10] George: I’m really interested in these issues of just proliferating these things. So yeah, I quite look forward to talking about that.
[00:28:20] Lily: Great, me too. Anyway, thank you very much.
[00:28:23] George: Thank you, Lily.

