021 – Where Things Have Gone Wrong with AI

David Stern and Dr Lily Clements dig deeper into the Dutch childcare benefits scandal, its causes and consequences. They analyse how it could have been avoided or mitigated, and highlight the need for more work on the responsible use of AI, particularly the importance of the human element when applying AI to societal issues.

020 – Research and Impact in Challenging Contexts

IDEMS supports researchers at a number of research institutions in low-resource environments. In this episode, David and Lucie discuss some of the challenges these researchers are up against, and how unique opportunities can arise in such contexts.

019 – Is Nepotism always harmful? Part 2

In this second part of the discussion on nepotism, Santiago questions David further on whether the win-win scenarios presented in the first part come at a cost to someone else, considering this in both local and global contexts. They delve deeper into other types of opportunities and how they arise, and consider how organisations need to balance creating opportunities with seizing them.

018 – Is Nepotism always harmful? Part 1

Santiago Borio interviews David Stern on the issue of nepotism. They analyse a common definition of the term, and look into examples where it is harmful and examples where it may even be necessary. They consider how IDEMS is a nepotistic organisation and what that means in a wider context. This is the first part of a two-part episode on a complex issue that can have deep social consequences.

017 – Responsible AI: How Data Lies

As society embraces AI, interpreting its results can be a matter of life and death. Lily and David consider how we can be misled by data in general, including the results of AI models. They discuss how misinterpreting data often comes down to misunderstanding the limits of what data can tell you.

References:
Simpson’s Paradox: https://en.wikipedia.org/wiki/Simpson%27s_paradox
How to Lie with Statistics by Darrell Huff

016 – Responsible AI: Regulation

David and Lily consider regulation around the development of AI technologies. They discuss Amazon’s gender-biased AI recruitment debacle, and why many big companies are embracing regulation. Can regulation be designed to protect society at large from the dangers of irresponsible AI, whilst ensuring that the right companies benefit and the wrong ones are disadvantaged?

015 – New Year’s Resolutions

In this festive episode, Lucie and David reflect on the idea of New Year’s resolutions through the lens of IDEMS’ three mechanisms for monitoring and evaluation: guiding principles, pathways of change and value creation stories.

014 – IDEMS Blunders of the Year 2023

In the spirit of celebrating failures as learning opportunities, and in accordance with a burgeoning festive tradition, David and Santiago discuss various self-defined “blunders” that members of IDEMS staff have made over the last 12 months. There can be only one winner of the IDEMS Blunder of the Year 2023!

013 – Responsible AI: Festive Special

Can how you spend the festive season influence the future of entertainment? Following the release of Alan Warburton’s film highlighting the potential effects of AI on the entertainment industry, Lily and David discuss what role AI could play in our consumption of films and videos in the future, and the potential implications for society.

Watch Alan Warburton’s film, The Wizard of AI, here: https://vimeo.com/884929644

012 – Fundamentally Profitable

Santiago Borio interviews David Stern on the concept of being “fundamentally profitable” and how and why IDEMS prioritises this over maximising profit. They explain how this idea and its implementation have evolved over the years, and present some recent challenges IDEMS has faced in remaining fundamentally profitable.