Will AI be a force for good or compound inequalities?
Artificial Intelligence (AI) will help to combat climate change, lead to new medicines, enable governments to make better policies and free up your time. AI is also going to take your job, reject your loan application, and make it more likely you’ll be arrested or hit by a driverless vehicle. All of these are possible, but not inevitable, results of putting AI systems into practice. It’s people who build these systems, after all. So we’re committed not only to supporting what’s necessary, but also to making it advantageous for organisations to develop AI responsibly.
This means taking into account all the relevant data considerations that guard against unintended consequences, prejudice and discrimination – such as those documented in a growing history of misuse cases. It also means taking into account the ethical (or contextual, philosophical) considerations that illuminate the wider contexts in which AI systems operate – without which AI’s worst cases won’t be avoided and its best cases won’t be achieved.
Learning to live with AI
Data scientists are building AI systems. Philosophers and policy makers are trying to keep pace with the implications of their development. Consumers – all of us – are using and being served by AI, often without yet being aware of it.
As mathematicians, statisticians and responsible digital developers, we’re keen to use AI tools when they contribute to social development. As social scientists working in international development, we’re also very aware that understanding local contexts and working alongside local people is essential for any initiative to be equitable, effective and sustainable. In our experience, taking the time to factor in these local contexts, rather than ignoring them, helps to increase the likelihood of a good fit for an initiative and to identify new insights from one place that can be usefully translated elsewhere. This approach also tends to result in AI and digital initiatives augmenting rather than supplanting existing approaches, as we’ve seen in our work in parenting, climate and education.
Supporting responsible business
Investment in AI is already transforming the data and tech industries. Legislation is also imminent around the world to establish accountability and compliance frameworks for the development and deployment of AI. This is in part to prevent the kinds of scandals and systemic failures that have led to many of the problems and injustices seen to date. Our training and education work aims to prepare and empower people to work effectively in this evolving environment.
We’re also keen to support appropriate certification and quality assurance. This is empowering rather than restrictive: it provides AI developers with tools and procedures to accelerate responsible development, to reduce the risk of mistakes and misinterpretations and, importantly, to evidence the design, training and testing of AI products.
Our work in this area currently includes:
- Supporting lecturers in higher education to transform their teaching approaches in relation to AI.
- Working with companies to increase responsible uses of AI so ethics and enterprise are aligned rather than in opposition.
- Producing training and education resources for data scientists and related professionals, e.g. How Data Lies, a self-learning course developed with the Alan Turing Institute.
- Supporting and developing research methods.
- Advocating for responsible AI development in business and climate action.