Responsible AI for Lecturers

Turning AI hype into positive pedagogy

Responsible AI for Lecturers is our response to a need identified by academics worldwide: support in grasping both the immediate problems and the potential gains of AI in higher education.

The problems are very much here and now. Many lecturers have raised concerns about students using AI to ‘cheat’, disengaging from their learning because it can produce assignments on demand. With traditional assignments, this is a genuine risk. Trying to prevent it with tools that identify AI-generated content is not advisable, as it will lead to an arms race between students using generative AI tools and lecturers using AI-detection tools. Not only are lecturers likely to lose that race, but minority and second-language students are likely to be caught in the crossfire.

Stepping back from the problems opens up a world of possibility. Changing the nature of assignments creates the opportunity to reimagine our education and deepen learning based on the tools that will be commonplace throughout our students’ working lives. The course aims to give lecturers from all disciplines enough of a layman’s understanding of responsible AI to have the confidence to adapt elements of their courses to this new reality and to be prepared for the further advances that are on their way.

(We can deliver the course in French, Spanish, German, Italian and Kiswahili as well as English.)

Which type of course is right for you?

The four course formats are designed to meet different individual and institutional requirements.

Course outline

The five course modules are outlined below.

The first module aims to demystify data science by presenting how it emerged from successfully exploiting large volumes of data in ways that were not possible with traditional statistical methods. It also introduces some of the ethical challenges that are emerging and pushing society towards regulating AI responsibly. Case studies will be used to highlight both impactful applications and AI scandals. The aim is to give educators the confidence to communicate effectively about AI, with realistic expectations of its power and limitations.

AI is advancing rapidly and will continue to evolve. It is therefore unsustainable for educators to try to prevent students from using AI in their assignments through new examination conditions or reliance on detection models. In other words, treating AI as a foe is likely to be unsuccessful. Hence the only option we can wholeheartedly recommend is to treat AI technologies as a friend: we advocate that lecturers actively search for opportunities to enhance and facilitate learning using AI. This means openly and actively encouraging students to use AI in their work.
This shift in perspective aligns with the reality of an AI-enabled future. Educational assessments should reflect the real world, and hence the questions asked in these assessments need to change to mirror the advances. Treating AI as a foe risks an education with little relevance to the skills students need in an AI-enabled world, because they will have learnt skills that are no longer required: AI could have done that task, and in the future it will.

Once lecturers embrace AI as part of the education they deliver, the focus shifts to reshaping the questions used in assessment. If an assignment can be answered by tools like ChatGPT, then the wrong question is being asked. In the framework of Bloom’s Taxonomy, changing the question means moving the assessment beyond the foundation levels. AI can handle the foundation levels, allowing educators to focus on the higher-order skills: analysing, evaluating and creating. The aim of the assessment then becomes to see whether the student can effectively analyse, evaluate and create. Can the student improve AI output? This might be achieved by critiquing text from tools like ChatGPT. Learning how to critique and understand something then becomes the more examinable skill. Pedagogically, these are more valuable skills, leading to a better education.

Current tools can not only enhance education but also equip students with the skills needed in an AI-driven world. Practical, actionable illustrations will demonstrate the potential of the current tools. For instance, a practical will support lecturers in trying out ChatGPT themselves and in considering how their students can be encouraged to use it within their courses. This section aims to be a hands-on guide for educators, providing them with the know-how to leverage ChatGPT and similar tools effectively within their teaching.
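For lecturers, or the learning technologists supporting them, who want to go a step beyond the chat interface, the short sketch below shows one way such an exercise could be scripted. It is illustrative only and assumes the OpenAI Python client with an API key set in the environment; the model name, prompts and draft answer are placeholders rather than course material.

  # Illustrative sketch: ask a chat model to critique a draft answer,
  # mirroring the "can the student improve AI output?" exercise above.
  # Assumes the `openai` Python package is installed and OPENAI_API_KEY is set;
  # the model name, prompts and draft answer are placeholders.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  draft_answer = "Photosynthesis is when plants eat sunlight to make food."

  response = client.chat.completions.create(
      model="gpt-4o-mini",  # illustrative model choice
      messages=[
          {"role": "system",
           "content": "You are a critical reviewer. List the factual errors, "
                      "omissions and vague claims in this draft answer."},
          {"role": "user", "content": draft_answer},
      ],
  )

  print(response.choices[0].message.content)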

The course finishes by examining the ongoing development of cutting-edge tools and technology. Given the speed of development, ideas presented here may become current tools in a matter of months. For example, one emerging change which we would argue is still just in the future is the AI personalised tutor. Khanmigo, a platform integrated into Khan Academy, is piloting this approach, and other options are already accessible. Discussions are also underway about expanding these technologies into educational institutions. AI teaching assistants for educators are also in development, with the added benefit that they can reduce teachers’ administrative burdens.

Who is the course for?

The course is designed for higher education academics in any subject.
It covers responsible AI in conceptual ways that can be understood by non-specialists. It is equally applicable to the Arts, Sciences or Humanities.

About IDEMS

IDEMS is a not-for-profit social enterprise that uses the mathematical sciences for social impact. Our work includes initiatives in social development, climate change and agroecology as well as education. Over the last 18 months, members of our team have been drawn out of their technical comfort zone into the philosophical debates and questions surrounding responsible AI. Universities, research centres, charities, corporates and individuals that we interact with have all reached out for support in understanding the current AI explosion, its implications and particularly the ethical concerns. As technical experts who work substantially in the social space, with a heightened awareness of ethical challenges, we find ourselves in a niche position: we advocate for responsible AI use, and we can demystify its transformative potential while explaining how hard it is to use it ethically.
In short, our position is the following: we do not believe that well-intentioned actors should slow down their efforts to use AI for tasks to which it is suited. They should not use it for tasks to which it is not well suited; where it is well suited, they should accept that we as a society do not yet fully know how to use AI responsibly, and so should invest extra effort to understand the implications and to build in the human feedback mechanisms that ensure undesirable consequences are caught and addressed.

Certification

Interacting actively with the moderated forums in the course will earn participants a certificate of participation.
For participants with personal facilitator support, a certificate of completion can be obtained through a multi-dimensional assessment of their understanding by the facilitators and content experts. This is a mastery process: facilitators will work with determined participants until they reach a fixed level of mastery.

Benefits for you

You will gain:

  • A solid grasp of the advantages and risks of the application of AI in education;
  • Tools for rethinking your teaching practice to harness AI as a force to stimulate and stretch student learning;
  • Confidence to have open, informed, constructive discussions with your students about the potential and pitfalls of AI in their learning journeys and career goals;
  • Support from a community of peers to discuss and develop together.

Benefits for your organisation / institution

Your organisation or institution will gain:

  • Leadership from you and/or your team in a constructive pedagogical approach to the challenges and opportunities for AI across disciplines and curricula;
  • A champion for forward-thinking technology use, rather than a defensive approach to innovation;
  • Shareable tools and techniques to support development across the faculty or team.