Interview: Mhairi Aitken, Education Technology Summit Keynote
Artificial intelligence's incredible upsurge in 2023 started conversations in many sectors - not least the education technology sector, which has maintained an ambivalence towards the emerging technology. AI ethics researcher Dr Mhairi Aitken explains the perception, risks and application of AI in education technology ahead of her keynote speech - AI in edtech - at this year's Education Technology Summit.


Tell us about your work as an AI ethics researcher

Artificial intelligence (AI) is increasingly playing important roles in all areas of society and impacting all areas of our lives. To give a few examples, it’s being used in everything from personalising shopping experiences, to diagnosing medical conditions, to informing decisions about who gets access to services in both the public and private sectors. So while it may feel that we are suddenly hearing about AI all the time, it’s been part of our daily lives for quite some time – bringing big opportunities but also real risks.

It’s important to be clear that AI isn’t just evolving of its own accord. The ways that AI affects our lives and our society are a reflection of very real, concrete and controllable decisions being made about how these technologies are designed and developed, what purposes they are used for, and whose voices and interests are reflected within those decision-making processes.

A big focus of my work is trying to ensure that the benefits of AI are equitably distributed across society. We need more inclusive development processes to realise the tremendous opportunities that AI presents: AI isn’t just about big tech. I get really excited about what can be achieved when we give the public a voice in shaping future directions in AI.

The news is full of stories about the threats posed by AI - how worried should we be?

AI has been around for many years but the narrative around it has changed quite dramatically over the past year. We’ve seen a lot of headlines that AI may be developing its own form of intelligence and even posing an existential threat to humanity.

Those claims are very scary. They do worry me, but not because they’re true …

What scares me is that these messages are distracting governments and regulators at a critical moment when they are grappling with developing effective approaches to regulating AI. A lot of these claims come from voices in the big tech community who have vested interests in this area.

The headlines suggest that AI itself might be shaping its own destiny instead of being a reflection of the choices being made by people and organisations. For big tech these claims are being used to distract from discussions about the accountability of developers and instead to focus attention on hypothetical, far-fetched risks.

What do you see as the real risks and their relationship with the AV and edtech industries?

There are a lot of risks around AI. There are very present and well-evidenced risks relating to bias in the datasets AI models are trained on. These biases feed into decisions that have an impact on our lives, often leading to harmful outcomes for marginalised groups. There are risks around increasing misinformation through generative AI, which could have serious consequences for future democratic processes. And there are really important risks regarding the environmental impact of these large models.

Industry, people and organisations need to be held accountable for the decisions they are making. I think a big part of the problem stems from the ongoing lack of diversity in the AI industry, which can often lead to systems being developed without taking account of the ways that different people may be impacted, or the extent to which the technologies are experienced equitably.

To take an AV-related example: facial recognition is increasingly being used in the public sector, including the policing of public spaces, and studies have consistently demonstrated bias in how it operates. Facial recognition tends to be reasonably accurate at identifying the faces of white men, but much less so for people of colour or women, and particularly for black women. This is important when we think about who is experiencing the negative impacts and risks of these technologies and who is experiencing the benefits, but it’s also a reflection of who is designing these systems, and who is missing from those important conversations.

What can we expect from your session at the Education Technology Summit?

I hope to raise awareness about the importance of these issues and to encourage people to think about how they might relate to their own areas of work. Ethics considers not what we can do but what we should do. Importantly, different people will have different views on what we should or should not do, and this will also vary depending on context. That’s why it’s so important to involve a diverse range of people and perspectives in shaping the future of AI.

Getting it right is not just about avoiding negative impacts; it’s about maximising the value of digital innovation. AI can have incredibly positive impacts, but that depends on who controls the future of AI. I’m passionate about engaging a diverse range of voices and interests in discussions around the roles of data and AI in society. That’s what really excites me, and what I’d like to share at the summit.

The Education Technology Summit runs from 10:30-14:30 on Thursday 1 February in CC5.1. Book your place.
