Stirling’s AI experts to join multi-million pound project


Artificial intelligence experts at the University of Stirling will play a key role in a project supported by £12m in new funding from Responsible AI UK (RAi UK).

Computing scientists Dr Sandy Brownlee and Dr Leonardo Teonacio Bezerra, of the Faculty of Natural Sciences, are involved in a £3.5m initiative announced by RAi UK during the CogX conference in Los Angeles.

Led by the University of Glasgow, the Participatory Harm Auditing Workbenches and Methodologies (PHAWM) project seeks to tackle emerging concerns about generative and other forms of AI currently being built and deployed across society.


RAi UK is led from the University of Southampton and backed by UK Research and Innovation (UKRI), through the UKRI Technology Missions Fund and EPSRC. UKRI has also committed an additional £4m of funding to further support these initiatives.

Dr Sandy Brownlee, a Senior Lecturer in Computing Science at the University of Stirling, said: “As researchers in AI we are mindful of where it works well and the kinds of mistakes that it can make. As these systems see rapid uptake, we're really keen to ensure that AI is used responsibly.

“We're excited that Stirling's expertise in optimisation approaches that can explore trade-offs between things like accuracy and fairness, and our expertise in explainable AI systems, mean we can contribute to PHAWM's efforts to ensure AI is used for the benefit of people.”

The PHAWM project brings together 25 researchers from seven leading UK universities with 23 partner organisations.

Dr Leonardo Teonacio Bezerra, a Lecturer in AI/Data Science at the University of Stirling, said: “AI is a technically challenging field where it takes a lot of ingenuity to devise solutions. It is not always the case, though, that the applications and implications of those solutions reflect the nature of the people who develop them, often our students themselves.

“We are happy to discuss AI, its benefits and its harms from a perspective where both society and AI creators participate, in hopes that our next generation of students understand AI from a social and technical perspective.”

Dr Leonardo Teonacio Bezerra

The University of Glasgow will lead the consortium, with support from colleagues at the Universities of Edinburgh, Sheffield, Stirling, Strathclyde, York and King’s College London.

Together, they will develop new methods for maximising the potential benefits of predictive and generative AI while minimising their potential for harm arising from bias and ‘hallucinations’, where AI tools present false or invented information as fact.

The project will pioneer participatory AI auditing, where non-experts including regulators, end-users and people likely to be affected by decisions made by AI systems will play a role in ensuring that those systems provide fair and reliable outputs.

The project will develop new tools to support the auditing process in partnership with relevant stakeholders, focusing on four key use cases for predictive and generative AI, and create new training resources to help encourage widespread adoption of the tools.

Dr Simone Stumpf, of the University of Glasgow’s School of Computing Science, the project’s principal investigator, said: “By the project’s conclusion, we will have developed a robust training programme and a route towards certification of AI solutions, and a fully featured workbench of tools to enable people without a background in artificial intelligence to participate in audits, make informed decisions, and shape the next generation of AI.”

Funding has been awarded by Responsible AI UK (RAi UK), and the projects form the pillars of its £31m programme, which will run for four years.

Since its launch last year, RAi UK has delivered £13m of research funding. It is developing its own research programme to support ongoing work across major initiatives such as the AI Safety Institute, the Alan Turing Institute, and BRAID UK.

RAi UK is supported by UKRI, the largest public funder of research and innovation, as part of government plans to turn the UK into a powerhouse for future AI development.

Professor Gopal Ramchurn, CEO of RAi UK, said: “The concerns around AI are not just for governments and industry to deal with – it is important that AI experts engage with researchers and policymakers to ensure we can better anticipate the issues that will be caused by AI.”
