Responsible use of AI in museums to be explored after backing from £100m fund

The University of Sheffield has been awarded a share of government funding to research underlying biases in museum collections

Two new AI research projects have been backed with a share of £100m in government funding.

At the University of Sheffield, the first funded project sees Dr Joanna Tidy lead a team based in the university’s Department of Politics and International Relations. It will investigate the responsible use of AI in the museum and heritage sector, specifically in relation to biases in AI which stem from the colonial history of museum collections.

The project is in partnership with the Royal Armouries, the UK’s national museum of arms and armour.

Dr Tidy said: “Museums and heritage institutions are increasingly using AI tools such as Machine Learning, Natural Language Processing, and Machine Vision to enhance visitor interaction with their collections.

“However, a well-recognised problem with AI is bias, including how AI algorithms reproduce skewed underlying data. For museums and heritage institutions, a challenge for responsible AI use lies in how underlying biases in museum collections, such as those rooted in colonial origins and histories, are reproduced through AI data processing and outputs.

“It is a crucial time to be defining what the responsible use of AI can and should look like for different settings, and we need to work across academic boundaries and engage with a wide range of applied expertise to explore ways forward.”

The second project sees Dr Denis Newman-Griffis lead a team from the University of Sheffield’s Information School and Department of Philosophy to work with organisations across public, private, and third sectors to “build shared learning, values and principles for responsible AI, enabling best practice development, helping to organise information and supporting decision making.”

Partners on this project include the British Library, Sheffield City Council, the multinational data science consultancy firm Eviden, and the Open Data Institute through the Data as Culture programme.

Dr Newman-Griffis said: “This project will help us learn what ‘responsible artificial intelligence’ really means for teams and organisations dealing with the changing AI landscape today.

“Whether it is in helping to organise and share research and heritage materials, informing data-driven policymaking in local government, or mining troves of data for business insight, using AI responsibly needs a clear understanding of who is involved and what matters to them around AI use. Our research will help map out new directions for making responsible AI a living, breathing practice and lay the groundwork for other organisations to build their own policies on AI much more easily.”

Funding for the two projects comes from the Arts and Humanities Research Council (AHRC), through the Bridging Responsible AI Divides (BRAID) programme.

In addition to the scoping projects, AHRC is confirming a further £7.6 million to fund a second phase of the BRAID programme, extending activities to 2027/28. The next phase will include a new cohort of large-scale demonstrator projects, further rounds of BRAID Fellowships, and new professional AI skills provision, co-developed with industry and other partners.