Led by Professor Markus Gabriel from the Institute for Philosophy at Bonn and Dr Stephen Cave from the Leverhulme Centre for the Future of Intelligence at Cambridge, the project, ‘Desirable Digitalisation: Rethinking AI for Just and Sustainable Futures’, places ethical principles at the heart of AI development.

The new research project comes as the European Commission negotiates its Artificial Intelligence Act, which aims to ensure that AI becomes more “trustworthy” and “human-centric”. The Act will require AI systems to be assessed for their impact on fundamental rights and values. The researchers on the Desirable Digitalisation project will collaboratively investigate the many questions that arise from these plans, such as: What exactly does a “human-centric” approach to AI look like? How can we meaningfully assess whether and how AI systems violate fundamental rights and values? And how can we raise awareness of discriminatory practices, and of how to stop them?
Carla Hustedt, director of Stiftung Mercator’s Centre for Digital Society, explains: “The socio-technological nature of AI systems requires us to break out of silos in multiple ways: we need interdisciplinary, international research, as well as cooperation between scientific actors and actors from business and civil society. The project seeks to do exactly that.”
The Desirable Digitalisation project is divided into two parts. In the first part, researchers will investigate intercultural perspectives on AI and fundamental rights and values. As Dr Cave explains: “In order to understand the potential effects of algorithms on human dignity, we need to look beyond the code and draw on lessons from history and political science.” This part of the project will ask questions from two perspectives: anthropological (How will our idea of ‘the human’ influence and be influenced by digital technology?) and intersectional (How do the structural injustices of the past shape today’s technology and its influence on fundamental rights and values?).
The Cambridge and Bonn teams will work not only with colleagues across Europe, but also with teams in Asia and Africa. As Professor Gabriel points out: “Irrespective of our specific cultural world-views, these new technologies challenge our idea of ourselves as human beings.” The project therefore investigates foundational, anthropological questions concerning the human in the digital age. How do different ideas of the human shape different cultures’ views of desirable digitalisation?
In the second part of the project, ‘Designing AI for Just and Sustainable Futures’, researchers from both universities will work with the AI industry to develop design and education principles that put sustainability and justice at the heart of technological progress. Professor Aimee van Wynsberghe, Humboldt Professor at the University of Bonn, will lead the Bonn team in this second part of the project, contributing her expertise on sustainability: “Sustainability in all its dimensions – social, ecological, economic and technological – is a vital value in designing AI. Only by taking it into account can these technologies improve our lives and our world.”
The five-year project will start in April 2022, with the first of its biannual conferences taking place in early 2023. The core team of seventeen researchers at Cambridge and Bonn, as well as visiting professors, will work closely with a wide range of national and international partners.
Contact:
Jan Voosholz
University of Bonn
Institut für Philosophie
Email: voosholz@uni-bonn.de
Dr Kanta Dihal
University of Cambridge
Leverhulme Centre for the Future of Intelligence
Email: ksd38@cam.ac.uk
David Alders
Stiftung Mercator
Centre for Digital Society
Email: david.alders@stiftung-mercator.de