Events
(Dr. Oliver Braganza) Oliver Braganza will present and comment on Beale, Battey & MacKay’s (2020) paper “An unethical optimization principle”. Zoom link: https://uni-bonn.zoom.us/j/94884157465?pwd=VW1SdnVDeEdqa3VOeUJyUjkwZ25Idz09 ID: 948 8415 7465 Password: 731091
"Cause for special concern" (Dr. Uwe Peters) Focusing on the epistemological issue of how we may detect algorithmic biases and recognize their harmfulness, I argue that algorithmic bias against people’s political orientation differs from algorithmic gender and race biases in important ways. The reason is that there are strong social norms against gender and race biases, but this is not the case for political biases. Political biases can thus more powerfully affect individuals’ cognition and behaviour. This increases the chances that they become embedded in algorithms. It also makes it harder to detect and eradicate algorithmic political biases than gender and race biases even though they all can have similarly harmful consequences. Algorithmic political bias thus raises hitherto unnoticed and distinctive epistemological and ethical challenges.
Full Title: "How is Artificial Intelligence Changing Science? A Research Project" (Prof. Jens Schröter) Link to the project: https://howisaichangingscience.eu/
"Philosophical zombies, data and the eternal question of the nature of qualia" This Talk is part of the AI Research Group (Dr. Johannes Lierfeld) In the age of artificial intelligence our world view is increasingly mechanistic. Reductive materialism seems to be able to answer everything, since most aspects of our lives appear to be representable in data. Cognition is thinking, thinking is brain activity, brain activity is either electrical or metabolic, and both forms of activity can be measured – hence, cognition can be measured. Moreover, the term "mind reading" suggests that artificial intelligence systems can also predict our minds. Recommender systems even anticipate the users next item of interest, and they do so with remarkable accuracy. However, it is nothing but interpretations.
"Rethinking computers, learning, and gender difference at MIT in the 1980s" (Dr. Apolline Taillandier) This paper explores the “critical” AI projects developed around the Lego community at MIT in the mid-1980s. While a rich scholarship studies how programming and AI were made masculine, little has been said about those AI practitioners who drew on literary criticism and feminist epistemologies with the hope to overcome the “technocentric stage of computer discourse” and undo gender hierarchies underlying computer cultures and programming experimental standards.
Full Title: "Evaluating Fairness in the Framework of a Trustworthiness Certification of AI Systems" (Dr. Sergio Genovesi, Dr. Julia Maria Mönig) Current publications on AI and fairness show that there is a need for a clear definition of fairness and that an ethical understanding of fairness exceeds the mere de-biasing of data and code. In this talk we make use of the interdisciplinary competence of our consortium and start from different definitions and understandings of "fairness". We are interested in those with regard to the certification of trustworthy AI. We will discuss which of the presented understandings of "fairness" can be operationalized in order to be able to certify what might be "fair". To illustrate, how a "fairness" certification can look like, we will discuss the use case of a credit loan algorithm, considering different fairness metrics from an ethical perspective.
Robo Ethics (Part of the AI Research Group) (Dr. Johannes Lierfeld) Zoom link: https://uni-bonn.zoom.us/j/94884157465?pwd=VW1SdnVDeEdqa3VOeUJyUjkwZ25Idz09 ID: 948 8415 7465 Password: 731091
"Examples and Implications for Systems Engineering" (Part of the AI Research Group) (Prof. Dr. Wolfgang Koch, FKIE) “Intelligence” and “autonomy” are omnipresent in the biosphere. Before any scientific reflection or technical implementation, all living creatures fuse sensory impressions with learned and communicated information. In this way, they perceive aspects of their environment in order to act in accordance with their goals. In the complex technosphere, cognitive machines support human intelligence and autonomy via artificially intelligent automation, i.e. 'cognitive machines', by which they can increase their capabilities far beyond natural levels. Which requirements of systems engineering need to be fulfilled so that such machines take account of human beings using them as a responsible person?
"Why comparisons between the transparency of artificial intelligence and human cognition are problematic" (Dr. Uwe Peters) Artificial intelligence (AI) algorithms used in high-stakes decision-making contexts often lack transparency in that the internal factors that lead them to their decisions remain unknown. While this is commonly thought to be a problem with these systems, many AI researchers respond that we shouldn’t be overly concerned because empirical evidence shows that human decision-making is equally opaque and isn’t usually required to be more transparent. I argue that the empirical data on human cognition that are claimed to support this equal opacity view don’t sufficiently support it. In fact, the equal opacity view rests on a narrow, selective, and uncritical survey of relevant psychological studies.
"AI systems and human cognition are not equally opaque" (Dr. Uwe Peters) Zoom link: https://uni-bonn.zoom.us/j/69810017849?pwd=OTV5eTcvMWFBOUh2MDBwTFFCdW9zQT09 ID: 698 1001 7849 Password: 764216
(Dr. Charlotte Gauvry) Zoom link: https://uni-bonn.zoom.us/j/69810017849?pwd=OTV5eTcvMWFBOUh2MDBwTFFCdW9zQT09 ID: 698 1001 7849 Password: 764216
"Clarifying the concept and evaluating the evidence (invoked by philosophers)" (Dr. Uwe Peters) Zoom link: https://uni-bonn.zoom.us/j/69810017849?pwd=OTV5eTcvMWFBOUh2MDBwTFFCdW9zQT09 ID: 698 1001 7849 Password: 764216
"Goodhart’s law as an emergent feature of complex goal-oriented systems" (Dr. Oliver Braganza) Zoom link: https://uni-bonn.zoom.us/j/69810017849?pwd=OTV5eTcvMWFBOUh2MDBwTFFCdW9zQT09 ID: 698 1001 7849 Password: 764216
(Dr. Johannes Lierfeld) Zoom link: https://uni-bonn.zoom.us/j/69810017849?pwd=OTV5eTcvMWFBOUh2MDBwTFFCdW9zQT09 ID: 698 1001 7849 Password: 764216
(Dr. Apolline Taillandier) Zoom link: https://uni-bonn.zoom.us/j/69810017849?pwd=OTV5eTcvMWFBOUh2MDBwTFFCdW9zQT09 ID: 698 1001 7849 Password: 764216
Full Title: "Virtual reality induces symptoms of depersonalization and derealization" (Dr. Niclas Braun) Zoom link: https://uni-bonn.zoom.us/j/69810017849?pwd=OTV5eTcvMWFBOUh2MDBwTFFCdW9zQT09 ID: 698 1001 7849 Password: 764216
(Dr. Sergio Genovesi and Dr. Julia Mönig) Zoom link: https://uni-bonn.zoom.us/j/69810017849?pwd=OTV5eTcvMWFBOUh2MDBwTFFCdW9zQT09 ID: 698 1001 7849 Password: 764216
Full Title: “Multi-modal evaluation of epilepsy patients using computational methods” (Dr. Theodor Rüber) Zoom link: https://uni-bonn.zoom.us/j/69810017849?pwd=OTV5eTcvMWFBOUh2MDBwTFFCdW9zQT09 ID: 698 1001 7849 Password: 764216
"The downside of optimization" (Dr. Oliver Braganza) Competitive societal systems by necessity rely on imperfect proxy measures. For instance, profit is used to measure economic value, the Journal Impact Factor to measure scientific value, and clicks to measure online engagement or entertainment value. However, any such proxy-measure becomes a target for the competing agents (e.g. companies, scientists or content providers). This suggests, that any competitive societal system is prone to Goodhart’s Law, most pithily formulated as: ‘When a measure becomes a target, it ceases to be a good measure’. Purported adverse consequences include environmental degradation, scientific irreproducibility and problematic social media content. The talk will explore the notion that a systematic research program into Goodhart’s Law, nascent in current AI-safety research, is urgently needed, and will have profound implications far beyond AI-safety.
(Dr. Uwe Peters) The hypothesis of extended cognition (HEC), i.e., the view that the realizers of mental states or cognition can include objects outside of the skull, has received much attention in philosophy. While many philosophers have argued that various cognitions might extend into the world, it has not yet been explored whether this also applies to cognitive biases. Focusing on confirmation bias, I argue that a modified version of the original thought experiment to support HEC helps motivate the view that this bias, too, might extend into the world. Indeed, if we endorse common conditions for extended cognition, then there is reason to believe that even in real life, confirmation bias often extends, namely into computers and websites that tailor online content to us.
Links
- https://www.uni-bonn.de/en/research-and-teaching/research-profile/transdisciplinary-research-areas/tra-4-individuals/events-1/unethical-optimization-principle-part-of-the-ai-research-group
- https://www.uni-bonn.de/en/research-and-teaching/research-profile/transdisciplinary-research-areas/tra-4-individuals/events-1/algorithmic-political-bias-cause-for-special-concern
- https://www.uni-bonn.de/en/research-and-teaching/research-profile/transdisciplinary-research-areas/tra-4-individuals/events-1/how-is-artificial-intelligence-changing-science
- https://www.uni-bonn.de/en/research-and-teaching/research-profile/transdisciplinary-research-areas/tra-4-individuals/events-1/what-its-like-to-be-another-one
- https://www.uni-bonn.de/en/research-and-teaching/research-profile/transdisciplinary-research-areas/tra-4-individuals/events-1/ai-in-a-different-voice-part-of-the-ai-research-group
- https://www.uni-bonn.de/en/research-and-teaching/research-profile/transdisciplinary-research-areas/tra-4-individuals/events-1/fairness-in-ai-systems-part-of-the-ai-research-group
- https://www.uni-bonn.de/en/research-and-teaching/research-profile/transdisciplinary-research-areas/tra-4-individuals/events-1/robo-ethics-part-of-the-ai-research-group
- https://www.uni-bonn.de/en/research-and-teaching/research-profile/transdisciplinary-research-areas/tra-4-individuals/events-1/ethically-sensitive-applications-of-ai
- https://www.uni-bonn.de/en/research-and-teaching/research-profile/transdisciplinary-research-areas/tra-4-individuals/events-1/negativity-bias-in-research-part-of-the-ai-research-group
- https://www.uni-bonn.de/en/research-and-teaching/research-profile/transdisciplinary-research-areas/tra-4-individuals/events-1/the-impact-of-mindshaping-part-of-the-ai-research-group
- https://www.uni-bonn.de/en/research-and-teaching/research-profile/transdisciplinary-research-areas/tra-4-individuals/events-1/can-we-experience-the-passage-of-time
- https://www.uni-bonn.de/en/research-and-teaching/research-profile/transdisciplinary-research-areas/tra-4-individuals/events-1/linguistic-bias-part-of-the-ai-research-group
- https://www.uni-bonn.de/en/research-and-teaching/research-profile/transdisciplinary-research-areas/tra-4-individuals/events-1/proxy-divergence-part-of-the-ai-research-group
- https://www.uni-bonn.de/en/research-and-teaching/research-profile/transdisciplinary-research-areas/tra-4-individuals/events-1/legal-issues-of-advanced-humanoid-robotics
- https://www.uni-bonn.de/en/research-and-teaching/research-profile/transdisciplinary-research-areas/tra-4-individuals/events-1/transhumanist-visions-of-world-order
- https://www.uni-bonn.de/en/research-and-teaching/research-profile/transdisciplinary-research-areas/tra-4-individuals/events-1/ai-mental-health-part-of-the-ai-research-group
- https://www.uni-bonn.de/en/research-and-teaching/research-profile/transdisciplinary-research-areas/tra-4-individuals/events-1/certified-ai-part-of-the-ai-research-group
- https://www.uni-bonn.de/en/research-and-teaching/research-profile/transdisciplinary-research-areas/tra-4-individuals/events-1/ai-mental-health-part-of-the-ai-research-group-1
- https://www.uni-bonn.de/en/research-and-teaching/research-profile/transdisciplinary-research-areas/tra-4-individuals/events-1/proxyeconomics-and-goodharts-law-the-downside-of-optimization
- https://www.uni-bonn.de/en/research-and-teaching/research-profile/transdisciplinary-research-areas/tra-4-individuals/events-1/supersizing-confirmation-bias-part-of-the-ai-research-group