Do We Really Want Explainable AI? - Edward Ashford Lee (EECS, UC Berkeley)

Conference Website: https://saiconference.com/IntelliSys

Abstract: "Rationality" is the principle that humans make decisions on the basis of step-by-step (algorithmic) reasoning using systematic rules of logic. An ideal "explanation" for a decision is a chronicle of the steps used to arrive at it. Herb Simon's "bounded rationality" is the observation that the human brain's ability to handle algorithmic complexity and data is limited. As a consequence, human decision making in complex cases mixes some rationality with a great deal of intuition, relying more on Daniel Kahneman's "System 1" than "System 2." A DNN-based AI, similarly, does not arrive at a decision through a rational process in this sense. An understanding of the mechanisms of the DNN yields little or no insight into any rational explanation for its decisions. The DNN is operating in a manner more like System 1 than System 2. Humans, however, are quite good at constructing post-facto rationalizations of their intuitive decisions. If we demand rational explanations for AI decisions, engineers will inevitably develop AIs that are very effective at constructing such post-facto rationalizations. With their ability to handle vast amounts of data, the AIs will learn to build rationalizations drawing on many more precedents than any human could, thereby constructing rationalizations for ANY decision that will be very hard to refute. The demand for explanations, therefore, could backfire, resulting in our effectively ceding much more power to the AIs. In this talk, I will discuss similarities and differences between human and AI decision making and will speculate on how, as a society, we might proceed to leverage AIs in ways that benefit humans.
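To make concrete why inspecting a DNN's mechanism yields no rational chronicle, here is a minimal sketch in Python (not from the talk; the weights are illustrative random stand-ins for trained parameters). The only "trace" the network leaves behind is a cascade of numeric activations, nothing resembling a step-by-step chain of logical rules:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained parameters (here: random stand-ins).
W1 = rng.normal(size=(8, 4))   # first-layer weights
b1 = rng.normal(size=8)        # first-layer biases
W2 = rng.normal(size=(2, 8))   # output-layer weights
b2 = rng.normal(size=2)        # output-layer biases

def decide(x: np.ndarray) -> int:
    """Return a class decision (0 or 1) for input x."""
    h = np.maximum(0.0, W1 @ x + b1)   # hidden activations (ReLU)
    logits = W2 @ h + b2               # output scores
    return int(np.argmax(logits))      # the "decision"

x = np.array([0.5, -1.2, 3.0, 0.1])
print(decide(x))  # a decision pops out, but the weights offer no rational explanation for it

Every quantity along the way is available for inspection, yet none of it constitutes a rationale in the System 2 sense the abstract describes.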

0:00 Introduction
0:48 Deep Neural Networks (DNNs) as Realized on Today's Computers
2:50 Explanations in Terms of Rational Thought
6:07 Silver Bullets?
8:27 Humans Are Very Good at Synthesizing Explanations
10:46 How to Design Such an Explanation Machine
12:00 Possible (and Risky) Uses of Explanation Machines
14:00 DARPA XAI Program Retrospective
17:48 Explanation vs. Algorithm
22:42 Reservoir Computing
23:23 Provocative Conjecture
23:50 Another Approach to Explanation: Architected DNNs
26:18 Architected Compositions
27:01 Conclusion

Edward Ashford Lee has been working on software systems for 40 years. He currently divides his time between software systems research and studies of the philosophical and societal implications of technology. After education at Yale, MIT, and Bell Labs, he landed at Berkeley, where he is now Professor of the Graduate School in Electrical Engineering and Computer Sciences. His software research focuses on cyber-physical systems, which integrate computing with the physical world. He is the author of several textbooks and two general-audience books, The Coevolution: The Entwined Futures of Humans and Machines (2020) and Plato and the Nerd: The Creative Partnership of Humans and Technology (2017).
