ISBN: 9780192894724
Can humans and artificial intelligences share concepts and communicate? Making AI Intelligible shows that philosophical work on the metaphysics of meaning can help answer these questions. Herman Cappelen and Josh Dever use the externalist tradition in philosophy to create models of how AIs and humans can understand each other. In doing so, they illustrate ways in which that philosophical tradition can be improved.

The questions addressed in the book are not only theoretically interesting; the answers have pressing practical implications. Many important decisions about human life are now influenced by AI. In giving that power to AI, we presuppose that AIs can track features of the world that we care about (for example, creditworthiness, recidivism, cancer, and combatants). If AIs can share our concepts, that will go some way towards justifying this reliance on AI. This ground-breaking study offers insight into how to take some first steps towards achieving Interpretable AI.
PART I: INTRODUCTION AND OVERVIEW
2 Alfred (The Dismissive Sceptic): Philosophers, Go Away!
PART II: A PROPOSAL FOR HOW TO ATTRIBUTE CONTENT TO AI
3 Terminology: Aboutness, Representation, and Metasemantics
4 Our Theory: De-Anthropocentrized Externalism
5 Application: The Predicate 'High Risk'
6 Application: Names and the Mental Files Framework
7 Application: Predication and Commitment
8 Four Concluding Thoughts