Can an automaton understand what it’s doing? Self-awareness and moral agency are central concepts in the discussion of personhood. Over the past fifty years, authors in cognitive science have been laying the groundwork needed to examine these concepts. This talk will give a broad survey of the relevant ideas and outline a case for what it might mean to say that an artificial intelligence is a person, or perhaps even that it has a soul. How such a system can be built, how its persona and values can be shaped, and what this might mean for society are questions that will be explored through a fireside chat intermixed with questions and conversation.
Sponsored by the Stanford Artificial Intelligence Law Society (SAILS)