Human cooperation is distinctly powerful. We collaborate with others to accomplish together what none of us could do on our own; we share the benefits of collaboration fairly and trust others to do the same. I seek to understand these everyday feats of social intelligence in computational terms. I will present a formal framework that integrates individually rational, hierarchical Bayesian models of learning with socially rational multi-agent and game-theoretic models of cooperation. First, I investigate the evolutionary origins of the cognitive structures that enable cooperation through social learning. I then describe how these structures are used to learn social and moral knowledge rapidly during development. Finally, I show how this knowledge is generalized in the moment, across an infinitude of possible situations: inferring the intentions and reputations of others, distinguishing who is friend or foe, and learning a new moral value.
December 2, 2019
Haines Hall 352, UCLA
Lunch provided on a first-come, first-served basis. We request a $6 donation.