People solve challenging computational problems every day: predicting future events, learning new causal relationships, or discovering how objects should be divided into categories. My research investigates how this is possible, first identifying the nature of the underlying computational problems, and then examining whether aspects of human behavior can be explained as the result of approximating optimal solutions to those problems. Since many of the problems people face in everyday life are problems of induction, requiring inferences from limited data to underconstrained hypotheses, these optimal solutions draw on methods developed in statistics, machine learning, and artificial intelligence. Exploring how these methods relate to human cognition forges connections between those fields and cognitive science, and offers a way to turn insights obtained from studying people into new formal techniques.
My current research explores three questions:
- What kinds of constraints guide our inductive inferences, and how can we study them in the laboratory? To address this question, my collaborators and I have been testing models that make different assumptions about learners' knowledge, and exploring how simple models of cultural evolution might reveal the inductive biases of human learners.
- How can we define statistical models that can grow in complexity with the data? Methods used in nonparametric Bayesian statistics and in hierarchical Bayesian models provide some answers.
- How do people deal with the computational challenges of statistical inference? Algorithms for approximate inference are widespread in machine learning and statistics, and we are exploring whether these algorithms can shed light on human cognition.
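One standard nonparametric Bayesian construction of the kind mentioned above is the Chinese restaurant process, a prior over partitions in which the number of clusters is not fixed in advance but grows with the amount of data. As a minimal illustration (a sketch, not any specific model from the papers below), here is a simulation of that process:

```python
import random

def crp_partition(n_items, alpha=1.0, seed=0):
    """Sample a partition of n_items from the Chinese restaurant process.

    Each new item joins an existing cluster with probability proportional
    to that cluster's size, or starts a new cluster with probability
    proportional to alpha -- so model complexity (the number of clusters)
    can grow as more data arrive.
    """
    rng = random.Random(seed)
    counts = []       # counts[k] = number of items currently in cluster k
    assignments = []
    for i in range(n_items):
        weights = counts + [alpha]     # existing clusters, plus a new one
        r = rng.uniform(0, i + alpha)
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(counts):
            counts.append(1)           # open a new cluster
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments

print(crp_partition(10))
```

Larger values of the concentration parameter `alpha` yield more clusters on average; the expected number of clusters grows only logarithmically with the number of items, which is what lets such models stay simple for small datasets while expanding for large ones.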
These questions apply to any inductive problem, but my collaborators and I have begun to answer them using problems that have been studied extensively in cognitive science, such as causal induction, language learning, categorization, and function learning.
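To make the third question concrete: approximate-inference algorithms replace an intractable posterior computation with sampling. A minimal sketch (an illustrative toy problem, not a model from the work below) is self-normalized importance sampling for the posterior mean of a coin's bias, where the exact Beta-posterior answer is available to check against:

```python
import random

def posterior_mean_bias(heads, flips, n_samples=100_000, seed=1):
    """Approximate the posterior mean of a coin's bias, given `heads`
    successes in `flips` tosses, by self-normalized importance sampling
    with the uniform prior as the proposal distribution.

    The exact answer under a uniform prior is the mean of a
    Beta(heads + 1, flips - heads + 1) posterior: (heads + 1) / (flips + 2).
    """
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n_samples):
        theta = rng.random()  # sample a candidate bias from the prior
        # weight each sample by its likelihood under the observed data
        w = theta ** heads * (1 - theta) ** (flips - heads)
        num += w * theta
        den += w
    return num / den

print(posterior_mean_bias(7, 10))  # exact posterior mean is 8/12
```

The same weight-and-average idea underlies the particle filters and Monte Carlo methods studied as candidate psychological process models: a small number of weighted samples can stand in for a full posterior distribution.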
Griffiths, T. L., & Tenenbaum, J. B. (in press). Optimal predictions in everyday cognition. Psychological Science. (covered in The Economist)
Tenenbaum, J. B., Griffiths, T. L., & Kemp, C. (2006). Theory-based Bayesian models of inductive learning and reasoning. Trends in Cognitive Sciences, 10, 309-318.
Steyvers, M., Griffiths, T. L., & Dennis, S. (2006). Probabilistic inference in human semantic memory. Trends in Cognitive Sciences, 10, 327-334.
Griffiths, T. L., & Tenenbaum, J. B. (2005). Structure and strength in causal induction. Cognitive Psychology, 51, 354-384.
Griffiths, T. L., & Steyvers, M. (2004). Finding scientific topics. Proceedings of the National Academy of Sciences, 101, 5228-5235.