Human Cognition Colloquium
When we look around, we see tables, chairs, and books rather than a blooming, buzzing confusion. How do we build visual concepts that allow us to derive meaning from what we see? In this talk, I will present three lines of research that make progress on this question by leveraging computational models of visual processing, controlled experimental paradigms, and large naturalistic datasets. First, I'll suggest that how we interact with objects shapes the architecture of visual processing. I'll show that our visual system uses curvature to automatically infer the animacy and real-world size of unrecognizable objects, demonstrating that meaningful semantic processing can occur in the absence of recognition. Second, by analyzing large datasets of early visual experience, I'll characterize the consistency and variability in the inputs to our visual concepts, and the constraints these findings pose on how we acquire visual knowledge. Last, using drawings as a case study, I'll argue that our visual representations have a more protracted developmental trajectory than previously thought. Together, these results suggest that regularities in how we see and interact with objects form the backbone of our visual conceptual system, while highlighting the active role that learners play in refining their visual concepts. More broadly, this work advances an ecologically grounded model of visual concepts that formalizes how we connect what we see with what we know.
Zoom link: https://berkeley.zoom.us/j/92628652633