interests

Keywords: computational cognitive science, neurocompositional computing, tensor product representations, learning as Bayesian program induction, intrinsic curiosity, empiricism vs nativism, developmental psychology, causality, embodied learning.

My research broadly centers on building cognitively plausible, mathematically principled models of human intelligence, seeking answers to questions like:

  • The Paradox of Cognition: As encapsulated by Paul Smolensky’s Paradox of Cognition, explaining the full range of human cognition appears to require both connectionist and symbolic systems (neural: e.g., fluidity, statistical sensitivity, perceptual grounding; symbolic: e.g., structured reasoning, compositionality). How can we reconcile these seemingly incompatible paradigms into one coherent, unified cognitive architecture that gives rise to the full scope of human cognitive capabilities? Through my exploration of the Integrated Connectionist/Symbolic (ICS) theory of cognition (Paul Smolensky, Géraldine Legendre, and Yoshiro Miyata), I became convinced that the answer lies in developing new mathematical frameworks in which traditionally symbolic capabilities emerge naturally from fully continuous mathematics - a ‘connectionist’ framework all the way down that nonetheless appears symbolic when viewed at a higher level. An apt analogy can be drawn with physics: at the micro level, reality is governed by probabilistic, wave-like behaviors, but viewed at the macro level these same principles manifest as the deterministic laws of Newtonian mechanics. During the first part of my MPhil, I pursued this vision by designing a mathematically principled, fully continuous framework for representing compositional structures - conventionally viewed as an artifact of symbolic systems. This work comprised two innovations: 1) a new representational form inspired by and extending Smolensky’s Tensor Product Representation, and 2) a mathematically principled architecture designed specifically to learn this form. It culminated in a first-authored publication at NeurIPS 2024. (A minimal sketch of the classical TPR operations this work builds on appears after this list.)

  • How Humans Learn So Much From So Little: A fundamental mystery of cognitive science is how humans, even as infants, acquire complex concepts, causal relationships, and intuitive theories about the physical and social world from such minimal input. How do we achieve such feats given the severely underconstrained nature of the learning problem? I believe the answer lies in the Child as Scientist framework (Alison Gopnik), which conceptualises human learning as an active process of theory formation and revision. To this end, I aim to build on computational instantiations of child-as-scientist, especially the Bayesian program induction work of Joshua Tenenbaum and colleagues. While theories (hypotheses) are powerful, carrying causal, counterfactual, and explanatory weight, a major challenge for computational instantiations of this approach is the vastness of the space of possible theories. This challenge raises profound questions. On a theoretical level: is a fully empiricist approach to learning viable, or even supported by empirical findings in cognitive science and developmental psychology? On a practical level: how can we search this space efficiently and resolve its computational intractability? To tackle this challenge, I am interested in incorporating interdisciplinary insights from two key areas. 1) Innate Concepts and Core Knowledge: Empirical studies of nativist phenomena, particularly Elizabeth Spelke’s work on core knowledge, strongly suggest that humans possess domain-specific, innate cognitive structures (e.g., object, number) that constrain and scaffold learning. These structures may serve as priors that prune the hypothesis space, making the search for the Bayes-optimal hypothesis (theory) more tractable (a toy illustration of such prior-driven pruning follows below). 2) Embodiment and Egocentric Data Generation: Active, egocentric interaction with the physical world - shaped by internal states such as goals and current theories - enables the dynamic generation of task-specific data. This data-generating process can expedite and guide the search through the hypothesis space by, for instance, refining priors, sharpening likelihoods, and targeting the most relevant regions of the space given the current state of inference. These insights offer promising yet underexplored avenues for effectively pruning the hypothesis space - a task of central importance for computational instantiations of child-as-scientist - and will be the focus of the second project of my MPhil.
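
As a concrete reference point for the first direction, here is a minimal sketch of the classical Tensor Product Representation that the NeurIPS 2024 work extends: a structure is encoded as a superposition of filler-role outer products, and a filler can be recovered by contracting the tensor with its role vector. The dimensions, symbols, and orthonormal role basis below are illustrative choices, not the extended representational form developed in that work.

```python
import numpy as np

# Classical TPR: a structure is a sum of filler (x) role outer products.
rng = np.random.default_rng(0)

sentence = ["john", "loves", "mary"]
d_filler = 8
fillers = {w: rng.normal(size=d_filler) for w in sentence}
roles = np.eye(len(sentence))  # orthonormal role vectors => exact unbinding

# Binding: outer product of each filler with its positional role; superpose.
T = sum(np.outer(fillers[w], roles[i]) for i, w in enumerate(sentence))

# Unbinding: contracting the tensor with a role vector recovers its filler.
recovered = T @ roles[1]
assert np.allclose(recovered, fillers["loves"])
print("Recovered the filler bound to role 1 exactly (orthonormal roles).")
```

With non-orthogonal roles, unbinding is only approximate and crosstalk between constituents appears - one reason a learned, mathematically principled representational form is attractive.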
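For the second direction, the following toy example in the spirit of Tenenbaum’s “number game” shows how a structured prior - standing in for core knowledge - concentrates posterior mass after only a few observations. The hypothesis classes, prior weights, and size-principle likelihood are illustrative assumptions for this sketch, not a proposed model.

```python
# Toy Bayesian concept induction in the spirit of Tenenbaum's "number game".
# The prior plays the role core knowledge might play: concentrating mass on
# structured hypotheses before any data arrive. All weights are illustrative.

domain = range(1, 101)
hypotheses = {f"multiples of {k}": {n for n in domain if n % k == 0}
              for k in range(2, 11)}
hypotheses["powers of 2"] = {n for n in domain if (n & (n - 1)) == 0}
for lo in range(1, 92, 10):
    hypotheses[f"interval [{lo}, {lo + 9}]"] = set(range(lo, lo + 10))

# Structured prior: favour rule-like hypotheses over arbitrary intervals.
weights = {h: 0.5 if h.startswith("interval") else 2.0 for h in hypotheses}
total_w = sum(weights.values())
prior = {h: w / total_w for h, w in weights.items()}

def posterior(data):
    """P(h | data) proportional to P(h) * (1/|h|)^n for each hypothesis h
    containing all of data (the 'size principle' under strong sampling)."""
    scores = {h: prior[h] * (1.0 / len(ext)) ** len(data)
              for h, ext in hypotheses.items() if set(data) <= ext}
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

# Three examples sharply favour "powers of 2" over the broader "multiples of 2".
for h, p in sorted(posterior([2, 8, 16]).items(), key=lambda kv: -kv[1]):
    print(f"{h}: {p:.3f}")
```

Even this tiny example makes the pruning point concrete: the prior rules most hypotheses out of serious consideration, and the size principle does the rest - the open question is how embodied, self-generated data can play both roles at scale.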