
ONGOING PROJECTS

Here you’ll find a selection of our current work. We have many projects in the lab, but the projects below will give you some insight into what we do:

ACT-UP

Integrative, bottom-up approach to human planning and problem solving

While traditional research on human planning focused on top-down, cognitive mechanisms, ACT-UP is part of a series of projects in which we first examine the bottom-up mechanisms that drive the development of human planning. In this project, we test children and adults in multi-functional tool-use tasks using an integrative approach, including simultaneous recording of eye-tracking, EEG, motion-tracking, and video. Using principles from psychophysics, we quantify planning over development and show how planning bias and instability relate to the real-time integration of perceptual, neural, and motor processes.


FOOTBALL BABYBOTS

How can robots help us understand infant learning?

Although both infancy and artificial intelligence (AI) researchers are interested in developing systems that produce adaptive, functional behaviour, the two disciplines rarely capitalize on their complementary expertise. In this project, we use simulated football-playing robots that learn how to walk to test central questions about the development of infant learning, such as whether variability in training is good for learning new skills and what role errors play in infant learning. Findings suggest that robotics provides a fruitful avenue for testing hypotheses about infant development; reciprocally, observations of infant behaviour may inform research on artificial intelligence.

EMBODIED REASONING

How embodied experience shapes adults’ and children’s physical reasoning

A key component underlying children’s successful interactions with the external world is their ability to reason about future events. Previous studies showed that this reasoning skill is critical when humans need to adapt to real-world environments that are variable, unpredictable, and full of novel situations.

Despite the importance of reasoning and adaptability to human function and survival, existing evidence is limited in explaining how and which embodied experience affects high-level reasoning when adaptability is required. In this project, we investigate the effects of different types of embodied experiences on high-level reasoning using virtual-reality (VR) environments with altered gravities.


BETTER TOGETHER

Social intervention to improve spatial skills for science and mathematics in primary school

Spatial skills are foundational to science learning throughout school and beyond. This is because science learning involves understanding how different entities interact and transform over time and space, something that is hard to represent verbally. Instead, learners must form and manipulate dynamic spatial mental representations, allowing them to understand and reflect on such concepts. Importantly, these representations of concepts or relationships must be unified so that they may be manipulated in different ways. While there have been many spatial skills interventions targeted at school-aged children, few have focused on this particular aspect. In this project, we developed a new spatial skills intervention, specifically targeting learners’ ability to integrate spatial information from different sources into a dynamic, unified spatial representation. This, we argue, is more reflective of the spatial skills necessary for science learning, and so a better focus for intervention.


LEAVES

Combining physical activity with cognitive demands to improve brain connectivity and executive functions in children with ADHD

Attention Deficit Hyperactivity Disorder (ADHD) is the most common neurodevelopmental disorder in childhood. Studies have shown that aerobic physical exercise can improve function in children with ADHD. In this project, we integrate perceptual, cognitive, and motor recordings (fNIRS, behaviour, motor activity in a virtual room, computer vision, and executive functions) to test whether combining physical exercise with cognitive demands further improves the cognitive skills and brain connectivity of children with ADHD. Outcomes have clinical and educational implications for understanding the mechanisms underlying ADHD in individual children.


SMARTYS

Play is a fundamental aspect of children's development, with children worldwide spending a significant amount of time playing during their early years. This project investigates the role of play in shaping development, specifically how different object affordances foster various types of play, and how these, in turn, relate to cognitive development. By video-recording play sessions and using intelligent toys, we test how preschoolers (2- to 5-year-olds) play with toys ranging from those with minimal affordances (e.g., a tablet) to those promoting high engagement (e.g., a shape sorter), and how these affordances affect high-level cognition. This project demonstrates how development is grounded in the body, which both constrains and supports the exploration of the world.


BLOCK QUEST

To characterize the effects of ADHD, traditional research has predominantly used computerized tasks. However, most tasks are stationary and overlook locomotion and embodiment, which are strongly related to daily functioning. In this project, we argue that ADHD should be characterized using virtual reality as an embodied tool that provides digital behavioural phenotyping of ADHD. We present a novel embodied block-construction paradigm in virtual reality that links ADHD characteristics to the real-time interaction between perception, cognition, and movement.

ADAPTABLE PHYSICAL COGNITION

One of the fundamental components of human cognition is the ability to reason about the behaviour of objects and systems in the physical world without resorting to explicit scientific knowledge. Humans share inherent intuitions about the physical world, encompassing rules of gravity, object persistence, collision dynamics, and so on. An influential perspective posits that humans reason by forming an internal representational model of the external world and simulating actions forward in time. What happens when humans need to adapt this reasoning to new physical laws? What experience do they need when they first reason on Mars? How does real-time, embodied, sensory experience affect them? We address these questions in a series of studies on high-level physical reasoning, from school-aged children to adults, using reasoning games, galvanic vestibular stimulation, and multivariate analyses to show how a variety of factors contribute to physical reasoning, including action concepts, rewards, long-term body changes, and real-time embodied experience.


BABYGROW

In collaboration with the Comparative Cognition Group at the University of Sussex, we conduct a ground-breaking longitudinal study to better understand how infants’ movements support their emerging social and communication skills. Intelligent onesies record infants' natural movements in their own homes for a few minutes each week during the first 18 months of life. Using these onesies, computer vision, and video annotations, we examine the possibility of automating the General Movement Assessment (GMA) for neurodevelopmental conditions, and whether weekly changes in movement predict cognitive and social assessments later in development.

AGING OF PHYSICAL PREDICTION

Whether and how aging shifts our ability to predict the outcomes of physical events

Physical reasoning is the capacity to anticipate how an environment will change as its elements move and interact. This cognitive skill, which is based on humans’ intuitive knowledge of physics, underlies everyday tasks that are potentially critical to older adults, such as avoiding collisions. Nevertheless, the effects of aging on physical reasoning remain understudied. This project tests physical reasoning among younger (18–35 years) and older (over 65 years) adults as they complete different difficulty levels of a physical prediction paradigm. Participants watch object displacements in a virtual environment and decide the outcome of each displacement under different gravity forces (terrestrial, half-terrestrial, and double-terrestrial gravity). Different action concepts are also tested, such as Clearing, which involves multiple moving objects. By focusing on predictive judgments rather than motor control, we isolate the effects of aging on cognitive physical prediction and reasoning from its effects on motor execution. This project will provide insights into how intuitive physics, refined over the lifespan, can still degrade in precision and complexity. Understanding these shifts is important for developing supportive strategies that help maintain functional independence in older adulthood, particularly in tasks requiring challenging physical prediction.


INTUITIVE TOWERS

A key component of cognition is intuitive reasoning about the behaviour of things in the physical world. Traditionally, intuitive physical reasoning is considered an early appearing, abstract, symbolic process, independent of bodily interaction with the environment. However, this “symbolic-process” view relies heavily on looking-time studies without assessing the practical manifestations of children’s bodily actions while they interact with objects. To test physical reasoning during active behaviour that involves more than moving the eyes, children (2-, 4-, 6-, and 8-year-olds) and adults build flush and offset towers with interlocking Duplo bricks. When building a tower, participants must translate intuitive physical knowledge into flexible action plans across the two hands. Their plans must update from moment to moment as the tower grows. In this project, we use novel computer-vision algorithms to track the real-time movements of each Duplo brick and participants’ hand kinematics as a direct readout of their intuitive physical reasoning based on their behaviours. We also use advanced unsupervised machine learning to identify when participants incorporate physical reasoning into their plans. This project demonstrates the importance of studying intuitive reasoning during active behaviour. We suggest that these cognitive processes continue to develop long after infancy, and that their development is grounded in body-environment interactions.


COLLAB-COMP

The kinematics of social interactions: Testing the emergence of collaborative and competitive behaviours in children

The ability to interact socially with others emerges in the first years of life, when children’s motor skills are beginning to take shape. Collaboration and competition are examples of social interactions that require children to adapt their motor actions to others. Yet, little is known about how children’s kinematics during collaborative and competitive behaviour emerge over development, and how this relates to their executive functions. In this project, we test 3- to 10-year-olds in a tower-building task and record their movements using motion tracking to link motor development (kinematics), cognitive development (executive functions), and social development (collaborative and competitive behaviours) at the individual level.

FREE PLANNING-TO-PLAN 

Generalisation of visually guided planning across structured tasks and free play

Visually guided planning is fundamental for manual actions on objects. Multi-step planning—when only the requirements for the initial action are directly visible in the scene—necessitates initial visual guidance to optimize the subsequent actions. In this project, we examine whether 3- to 5-year-old children who exhibit visually guided, multi-step planning in a structured tool-use task (hammering down a peg) also demonstrate visually guided planning during unstructured free play while interlocking Duplo bricks and Squigz pieces. Using head-mounted eye-trackers, we test whether children who exhibit visually guided planning also spend more time looking at the to-be-grasped free-play object and at their construction during reach and transport compared with children who do not demonstrate multi-step planning in the hammering task. The project aims to show that visually guided planning in young children generalizes across different manual actions on objects, including structured tool use and unstructured free play.


PUPIL-SYNC

A pupil-dilation technique to test developmental differences in visual synchrony during free viewing

Visual synchrony, a form of coordinated behaviour wherein observers look at displays in a similar manner, is important for understanding how coordinated visual attention influences cognitive, emotional, and social development. Traditional developmental research tested visual synchrony using gaze-location metrics—assessing the convergence of children's visual focus at any given moment. However, gaze location is not the only looking measure linked to attention and cognitive states. Pupil dilation—the increase in pupil size as a physiological response to visual stimuli—offers a window into the autonomic nervous system, providing insights into cognitive load, emotional arousal, and attention shifts. In this study, we developed a new technique to test developmental changes in visual synchrony based on pupil dilation. We demonstrate our approach on data collected from preschoolers and adults during free viewing of cartoon videos. All analyses and tutorials are shared. The project demonstrates the potential of using pupil dilation to explore how individuals, from children to adults, synchronize their attention and emotions. Such a technique offers a richer picture of what and how children share visual information during observation.


AUTO-HAPTIC

Automatic real-time hand tracking enhances adolescent spatial skills by eliminating haptic feedback

Spatial skills underlie how humans acquire, represent, organise, manipulate, and navigate their environment, and are therefore fundamental for survival and proper function. Spatial skills involve the integration of sensorimotor information with higher-order cognitive representations. Previous research showed that spatial skills can be improved through training, yet it is unknown which sensory information best supports training. In this project, we test a new application of real-time hand tracking as a manipulator of sensory information that enhances spatial skills in adolescence. Adolescents (ages 14-15) complete a 7-week school training programme (eight 30-minute sessions) to improve their spatial skills. Outcomes from this project suggest that omitting haptic feedback during spatial training compels reliance on mental representations, thereby bolstering spatial skills more effectively. We present a new application for real-time hand-tracking technology in educational settings and demonstrate its cost-effective potential to advance spatial abilities in young students.


VIRTUAL CABINETS

“Where” and “How”: Children’s strategies for exploring solutions to problems with hidden demands

Motor exploration is essential for the development of functional behavior. Young children have limited knowledge about how things work because they have limited experience solving everyday problems like how to slide open a cabinet latch, twist open a container lid, grasp the handle of a tool, or fit an object into an aperture. Without observing more knowledgeable others or being told what to do, children must perform a variety of exploratory movements to discover where and how they should act to solve motor problems. For example, where is the closure on the cabinet, and how does the latch operate? How do children discover hidden areas and actions? 

This project addresses this question by examining changes in exploration strategies in real time and over development in 24- to 56-month-old children by recording how they touch a tablet screen to open a “virtual cabinet” that requires specific actions to operate the “lock.” Using a touch screen rather than a 3D cabinet eliminates requirements of manual strength and dexterity and provides pixel-level details about the real-time sequence of touch areas and actions within each trial. Thus, we can focus on higher-level, conceptual improvements in exploration strategies. We are particularly interested in whether children’s exploration of the correct lock area (where to direct their actions) and of the correct action (how to touch the screen) increases with age and across trials. The project promises to provide a new, cognitive perspective for research on motor problem solving in children.


Funded by: 

UKRI
BA
Leverhulme
Waterloo Foundation
Wellcome Trust
Simons Foundation