
Trait judgments of faces using a task without word labels

People reliably attribute psychological traits to faces. Prior research typically elicits these judgments using specific words provided by the experimenter (e.g., ratings on “competent” or “trustworthy”), raising questions about completeness (tasks use a limited set of words) and generalizability (e.g., across languages). In two experiments (Experiment 1 with predominantly White faces; Experiment 2 with White, Black, and Asian faces), we asked over 3,000 participants to spatially arrange unfamiliar faces by perceived trait similarity, without providing any word labels. Experiment 1 yielded five dimensions that we interpret as approachability, gender-specific warmth, competence, morality, and youthfulness. Experiment 2 also produced five dimensions, which showed prominent effects of race-associated stereotypes. Comparisons with four other studies that used word labels suggest that trait judgments are characterized by two to five dimensions, that approachability/warmth emerges as a universal evaluative factor, and that race and gender stereotypes emerge depending on the set of face stimuli used (see preprint, data, and GitHub code).
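Conceptually, each arrangement yields pairwise on-screen distances between faces, which can be aggregated into a dissimilarity matrix and embedded in a low-dimensional space. The sketch below is illustrative only and is not the pipeline from the preprint; it assumes hypothetical per-participant distance matrices and uses metric multidimensional scaling from scikit-learn.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical input: distances[p, i, j] = on-screen distance between faces i and j
# in participant p's arrangement (shape: n_participants x n_faces x n_faces).
rng = np.random.default_rng(0)
n_participants, n_faces = 50, 20
distances = rng.random((n_participants, n_faces, n_faces))
distances = (distances + distances.transpose(0, 2, 1)) / 2   # make each matrix symmetric
for d in distances:
    np.fill_diagonal(d, 0.0)

# Aggregate across participants into one dissimilarity value per face pair.
dissimilarity = distances.mean(axis=0)

# Embed the faces in a small number of dimensions (five here, matching the
# reported solutions, but the number is an analysis choice).
mds = MDS(n_components=5, dissimilarity="precomputed", random_state=0)
face_coordinates = mds.fit_transform(dissimilarity)          # shape: n_faces x 5
print(face_coordinates.shape)
```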

(ongoing) Develop algorithms to explain traits and states using human behavioral data

Traits and states are latent variables that explain regularities in human and animal behavior: traits are temporally stable (e.g., personality traits), whereas states are temporally phasic (e.g., momentary emotions). My goal is to develop algorithms that extract these latent variables from human behavioral data, organized into three subtopics:

1. Develop algorithms to extract trait and state variables from longitudinal behavior data, such as the dataset collected by the COVID-Dynamic study (a toy sketch follows this list).
2. Apply the algorithms to the data to study causal relations among trait and state variables, under the hypothesis that traits cause states, which in turn cause behavior.
3. Building on the results of Subtopics #1 and #2, develop algorithms that generalize to other datasets for extracting trait- and state-like variables.
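As a toy illustration of Subtopic #1, the sketch below separates a hypothetical longitudinal behavior array into temporally stable, subject-level components ("traits") and time-varying components ("states"). The simulated data, the use of PCA, and the number of components are assumptions for illustration, not the project's final method.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: behavior[s, t, m] = measurement m for subject s at wave t.
rng = np.random.default_rng(1)
n_subjects, n_waves, n_measures = 100, 16, 12
behavior = rng.normal(size=(n_subjects, n_waves, n_measures))

# Trait-like variables: stable across time, so summarize each subject by the
# time-averaged profile and reduce it to a few components.
subject_means = behavior.mean(axis=1)                      # (subjects, measures)
trait_model = PCA(n_components=3)
traits = trait_model.fit_transform(subject_means)          # (subjects, 3)

# State-like variables: what remains after removing each subject's stable
# profile, reduced to a few components per time point.
residuals = behavior - subject_means[:, None, :]           # (subjects, waves, measures)
state_model = PCA(n_components=3)
states = state_model.fit_transform(residuals.reshape(-1, n_measures))
states = states.reshape(n_subjects, n_waves, 3)            # (subjects, waves, 3)

print(traits.shape, states.shape)
```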

Investigate sparse representations in V1 with VGG19 and two-photon imaging data

I analyzed two-photon calcium imaging data from the primary visual cortex (V1) of an awake macaque monkey to investigate how neural populations represent visual stimuli. Using the VGG19 convolutional neural network and ImageNet images, I examined why only a small subset of neurons exhibited strong responses to each image. This work provided insights into the sparse coding principles underlying visual representation in the brain (see poster).
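For a sense of the analysis, the sketch below extracts activations from a pretrained VGG19 (via torchvision) for a single image and computes a Treves–Rolls-style population sparseness index over the resulting units. The layer choice, the image path, and the sparseness metric are illustrative assumptions rather than the exact pipeline from the poster.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Load pretrained VGG19 and keep only the convolutional feature extractor.
vgg19 = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).eval()
features = vgg19.features

preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def population_sparseness(responses):
    """Treves-Rolls sparseness of a nonnegative response vector (closer to 1 = sparser)."""
    r = responses.clamp(min=0)
    n = r.numel()
    s = (r.mean() ** 2) / (r.pow(2).mean() + 1e-12)
    return float((1 - s) / (1 - 1 / n))

# Hypothetical image path; any RGB image stands in for an ImageNet stimulus.
img = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    acts = features[:19](img)                      # an intermediate conv stage (illustrative choice)
unit_responses = acts.mean(dim=(2, 3)).squeeze(0)  # one value per channel ("unit")
print("population sparseness:", population_sparseness(unit_responses))
```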

Decode visual stimulus preferences and functional networks in macaque primary visual cortex

The goal of this project was to study stimulus selectivity and functional connectivity of individual neurons in the primary visual cortex (V1) of an awake macaque monkey. Using a large-scale two-photon calcium imaging dataset of about 1,000 neurons, I applied Gaussian models to identify each neuron's preferred visual stimuli. This analysis revealed distinct populations, such as curve-tuned neurons encoding curvature and angle-tuned neurons encoding the degree to which angles open (see publication).
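As an illustration of per-neuron tuning estimation, the sketch below fits a one-dimensional Gaussian tuning curve to simulated trial-averaged responses over a single stimulus parameter (e.g., the opening of an angle). The simulated data and the specific model form are assumptions and may differ from the models used in the publication.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_tuning(x, amplitude, preferred, width, baseline):
    """Gaussian tuning curve over a stimulus parameter x."""
    return baseline + amplitude * np.exp(-0.5 * ((x - preferred) / width) ** 2)

# Hypothetical data: trial-averaged responses of one neuron to stimuli that
# vary along a single parameter (e.g., angle opening in degrees).
stim = np.linspace(0, 180, 19)
rng = np.random.default_rng(2)
responses = gaussian_tuning(stim, 2.0, 90.0, 25.0, 0.3) + rng.normal(0, 0.1, stim.size)

# Fit the tuning curve; the fitted "preferred" parameter is the neuron's
# estimated preferred stimulus, and amplitude/width describe its selectivity.
p0 = [responses.max() - responses.min(), stim[np.argmax(responses)], 30.0, responses.min()]
params, _ = curve_fit(gaussian_tuning, stim, responses, p0=p0)
amplitude, preferred, width, baseline = params
print(f"preferred stimulus: {preferred:.1f}, tuning width: {width:.1f}")
```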
Treating each neuron as a node and functional connections as edges, I then constructed graphical models to examine the network dynamics of the neural population. This approach allowed me to analyze how neuronal clusters reorganize when encoding different types of visual images.
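The sketch below shows one way such a network analysis could be set up: pairwise correlations define functional connections, correlations above a threshold become graph edges, and community detection groups neurons into clusters. The simulated data, the correlation threshold, and the use of networkx modularity communities are illustrative assumptions, not necessarily the graphical models used in the project.

```python
import numpy as np
import networkx as nx

# Toy stand-in for the imaging data: activity[n, t] = response of neuron n in time bin t,
# generated from a few shared latent patterns so that clusters exist to be found.
rng = np.random.default_rng(3)
n_neurons, n_bins = 200, 400
latents = rng.normal(size=(4, n_bins))
loadings = rng.normal(size=(n_neurons, 4))
activity = loadings @ latents + rng.normal(size=(n_neurons, n_bins))

# Functional connections: pairwise correlations between neurons.
corr = np.corrcoef(activity)

# Graph construction: neurons are nodes; correlations above a threshold become edges
# (the threshold here is an illustrative choice).
graph = nx.Graph()
graph.add_nodes_from(range(n_neurons))
rows, cols = np.where(np.triu(corr, k=1) > 0.3)
graph.add_edges_from(zip(rows.tolist(), cols.tolist()))

# Cluster the network; repeating this per stimulus condition would show how the
# clusters reorganize when the population encodes different image types.
communities = nx.algorithms.community.greedy_modularity_communities(graph)
print("number of clusters:", len(communities))
```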