Dear IPSY and IoNS members,
Please join us on March 16th at 10.30 in room E139 for the seminar by
Dr. Giacomo Handjaras (IMT School for Advanced Studies Lucca):
Modeling cross-modal correspondences through fMRI
The ability to combine signals across different sensory modalities is essential for efficient interaction with the external world. To this end, the brain must detect information conveyed by the different senses, couple events that are coherent in space and time, and thereby solve the correspondence problem. Evidence exists that basic multisensory processing is already present in newborns, whereas audiovisual experience appears to be critical for the development of more complex multisensory computations across the lifespan. Nonetheless, the extent to which audiovisual experience is a mandatory prerequisite for the brain to develop the ability to detect features shared between the senses remains undefined. Here, we used functional magnetic resonance imaging (fMRI) to test brain synchronization during the presentation of an audiovisual, audio-only, or video-only version of the same narrative in distinct groups of sensory-deprived (congenitally blind and deaf) and typically developed individuals. Taking advantage of computational modeling, we obtained a fine-grained description of the naturalistic stimulation by extracting perceptual features from both the auditory and visual streams, and semantic properties of the narrative from large language models. Intersubject correlation analysis revealed that the superior temporal cortex was synchronized across the auditory and visual conditions, even in sensory-deprived individuals who lack any audiovisual experience. This synchronization was primarily mediated by low-level perceptual features and relied on a similar, modality-independent topographical organization of slow temporal dynamics. This evidence suggests that the superior temporal cortex is endowed with a functional scaffolding to yield a common representation across multisensory events.
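For those less familiar with intersubject correlation (ISC), the core computation can be sketched in a few lines: for each voxel, one subject's response time course is correlated with the average time course of the remaining subjects, and high values indicate stimulus-driven synchronization. Below is a minimal leave-one-out sketch in Python (NumPy only); the array shapes, function name, and synthetic data are illustrative assumptions, not the speaker's actual pipeline.

    import numpy as np

    def intersubject_correlation(data):
        """Leave-one-out intersubject correlation (ISC).

        data : array of shape (n_subjects, n_timepoints, n_voxels)
            BOLD time series, assumed to be temporally aligned across subjects
            (everyone received the same narrative stimulation).

        Returns an (n_subjects, n_voxels) array with the Pearson correlation
        between each subject and the average of all remaining subjects.
        """
        n_subj = data.shape[0]
        # z-score each voxel's time course within each subject
        z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
        isc = np.empty((n_subj, data.shape[2]))
        for s in range(n_subj):
            others = np.delete(z, s, axis=0).mean(axis=0)       # leave-one-out group average
            others = (others - others.mean(0)) / others.std(0)  # re-standardize voxel-wise
            isc[s] = (z[s] * others).mean(axis=0)               # Pearson r per voxel
        return isc

    # Toy example: 10 "subjects", 300 time points, 500 voxels sharing a common signal
    rng = np.random.default_rng(0)
    shared = rng.standard_normal((300, 500))                    # stimulus-driven component
    data = shared[None] + rng.standard_normal((10, 300, 500))   # plus subject-specific noise
    print(intersubject_correlation(data).mean())                # clearly positive (about 0.67)

In the study described above, the same logic is applied across groups and stimulus versions (audiovisual, audio-only, video-only) rather than within a single homogeneous group.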
And on the same day, March 16th, at 14.30 in Salle du Conseil A224 for the seminar by
Dr. Luca Cecchetti (IMT School for Advanced Studies Lucca):
Decoding brain activity: classification, inference, and related issues
In the last two decades, decoding studies have become increasingly popular in the neuroimaging literature. The central tenets of decoding are: (1) that distinct classes of stimuli or tasks exist - e.g., animals versus tools; (2) that the stimulus features defining a specific class are known and under experimental control - e.g., animals, but not tools, are living creatures capable of social interaction; and (3) that the brain responds differentially to each class - e.g., animals evoke a response in lateral, rather than medial, ventral occipito-temporal cortex (VOTC). Researchers operationalize the decoding of brain activity in terms of supervised learning and, in the case of above-chance accuracy, infer that a specific region contains information about the class defined by those features. However, the complexity of the stimuli employed in human neuroscience makes it impractical to control for all the alternative categorizations not considered by researchers during study planning. This may have (at least) two detrimental effects. First, more than one confusion matrix may describe the stimuli, and there is no reason to believe that stimuli are evenly distributed across these alternative descriptions. One practical implication of such imbalance is that accuracy no longer represents an adequate metric for assessing classification performance. Second, and most importantly, successful decoding of brain activity is not sufficient to determine the information content of a specific region. Using neuroimaging data collected from twenty participants and a well-established language comprehension paradigm, I present empirical evidence that these issues occur in actual neuroimaging experiments. In the current data, classification accuracy is highly biased toward sensitivity, and the brain regions that classify meaningful from non-meaningful speech extend well beyond the canonical language network. Interestingly, when compared with meta-analytic evidence, maps based on other performance metrics (e.g., precision) are more useful for delineating language-selective regions. I discuss possible approaches to mitigate these issues and how decoding results should be interpreted in neuroimaging studies.
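The consequence of class imbalance for accuracy is easy to demonstrate with a toy example: when one class dominates, a decoder can reach high accuracy (and perfect sensitivity) without carrying any information beyond the base rate, which only metrics such as precision or balanced accuracy expose. A small sketch in Python using scikit-learn; the 80/20 split and the "meaningful vs. non-meaningful speech" labels are purely illustrative assumptions, not the study's data.

    import numpy as np
    from sklearn.metrics import (accuracy_score, balanced_accuracy_score,
                                 precision_score, recall_score)

    # Hypothetical imbalanced design: 80 "meaningful speech" trials (label 1)
    # and 20 "non-meaningful speech" trials (label 0)
    y_true = np.array([1] * 80 + [0] * 20)

    # A degenerate "decoder" that always predicts the majority class
    y_pred = np.ones_like(y_true)

    print(accuracy_score(y_true, y_pred))           # 0.80 -- looks like above-chance decoding
    print(recall_score(y_true, y_pred))             # 1.00 -- sensitivity is saturated
    print(precision_score(y_true, y_pred))          # 0.80 -- exactly the base rate, no gain
    print(balanced_accuracy_score(y_true, y_pred))  # 0.50 -- i.e., chance level

In this toy case, high accuracy is driven entirely by sensitivity; metrics that account for the base rate reveal that the classifier conveys no class information at all, which is the kind of dissociation the talk examines in real neuroimaging data.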