Marina Bedny
    Behavioral and neuroimaging studies of cognition frequently test hypotheses regarding mental processing of different stimulus categories (e.g. verbs, faces, animals, scenes, etc.). The conclusions of such studies hinge upon the generalizability of their findings from the specific stimuli used in the experiment to the category as a whole. This type of generalizability is explicitly tested in behavioral studies, using "item analysis". However, generalizability to stimulus categories has until now been assumed in neuroimaging studies, without employing item analysis for statistical validation. Here we apply item analysis to a functional magnetic resonance imaging study of nouns and verbs, demonstrating its theoretical importance and feasibility. In the subject-wise analysis, a left prefrontal and a left posterior–temporal region of interest showed putative grammatical class effects. An item-wise analysis revealed, however, that only the left posterior–temporal effect was generalizable to the stimulus categories of nouns and verbs. Taken together, the findings of the subject- and item-wise analyses suggest that grammatical-class effects in the left prefrontal cortex depend on the particular word stimuli used, rather than reflecting categorical differences between nouns and verbs. This empirical example illustrates that item analysis not only is sufficiently powered to detect task-relevant changes in BOLD signal but also can make theoretically important distinctions between findings that generalize to the item populations, and those that do not.
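The subject-wise versus item-wise logic described in this abstract can be sketched in a few lines. The data below are simulated, and the effect sizes, sample sizes, and significance threshold are hypothetical; the point is only the structure of the two tests: average over items to test generalization across subjects, and average over subjects to test generalization across items.

```python
# Sketch of subject-wise vs. item-wise ("item analysis") tests on ROI betas.
# All values are simulated; a real analysis would use per-item beta estimates.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_nouns, n_verbs = 20, 30, 30

# Simulated ROI betas: rows = subjects, columns = items. A true
# grammatical-class effect shifts every verb item by a similar amount.
noun_betas = rng.normal(0.0, 1.0, (n_subjects, n_nouns))
verb_betas = rng.normal(0.5, 1.0, (n_subjects, n_verbs))

# Subject-wise analysis: average over items within each subject,
# then compare conditions across subjects (paired test).
t1, p1 = stats.ttest_rel(verb_betas.mean(axis=1), noun_betas.mean(axis=1))

# Item-wise analysis: average over subjects within each item,
# then compare the two populations of items (independent test).
t2, p2 = stats.ttest_ind(verb_betas.mean(axis=0), noun_betas.mean(axis=0))

# The effect generalizes to the stimulus categories only if both tests agree.
generalizes = (p1 < 0.05) and (p2 < 0.05)
print(f"subject-wise p={p1:.4f}, item-wise p={p2:.4f}, generalizes={generalizes}")
```

An effect driven by a handful of idiosyncratic items would pass the subject-wise test but fail the item-wise test, which is the pattern the abstract reports for left prefrontal cortex.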
    Results of neuroimaging and neuropsychological studies of frontal lobe function have been interpreted by some as evidence for specialized modules that are localized to distinct regions of frontal cortex, and that differ in both content and process from those in neighboring regions. These descriptions stand in stark contrast to the many domain-general theoretical accounts of the regulatory role of the frontal lobes in cognition. Recent attempts to understand how general regulatory mechanisms might operate across multiple domains (e.g. working memory, sentence comprehension) have increasingly shaped our understanding of the frontal lobes.
    The present study characterizes the neural correlates of noun and verb imageability and addresses the question of whether components of the neural network supporting word recognition can be separately modified by variations in grammatical class and imageability. We examined the effect of imageability on BOLD signal during single-word comprehension of nouns and verbs. Subjects made semantic similarity judgments while undergoing functional magnetic resonance imaging (fMRI). Nouns and verbs were matched on imageability, and imageability varied continuously within a grammatical category. We observed three anatomically separable effects: a main effect of grammatical class, a main effect of imageability, and an imageability by grammatical class cross-over interaction. The left superior parietal lobule and a region in the left fusiform responded similarly to increases in noun and verb imageability; the left superior temporal gyrus showed greater activity for verbs than nouns after imageability was matched across grammatical class; and, in both the left middle temporal gyrus and the left inferior frontal lobe, a decrease in noun but not verb imageability resulted in higher BOLD signal. The presence of reliable and anatomically separable main effects of both imageability and grammatical class renders unlikely the hypothesis that previously reported dissociations between nouns and verbs can be dismissed as imageability effects. However, some regions previously thought to respond to grammatical class or imageability instead respond to the interaction of these variables.
    What role does meaning selection play in word comprehension, and what neural systems support this selection process? Most words have multiple meanings and are therefore ambiguous. This is true of both homonymous words (words that have multiple unrelated meanings) and polysemous words (words that have multiple related meanings). The extant evidence indicates that meaning selection is an integral part of homonym comprehension. However, it is not known whether meaning selection extends to polysemous words, or what neural systems support meaning selection during comprehension. Prior neuroimaging and neuropsychological evidence suggest that the left inferior frontal gyrus (LIFG) may play a role in resolving competition during language processing. We therefore sought to test the hypotheses that meaning selection is part of polysemous word comprehension, and that the LIFG resolves meaning competition during word comprehension. We tested healthy participants on a version of the triplet lexical decision task, with polysemous and homonymous stimuli. Results suggest that the meanings of polysemous words, like the meanings of homonyms, are selected based on context. However, homonymous and polysemous words differed in how meaning frequency affected meaning selection. We then administered the triplet lexical decision task to patients with LIFG damage to examine whether this region plays a role in context-dependent meaning selection. Results support the hypothesis that the LIFG serves as a top-down biasing mechanism that facilitates rapid meaning selection during word comprehension. We conclude that context-dependent meaning selection is an integral part of word comprehension for both homonyms and polysemous words, and that the LIFG facilitates this selection process.
    Word comprehension engages the left ventrolateral prefrontal (lVLPFC) and posterior lateral-temporal cortices (PLTC). The contributions of these brain regions to comprehension remain controversial. We hypothesized that the PLTC activates meanings, whereas the lVLPFC resolves competition between representations. To test this hypothesis, we used functional magnetic resonance imaging (fMRI) to assess the independent effects of adaptation and competition on neural activity. Participants judged the relatedness of word pairs. Some consecutive pairs contained a common ambiguous word. The same or different meanings of this word were primed (e.g., SUMMER-FAN, CEILING-FAN; ADMIRER-FAN, CEILING-FAN). Based on the logic of fMRI adaptation, trials with more semantic overlap should produce more adaptation (less activation) in regions that activate meaning. In contrast, trials with more semantic ambiguity should produce more activation in regions that resolve competition. We observed a double dissociation between activity in the PLTC and lVLPFC. PLTC activity depended on the amount of semantic overlap, irrespective of the amount of semantic ambiguity. In contrast, lVLPFC activity depended on the amount of semantic ambiguity. Moreover, across participants the size of the competition effect as measured by errors was correlated with the size of the competition effect in the lVLPFC. We conclude that the lVLPFC is an executive mechanism within language processing.
    Several regions of the posterior-lateral-temporal cortex (PLTC) are reliably recruited when participants read or listen to action verbs, relative to other word and nonword types. This PLTC activation is generally interpreted as reflecting the retrieval of visual-motion features of actions. This interpretation supports the broader theory that concepts are composed of sensory–motor features. We investigated an alternative interpretation of the same activations: PLTC activity for action verbs reflects the retrieval of modality-independent representations of event concepts, or the grammatical types associated with them, i.e., verbs. During a functional magnetic resonance imaging scan, participants made semantic-relatedness judgments on word pairs varying in amount of visual-motion information. Replicating previous results, several PLTC regions showed higher responses to words that describe actions versus objects. However, we found that these PLTC regions did not overlap with visual-motion regions. Moreover, their response was higher for verbs than nouns, regardless of visual-motion features. For example, the response of the PLTC is equally high to action verbs (e.g., to run) and mental verbs (e.g., to think), and equally low to animal nouns (e.g., the cat) and inanimate natural kind nouns (e.g., the rock). Thus, PLTC activity for action verbs might reflect the retrieval of event concepts, or the grammatical information associated with verbs. We conclude that concepts are abstracted away from sensory–motor experience and organized according to conceptual properties.
    Conventional analyses of functional magnetic resonance imaging (fMRI) data compare the brain's response to stimulus categories (e.g., pictures of faces, stories about beliefs) across participants. In order to infer that effects observed with the specific items (a particular set of pictures or stories) are generalizable to the entire population (all faces, or all stories about beliefs), it is necessary to perform an "item analysis." Item analyses may also reveal relationships between secondary (non-hypothesized) features of the items and functional activity. Here, we perform an item analysis on a set of stories commonly used for localizing brain regions putatively involved in Theory of Mind (ToM): right and left temporo-parietal junction (RTPJ/LTPJ), precuneus (PC), superior temporal sulcus (STS) and medial prefrontal cortex (MPFC). We address the following questions: Do brain regions that comprise the ToM network respond reliably across items (i.e. different stories about beliefs)? Do these brain regions demonstrate reliable preferences for items within the category? Can we predict any region's response to individual items, by using other features of the stimuli? We find that the ToM network responds reliably to stories about beliefs, generalizing across items as well as subjects. In addition, several regions in the ToM network have reliable preferences for individual items. Linguistic features of the stimuli did not predict these item preferences.
    Humans are thought to have evolved brain regions in the left frontal and temporal cortex that are uniquely capable of language processing. However, congenitally blind individuals also activate the visual cortex in some verbal tasks. We provide evidence that this visual cortex activity in fact reflects language processing. We find that in congenitally blind individuals, the left visual cortex behaves similarly to classic language regions: (i) BOLD signal is higher during sentence comprehension than during linguistically degraded control conditions that are more difficult; (ii) BOLD signal is modulated by phonological information, lexical-semantic information, and sentence-level combinatorial structure; and (iii) functional connectivity with language regions in the left prefrontal cortex and thalamus is increased relative to sighted individuals. We conclude that brain regions that are thought to have evolved for vision can take on language processing as a result of early experience. Innate microcircuit properties are not necessary for a brain region to become involved in language processing.
    Humans reason about the mental states of others; this capacity is called Theory of Mind (ToM). In typically developing adults, ToM is supported by a consistent group of brain regions: the bilateral temporoparietal junction (TPJ), medial prefrontal cortex (MPFC), precuneus (PC), and anterior superior temporal sulci (aSTS). How experience and intrinsic biological factors interact to produce this adult functional profile is not known. In the current study we investigate the role of visual experience in the development of the ToM network by studying congenitally blind adults. In experiment 1, participants listened to stories and answered true/false questions about them. The stories were either about mental or physical representations of reality (e.g., photographs). In experiment 2, participants listened to stories about people's beliefs based on seeing or hearing; people's bodily sensations (e.g., hunger); and control stories without people. Participants judged whether each story had positive or negative valence. We find that ToM brain regions of sighted and congenitally blind adults are similarly localized and functionally specific. In congenitally blind adults, reasoning about mental states leads to activity in bilateral TPJ, MPFC, PC, and aSTS. These brain regions responded more to passages about beliefs than passages about nonbelief representations or passages about bodily sensations. Reasoning about mental states that are based on seeing is furthermore similar in congenitally blind and sighted individuals. Despite their different developmental experience, congenitally blind adults have a typical ToM network. We conclude that the development of neural mechanisms for ToM depends on innate factors and on experiences represented at an abstract, amodal level. Keywords: blindness, development, plasticity, temporoparietal junction, experience
    Many empiricist theories hold that concepts are composed of sensory-motor primitives. For example, the meaning of the word "run" is in part a visual image of running. If action concepts are partly visual, then the concepts of congenitally blind individuals should be altered in that they lack these visual features. We compared semantic judgments and neural activity during action verb comprehension in congenitally blind and sighted individuals. Participants made similarity judgments about pairs of nouns and verbs that varied in the visual motion they conveyed. Blind adults showed the same pattern of similarity judgments as sighted adults. We identified a left middle temporal gyrus (lMTG) region that putatively stores visual-motion features relevant to action verbs. The functional profile and location of this region were identical in sighted and congenitally blind individuals. Furthermore, the lMTG was more active for all verbs than nouns, irrespective of visual-motion features. We conclude that the lMTG contains abstract representations of verb meanings rather than visual-motion images. Our data suggest that conceptual brain regions are not altered by the sensory modality of learning.
    Recent evidence suggests that blindness enables visual circuits to contribute to language processing. We examined whether this dramatic functional plasticity has a sensitive period. BOLD fMRI signal was measured in congenitally blind, late blind (blindness onset at age 9 or later) and sighted participants while they performed a sentence comprehension task. In a control condition, participants listened to backwards speech and made match/non-match to sample judgments. In both congenitally and late blind participants BOLD signal increased in bilateral foveal-pericalcarine cortex during response preparation, irrespective of whether the stimulus was a sentence or backwards speech. However, left occipital areas (pericalcarine, extrastriate, fusiform and lateral) responded more to sentences than backwards speech only in congenitally blind people. We conclude that age of blindness onset constrains the non-visual functions of occipital cortex: while plasticity is present in both congenitally and late blind individuals, recruitment of visual circuits for language depends on blindness during childhood.
    Among other things, humans talk about what they perceive and do, like "glowing," "hopping," and "squeaking." What is the relationship between our sensory-motor experiences and word meanings? Does understanding action verbs rely on the same neural circuits as seeing and acting? The available evidence indicates that sensory-motor experience and word meanings are represented in distinct, but interacting systems. Understanding action verbs does not rely on early modality-specific visual or motor circuits. Instead, word comprehension relies on a network of amodal brain regions in the left frontal, temporal, and parietal cortices that represent conceptual and grammatical properties of words. Interactions between word meanings and sensory-motor experiences occur in higher-order polymodal brain regions.
    Thinking about other people's thoughts recruits a specific group of brain regions, including the temporo-parietal junctions (TPJ), precuneus (PC), and medial prefrontal cortex (MPFC). The same brain regions were recruited when children (N = 20, 5–11 years) and adults (N = 8) listened to descriptions of characters' mental states, compared to descriptions of physical events. Between ages 5 and 11 years, responses in the bilateral TPJ became increasingly specific to stories describing mental states as opposed to people's appearance and social relationships. Functional activity in the right TPJ was related to children's performance on a high-level theory of mind task. These findings provide insights into the origin of neural mechanisms of theory of mind, and how behavioral and neural changes can be related in development.
    Cross-modal plasticity refers to the recruitment of cortical regions involved in the processing of one modality (e.g. vision) for processing other modalities (e.g. audition). The principles determining how and where cross-modal plasticity occurs remain poorly understood. Here, we investigate these principles by testing responses to auditory motion in visual motion area MT+ of congenitally blind and sighted individuals. Replicating previous reports, we find that MT+ as a whole shows a strong and selective response to auditory motion in congenitally blind but not sighted individuals, suggesting that the emergence of this univariate response depends on experience. Importantly, however, multivoxel pattern analyses showed that MT+ contained information about different auditory motion conditions in both blind and sighted individuals. These results were specific to MT+ and not found in early visual cortex. Basic sensitivity to auditory motion in MT+ is thus experience-independent, which may be a basis for the region's strong cross-modal recruitment in congenital blindness.
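The contrast between univariate and multivoxel analyses described in this abstract can be illustrated with a small simulation: a region's spatial pattern can carry condition information even when its mean response does not differ between conditions. The data, voxel counts, and classifier choice below are all hypothetical.

```python
# Sketch of univariate vs. multivoxel pattern analysis (MVPA) on simulated data.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials_per_cond, n_voxels = 40, 50

# Two auditory-motion conditions with the same mean activation overall but
# different spatial patterns across voxels.
pattern_a = rng.normal(0, 1, n_voxels)
pattern_b = rng.normal(0, 1, n_voxels)
X = np.vstack([
    pattern_a + rng.normal(0, 2, (n_trials_per_cond, n_voxels)),
    pattern_b + rng.normal(0, 2, (n_trials_per_cond, n_voxels)),
])
y = np.array([0] * n_trials_per_cond + [1] * n_trials_per_cond)

# Univariate view: the mean response across voxels barely differs.
univariate_diff = X[y == 0].mean() - X[y == 1].mean()

# Multivariate view: a cross-validated linear classifier decodes the condition
# from the spatial pattern of activity.
accuracy = cross_val_score(LinearSVC(dual=False), X, y, cv=5).mean()
print(f"mean-signal difference={univariate_diff:.3f}, decoding accuracy={accuracy:.2f}")
```

This mirrors the sighted-group result: no univariate auditory-motion response in MT+, yet above-chance pattern decoding.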
    What is the relationship between our perceptual and linguistic neural representations of the same event? We approached this question by asking whether visual perception of motion and understanding linguistic depictions of motion rely on the same neural architecture. The same group of participants took part in two language tasks and one visual task. In task 1, participants made semantic similarity judgments with high motion (e.g., "to bounce") and low motion (e.g., "to look") words. In task 2, participants made plausibility judgments for passages describing movement ("A centaur hurled a spear...") or cognitive events ("A gentleman loved cheese..."). Task 3 was a visual motion localizer in which participants viewed animations of point-light walkers, randomly moving dots, and stationary dots changing in luminance. Based on the visual motion localizer we identified classic visual motion areas of the temporal (MT/MST and STS) and parietal cortex (inferior and superior parietal lobules). We find that these visual cortical areas are largely distinct from neural responses to linguistic depictions of motion. Motion words did not activate any part of the visual motion system. Motion passages produced a small response in the right superior parietal lobule, but none of the temporal motion regions. These results suggest that (1) as compared to words, rich language stimuli such as passages are more likely to evoke mental imagery and more likely to affect perceptual circuits and (2) effects of language on the visual system are more likely in secondary perceptual areas as compared to early sensory areas. We conclude that language and visual perception constitute distinct but interacting systems.
    Human cortex is comprised of specialized networks that support functions such as visual motion perception and language processing. How do genes and experience contribute to this specialization? Studies of plasticity offer unique insights into this question. In congenitally blind individuals, "visual" cortex responds to auditory and tactile stimuli. Remarkably, recent evidence suggests that occipital areas participate in language processing. We asked whether in blindness, occipital cortices: (1) develop domain-specific responses to language and (2) respond to a highly specialized aspect of language: syntactic movement. Nineteen congenitally blind and 18 sighted participants took part in two fMRI experiments. We report that in congenitally blind individuals, but not in sighted controls, "visual" cortex is more active during sentence comprehension than during a sequence memory task with nonwords, or a symbolic math task. This suggests that areas of occipital cortex become selective for language, relative to other similar higher-cognitive tasks. Crucially, we find that these occipital areas respond more to sentences with syntactic movement but do not respond to the difficulty of math equations. We conclude that regions within the visual cortex of blind adults are involved in syntactic processing. Our findings suggest that the cognitive function of human cortical areas is largely determined by input during development.
    Blind people's inferences about how other people see provide a window into fundamental questions about the human capacity to think about one another's thoughts. By working with blind individuals, we can ask both what kinds of representations people form about others' minds, and how much these representations depend on the observer having had similar mental states themselves. Thinking about others' mental states depends on a specific group of brain regions, including the right temporo-parietal junction (RTPJ). We investigated the representations of others' mental states in these brain regions, using multivoxel pattern analyses (MVPA). We found that, first, in the RTPJ of sighted adults, the pattern of neural response distinguished the source of the mental state (did the protagonist see or hear something?) but not the valence (did the protagonist feel good or bad?). Second, these neural representations were preserved in congenitally blind adults. These results suggest that the temporo-parietal junction contains explicit, abstract representations of features of others' mental states, including the perceptual source. The persistence of these representations in congenitally blind adults, who have no first-person experience with sight, provides evidence that these representations emerge even in the absence of relevant first-person perceptual experiences.
    In congenital blindness, the occipital cortex responds to a range of nonvisual inputs, including tactile, auditory, and linguistic stimuli. Are these changes in functional responses to stimuli accompanied by altered interactions with non-visual functional networks? To answer this question, we introduce a data-driven method that searches across cortex for functional connectivity differences across groups. Replicating prior work, we find increased fronto-occipital functional connectivity in congenitally blind relative to blindfolded sighted participants. We demonstrate that this heightened connectivity extends over most of occipital cortex but is specific to a subset of regions in the inferior, dorsal, and medial frontal lobe. To assess the functional profile of these frontal areas, we used an n-back working memory task and a sentence comprehension task. We find that, among prefrontal areas with overconnectivity to occipital cortex, one left inferior frontal region responds to language over music. By contrast, the majority of these regions responded to working memory load but not language. These results suggest that in blindness occipital cortex interacts more with working memory systems and raise new questions about the function and mechanism of occipital plasticity.
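A group comparison of fronto-occipital functional connectivity, of the kind this abstract describes, reduces to correlating regional time courses within each participant and then testing the (Fisher z-transformed) correlations across groups. The sketch below uses simulated time courses; the region labels, group sizes, and coupling strengths are hypothetical.

```python
# Sketch of a seed-based functional connectivity group comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_timepoints, n_per_group = 200, 15

def connectivity(shared_weight):
    """Correlation between an occipital and a frontal time course that
    share a common signal with the given weight."""
    shared = rng.normal(0, 1, n_timepoints)
    occipital = shared_weight * shared + rng.normal(0, 1, n_timepoints)
    frontal = shared_weight * shared + rng.normal(0, 1, n_timepoints)
    return np.corrcoef(occipital, frontal)[0, 1]

# Simulate stronger fronto-occipital coupling in the blind group.
blind = [connectivity(1.0) for _ in range(n_per_group)]
sighted = [connectivity(0.3) for _ in range(n_per_group)]

# Fisher z-transform the correlations before the group test, as is standard.
t, p = stats.ttest_ind(np.arctanh(blind), np.arctanh(sighted))
print(f"t={t:.2f}, p={p:.4f}")
```

The data-driven method in the paper effectively repeats such a comparison across many cortical locations, which is why it then restricts interpretation to regions surviving that search.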
    Plasticity in the visual cortex of blind individuals provides a rare window into the mechanisms of cortical specialization. In the absence of visual input, occipital ("visual") brain regions respond to sound and spoken language. Here, we examined the time course and developmental mechanism of this plasticity in blind children. Nineteen blind and 40 sighted children and adolescents (4–17 years old) listened to stories and two auditory control conditions (unfamiliar foreign speech, and music). We find that "visual" cortices of young blind (but not sighted) children respond to sound. Responses to nonlanguage sounds increased between the ages of 4 and 17. By contrast, occipital responses to spoken language were maximal by age 4 and were not related to Braille learning. These findings suggest that occipital plasticity for spoken language is independent of plasticity for Braille and for sound. We conclude that in the absence of visual input, spoken language colonizes the visual system during brain development. Our findings suggest that early in life, human cortex has a remarkably broad computational capacity. The same cortical tissue can take on visual perception and language functions.
    Events (e.g., " running " or " eating ") constitute a basic type within human cognition and human language. We asked whether thinking about events, as compared to other conceptual categories, depends on partially independent neural... more
    Events (e.g., "running" or "eating") constitute a basic type within human cognition and human language. We asked whether thinking about events, as compared to other conceptual categories, depends on partially independent neural circuits. Indirect evidence for this hypothesis comes from previous studies showing elevated posterior temporal responses to verbs, which typically label events. Neural responses to verbs could, however, be driven either by their grammatical or by their semantic properties. In the present experiment, we separated the effects of grammatical class (verb vs. noun) and semantic category (event vs. object) by measuring neural responses to event nouns (e.g., "the hurricane"). Participants rated the semantic relatedness of event nouns, as well as of two categories of object nouns (animals, e.g., "the alligator", and plants, e.g., "the acorn") and three categories of verbs (manner of motion, e.g., "to roll", emission, e.g., "to sparkle", and perception, e.g., "to gaze"). As has previously been observed, we found larger responses to verbs than to object nouns in the left posterior middle (LMTG) and superior (LSTG) temporal gyri. Crucially, we also found that the LMTG responds more to event than to object nouns. These data suggest that part of the posterior lateral temporal response to verbs is driven by their semantic properties. By contrast, a more superior region, at the junction of the temporal and parietal cortices, responded more to verbs than to all nouns, irrespective of their semantic category. We concluded that the neural mechanisms engaged when thinking about event and object categories are partially dissociable.
    Numerous theories have been proposed regarding the brain's organization and retrieval of lexical information. Neurophysiological dissociations in processing different word classes, particularly nouns and verbs, have been extensively documented, supporting the contribution of grammatical class to lexical organization. However, the contribution of semantic properties to these processing differences is still unresolved. We aim to isolate this contribution by comparing ERPs to verbs (e.g. wade), object nouns (e.g. cookie), and event nouns (e.g. concert) in a paired similarity judgment task, as event nouns share grammatical category with object nouns but some semantic properties with verbs. We find that event nouns pattern with verbs in eliciting a more positive response than object nouns across left anterior electrodes 300–500 ms after word presentation. This time-window has been strongly linked to lexical-semantic access by prior electrophysiological work. Thus, the similarity of the response to words referring to concepts with more complex participant structure and temporal continuity extends across grammatical class (event nouns and verbs), and contrasts with words that refer to objects (object nouns). This contrast supports a semantic, as well as syntactic, contribution to the differential neural organization and processing of lexical items. We also observed a late (500–800 ms post-stimulus) posterior positivity for object nouns relative to event nouns and verbs at the second word of each pair, which may reflect the impact of semantic properties on the similarity judgment task.
    In humans, the ability to reason about mathematical quantities depends on a frontoparietal network that includes the intraparietal sulcus (IPS). How do nature and nurture give rise to the neurobiology of numerical cognition? We asked how visual experience shapes the neural basis of numerical thinking by studying numerical cognition in congenitally blind individuals. Blind (n = 17) and blindfolded sighted (n = 19) participants solved math equations that varied in difficulty (e.g., 27 − 12 = x vs. 7 − 2 = x), and performed a control sentence comprehension task while undergoing fMRI. Whole-cortex analyses revealed that in both blind and sighted participants, the IPS and dorsolateral prefrontal cortices were more active during the math task than the language task, and activity in the IPS increased parametrically with equation difficulty. Thus, the classic frontoparietal number network is preserved in the total absence of visual experience. However, surprisingly, blind but not sighted individuals additionally recruited a subset of early visual areas during symbolic math calculation. The functional profile of these "visual" regions was identical to that of the IPS in blind but not sighted individuals. Furthermore, in blindness, number-responsive visual cortices exhibited increased functional connectivity with prefrontal and IPS regions that process numbers. We conclude that the frontoparietal number network develops independently of visual experience. In blindness, this number network colonizes parts of deafferented visual cortex. These results suggest that human cortex is highly functionally flexible early in life, and point to frontoparietal input as a mechanism of cross-modal plasticity in blindness.

Keywords: plasticity | blindness | number | development | vision

Numerical reasoning pervades modern human culture. We readily represent quantity, whether thinking about apples, hours, people, or ideas.
It has been suggested that this competence is rooted in a primitive nonsymbolic system of numerical representation that is shared among adults of diverse cultures, as well as with preverbal infants and nonhuman animals (1, 2). This nonsymbolic system allows these populations to estimate numbers of visual or auditory items and to compute over these quantities. For example, infants and monkeys can detect which of two arrays contains more items, and can add and subtract approximate quantities (1–4). The nonverbal, nonsymbolic system underlying this performance represents number in an inherently approximate way (5). However, numerate humans also have the unique ability to reason about quantities precisely using an acquired system of number symbols (5). Reasoning about approximate and exact number depends on a frontoparietal network, a key node of which is the intraparietal sulcus (IPS) (6). The IPS is active when participants estimate the number of items in a nonsymbolic display as well as when they solve symbolic math problems (e.g., 23 − 19 = x), with more IPS activity during hard math problems than easier ones (6, 7). Temporary deactivation of the IPS with transcranial magnetic stimulation (TMS) impairs performance on numerical tasks (8). In monkeys, the IPS contains neurons that are tuned to specific numerosities (9). Although these findings highlight the critical role of the IPS in numerical reasoning, the developmental origins of the neural basis of number representations remain largely unknown. IPS activity during numerical processing is seen in children as young as 4 y old, but these children have had years of experience with numerical information (10). How does the nature of early experience affect the development of the IPS? Here we investigated this question by probing numerical representations following atypical perceptual experience.
Specifically, we tested the role of visual experience in the development of numerical representations by studying individuals who are blind from birth. One possibility is that number is represented differently in blindness, because representations of number in the IPS are fundamentally visuospatial and develop from accumulated experience with seeing sets of items. Like early visual features such as color, contrast, and orientation, numerosity is susceptible to aftereffects. For example, viewing a large quantity of dots causes a subsequent set to be perceived as less numerous than its true quantity (11). Numerosity judgments are also influenced by the visual spatial frequency of arrays (12), suggesting that numerical estimation may tap a form of visual texture perception (13). Furthermore, the neuroanatomical location of number responses in the posterior parietal lobe is consistent with the suggestion that numerical processing is partially visual in nature (14, 15). The parietal lobe plays a central role in visuospatial processing: it is involved in guiding hand and eye movements, orienting spatial attention, mentally rotating objects, and maintaining spatial information in working memory (14, 16, 17). Hierarchical generative models trained on visual arrays develop "numerosity detectors" akin to the number neurons found in monkey IPS (18). Together, these findings suggest that visual experience may play a foundational role in the development of IPS number representations. An alternative hypothesis is that IPS representations of number are modality independent. In sighted adults, the neurobiological underpinnings of number are similar across sensory modalities and input formats; the IPS is active not only when adults estimate

Significance

Human numerical reasoning relies on a cortical network that includes frontal and parietal regions.
We asked how the neural basis of numerical reasoning is shaped by experience by comparing congenitally blind and sighted individuals. Participants performed auditory math and language tasks while undergoing fMRI. Both groups activated frontoparietal number regions during the math task, suggesting that some aspects of the neural basis of numerical cognition develop independently of visual experience. However, blind participants additionally recruited early visual cortices that, in sighted populations, perform visual processing. In blindness, these "visual" areas showed sensitivity to mathematical difficulty. These results suggest that experience can radically change the neural basis of numerical thinking. Hence, human cortex has a broad computational capacity early in development.
    Language processing depends on a left-lateralized network of frontotemporal cortical regions. This network is remarkably consistent across individuals and cultures. However, there is also evidence that developmental factors, such as delayed exposure to language, can modify this network. Recently, it has been found that, in congenitally blind individuals, the typical frontotemporal language network expands to include parts of "visual" cortices. Here, we report that blindness is also associated with reduced left lateralization in frontotemporal language areas. We analyzed fMRI data from two samples of congenitally blind adults (n = 19 and n = 13) and one sample of congenitally blind children (n = 20). Laterality indices were computed for sentence comprehension relative to three different control conditions: solving math equations (Experiment 1), a memory task with nonwords (Experiment 2), and a "does this come next?" task with music (Experiment 3). Across experiments and participant samples, the frontotemporal language network was less left-lateralized in congenitally blind than in sighted individuals. Reduction in left lateralization was not related to Braille reading ability or amount of occipital plasticity. Notably, we observed a positive correlation between the lateralization of frontotemporal cortex and that of language-responsive occipital areas in blind individuals. Blind individuals with right-lateralized language responses in frontotemporal cortices also had right-lateralized occipital responses to language. Together, these results reveal a modified neurobiology of language in blindness. Our findings suggest that, despite its usual consistency across people, the neurobiology of language can be modified by nonlinguistic experiences.
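A laterality index of the kind computed in Experiments 1–3 is conventionally defined as (L − R) / (L + R), where L and R summarize activation in homologous left- and right-hemisphere regions. A minimal sketch of this conventional formula follows; the use of suprathreshold voxel counts as the activation summary is an illustrative assumption, not necessarily the paper's exact measure:

```python
def laterality_index(left_activation: float, right_activation: float) -> float:
    """Conventional laterality index: +1 means fully left-lateralized,
    -1 means fully right-lateralized, 0 means bilateral.

    Inputs are any nonnegative activation summary (e.g., counts of
    suprathreshold voxels) from homologous left/right regions.
    """
    total = left_activation + right_activation
    if total == 0:
        raise ValueError("no activation in either hemisphere")
    return (left_activation - right_activation) / total

# Example: 120 suprathreshold voxels on the left, 80 on the right
print(laterality_index(120, 80))  # 0.2 -> weakly left-lateralized
```

A positive correlation between such indices for frontotemporal and occipital regions, as reported above, would mean that participants with more negative frontotemporal indices also tend to have more negative occipital indices.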
    What are concepts made of? One prominent theory assumes that concepts are comprised of sensory-motor features distributed throughout the sensory-motor cortices. For example, the meaning of the word "kick" is partially represented in the visual motion regions that are activated during the observation of kicking. This theory makes specific predictions about concepts that have motion properties: they are represented in
    Activity theory has an extensive history in the Soviet Union dating back to the works of Vygotsky and his followers. Activity (or "deyatel'nost" in Russian) refers to a coherent system of internal mental processes and external behaviors and motivations that are combined and organized by the mechanisms of self-regulation to achieve a conscious goal. Activity theory is emerging as a new paradigm for psychology and an interdisciplinary approach to human sciences in Europe. Furthermore, scientists are writing about the internationalization of this approach. For example, activity theory exerted a great influence on the development of ACTION THEORY in Germany. In the United States, activity theory is associated with the sociocultural approach initiated by Vygotsky. However, much data in this area remains unknown to the English-speaking world. This article introduces and discusses an important principle of activity theory, that is, "the principle of unity of cognition and behavior." Because this principle is important to the study of work behavior, we believe that its introduction to the English-speaking scientific community will generate new ideas in the area of human performance.