
Double dissociations emerge in a “flat” attractor network

Poster A87 in Poster Session A, Tuesday, October 24, 10:15 am - 12:00 pm CEST, Espace Vieux-Port

Ihintza Malharin¹, James S. Magnuson¹,²,³; ¹BCBL, Basque Center on Cognition, Brain and Language; ²Ikerbasque; ³University of Connecticut

Double dissociations were long considered a gold standard for establishing functional modularity. For example, one patient with a selective impairment in processing abstract words and another with a selective impairment for concrete words would suggest separable representations and/or functions. There have been multiple computational demonstrations, however, that double dissociations can emerge without modularity. Most relevant for us is Plaut’s (1995) demonstration that abstract vs. concrete double dissociations can be observed after damaging an attractor network with separate orthographic, semantic, and phonological layers. Damage to an orthographic-to-hidden pathway led mainly to abstract deficits, while damage to a semantic-to-cleanup pathway led mainly to concrete deficits. However, random damage to either pathway could result in either kind of deficit. Because, under classic double-dissociation logic, a single patient with a complementary selective impairment is sufficient to support a modularity hypothesis, the finding that random damage to the same pathway can lead to different deficits in different damaged networks supports the conclusion that double dissociations can arise without underlying modularity. Plaut’s model was complex, with 7 sets of units and 13 layers of connections. We investigated whether double dissociations would emerge in a simpler network with two sets of units and two layers of connections. We used a new variant of Cree, McRae, and McNorgan’s (2006) attractor network. Our network takes phonological features (representing multiple phonemes simultaneously) as input and maps them directly to a semantic layer with recurrent connections: every semantic node is connected to every other semantic node, and after input is applied, activation cycles 10 times, with the model trained via backpropagation. Semantic patterns and words (60 concrete and 20 abstract) were based on Plaut & Shallice (1993). Concrete words tend to have more semantic features than abstract words, and some features are more likely in concrete words, while others are more likely in abstract words. We trained 10 different networks with randomly initialized weights on these items. After training (which resulted in the networks activating the correct semantics for each of the 80 words), we created 10 copies of each network and randomly damaged 10-80% of connections (in steps of 10%) between phonological and semantic nodes in one simulation, and 10-80% of recurrent semantic connections in another. Double dissociations were apparent at every level of phonological-to-semantic damage. Semantic-to-semantic damage led only to concrete deficits. The presence of double dissociations at different degrees of damage in each model reconfirms Plaut’s (1995) findings in a much “flatter” architecture, with less potential for modularity. The tendency toward concrete impairments given damage at the semantic attractor level is at once surprising and revealing: it demonstrates the division of labor (and partial modularity) that emerges in this network. We will discuss theoretical implications, as well as next steps in this research program.
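
To make the architecture and lesioning procedure concrete, the following is a minimal sketch in PyTorch (our choice of framework; the abstract does not specify an implementation). The class and function names, layer sizes, sigmoid nonlinearity, and the choice to keep the phonological input applied on every cycle are illustrative assumptions, not reported details of the actual model.

    import torch
    import torch.nn as nn

    class FlatAttractorNet(nn.Module):
        # Hypothetical sketch of the "flat" network: phonological features map
        # directly to a fully recurrent semantic layer, with no hidden or
        # cleanup layers. Layer sizes here are illustrative guesses.
        def __init__(self, n_phon=200, n_sem=100, n_cycles=10):
            super().__init__()
            self.phon_to_sem = nn.Linear(n_phon, n_sem)  # input pathway
            # Recurrent attractor pathway. Note: a full linear layer also
            # includes self-connections, which the described network
            # ("every node to every OTHER node") may exclude.
            self.sem_to_sem = nn.Linear(n_sem, n_sem)
            self.n_cycles = n_cycles

        def forward(self, phon):
            # Assumed dynamics: phonological input stays applied while
            # activation cycles 10 times through the recurrent connections.
            sem = torch.zeros(phon.shape[0], self.sem_to_sem.in_features)
            for _ in range(self.n_cycles):
                sem = torch.sigmoid(self.phon_to_sem(phon) + self.sem_to_sem(sem))
            return sem

    def lesion(layer, proportion):
        # Randomly zero a given proportion of a layer's connections,
        # approximating the 10%-80% random damage used in the simulations.
        with torch.no_grad():
            mask = (torch.rand_like(layer.weight) >= proportion).float()
            layer.weight *= mask

Under these assumptions, lesion(net.phon_to_sem, 0.3) applied to a trained copy would implement 30% phonological-to-semantic damage, and lesion(net.sem_to_sem, 0.3) the corresponding recurrent-semantic damage; training the intact network on the 80 phonology-to-semantics pairs via backpropagation (e.g., a standard loss on the final cycle's semantic activations) is omitted here.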

Topic Areas: Computational Approaches, Disorders: Acquired
