Language Production and Lexicalisation

My core text for this was The Psychology of Language by Harley. This gives all the detail you need on the subject. Needs concentration though! Concepts such as ‘lemmas’ can be tough.

 

 

For additional evaluation & comprehension, I backed it up with reading from Cognitive Psychology: A Student’s Handbook by Eysenck & Keane. You’ll find it more readable. Might be an idea to start here and go to Harley for more detail. Note: Eysenck & Keane often cite Harley!

 

 

 

GENERAL/ DEFINITIONS

Psycholinguistics: the study of the psychological processes involved in language

Language: a communication system enabling us to talk about anything, irrespective of time & space

Lexicon: all the info/ pointers to info we know about a word – sound, meaning, look, syntax

Priming: affecting a response to a target by showing an item before it (facilitation/ inhibition)

Priming: if 2 things are involved together in processing they’ll assist/ interfere with each other

Connectionism: computer simulations with simple interconnected processing units w/o rules

Lexicalization: turning thoughts underlying words into sounds/ going from semantics to sound

Lexical selection: choosing a word

Phonological encoding: retrieving phonological forms of these words

Broca’s aphasia: non-fluent speech & grammatical errors

Wernicke’s aphasia: impaired comprehension but fluent speech with content words missing

Anomia: impaired ability in naming objects

Agrammatism: speech production lacks grammatical structure & word endings missing

 

INTRO

Language is for communication, expressing emotion, social interaction, thinking, humour, recording facts…

3 broad processes in speech production:

1) conceptualization: determining what to say

2) formulation: translating conceptual representation into linguistic form (incl. lexicalisation)

3) execution: detailed phonetic & articulatory planning

 

HOW MANY STAGES IN LEXICALIZATION?

– agreement that lexicalization is a 2 stage process: 1. meaning-based; 2. phonology-based

– agreement on at least 1 stage of lexical representation whose units correspond to whole words

– disagree about type and functions of this representation

 

LEVELT (1989) – LEMMA THEORY – 2 STAGE MODEL OF LEXICALIZATION


Lemma: a representation of a word intermediate between the semantic level/representation & the phonological level/representation

Lemmas are specified syntactically and semantically but not phonologically

Lemma selection: 1st stage specifying in pre-phonological abstract way

Phonological form selection (‘lexeme’): 2nd stage specifying actual concrete phonological form of the word

  • two layers of lexical representation
  • support for 2 stage model but evidence for existence of lemmas debatable
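The two stages can be pictured as two lookups in sequence — a minimal sketch in Python, with all lexicon entries and names invented for illustration (not Levelt's actual representations): stage 1 retrieves a lemma carrying syntax and semantics but no sound; stage 2 retrieves the phonological form (lexeme).

```python
# Toy two-stage lexicalisation. All entries are illustrative, not a real lexicon.

LEMMAS = {
    # meaning key -> lemma: syntactically specified, but no phonology
    "feline pet": {"word": "cat", "category": "noun", "number": "singular"},
    "to pursue":  {"word": "chase", "category": "verb"},
}

WORD_FORMS = {
    # lemma -> phonological form (lexeme), only retrieved at stage 2
    "cat":   ["k", "ae", "t"],
    "chase": ["ch", "ey", "s"],
}

def lexicalise(meaning):
    lemma = LEMMAS[meaning]               # stage 1: lemma selection (pre-phonological)
    phonemes = WORD_FORMS[lemma["word"]]  # stage 2: phonological form selection
    return lemma, phonemes

# A tip-of-the-tongue state corresponds to stage 1 succeeding (we know it's a
# singular noun) while stage 2 fails (we can't retrieve ["k", "ae", "t"]).
```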

 

TWO STAGE MODEL SUPPORT – EXPERIMENTAL EVIDENCE

Kempen & Huijbers (1983)

– studied time people take before speaking when describing scenes

– we don’t start speaking until we have identified all the content

– we can’t start the first word until we’ve accessed all the lemmas & 1st phono word form

 

Wheeldon & Monsell (1992) – repetition priming

– picture naming is facilitated by saying a name or reading aloud

– previously saying a homophone (e.g. weight for wait) isn’t a good prime

– so facilitation can’t be phonologically mediated; must be semantic/ lemma based

 

Levelt et al (1991) – picture/ word interference paradigm

– subjects see pictures they have to name as quickly as possible

– also hear a word that they have to make a lexical decision about

– words prime semantic neighbours early on; later on they prime phonological neighbours

– so early stage where semantic candidates active & late stage where phono forms active

 

Bloem & La Heij (2003) – semantic interference

– evidence for 2 stages & competition bet. lexical items in the first stage

– subjects had to name pictures with distractor words on top

– naming time was longer when picture and word were semantically related

– longer bec. semantic competitors activated slowing down selection of lexical target

 

Harley (2008) – tip of the tongue (TOT) study review

– caused by success in semantic processing but failure in phonological processing

– so supports 2 stage model

 

Badecker et al (1995) – Italian Dante, anomia

– could give gender of words he couldn’t say: this is info encoded in lemmas

– he had access to lemmas but not connected phonological forms

– supports idea of lemmas

 

TWO STAGE MODEL SUPPORT – NEUROPHYSIOLOGICAL EVIDENCE

Levelt & Indefrey (2004)

– different regions of brain activated in time sequence as we produce words

– e.g. conceiving the word = middle temporal gyrus; accessing the phonological code = Wernicke’s area; preparing the sounds = Broca’s area

– lesions to these areas impair word naming, accessing word meaning or sounds

 

AGAINST LEMMAS

Caramazza & Miozzo (1997)

1 – it shouldn’t be possible to get phonological info without getting syntactic info (e.g. gender)

– this is because phonological stage can only be reached via lemma stage

– but Italians can get partial phono info when they can’t get gender & vice versa

2 – also lemmas are amodal & syntactically specified so grammatical problems shouldn’t be modality specific

– but SJD had difficulty producing verbs in writing but not speaking

– could produce nouns equally well in writing & speaking

 

Harley (2008) – Homophones

– Lemma model says nun & none have different lemmas, but same lexeme (phono)

– could be that they just have different lexemes

– if studies find a distinction bet. homophones then this supports Lemma model? (p418)

– conflict in studies & data trying to determine this

 


 

DELL’S (1986) – SPREADING ACTIVATION THEORY

  1. semantic level: meaning of what’s to be said
  2. syntactic level: grammatical structure of the words
  3. morphological level: morphemes (basic units of meaning) in the planned sentence
  4. phonological level: phonemes (basic units of sound)

– processing in parallel and interactively: any level can influence any other level

– spreading activation: a node (word, categories etc) causes activation to spread to related nodes

– there is a lexicon (dictionary) in the form of a connectionist network

– there are category rules at all levels; also a ‘syntactic traffic cop’

– most highly activated node belonging to appropriate category is chosen

– after selection its activation level is set to zero so it’s not selected again
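This selection rule can be sketched as a toy network in Python (nodes, weights and the lexicon are all invented for illustration — not Dell's actual parameters): activation spreads from a concept node to connected word nodes, the most active node of the appropriate category wins, and the winner's activation is set to zero so it isn't selected twice.

```python
# Toy spreading activation with a category rule ('syntactic traffic cop').
# Invented nodes and weights, purely illustrative.

links = {
    # concept -> related word nodes (semantic & phonological neighbours)
    "CAT-concept": {"cat": 1.0, "dog": 0.4, "mat": 0.3},
}
category = {"cat": "noun", "dog": "noun", "mat": "noun", "chase": "verb"}

activation = {word: 0.0 for word in category}

def spread(source):
    # activation spreads from a node to all related nodes
    for word, weight in links[source].items():
        activation[word] += weight

def select(wanted_category):
    # category rule: only nodes of the appropriate category compete
    candidates = [w for w in activation if category[w] == wanted_category]
    winner = max(candidates, key=lambda w: activation[w])
    activation[winner] = 0.0   # set to zero so it isn't selected again
    return winner

spread("CAT-concept")
first = select("noun")    # -> "cat", the most highly activated noun
second = select("noun")   # -> "dog": with "cat" zeroed, a competitor wins —
                          # roughly how exchange errors can arise
```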

 

EVIDENCE FOR SPREADING ACTIVATION THEORY

Dell (1986) – mixed error effect

A – speech errors that are semantically & phonemically related to the intended word

– e.g. saying let’s stop instead of let’s start: also shows flexible interaction of levels

– BUT difficult to work out how many incorrect words would be phonemically related to the correct word by chance

B – found syntactic cop rules meant errors belonged to the appropriate category (e.g. nouns replacing nouns)  

C – covers exchange errors (I must write a wife to my letter)

– wife has been used and set to zero and other selected & activated word used

 

Ferreira & Griffin (2003) – interference

– in a sentence completion task, priming the homophone ‘none’ interfered with naming a picture of a priest – subjects named it ‘nun’

– semantic similarity between priest & nun combined with the phonological identity of nun & none

 

Baars et al (1975) – nonwords

– errors were more likely to be made up of real words than non-words

– easier for words to be activated because they have representations in the lexicon

– e.g. ‘lewd rip’ was changed to ‘rude lip’; but ‘luke risk’ was not changed to ‘ruke list’

BUT:

A – theory says speech errors happen when wrong word more highly activated than right word

– Glaser (1992) found no increase in picture naming errors when a semantically related distractor word was shown

– so this interactive system is likely to produce many more errors than really found in speech

B – Goldrick (2006): interactive processes in the theory are more apparent in speech-error data than error-free data

 


 

LEVELT’S (2001) – WEAVER++ MODEL

WEAVER++ model, Levelt et al 1999

– a discrete 2 stage model with no interaction bet. levels

– cascaded processing isn’t allowed so activation of word form only starts after lemma selection

– concepts select lemmas: the activation of the target concept is enhanced and spreads to the lemma it dominates

– a phonological code is retrieved for each lemma and for the word as a whole

– phonological codes are spelled out as phonemes, which are strung together to make syllables

– the syllables aren’t stored in the lexicon: we create them as we go along

– the syllables are the input into phonetic encoding, which is an input to articulation
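The discrete staging can be contrasted with cascading in a short Python sketch (entries and names invented for illustration — not WEAVER++'s actual machinery): the phonological code is retrieved only for the single winning lemma, and syllables are composed on the fly rather than stored in the lexicon.

```python
# Toy discrete (non-cascaded) staging, WEAVER++-style. All entries invented.

LEMMAS = {"hammer": {"category": "noun"}, "mallet": {"category": "noun"}}
PHONEMES = {"hammer": ["h", "ae", "m", "er"], "mallet": ["m", "ae", "l", "it"]}

def produce(concept_candidates):
    # Stage 1: lemma selection resolves the competition FIRST
    lemma = concept_candidates[0]        # assume "hammer" wins selection
    syntax = LEMMAS[lemma]               # the lemma carries syntax, not sound
    # Stage 2: only NOW is the winner's phonological code retrieved; in a
    # cascaded model "mallet" would already have activated its phonemes too
    phonemes = PHONEMES[lemma]
    # syllables aren't stored in the lexicon – composed as we go along
    syllables = [phonemes[:2], phonemes[2:]]   # crude toy syllabification
    return lemma, syntax["category"], syllables
```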

 

SUPPORT FOR WEAVER++

(see also support for lemma)

Levelt (2001) – part word prime aids target naming (e.g. mer for hammer) so all parts of the word were retrieved in one go

Levelt et al (2006) – mental syllabary

– a store of highly practiced syllable gestures driving articulation

– evidence as found syllable frequency affects naming times

 

Harley & Brown (1998) – tip of tongue

– words with few similar sounding words more susceptible to TOT (e.g. apron vs pawn)

– occurs when link between semantic & phonological systems weak

 

AGAINST WEAVER++

Damian (2007) – priming

– presented a picture simultaneously with a phonologically related distractor picture (e.g. ball, wall)

– the distractors’ phonological names shouldn’t be activated & influence naming, as phonology comes later

– but naming of target pics faster: more consistent with spreading activation

 

Eysenck (2010) – review

  1. narrow focus on production of single words not planning & producing sentences
  2. evidence that there’s more interaction between processing levels than Weaver++
  3. speech errors suggest there’s much parallel processing in speech production
  4. As Harley says, data suggests lemmas aren’t necessary, just a split bet. semantic & phono levels

 


 

ANOMIA

Anomia: impaired ability in naming objects

WEAVER++ model explains in 2 ways:

  1. lemma selection problem, so naming errors would be similar to correct word
  2. word-form selection problem, so patients can’t find appropriate word

Kay & Ellis (1987) – patient EST

  • could select correct lemma (abstract word) but not phonological form
  • no impairment to semantic system but great problems finding words (like TOT)

BUT:

Lambon Ralph et al (2002) – no lemmas

  • assessed semantic, phonological and lemma functioning in aphasics
  • anomia predicted just by considering their general impairments in these areas
  • anomia findings can be explained without lemmas & no evidence for a lemma role

ALTHOUGH:

Ingles (2007) – patient MT

  • severe anomia but no apparent semantic or phonological impairment
  • maybe impairment mapping semantic reps onto phonological, even though both systems intact

SO support for 2 stages but maybe anomia simply because of semantic & phono impairment

 

AGRAMMATISM

Agrammatism: speech production lacks grammatical structure & word endings missing

Eysenck (2010) – evaluation

– research supports syntactic level where grammatical structure of sentence is formed & content words stage

– seem to have reduced resources for syntactic processing

– some evidence that different brain areas involved in diff aspects of syntactic processing

– but Harley (2008) – sentence construction deficits, grammatical element loss and impaired syntactic comprehension don’t always co-occur, which they should if agrammatism were a syndrome

 

JARGON APHASIA

Jargon aphasia: a brain-damage condition where speech is grammatically correct but there are problems finding words, sometimes producing made-up words

Neologisms: made-up words said by jargon aphasics

– neologism phonemes often resemble the target word, contain common consonants, or come from recently used words

Marshall (2006) – suggests neologisms are caused by a failure to monitor one’s own speech (syntactic cop?)

 


 

DISCARDED FROM MY NOTES:

 

GARRETT’S MODEL OF SPEECH PRODUCTION (1975-92)

Message level: the intention to say something

 

2 major stages of syntactic planning:

A) functional level: word order not established; semantic content of words specified. Content words selected (e.g. nouns, verbs, adjectives)

B) positional level: words are explicitly ordered; function words selected (e.g. the)

– serial model: only 1 thing is happening at each stage of processing & levels don’t interact

– there is a division between syntactic planning & lexical retrieval