Question | Answer |
What are the main differences between spoken & written word recognition? | Spoken: >Some aspects innate >Primary >Need to segment words >Extended in time but each word exists only briefly Written: >Learned (but likely reuses mechanisms for spoken lang recognition) >Secondary >Segmentation is done by white space on the page >Extended in space but words are permanent Suggests different models are needed to explain each |
Segmenting spoken words - 2 methods? | Pre-lexical (bottom-up): >Acoustic/phonological characteristics indicate a boundary >English - stressed 1st syllable - Cutler & Carter >Different for other languages >Cutler & Norris - word-spotting task, words embedded in nonsense syllables >BUT there must be an alternative for words that break the rule (eg guitar, yacht) - toy sketch below Lexical models (top-down): >Phonological representation (knowledge of what the word sounds like) >Recognise each word & predict the next boundary >Match sounds against a store (mental lexicon) >Marslen-Wilson & Welsh - but problems w/ short words? |
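A minimal sketch of the stress-based (Metrical Segmentation Strategy) idea from Cutler & Norris, assuming syllables arrive already marked as strong or weak; the stream, stress markings, and function name are invented for illustration:

```python
# Toy Metrical Segmentation Strategy: posit a word boundary before each
# strong (stressed) syllable. Stress marking is hand-coded for illustration.

def segment_by_stress(syllables):
    """syllables: list of (syllable, is_strong) pairs from a speech stream."""
    words, current = [], []
    for syll, strong in syllables:
        if strong and current:        # strong syllable -> start a new word
            words.append(current)
            current = []
        current.append(syll)
    if current:
        words.append(current)
    return words

# "party invites" -> strong-weak weak-strong stress pattern
stream = [("par", True), ("ty", False), ("in", False), ("vites", True)]
print(segment_by_stress(stream))
# [['par', 'ty', 'in'], ['vites']] -- the weak-initial word "invites" is
# mis-segmented, illustrating why rule-breaking words (guitar, yacht)
# need an alternative mechanism.
```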
Saffran et al (1996)? | >Babies - head-turning experiment >Artificial language >Identified words after ~2 mins of exposure >Suggests an implicit learning ability based on the co-occurrence of syllables & their statistical properties (sketch below) |
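A minimal sketch of segmentation by syllable transitional probabilities, in the spirit of Saffran et al (1996); the three artificial 'words' below are invented for illustration, not the original stimuli:

```python
# Within-word syllable transitions are highly predictable; transitions across
# word boundaries are not. Computing transitional probabilities exposes this.
import random
from collections import Counter

def transitional_probabilities(syllables):
    """Estimate P(next syllable | current syllable) for adjacent pairs."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# Artificial language: three made-up words spoken in random order, no pauses.
words = [["tu", "pi", "ro"], ["go", "la", "bu"], ["bi", "da", "ku"]]
stream = [syll for _ in range(200) for syll in random.choice(words)]

tps = transitional_probabilities(stream)
print(tps.get(("tu", "pi")))   # ~1.0  - within a word
print(tps.get(("ro", "go")))   # ~0.33 - across a word boundary
```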
Parallel activation in spoken word recognition? | >Simultaneous hearing & evaluation - content processed before the end of the word >Marslen-Wilson et al ('78 & '87) - Cohort model >Word onset - initial cohort >Word continues - list of matches reduces >Single word remains - uniqueness point (sketch of cohort reduction below) >Evidence from cross-modal priming & semantic similarity >Gaskell & Marslen-Wilson ('02) - the number of words simultaneously activated is limited; priming effects weaken as more candidates compete |
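A toy illustration of the cohort idea (not Marslen-Wilson's implementation): candidates are filtered as input arrives, and the uniqueness point is where only one candidate remains. The mini-lexicon is invented and uses spellings rather than phonemic transcriptions:

```python
# As each segment arrives, the cohort is whatever still matches the input.
LEXICON = ["trespass", "tress", "trestle", "trend", "tread", "treasure"]

def cohort_trace(word):
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        cohort = [w for w in LEXICON if w.startswith(prefix)]
        print(f"after '{prefix}': {cohort}")
        if len(cohort) == 1:
            print(f"uniqueness point at segment {i}")
            break

cohort_trace("trespass")
# after 't': all six candidates ... after 'tresp': ['trespass'] only
```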
Lexical competition - connectionist models | eg TRACE - McClelland & Elman ('86) >3 levels: 1) Phonetic features; 2) Phonemes; 3) Words >Activation is bottom-up - nodes representing phonemes are activated when a word is heard >Activation spreads from phonemes to the words containing them; the amount depends on closeness of match >eg phonemes /k/ /o/ /n/ /f/ activate both "confess" & "confetti" >Inhibition at the word level = the competitive part of the model (simplified sketch below) >Support - McQueen et al ('94) - segmentation via word spotting |
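A highly simplified sketch of TRACE-style competition at the word level only (the real model also has feature and phoneme layers); the two-word lexicon, pseudo-phonemic spellings, and all parameters are invented:

```python
# Word nodes receive bottom-up support proportional to how well they match the
# input heard so far, and active words inhibit each other.
WORDS = {"confess": "konfes", "confetti": "konfeti"}

def overlap(heard, word_phonemes):
    """Fraction of the input heard so far matched by the word's initial segments."""
    n = 0
    for h, p in zip(heard, word_phonemes):
        if h != p:
            break
        n += 1
    return n / len(heard)

def compete(heard, steps=20, excite=0.2, inhibit=0.1, decay=0.05):
    act = {w: 0.0 for w in WORDS}
    for _ in range(steps):
        new = {}
        for w, phon in WORDS.items():
            bottom_up = excite * overlap(heard, phon)
            inhibition = inhibit * sum(act[o] for o in WORDS if o != w)
            new[w] = max(0.0, act[w] + bottom_up - inhibition - decay * act[w])
        act = new
    return act

print(compete("konfe"))    # equal activation - "konfe" fits both words
print(compete("konfes"))   # "confess" pulls ahead while inhibiting "confetti"
```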
Visual word recognition - IAC model | McClelland & Rumelhart ('81) >Includes top-down (word-to-letter) feedback >3 levels: 1) visual features; 2) letters; 3) words >Explains the word superiority effect (letters recognised faster/more accurately when embedded in a word than in a nonword string) - toy illustration below >BUT the effect can be explained with no top-down feedback, using spelling representations in the mental lexicon (Grainger & Jacobs '94) >Whether top-down feedback is needed for word recognition remains contentious |
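A toy illustration of how word-to-letter feedback could produce the word superiority effect; the weights and mini-lexicon are invented, and this is not McClelland & Rumelhart's implementation:

```python
# A letter node gets extra activation when the surrounding string supports a
# word node, which feeds activation back down to its letters.
WORDS = {"work", "word", "fork"}

def letter_activation(string, bottom_up=0.5, feedback=0.25):
    """Activation of each letter node in `string` (all letters treated alike)."""
    act = bottom_up              # feature-level (bottom-up) support
    if string in WORDS:          # letter -> word -> letter feedback loop
        act += feedback
    return act

print(letter_activation("work"))   # 0.75: letters boosted by the 'work' word node
print(letter_activation("owrk"))   # 0.5: nonword string, no word-level feedback
```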
Interaction of spoken & written words - DRC model | Coltheart et al ('01) - Dual Route Cascaded model >Assembled phonology (rule-based, handles regular words) vs addressed phonology (lexical look-up, needed for irregular words) - sketch below >Speed of written word naming used to assess which route is used >Regular words named faster than irregular (but only for low-frequency words) >Regular words can use either route, irregular words only the lexical route - gives regular words the advantage >Phonology is important even in silent reading (e.g. rose, rows) (Van Orden, 1987) >Glushko ('79) - properties of neighbouring words affect naming - eg save, wave, gave vs have >BUT Jared ('02) - word naming times are affected much more by consistency than by regularity |
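A sketch of the dual-route idea, assuming a toy lexicon of stored pronunciations (addressed route) and a tiny set of grapheme-phoneme rules (assembled route); both are invented and far smaller than the real DRC model:

```python
# Addressed route: whole-word look-up. Assembled route: rule-based conversion.
LEXICON = {"yacht": "jot", "have": "hav", "save": "seIv"}   # stored pronunciations
GPC_RULES = {"s": "s", "h": "h", "v": "v", "a_e": "eI"}     # toy rules only

def addressed_route(word):
    return LEXICON.get(word)        # works for any known word, incl. irregulars

def assembled_route(word):
    # Toy rule application: handles only the regular C-a-C-e pattern.
    if len(word) == 4 and word[1] == "a" and word[3] == "e":
        return GPC_RULES[word[0]] + GPC_RULES["a_e"] + GPC_RULES[word[2]]
    return None

print(addressed_route("yacht"), assembled_route("yacht"))  # 'jot' None - needs lexical route
print(addressed_route("save"), assembled_route("save"))    # both routes agree: 'seIv'
print(addressed_route("have"), assembled_route("have"))    # 'hav' vs 'heIv' - routes conflict
```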
Eye movements in reading | >Active & coordinated >Saccades >Fixations - ~200ms (duration depends on word frequency - Rayner & Duffy) >O'Regan & Jacobs - words recognised faster at the optimal viewing position - mid-word >Shillcock et al - information content is greatest mid-word; less info at word endings >~10% of saccades go backwards (regressions) - more with low predictability/ambiguity >Text is available 'ahead' of fixation as well as 'behind' it - parallel processing? Flanker effects suggest yes; evidence from normal reading suggests no |
What is the mental lexicon? | A store of word knowledge: >Semantic relationships - how meanings are related >Semantic content - meaning >Phonological form >Orthographic form |
Morphology? | Concerns the size of the units stored in the lexicon >Morpheme - smallest meaningful sub-unit >Inflections - plural; tense (-ed, -ing) >Derivations - change word class (weak>weakly = adjective>adverb) >Irregular forms - mouse>mice >Words can appear to share morphemes yet be unrelated - depart, department 2 cognitive approaches: >Full-listing: all words stored in their entirety >Decompositional: Taft & Forster - words broken into morphemes & stored w/ links (affix-stripping sketch below) Support: Marslen-Wilson et al ('94) - words w/ the same morpheme prime each other strongly; faster processing of prime & target >Priming requires related meanings (cruel>cruelty), NOT just a shared surface form (casual>casualty) |
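A toy sketch of decompositional (affix-stripping) access in the spirit of Taft & Forster; the affix lists and stem lexicon are invented for illustration:

```python
# Strip a known prefix/suffix and look the remaining stem up in the lexicon.
PREFIXES = ["re", "un", "de"]
SUFFIXES = ["ly", "ness", "ed", "ing", "s"]
STEMS = {"weak", "happy", "play"}

def decompose(word):
    """Return (prefix, stem, suffix) if the word decomposes onto a known stem."""
    for pre in [""] + PREFIXES:
        for suf in [""] + SUFFIXES:
            if word.startswith(pre) and word.endswith(suf):
                stem = word[len(pre):len(word) - len(suf)]
                if stem in STEMS:
                    return pre, stem, suf
    return None

print(decompose("weakly"))     # ('', 'weak', 'ly')
print(decompose("replaying"))  # ('re', 'play', 'ing')
print(decompose("casualty"))   # None - no stored stem, despite looking affixed
```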
Accessing Meaning - Semantic Representations? | What type of info is accessed when a word is recognised? >Spreading activation models (eg Collins & Loftus) - connectionist: links = semantic relationships; related meanings activate each other (sketch below) Vs >Featural theories - meanings = sets of features stored in the lexicon (Both models are poorly specified & data can often be interpreted to fit either) |
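A minimal spreading-activation sketch over an invented semantic network (in the spirit of Collins & Loftus), with activation decaying across each link:

```python
# Activation spreads from a recognised word to its neighbours, weakening with
# distance; related words end up pre-activated (primed).
NETWORK = {
    "bread":   ["butter", "flour", "food"],
    "butter":  ["bread", "cream", "food"],
    "cheese":  ["cheddar", "cream", "food"],
    "cheddar": ["cheese"],
    "flour":   ["bread"],
    "cream":   ["butter", "cheese"],
    "food":    ["bread", "butter", "cheese"],
}

def spread(source, decay=0.5, depth=2):
    act = {source: 1.0}
    frontier = [source]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for neighbour in NETWORK.get(node, []):
                boost = act[node] * decay
                if boost > act.get(neighbour, 0.0):
                    act[neighbour] = boost
                    next_frontier.append(neighbour)
        frontier = next_frontier
    return act

print(spread("bread"))
# butter is strongly pre-activated (0.5) - the priming effect - while
# cheddar receives nothing within two links of "bread".
```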
Semantic priming? | Semantic priming experiments: e.g. bread>butter, cheddar>cheese >Associative links between prime & target >Lucas ('00) - non-associative pairs (e.g. horse>sheep) show much weaker priming, but the effect is robust >BUT there is evidence that shared visual & perceptual properties (e.g. size/shape) produce some degree of priming |
Semantic ambiguity? | Homonyms (e.g. bank - river vs money) Context allows selection of the appropriate meaning, but how? >Autonomous (all meanings accessed, then the contextually appropriate one selected) >Interactive (context rules out some meanings before they are fully activated) >Swinney ('79) supported the autonomous model - regardless of contextual bias, both possible meanings were primed; BUT with a delay (~1 sec) between stimuli, only the contextually appropriate meaning was primed >Lucas ('99) supported the interactive model - more priming for appropriate meanings than for inappropriate ones |
Processing sentences? | Sentences are typically novel Implies the process must be constructive, rather than just recognition Parsing = assigning each word its syntactic (grammatical) role so the sentence can be understood |
Incremental model of parsing? | Syntactic structure is built incrementally as each word is encountered Most evidence supports this model Tyler & Marslen-Wilson ('77) - ambiguous phrase "landing planes" Preceding context determines whether landing is an adjective or a verb Speed of response depends on prior context Supports incremental parsing Incompatible with a delayed model, which waits until the end of the sentence to parse and so would predict no effect of contextual appropriateness |
Garden path model of parsing? | "The horse raced past the barn fell" Frazier ('79) - influential >Incremental & autonomous >Misinterpretations constantly corrected >Listener might be 'led down the garden path' >Serial model - the parser builds only one potential parse at a time >Parsing is based on info from the sentence itself, not context (syntactic info only) |
Constraint-based models of parsing? | MacDonald et al ('94) >Parsing = parallel & interactive >Multiple interpretations evaluated in parallel (toy sketch below) >No autonomous component >Info from the lexicon is added during the evaluation process Evidence: >Multiple meanings of ambiguous words are briefly activated >Frequency of meanings (in the language) determines the activation of alternatives >Biasing contexts increase the activation of an alternative |
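A toy sketch of the constraint-based idea: alternative parses stay active in parallel, each scored by weighted soft constraints (meaning frequency, context fit, plausibility); the constraints, weights, and numbers are all invented for illustration:

```python
# Each alternative parse gets a weighted sum of soft-constraint scores; no
# single parse is committed to, unlike the serial garden-path model.
def evaluate(parses, weights=(0.5, 0.3, 0.2)):
    """parses: {name: (frequency, context_fit, plausibility)}, each score in 0..1.
    Returns the normalised activation of each alternative."""
    w_freq, w_ctx, w_plaus = weights
    raw = {name: w_freq * f + w_ctx * c + w_plaus * p
           for name, (f, c, p) in parses.items()}
    total = sum(raw.values())
    return {name: score / total for name, score in raw.items()}

# "The defendant examined..." - main-verb vs reduced-relative reading.
parses = {
    "main verb (the defendant examined something)": (0.8, 0.5, 0.2),
    "reduced relative (examined by the lawyer)":    (0.2, 0.5, 0.8),
}
print(evaluate(parses))   # both readings stay active, ~0.59 vs ~0.41
# As later words ("...by the lawyer") raise the context fit of the reduced
# relative, its activation rises without any reanalysis step.
```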
Is parsing autonomous? | For: Ferreira & Clifton ('86) - eye-tracking re the verb in "The defendant examined..." - garden path effect found even when semantics should have disambiguated Against: Trueswell et al - garden path sentences can be affected by meaning: F&C's contexts were not as constraining as they assumed Parsing can be affected by the semantic plausibility of the various parses of the sentence |
Constraints on parsing? | Ambiguity is reduced by intonation - listeners use this (Warren) Tanenhaus et al ('95) "Put the apple on the towel in the box" - environmental (visual) info used to resolve ambiguity & reduce/remove the garden path effect |
Starr & Rayner ('01) - key points? | >Eye movements in reading are claimed to reflect high-level cognitive processes >Studies showing uptake of parafoveal info (to the right of fixation) support models of parallel processing in reading >Challenges serial spotlight models of attention in reading >Frequency & predictability effects on fixation times suggest top-down influences of lexical & comprehension processes >Supports models of integrated & parallel processing (MacDonald et al '94) |