Semantic constraint, reading control, and the granularity of form-based expectations during semantic processing: Evidence from ERPs
We investigated the roles that semantic constraint and participant control over stimulus presentation play in early stages of visual word recognition. Specifically, we tested how the presence of a highly constraining sentential context influences the expectations that readers form during incremental sentence processing. Further, we tested whether allowing participants to self-pace the experiment affected early sensory processing of written stimuli. Event-related potentials (ERPs) were recorded in three experiments. Participants read sentences containing a target word from one of four conditions: 1) the target, spelled as expected; 2) the target with two internal characters transposed; 3) a nonword differing from the target by one vowel; or 4) an illegal consonant string. In Experiment 1, sentences were minimally constraining up to the target word (average cloze at the target word: 0.01); in Experiments 2 and 3, sentences were highly constraining (average cloze at the target word: 0.93). In Experiments 1 and 2, sentences were presented using rapid serial visual presentation (RSVP). In Experiment 3, participants saw the same sentences used in Experiment 2 but were allowed to self-pace the presentation of each word in every trial. Across Experiments 1 and 2, early neural sensitivity emerged only for the illegal consonant strings, and only when they appeared in highly constraining contexts (i.e., in Experiment 2). In Experiment 3, graded N170 effects were observed for all target words containing unexpected visual information. P600 modulations were observed in all three experiments, indexing the difficulty of integrating unexpected orthography during downstream processing. These results support a nuanced view of early visual processing, namely that visual analysis becomes more fine-grained as readers gain greater control over how they read.