My work is primarily concerned with how humans process incoming information: its perception, comprehension, and encoding into memory. Most of the work in my lab focuses on the perception of spoken language: How do humans decode the complex acoustic signal and recognize spoken words?

These issues can be approached in many ways, at several levels. The work in our lab has used many different methodologies, examining the problem from both a "bottom-up" and a "top-down" perspective. We have maintained an ongoing research effort aimed at clarifying the early types of representations used for the speech signal, and have identified at least three qualitatively different levels of representation. The most concentrated effort in our lab in recent years has been on the recognition of spoken words. Within this domain, two recurring interests have been: (1) the organization of the word recognition system -- in particular, whether there are top-down influences from the lexical level to lower, perhaps phonemic, representations; and (2) the role of time in perceptual processing -- how the activation levels of representations at various levels rise and fall over time.

To study these lexical-level issues, we have used many techniques, including phoneme monitoring, dichotic migration, lexical decision, syllable monitoring, and phonemic restoration. The restoration work builds on Richard Warren's (1970) discovery that utterances sound intact even when parts of them have been deleted and replaced by an extraneous sound. We have used this phenomenon to study the knowledge sources that the perceptual process draws on to restore missing parts of the signal.

We also have projects examining attentional principles in nonspeech domains, in both the visual and auditory modalities. One series of experiments in the visual domain examines a phenomenon called inhibition of return. This effort reflects the general approach taken here: to study any complex stimulus domain, it is important to study many cognitive processes, including attention, perception, and the encoding of information in memory.

In the auditory modality, one line of nonspeech research deals with change deafness. A number of studies have demonstrated a phenomenon called "change blindness," in which people are surprisingly poor at noticing rather large changes in a visual scene. Our research on change deafness tests whether people are similarly poor at noticing when the set of sounds they hear changes.