What is the delay between the time your brain takes to understand a word you read and when you speak the word aloud?

asked 2015-06-26 14:31:53 -0400 by jswift

We're developing a program to collect electrical information from the surface of the brain and to locate the regions associated with speech generation. To do this, we display the text for the subject to read on a scrolling display at a defined rate and have them read the text aloud. I am concerned that activation in the language-comprehension regions of the brain (the parts that read and interpret the words) will mix with activity in the language-production areas and cloud the signal.
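
For concreteness, here is a rough sketch of the kind of presentation loop I have in mind (the library choice, PsychoPy, and all timing and layout values below are illustrative placeholders, not our actual code):

    # Sketch of a fixed-rate scrolling text display (PsychoPy assumed).
    # Speed, window size, and text are placeholders.
    from psychopy import visual, core

    win = visual.Window(size=(1024, 768), units='pix', color='black')
    text = visual.TextStim(win, text='The quick brown fox jumps',
                           color='white', height=40, pos=(600, 0))

    clock = core.Clock()
    speed = 150          # pixels per second: the "defined rate"
    log = []             # (time, x-position) pairs for later signal alignment

    while text.pos[0] > -600:                    # scroll right to left
        text.pos = (600 - speed * clock.getTime(), 0)
        text.draw()
        win.flip()                               # one update per screen refresh
        log.append((clock.getTime(), text.pos[0]))

    win.close()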

For example, if a subject is reading the sentence "The quick brown fox jumps" aloud, what is the time delay between reading the word "fox" and speaking it aloud? If the word "fox" is being read and understood while the word "quick" is being spoken aloud (because the brain is capable of parallel processing), then the net signal we acquire would include both the brain activity responsible for interpreting "fox" and the activity generating speech for "quick." If we know the time delay between reading and speaking, maybe we can isolate and remove the activity associated with reading and focus on the brain activity responsible for speech generation alone.
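
One way I could imagine estimating that delay, assuming we log per-word reading onsets (e.g. from the display or an eyetracker) and per-word speech onsets (from the audio): cross-correlate the two event trains and take the peak. A sketch with made-up onset times:

    # Sketch: estimate the reading-to-speech lag by cross-correlating
    # binned event trains of word-reading and word-speaking onsets.
    # All onset times here are made-up placeholders.
    import numpy as np

    read_onsets = np.array([0.00, 0.45, 0.90, 1.35, 1.80])   # seconds
    speak_onsets = np.array([0.60, 1.05, 1.50, 1.95, 2.40])  # seconds

    dt = 0.01                                   # 10 ms bins
    n = int((max(read_onsets.max(), speak_onsets.max()) + 1.0) / dt)

    read_train = np.zeros(n)
    speak_train = np.zeros(n)
    read_train[np.round(read_onsets / dt).astype(int)] = 1
    speak_train[np.round(speak_onsets / dt).astype(int)] = 1

    # Positive lag = speech follows reading
    xcorr = np.correlate(speak_train, read_train, mode='full')
    lags = np.arange(-n + 1, n) * dt
    print('estimated reading-to-speech lag: %.2f s' % lags[xcorr.argmax()])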

Any recommendations for how to improve the paradigm would also be very much appreciated!

Thanks, James

1 answer

answered 2015-07-03 23:17:14 -0400 by usagi5886

What is the delay between the time your brain takes to understand a word you read and when you speak the word aloud?

This isn't my area of expertise, but I'm pretty sure there is no magic number. The extent of this delay will likely vary based on numerous factors, most notably the word's frequency-of-occurrence but also things like the location of the word within the syntactic and/or prosodic structure of the sentence.

we display the text for the subject to read on a scrolling display at a defined rate and have them read the text aloud.

Have you tried eyetracking? That strikes me as a more natural choice of method. Rather than creating an artificial environment that forces people to read at a certain pace, with eyetracking you could simply display the words on the screen and have subjects read them normally. The location and timing of their saccades would give you a direct measure of exactly what they were reading and when, which it sounds like the present method lacks.
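
For instance, turning gaze data into per-word reading onsets can be as simple as mapping gaze x-coordinates onto word bounding boxes. A toy sketch in Python (the timestamps, coordinates, and box edges are all made up):

    # Toy sketch: derive per-word reading onsets from eyetracking samples
    # by mapping gaze x-coordinates onto word bounding boxes.
    # All numbers below are made-up placeholders.

    # (timestamp_s, gaze_x_px) samples, e.g. from a high-rate tracker
    samples = [(0.002, 40), (0.350, 42), (0.352, 180), (0.700, 185),
               (0.702, 330), (1.050, 332), (1.052, 470), (1.400, 610)]

    # left/right pixel edges of each word on screen
    word_boxes = {'The': (0, 90), 'quick': (100, 260), 'brown': (270, 420),
                  'fox': (430, 540), 'jumps': (550, 700)}

    first_fixation = {}            # word -> time of first gaze sample on it
    for t, x in samples:
        for word, (left, right) in word_boxes.items():
            if left <= x <= right and word not in first_fixation:
                first_fixation[word] = t

    for word, t in first_fixation.items():
        print('%-6s first read at %.3f s' % (word, t))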

If we know the time delay between reading and speaking, maybe we can isolate and remove the activity associated with reading and focus on the brain activity responsible for speech generation alone.

I can think of two potential solutions. First, you could collect parallel data from the subjects reading the exact same sentences but not producing them. Depending on the kinds of analyses you have planned, you might be able to treat the reading-only condition as the baseline and 'subtract it out' from the reading+speaking condition. Of course, this would involve several strong assumptions (e.g. that the same thing is happening in the brain at each time-point across the utterance in the two conditions), which are most likely false.
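
In its simplest form, that subtraction is just the time-locked average of the reading-only trials removed from each reading+speaking trial. A numpy sketch with made-up arrays (real data would of course need alignment and preprocessing first):

    # Sketch of the 'subtract it out' idea: average the reading-only epochs
    # and remove that average from each reading+speaking epoch.
    # Array shapes and the random data are placeholders.
    import numpy as np

    n_trials, n_channels, n_samples = 40, 64, 1000
    rng = np.random.default_rng(0)

    reading_only = rng.standard_normal((n_trials, n_channels, n_samples))
    reading_speaking = rng.standard_normal((n_trials, n_channels, n_samples))

    # Time-locked average of the reading-only condition (channels x samples)
    reading_baseline = reading_only.mean(axis=0)

    # What remains is (under the strong assumptions mentioned above)
    # the speech-production component.
    speech_component = reading_speaking - reading_baseline
    print(speech_component.shape)   # (40, 64, 1000)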

A second (perhaps preferable) option would be to have speakers hold the sentences they are to produce in short-term memory. If the sentences are short enough (like your example "The quick brown fox jumps"), this shouldn't be a problem. Just display the sentence on the screen, have them read it, go back to a blank screen for 500 ms or so, and then display a red circle on the screen to indicate that the audio recording has started. Since the reading will have long since finished by the time they start speaking, this method would remove the brain activity associated with reading from the data, which would be conducive to the goal you described of "locating the regions associated with speech generation". Of course, there may be some additional brain activity added by holding things in memory, but I'd imagine that has a well-documented signature in the signal that could be factored out.
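
In presentation code, one trial of that paradigm could look something like the sketch below (PsychoPy is assumed, and the durations other than the 500 ms gap are placeholders):

    # Sketch of one delayed-production trial: show sentence, blank screen,
    # then a red circle as the 'speak now' cue. PsychoPy is assumed.
    from psychopy import visual, core

    win = visual.Window(size=(1024, 768), units='pix', color='black')
    sentence = visual.TextStim(win, text='The quick brown fox jumps',
                               color='white', height=40)
    cue = visual.Circle(win, radius=30, fillColor='red', lineColor='red')

    sentence.draw()
    win.flip()
    core.wait(2.0)        # reading phase (placeholder duration)

    win.flip()            # blank screen
    core.wait(0.5)        # the 500 ms gap from the description

    cue.draw()
    win.flip()            # red circle: recording starts, subject speaks
    core.wait(3.0)        # speaking window (placeholder duration)

    win.close()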

Just a few thoughts I had. I hope this helps!
