Intro: It’s been demonstrated (and is essentially common knowledge in the audio world) that humans suppress early reflections, as continuously evidenced by the fact that we don’t hear echoes all day long everywhere we go. It’s the law of the first wavefront / precedence effect / Haas effect, etc. Drs. Toole and Olive have demonstrated a very similar phenomenon with tonality: Preferred Loudspeaker Measurements

An accomplishable experiment for a musician: you’d recognize the sound of your guitar in a living room or in a gymnasium, even though the acoustics in the gym are essentially awful. You’d be able to point out its location as well. Neither of those internal ‘filters’ is exactly cut and dried, however, and Dr. Olive hints at that in the previously linked blog post, and Salmi and Kates show it here: AES Paper

In a small room this tonal filter essentially goes haywire around 300Hz: Hearing Beyond Haas, Dr. Olive's iPod Demo, Dr. Olive's Data

Localization below 80Hz (of course, play your sub alone and you’ll hear that it’s spitting out a lot of stuff above 80Hz, and some more recent experiments suggest that the cutoff is even deeper for lateral displacement). When in doubt, go overboard IMO. Symmetrical placement of a great number of subs isn’t a bad idea either. ;)

This is 1 or 2 deep rabbit holes, but that's the essence of what we hear in a small room.
Now here’s the Audyssey (and/or ‘room correction’) issue:
It may be less intuitive to a musician or Joe Sixpack audio nut than to an audio engineer that two noises with identical frequency content but different time content can sound different. For proof, just graph the frequency response of a swept sine wave and of a white noise signal in REW (or your favorite acoustics software). How can the graphs look the same when the signals sound so completely different? This is a very useful question to understand for anyone who wants to make approximately psychoacoustically correct measurements. Welcome to the (Fast) Fourier Transform, or FFT, and its two views of a signal: the time domain and the frequency domain. The FFT allows you to look at sound in either. When doing this conversion, the number of viewable data points in the frequency domain is tied to the amount of time used in the data: frequency resolution is one over the length of the time window. So in a room, the collected data will contain more and more reflections as the microphone gets farther from the sound source. If you later try to gate (narrow the time window of) the displayed measurement so you can look at the direct sound (and get a more psychoacoustically correct spectral response above the modal region), you will end up with very poor resolution, too poor to EQ properly. You won't know what the speaker is really doing. Move the mic too close to a multi-driver speaker instead and you’ll introduce errors related to driver integration. IOW, there is no great way to correct the ‘room’ above the fuzzy 300Hz mark. There's no substitute for good loudspeakers designed to measure well in anechoic conditions.
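The "identical frequency content, different time content" point is easy to demonstrate numerically. This is a minimal numpy sketch (the sample rate, length, and noise signal are arbitrary placeholders, not anything from REW): take a signal, keep every FFT bin's magnitude, and scramble every bin's phase. The magnitude spectrum, which is all a standard frequency-response graph shows, is unchanged, but the waveform is completely different.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 48_000, 4096  # arbitrary sample rate and length

# Any test signal will do: here, a burst of noise.
x = rng.standard_normal(n)

# Keep each bin's magnitude, randomize each bin's phase.
spectrum = np.fft.rfft(x)
random_phase = np.exp(1j * rng.uniform(0, 2 * np.pi, spectrum.size))
random_phase[0] = random_phase[-1] = 1.0  # DC/Nyquist bins must stay real
y = np.fft.irfft(np.abs(spectrum) * random_phase, n)

# The magnitude spectra match to numerical precision...
assert np.allclose(np.abs(np.fft.rfft(y)), np.abs(spectrum))
# ...but the waveforms (the time content) are essentially unrelated.
print(np.corrcoef(x, y)[0, 1])  # typically near zero
```

Played back, `x` and `y` would be audibly different signals, yet their frequency-response graphs are identical. That's the blind spot of a magnitude-only measurement.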
It's not the end of the story, though. There are things you can do to improve your situation: Gating Loudspeaker Measurements. Then EQ the full listening-position frequency measurement below 300Hz.
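The gating trade-off described above can be sketched in a few lines of numpy. This is a toy impulse response standing in for a real measurement (the direct-sound and reflection timings are made up for illustration), showing that a 5 ms gate removes the reflection but caps frequency resolution at 1/0.005 s = 200Hz, which is why gated data can't guide EQ down in the modal region:

```python
import numpy as np

fs = 48_000        # sample rate (Hz), assumed
n = fs             # one second of impulse response

# Toy impulse response: a direct sound plus one strong
# reflection arriving 8 ms later (a stand-in for a real
# measurement exported from REW or similar).
ir = np.zeros(n)
ir[0] = 1.0
ir[int(0.008 * fs)] = 0.6

# Gate: keep only the first 5 ms, which excludes the reflection.
gate_ms = 5
gated = ir[: int(fs * gate_ms / 1000)]

# The price of the gate: FFT bin spacing = 1 / gate length.
print(f"ungated resolution: {fs / n:.1f} Hz")           # 1.0 Hz
print(f"gated resolution:   {fs / gated.size:.0f} Hz")  # 200 Hz
```

So the gated view is trustworthy above a few hundred Hz, and the full (ungated) listening-position measurement is what you EQ against below 300Hz, exactly the split recommended above.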