this is a question for any of the game sound designers/engineers.
on the rendering side of things, games now use HDR, where colors brighter than your monitor can currently display are rendered internally and then tonemapped down to the screen's range... simulating effects like the eye's chromatic adaptation.
similarly with audio, your speakers can only output a certain range of dB. We can't have a jet engine at its real volume coming out of our speakers! That limits the range. (I'll use 0-1 here for simplicity.)
So, in real life, a rifle may have a level of 10 and a human voice a level of 1, which, when scaled down to the speakers' range, should come out as 1 and 0.1 respectively.
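To make the idea concrete, here's a minimal sketch of what I mean (all names and numbers are just made up for illustration): normalize every source's real-world level against the loudest source currently in the scene, the way a tonemapper normalizes against scene brightness.

```python
def tonemap_levels(levels):
    """Map real-world loudness values to the speaker's 0-1 range
    by scaling everything against the loudest active source."""
    peak = max(levels.values())
    return {name: lvl / peak for name, lvl in levels.items()}

scene = {"rifle": 10.0, "voice": 1.0}
print(tonemap_levels(scene))  # rifle -> 1.0, voice -> 0.1
```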
I've noticed no game has done this yet... each audio sample just plays somewhere in this 0-1 range on its own. So a commander shouting at you never gets quieter no matter how many explosions or jets are flying over him, because his voice level hasn't been adjusted relative to the huge range of dB levels that different objects project.
So, I was just curious whether audio engines in games are going to start respecting this?
Sometimes I feel audio is grossly overlooked even though it's a huge part of immersion. Making a person's voice quieter as a huge truck rolls by (mapping the huge range of dB levels back down to 0-1) really helps you get a feel for the loudness of something.
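That truck example could be sketched like this (again, hypothetical names and a made-up adaptation rate): smooth the normalization peak over time so quiet sources fade down while a loud one plays and recover afterwards, the audio analogue of the eye slowly adapting in HDR rendering.

```python
def adapt_peak(current_peak, target_peak, dt, rate=2.0):
    """Move the adaptation level toward the loudest source's level,
    at `rate` per second (an assumed tuning constant)."""
    alpha = min(1.0, rate * dt)
    return current_peak + (target_peak - current_peak) * alpha

peak = 1.0   # scene was quiet, adapted to the voice alone
voice, truck = 1.0, 8.0
for _ in range(10):                       # truck rolls by, 0.1 s per step
    peak = adapt_peak(peak, max(voice, truck), dt=0.1)
    print(voice / peak)                   # voice gain sinks toward 1/8
```

The gain printed each step drops smoothly toward voice/truck = 0.125, and once the truck is gone the same update (with the target back at 1.0) would bring the voice back up.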