MARK HAMMER
"You have two ears, and both of them are designed to receive all frequencies. That frequency content comes from many simultaneous sources, and since few sound sources in the real world produce ONLY pure tones without harmonics, your brain has the unenviable (and constant) task of sorting the harmonics into the ones that (likely) came along with this pure tone, that pure tone, and that sort of modulated tone. Imagine you had an immense crowd photo (billboard sized) of people without any facial features, and had a second pile of little photos of faces (eyes, brows, noses and mouths), and you had to sort the faces and allocate them to the empty space they went with. That's pretty much what your brain has to do with harmonic content. In some respects, your brain can sort these harmonics into different piles based on directionality (i.e., if it came from that direction then it goes with the other sounds coming from that direction), but often the principal rule of thumb it has to work with is the relative simultaneity of harmonics and fundamentals.
And therein lies the problem. Not only do sound waves of different frequencies move through the air differently, but tweeters and woofers accelerate differently, and may sit at staggered distances from the listener. Even well before the speaker, various aspects of the signal path can introduce "group delay", a staggering of broad swaths of frequency content relative to each other, caused by an assortment of factors, including capacitors in the signal path (this is why audiophiles prefer not to have caps in series with the signal).
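To put a rough number on that last point: a single series coupling cap driving a load resistor forms a first-order high-pass, and the group delay of that stage is largest at the bottom of the spectrum. Here's a minimal sketch of the math (the part values are hypothetical, picked just to land the corner around 34 Hz) showing how much the low end lags the top end through one such stage:

```python
import numpy as np
from scipy import signal

# A series coupling cap into a load resistor is a first-order high-pass:
# H(s) = s / (s + wc), with wc = 1/(R*C). The part values are assumed.
R = 47e3            # hypothetical 47k load
C = 100e-9          # hypothetical 100 nF coupling cap
wc = 1.0 / (R * C)  # corner at roughly 34 Hz

b, a = [1.0, 0.0], [1.0, wc]             # analog H(s) = s / (s + wc)
w = 2 * np.pi * np.logspace(1, 4, 400)   # 10 Hz .. 10 kHz
_, h = signal.freqs(b, a, worN=w)

# Group delay = -d(phase)/d(omega). For this filter it works out to
# wc / (wc^2 + w^2): biggest at the bottom, vanishing at the top,
# i.e. bass trails treble through every such stage.
tau = -np.gradient(np.unwrap(np.angle(h)), w)

for f_hz in (20, 100, 1000):
    i = np.argmin(np.abs(w - 2 * np.pi * f_hz))
    print(f"{f_hz:5d} Hz: group delay ~ {tau[i] * 1e3:.3f} ms")
```

With those values, 20 Hz lags by a few milliseconds while 1 kHz passes essentially undelayed, and every cap-coupled stage in the chain piles a bit more of that stagger on top.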
Okay, let's take that little pile of faces and snip them up, cutting diagonally across so that the mouth, nose and eyes get split in ways that make it even more difficult to pair them up and figure out what face-space they belong to. That's what group delay does. As a result, it makes for exceedingly hard work for the brain when there are multiple sound sources, as in a band or orchestra, or something like a cocktail party scene in a film soundtrack. All that harmonic content flying past you has to be appropriately sorted and assigned, and that's a phenomenal amount of mental work.
Boy, it would help a lot if there were some way to overcome at least some of that cumulative group delay. Enter the Sonic Maximizer and "the BBE process". The general basis of the process is to "correct" the phase relationships between fundamentals and harmonics, such that the fundamentals seem more obviously associated with their harmonics, and the resulting sound is more crisply defined.
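The actual BBE circuit is proprietary, so take this only as a toy sketch of the general idea as I understand it: split the signal into bands, delay the lower bands a hair so the harmonics aren't left trailing their fundamentals at the speaker, and lift the top end a touch. The crossover points, delay times, and gain below are illustrative guesses, not BBE's values:

```python
import numpy as np
from scipy import signal

def maximizer_sketch(x, fs, lo_hz=150.0, hi_hz=2400.0,
                     lo_delay_ms=2.5, mid_delay_ms=0.5, hi_gain_db=3.0):
    """Band-split 'phase restoration' toy: let the highs out first by
    delaying the lower bands, then lift the top a touch. Every number
    here is an illustrative guess, not BBE's (proprietary) values."""
    sos_lo = signal.butter(2, lo_hz, 'lowpass', fs=fs, output='sos')
    sos_hi = signal.butter(2, hi_hz, 'highpass', fs=fs, output='sos')
    lo = signal.sosfilt(sos_lo, x)
    hi = signal.sosfilt(sos_hi, x)
    mid = x - lo - hi                  # crude remainder as the mid band

    def delayed(band, ms):             # simple integer-sample delay
        n = int(round(ms * 1e-3 * fs))
        return np.concatenate([np.zeros(n), band[:len(band) - n]])

    return (delayed(lo, lo_delay_ms) + delayed(mid, mid_delay_ms)
            + hi * 10 ** (hi_gain_db / 20.0))

# One harmonically rich test tone: a 110 Hz fundamental plus a harmonic.
fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 110 * t) + 0.3 * np.sin(2 * np.pi * 2200 * t)
y = maximizer_sketch(x, fs)
```

Note that the "delays" here are fixed per band, which is exactly the limitation discussed below: one stagger setting has to serve everything passing through the box.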
I can't say much with authority about the inner workings of the process, except that you should think of it as a more complex and targeted phase shifter. Will it provide improvement in all cases? Given the problem it tries to solve, no. It will prove more satisfying when there are multiple concurrent sounds, and when those sounds have more harmonic content. Again, with more work for the brain to do in sorting harmonics and fundamentals, whatever makes that workload lighter will prove satisfying aurally. Can such a unit "fix" the problem? Well, since you can't know up front how much "therapeutic stagger" any given band needs in order to restore coherence to the entire mixed signal, you can probably expect more success applying it to individual instruments than attempting to find a single stagger that works equally well for all the signal sources being mixed to the PA. That's not to say you can't use it on mixed signals and expect any improvement. Rather, there is individually-tailored improvement and group-compromise improvement."