Why is there such a character change in a song when going from 16-bit to 8-bit? I understand the dynamic range goes down, and that harmonics are brought out by the square-wave-like staircase shape that appears when fewer quantization levels are available, but why is the difference so huge? Perhaps the relationship is exponential.
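The "perhaps it's exponential" hunch can be checked numerically: each bit halves the quantization step, so the error amplitude shrinks exponentially with bit depth (about 6 dB of signal-to-noise per bit). A minimal pure-Python sketch, where `quantize` and `snr_db` are illustrative helpers, not any library's API:

```python
import math

def quantize(x, bits):
    # Round each sample to the nearest level representable at this bit depth
    levels = 2 ** (bits - 1)
    return [round(s * levels) / levels for s in x]

def snr_db(clean, quantized):
    # Ratio of signal power to quantization-error power, in dB
    sig = sum(s * s for s in clean)
    err = sum((s - q) ** 2 for s, q in zip(clean, quantized))
    return 10 * math.log10(sig / err)

# One second of a full-scale 440 Hz sine at 44.1 kHz
sr = 44100
sine = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr)]

snr16 = snr_db(sine, quantize(sine, 16))
snr8 = snr_db(sine, quantize(sine, 8))
print(snr16, snr8)  # ~98 dB vs ~50 dB: about 6 dB lost per bit removed
```

So dropping from 16 to 8 bits raises the error floor by roughly 48 dB, which is why the quantization distortion goes from inaudible to dominating the character of the sound.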
Why does linear-phase (zero-phase) EQ create pre-ringing? Isn't the point of linear-phase EQ to have no phase change? Is it perhaps a latency thing?
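It is indeed tied to latency: a linear-phase filter's impulse response is symmetric around its center tap, so half of its ringing happens before the peak, and once the filter's delay is compensated that energy lands before the event in time. A minimal sketch of a windowed-sinc linear-phase lowpass (the function name and parameters are illustrative assumptions):

```python
import math

def linear_phase_lowpass(cutoff, num_taps):
    # Windowed-sinc FIR: symmetric around its center tap, hence linear phase
    center = (num_taps - 1) / 2
    taps = []
    for n in range(num_taps):
        x = n - center
        h = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        h *= 0.54 - 0.46 * math.cos(2 * math.pi * n / (num_taps - 1))  # Hamming window
        taps.append(h)
    return taps

taps = linear_phase_lowpass(cutoff=0.1, num_taps=101)

# The filter's response to an impulse IS its tap sequence: energy appears
# well before the center tap (index 50). After latency compensation that
# energy sits before the input event in time -- the pre-ring.
before = sum(t * t for t in taps[:50])
after = sum(t * t for t in taps[51:])
print(before, after)  # equal energy on both sides of the peak
```

A minimum-phase filter with the same magnitude response pushes all that ringing after the peak instead, which is why it avoids pre-ringing at the cost of phase shift.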
I heard chorus effects help a pitched-up (chipmunk) or tempo-changed vocal sound less destroyed. Is this just because the effect masks the sound in general, or is there a deeper reason? Since chorus is a pitch-based effect, there could be something to it.
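There may be something to the pitch angle: a chorus is a copy of the signal read through a slowly LFO-modulated delay line, and the changing delay continuously detunes that copy, smearing the spectrum slightly. A minimal sketch, assuming a single wet voice with linear interpolation (the `chorus` name and parameter defaults are illustrative, not any plugin's API):

```python
import math

def chorus(x, sr, depth_ms=3.0, rate_hz=0.8, mix=0.5):
    # Dry signal plus a copy read through an LFO-modulated delay line.
    # The delay sweep detunes the copy, which is what makes the effect
    # inherently pitch-based.
    base = depth_ms / 1000 * sr  # maximum delay, in samples
    out = []
    for n, s in enumerate(x):
        delay = base * (1 + math.sin(2 * math.pi * rate_hz * n / sr)) / 2 + 1
        pos = n - delay
        i = math.floor(pos)
        frac = pos - i
        wet = 0.0
        if i >= 0 and i + 1 < len(x):
            # Linear interpolation between the two nearest delayed samples
            wet = x[i] * (1 - frac) + x[i + 1] * frac
        out.append((1 - mix) * s + mix * wet)
    return out

sr = 44100
tone = [math.sin(2 * math.pi * 220 * n / sr) for n in range(sr // 10)]
wet = chorus(tone, sr)
```

That constant micro-detuning plausibly blurs the fixed, unnatural formant shift of a chipmunked vocal in the same way it blurs tuning differences between doubled takes, though plain masking surely contributes too.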