Though perhaps not a proper dev question, I thought you guys would be most likely to know.
I have an idea about implementing what is often referred to as 'room correction', i.e. compensation for the coloration of sound caused by the acoustic properties of the listening space.
My thoughts are as follows:
I use a 5.1 (or 6.1, or 4.1) surround setup which is horribly skewed, with the listener far off center. The space is a vehicle cabin with plenty of reflecting (e.g. glass) as well as absorbing (e.g. seats) surfaces. I want to be able to compensate both in time, to fix the skewed geometry, and in frequency, to fix the very uneven response.
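For the time part, the per-channel delays follow directly from the speaker-to-listener distances: delay the nearer speakers so everything arrives together with the farthest one. A minimal Python sketch (the distances and sample rate below are made-up examples, not measurements from my cabin):

```python
SPEED_OF_SOUND = 343.0  # m/s, roughly, at room temperature

def alignment_delays(distances_m, sample_rate=48000):
    """Per-channel delays in samples so all speakers arrive together.

    The farthest speaker gets zero delay; nearer speakers are delayed
    by the travel-time difference converted to samples.
    """
    farthest = max(distances_m)
    return [round((farthest - d) / SPEED_OF_SOUND * sample_rate)
            for d in distances_m]
```

For example, with one speaker at 1.0 m and one at 0.5 m, `alignment_delays([1.0, 0.5])` delays the nearer speaker by about 70 samples at 48 kHz.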
I have an RME HDSP 9652 card in a Linux box, and will soon have a reasonably decent NVIDIA card in there too.
Currently I use existing LADSPA plugins hosted directly in ALSA (i.e. through proper hacking of the asound.conf file) to do some EQing, so I thought: why not use csLADSPA to roll my own? Ideally, though, I'd like to be able to put a measurement microphone in the listening position, measure impulse responses for all channels, and then convolve each channel with the inverted (and duly tinkered with, to avoid stuff blowing up) IR to achieve a reasonably flat response. In addition there would be some use of delay to compensate for the different arrival times from the different speakers. I was thinking of running the convolution through the CUDA opcodes to avoid loading the CPU, which does other audio work simultaneously. An added bonus would be some GUI to use while tuning everything.
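By "duly tinkered to avoid stuff blowing up" I mean the usual regularization trick: invert the measured response per frequency bin, but add a small constant so bins where the response is near zero get limited gain instead of exploding. A rough pure-Python sketch of the idea (the naive O(N²) DFT is just for illustration; the real thing would be an FFT-based convolution, e.g. on the GPU, and the eps value is a placeholder to be tuned):

```python
import cmath

def dft(x):
    """Naive DFT, for illustration only -- a real implementation uses an FFT."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Naive inverse DFT."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def regularized_inverse(ir, eps=1e-4):
    """Invert a measured impulse response per frequency bin:

        Hinv(k) = conj(H(k)) / (|H(k)|^2 + eps)

    so bins where |H| is near zero are limited instead of blowing up.
    """
    H = dft(ir)
    Hinv = [h.conjugate() / (abs(h) ** 2 + eps) for h in H]
    return [c.real for c in idft(Hinv)]
```

Convolving the measured IR with this inverse (circularly, in this toy setting) gives back something close to a unit impulse, i.e. a roughly flat corrected response, with the deviation controlled by eps.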
Is this realistic?
Could CUDA opcodes be used in conjunction with csLADSPA?
Could perhaps the websocket opcode be a useful way to add a GUI with some sliders etc?
Since csLADSPA is single-channel only (right?), I would need to invoke one instance of csLADSPA per channel; would that be possible?
Is there a good way of adding multiple control values under a single control name in csLADSPA, like gains for several frequency bands, and just calling it a '10-band EQ'?
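On that last point: whatever the port-level answer turns out to be, internally the "one name, many values" control set is just an array of per-band gains. A crude frequency-domain sketch of what a 10-band octave EQ does with such an array (the band centres, block-based DFT processing, and function names are my own simplifications for illustration, not anything from the csLADSPA API):

```python
import cmath
import math

def dft(x):
    """Naive DFT, for illustration only."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Naive inverse DFT."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def band_index(freq_hz):
    """Map a frequency to one of 10 octave bands centred at 31.25 Hz .. 16 kHz."""
    if freq_hz <= 0:
        return 0
    return max(0, min(9, int(round(math.log2(freq_hz / 31.25)))))

def graphic_eq(block, gains_db, sample_rate=48000):
    """Scale each DFT bin of a block by the gain of the octave band it falls in.

    gains_db is the '10 control values under one name': a list of 10
    per-band gains in dB.
    """
    N = len(block)
    X = dft(block)
    out = []
    for k, X_k in enumerate(X):
        # bin frequency, negative for the upper half of the spectrum
        f = (k if k <= N // 2 else k - N) * sample_rate / N
        g = 10 ** (gains_db[band_index(abs(f))] / 20)
        out.append(X_k * g)
    return [c.real for c in idft(out)]
```

With all ten gains set equal, the whole block is simply scaled by that gain, which is a handy sanity check; in practice each slider in the GUI would write one entry of the gains list.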