Yesterday, I configured a bridge between ASL and DMR using md380-emu. Everything is working great, but I'm wondering if I can improve audio quality on the ASL to DMR side other than just changing levels. I don't really like the sound of decoded audio from DMR on ASL, but I expect most of this is due to md380-emu. I have ordered more ThumbDV dongles, so that will probably help some in and of itself. Some is probably due to the entire stack working at 8 kHz. I haven't done any testing, but anecdotally, it seems that, on a real DMR radio, the vocoder is perhaps running at a higher sampling rate, though the audio itself is 8 kHz.
I noticed an Analog_Bridge option called USE_AUDIO_BPF, but couldn't find any documentation on it. I assume this is a band-pass filter. Is it configurable, or are the frequencies fixed? If it takes arguments, what does the syntax look like?
I'd ultimately like to use LADSPA effects to do things like downward expansion on the ASL-to-DMR side to clean up some low-level noise before it hits the vocoder, and maybe add a slight EQ or multi-band dynamics processing to fill out some of the low end from analog radios. My thought, though I'm not sure how this would actually be done, is to pipe audio from Analog_Bridge to SoX for real-time processing, then back into a second Analog_Bridge instance for the node. I'm just not sure of a good way to pipe raw PCM audio from A_B to stdin so SoX can process it, then pipe the processed PCM on to, I assume, a second A_B configured to send to the node, or whether this is even possible without a bunch of latency and a huge mess. Has anyone tried anything like this before?
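To give an idea of the middle stage I'm picturing, here's a rough SoX invocation: raw PCM in on stdin, processed audio out on stdout. The format flags assume 8 kHz 16-bit signed mono, which is my guess at what A_B would hand off; the `compand` settings are just the noise-reduction example from the SoX man page, not values tuned for this, and a real downward expander would need its own transfer-function points (or a LADSPA plugin via SoX's `ladspa` effect).

```shell
# Hypothetical sketch, not a working A_B config: read raw 8 kHz
# 16-bit signed mono PCM on stdin, apply a compander, and write
# raw PCM in the same format to stdout for the second instance.
sox -t raw -r 8000 -e signed -b 16 -c 1 - \
    -t raw -r 8000 -e signed -b 16 -c 1 - \
    compand 0.3,1 6:-70,-60,-20 -5 -90 0.2
```

The part I don't know is whether either end of this can actually be a pipe as far as A_B is concerned, or whether it would have to go through something like a loopback audio device instead.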