Re: Towards open-source infrastructure for DMR repost from OpenDV
On 26/11/20 3:54 am, Steve N4IRS wrote:
> DMR, after D-Star, is the most political of the digital voice modes.

I'm in agreement with the general idea. I was involved in EchoIRLP,
which came about simply because people like me simply wanted to be able
to connect to whatever network we wanted. EchoIRLP also incorporated
features to protect the networks from accidental cross linking, which
was one of the reasons it was readily accepted, once people got their
head around the concept.
While it's great to play with different modes, we do have a real "Tower
of Babel" when it comes to DV modes. Projects like DV Switch do go a
long way towards mitigating the effect of having multiple incompatible
modes among a relatively small user base.
What would I like? In an ideal world, I'd like to be able to
communicate with the hams I want to, regardless of network or mode,
using the radios I have. In essence, think of it like EchoIRLP on
steroids. I have something close to this with my AllStar node, which
not only can do AllStar, IRLP and Echolink (IRLP/EL via my existing
EchoIRLP node - with all proper lockouts in place), but it's configured
to do DMR (BM), YSF and P25 via a local DVSwitch installation. I'd
like to be able to run multiple DMR networks eventually.
The end game is something that performs the role that IP does for data -
providing routing over the top of multiple networks, so any endpoint can
find any other endpoint.
Another issue is audio processing. I believe in processing the audio as
little as possible. This philosophy goes back to EchoIRLP (2003-) and
the IRLP/Echolink integrated conference servers (2005-), and results in
the best audio quality with minimum latency. DV modes, unlike the
analog systems, often don't have a "common vocoder", so some amount of
transcoding is unavoidable, but again given the aggressive vocoders in
use, I'd like to keep it to a minimum, while preserving as much metadata
as possible. Incidentally, this would be handy internally in cross mode
gateways like the one I have in the cloud.
Here's how I see it working. At the source, the audio is decoded to
PCM, and two audio streams are sent - one in PCM and a second stream in
the original encoded format. Information about the vocoder in use is
added to the metadata. Stations receiving the stream examine the
vocoder metadata. If they natively support the vocoder used, they
simply take the original encoded voice data and use that. Otherwise the
receiving nodes that can't handle the source vocoder will take the PCM
stream and re-encode that. In these days of unlimited (or near
unlimited) broadband Internet, the overheads of doing things this way
shouldn't be an issue. If a viable software implementation of the
D-STAR vocoder comes along, it would still then be possible to have a
"dumb" node linked to a "smart router" running in the cloud, to save
local processing power or bandwidth, if needed.
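The receiver-side selection logic described above could be sketched roughly like this (a minimal illustration only; all names, types and payloads here are hypothetical, and a real implementation would carry binary vocoder frames over the network):

```python
# Hypothetical sketch of the dual-stream scheme: each frame carries both the
# original vocoder bitstream and a PCM fallback, plus metadata naming the
# vocoder. Receivers pass the original through if they speak that vocoder,
# otherwise they re-encode from the PCM stream.

from dataclasses import dataclass

@dataclass
class VoiceFrame:
    vocoder: str    # e.g. "AMBE+2", "Codec2" (illustrative names)
    encoded: bytes  # original vocoder bitstream
    pcm: bytes      # decoded PCM fallback stream

def select_payload(frame: VoiceFrame, supported: set) -> tuple:
    """Return (action, payload): use the original bitstream when the
    receiver natively supports the source vocoder, else transcode PCM."""
    if frame.vocoder in supported:
        return ("passthrough", frame.encoded)  # no transcoding needed
    return ("transcode", frame.pcm)            # re-encode the PCM fallback

# Example: a DMR endpoint that natively handles AMBE+2
frame = VoiceFrame(vocoder="AMBE+2", encoded=b"\x01\x02", pcm=b"\x00" * 320)
print(select_payload(frame, {"AMBE+2"})[0])  # passthrough
print(select_payload(frame, {"Codec2"})[0])  # transcode
```

The bandwidth cost is carrying two streams per transmission, which, as noted above, is a reasonable trade on modern broadband links.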
The main challenge I see is avoiding routing loops and malicious
interference (sadly, the latter does happen). For the former, it would be ideal
if the network was smart enough to detect loops and then take action to
block the source of problems or break the loop, until it's fixed.
Otherwise some human oversight would be needed.
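One well-known way to get the automatic loop detection described above, borrowed from path-vector routing and purely illustrative here, is to tag each relayed stream with the list of nodes it has traversed and refuse to relay it a second time:

```python
# Illustrative loop-breaking sketch: each relayed stream carries the list of
# node IDs it has passed through (like a BGP AS-path). A node refuses to
# relay a stream it has already handled, which breaks the loop, and a
# hop-count limit acts as a backstop against undetected loops.

def should_relay(my_node_id: str, path: list, max_hops: int = 8) -> bool:
    if my_node_id in path:
        return False   # loop detected: this stream already passed through us
    if len(path) >= max_hops:
        return False   # TTL-style hop limit exceeded
    return True

def relay(my_node_id: str, path: list) -> list:
    """Append ourselves to the path before forwarding the stream."""
    return path + [my_node_id]

# A stream that comes back around to node "A" gets dropped:
path = []
for node in ["A", "B", "C", "A"]:
    if not should_relay(node, path):
        break
    path = relay(node, path)
print(path)  # ['A', 'B', 'C'] - the second visit to A is refused
```

This only breaks the loop automatically; blocking a deliberately abusive source would still need policy, and likely some human oversight, layered on top.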
I do run an XLX reflector, but keep it standalone, as the very manual
interlinking process isn't compatible with my ADHD (I did play with it
in the early days of XLX). I'm also somewhat cut off from that
community, because they moved all their internal communication to a web
forum years ago, which is relatively inaccessible to me - too slow and
cumbersome and yet another login to remember to check.
This could be a use case for the "dumb node" scenario above.
Let's hope for more openness. :)
73 de Tony VK3JED/VK3IRL