Re: DMRLink - bridge streams overwriting.


Peter M0NWI
 


Ah OK, I didn't get that from the config text; I was hoping to link Peers or Masters as a trunk group. Will have to look again.

From: main@DVSwitch.groups.io <main@DVSwitch.groups.io> on behalf of Cort N0MJS via Groups.Io <n0mjs@...>
Sent: 11 July 2018 20:51:17
To: main@DVSwitch.groups.io
Subject: Re: [DVSwitch] DMRLink - bridge streams overwriting.
 
Any system you designate as “TRUNK” will become one. There was never a limitation to one. All “TRUNK” does is remove the contention handler completely. I don’t use it a lot, but N3FE, who hangs out on here, is kinda the TRUNK guru. If I screw up something with the TRUNK code, Corey is the one who usually finds it first :)
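
To illustrate (with made-up names, not the actual internals), removing the contention handler amounts to something like this:

    # Hypothetical sketch; SYSTEMS and accept_stream are illustrative
    # names, not DMRlink's actual internals.
    SYSTEMS = {
        'MASTER-1': {'TRUNK': False},  # normal system: contention handling applies
        'TRUNK-1':  {'TRUNK': True},   # TRUNK system: contention handling removed
    }

    def accept_stream(system, ts, stream_id, active_streams):
        """Decide whether an inbound stream may be repeated on this system."""
        if SYSTEMS[system]['TRUNK']:
            # TRUNK: no contention handling at all; every stream is passed.
            return True
        # Non-TRUNK: one stream per timeslot; the first arrival wins.
        current = active_streams.get((system, ts))
        return current is None or current == stream_id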


On Jul 11, 2018, at 2:46 PM, Peter M0NWI <peter-martin@...> wrote:


Hehe, yes, got that after looking at it for a bit; all rebuilt now, just under 1000 lines in my config! Swapped over into operation, and it seems OK from my limited testing.

Best not to tell the users or they'll point out every error; we'll see if anyone shouts about the crashing TG being fixed.

What would have been nice would have been more than one TRUNKS section; is that possible?

Thanks Cort.

From: main@DVSwitch.groups.io <main@DVSwitch.groups.io> on behalf of Cort N0MJS via Groups.Io <n0mjs@...>
Sent: 11 July 2018 20:25:54
To: main@DVSwitch.groups.io
Subject: Re: [DVSwitch] DMRLink - bridge streams overwriting.
 
You just have to switch to thinking that the configuration is centered on a group of arbitrarily named “conference bridges”. Each conference bridge is just like a telephone conference bridge – anyone may “dial into” it. Just think about, for each system (master or peer), which TS/TGID combination you want to be part of a bridge.
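
As a rough illustration (the BRIDGES layout and field names here come from typical confbridge rules files and may not match your version exactly):

    # Assumed confbridge-style rules layout; check your own rules file
    # for the exact field names.
    BRIDGES = {
        # Every system/TS/TGID entry listed here has "dialed into" LOCAL-9.
        'LOCAL-9': [
            {'SYSTEM': 'MASTER-A', 'TS': 2, 'TGID': 9, 'ACTIVE': True,
             'TIMEOUT': 2, 'TO_TYPE': 'NONE', 'ON': [], 'OFF': []},
            {'SYSTEM': 'MASTER-B', 'TS': 1, 'TGID': 9, 'ACTIVE': True,
             'TIMEOUT': 2, 'TO_TYPE': 'NONE', 'ON': [], 'OFF': []},
        ],
    }

Each entry answers exactly that question: which TS/TGID combination, on which system, is part of this bridge.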

I always thought bridge.py was OK, too – but an overwhelming majority asked for an easier-to-configure version, and maintaining two was a bit too much :)

On Jul 11, 2018, at 11:05 AM, Peter M0NWI <peter-martin@...> wrote:

Bridge.py is GREAT!  Not hard to do at all, in fact very logical and really flexible!

Just my tuppence :)

Having a go with confBridge, but it's quite difficult to get it to do what I want because of the multiple masters I run!

But, like the British Rail adverts from the '70s, "we're getting there!" 

73,
Peter


From: main@DVSwitch.groups.io <main@DVSwitch.groups.io> on behalf of Cort N0MJS via Groups.Io <n0mjs@...>
Sent: 11 July 2018 16:29
To: main@DVSwitch.groups.io
Subject: Re: [DVSwitch] DMRLink - bridge streams overwriting.
 
You probably can. The reason for the creation of confbridge was that bridge was HARD to write rules for, and nobody actually used anything “asymmetric” that it could do and confbridge couldn’t.

On Jul 11, 2018, at 9:20 AM, Peter M0NWI <peter-martin@...> wrote:

Hi Cort,

Thanks for your answer, even if it's not what I wanted 😊

Do you believe I can replicate what I have now under Confbridge? If so, I'll start on that work; it's just been "ain't broke, don't fix it" territory!!

73,
Peter



From: main@DVSwitch.groups.io <main@DVSwitch.groups.io> on behalf of Cort N0MJS via Groups.Io <n0mjs@...>
Sent: 11 July 2018 13:14
To: main@DVSwitch.groups.io
Subject: Re: [DVSwitch] DMRLink - bridge streams overwriting.
 
Peter (and group),


* None of my code EVER plays favorites with a TGID based on some designator assigned by one of the “networks”. That would go completely against what I stand for with these projects.

* Bridge.py is a retired application. It hasn’t been developed for some time and is not supported.


The goal of the contention handler routines was to keep something like this from happening. I suspect one of two scenarios – the contention handler is outright failing and sending the wrong traffic, or it’s failing not quite outright and sending BOTH streams to the repeater, which is picking the “other” stream.
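
To illustrate, here is a minimal sketch of the hold-off a working contention handler enforces: one stream owns a (system, timeslot) until it ends and a hang time expires. The names and the hang-time value are made up for illustration, not the actual code:

    import time

    HANG_TIME = 3.0  # seconds; illustrative value only

    active = {}  # (system, ts) -> {'stream_id': ..., 'last_heard': ...}

    def offer_stream(system, ts, stream_id, now=None):
        """Return True if this stream may be played out on (system, ts)."""
        now = time.time() if now is None else now
        slot = active.get((system, ts))
        if slot and slot['stream_id'] != stream_id:
            if now - slot['last_heard'] < HANG_TIME:
                # Slot is busy or in hang time: hold the contending stream off.
                return False
        active[(system, ts)] = {'stream_id': stream_id, 'last_heard': now}
        return True

In the first failure scenario above, a broken handler effectively returns True for the wrong stream; in the second, it returns True for both.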

I’d really like to figure out which of those things is happening. In IPSC, Motorola has some kind of convoluted (to me) method of determining who gets access to the channel. I’m sure it makes sense to them, but when you’re reverse engineering…. I really am not entirely sure how they make the choice, but it appears significantly more complicated than “who got here first”.

Can anyone duplicate this behavior with confbridge.py? If so, then I’ll dig into it for sure. If it’s just bridge.py…. then I think the best answer I have is to move to confbridge, then we’ll deal with it if it comes back. I know that’s not the answer Peter wants to hear… but it’s the one I have.

0x49 DE N0MJS


On Jul 11, 2018, at 5:11 AM, Peter M0NWI <peter-martin@...> wrote:

Cort,

As you know, I've got a fairly stable DMRLink bridge hosting 8 repeaters. I was asked by one keeper to link the local S2/TG9 on repeater A to S1/TG9 on repeater B, so I added rules in both directions to the bridge.py files, and it seemed to be OK: when a user on A keys up, the output is sent to repeater B no problem.
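
Roughly, the rules look like this (field names from memory of bridge_rules.py; this illustrates the shape rather than my exact config):

    # One rule per direction; verify field names against your own
    # bridge_rules.py.
    RULES = {
        'MASTER-A': {
            'GROUP_VOICE': [
                # Repeater A, TS2/TG9 -> Repeater B, TS1/TG9
                {'NAME': 'A-TO-B-TG9', 'ACTIVE': True,
                 'SRC_TS': 2, 'SRC_GROUP': 9,
                 'DST_NET': 'MASTER-B', 'DST_TS': 1, 'DST_GROUP': 9},
            ],
        },
        'MASTER-B': {
            'GROUP_VOICE': [
                # Repeater B, TS1/TG9 -> Repeater A, TS2/TG9
                {'NAME': 'B-TO-A-TG9', 'ACTIVE': True,
                 'SRC_TS': 1, 'SRC_GROUP': 9,
                 'DST_NET': 'MASTER-A', 'DST_TS': 2, 'DST_GROUP': 9},
            ],
        },
    }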

Then I got reports that while repeater B (connected to bridge Master B) was playing out the stream from repeater A (bridge Master A), if another stream arrived into repeater B on the same slot but a different TG, say a national or international group from Master C in the same bridge, repeater B would drop the TG9 stream and play out the new Master C stream.

I thought that as repeater B (Master B) was already outputting network traffic, it would ignore further traffic until the first stream had finished and a hang time had expired.

I wonder if, because they are streaming from different Masters in the same bridge, that scenario hasn't been considered, and so the hold-off is not invoked?

Is there a priority that can be (or has been) set within the streams which would allow the national groups to take precedence over the local traffic?

73,
Peter



--
Cort Buffington
H: +1-785-813-1501
M: +1-785-865-7206




