If you use HBlink OR dmr_utils, you need to read this


Cort N0MJS <n0mjs@...>
 

Folks,

PYTHON2 BASED VERSIONS OF HBLINK AND DMR_UTILS ARE NOW SUNSET – ONLY BUG FIXES WILL BE OFFERED.



If you go looking at my repos on GitHub, you’ll see a couple of new things:

HBlink3 and dmr_utils3

These are Python3 versions of HBlink and dmr_utils. Currently, dmr_utils3 does not include ambe_bridge.py, and HBlink3 does not include parrot or bridge_all – effectively leaving it as the base stack and the conference bridge application. Some things have also been renamed: I removed the “hb_” prefix from all of the files, renamed hb_confbridge.py to bridge.py, and renamed hb_confbridge_rules.py to rules.py. The main goal with the name changes is to make the first few characters of each name unique. I’m a shitty typist to begin with, so being able to type b(tab) and get bridge.py is a lot easier than having to type out hb_confbridge.py all of the time… and I have to type those things a lot when I’m in full-on coding mode :)

As for the future of the other HBlink programs – some of that is going to depend on community support. I would love to see someone step up and port bridge_all.py and parrot.py to Python3. I will eventually get to them, but not soon, because…

The master branch of HBlink3 is stable and has been running on the K0USY Group’s “KS-DMR” network all week. On our particular system, with the things we have configured, just moving to Python3 (OK, plus a few bits of refactoring) has given us a 15-20% performance boost (measured as the time between packet ingress and processing completion). You will also notice another branch of HBlink3 called uvloop. In this branch I’m swapping out the venerable Twisted module for Python3’s built-in asyncio module, plus uvloop, an ultra-fast drop-in replacement for the asyncio event loop. Just moving to Python3 gave us a nice bump; my hope is that the move to uvloop makes HBlink much faster still – potentially rivaling software compiled statically to machine code.
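For the curious, dropping uvloop into an asyncio program is a one-line policy swap; here is a minimal sketch that falls back to stock asyncio when uvloop is not installed:

```python
import asyncio

# uvloop is a drop-in replacement for the default asyncio event loop.
# Fall back to stock asyncio if it is not installed.
try:
    import uvloop
    asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
    loop_flavor = "uvloop"
except ImportError:
    loop_flavor = "asyncio"

async def main():
    # Application code is identical either way -- only the loop
    # implementation underneath changes.
    await asyncio.sleep(0)

asyncio.run(main())
print("running on", loop_flavor)
```

Newer uvloop releases also offer `uvloop.install()` as shorthand for the policy swap.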

I’ve already moved off of the master branch to work on the uvloop branch. This development will be rapid – because the goal is to have hblink.py and bridge.py working on uvloop ASAP. Once that is completed, I will go back and start filling in the gaps as well as adding features. I do not intend to back-port new features to the Python2 versions.

The HBlink3 master branch will support the existing Python2 HBmonitor software. The uvloop branch will not. I intend to “burn down” the reporting “stuff” and start over with HBlink3 on uvloop, and I am looking for JavaScript (and related browser code) developers to help with this. HBmonitor is a bandwidth HOG: it renders the entirety of the HTML tables on the server for every incremental change and pushes all of that HTML out to the browser. A busy system can use close to 1 Mbps of bandwidth for a single connected browser. The goal is to send much less information to the browser and let the browser build the tables… but I don’t have a clue about programming that stuff. Co-developers welcome!!!

0x49 DE N0MJS



Cort Buffington
785-865-7206


JJ Cummings
 

Cort – great stuff! I just switched over, so the stack now looks like this (for those that care):

DMRLink <-> IPSC_Bridge <-> HB_Bridge <-> hblink3(bridge) <-> Analog_Bridge <-> ASL

On a separate note, I noticed an error in the log, specifically related to OpenBridge (outbound-only connections). It looks like it's not registering that the END event has occurred when it happens, and thus it times out. Posting here, but hopefully I can post a pull request if I ever get time to debug.

INFO 2019-01-14 14:16:30,526 (ANALOG_BRIDGE) *CALL END*   STREAM ID: 2752969323 SUB: 1108389 (1108389) PEER: 310857 (310857) TGID 310815 (310815), TS 1, Duration: 3.02
INFO 2019-01-14 14:16:35,780 (OBP-3103) *TIME OUT*   STREAM ID: 2752969323 SUB: 1108389 PEER: 310885350 TGID: 310815 TS 1 Duration: 3.02
INFO 2019-01-14 14:19:21,827 (ANALOG_BRIDGE) *CALL START* STREAM ID: 205043449 SUB: 1108389 (1108389) PEER: 310857 (310857) TGID 310815 (310815), TS 1
INFO 2019-01-14 14:19:21,878 (ANALOG_BRIDGE) Conference Bridge: 310815, Call Bridged to OBP System: OBP-3103 TS: 1, TGID: 310815
INFO 2019-01-14 14:19:24,859 (ANALOG_BRIDGE) *CALL END*   STREAM ID: 205043449 SUB: 1108389 (1108389) PEER: 310857 (310857) TGID 310815 (310815), TS 1, Duration: 3.03
INFO 2019-01-14 14:19:30,780 (OBP-3103) *TIME OUT*   STREAM ID: 205043449 SUB: 1108389 PEER: 310885350 TGID: 310815 TS 1 Duration: 3.03

On Fri, Jan 11, 2019 at 11:04 AM Cort N0MJS via Groups.Io <n0mjs=me.com@groups.io> wrote:

Cort N0MJS <n0mjs@...>
 

Not an error. TX streams have to time out for now. There’s no explicit end for OBP TX streams yet.


On Jan 14, 2019, at 2:41 PM, JJ Cummings <cummingsj@...> wrote:


Cort N0MJS <n0mjs@...>
 

A bit more now that I’m on a real computer. Given the way those streams get processed on the TX side, I’ve not found a really efficient way to terminate them on call end. The efficient place would be right before the stream metadata is used for forwarding, but that necessitates another check for the same condition again after forwarding… I’m trying to find a better way, but letting them time out isn’t a problem. It just keeps entries in a list a few seconds longer.
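The let-it-time-out behavior can be pictured with a toy stream table that expires entries on inactivity rather than on an explicit call-end packet. The class, names, and 5-second hang time here are illustrative only, not HBlink's actual code:

```python
import time

STREAM_TO = 5.0  # hypothetical inactivity timeout, seconds

class StreamTable:
    """Toy model of per-stream state that expires on inactivity
    instead of on an explicit call-end event (the OBP TX case)."""
    def __init__(self):
        self._streams = {}  # stream_id -> timestamp of last packet

    def packet(self, stream_id, now=None):
        # Every forwarded packet refreshes the stream's timestamp.
        self._streams[stream_id] = time.time() if now is None else now

    def reap(self, now=None):
        # Called periodically; drops streams idle longer than STREAM_TO.
        now = time.time() if now is None else now
        expired = [sid for sid, last in self._streams.items()
                   if now - last > STREAM_TO]
        for sid in expired:
            del self._streams[sid]
        return expired

table = StreamTable()
table.packet(2752969323, now=100.0)   # stream IDs borrowed from the log above
table.packet(205043449, now=103.5)
print(table.reap(now=106.0))  # -> [2752969323]; only it has been idle > 5 s
```

The cost of skipping an explicit end is exactly what Cort describes: a finished stream lingers in the table until the reaper's next pass.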

On Jan 14, 2019, at 4:26 PM, Cort N0MJS via Groups.Io <n0mjs@...> wrote:


Cort Buffington
785-865-7206


Spencer N4NQV
 

Cort,

I just migrated our group's server to hblink3 a couple days ago (using bridge.py now in place of the old confbridge). Everything seems to be working well at the moment. We're currently using HBmonitor as well, though we'll have to figure something else out when we migrate to the asyncio version. I have a few questions/topics of discussion about the python3/hblink3 migration:

1) I noticed a commit on a branch of HBlink a couple weeks ago fixing an LC generation bug for OpenBridge connections where there's also a TG translation. We do this in our current setup, and had issues with the old HBlink(2) not sending all voice traffic properly via OpenBridge. I never got to the bottom of what the issue actually was, but I suspect it had to do with the outbound voice traffic having the wrong TG number. UDP packet logs showed the voice traffic getting sent to Brandmeister, but I never dumped the actual packet contents when we were having issues, so I can't verify.

Do you know whether this commit actually fixed the bug, and will a patch be made to hblink3 as well? Didn't look like it had been made the last time I glanced at the source.


2) What needs to be done to get the asyncio version of hblink3 to a usable point? I'm definitely interested in getting migrated over to asyncio soon. I'm developing a few cool new features for our repeaters that rely heavily on HBlink (and some user presence/private call routing additions), and the asyncio version will make it a lot simpler for me to develop in. 

 

3) I also started a project for a monitoring system recently, one that ties into MMDVMHost to send JSON formatted repeater status messages (transmit status, last call info, etc) from all our repeaters to a central MQTT broker, where it's aggregated, logged, and (eventually will be) distributed to browsers. Sounds like it's right up the alley of how the new HBmonitor replacement is going to work. I wouldn't mind helping out with some of the software development. 

 

Thanks,

Spencer Fowler

N4NQV


Cort N0MJS <n0mjs@...>
 

Well this is awesome – someone using HBlink3 AND some great questions… comments inline.

On Jan 21, 2019, at 3:31 PM, n4nqv@... wrote:

I just migrated our group's server to hblink3 a couple days ago (using bridge.py now in place of the old confbridge). Everything seems to be working well at the moment. We're currently using HBmonitor as well, though we'll have to figure something else out when we migrate to the asyncio version. I have a few questions/topics of discussion about the python3/hblink3 migration:



I am working on some better ways to handle monitor stuff more efficiently, and it’s based on JSON, not pickles, and only sends update information once a reporting client gets the initial “full” feed… but that’s a ways off.
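That full-feed-then-deltas scheme might look roughly like this. The message shapes and function names below are invented for illustration, not the actual HBlink reporting format:

```python
import json

# Hypothetical monitor state: repeater ID -> timeslot -> status fields
state = {"310124": {"1": {"Status": "Idle"}, "2": {"Status": "Idle"}}}

def full_feed():
    # Sent once, when a reporting client first connects.
    return json.dumps({"type": "full", "state": state})

def delta(repeater, slot, changes):
    # Afterwards, only the changed fields go out over the wire.
    state[repeater][slot].update(changes)
    return json.dumps({"type": "delta", "repeater": repeater,
                       "slot": slot, "changes": changes})

snapshot = full_feed()
update = delta("310124", "1", {"Status": "TX", "Destination": "31131"})
print(update)
```

The browser applies each delta to its local copy of the state and re-renders only the affected rows, which is what keeps the bandwidth down compared with re-sending rendered HTML.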


1) I noticed a commit on a branch of HBlink a couple weeks ago fixing an LC generation bug for OpenBridge connections where there's also a TG translation. We do this in our current setup, and had issues with the old HBlink(2) not sending all voice traffic properly via OpenBridge. I never got to the bottom of what the issue actually was, but I suspect it had to do with the outbound voice traffic having the wrong TG number. UDP packet logs showed the voice traffic getting sent to Brandmeister, but I never dumped the actual packet contents when we were having issues, so I can't verify.

Do you know whether this commit actually fixed the bug, and will a patch be made to hblink3 as well? Didn't look like it had been made the last time I glanced at the source.



The bug was found when I was porting the initial commits of bridge.py, so it actually was “fixed” there before bridge.py in hblink3 ever worked. We’ve verified on the local system that the odd one-way traffic on a BM group that had plagued us for some time is resolved!

2) What needs to be done to get the asyncio version of hblink3 to a usable point? I'm definitely interested in getting migrated over to asyncio soon. I'm developing a few cool new features for our repeaters that rely heavily on HBlink (and some user presence/private call routing additions), and the asyncio version will make it a lot simpler for me to develop in. 



It’s very, very close. I have a version I need to actually try and chase bugs on. I expect very few at this point, but I need to get some more reporting of errors in the callbacks on asyncio so that the problems don’t just quietly happen without a traceback. Next time I have about 2 hours to work and use the KS-DMR network as my testbed, it should be good to go.
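One common way to get that visibility in asyncio is a done-callback attached to every fire-and-forget task, so a failed task logs a traceback instead of vanishing. A minimal sketch (not HBlink's actual code):

```python
import asyncio
import logging

logging.basicConfig(level=logging.ERROR)
errors = []  # collected here so the example is self-checking

def log_task_result(task):
    # Done-callback: retrieves the exception from a task nobody awaits,
    # which would otherwise disappear without a traceback.
    if not task.cancelled() and task.exception() is not None:
        errors.append(task.exception())
        logging.error("Task failed: %r", task.exception())

async def broken_callback():
    raise RuntimeError("boom")

async def main():
    t = asyncio.ensure_future(broken_callback())  # fire-and-forget
    t.add_done_callback(log_task_result)
    await asyncio.sleep(0.01)  # let the task run and fail

asyncio.run(main())
```

`loop.set_exception_handler()` is the other standard hook for this; it catches exceptions the loop itself would otherwise only report at garbage-collection time.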

3) I also started a project for a monitoring system recently, one that ties into MMDVMHost to send JSON formatted repeater status messages (transmit status, last call info, etc) from all our repeaters to a central MQTT broker, where it's aggregated, logged, and (eventually will be) distributed to browsers. Sounds like it's right up the alley of how the new HBmonitor replacement is going to work. I wouldn't mind helping out with some of the software development. 



I would LOVE to hand off the monitor stuff to someone else. I mostly built the hbmonitor to demonstrate some of my ideas about how a monitor app might look. It’s simple, and could be much more – drill-downs, etc. ANYONE willing to pick up the torch on that will see me working extra hard to standardize what HBlink & friends uses as an output format, etc.

Since it looks like you have experience with asyncio….. any guesses why Uvloop performed like absolute crap, but asyncio kicked butt like it was supposed to?

Thanks for the comments and well worded questions, Spencer.

0x49 DE N0MJS



--
Cort Buffington
H: +1-785-813-1501
M: +1-785-865-7206






Spencer N4NQV
 

I would LOVE to hand off the monitor stuff to someone else. I mostly built the hbmonitor to demonstrate some of my ideas about how a monitor app might look. It’s simple, and could be much more – drill-downs, etc. ANYONE willing to pick up the torch on that will see me working extra hard to standardize what HBlink & friends uses as an output format, etc.

Maybe we can start to settle on a standard format for the new logging mechanism. The MMDVMHost monitoring utility I wrote last week monitors either a log file or the systemd journal (I'm using the journal for all our logs, no more plain files) and sends out state information. Here's what the format looks like right now:


When a server is started, just basic state info is reported:
{"Status": "Idle"}

As a voice or data transmission is started (either from RF or from Net), a payload is reported with basic call info:
{"Status": "TX", "Origin": "Net", "CallType": "Group", "Destination": "31131", "Source": "N4NQV", "Mode": "Voice"}

Then, at the end of the call, a new payload with additional info gets reported:
{"Status": "Idle", "Origin": "Net", "Loss": "0%", "CallType": "Group", "Destination": "31131", "Source": "N4NQV", "Length": "4.5s", "Mode": "Voice", "BER": "0.0%"}

The topic name has the repeater's ID, as well as the timeslot. If I was going to report this via TCP or UDP instead of MQTT, I'd probably structure it something like this instead:
{
  "310124": {
    "1": {"Status": "Idle", "Origin": "Net", "Loss": "0%", "CallType": "Group", "Destination": "31131", "Source": "N4NQV", "Length": "4.5s", "Mode": "Voice", "BER": "0.0%"},
    "2": {"Status": "Idle", "Origin": "Net", "Loss": "1%", "CallType": "Group", "Destination": "2", "Source": "KD4LZL", "Length": "4.4s", "Mode": "Voice", "BER": "0.0%"}
  }
}

I envisioned a model where different repeater controllers, bridge software, etc. could report log information in a format similar to this, using their choice of TCP, UDP, or MQTT. This is just a starting point, though; there's definitely more information to be added (location, IPs, etc. – the sky is the limit). I'm absolutely open to critique and suggestions, and I definitely want this standard to be a collaborative effort between the people working with and generating the data.
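As a concrete illustration of the UDP option, the nested payload above ships naturally as a single JSON datagram. This sketch uses a loopback socket pair in place of a real aggregator:

```python
import json
import socket

# Nested status payload in the format sketched above
status = {
    "310124": {
        "1": {"Status": "Idle", "Origin": "Net", "CallType": "Group",
              "Destination": "31131", "Source": "N4NQV",
              "Length": "4.5s", "Mode": "Voice", "BER": "0.0%"}
    }
}

# A local receiving socket stands in for the central aggregator.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))        # OS picks a free port
aggregator = rx.getsockname()

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(json.dumps(status).encode(), aggregator)

payload, _ = rx.recvfrom(65535)
received = json.loads(payload)
print(received["310124"]["1"]["Status"])  # -> Idle
```

One payload easily fits a datagram at this size; MQTT buys retained messages and fan-out on top of the same JSON body.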

Maybe we should start a new thread for this development? Or move discussion to github? I'm not entirely familiar with how this team collaborates quite yet. 

Since it looks like you have experience with asyncio….. any guesses why Uvloop performed like absolute crap, but asyncio kicked butt like it was supposed to?

Hmm, good question. I've only used uvloop once – I usually just use the default event loop in my asyncio projects. I did see a performance increase when I switched to it, but not a really significant one. How did you do your benchmark to determine it's slower? Maybe that has something to do with it. That, or something to do with the GIL – those are my only two ideas on why it would ever be substantially slower than the default loop.
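For reference, one crude way to compare event loops is to time round trips through the scheduler. A sketch like the following measures loop overhead only, not HBlink's real packet path:

```python
import asyncio
import time

async def spin(n):
    # n round trips through the event loop scheduler; a rough proxy
    # for per-packet scheduling cost, nothing more.
    for _ in range(n):
        await asyncio.sleep(0)

N = 10_000
start = time.perf_counter()
asyncio.run(spin(N))
elapsed = time.perf_counter() - start
print(f"{elapsed / N * 1e6:.3f} us per scheduled iteration")
```

Running the same script under the default loop and under uvloop (via `asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())`) gives directly comparable numbers; a real comparison would measure the ingress-to-completion time Cort mentioned, under actual traffic.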

 

Spencer Fowler

N4NQV


Cort N0MJS <n0mjs@...>
 

We should move to the hblink subgroup and out of the main group… I’ll head over there now.

On Jan 22, 2019, at 6:20 PM, Spencer N4NQV <n4nqv@...> wrote:



Cort Buffington
785-865-7206