Church Streaming: Notes and Discussion

Better this week. Issues of note:

  • Someone noticed the Chromecast on their phone and was fiddling with it during the early service. The suggestion was to rename it so it’s obvious what it is. Unfortunately, without splitting the network into separate segments, I can’t easily stop random devices on the network from controlling the Chromecast. That would be an argument for segmenting the network and kicking devices off, but I really, really don’t want to rejoin the plethora of “things” on the network to a new segment.
  • I noticed that the streaming segments are 8s long, with keyframes every 4s, but the HLS index only advertises 20s of content. I’ve extended the playlist length to 60s to see if that resolves some of the random rebuffering (see the config sketch just after this list). It’s possible that the overflow room is falling far enough behind that it loses sync with the HLS index file and has to restart. If this doesn’t resolve things, I may try raising the keyframe rate and shortening the segments.
  • Our main livestreaming wrapper page (hosted by some other church, I believe) cuts off playback once the stream exceeds the configured length. If a service runs to 1:10 (Lutherans may gasp and faint in the pews now), the page still cuts it at 1:00, even though the stream itself is still running and the YouTube/Facebook feeds work as expected.
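
On the HLS item above: the segment length and playlist length are both knobs in the nginx RTMP module’s hls settings. Roughly what the 60s change looks like (the application name and paths here are placeholders, not our actual config):

    rtmp {
        server {
            listen 1935;

            application live {              # placeholder name
                live on;

                hls on;
                hls_path /var/www/hls;      # placeholder path
                hls_fragment 8s;            # 8s segments, as noted above
                hls_playlist_length 60s;    # was only advertising ~20s; now 60s
            }
        }
    }

The keyframe interval itself comes from the encoder, not nginx, so shortening segments would also mean touching the encoder settings.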

We had one glitch, but I’m pretty sure it was something on the hardware encoder side, as the server simply stopped receiving traffic for about 30s.

Extending the HLS playlist to 60s seemed to help - no drops today, only one brief pause in the overflow room that picked right back up where it left off. So, mostly solved!

Further updates and experiments:

Our castr.io subscription expired, and we’re looking at ways to avoid renewing it, because it’s just expensive enough to be annoying (around $150/yr). I have this server now… and free bandwidth on it. :wink:

So, one nginx RTMP proxy later, it all… doesn’t work. Facebook only accepts RTMPS (RTMP wrapped in TLS; yes, this sounds like a terrible idea for livestreams), which the nginx RTMP module doesn’t currently support.
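
For context, the relay itself is just the RTMP module’s push directive; roughly this shape (application name is a placeholder, stream keys elided as usual):

    rtmp {
        server {
            listen 1935;

            application live {              # placeholder name
                live on;

                # Relay the incoming stream out to each platform.
                push rtmp://a.rtmp.youtube.com/live2/[stream key];

                # And here's the catch: the stock module can't speak TLS,
                # so there's no pushing straight to an rtmps:// ingest URL.
                # push rtmps://[facebook ingest]/[stream key];
            }
        }
    }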

So, the solution is to use a local “stunnel” proxy (literally just a TLS wrapper for streams). Point the stream at the localhost stunnel port, let stunnel fire it out to the Facebook endpoint, and all should be good.

Is it? No idea. I’ll try it in the morning when I’m onsite and have a stream to work with.
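
For the record, the stunnel side is just a client-mode tunnel: listen on a local port, wrap whatever arrives in TLS, and forward it on. Something along these lines, where the local port and the Facebook host/port are assumptions rather than our actual values:

    ; /etc/stunnel/stunnel.conf (sketch)
    [facebook-rtmps]
    client = yes
    accept = 127.0.0.1:19350                  ; local port nginx pushes to (assumed)
    connect = live-api-s.facebook.com:443     ; Facebook RTMPS ingest (assumed)

With that in place, the nginx push target becomes something like rtmp://127.0.0.1:19350/rtmp/[stream key] (the app path depends on what Facebook’s ingest URL specifies) instead of the rtmps:// URL.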

Adding TLS doesn’t sound much worse than building on top of TCP to start with. My understanding is that RTMFP is the flavor that rides on UDP.

Seems to work fine so far… TLS and all. I think SRT is the more modern UDP-based one, but I’ve not delved deeply into the details.

Once I worked out the “… wait, WTF?” issue that is standard for things like this:

    2021/04/11 14:27:13 [error] 589670#589670: connect() to [2607:f8b0:400a:805::200c]:1935 failed (101: Network is unreachable)
    2021/04/11 14:27:13 [error] 589670#589670: *14 relay: push failed name='Sunday' app='' playpath='' url='a.rtmp.youtube.com/live2/[stream key]', client: 184.155.206.44, server: 0.0.0.0:1935

A bit of beating IPv6 out of the path and it worked. Seriously, though: it’s an IPv4-only server…
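
For completeness, the usual blunt instruments for beating IPv6 out of the path look something like this (the address below is a placeholder, and this isn’t necessarily the exact fix used here):

    # Pin the ingest hostname to an IPv4 address in /etc/hosts so nginx
    # never picks up the AAAA record (placeholder address; find a real one
    # with: dig +short a.rtmp.youtube.com A)
    203.0.113.10  a.rtmp.youtube.com

    # Or just disable IPv6 on the host entirely:
    sysctl -w net.ipv6.conf.all.disable_ipv6=1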

Of possible relevance: if nginx is trying to record to a failing disk array, it breaks the stream. Fortunately, the need to record the stream has dropped off, so we can probably pull the disk array out of the path entirely for now and just do the hardware transcoding out to a few other destinations. The array is 5x old 1.5TB drives that have been through a huge range of servers, Drobos, etc., and they’ve been in this server for years, so it’s hardly surprising that one is having the occasional issue.
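
Conveniently, pulling the recording out of the path is basically a one-line change in the RTMP application block. A sketch, with placeholder names and paths:

    application live {              # placeholder name
        live on;

        # Recording to the flaky array disabled for now; it was previously
        # something along the lines of:
        #   record all;
        #   record_path /mnt/array/recordings;   # hypothetical path
        record off;

        # The pushes/transcodes to the other destinations stay as-is.
    }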

I still need to run cables to the Chromecast, but a Chromecast Ultra seems at least mildly better behaved than the old one in terms of wireless behavior.

So tired of dealing with this teetering tech stack… if it’s not one thing, it’s another.