Someone noticed the Chromecast on their phone and was fiddling with it during the early service. The suggestion was to rename it so it's obvious what it is. Unfortunately, short of splitting the network into separate segments, I can't easily stop random devices on the network from controlling the Chromecast. That would be an argument for splitting the network out and kicking devices off, but I really, really don't want to rejoin the plethora of “things” on the network to a new segment.
I noticed that the streaming segments are 8s long, with keyframes every 4s, but the HLS playlist only advertises 20s of content. I’ve extended this to 60s to see if I can resolve some random rebuffering. It’s possible that the overflow room player is getting far enough behind that it falls off the end of the playlist and has to restart. I may try upping the keyframe rate and shortening segments if this doesn’t resolve things.
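For reference, both of those knobs live in the nginx RTMP config; this is roughly the shape of it (a sketch using the stock nginx-rtmp-module directives, with a placeholder application name and path):

```nginx
rtmp {
    server {
        listen 1935;

        application live {           # placeholder application name
            live on;

            # HLS output for the overflow room
            hls on;
            hls_path /var/www/hls;   # placeholder path
            hls_fragment 8s;         # segment length; boundaries land on encoder keyframes (every 4s here)
            hls_playlist_length 60s; # was 20s of content; a longer window gives a lagging player room to catch up
        }
    }
}
```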
Our main livestreaming wrapper page (hosted by some other church, I believe) cuts the stream off once it exceeds the configured length. If a service runs to 1:10 (Lutherans may gasp and faint in the pews now), it still gets cut at 1:00, even though the stream itself is still running and the YouTube/Facebook streams keep working as expected.
We had one glitch, but I’m pretty sure it was something related to the hardware encoder, as I simply stopped receiving traffic on the server for about 30s.
Extending the HLS playlist to 60s seemed to help - no drops today, only one brief pause in the overflow room that picked right back up where it left off. So, mostly solved!
Our castr.io subscription expired, and we’re looking at ways to not have to renew it, because it’s just expensive enough to be annoying (around $150/yr). I have this server now… and free bandwidth on it.
So, one nginx RTMP proxy later, all… doesn’t work. Facebook only accepts rtmps (secure RTMP over SSL, and yes, that sounds like a terrible idea for livestreams), which the nginx RTMP module doesn’t currently support.
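The push side of that proxy is only a couple of lines; roughly this (a sketch, with placeholder stream keys, and the ingest URLs are whatever the respective dashboards hand out):

```nginx
# Same application block as the HLS config above; the push lines sit alongside it.
application live {
    live on;

    # Relay the incoming stream out to each platform
    push rtmp://a.rtmp.youtube.com/live2/YOUTUBE_STREAM_KEY;

    # This is the bit that doesn't work: Facebook only accepts rtmps://,
    # and the nginx RTMP module can't speak TLS on an outbound push.
    #push rtmps://live-api-s.facebook.com:443/rtmp/FACEBOOK_STREAM_KEY;
}
```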
So, the solution is to use a local “stunnel” proxy (literally just a TLS wrapper around an arbitrary TCP connection). Point the push at the localhost stunnel port, stunnel fires it out to Facebook’s rtmps endpoint, and all should be good.
Is it? No idea. I’ll try it in the morning when I’m onsite and have a stream to work with.
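For my own reference, the stunnel side is roughly this (a sketch; the config path, section name, and local port are arbitrary, and the Facebook host/port are whatever their stream setup page lists):

```ini
; /etc/stunnel/facebook-rtmps.conf (assumed path)
[facebook-live]
client = yes
accept = 127.0.0.1:19350               ; local plaintext RTMP port for nginx to push to
connect = live-api-s.facebook.com:443  ; Facebook's rtmps ingest endpoint
```

The Facebook push in nginx then points at rtmp://127.0.0.1:19350/rtmp/FACEBOOK_STREAM_KEY (plain rtmp, since it’s loopback traffic), and stunnel handles the TLS from there.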
Of possible relevance: if nginx is trying to record to a failing disk array, it breaks the stream. Fortunately, the need to record the stream has dropped off, so we can probably pull the disk array out of the path entirely for now and just keep doing the hardware transcoding out to the handful of other destinations. The array is 5x old 1.5TB drives that have been through a huge range of servers, Drobos, etc., and they’ve been in this server for years, so it’s hardly surprising that one of them is having the occasional issue.
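Pulling it out of the path should just be a matter of turning recording off in the application block (a sketch, same placeholder caveats as above):

```nginx
application live {
    live on;

    # Recording to the flaky array is what was tripping up the stream;
    # with it off, the relays and HLS output never touch that disk.
    record off;
    #record all;
    #record_path /mnt/array/recordings;   # the aging 5x1.5TB array
}
```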
I still need to run cables to the Chromecast, but a Chromecast Ultra seems at least mildly better behaved than the old one, in terms of wireless behavior.
So tired of dealing with this teetering tech stack… if it’s not one thing, it’s another.