Church Streaming: Notes and Discussion

For the technical discussion of streaming issues in the context of some hypothetical church that several of us posting might attend.

Historically, services were in person. We met at a local school, and they have a very nice internet connection. As we started messing about with streaming, it wasn’t a major problem, beyond some stuff being blocked. A bit of creative VPN work, some rolling of MAC addresses, and eventually asking politely for a couple of static IPs solved this entirely, and things worked.

Since we own a facility (that we didn’t meet at, long story…), we dealt with the early pandemic by recording stuff ahead of time and stitching together services, uploading them to the normal streaming services (YT/FB), and streaming them at the desired time.

Unfortunately, “2020 gonna 2020.” We are no longer meeting at the school, and are trying to have some small, distanced gatherings in the church building. As part of this, we’re trying to live stream stuff, and because there are more people around, we’ve got a few complexities to work through.

Our primary stream output is a BlackMagic ATEM Mini Pro ISO. This is a 4-input video switcher that, usefully, can emit an RTMP stream to a single target IP. We have been running this to castr.io, which reflects the stream to YouTube/Facebook. Unfortunately, as we’re trying to have a few more people around, the overflow area downstairs was dealing with a 5-8 second lag from YouTube. Bleh.

My attempt to fix this lag involves a vintage server that was lying around and will soon be getting better guts. I set up nginx to do RTMP stuff, so that we could point a local client at the server and get a stream for the overflow area, and also fire a stream out to castr.io. This worked, but still had a few seconds of lag, and we had some general connectivity issues and drops to the internet. It seems a 9Mbit stream doesn’t fit in a 10-12Mbit upload. sigh
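For reference, the guts of that nginx setup are pretty small: the rtmp module, one application that local players can pull from, and a push upstream. This is a from-memory sketch, with the castr.io ingest URL and stream key left as placeholders:

	# Requires the libnginx-mod-rtmp module; sits alongside the normal http block.
	rtmp {
		server {
			listen 1935;

			application live {
				live on;

				# Local clients can pull rtmp://<server-ip>/live/<stream-name>
				# directly for the overflow area.

				# Relay the same stream upstream; ingest host and key are placeholders.
				push rtmp://<castr-ingest-host>/live/<stream-key>;
			}
		}
	}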

Anyway, I’m attempting to make things better, and the plan right now, with new server guts, includes using a Chromecast to run the overflow room (hopefully with minimal buffering - we’ll see, that’s a wireless device and wireless sucks), and adding some packet shaping and queue magic to our router to shove RTMP stuff out first, period. Whoever’s phone is connected can wait in line.
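If the router ends up being Linux-based, something in the spirit of a prio qdisc would handle the “RTMP goes first” part. A rough sketch, with the WAN interface name being an assumption on my part:

	# Assumes tc is available and the WAN port is eth0 (placeholder).
	# Three-band priority qdisc; band 0 drains before anything else gets to send.
	tc qdisc add dev eth0 root handle 1: prio

	# Classify outbound RTMP (TCP port 1935) into the highest-priority band.
	tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
		match ip dport 1935 0xffff flowid 1:1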

Long term, though, I’d like to figure out how to host our own infrastructure, end to end, and I’m fine with cloud hardware. But I do not want to rely on FB/YT for streaming church services for the long term.

Any good advice in this realm? I’ve dealt with plenty of this sort of thing over the years, just… not recently.

Just to clarify, you’re pre-recording things with a method you like, and are looking for software options for hosting and [not live] streaming?

If you are looking for new ways to record and edit the broadcasts (live or not), ‘Open Broadcaster Software’ has been gaining a rapid following: https://obsproject.com

I’ve played with Icecast a little bit in the past for some audio streaming. It claims to do video too, but I’ve never tried that part. What I did try ran fine though, might be worth a look: https://www.icecast.org/

Beyond that, here’s a couple bits I found browsing around some self-hosting lists. No personal experience with them:
https://wiki.gnome.org/action/show/Projects/Rygel

Seems like there’s a bunch of options for audio streaming, but good video hosting/streaming projects seem harder to find…

EDIT: ohh, this one looks promising: Restreamer

It’s a combination. Three weeks are pre-done, but the fourth week is live, and that’s what we’re having trouble with.

Alright, church server upgraded.

Old guts: AMD Athlon X2, 2GB RAM.
New guts: Intel i7-7700K, 32GB RAM, and a boot SSD mirror instead of a single spinning 500GB drive.

Hopefully the random shutdowns will be over; it seemed to like shutting down at odd intervals. On further analysis, the main case fan wasn’t exactly hooked up to anything, which… was weird. I’m fairly certain I would have hooked that up. Such things are halfway important. But it’s not like there was any shortage of reasons for scrap-pile, 10-year-old hardware to be flaky.

At some point in the next week or two, I’ll go onsite and work with latency on the livestream. We’ve discovered another option on the ATEM - the USB output has a video stream (“webcam output”) that we might be able to reflect, but we’re not sure what the latency in the whole system is. If the webcam stream is lower latency, well, we can reflect that downstairs. If they’re the same, then we work on the receiver latency.

And I’ll add some stream capture to the process, so we have a local copy of the stream instead of having to download it from YouTube.

Woah, that was working as well as it was with an Athlon X2, 2GB RAM, and a 500GB spinning disk? That’s about the specs of my home server; maybe it’s more capable than I thought. I wouldn’t have figured it would handle streaming much of anything, really.

Maybe I should try turning that other one into an NVR after all…

All I was doing was bouncing the stream off it on the way out to castr.io - it wasn’t loaded much at all.

The main reason I upgraded it was for reliability issues - the old one was shutting down regularly - and so I have the CPU power to do some transcoding/shuffling if I want to stream to a Chromecast. Unfortunately, I don’t think I can really do much about the buffering on it. We’d like sub-100ms latency to the TV downstairs, and I’m not sure we can actually accomplish this.

Well, here’s another one I know nothing about but just came across. Looks like it’s trying to be a self-hosted Twitch. Built-in chat too, which might be useful.

Excellent. Latency no longer matters, and I can optimize for other things!

Upon some rethinking of how the church building is being used, we’ve moved the overflow area far enough away from the main area that it shouldn’t be within audible range (vs the previous “right below” location). So now, all I have to worry about is schlepping various streams around in useful formats.

Our ATEM encoder gizmo can emit up to 70Mbit of h.264, which is… rather excessive at 1080p, certainly, but good to know. However, we will also soon have a satellite campus that will be making use of the morning livestream in the evening. So, new plan:

  • Run the encoder bandwidth up well beyond what our ISP can handle - 20-30Mbit, for basically “native quality.”
  • Save a copy of the stream off with the local server. This will be the source of the evening service - we can either copy it on physical media or schlep it across the internet, and if it takes a few hours, well, so be it, as long as it’s ready by evening.
  • Translate this high quality stream on the server for local playback by the Chromecast for overflow. This may involve dropping the bitrate some - not sure yet, we’ll have to see what the wifi can handle.
  • Translate the high quality stream into a lower bitrate (6-8Mbit) stream for sending out to the internet. Sadly, I can’t use HEVC right now, because Facebook doesn’t support it, but YouTube would, if I wanted. I don’t have the bandwidth to send two streams out right now, and I’m unclear as to whether castr.io (our current splitter) is able to transcode streams like that. Once I have our own bouncer up, I can just live transcode on that - send it 6Mbit HEVC, mirror that to YT, transcode for FB. As long as it keeps up, no problems, just higher quality.

Now I just need to get all this working, or enough of it, to work this coming Sunday!

If you have a webcam on Linux, and want to emit a rtmp stream out of it, you might find the following incantation useful (if awfully CPU-hungry… whoof).

Assuming your webcam will emit mjpeg (a modern one should…):

ffmpeg -f v4l2 -c:v mjpeg -video_size hd720 -i /dev/video0 -vcodec libx264 -maxrate 2M -bufsize 1M -f flv rtmp://192.168.122.4/live/foo

This will send a stream to the rtmp reflector at rtmp://192.168.122.4/live/foo - which you can then dork about with!
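To sanity check it from another machine, pulling the stream back with ffplay (or VLC pointed at the same URL) is the quick test:

	ffplay -fflags nobuffer rtmp://192.168.122.4/live/foo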

You can find stat.xsl in /usr/share/doc/libnginx-mod-rtmp/examples/stat.xsl.gz, and might want it in /usr/share/nginx/html/ or so.

Stats output from nginx rtmp - this goes in an http/server section.

	# rtmp stat
	location /stat {
		rtmp_stat all;
		rtmp_stat_stylesheet stat.xsl;
	}
		
	location /stat.xsl {
		root html;
	}

The nginx/rtmp exec_push command is a way to run ffmpeg to transcode and push to a remote point:

In your rtmp/application section, something like this:

exec ffmpeg -i rtmp://localhost/live/$name -c:v libx264 -c:a copy -s 640x480 -f flv rtmp://target_server/etc;

One might also record to a local file. In rtmp/server/application:

recorder all {
	record all;
	record_path /tmp/recordings;
	record_suffix _%d-%m-%Y_%H:%M:%s.flv;
	exec_record_done /bin/ffmpeg -i $path -c copy -f mp4 /tmp/recordings/$basename.mp4;
}

Or so. That will record everything with a timestamp, and convert it from flv to mp4 afterwards. One might remove the source file after being confident everything worked.

And I’m at the limit of what I can do dorking about in my office right now, because my VM isn’t on the actual LAN (just the NAT), so… I’m going to work up in the house later with a VM on the real homeserver and see if I can get a Chromecast working!

Also, should you want to hardware encode the mjpeg stream off a modern webcam using hardware h264 encoders, the following might do it:

ffmpeg -vaapi_device /dev/dri/renderD128 -f v4l2 -c:v mjpeg -video_size hd720 -i /dev/video0 -vf 'format=nv12,hwupload'  -c:v h264_vaapi -qp 20 -f flv rtmp://192.168.122.4/live/foo

The trick is the hwupload bit - this passes the software decoded mjpeg to the hardware for encoding.

Magical incantation for hardware transcoding with vaapi using Intel Quick Sync (after setting permissions properly - I still need to add users to the video group here at some point):

ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device /dev/dri/renderD128 -i Sunday_29-12-2020_13\:21\:1609273284.flv -c:v h264_vaapi -rc_mode CQP -qp 25 /tmp/output.mp4

Downsides: No constant bitrate output mode, only constant quality.

Upsides: 25% CPU use instead of 400%. I’ll mess with it.


Oh yeah. Streaming a livestream of my router to a Chromecast across my office. Latency sucks, 20-30 seconds, but that’s still fine for our needs.

Now to replicate this at church, and do some hardware transcoding to a lower bitrate. I’m not sure the Chromecast can handle a 20Mbit stream on wifi there…
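For the Chromecast side, the nginx rtmp module can spit out HLS and DASH renditions of whatever gets published, served over plain http. A rough sketch of that part of the config, with paths being illustrative rather than what’s actually deployed:

	# In the rtmp server block:
	application live {
		live on;

		# Generate HLS and DASH copies of anything published here,
		# so the Chromecast can pull them over plain HTTP.
		hls on;
		hls_path /var/www/hls;
		hls_fragment 2s;

		dash on;
		dash_path /var/www/dash;
	}

	# And in the http/server block, serve the fragments:
	location /hls {
		types {
			application/vnd.apple.mpegurl m3u8;
			video/mp2t ts;
		}
		root /var/www;
		add_header Cache-Control no-cache;
	}

The Chromecast then gets pointed at http://<server-lan-ip>/hls/<stream-name>.m3u8 (or the DASH manifest).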

Plus, even better, there’s a sketchy Node.js app (?) that allows me to cast a stream from the commandline - so I can probably fire off the casting automatically when the stream starts, which would simplify Sundays a bit.

… and now nothing works at all. :confused: Despite having worked fine during the week. Can’t get the Chromecast to play a DASH stream; HLS is… working, which it wasn’t before, though it stutters.

Ugh.

Well, hardware encoders are online, and the stream is going to YouTube (which is reflecting down to the overflow room), so… hopefully this holds up.

//EDIT: <Narrator> It didn’t. The CQP setting was wrong, and the stream immediately capped out the upload as soon as the pastor’s plaid shirt started having to be rendered. :frowning: I think there’s a way to set bandwidth limits on the hardware encoders, though. Now that I have a “real service” to work with as a source file (straight dump off the switching encoder), I can do more realistic testing and work out the details a bit more “real world.” Though I still can’t understand why the Chromecasts refused to listen to a stream format that worked in my shed.

Alright, so, postmortem from the first live-fire exercise.

We ran a service, streamed it through my setup, saved off a high res copy for later use, and had the overflow room working. However. Many things didn’t work as intended.

The first issue was that the ATEM switcher was in a weird state, and eventually locked up. I spent the better part of an hour trying to figure out why the stream kept pausing before catching up again, checking network connections, etc. After they rebooted the device (it apparently hung more or less completely in terms of streaming status at one point), everything magically worked again. So, new checklist item, “Power cycle the switcher before using it.” Easy enough.

Despite having the Chromecast working just fine in my office, it steadfastly refused to accept a stream this morning - even with what I’m pretty sure are identical configurations. Oddly, in my office, I could send a DASH stream but not HLS - and, this morning, it would attempt to play an HLS stream (but stutter), and wouldn’t play the DASH stream at all. So, not sure what’s going on, but I’ll mess with it. However, poking around after the fact, it occurs to me that we’re streaming 1080p/60fps, and it seems that the Chromecast generations I had to play with don’t actually support 1080p/60fps… and I’m pretty darn sure I wasn’t pushing 60fps around my office. So, I’ll drop the framerate for the local stream and see what happens there.

The more serious issue was that we capped out our upload bandwidth, again. I’d gone through and given the streaming server “you get priority for whatever you need outbound,” which certainly improved things from the previous attempt, but I still saw plenty of time pegged against our limiter of 15.5Mbit or so. There are things you don’t mind seeing in a bandwidth chart; a hard line at your cap while dropping frames is not one of them. Also, seriously, download bandwidth during services is nuts. Another issue to deal with.


My initial plan had been to use the software transcoders for this service until I got some good data (an actual service file) to play with, but while troubleshooting the “stream glitching” issues that turned out to be ATEM-related, I switched over to the hardware transcoders to see if the software path was the problem. Yes, my webserver has permissions to /dev nodes. :confused: Playing around previously, I’d only been able to get CQP transcoding working, not the constant/variable bitrate options. This worked fine for the first half of the service (about 8Mbit), but once the pastor got up there in a plaid shirt, things went downhill (uphill?) quickly and capped out the stream, with the associated packet loss and video glitches.

Datasheets indicate that the hardware can support fixed bitrate encoding, so I was a bit confused.

Then I discovered that there are two different driver packages for the Intel QSV stuff.

intel-media-va-driver

vs

intel-media-va-driver-non-free

One supports more than the other, and you can guess which one that is.

With the non-free version installed, options that previously whined pretty hard about not being supported worked just fine, and I can do bandwidth limited hardware translation.
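To be concrete, after swapping the driver, a bitrate-capped VAAPI encode along these lines works - the rates and filenames here are illustrative, not the exact settings in use:

	sudo apt install intel-media-va-driver-non-free

	# Decode on the GPU, keep frames on the GPU, encode with a hard bitrate ceiling.
	ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi \
		-hwaccel_device /dev/dri/renderD128 \
		-i service.flv \
		-c:v h264_vaapi -b:v 8M -maxrate 8M -bufsize 16M \
		-c:a copy /tmp/output.mp4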

Running realtime in software took 400-600% CPU. Running 3x realtime in hardware takes 11% CPU. Slight difference…

And, importantly, with the hardware engine, I can do a few other streams. Say, 8Mbit/1080p/60fps for pushing to YouTube, 8-10Mbit/1080p/30fps for the overflow room. Plus saving off the 20Mbit H.264 stream from the hardware encoder. Unless, of course, I can get an HEVC stream flowing up to castr. I’m not sure if it will split it out properly to non-HEVC endpoints (cough Facebook cough), and while I don’t care, other people do. On the other hand, it would improve quality a lot on the same upload bandwidth - 8Mbit of HEVC is more than enough to cover what our cameras can emit. So, things to mess with.

Other random bits and pieces of note:

  • File naming with colons means rsync will utterly refuse to push a file around. Also, exFAT doesn’t like them. So don’t name your files with colons in a timestamp (a colon-free record_suffix is sketched just after this list).
  • We need some method to make the service files available after the service. They’re being reused in the evening at a satellite campus. If I’ve got the hardware transcoder capability, I may look at going to HEVC - it would be an awful lot smaller. The 1-hour service was 10GB of 20Mbit/h.264.
  • With 45 devices on the church wifi during the service, I really need to get around to splitting out the network so that we don’t have guest devices doing whatever-they’re-doing with all our bandwidth.
  • More bandwidth would be nice, but I don’t think it’s required. Static public IPs, though, would be awfully useful for some hosting.
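
For the colon problem specifically, a dash-only record_suffix in the earlier recorder block sidesteps both rsync and exFAT - just a suggested variant of that snippet:

	# Same timestamp information, dashes instead of colons.
	record_suffix _%d-%m-%Y_%H-%M-%S.flv;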

So, lessons learned!


Experimental results from the test lab, using the full res rip of the service:

  • Trying to feed a 60fps DASH stream into the Chromecast fails exactly as it did Sunday.
  • Transcoding down to 30fps works fine. I’m unclear as to how much bandwidth will be available to the Chromecasts on a typical Sunday, as I’ve not reconfigured the network to kick everyone off the main network.
  • At least one person is fine with streaming 30p to YouTube, vs our former 60p. I’d rather use the bandwidth for quality than framerate.

So, plan for this Sunday: Either cut it to 30p at the encoder, or transcode down to 30p on the server. Push 8Mbit out to YouTube, and either reflect that to the Chromecast, or transcode down to about 4Mbit for the Chromecast.

//EDIT: I’ll try this or so.

exec_push /usr/bin/ffmpeg -hwaccel vaapi \
-hwaccel_output_format vaapi \
-hwaccel_device /dev/dri/renderD128 \
-i rtmp://127.0.0.1/$app/$name \
-c:v h264_vaapi -r 30 -b:v 2M \
-maxrate 4M -bufsize 4M \
-c:a aac -ac 1 -strict -2 -b:a 192k \
-f flv rtmp://127.0.0.1/stream_lo/$name \
-c:v h264_vaapi -r 30 -b:v 6M \
-maxrate 10M -bufsize 12M \
-c:a aac -ac 1 -strict -2 -b:a 192k \
-f flv rtmp://127.0.0.1/stream_yt/$name 2>>/tmp/ffmpeg-log.log;

I spent a bit of time dorking about with Restreamer, and this looks like it should do what we need - with the exception of not being able to split a stream to more than one remote endpoint. Seems like it will happily convert the incoming rtmp into hls and serve it out to as many clients as the bandwidth supports, which is perfect for what we need.

However, not being able to go to multiple remote endpoints really isn’t a big deal, as I can just toss nginx on there to do the rtmp reflecting out to YT/FB, and copy a stream into Restreamer. As long as we don’t bandwidth-cap the hosting service, there are no issues, though if demand on that is high I might split them into separate instances. Since I’m just copying a stream, server CPU load is quite low, and a small instance will handle it just fine.

As much as I’d like to work with HEVC, I just don’t think it’s worth the complexity right now. Not much supports it, and cloud transcoding just doesn’t seem to be the right option. h.264 → hevc → h264 seems… a bit silly. We can push out 8-10Mbit of h.264 which is fine for 1080p/30.

It’s also possible to have nginx call one of the command line Chromecast utilities, such that it starts auto-playing when a stream comes in. Useful!
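In nginx-rtmp terms that’s a pair of hooks on publish start/stop. The casting command below is a stand-in (the actual CLI tool isn’t named here), and the HLS URL is illustrative:

	# In the rtmp application block: start the caster when a stream begins
	# publishing, and kill it when publishing stops.
	# /usr/local/bin/cast-stream is a placeholder for the real casting tool.
	exec_publish /usr/local/bin/cast-stream http://<server-lan-ip>/hls/$name.m3u8;
	exec_publish_done /usr/bin/pkill -f cast-stream;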

Today went far, far better. Limiting outbound bandwidth to 6Mbit (bursting to 10) and the Chromecast to 4Mbit went fine, though there are some video artifacts at 4Mbit I don’t like. I’ll see what the wireless connection can take - we’re on 5GHz internally and I found a Chromecast 2, which supports that, so… I might be able to push a bit more.

Auto-starting the Chromecast on video stream initiation worked properly - which is a nice bit of automation for the process.

I’ll try 8Mbit out next week and see how that works - and I may try simply reflecting 8Mbit to the overflow room for the early service to see how it runs.

Notes from this week:

Pushing 8Mbit out seems to work fine, though I need to cap download as well. For reasons I don’t fully understand, and which seem to be upstream of my router, our upload starts choking out when download bandwidth is very high. I assume this is an issue in the modem shifting timeslices, but if I just cap download, it should help.
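One blunt way to do the download cap, assuming the router is Linux and tc is available (interface name and rate are placeholders, not our actual numbers):

	# Police inbound traffic to a bit under the modem's nominal download rate.
	tc qdisc add dev eth0 handle ffff: ingress
	tc filter add dev eth0 parent ffff: protocol ip prio 1 u32 \
		match u32 0 0 police rate 80mbit burst 256k drop flowid :1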


Second issue: the Chromecast in the overflow room is glitching a bit. The TV is in a different spot, so the Chromecast has to go wireless from behind the TV to an AP on the floor below, with more wireless devices around. We’re going to order an ethernet adapter and see if we can resolve it that way.

Capping download helps a lot. Capping “other devices” uploading during the service helps a lot, just… hard to test, since the problems only manifest during the actual livestream service, not the “We’re pushing data out but nobody is listening” early service.

Further investigation revealed that the Chromecast in the overflow room was still connecting on 2.4GHz. I’ve still got some drops on it. I’ve kicked it over to 5GHz and will test next week, but I still think wiring it is the right answer. Then I can just reflect the full raw input to it and be fine.