Tech Blog: Hosting the Charity Stream
Recently we had the unique opportunity to field tech on a charity livestream with LANfest. It was an absolute blast, and I wanted to take a moment to share how the tech team approached and attacked the challenge, both at the technical and organizational levels.
We were given some criteria by the organizers (and brought a few of our own) to ensure the success of the event:
- Provide a server in the New York region
- Ensure the stability and proper configuration of the server before and during the event
- Cooperate with and ensure the success of LANfest's production team
- Cooperate with and ensure technical stability for all competitors in the event
- (Later on) Provide fun incentives to encourage viewers to donate
Pursuant to this, we came up with our own goals to ensure that the event would be smooth sailing for us:
- Automate as much of the tournament handling as possible
- Build the server environment with docker to prevent mishaps and enable frequent, confident testing of the server.
- Use git to track changes and collaborate between multiple tech team members
- End Goal: Tournament organizers should not have to worry about server configuration and stability. It should "just work".
With this in mind, let's get into how we pulled the darn thing off.
Warning: Be advised that this article documents a process that is completely overcomplicated for no real reason.
Normally, we'd just run something on one of the dedicated boxes we rent for hosting our public servers. However, these are all in Chicago, and the organizers wanted something a little more central so that European competitors could have a fighting chance.
I decided to push for UpCloud, yet another DigitalOcean-esque provider. This was our first time using them, and we weren’t disappointed.
Before doing any serious work on the server, we tested the performance of several test servers with bots.
UpCloud performed around the same as Vultr. We decided to go with it because it was new, had some features we might use in the far future, and it wasn't Vultr.
In the tradition of making things as hard as possible for myself and others, I decided to try and experiment with using one of UpCloud’s networking features for our SourceTV setup: a private backbone. This would enable us to connect all our relays to the master server without having to expose the server publicly.
Now, most cloud servers support this functionality (software-defined networking/virtual private cloud), but UpCloud is one of the few that makes it easy to do across datacenters, which is pretty neat. Would have made for a truly awesome SourceTV viewing experience for our, uh, 0 spectators. Hm.
In the end, things went pretty smoothly. The server performed well, and from this experience I believe it is a very strong competitor to Vultr.
(We even got a compliment from one of the European players that their ping was lower than expected!)
As many people who have run CS:GO servers will tell you, Source-engine servers are evil. One incorrect convar, one missed flag, and you will be left screaming to yourself for 20 minutes while the casters desperately try to keep the audience entertained.
For this reason, I took no chances and immediately picked good ol’ get5. If you’re unfamiliar, it’s a plugin that manages tournaments and matches, very similar to FACEIT. One of the benefits of get5 is that it’s completely automated: everything from knife rounds to who’s on what team is handled for us.
The organizers of the tournament wanted to have the audience vote on each map, so we scrapped get5’s built-in best-of-3 system and instead created each match manually from the server console after changing to the next map. We also used the console to swap in backup players who weren’t defined in the hardcoded player list.
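In practice, each hand-rolled match looked something like the console sequence below. These commands are a sketch from memory, not a transcript: the match file path and SteamID are placeholders, and get5’s exact command names vary by version.

```
// After the audience voted, change to the winning map...
changelevel de_mirage

// ...then load a match config written for just that one map
get5_loadmatch addons/sourcemod/configs/get5/match_map2.json

// Substitutes can be added to a team's auth list at runtime
get5_addplayer STEAM_1:0:123456 team1
```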
Get5 worked out great, with only one hitch: everyone got kicked from the server after the first bo3 map ended. Luckily, the casters and production played it off like pros, and the issue was fixed for the next two rounds. (The fix was disabling get5_check_auth, loading the next map and its config, then re-enabling get5_check_auth.)
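The between-maps workaround, roughly, as run from the server console (convar and file names are per our setup; yours may differ):

```
// Stop get5 from kicking players while everyone reconnects
get5_check_auth 0
changelevel de_overpass
get5_loadmatch addons/sourcemod/configs/get5/match_map3.json
// Once the match config is loaded, turn enforcement back on
get5_check_auth 1
```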
One idea we brought to the table very early on was the concept of a “crazy” match: a match where viewers can donate to sabotage or benefit their favorite team.
The original plan was to only do a crazy match at the beginning of the stream, and then pivot into the best-of-3 to pick the charity. However, when the teams went 1-1, several brilliant minds came together and made the tiebreaker a cs_assault crazy round, casting aside the will of the people (who probably would have picked something boring like inferno) for the greater good.
For the crazy map, we created a DonorDrive integration in Sourcemod that continuously pulled the donator list, checked for incentives, and if there were any new incentives bought, applied them. DonorDrive was LANfest's donation system, and it had pretty good native support for incentives which helped a ton. (If you're interested, we've published the plugin we created on GitHub.)
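The core loop is simple enough to sketch. Below is an illustrative version in Python rather than SourcePawn (the plugin’s actual language); the `donationID` and `incentiveID` field names are assumptions about the donation payload, not DonorDrive’s real schema.

```python
# Illustrative sketch of the incentive-polling logic, not the actual
# SourceMod plugin. Field names in the donation dicts are assumptions.

def new_incentives(donations, seen_ids):
    """Return donations with incentives we haven't processed yet.

    `donations` is the full donor list as pulled from the API each poll;
    `seen_ids` is mutated so repeat polls skip already-applied purchases.
    """
    fresh = []
    for d in donations:
        if d.get("incentiveID") and d["donationID"] not in seen_ids:
            seen_ids.add(d["donationID"])
            fresh.append(d)
    return fresh

# Each poll: pull the list, diff against what we've seen, apply the rest.
seen = set()
batch = [
    {"donationID": "a1", "incentiveID": "juggernaut"},
    {"donationID": "a2", "incentiveID": None},  # plain donation, no perk
]
for purchase in new_incentives(batch, seen):
    print("apply", purchase["incentiveID"])
```

The bug we hit lives one step further down: applying the effect could fail when no player was alive to receive it, and nothing re-queued the purchase. A retry list, or a manual re-apply command, is the missing piece.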
Unfortunately, this system turned out to be not as resilient as we would have hoped. During the 2-5 seconds when everyone on a team was dead, an incentive could still be applied, fail to find an available player, and then cease to exist. Wtfmoses (one of the shoutcasters) even noticed this degenerate edge case while caving in and buying a whole team juggernauts.
If we had the hindsight to add commands to manually apply incentives, we could have avoided this, but we didn’t. Lesson learned: “fully automated” isn’t always a good thing.
We used docker-compose (get it? “Composing it”? I’m a bloody genius, ain’t I) to wrap everything into a nice package that could be deployed anywhere. Initially, we tested the server and plugin on one of my VPS-es, but closer to the tournament we migrated the whole setup to the production tournament server without a hitch. We highly recommend docker-compose to anyone wanting to run a tournament.
Our dockerfile was very simple: first, we pulled CSGO from this lovely docker package. Many CSGO server images, including ours, have a run.sh inside the image that downloads the server on first boot, but that takes up a lot of storage (30 gigs per server), and we didn’t want to deal with it while rapidly iterating on our dockerfile. Using this package instead meant docker could de-duplicate the game files and keep them from eating our entire disk during testing.
From there, the dockerfile installs sourcemod, builds our plugins & get5, and copies over our config files into the image. Pretty sweet stuff.
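The layering looked roughly like this. This is a sketch, not our exact Dockerfile: the base image name stands in for the package linked above, and all paths are illustrative.

```dockerfile
# Base image ships the CS:GO server files pre-downloaded, so every
# rebuild reuses the same ~30 GB layer instead of re-fetching it.
FROM csgo-base:latest

# SourceMod, get5, and our own plugins + configs layered on top
COPY addons/ /home/csgo/server/csgo/addons/
COPY cfg/    /home/csgo/server/csgo/cfg/
```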
We also used docker-compose to automate some very important bind mounts we didn’t want to forget: the ones to the get5 demo and backup paths. Because these were bound to the host filesystem, we could take them elsewhere even if the container slipped in a puddle of poo and demolished itself.
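The relevant part of the compose file looked roughly like this (the service name and host paths are assumptions, not our actual config):

```yaml
services:
  csgo:
    build: .
    network_mode: host   # game traffic straight to the host's ports
    volumes:
      # get5 demos and round backups live on the host, so they survive
      # the container being destroyed and rebuilt
      - ./demos:/home/csgo/server/csgo/get5_demos
      - ./backups:/home/csgo/server/csgo/get5_backups
```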
This is also how we saved demos of the matches once the rounds were complete.
The core benefit of docker-compose was reproducibility—if I tested a specific scenario in a fresh container, I would know that the same scenario should play out exactly the same. We don’t have to worry about sneaky persistence or unintentional changes made to the environment.
All of this hard work preparing and practicing for the tournament led to a blissfully uneventful and boring shift. We are extremely proud of our work, and even more proud of the $8,165 the stream raised for the Yale Cancer Center and the Brother Wolf animal shelter. To all who donated or tuned in—we hope you had as much of a blast watching as we had setting the thing up.
Wrapping things up
I hope you’ve enjoyed this in-depth look into how we operated the charity stream. If this kind of work is interesting to you, please consider applying to join our tech team. We're always looking to build a diverse network of interested problem solvers, tinkerers, and curious learners.
Who knows, you might even get to work on a tourney yourself!
The plugin we created for the DonorDrive integration is publicly available at https://github.com/edgegamers/DonorDrive.
We thank @Aaron for involving us in this project, and the amazing people at LANfest for their masterful production and brilliant livestream management.
A huge shout out to @jii for their massive help with getting this all set up and @GardenGroveVW for donating the servers.
Source Engine Tech Lead @ EdgeGamers
Writing: @Mooshua | Art: @heidi