Read the article on DEV.to
This is the story of how I decided to automate the download process of Unity Cloud Build artifacts.
A rather big set of binaries
Let me set the scene for you.
I have a game published on Steam - Twin Horizons. It’s a music game, and it’s made with Unity.
The repository where I keep the code follows the git-flow model, and whenever there is a commit on the
base branch, it kicks off builds on Unity Cloud Build (thank goodness they finally supported IL2CPP for Windows).
There are four targets, or types of the game executable, to be built. They are:
- macOS version, Production Build
- macOS version, Developer Build w/ development tools
- Windows version, Production Build
- Windows version, Developer Build w/ development tools
After they are built, I have to:
- download them
- place them in a specific location on my filesystem, and
- add an Addressable bundle to all of them.
This is what I need to do so that the tools provided by Steam can see the new build.
Several months ago, I received a message from one of my testers saying his installation broke on the latest update. Ah s#!+, here we go again.
I gathered the logs. I tested in Unity. I even tested the built app. No luck. Heck, the bug was not even supposed to occur with the build he received.
…or so I thought.
Turns out, when I placed the executables, I made a mistake and placed the production version in the development version directory.
(Photo: Alex E. Proimos, CC BY 2.0, via Wikimedia Commons)
Thank goodness it was not the other way around.
…Come to think of it, I had similar mess-ups in the past. I avoided catastrophe back then only because I at least had a temporary branch to test the game I had uploaded.
THAT’S IT! NO MORE HANDICRAFT!
So I launched an investigation. I knew Unity Cloud Build supports webhooks. Somehow, I had to receive them and automatically download the builds.
Where do I host the receiver, though?
I have a rented server, but it is for serving webpages. If I ran this kind of workload on it, they would eventually shut my account down for using too much of their processing capacity. Remember, I have to run the uploader provided by the publisher on the machine that has the game. A VPS is another option, but those tend to be on the expensive side. I'm a broke-ass university student; unless I want to set up a VPN or something to get a static IP, I want to avoid that.
At this point I thought, “it’s kind of risky, but maybe I can try hosting it myself from my old PC and somehow expose it to the Internet.”
That’s when I found Loophole, a free tunneling service. If I can host an HTTP server that can receive the webhook from UCB, I can kick off automated binary downloads.
Now that the hosting method was decided, I had to learn how to listen for HTTP requests and run scripts in the background simultaneously.
How do I make the server?
I wanted to make this as lightweight as possible, so using apache2 sounded like overkill; that is a full-featured web server. Jenkins wasn't an option for a similar reason: if I were building the game on my own machine, it would've made sense to use it, but the only goal here is to download a game and place it in the correct spot. I just wanted something lightweight whose only job is to listen on a port, written in a language I'm comfortable with.
Since I wanted to make something in Python besides my research projects, I decided to code it in Python.
A few web searches later, I found CherryPy, a lightweight web server framework for Python. It turns out it can safely spawn background tasks while it serves the webhook. Perfect.
What to download?
This is an example of the “Build Successful” payload - most of the information is redacted, but you get the idea:
The `artifacts` array has several other elements for you to download, including debug symbols. I omitted all but the one whose `primary` key is `true`. That is our game.
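Picking that one artifact out of the payload can be sketched like this. Treat the payload below as illustrative only: apart from `artifacts` and `primary`, which the post describes, the surrounding field names (`links`, `files`, `href`) and the URLs are assumptions I made up to show the shape of the filtering.

```python
# Illustrative payload -- everything except "artifacts" and "primary"
# is an assumption, not the real (redacted) UCB payload.
payload = {
    "buildTargetName": "Windows Production",
    "links": {
        "artifacts": [
            {"name": "debug symbols", "primary": False,
             "files": [{"href": "https://example.com/symbols.zip"}]},
            {"name": ".ZIP file", "primary": True,
             "files": [{"href": "https://example.com/game.zip"}]},
        ]
    },
}


def primary_artifact_url(payload):
    """Return the download URL of the artifact marked primary (the game)."""
    for artifact in payload["links"]["artifacts"]:
        if artifact.get("primary"):
            return artifact["files"][0]["href"]
    return None  # no primary artifact in this payload


print(primary_artifact_url(payload))  # https://example.com/game.zip
```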
You can also generate a secret key to verify the payload - see the
Several filesystem f*ck-ups later, here’s my invention:
This project receives webhooks from UCB, downloads the artifacts they point to, and deploys them to the output folder while adding static accompaniments, should there be any.
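The download-and-deploy step could look roughly like the sketch below. This is not the project's actual code: the `deploy` function, its parameters, and the idea of a single ZIP artifact are my own assumptions for illustration. It fetches the archive, unpacks it into the output folder, and copies any static extras (like that Addressable bundle) on top.

```python
import shutil
import tempfile
import urllib.request
import zipfile
from pathlib import Path


def deploy(artifact_url, output_dir, accompaniments_dir=None):
    """Download a build artifact, unpack it, and copy static extras on top."""
    output_dir = Path(output_dir)
    output_dir.mkdir(parents=True, exist_ok=True)
    with tempfile.TemporaryDirectory() as tmp:
        archive = Path(tmp) / "build.zip"
        # Fetch the artifact to a temporary location first, so a failed
        # download never leaves a half-written build in the output folder.
        urllib.request.urlretrieve(artifact_url, archive)
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(output_dir)
    if accompaniments_dir:
        # Overlay the static accompaniments (e.g. an Addressable bundle).
        shutil.copytree(accompaniments_dir, output_dir, dirs_exist_ok=True)
```

Keeping the destination directory as an explicit parameter is what prevents the production-binary-in-the-development-folder mistake: each build target gets its own fixed output path.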
Now I can be confident I will always deploy the correct binary! It was also a good exercise in learning how to work with the filesystem from Python.
Side note: this project can be deployed on Docker! If you cannot risk your main filesystem (and you shouldn't), I reckon this is the safest way to run this project.
Side side note: this code is not tested outside of my environment, so it may crash on you - if it does, issue reports are always welcome!