I’m not gonna lie, my sambo (Swedish for partner, which sounds way cooler than it should) and I got way too into Blue Prince. It’s this puzzle game with a mansion that keeps rearranging itself, like it’s got a mind of its own. We’d sit at our PC, one of us steering the character while the other pointed out clues, completely lost in its eerie vibe. It was our thing—until our bodies started complaining. Hours at a desk left us stiff and exhausted, and all we wanted was to play from the comfort of our bed, maybe with a cup of decaf coffee for extra coziness. Sounds simple, right? Spoiler: it wasn’t.
We had an iPad, so we figured we could stream Blue Prince to it and control the game with touch inputs. But Blue Prince isn’t built for tablets, and streaming a PC game with low lag while mimicking keyboard and mouse controls? That’s a whole mess. Still, I’m a stubborn tech nerd, so I dove into building a custom setup with WebRTC, JavaScript, Go, and a streaming server called MediaMTX. Oh, and I got to deal with the Windows raw input API because the mouse was not our friend. This is the story of how I made it work. Oh, and I had a day to do this, because it was Labor Day.
What I Was Trying to Do
Here’s what I set out to do:
- Stream the game smoothly: Use MediaMTX, an open-source streaming tool, to send the game’s video to the iPad via WebRTC for minimal lag.
- Create touch controls: Build a web interface with virtual joysticks for movement and camera control, plus buttons for actions like clicking or opening the map.
- Control the PC remotely: Write a server to translate iPad inputs into keyboard and mouse commands the game would understand.
It sounds simple enough. I started with the streaming part first, since that was the piece I was least comfortable with.
Streaming the Video
Even though I’m pretty technical, I felt pretty dumb here. It took way longer than it should have to set up OBS with the right tweaks for lower latency. First off, I started with RTMP as the streaming protocol, thinking it would be good enough. I should have read more, because I quickly found out it runs over TCP. Okay, back up a bit: why is that a bad thing?
Well, when you want to reduce latency as much as possible, things like network buffers, ordering, and delivery guarantees are hindrances you would much rather do away with. Toss safety aside for speed, that’s what I always say. Anyway, when I was testing with the HTML page I quickly drafted, the video was always so far behind... like seconds behind, even on the same machine. So I looked into it and realized, yeah, duh, I needed something much quicker and more toward the UDP side of network processing.
That led me to WebRTC. I loved it because it supports more UDP-style traffic, where you can say screw it to delivery guarantees and message acknowledgement. Still, I’m not a frontender, so I had no idea how to negotiate the stream from the client. Luckily we live in an amazing time, so some LLMs helped me out. Kinda.
Okay, they didn’t help at all. But it turned out the MediaMTX docs have an example reader.js file to use as a reference, using something called WHEP. I’m gonna be honest: I copy-pasted from here forward until the video worked. BUT HEY! This setup was actually pretty good! Below is what I ended up doing in OBS to pipe the video into the MediaMTX server, which the client read from.

This basically set things up to be much more ready for low-latency reading. I also tweaked the scale I streamed at, just to minimize the data I sent, but anyway, it worked a dream!
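For what it’s worth, the WHEP part that scared me turned out to be pretty small once I saw it working. Stripped down to the core idea, the client creates a receive-only peer connection, POSTs its SDP offer to MediaMTX, and applies the SDP answer that comes back. This is just a sketch of that idea, not the actual reader.js; the stream path, hostname, and missing error handling are all mine:

// Bare-bones WHEP read: receive-only peer connection, POST the offer, apply the answer.
// "game" is a placeholder for whatever path you publish to in MediaMTX.
async function startWhep(videoEl, url) {
  const pc = new RTCPeerConnection();
  pc.addTransceiver("video", { direction: "recvonly" });
  pc.addTransceiver("audio", { direction: "recvonly" });
  pc.ontrack = (evt) => { videoEl.srcObject = evt.streams[0]; };
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/sdp" },
    body: offer.sdp,
  });
  await pc.setRemoteDescription({ type: "answer", sdp: await res.text() });
}
startWhep(document.getElementById("video"), "http://gaming-pc:8889/game/whep");

The real reader.js does a lot more bookkeeping (ICE handling, reconnects), but this is the handshake in a nutshell.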

The iPad Side: Crafting a Touch Interface
I started with the iPad, building a web page to show the game stream and handle controls. The layout was straightforward: a video feed taking up 70% of the screen at the top, and a canvas below for two virtual joysticks and a few buttons. One joystick handled WASD movement, the other mimicked mouse look, and buttons covered actions like “Show Map” (Tab), “Action” (left-click), “Back” (Escape), and “Text” (for text inputs which I never ended up implementing). Here’s a peek at the original draft HTML:
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width">
    <style>
      body { margin: 0; height: 100%; font-family: Arial; }
      #video { width: 100%; height: 70%; background: rgb(30, 30, 30); }
      #joystickCanvas { width: 100%; height: 30%; background: rgb(50, 50, 50); }
    </style>
    <script defer src="./reader.js"></script>
  </head>
  <body>
    <video id="video"></video>
    <div id="message"></div>
    <canvas id="joystickCanvas"></canvas>
  </body>
</html>
I wrote the JavaScript to do the heavy lifting. It drew the joysticks on the canvas, tracked touch events to move them within a fixed radius, and sent input data (joystick positions, button states) to a WebSocket server every 16ms. I still needed something to listen on the other end, since I hadn’t actually made the WebSocket server yet, sooooo… anyway, here is how things looked on the frontend:

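The sending itself is nothing fancy: every 16ms (roughly 60Hz) the page serializes the current joystick and button state and pushes it over the WebSocket. A sketch of the idea, where the field names match the Go struct further down but the socket URL and the dx/dy fields on the joystick objects are placeholders, not the repo’s actual names:

// Push the current joystick/button state to the server every 16ms.
const socket = new WebSocket("ws://gaming-pc:9000/input");
setInterval(() => {
  if (socket.readyState !== WebSocket.OPEN) return;
  socket.send(JSON.stringify({
    moveX: joysticks[0].dx,            // left stick: WASD movement
    moveY: joysticks[0].dy,
    mouseX: joysticks[1].dx,           // right stick: camera look
    mouseY: joysticks[1].dy,
    showMapPressed: buttons[0].isPressed,
    actionPressed: buttons[1].isPressed,
    escPressed: buttons[2].isPressed,
  }));
}, 16);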
One more thing before moving on from the frontend part. iPads have a concept of multitouch! I had to learn this the hard way, because in practice you obviously want to be able to move the left and right joysticks independently and simultaneously, right? Well, I’m dumb as hell when it comes to frontend stuff, so I had no freaking clue how to handle this. Again I had to look it up and figure out how to handle it properly. Minor setback ofc, and I figured it out without too much hassle. Apparently the key is tracking touch identifiers… so maps… maps everywhere.
// Track touches to joysticks
const touchMap = new Map(); // Maps touch.identifier to joystick index
const touchButtonMap = new Map(); // Maps touch.identifier to button index
const handleTouchStart = (e) => {
  e.preventDefault();
  const rect = canvas.getBoundingClientRect();
  for (const touch of e.changedTouches) {
    const x = touch.clientX - rect.left;
    const y = touch.clientY - rect.top;
    const joystick = getJoystickAtPoint(x, y);
    if (joystick) {
      const joystickIndex = joysticks.indexOf(joystick);
      touchMap.set(touch.identifier, joystickIndex);
      joystick.touchIds.push(touch.identifier);
      updateJoystickPosition(joystick, x, y);
    } else {
      for (let button of buttons) {
        if (isPointInButton(button, x, y)) {
          touchButtonMap.set(touch.identifier, buttons.indexOf(button));
          button.isPressed = true;
        }
      }
    }
  }
  drawJoysticks();
};
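The maps pay off in touchmove and touchend, where each finger gets routed back to whichever joystick or button claimed it. Roughly like this, where resetJoystick is a stand-in for however you snap a stick back to center (not a function from the snippet above):

const handleTouchMove = (e) => {
  e.preventDefault();
  const rect = canvas.getBoundingClientRect();
  for (const touch of e.changedTouches) {
    const joystickIndex = touchMap.get(touch.identifier);
    if (joystickIndex === undefined) continue; // this finger isn't on a joystick
    const x = touch.clientX - rect.left;
    const y = touch.clientY - rect.top;
    updateJoystickPosition(joysticks[joystickIndex], x, y);
  }
  drawJoysticks();
};
const handleTouchEnd = (e) => {
  for (const touch of e.changedTouches) {
    const joystickIndex = touchMap.get(touch.identifier);
    if (joystickIndex !== undefined) {
      resetJoystick(joysticks[joystickIndex]); // recenter the stick
      touchMap.delete(touch.identifier);
    }
    const buttonIndex = touchButtonMap.get(touch.identifier);
    if (buttonIndex !== undefined) {
      buttons[buttonIndex].isPressed = false;
      touchButtonMap.delete(touch.identifier);
    }
  }
  drawJoysticks();
};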
The Server Side: Go, Robotgo, and Input Hell
For the backend, I wrote a Go server to catch the WebSocket messages and turn them into game inputs. As an aside: people either hate or love Go. I find it weird because I am in neither camp. Anyway, it was a simple WebSocket server that parsed the incoming JSON into a struct representing the actions I cared about:
type InputData struct {
    ShowMapPressed bool    `json:"showMapPressed"`
    ActionPressed  bool    `json:"actionPressed"`
    EscPressed     bool    `json:"escPressed"`
    MoveX          float64 `json:"moveX"`
    MoveY          float64 `json:"moveY"`
    MouseX         float64 `json:"mouseX"`
    MouseY         float64 `json:"mouseY"`
}
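The receive loop around that struct is about as plain as it gets. Here’s a sketch of the shape of it using gorilla/websocket; the route, the upgrader settings, and the applyInput helper are stand-ins for illustration, not necessarily what’s in the repo:

// Upgrade the HTTP request, then read JSON messages into InputData until the client leaves.
var upgrader = websocket.Upgrader{
    // This only ever runs on my LAN, so allow any origin.
    CheckOrigin: func(r *http.Request) bool { return true },
}

func handleInput(w http.ResponseWriter, r *http.Request) {
    conn, err := upgrader.Upgrade(w, r, nil)
    if err != nil {
        log.Println(err)
        return
    }
    defer conn.Close()
    for {
        var input InputData
        if err := conn.ReadJSON(&input); err != nil {
            return // client disconnected
        }
        applyInput(input) // turn it into robotgo key presses and raw mouse moves
    }
}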
I used the robotgo library to simulate key presses (like “W” for forward) and mouse clicks. It worked fine for keyboard inputs, but the mouse was a nightmare. Blue Prince captures the cursor, which broke robotgo’s relative mouse movements: once captured, the cursor no longer responds to logical inputs, and the game needs a lower-level trigger for relative movement, i.e. raw mouse input. I will admit I don’t really like looking at Windows API documentation, but I have to hand it to the folks there: the API was fairly straightforward for mouse movement.
func moveMouse(x float64, y float64) {
    user32 := windows.NewLazySystemDLL("user32.dll")
    mouseEvent := user32.NewProc("mouse_event")
    // Call mouse_event with MOUSEEVENTF_MOVE, which injects a relative move
    // that the captured cursor actually responds to
    ret, _, err := mouseEvent.Call(
        uintptr(0x0001), // dwFlags: MOUSEEVENTF_MOVE (relative movement)
        uintptr(int(x)), // dx: horizontal delta
        uintptr(int(y)), // dy: vertical delta
        uintptr(0),      // dwData: unused for plain moves
        uintptr(0),      // dwExtraInfo
    )
    if int(ret) != 1 {
        log.Println(err)
    }
}
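For completeness, the keyboard side really is as boring as I made it sound: map the left stick onto WASD with robotgo. A sketch, where the deadzone value, the helper names, and the assumption that up on the stick is negative Y are all mine; in practice you’d also only toggle a key when its state actually changes:

// Toggle WASD based on the left joystick, with a small deadzone so the
// character doesn't creep around from tiny touch jitters.
const deadzone = 0.2

func applyMovement(moveX, moveY float64) {
    setKey("w", moveY < -deadzone) // canvas-style coords: up is negative Y
    setKey("s", moveY > deadzone)
    setKey("a", moveX < -deadzone)
    setKey("d", moveX > deadzone)
}

func setKey(key string, down bool) {
    if down {
        robotgo.KeyToggle(key, "down")
    } else {
        robotgo.KeyToggle(key, "up")
    }
}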
Handling Sprinting
Small moment of cleverness: I could use the magnitude of the movement vector, and if it was above a certain threshold, toggle the SHIFT key. That’s how you run in the game, so it gave a seamless range of movement speeds even though the underlying inputs are just key taps. Anyway, I thought this was worth a mention because it was so simple and elegant I loved coding it up.
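In code it’s only a handful of lines. A sketch of the idea; the 0.8 threshold is a placeholder, not the exact value I used:

// Hold SHIFT while the stick is pushed far enough, release it when it isn't.
var sprinting bool

func updateSprint(moveX, moveY float64) {
    pushed := math.Hypot(moveX, moveY) > 0.8
    if pushed && !sprinting {
        robotgo.KeyToggle("shift", "down")
    } else if !pushed && sprinting {
        robotgo.KeyToggle("shift", "up")
    }
    sprinting = pushed
}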
The Result
Well, of course I wouldn’t write this up if it wasn’t successful, and again, I am proud to have been able to do it in a day, without full focus even! But more importantly, I got to learn some neat things, and I have SO much more respect for folks who do video processing for their software job. This stuff is annoying, and the clients you need to support all seem to have their own very interesting ways to handshake. Summarized in PowerPoint-style bullets:
- Streaming is a beast: MediaMTX and WebRTC made low-latency streaming possible, but it took a lot of tweaking to get it right.
- Controls need finesse: Touch joysticks are only good if they work independently and simultaneously (duh). Small changes, like transitioning automatically to sprinting, made a huge difference.
- Mouse inputs are sneaky: Games that capture the cursor can derail everything. The raw input API is needed at times, which unfortunately means you may need to read Windows API docs.
- Playing Blue Prince is a drug in more ways than one.

If You Want to Try It
If you’re curious about streaming a game to your tablet, my setup’s a solid starting point. You can find the full codebase here: https://github.com/birdhalfbaked/blue-prince-remote. Start with MediaMTX for streaming, build a touch interface with JavaScript, and set up a server to handle inputs. Be ready for surprises, like cranky mouse controls, and test everything early.