
WARPing around RealDebrid blocks with VPNs

About 24h ago (just as we ran a big Riven rollout), RealDebrid finally closed the IPv6 loophole which had been allowing our Zurg pods to reach RealDebrid from our datacenter's IPv6 ranges.

We added a new product / process to allow us to connect your pods (Zurg, in this case) to your own existing VPN. On the plus side, it's extremely flexible, since gluetun supports a huge range of VPN providers. On the downside, it requires some geeky user interaction, which I explain in this video:

The solution works great, and several of our geekier elves validated it with Mullvad, Nord, ProtonVPN, and PIA. That was only a small subset of the affected users, though, and the process of VPN-ifying 100+ Zurg pods was looming unpleasantly...

And then our newest Elf-Venger, @Mxrcy, pointed me to a container image for CloudFlare WARP, which can be used as a SOCKS5 proxy in Zurg 0.10, with zero manual config! After a bit more refactoring, we were able to reunite all users with their beloved RD libraries via CloudFlare WARP! 💑
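
For the curious, the shape of the fix is just a sidecar container in the same pod: the WARP container exposes a SOCKS5 proxy on localhost, and Zurg 0.10 sends its RealDebrid traffic through it. The sketch below is illustrative only, assuming a community WARP image that serves SOCKS5 on port 1080; the image names and wiring are placeholders, not our actual manifests.

```yaml
# Illustrative sketch only: image names, tags, and the SOCKS5 port are
# assumptions, not the actual ElfHosted manifests.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zurg
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zurg
  template:
    metadata:
      labels:
        app: zurg
    spec:
      containers:
        - name: zurg
          image: example/zurg:0.10        # placeholder image
          # Zurg's config then points its RealDebrid traffic at
          # socks5://127.0.0.1:1080 (the sidecar below).
        - name: warp
          image: example/cloudflare-warp  # placeholder; any maintained WARP image
          ports:
            - containerPort: 1080         # SOCKS5 proxy on the pod's localhost
```

Because containers in a pod share a network namespace, Zurg reaches the proxy at 127.0.0.1:1080 without any extra Service or networking.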

However...

Many users have opted to remain on the gluetun solution, preferring the peace of mind it provides over the unknown factors in the WARP design (speed limits, potential congestion, etc.).

In the end, there are 2 paths forward:

  1. Do nothing, and your Zurg will connect to RealDebrid via CloudFlare WARP (YMMV)
  2. Add your own VPN (I like PIA) using gluetun, getting your hands a little dirty by creating a Kubernetes ConfigMap (sketched below), and resting easy in the knowledge that your Zurg is routed through a platform over which you have some control!
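
If you take path 2, the "getting your hands dirty" part boils down to handing gluetun your VPN provider settings and running it as a sidecar next to Zurg. Here's a rough sketch under assumptions: the ConfigMap name and keys, the placeholder images, and the PIA region are all illustrative, and ElfHosted's own chart may wire things up differently (real credentials arguably belong in a Secret, but the ConfigMap mirrors the process described above).

```yaml
# Rough sketch only: names, keys, and images are illustrative assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: zurg-vpn                                   # hypothetical name
data:
  VPN_SERVICE_PROVIDER: "private internet access"  # gluetun's PIA provider
  OPENVPN_USER: "p1234567"                         # your PIA username
  OPENVPN_PASSWORD: "changeme"                     # your PIA password
  SERVER_REGIONS: "Netherlands"                    # pick any supported region
---
apiVersion: v1
kind: Pod
metadata:
  name: zurg-with-gluetun                          # illustrative; normally a Deployment
spec:
  containers:
    - name: zurg
      image: example/zurg:0.10                     # placeholder image
    - name: gluetun
      image: qmcgaw/gluetun
      envFrom:
        - configMapRef:
            name: zurg-vpn                         # feeds the settings above to gluetun
      securityContext:
        capabilities:
          add: ["NET_ADMIN"]                       # gluetun needs this for the tunnel
```

Because the two containers share the pod's network namespace, gluetun's tunnel and firewall rules apply to Zurg's egress traffic as well.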

Riven has a UI (sort of)

We've all been enjoying geeking out on the Riven backend logs, but now we also have a bare-bones frontend, which makes the process of managing your Riven config soooo much easier! It even uses Plex OAuth to retrieve your token, so that you don't have to go hunting in XML files anymore!

Today's scoreboard

| Metric | Numberz | Delta |
|--------|---------|-------|
| 🧝 Total subscribers | 3541 | +28 |
| 👾 Zurg mounts | 141 | - |
| ⛰ Riven pods | 24 | - |
| 💾 ElfStorage in TBs | 110 | +4 |
| 🐬 Tenant pods ¹ | 3224 | -468 |
| 🦸 Elf-vengers | 8 | - |
| 🧑‍🎓 Trainees | 23 | +1 |
| 🐛 Bugz squished | - | - |
| 🕹️ New toyz | 1 | - |

Summary

Thanks for geeking out with us, and please share these posts with related geeks!


  1. I worked out why the numbers were so weird. Under some circumstances, when a subscription expired, the provisioning script wasn't properly removing deployments, and we ended up with "orphaned" pods still running in the cluster. I've applied a temporary, manual workaround, but will perma-fix this soon.