Further to the recent blog post about how we now need a VPN to use Zurg with RD, the necessary design changes are now in place to support this scenario:
If you don't have gluetun, your Zurg pod will use WARP
If you add gluetun, your Zurg pod will use gluetun and not WARP
If you remove gluetun again, your Zurg pod will revert to WARP
To make this happen without impacting users' bundles, we attach both WARP and gluetun to your containers, but only one of them is active at a time. To confirm your connection, use the pod logs as illustrated in the gluetun video, or the troubleshooting video below.
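If you'd like a scripted sanity-check to go along with the pod logs, here's a minimal Python sketch that compares your direct egress IP with the IP seen through a SOCKS5 proxy. The proxy address (127.0.0.1:1080) and the IP-echo URL are illustrative assumptions rather than our actual settings, and if the VPN captures all pod traffic, even the "direct" request will show the VPN's IP.

```python
# Minimal sketch: compare the egress IP seen directly vs. through the SOCKS5
# proxy exposed by the WARP / gluetun sidecar. Requires `pip install requests[socks]`.
import requests

ECHO_URL = "https://ifconfig.me/ip"   # any plain-text IP-echo service works
PROXY = "socks5h://127.0.0.1:1080"    # assumed proxy address; check your pod config

direct_ip = requests.get(ECHO_URL, timeout=10).text.strip()
proxied_ip = requests.get(
    ECHO_URL, proxies={"http": PROXY, "https": PROXY}, timeout=10
).text.strip()

print(f"direct egress IP:  {direct_ip}")
print(f"proxied egress IP: {proxied_ip}")
print("proxy is changing your egress IP" if direct_ip != proxied_ip
      else "proxy is NOT changing your egress IP")
```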
About 24h ago (just as we ran a big Riven rollout), RealDebrid finally closed the IPv6 loophole which was allowing us to use Zurg with RealDebrid from a datacenter IPv6 range.
We added a new product / process to allow us to connect your pods (Zurg, in this case) to your own, existing VPN. On the plus side, it's extremely flexible, since gluetun supports a huge range of VPN providers. On the downside, it requires some geeky interaction from users, which I explain in this video:
The solution works great, and several of our geekier elves validated it with Mullvad, Nord, ProtonVPN, and PIA. Still, this was only a small subset of the affected users, and the process of VPN-ifying 100+ Zurg pods was looming unpleasantly...
And then our newest Elf-Venger, @Mxrcy, pointed me to a container image for CloudFlare WARP, which can be used as a SOCKS5 proxy in Zurg 0.10, with zero manual config! After a bit more refactoring, we were able to reunite all users with their beloved RD libraries via CloudFlare WARP!
While we wait for UI dev to complete, I've replaced the UI with a live, realtime view of your Riven logs, in your browser (launched from your dashboard). From this view, you can press CTRL-C to restart Riven (you're attached to the backend process itself, not just a log tail), and watch Riven do its magic in realtime.
Riven Revenue Sharing
Riven is the first ElfHosted product which falls under a new, experimental "revenue sharing" model. Because @spoked is working so closely with the ElfHosted team to make Riven work with our platform, 30% of your subscription will go to the Riven devs to support further development!
Remember, this is all very cutting-edge and geeky, so some assembly will be required. Jump into our dedicated #Riven forum topic with your bug reports, feature requests, and feedback!
It's been 4 days since Zurg 0.10.0rc2 was released (to sponsors), and we've had a few brave Elves testing it out. So far, there are no show-stoppers, so we've now rolled v0.10.0rc2 out to all users.
Significant improvements over v0.9.x are:
Repairing works again, without causing Plex to stall its scans
Zurg can now co-operate with Real-Debrid to extract RAR-compressed "downloaded" media!
The recent transcode path fixes to Jellyfin / Emby have brought a small bug out of the woodwork... in some cases, the streamers may insist on transcoding your media based on your perceived bandwidth limits, and then fail to transcode because (a) it's 4K content, or (b) the transcode path is not set to /transcode, and we've blocked the use of network storage for transcoding!
A simple workaround is to edit your Jellyfin / Emby users, and to remove their permissions to perform video transcodes, something like this:
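If you'd rather script that change than click through the admin UI, a rough sketch against Jellyfin's user-policy API could look like the following. The server URL, API token, and user ID are placeholders, the EnableVideoPlaybackTranscoding flag comes from Jellyfin's user-policy schema (check your version's API docs), and Emby's API differs; treat this as illustrative, not an official ElfHosted tool.

```python
# Illustrative sketch only: disable video transcoding for one Jellyfin user via
# the API. The URL, token, and user ID below are placeholders.
import requests

JELLYFIN_URL = "http://localhost:8096"   # placeholder
API_TOKEN = "your-admin-api-key"         # placeholder
USER_ID = "user-guid-goes-here"          # placeholder

headers = {"X-Emby-Token": API_TOKEN}

# Fetch the user's current policy so we only flip the transcoding flag.
user = requests.get(f"{JELLYFIN_URL}/Users/{USER_ID}", headers=headers, timeout=10).json()
policy = user["Policy"]
policy["EnableVideoPlaybackTranscoding"] = False   # the permission shown in the UI

# Jellyfin expects the full policy object back, not a partial patch.
requests.post(
    f"{JELLYFIN_URL}/Users/{USER_ID}/Policy", headers=headers, json=policy, timeout=10
).raise_for_status()
print(f"Video transcoding disabled for {user.get('Name', USER_ID)}")
```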
Jellyfin 10.9 was released last week, with new features including "trickplay" (live video scrubbing), admin UI revamping, and improved ffmpeg transcoding powerz.
Unfortunately, a significant bug in 10.9 caused random lockups, and we were unable to roll back because of database upgrades. While the devs worked on a fix, we rolled out some changes to our health checks: rather than a TCP connection to confirm Jellyfin is alive (but, sadly, possibly still locked up), we now make an HTTP request to the API's health endpoint, which fails when Jellyfin is locked up, so we can at least quickly kill and restart a stuck instance.
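For the curious, here's a minimal sketch of the difference (the host and port are placeholders, and this isn't our literal probe config): a locked-up Jellyfin will still pass a plain TCP connect, but it won't return a healthy response from its /health endpoint.

```python
# Minimal sketch: a TCP check only proves the port is open, while an HTTP check
# against Jellyfin's /health endpoint proves it can still answer requests.
import socket
import urllib.request

HOST, PORT = "localhost", 8096   # placeholders

def tcp_alive() -> bool:
    """Old-style check: succeeds even if Jellyfin is locked up."""
    try:
        with socket.create_connection((HOST, PORT), timeout=5):
            return True
    except OSError:
        return False

def http_healthy() -> bool:
    """New-style check: Jellyfin must actually serve the health endpoint."""
    try:
        with urllib.request.urlopen(f"http://{HOST}:{PORT}/health", timeout=5) as resp:
            return resp.status == 200
    except Exception:
        return False

print(f"TCP alive:    {tcp_alive()}")
print(f"HTTP healthy: {http_healthy()}")
```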
The bugfix has rolled out in 10.9.2, so hopefully this is now a non-issue!
It also turns out that Jellyfin was not consistent about where it stored its transcoding data: some instances defaulted to /config/transcodes (/config is backed by our expensive NVMe network storage, not where we want to be sending GBs of temporary transcoding data!), while others were set to /transcode (correct) or /config/cache/transcode (also incorrect).
Tonight's update symlinks all of these combinations to /transcode, the 50GB ephemeral NVMe-backed disk on the local node, avoiding stress on our network storage. In summary, you can ignore the transcode path in Jellyfin. We'll make it work in the backend :)
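Conceptually, the update does something like this simplified sketch (not the actual maintenance job; the paths are the ones mentioned above, and running it for real needs appropriate permissions inside the container):

```python
# Simplified sketch: redirect the transcode locations Jellyfin might use to the
# node-local /transcode disk, so temporary transcode data stays off network storage.
from pathlib import Path
import shutil

TARGET = Path("/transcode")  # 50GB ephemeral NVMe-backed disk on the local node
CANDIDATES = [Path("/config/transcodes"), Path("/config/cache/transcode")]

for path in CANDIDATES:
    if path.is_symlink():
        continue                      # already redirected
    if path.exists():
        shutil.rmtree(path)           # drop stray transcode data left on network storage
    path.parent.mkdir(parents=True, exist_ok=True)
    path.symlink_to(TARGET, target_is_directory=True)
    print(f"{path} -> {TARGET}")
```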
After some careful user testing (thanks @lath!), AirDC++ is generally available!
AirDC++ (Web Client) is a modern client for the "Advanced Direct Connect Protocol", a protocol with a 25-year backstory, which allows creating file sharing communities with thousands of users. Among other things, AirDC++ is popular with comic-book-sharing communities.
Since we published the ElfHosted Cluster Grafana Dashboard, it's become apparent that something is happening on a daily basis which generates significant traffic on the 10Gbps "giant" nodes, and fully saturates the network interfaces on several 1Gbps "elf" nodes.
It served us well from Feb 2024, and was my introduction to the wider StremioAddons community, but the rapid pace of the KnightCrawler devs outpaced our build, and so while fresh builds were prancing around with fancy parsers and speedy Redis caches, we ended up with a monster MongoDB instance (shared by the consumers, and public/private addon instances), which would occasionally eat more than its allocated 4 vCPUs and get into a bad state!
To migrate our hosted instance to the v2 code, we ran a parallel build, imported the 2M torrents we'd scraped/ingested, and re-ran these through KnightCrawler's v2 parsing/ingestion process. Look at how happily our v2 instance is purring along now!
We cut over to the v2 code a few days ago, and since then we've had some users of the Prowlarr indexer pointing out that the results coming back from the KnightCrawler indexer were...
In the past we've had issues with updates to RDTClient, since the version which we initially used (based on the laster13 fork of https://github.com/rogerfar/rdt-client) ran as root (later dropping privileges), and was hard to lock down.
Today an intrepid team of elves worked on refactoring and testing the latest official upstream (v2.0.73 currently, but it changes fast!), which I'm happy to report is working well, and is noticeably faster than the old version.
By the time you read this, you'll have been auto-upgraded to the latest version, and subsequent upstream updates will be applied automatically (no more testing required after each upstream release).
If you'd like to encourage RDTClient's developer, @rogerfar, to continue making bug fixes and feature improvements, weigh in on or just add some reactions to this issue!