I have a Docker container called qbittorrent running on my home server. It has been there for a long time. I am genuinely not sure when I set it up or why. It sits in my docker compose stack, it starts on boot, and it does nothing. The web UI is there, the ports are open, and for months it has just idled with zero torrents and zero peers and zero anything.
I also have 10 gigabit fiber to my house. I did not specifically need 10 gigabits. My ISP offered it at a price that was hard to turn down, and it was fun to set up and fun to speed-test. In practice anything past about a gigabit is overkill for what I do at home. But I have it, and on some level it feels like a waste to run at 0.0 B/s all day.
Hosted by Optimum Online (San Jose, CA) [39.73 km]: 4.786 ms
Download: 3043.67 Mbit/s
Upload: 3579.41 Mbit/s
These two facts eventually collided into an obvious idea: seed Linux ISOs.
Most of the time when someone uses BitTorrent these days it is for something that the rights holder would prefer they not have. I have done my share of that. But torrenting is just a protocol -- a very good one for distributing large files cheaply -- and the Linux world has used it for legitimate distribution since forever.
When someone downloads Fedora or Arch or Debian, they are probably using a torrent. The swarm for a new Ubuntu release on launch day is enormous. These swarms depend on people seeding, and seeders are just people with disk space and bandwidth who are willing to leave a client running. I have both. It seemed like a good use for the idle container and the idle gigabits.
qBittorrent has a built-in RSS auto-downloader. You give it a feed URL, you give it a filter rule, and when a matching torrent appears in the feed it adds it automatically. This is exactly what I wanted: subscribe to a feed of Linux ISO releases, define rules for the distros I care about, and let it run.
The feed I settled on is fosstorrents.com, which publishes an RSS feed at /feed/torrents.xml that is explicitly designed to be imported into qBittorrent. It has around 960 items covering most of the major distributions, and it updates when new releases come out.
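The titles in that feed are what the regex rules below are written against, so it is worth eyeballing them first. A quick stdlib-only check (the feed URL is the real one; the rest is throwaway inspection code, not part of the tool):

import urllib.request
import xml.etree.ElementTree as ET

# Fetch the feed and list the item titles that the auto-download
# rules will be matched against.
FEED = "https://fosstorrents.com/feed/torrents.xml"
with urllib.request.urlopen(FEED) as resp:
    tree = ET.parse(resp)

titles = [item.findtext("title") for item in tree.iter("item")]
print(f"{len(titles)} items in feed")
for title in titles[:5]:
    print(title)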
I wrote a small Python tool to configure qBittorrent via its API. The script reads a YAML config file and pushes the RSS feed and auto-download rules into qBittorrent. It is idempotent, so you can re-run it any time you update the config without breaking anything.
The project lives in ~/iso-seeder/ and runs inside Docker. There are three files that matter.
The config file, feeds.yml, lists the RSS feed and the per-distro rules. Each rule is a case-insensitive regex matched against the torrent title in the feed:
category: linux-isos
save_path: /downloads/linux-isos/

feeds:
  - name: FossTorrents
    url: https://fosstorrents.com/feed/torrents.xml

rules:
  - name: Ubuntu Desktop
    feed: FossTorrents
    pattern: "^Ubuntu \\d.*Desktop \\(amd64\\)"
  - name: Fedora Workstation
    feed: FossTorrents
    pattern: "^Fedora \\d+.*Workstation"
  - name: Arch Linux
    feed: FossTorrents
    pattern: "^ArchLinux \\d{4}\\.\\d{2}\\.\\d{2} - Arch Linux \\(x86_64\\)"
  - name: Debian DVD 1
    feed: FossTorrents
    pattern: "^Debian \\d.*DVD 1 \\(amd64\\)"
  - name: Linux Mint Cinnamon
    feed: FossTorrents
    pattern: "^Linux Mint \\d.*Cinnamon \\(amd64\\)"
  - name: Manjaro KDE Plasma
    feed: FossTorrents
    pattern: "^Manjaro \\d.*KDE Plasma \\(x86_64\\)$"
  - name: AlmaLinux x86_64
    feed: FossTorrents
    pattern: "^AlmaLinux \\d.*AlmaLinux \\(x86_64\\)"
  - name: Pop!_OS Generic
    feed: FossTorrents
    pattern: "^Pop!_OS.*Generic \\(amd64\\)"
  - name: Kali Linux Installer
    feed: FossTorrents
    pattern: "^Kali Linux \\d.*- Installer \\(amd64\\)"
  - name: EndeavourOS
    feed: FossTorrents
    pattern: "^EndeavourOS \\d.*\\(x86_64\\)"
  - name: Zorin OS Core
    feed: FossTorrents
    pattern: "^Zorin OS \\d.*Core"
  - name: openSUSE Leap
    feed: FossTorrents
    pattern: "^openSUSE.*Leap.*Offline.*x86_64"
The main script connects to the qBittorrent API, registers the feed if it is not already there, and writes all the rules. The interesting part is short:
for rule in config.get('rules', []):
    # Rules reference feeds by name; resolve to the URL qBittorrent knows.
    feed_name = rule.get('feed')
    affected = [feed_url[feed_name]] if feed_name in feed_url else []
    # Keys mirror the Web API's RSS setRule payload.
    rule_def = {
        'enabled': True,
        'useRegex': True,
        'mustContain': rule.get('pattern', ''),
        'affectedFeeds': affected,
        'assignedCategory': category,
        'savePath': save_path,
        'addPaused': False,
        'ignoreDays': rule.get('ignore_days', 0),
    }
    client.rss_set_rule(rule_name=rule['name'], rule_def=rule_def)
    log.info("Configured rule: %s", rule['name'])
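It uses the qbittorrent-api Python library, which wraps the qBittorrent Web API cleanly. The part not shown above is the setup before the loop: logging in and registering the feed. A minimal sketch of how that goes, assuming credentials come from the environment (the variable names are mine, not the script's):

import os
import qbittorrentapi

client = qbittorrentapi.Client(
    host=os.environ.get("QBT_HOST", "localhost"),
    port=int(os.environ.get("QBT_PORT", "8080")),
    username=os.environ["QBT_USER"],
    password=os.environ["QBT_PASS"],
)
client.auth_log_in()  # raises LoginFailed on bad credentials

# Register each feed once and build the name -> URL map the rules
# loop relies on. Assumes a flat feed layout (no folders) in
# qBittorrent's RSS tree.
existing = {f.get("url") for f in client.rss_items().values()}
feed_url = {}
for feed in config.get("feeds", []):
    if feed["url"] not in existing:
        client.rss_add_feed(url=feed["url"], item_path=feed["name"])
    feed_url[feed["name"]] = feed["url"]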
The Dockerfile is the standard slim Python 3.13 pattern:
FROM python:3.13-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "main.py"]
I also wrote a couple of small companion tools that live alongside it: qbt-stats, which shows transfer rates and the torrent list, and qbt-graph, which draws an ASCII bar chart of upload rate per torrent. Each is its own directory with its own Dockerfile. Shell wrappers in ~/bin/ build the image on first call and pass arguments through:
#!/usr/bin/env bash
DIR=/home/pmb/iso-seeder
if ! docker image inspect iso-seeder &>/dev/null; then
  docker build -t iso-seeder "$DIR" >&2
fi
exec docker run --rm --env-file "$DIR/.env" --network host iso-seeder "$@"
So from anywhere I can run iso-seeder, qbt-stats, or qbt-graph and get something useful.
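The heart of qbt-graph is small enough to show in spirit. Roughly this, reusing a client like the one in the sketch above (an approximation, not the tool's actual code):

# Per-torrent upload rate as an ASCII bar, scaled to the fastest
# torrent in the category.
torrents = client.torrents_info(category="linux-isos")
peak = max((t.upspeed for t in torrents), default=0) or 1
for t in sorted(torrents, key=lambda t: t.upspeed, reverse=True):
    bar = "#" * round(32 * t.upspeed / peak)
    print(f"{t.name[:38]:<38} [{bar:<32}] {t.upspeed / 1024:9.1f} KiB/s")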
When I went to test the API, qBittorrent was silently restart-looping: it would start, log "using config directory", and immediately exit with code 0. No error. No crash dump. Just gone.
The cause was a stale lockfile in the config directory. qBittorrent creates a lockfile when it starts to prevent multiple instances, and if it exits uncleanly the lockfile is left behind. The next time it tries to start, it sees the lockfile, assumes another instance is running, and exits quietly. The lockfile in this case was four days old and 0 bytes. Deleting it fixed it immediately.
This is the kind of thing that is obvious in retrospect and completely invisible while you are staring at logs that say nothing.
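For next time, a loose check that would have found it immediately (the config path is wherever your compose file mounts qBittorrent's config; the lockfile's exact name varies by version and packaging, so match broadly):

# Path is illustrative; point it at your qBittorrent config mount.
find ~/docker/qbittorrent/config -iname '*lock*' -size 0 -print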
All 14 ISOs finished downloading within a couple of hours. As of publishing, the upload swarms have not picked up yet -- the trackers need time to announce us as a seeder and for peers to find us. The session totals below are bytes that went out to co-downloaders while we were still fetching the files ourselves, not real seeding throughput.
total upload 0.0 B/s session total 279.0 MB
ubuntu-24.04.4-desktop-amd64.iso [--------------------------------] 0.0 B/s 237.5 MB sent
archlinux-2026.05.01-x86_64.iso [--------------------------------] 0.0 B/s 25.0 MB sent
Zorin-OS-18.1-Core-64-bit.iso [--------------------------------] 0.0 B/s 9.5 MB sent
kali-linux-2026.1-installer-amd64.iso [--------------------------------] 0.0 B/s 2.6 MB sent
EndeavourOS_Titan-Neo-2026.04.27.iso [--------------------------------] 0.0 B/s 1.6 MB sent
linuxmint-22.3-cinnamon-64bit.iso [--------------------------------] 0.0 B/s 1.1 MB sent
manjaro-kde-26.0.4-260327-linux618.iso [--------------------------------] 0.0 B/s 996.5 KB sent
ubuntu-26.04-desktop-amd64.iso [--------------------------------] 0.0 B/s 384.0 KB sent
debian-13.4.0-amd64-DVD-1.iso [--------------------------------] 0.0 B/s 352.0 KB sent
pop-os_24.04_amd64_generic_24.iso [--------------------------------] 0.0 B/s 43.4 KB sent
AlmaLinux-8.10-x86_64 [--------------------------------] 0.0 B/s 0.0 B sent
Leap-16.1-offline-installer-x86_64.install [--------------------------------] 0.0 B/s 0.0 B sent
AlmaLinux-9.7-x86_64 [--------------------------------] 0.0 B/s 0.0 B sent
Fedora-Workstation-Live-x86_64-43 [--------------------------------] 0.0 B/s 0.0 B sent
I will update this post once the swarms pick up and there is something worth showing.
The bandwidth cost to me is zero: the connection is flat-rate and otherwise idle. The disk cost is maybe 60-70 GB depending on which ISOs are in the queue, which is nothing on a modern NAS. The benefit to whoever is on the other end of the swarm is a faster download of software they need.
There is something satisfying about having infrastructure that is actually doing something useful instead of sitting there consuming power and doing nothing. The torrent client was always running. The fiber connection was always there. Pointing them at something that helps people download Fedora is not a big technical achievement, but it feels better than the alternative.
For once I am on the legal side of a torrent swarm.