Rolling My Own Git Server

May 16, 2026

A running theme on this site lately has been detangling myself from corporate services. Mail off Gmail. Files off iCloud. Photos off whatever-it-is-this-week. The point is not to be a hermit -- I still use plenty of hosted services -- but to know that the things I depend on day to day live on hardware I own, in a rack I can see, behind a power switch I control.

For a long time the one big exception was source control. Every project I've ever worked on sits in a GitHub repo somewhere. Public stuff, private stuff, half-finished stuff, dotfiles, the lot. The trouble is that my GitHub account is a single account at a company that has already been bought once and could change its terms tomorrow. The data is mine; the platform is not.

Ben Brown wrote a nice short post on this a few weeks ago -- Super basic self hosted git repos -- where he points out that for personal projects you do not need the full GitHub experience. A bare repo on a Raspberry Pi reachable over Tailscale gives you branches, history, push, pull, and a backup target. That is the whole job. Reading it nudged me into finally doing something I had been putting off for years.

I went a little further than a bare repo on a Pi. Not much further -- I still wanted the "stripped down" feeling -- but I wanted three things that a plain SSH-accessible bare repo does not give you on its own:

  1. HTTPS access through the same traefik + cloudflared ingress as the rest of my services.
  2. A way to create a new repo with one command from my laptop, no ssh session required.
  3. CI jobs that run automatically on push.

This post is the writeup of what I ended up with. The whole thing is a single FastAPI app in front of git http-backend -- git's own CGI -- with a tiny asyncio queue for running CI jobs. It lives at git.$HOMELAB behind the same traefik + cloudflared setup the rest of my services use.

The Core Trick: git-http-backend

Git ships its own HTTP server. It is a CGI binary called git-http-backend, and it implements both sides of the smart HTTP protocol -- git-upload-pack for fetches and git-receive-pack for pushes. Apache and nginx examples for it have been in the git docs for over a decade. There is nothing exotic here.

If you put git-http-backend behind a webserver and point GIT_PROJECT_ROOT at a directory of bare repos, you have a working git server. That is it. Everything else -- the web UI, the issue tracker, the merge queue, the notifications -- is GitHub doing extra things on top. For my use case I do not need any of the extra things.

What I did want was a thin wrapper that handles auth and a little admin surface. FastAPI turned out to be a fine fit for that, because all it needs to do is:

  1. Check HTTP Basic Auth on every request.
  2. For git smart-HTTP routes, pipe the request body into git-http-backend as a subprocess and stream the response back.
  3. For admin routes (/admin/repos, /admin/jobs), do the obvious filesystem operations.
  4. Watch for successful pushes and queue a CI job for each new commit SHA.

The whole server is one Python file of maybe 250 lines. The Dockerfile is six lines on top of python:3.12-slim -- just apt-get install git and copy the app in. Bare repos live in a mounted volume at /repos. Job logs go to /jobs/<repo>.git/<sha>/. That is the entire architecture.
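The proxying part of that file can be sketched roughly like this -- not the exact code, but the shape of it, assuming repos mounted at /repos and the standard CGI variables git-http-backend reads (the helper names `cgi_env` and `run_backend` are mine, for illustration):

```python
import asyncio
import os

GIT_PROJECT_ROOT = "/repos"  # assumed mount point for the bare repos

def cgi_env(method: str, path_info: str, query: str, content_type: str) -> dict:
    """Build the CGI environment variables git-http-backend reads."""
    return {
        **os.environ,                    # keep PATH etc. for the subprocess
        "GIT_PROJECT_ROOT": GIT_PROJECT_ROOT,
        "GIT_HTTP_EXPORT_ALL": "1",      # serve every repo; auth happens upstream
        "REQUEST_METHOD": method,
        "PATH_INFO": path_info,          # e.g. /myproject.git/git-upload-pack
        "QUERY_STRING": query,
        "CONTENT_TYPE": content_type,
    }

async def run_backend(env: dict, body: bytes) -> tuple[int, dict, bytes]:
    """Pipe a request body into git http-backend and split its CGI response."""
    proc = await asyncio.create_subprocess_exec(
        "git", "http-backend",
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE,
        env=env,
    )
    out, _ = await proc.communicate(body)
    header_blob, _, payload = out.partition(b"\r\n\r\n")
    status, headers = 200, {}
    for line in header_blob.decode().splitlines():
        key, _, value = line.partition(": ")
        if key.lower() == "status":
            status = int(value.split()[0])  # CGI "Status: 404 Not Found"
        else:
            headers[key] = value
    return status, headers, payload
```

The FastAPI route itself is then just glue: read the request body, call something like `run_backend`, and return the status, headers, and payload as the response.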

Auth

A single HTTP Basic credential, stored in ~/docker/.env:

GIT_AUTH_USER=pete
GIT_AUTH_PASS=somethinglongandboring

Every route requires it -- admin and git alike. There is no anonymous access. There is also no per-repo or per-user permissioning, because I am the only user. If I ever need that I will add it; for now the simplest thing that works is a single account.

The Basic Auth header survives git clone https://pete:PASS@git.$HOMELAB/foo.git just fine, and if you have a git credential helper configured, it will remember the credentials after the first clone and stop prompting you.
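The check itself is a few lines of stdlib. A sketch of the idea (the function name is mine; `secrets.compare_digest` is the important part, since a plain `==` leaks match length through timing):

```python
import base64
import secrets

def check_basic_auth(header, user: str, password: str) -> bool:
    """Validate an HTTP Basic Authorization header against one credential."""
    if not header or not header.startswith("Basic "):
        return False
    try:
        decoded = base64.b64decode(header[6:]).decode()
        got_user, _, got_pass = decoded.partition(":")
    except Exception:
        return False
    # compare_digest runs in constant time relative to the expected value
    return (secrets.compare_digest(got_user, user)
            and secrets.compare_digest(got_pass, password))
```

In the app this sits in a FastAPI dependency that raises a 401 with a WWW-Authenticate: Basic header when the check fails.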

Creating a Repo

This was the one thing I wanted to be one command from my laptop, not an ssh session:

curl -u pete:PASS -X POST https://git.$HOMELAB/admin/repos \
  -H "Content-Type: application/json" \
  -d '{"name":"myproject"}'

On the server side that route runs git init --bare /repos/myproject.git and then sets http.receivepack true on the new repo so pushes will work over HTTP. That second step is the gotcha -- without it the server happily accepts clones and silently refuses pushes with a confusing error. One line of config in the init handler and that whole class of mistake goes away.
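As a sketch, the handler behind that route does something like the following -- `create_repo` and the name regex are illustrative, not the exact code, but the two git commands are the substance:

```python
import re
import subprocess
from pathlib import Path

REPO_ROOT = Path("/repos")  # assumed volume mount

def create_repo(name: str, root: Path = REPO_ROOT) -> Path:
    """Create a bare repo that accepts pushes over smart HTTP."""
    # Reject anything that could escape the repo root (slashes, leading dots).
    if not re.fullmatch(r"[A-Za-z0-9][A-Za-z0-9._-]*", name):
        raise ValueError(f"bad repo name: {name!r}")
    repo = root / f"{name}.git"
    subprocess.run(["git", "init", "--bare", str(repo)], check=True)
    # The gotcha: without this, clones work but HTTP pushes are refused.
    subprocess.run(
        ["git", "-C", str(repo), "config", "http.receivepack", "true"],
        check=True,
    )
    return repo
```

The name validation matters more than it looks: the name comes straight out of a JSON body, and without it a request could write outside /repos.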

One quirk worth noting: git init --bare still defaults to master as the initial branch in git 2.x unless you pass --initial-branch=main or set init.defaultBranch globally. I left it at master because I do not care, but it surprises people who have only ever used GitHub.

The CI Runner

This is the part I am most pleased with. It is also the part where I had to be careful.

When a client pushes over smart HTTP, the request body starts with a list of ref updates in pkt-line format, followed by the new pack data. The ref update lines look like this:

<old-sha> <new-sha> refs/heads/<branch>
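Pkt-line framing is simple: four hex digits giving the line length (including the four digits themselves), then the payload; "0000" is a flush packet that ends the command list, and capability advertisements ride after a NUL on the first line. A parsing sketch (the function name is mine):

```python
def parse_ref_updates(body: bytes) -> list:
    """Extract (old, new, ref) tuples from a receive-pack request body."""
    updates = []
    i = 0
    while i + 4 <= len(body):
        n = int(body[i:i + 4], 16)
        if n == 0:          # flush-pkt "0000": end of the command list
            break
        line = body[i + 4:i + n]
        i += n
        # The first command line carries capabilities after a NUL byte.
        line = line.split(b"\0", 1)[0].rstrip(b"\n")
        old, new, ref = line.decode().split(" ", 2)
        updates.append((old, new, ref))
    return updates
```

The new (non-zero) SHAs are then just the `new` values that are not forty zeros, which is what a ref deletion sends.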

I parse those lines out of the incoming request body before handing it to git-http-backend, collect the new (non-zero) SHAs, and after the backend returns a successful status, push each SHA onto an asyncio.Queue. A single consumer task pulls SHAs off the queue and runs them one at a time:

  1. Make a workspace directory at /workspace/<sha>/.
  2. git clone /repos/<repo>.git into it and check out the SHA.
  3. If .ci/run.sh exists, execute it with a 600 second timeout.
  4. Append stdout and stderr to /jobs/<repo>.git/<sha>/output.log.
  5. Write the exit code to a status file in the same directory.
  6. Delete the workspace.

The script gets two environment variables: GIT_REPO (e.g. myproject.git) and GIT_SHA. Anything else it needs it picks up from the checkout.
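The middle steps can be sketched as one function -- this is the shape of the thing, not the exact code; `run_ci_script` and its signature are mine:

```python
import os
import subprocess
from pathlib import Path

def run_ci_script(workdir: Path, logdir: Path, repo: str, sha: str,
                  timeout: int = 600) -> int:
    """Run .ci/run.sh from a checked-out workspace, capturing its output.

    Returns the exit code, 0 if there is no script, -1 on timeout."""
    script = workdir / ".ci" / "run.sh"
    if not script.exists():
        return 0
    logdir.mkdir(parents=True, exist_ok=True)
    env = dict(os.environ, GIT_REPO=repo, GIT_SHA=sha)
    with open(logdir / "output.log", "ab") as log:   # append, per-SHA log dir
        try:
            proc = subprocess.run(
                ["sh", str(script)], cwd=workdir, env=env,
                stdout=log, stderr=subprocess.STDOUT, timeout=timeout,
            )
            code = proc.returncode
        except subprocess.TimeoutExpired:
            code = -1
    (logdir / "status").write_text(str(code))
    return code
```

The asyncio consumer task just wraps this in a loop: pull a SHA off the queue, clone, call the runner, delete the workspace.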

A .ci/run.sh for one of my static sites looks like:

#!/bin/sh
set -e
rsync -a --delete ./ deploy@host:/srv/www/some-site/

That is the entire pipeline. No yaml. No matrix builds. No marketplace. The build runs as root inside the git-server container, which is fine for a homelab and not fine for anything else. If I ever want isolation I will give each job a fresh container, but I have not needed it.

The "Don't Build Inside the Build Container" Lesson

The one thing that bit me almost immediately: my git-server container is intentionally tiny. It has git, Python, and an ssh client -- no build toolchain. The first time I tried to run a real build in .ci/run.sh it failed in the most obvious way possible: cargo: command not found.

I briefly considered installing every toolchain I might ever want into the image. That way lies madness. The image would balloon, every language update would mean a rebuild, and I would still be missing something the day I picked up a new project.

The fix was to push the actual work back to the host. The git-server container has an ed25519 key bind-mounted into it; the matching public key sits in ~/.ssh/authorized_keys on the host, restricted by from= clauses to the Docker bridge network. The .ci/run.sh changes ownership of the workspace to my user and then ssh's to host.docker.internal to run the actual build against the cloned source. The container stays small; the build environment is whatever the host has installed.
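For reference, the restricted authorized_keys entry looks something like this -- the 172.17.0.0/16 subnet (Docker's default bridge), the extra no-* options, and the key comment are illustrative, not my exact line:

```text
from="172.17.0.0/16",no-agent-forwarding,no-X11-forwarding,no-port-forwarding ssh-ed25519 AAAAC3Nz... git-server-ci
```

The from= clause means the key is useless to anyone who is not already on the Docker bridge network, which narrows the blast radius of a leaked key considerably.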

This feels slightly wrong the first time you do it -- a container ssh'ing back to its host is the kind of thing that gets you yelled at in a production environment -- but for a single-user homelab git server it is a perfectly fine boundary. The container is just an entry point and a queue; the work happens where the work lives.

Networking

The git-server container exposes port 8000. On the host I publish it as 9001 (port 8000 is taken by my restic REST server). It joins the cloudflaretunnel Docker network, where traefik picks it up via Docker labels and routes git.$HOMELAB to it. The existing wildcard *.$HOMELAB cert handles TLS. Cloudflare tunnels the public name through cloudflared to traefik, so there is no inbound port open on my router.

That whole stack was already there for the rest of my services. Adding git was a label block on the compose service. There is real value in standardizing on one ingress pattern across the homelab -- every new thing I add ends up being half a page of compose and zero new infrastructure.
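That label block is roughly the following -- a sketch of traefik v2 Docker-provider labels; the router and service names are assumptions from my setup, and note that traefik targets the container port 8000, not the published 9001:

```yaml
services:
  git-server:
    networks: [cloudflaretunnel]
    labels:
      - traefik.enable=true
      - traefik.http.routers.git.rule=Host(`git.$HOMELAB`)
      - traefik.http.services.git.loadbalancer.server.port=8000
```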

What This Buys

I now have a place to put repos that is not GitHub and not on a hosted service. The data lives in ~/docker/git/repos/ on a host I back up nightly with restic. If the git-server itself catches fire I can stand up a new one in five minutes from the same image and point it at the same volume.

For the things I actually want to share publicly -- this site, a couple of small tools, anything I want issues and PRs on -- GitHub is still the right answer. That is where the audience is. But the private stuff, the half-finished experiments, the configs and notes and shell scripts that should never have been on a corporate platform in the first place, all of that has come home.

The build of the server itself, of course, is hosted on the server itself. The repo is at git.$HOMELAB/git-server.git; pushing to it kicks off a CI job that ssh's to the host and runs docker compose build git-server && docker compose up -d git-server. The first time that worked end-to-end -- push to master, watch the container rebuild itself, see the new behavior live -- felt like the kind of small magic that made me fall in love with this stuff in the first place.

What's Next

The obvious missing piece is a read-only web view. Right now if I want to look at a repo in the browser I can't -- there is no cgit, no stagit, no nothing. I will probably bolt stagit on at some point: it generates static HTML from a bare repo, which fits the rest of the aesthetic of this site perfectly.

The other thing I want is a simple webhook notifier so I can get a desktop notification when a CI job fails. Right now I find out by remembering to check /admin/jobs, which is to say I find out when something has been broken for a week.

Neither of those is urgent. The git server works, my repos live on hardware I own, and the last big tether to a corporate platform has been cut. That is enough for one weekend.

The task that got closed today:

x 2026-05-16 self-host git +homelab @home
