Some weeks ago, Docker Inc, the company behind Docker Hub, announced plans to remove the Docker Free Team plan. Teams could either move to a paid plan or apply for an open source sponsored plan. However, many smaller projects do not have the budget to switch to a paid plan, and the path to an open source sponsored plan depends on Docker Inc’s approval. This resulted in a lot of controversy and outcry in the community, and ultimately in Docker Inc reverting its plans.

Docker Inc likely didn’t expect such a community reaction, but the episode also leaves the community and all the open source contributors unclear about Docker Inc’s future plans to monetize the docker platform. Today, docker is used everywhere and Docker Hub is the default place for all images. But what if, in the future, Docker Hub gets monetized, forcing people either onto paid plans or to migrate all images to alternative (free tier) docker registries?

It turns out you can already prepare for the move today, thanks to how the docker registry works. There is no need to run and operate a complete registry on your own; you can put any existing registry service behind your own domain. If you are only interested in the setup, jump to the end of the article.

Docker Registry Architecture

When the docker client pulls an image, it makes three kinds of HTTP requests. First, the GET /v2/ endpoint is called to get general information such as the version and to check authentication.

Second, GET /v2/<name>/manifests/<reference> is called, where name is timonback/zephykus and reference is the image tag (latest). It returns a manifest JSON structure containing information about the image flavours (processor architectures for multi-arch images). Another call to this endpoint (with reference set to the sha256:... manifest for your computer’s architecture) returns the links to the individual image layers.

Third, each call to GET /v2/<name>/blobs/<digest> returns a single image layer (a “pull” to stay in docker terminology).

To summarize, the docker registry offers a simple HTTP API. A read-only implementation of a docker registry could be as simple as a static website or an S3 bucket serving static content.
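As a rough sketch, the three endpoints involved in a pull can be modeled as plain URL templates. The host and image name below are the article’s examples; the helper function itself is hypothetical, not part of any docker tooling:

```python
# Hypothetical helper: build the URLs a docker client requests, in order,
# when pulling an image from a registry.
def registry_urls(host: str, name: str, reference: str) -> dict:
    base = f"https://{host}/v2"
    return {
        # 1. general info / authentication check
        "check": f"{base}/",
        # 2. manifest for a tag (or for a sha256:... reference)
        "manifest": f"{base}/{name}/manifests/{reference}",
        # 3. one blob request per layer digest listed in the manifest
        "blob_template": f"{base}/{name}/blobs/{{digest}}",
    }

urls = registry_urls("registry-1.docker.io", "timonback/zephykus", "latest")
print(urls["manifest"])
# → https://registry-1.docker.io/v2/timonback/zephykus/manifests/latest
```

Because the URLs are this predictable, any HTTP server that answers these paths (even with redirects, as shown later) looks like a registry to the docker client.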

Want To Dive Deeper?

If you want to verify this yourself, you can start the docker daemon with an https proxy. mitmproxy is a nice tool for this.

  1. Set up mitmproxy
    1. Install mitmproxy.
    2. Set up the CA certificate as described in the mitmproxy documentation.
    3. Start mitmproxy, which exposes the proxy at localhost:8080 by default.
  2. Configure the docker daemon to use mitmproxy
    1. As I am using a system with systemctl, I create a new file at /etc/systemd/system/docker.service.d/http-proxy.conf with
    [Service]
    Environment="HTTPS_PROXY=https://localhost:8080"
    
    2. Reload the configuration via sudo systemctl daemon-reload
    3. Add the docker registry and/or your own domain as an insecure registry, as we are intercepting the https traffic. Revert this after your tests! Create/update the /etc/docker/daemon.json file to include
    {
      "insecure-registries" : [ "registry-1.docker.io", "docker.timonback.de" ]
    }
    
    4. (Re-)start the docker daemon via sudo systemctl restart docker
  3. Pull a docker image, e.g.
    • docker pull timonback/zephykus (registry-1.docker.io is prepended by convention if no registry is specified)
    • docker pull docker.timonback.de/timonback/zephykus

mitmproxy shows all requests made. Running docker pull docker.timonback.de/timonback/zephykus results in the following requests: (screenshot: mitmproxy-docker-zephykus)

Interestingly enough, registry-1.docker.io itself just redirects blob downloads to Cloudflare servers. As the response contains an x-amz-storage-class header, the data is likely hosted on AWS S3.


Also, the full specification for pulling images is documented at https://docs.docker.com/registry/spec/api/#pulling-an-image.
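To illustrate the manifest structure from that spec, here is a sketch that extracts the layer digests from a schema v2 manifest. The manifest below is a trimmed, made-up example in the documented shape (the digests are placeholders, not real hashes):

```python
# Trimmed, made-up manifest in the shape documented by the registry API spec.
manifest = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "layers": [
        {"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "size": 2811969, "digest": "sha256:aaaa"},
        {"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "size": 301, "digest": "sha256:bbbb"},
    ],
}

def layer_blob_paths(name: str, manifest: dict) -> list:
    """One GET /v2/<name>/blobs/<digest> request per layer in the manifest."""
    return [f"/v2/{name}/blobs/{layer['digest']}" for layer in manifest["layers"]]

print(layer_blob_paths("timonback/zephykus", manifest))
```

This mirrors what you see in mitmproxy: after the manifest request, the client issues one blob request per layer digest.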

Setup On Your Own Domain

This setup is based on the blog article by httptoolkit. You will need a (sub)domain and a project on Netlify. I did try to set it up with Cloudflare page rules, but couldn’t get the redirects to work.

Netlify Configuration File

First, you create a netlify.toml configuration file, which defines the path redirects to your docker registry (registry-1.docker.io in my case). Note that these are plain redirects; Netlify is not acting as a proxy server.

[[redirects]]
  from = "/v2/"
  to = "https://registry-1.docker.io/v2/"
  status = 302
  force = true # COMMENT: ensure that we always redirect
  headers = {X-From = "Netlify", X-Context = "Docker-v2"}

[[redirects]]
  from = "/v2/timonback/*"
  to = "https://registry-1.docker.io/v2/timonback/:splat"
  status = 302
  force = true # COMMENT: ensure that we always redirect
  headers = {X-From = "Netlify", X-Context = "Docker-v2-library"}

To ensure that only my docker images are served and that this setup is not abused by others, the redirect rules cover timonback images only.
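To make the behavior of the two rules concrete, here is a sketch of how incoming paths are mapped (this mimics Netlify’s matching for illustration only; it is not Netlify’s actual code). The :splat placeholder receives whatever the * wildcard matched:

```python
# Sketch of how the two redirect rules above map incoming request paths.
TARGET = "https://registry-1.docker.io"

def apply_redirect(path: str):
    # Rule 1: the version/auth check endpoint
    if path == "/v2/":
        return f"{TARGET}/v2/"
    # Rule 2: anything under my namespace; ":splat" is what "*" matched
    prefix = "/v2/timonback/"
    if path.startswith(prefix):
        splat = path[len(prefix):]
        return f"{TARGET}/v2/timonback/{splat}"
    # No rule matches: other namespaces are not redirected
    return None

print(apply_redirect("/v2/timonback/zephykus/manifests/latest"))
# → https://registry-1.docker.io/v2/timonback/zephykus/manifests/latest
```

A request for any other namespace, such as /v2/other/image/manifests/latest, falls through both rules and gets Netlify’s normal 404, which is exactly the abuse protection described above.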

I stored this in a GitHub repository and added an index.html as well.

Netlify Deployment

  1. As this is a GitHub repository, I add it as a new project to Netlify (my project name is timonback-docker). It deploys right away on the automatically generated Netlify domain.
  2. As part of the free tier, I add my (sub)domain via Domain Management.
  3. After that, I head over to my domain registrar and add a CNAME record pointing the domain at the project name (in my case, CNAME timonback-docker.netlify.app).
  4. Moreover, you can instruct Netlify to issue a free TLS certificate for your domain through the awesome Let’s Encrypt project.

Result

You can see the result at docker.timonback.de. When visiting the page with a web browser, you will see instructions on how to pull docker images. Right now, I am still hosting images on Docker Hub. But I am prepared to switch, and so are you, without consumers having to update their docker image pull domain.

Just start using docker pull docker.timonback.de/timonback/zephykus:latest today.