Creating a Docker registry hub
Posted on June 29th of this year at 6:17 A.M.
Hello everyone! So this article will be on how you can create your own Docker registry hub and push your own Docker images to it. So before we start, here are the prerequisites. They're obvious, but let's make sure.
- You will need to have Docker installed on both your local machine and the host machine you're attempting to do this on (or if you just want to do it locally for both, that's fine)
- You will need to have at least 10GB of free disk space on both of your devices before continuing
- You will need to have root access to edit your daemon.json file
So let's get started. We're actually going to do something a little outside the norm, since the idea here is to avoid impacting ports already in use on the host machine. There are a couple of things to consider: we're still going to be touching the host's ports, and that's fine; we're just going to target ports that shouldn't be in use by other services. You can avoid this entirely if you're running the very advanced setup of Docker with ipvlan L3, which allows you to create containers with their own separate ports. For instance, container1 will have its own port 80, container2 will have its own port 80, and even the host will have its own port 80, so you can go directly to a container's hostname without impacting the host's settings.
Let's begin by adding our Docker registry hub to the daemon.json file. If you're on Windows or macOS, this JSON object is located in Docker Desktop: cog icon at the top → Docker Engine. In there, you should see a JSON object. If you don't already have a section for insecure-registries, you'll want to add the below to the "root" of your JSON object.
"insecure-registries": [
"docker.local",
"docker.local:5000"
]
For Linux users, it will be located at /etc/docker/daemon.json.
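Before pointing Docker at an insecure registry, it can save a restart cycle to confirm the file is still valid JSON. Below is a minimal sketch that writes the example object to a throwaway path (/tmp/daemon.json, purely for illustration; the real Linux path is /etc/docker/daemon.json and requires root) and checks that it parses:

```shell
# Write the example insecure-registries object to a throwaway file
cfg=/tmp/daemon.json
cat > "$cfg" <<'EOF'
{
  "insecure-registries": [
    "docker.local",
    "docker.local:5000"
  ]
}
EOF
# json.tool exits non-zero on a syntax error (e.g. a stray trailing comma),
# so this catches mistakes before you restart the Docker daemon
python3 -m json.tool "$cfg" > /dev/null && echo "daemon.json is valid JSON"
```

After editing the real file, remember to restart Docker (e.g. sudo systemctl restart docker on most Linux distros, or the restart button in Docker Desktop) for the change to take effect.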
Please use your own domain. In this example, we're just using docker.local. So there's a couple of things to note here before we get started about the Docker registry hub.
- When a registry host is referenced without a port, Docker assumes the default HTTP/HTTPS ports, which is why we list both docker.local and docker.local:5000 in insecure-registries
- The Docker registry hub's default port is 5000
You will also need to edit your hosts file and put in the IP address as well as the domain name (this can be located at C:\Windows\System32\Drivers\etc\hosts for Windows users and /etc/hosts for both macOS and Linux users). So for our example, it would be something like this:
127.0.0.1 docker.local
This means we're pointing the domain name docker.local to our localhost since in our example, we're running everything in the Docker Desktop. Otherwise, this would have been on a server and we would be using a different IP address.
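If you're scripting your setup, the hosts entry can be added idempotently. Here's a minimal sketch using a throwaway copy (/tmp/hosts-example, just for illustration) instead of the real /etc/hosts so it's safe to run anywhere; point it at /etc/hosts (as root) for the real thing:

```shell
# Use a throwaway copy of the hosts file for this illustration
hosts_file=/tmp/hosts-example
printf '127.0.0.1 localhost\n' > "$hosts_file"

# Append the mapping only if it isn't there yet, so re-running is harmless
if ! grep -q 'docker.local' "$hosts_file"; then
  echo '127.0.0.1 docker.local' >> "$hosts_file"
fi
grep 'docker.local' "$hosts_file"
```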
Here's the setup we're going to be using.
/hub/
├── app/
│ ├── docker-compose.yml
│ └── docker-entrypoint.sh
│
└── Dockerfile
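If you'd like to scaffold this layout from an empty directory, a quick sketch (file names match the structure above exactly):

```shell
# Recreate the article's folder layout: hub/ with app/ inside it
mkdir -p hub/app
touch hub/Dockerfile
touch hub/app/docker-compose.yml hub/app/docker-entrypoint.sh
# List what was created
find hub -type f | sort
```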
Here is our Dockerfile.
# Import from the base image out there on the Docker hub
FROM docker:latest

# Change to the root user
USER root

# Create the working directory on the image
WORKDIR /app

# Copy all files from the local app folder into the image's working directory
COPY app .

# Recursively delete all .DS_Store files within the system
RUN find . -name ".DS_Store" -delete

# Give the docker-entrypoint.sh file execute permissions
RUN chmod a+x /app/docker-entrypoint.sh

# Set the timezone and non-interactive mode
# (ENV persists across layers; a RUN export would be lost when its layer finishes)
ENV TZ='America/Chicago'
ENV DEBIAN_FRONTEND=noninteractive

# Update the system and install these packages:
# ca-certificates (contains all trusted certificates from the certificate authorities)
# curl (download things from the internet as well as run REST API commands)
# wget (download things from the internet)
# htop (monitor CPU and memory usage)
# openssh-server (contains the SSH technology so you can remote into your container)
# vim (edit files)
# net-tools (contains the ifconfig command - displays the local IP address, e.g. 172.8.0.36)
# zip (compress files)
# git (contains the Git technology so that you can clone down Git repositories)
# bash (contains the bash/sh technology that allows you to run Bash scripts)
# openrc (Alpine's version of 'service')
# openrc-doc (documentation for openrc)
RUN apk update ; apk add ca-certificates curl wget htop openssh-server vim net-tools zip git bash openrc openrc-doc ; apk update

# Expose the Docker registry port
EXPOSE 5000

# Expose the HTTP port
EXPOSE 8080
We will not be using an ENTRYPOINT, since DIND's own entrypoint would be overridden if we did and the container would error out. The DIND (Docker-In-Docker) image has a very specific way it runs, which means we can't override it. I already attempted to add our docker-entrypoint.sh file into DIND's entrypoint, and it fails the container. This is because DIND's entrypoint only calls its dockerd-entrypoint.sh file, which in turn calls out to the docker-init binaries; from the looks of it, the majority of the operations are done in those binaries. When we attempt to inject our own docker-entrypoint.sh, our script ends up running at the beginning of the operation, which means it fails because the Docker service itself has not yet been registered and started at that point. We also can't inject our docker-entrypoint.sh file after the operation, because DIND's dockerd-entrypoint.sh file uses the exec command to run the docker-init binary, and in Linux, exec replaces the current shell, so anything below that command will never execute.
That being said, it seems we're out of options here: there's no way to truly "automate" this process. There is another way to create a "DIND" solution with our own docker-entrypoint.sh, but again it's not automated, so if something goes wrong you can't just restart the container and have it come back online by itself; you have to manually intervene and bring the Docker services back online if the container goes down. The Docker image we're pulling from does have part of that ability: if the base container is turned off and you restart it, the Docker services inside the container will restart automatically. But with that image, we don't have an easy way to inject our own docker-entrypoint.sh file.
Speaking of which, here is our docker-entrypoint.sh file.
#!/bin/bash

registry='docker-registry'

# Check to see if the registry hub container exists (i.e. has been created)
if [ ! "$(docker ps -a -q -f name=$registry)" ]; then
    FILE=/app/docker-compose.yml

    # Check to see if the hostname in the docker-compose.yml file is localhost
    if grep -q localhost "$FILE"
    then
        # Replace localhost with the current hostname
        hostVariable=$(cat /etc/hostname)
        sed -i -e "s/localhost/$hostVariable/g" "$FILE"
    fi

    if grep -q LOGIN_USERNAME "$FILE"
    then
        # Replace the LOGIN_USERNAME placeholder with the environment variable
        sed -i -e "s/LOGIN_USERNAME/$LOGIN_USERNAME/g" "$FILE"
    fi

    if grep -q LOGIN_PASSWORD "$FILE"
    then
        # Replace the LOGIN_PASSWORD placeholder with the environment variable
        sed -i -e "s/LOGIN_PASSWORD/$LOGIN_PASSWORD/g" "$FILE"
    fi

    # grep -q prints nothing, so test its exit code directly
    if ! grep -q 172.22.1.2 /etc/hosts
    then
        # Attach the IP of the docker-registry container to the host's /etc/hosts file
        echo "172.22.1.2 docker.local" >> /etc/hosts
    fi

    # Run docker-compose to create the registry container
    cd /app
    echo "Attempting to create the registry container"
    docker-compose up -d
else
    # If the registry hub container exists but is down, start it up
    if [ "$(docker container inspect -f '{{.State.Status}}' $registry)" = "exited" ]; then
        docker start $registry
        echo "Started $registry"
    fi
fi
Remember to make any .sh file executable before you start executing it. Essentially, what we're trying to do with this file is check to see if the docker-registry container exists. If it doesn't, then we're going to do a couple more checks.
- Check to see if the file /app/docker-compose.yml has the default "localhost" we put in there
- If it does then we replace it with the current hostname's name
- Check to see if the file /app/docker-compose.yml has the default "LOGIN_USERNAME" we put in there
- If it does then we replace it with the user's -e LOGIN_USERNAME value
- Check to see if the file /app/docker-compose.yml has the default "LOGIN_PASSWORD" we put in there
- If it does then we replace it with the user's -e LOGIN_PASSWORD value
- Check to see if the file /etc/hosts has the docker-registry container's IP address
- If it doesn't then we append it to the end of the /etc/hosts file
Then finally, we spin up the 2 containers from the docker-compose.yml file in detached mode. This allows them to run in the background and free up our terminal.
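The guard-then-replace pattern at the heart of the script can be tried in isolation. Here's a minimal sketch against a throwaway file (/tmp/compose-example.yml and its one-line contents are hypothetical stand-ins for the real docker-compose.yml; note that macOS's BSD sed wants -i '' where GNU sed takes plain -i):

```shell
# Simulate the -e LOGIN_USERNAME value the container would receive
LOGIN_USERNAME=admin

# A one-line stand-in for the real docker-compose.yml
file=/tmp/compose-example.yml
printf 'REGISTRY_USER=LOGIN_USERNAME\n' > "$file"

# Same guard-then-replace pattern the entrypoint script uses
if grep -q LOGIN_USERNAME "$file"; then
  sed -i -e "s/LOGIN_USERNAME/$LOGIN_USERNAME/g" "$file"
fi
cat "$file"   # → REGISTRY_USER=admin
```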
Here's our docker-compose.yml file.
services:
  docker-registry:
    platform: linux/amd64
    container_name: docker-registry
    hostname: localhost
    image: registry:2
    ports:
      - 5000:5000
    restart: always
    privileged: true
    volumes:
      - ./volume:/var/lib/registry
    networks:
      docker-network:
        aliases:
          - docker-network
        ipv4_address: 172.22.1.2
    environment:
      - REGISTRY_STORAGE_DELETE_ENABLED=true
  docker-registry-ui:
    platform: linux/amd64
    container_name: docker-registry-ui
    image: jc21/registry-ui
    ports:
      - 8080:80
    environment:
      - REGISTRY_HOST=localhost:5000
      - REGISTRY_SSL=false
      - REGISTRY_DOMAIN=localhost:5000
      - REGISTRY_STORAGE_DELETE_ENABLED=true
      - REGISTRY_USER=LOGIN_USERNAME
      - REGISTRY_PASS=LOGIN_PASSWORD
    restart: on-failure
    privileged: true
    networks:
      docker-network:
        aliases:
          - docker-network
        ipv4_address: 172.22.1.3

networks:
  docker-network:
    driver: bridge
    driver_opts:
      com.docker.network.bridge.name: br-docker
    enable_ipv6: true
    ipam:
      driver: default
      config:
        - subnet: ${IPV4_NETWORK:-172.22.1}.0/24
        - subnet: ${IPV6_NETWORK:-fd4d:6169:6c63:6f77::/64}
The docker-compose.yml file is pretty straightforward. We're creating 2 containers that are running on the same bridged network. Our 1st container is the Docker registry hub running on port 5000 and our 2nd container is the UI running on port 8080. The environment variables that we're passing in will all be automatically updated when we run our docker-entrypoint.sh file.
So first things first. To create our host container, we'll want to build the image first and then create the container based off that image. Browse to wherever you have saved your Dockerfile. Make sure the folder structure is the same as what we posted at the beginning of this article. Then we'll want to run our build command like so.
docker build -t hub:1.0 .
Once our base image has been built, we'll now want to create the host container. Here's the command to do so.
docker run --detach -it \
--hostname HOST_NAME \
--name CONTAINER_NAME \
--restart always \
--privileged=true \
--publish 8081:8080 --publish 5000:5000 \
-e LOGIN_USERNAME=CUSTOM_USERNAME \
-e LOGIN_PASSWORD=CUSTOM_USER_PASSWORD \
hub:1.0
Remember to replace CUSTOM_USERNAME and CUSTOM_USER_PASSWORD with the username and password you want to use. Remember to also replace HOST_NAME, CONTAINER_NAME, and port 8081 with whatever you want. REMEMBER, keep port 5000 in there, since that's the port the Docker registry API runs on. In our example, we're doing something like this.
docker run --detach -it \
--hostname docker.local \
--name hub \
--restart always \
--privileged=true \
--publish 8081:8080 --publish 5000:5000 \
-e LOGIN_USERNAME=root \
-e LOGIN_PASSWORD=root \
hub:1.0
Ok, now that we have our base/host container created, we'll want to go into that container and create our registry hub and the UI. For Windows and macOS users, you can simply click into your container in Docker Desktop and browse to the Exec tab. Once there, the console should already be at the /app location. All you need to do now is run the following command.
./docker-entrypoint.sh
This will run our docker-entrypoint.sh file and create those 2 containers.
For Linux users or macOS users who want to torture themselves, you can run the following command to get into your base/host container.
docker exec -it -u 0 hub /bin/bash
Remember to replace "hub" in that command with whatever container name you gave it. Once you are in the container, you should again already be at the /app location. Now run the same ./docker-entrypoint.sh command from above, and that's it. Once those 2 containers are created, you can start pushing your local Docker images to your private registry hub.
Here's an example on how you would do that.
- Build your Docker image from your Dockerfile by doing something like so.
docker build -t powershell:1.0 .
- Tag your local Docker image with your private registry hub's location.
docker tag powershell:1.0 docker.local:5000/v2/powershell:1.0
- Now push your newly created tag to your private registry hub using the API.
docker push docker.local:5000/v2/powershell:1.0
And that's it. To access your uploaded Docker images, just browse to the UI by going to docker.local:8081 or whichever port you chose for the port 8080 mapping.
To pull Docker images down from your private registry hub, just do something like the following.
docker pull docker.local:5000/v2/powershell:1.0
Here, powershell will always be your actual image name and 1.0 will always be your version or tag.
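A quick way to see the pieces of such a reference is with plain shell parameter expansion. A minimal sketch (the variable names are just for illustration):

```shell
ref="docker.local:5000/v2/powershell:1.0"

registry="${ref%%/*}"   # everything before the first slash: docker.local:5000
rest="${ref#*/}"        # everything after it: v2/powershell:1.0
repo="${rest%%:*}"      # the repository path: v2/powershell
tag="${ref##*:}"        # everything after the last colon: 1.0

echo "registry=$registry repo=$repo tag=$tag"
```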
There is one thing to note here though. When pushing to a local private registry hub, it may take some time or lag a little. You just have to be patient.
And that's it folks. Hopefully this article has been very helpful to my fellow developers.