Self-hosting My Container Development Environment
2023-01-24
My containerized development environment is turning out to work pretty well. In fact, all of UniCipher was created in NeoVim from within the development container. Given the success, I want to start doing more work on the environment. Up until now, that has meant switching back to my host OS and tweaking things from there. What I really want is to be able to experiment and change my development environment from within the context of the development environment itself. This will allow me to take advantage of all the nice tooling I've set up.
The big lift here is building new container images from within the container itself.
What we need is a way to talk to the Docker instance running on my host OS from within the container. That way we can issue it commands like `docker build`.
As it turns out though, this is pretty straightforward.
The Docker CLI communicates with Docker via a Unix socket located at `/var/run/docker.sock`.
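You can even talk to that socket without the Docker CLI at all. As an illustration (not something the setup actually needs), `curl` can speak HTTP over a Unix socket:

```shell
# Ask the Docker daemon for its version, bypassing the docker CLI entirely.
# Requires curl 7.40+ for --unix-socket support.
curl --unix-socket /var/run/docker.sock http://localhost/version
```

The CLI is ultimately just a friendlier front end for requests like this one.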
So to get things working, we'll need to do the following:
- Install the Docker CLI in our container image.
- Make sure our user in the container is part of the `docker` group so it has permission to access the Unix socket.
- Mount the host `/var/run/docker.sock` into the container at the same location.
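The first two steps might look something like this in the Dockerfile. This is just a sketch, assuming a Debian-based image; the user name `dev` and image tag `my-dev-env` are placeholders, not my actual setup:

```dockerfile
# Sketch only: assumes a Debian-based base image.
FROM debian:bookworm

# Step 1: install the Docker client. Debian's docker.io package pulls in
# more than just the CLI; Docker's own apt repo offers docker-ce-cli if
# you want only the client.
RUN apt-get update && apt-get install -y docker.io

# Step 2: add our user to the docker group (created by the package install).
RUN useradd -m dev && usermod -aG docker dev
USER dev
```

Step three happens at run time, by bind-mounting the host's socket into the container:

```shell
docker run -it -v /var/run/docker.sock:/var/run/docker.sock my-dev-env
```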
The last part is the magic bit. From within the container, the Docker CLI should function as though it's running on our host OS.
Doing the above three steps mostly worked. However, a quick test showed something was amiss.
After doing some digging, I realized my problem. Looking at the file permissions on the socket, I noticed the group for the socket is `134`, not the expected `docker` group.
Turns out that while both my host OS and container have a `docker` group, the groups have different IDs. This means that my container essentially doesn't recognize the group for `docker.sock` (in fact, my container has no group with an ID of `134`, hence `ls` just prints the ID).
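One way to see the mismatch is to look the group up by name on both sides:

```shell
# Run this on the host, then again inside the container.
# getent prints name:password:GID:members; cut grabs the numeric GID.
getent group docker | cut -d: -f3
```

In my case the host reported `134`, while the container's `docker` group had some other ID entirely.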
I tried a few different things, including:
- Making a wrapper script that aliased `docker` to `sudo docker`. This would make the group ownership irrelevant. I couldn't quite get this to work. Also, it felt nasty.
- Making a symbolic link to the docker socket with different ownership. However, it turns out that symbolic link ownership only affects the link itself, not what it points to.
Ultimately, I ended up hacking my Dockerfile to take in the group ID of the host OS's `docker` group as a build argument. Then, using `groupmod`, I manually set the container's `docker` group ID to match the host's (credit to this Stack Overflow post for the idea).
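The workaround boils down to something like this (a sketch with hypothetical names; `999` is just a placeholder default):

```dockerfile
# Accept the host's docker group ID at build time.
ARG DOCKER_GID=999

# Rewrite the container's docker group to use the host's ID.
# Must run as root, before any USER directive switches to a non-root user.
RUN groupmod -g "${DOCKER_GID}" docker
```

The image then gets built with the host's actual group ID passed in:

```shell
docker build \
  --build-arg DOCKER_GID="$(getent group docker | cut -d: -f3)" \
  -t my-dev-env .
```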
I'm not happy with this because:
- It ties the generated container image to the specific machine it was built on. While image portability isn't something I plan to use, it would still be nice if I could have left that door open.
- There's no guarantee that the host OS group ID won't conflict with a group ID already in use in the base image.
That said, it all seems to work. After bootstrapping a new image on the host OS, I'm able to build my container from inside my container!
Using the new image is as simple as exiting the container and starting up the new image. I'll go with this for now, but if anyone has a better idea for solving my ownership problem I'd love to hear it.