Docker containers can be run without root privileges using Usernetes. This will be available in upstream Docker soon™ following moby/moby#38050; use this guide for now.

NOTE: In order to run this, the user must have a range of subuid(5)s and subgid(5)s available to them, i.e. they must be present in /etc/subuid and /etc/subgid. A subuid/subgid range can be added by editing /etc/subuid and /etc/subgid directly, or by running sudo usermod --add-subuids <from>-<to> --add-subgids <from>-<to> <user>, e.g. sudo usermod --add-subuids 65536-100000 --add-subgids 65536-100000 user
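You can check whether your entry grants enough IDs before continuing (a rootless Docker daemon conventionally maps 65536 subordinate IDs). A minimal sketch; the "line" variable is a stand-in for what you would actually read from /etc/subuid:

#!/bin/sh
# Hedged sketch: verify a subuid entry grants enough IDs.
# "line" stands in for: grep "^$(whoami):" /etc/subuid
# /etc/subuid entries have the form  user:start:count
line="user:100000:65536"
count=$(echo "$line" | cut -d: -f3)
if [ "$count" -ge 65536 ]; then
    echo "subuid range OK ($count ids)"
else
    echo "subuid range too small ($count ids)"
fi

The same check applies to /etc/subgid.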

Running rootless containers using usernetes

# grab a build from
tar xjvf usernetes-x86_64.tbz
cd usernetes

Run dockerd server

./ default-docker-nokube

You can now run rootless containers.

If you already have upstream docker installed system-wide

# docker -H unix://$XDG_RUNTIME_DIR/docker.sock <cmd>
docker -H unix://$XDG_RUNTIME_DIR/docker.sock run --rm -it busybox ls


Alternatively, set DOCKER_HOST once and use docker as usual:

export DOCKER_HOST="unix://$XDG_RUNTIME_DIR/docker.sock"
docker run --rm -it busybox ls

If you don’t have Docker installed system-wide

# ./ <cmd>
./ run --rm -it busybox ls


You will need to disable cgroups in nvidia-container-runtime, since cgroups are not yet supported in Docker rootless mode.

Get nvidia-docker

If you already have nvidia-docker installed, continue to the next step.


If not, you need to get nvidia-container-runtime, nvidia-container-runtime-hook, libnvidia-container and libnvidia-container-tools. You can either download prebuilt packages:

(URLs may vary depending on version and distro) or build from source (more details here). Either way, put the binaries somewhere in your PATH.

Configure nvidia-docker for running rootless containers

cgroup needs to be switched off in nvidia-container-runtime.

If you can ask your sysadmin for a small favor, they just need to find the line that says #no-cgroups = false in /etc/nvidia-container-runtime/config.toml, uncomment it, and set it to true, i.e. no-cgroups = true; then continue to the next step.

If not, create a config.toml file with the following content:

disable-require = false
#swarm-resource = "DOCKER_RESOURCE_GPU"

#root = "/run/nvidia/driver"
#path = "/usr/bin/nvidia-container-cli"
environment = []
#debug = "/var/log/nvidia-container-runtime-hook.log"
#ldcache = "/etc/"
load-kmods = true
no-cgroups = true
#user = "root:video"
ldconfig = "@/sbin/ldconfig.real"
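For the sysadmin route above, the one-line edit can also be scripted. A hedged sketch, demonstrated on a throwaway sample in /tmp; in practice you would point sed (as root) at /etc/nvidia-container-runtime/config.toml:

#!/bin/sh
# Demonstrated on a throwaway sample; use the real
# /etc/nvidia-container-runtime/config.toml (as root) in practice.
printf '#no-cgroups = false\n' > /tmp/config.toml.sample
sed -i 's/^#\{0,1\}no-cgroups[[:space:]]*=[[:space:]]*false/no-cgroups = true/' /tmp/config.toml.sample
cat /tmp/config.toml.sample

The \{0,1\} makes the leading # optional, so the line is fixed whether or not it is still commented out.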

Create a nvidia-container-runtime-hook file:


#!/bin/sh
/usr/bin/nvidia-container-runtime-hook -config=<absolute-path-to-config.toml> "$@"

The #!/bin/sh is important here. Without it you’ll probably get an error containing something like exec format error.

Make it executable with chmod +x nvidia-container-runtime-hook and put it under usernetes/bin.
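The two steps above (write the wrapper, make it executable) can be sketched as a heredoc. Here /tmp stands in for usernetes/bin, and /path/to/config.toml is a placeholder for your absolute config path:

#!/bin/sh
# Sketch: generate the wrapper. /path/to/config.toml is a placeholder
# for your config.toml path, and /tmp stands in for usernetes/bin.
cat > /tmp/nvidia-container-runtime-hook <<'EOF'
#!/bin/sh
/usr/bin/nvidia-container-runtime-hook -config=/path/to/config.toml "$@"
EOF
chmod +x /tmp/nvidia-container-runtime-hook
head -n 1 /tmp/nvidia-container-runtime-hook   # first line must be the shebang

The quoted 'EOF' keeps "$@" literal so the container arguments are forwarded at run time, not expanded while writing the file.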

Run rootless containers with nvidia runtime

Install usernetes if you haven’t

# grab a build from
tar xjvf usernetes-x86_64.tbz
cd usernetes

Register nvidia runtime

Open ./Taskfile.yaml and look for this part:

    - ./boot/

change the command to ./boot/ --add-runtime "nvidia=/usr/bin/nvidia-container-runtime"

or create a config.json file with the following content

{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
and change the command to ./boot/ --config-file="<absolute-path-to-config-file>"
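Since dockerd rejects malformed JSON, it is worth validating the file before booting. A quick sanity check, shown here against a sample written to /tmp (use your real config path in practice):

#!/bin/sh
# Write a sample config to /tmp and check it parses; the real file
# is whatever you pass via --config-file.
cat > /tmp/daemon-config.json <<'EOF'
{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
python3 -c 'import json; c = json.load(open("/tmp/daemon-config.json")); print(c["runtimes"]["nvidia"]["path"])'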

Run dockerd server

./ default-docker-nokube

Run docker client

docker -H unix://$XDG_RUNTIME_DIR/docker.sock run --runtime=nvidia --rm -it nvidia/cuda:10.0-devel nvidia-smi


Alternatively, set DOCKER_HOST once:

export DOCKER_HOST="unix://$XDG_RUNTIME_DIR/docker.sock"
docker run --runtime=nvidia --rm -it nvidia/cuda:10.0-devel nvidia-smi


If you don’t have Docker installed system-wide:

./ run --runtime=nvidia --rm -it nvidia/cuda:10.0-devel nvidia-smi