Run Docker containers leveraging NVIDIA GPUs without root privilege
Docker containers can be run without root privilege using usernetes. This will be available in upstream Docker soon™ following moby/moby#38050. Use this guide for now.
NOTE: In order to run this, the user must have a range of subuid(5)s and subgid(5)s available to them, i.e. they must be present in /etc/subuid and /etc/subgid. A subuid and subgid range can be added by editing /etc/subuid and /etc/subgid directly, or by running sudo usermod --add-subuids <from>-<to> --add-subgids <from>-<to> <user>, e.g. sudo usermod --add-subuids 65536-100000 --add-subgids 65536-100000 user.
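To check whether your user already has ranges assigned, you can grep both files (a quick sanity check; the actual ranges on your system may differ):
# verify that subuid/subgid ranges exist for the current user
grep "^$(id -un):" /etc/subuid /etc/subgid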
UPDATE Jan 6, 2020: Rootless mode has been added as an experimental feature to Docker since v19.03.
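If you are on Docker v19.03 or later, you may prefer the upstream rootless install script over usernetes; the rest of this guide assumes the usernetes setup.
# install rootless Docker via the upstream script (experimental in v19.03)
curl -fsSL https://get.docker.com/rootless | sh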
Running rootless containers using usernetes
# grab a build from https://github.com/rootless-containers/usernetes/releases
wget https://github.com/rootless-containers/usernetes/releases/download/v20190603.1/usernetes-x86_64.tbz
tar xjvf usernetes-x86_64.tbz
cd usernetes
Run dockerd server
./run.sh default-docker-nokube
You can now run rootless containers.
If you already have upstream Docker installed system-wide
# docker -H unix://$XDG_RUNTIME_DIR/docker.sock <cmd>
docker -H unix://$XDG_RUNTIME_DIR/docker.sock run --rm -it busybox ls
or
export DOCKER_HOST="unix://$XDG_RUNTIME_DIR/docker.sock"
docker run --rm -it busybox ls
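If you use the rootless socket regularly, it may be convenient to persist the variable in your shell profile (assuming bash; adjust for your shell):
# make the rootless socket the default for future shells
echo 'export DOCKER_HOST="unix://$XDG_RUNTIME_DIR/docker.sock"' >> ~/.bashrc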
If you don’t
# ./dockercli.sh <cmd>
./dockercli.sh run --rm -it busybox ls
Rootless containers leveraging NVIDIA GPUs
You will need to disable cgroups in nvidia-container-runtime, since it is not yet supported in Docker rootless mode.
Get NVIDIA Container Toolkit
If you already have nvidia-container-toolkit installed, continue to the next step. If not, you need to get nvidia-container-runtime>2.0.0, nvidia-container-toolkit, libnvidia-container and libnvidia-container-tools. You can either download prebuilt packages:
nvidia-container-runtime and nvidia-container-toolkit
libnvidia-container and libnvidia-container-tools
or build them from source (more details here). Either way, put the binaries somewhere in your PATH.
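As an illustration, a user-local install could look like the following; ~/bin is an arbitrary choice, and the binary names are the ones these packages usually ship:
# put the NVIDIA container binaries somewhere on your PATH
mkdir -p ~/bin
cp nvidia-container-runtime nvidia-container-runtime-hook nvidia-container-cli ~/bin/
export PATH="$HOME/bin:$PATH"
# confirm they resolve
which nvidia-container-cli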
Configure NVIDIA Container Toolkit for rootless containers
cgroups needs to be switched off in nvidia-container-toolkit.
Create a config.toml file with the following content:
disable-require = false
#swarm-resource = "DOCKER_RESOURCE_GPU"
[nvidia-container-cli]
#root = "/run/nvidia/driver"
#path = "/usr/bin/nvidia-container-cli"
environment = []
#debug = "/var/log/nvidia-container-runtime-hook.log"
#ldcache = "/etc/ld.so.cache"
load-kmods = true
no-cgroups = true
#user = "root:video"
ldconfig = "@/sbin/ldconfig.real"
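Before wiring anything into Docker, you can sanity-check that the CLI can see your driver (nvidia-container-cli ships with libnvidia-container-tools):
# should print the driver version and detected GPUs
nvidia-container-cli info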
Create a nvidia-container-runtime-hook file under usernetes/bin with the following content:
#!/bin/sh
/usr/bin/nvidia-container-runtime-hook -config=<absolute-path-to-config.toml> "$@"
and make it executable: chmod +x nvidia-container-runtime-hook.
The #!/bin/sh shebang is important here. Without it you'll probably get an error that contains something like exec format error.
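A quick way to verify both requirements (executable bit and shebang) from the usernetes directory:
# both checks should pass; the first printed line must be #!/bin/sh
test -x bin/nvidia-container-runtime-hook && echo "executable: ok"
head -n 1 bin/nvidia-container-runtime-hook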
Run rootless containers leveraging NVIDIA GPUs
Register the nvidia runtime
Create a config file at ~/.config/docker/daemon.json with the following content:
{
"runtimes": {
"nvidia": {
"path": "nvidia-container-runtime",
"runtimeArgs": []
}
}
}
Run dockerd server
./run.sh default-docker-nokube
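Once the daemon is up, you can confirm that the runtime was registered; the Runtimes line of docker info should list nvidia alongside runc:
# expected output contains: Runtimes: nvidia runc
docker -H unix://$XDG_RUNTIME_DIR/docker.sock info | grep -i runtimes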
Run docker client
docker -H unix://$XDG_RUNTIME_DIR/docker.sock run --gpus all --rm -it nvidia/cuda:10.0-devel nvidia-smi
or
export DOCKER_HOST="unix://$XDG_RUNTIME_DIR/docker.sock"
docker run --gpus all --rm -it nvidia/cuda:10.0-devel nvidia-smi
or
./dockercli.sh run --gpus all --rm -it nvidia/cuda:10.0-devel nvidia-smi
You can use --runtime nvidia instead of --gpus all. However, --gpus allows more nuanced control. More information on what options can be passed to --gpus can be found here.
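For example, --gpus can select specific devices or a device count instead of exposing everything (the device index here is illustrative, and DOCKER_HOST is assumed to be set as above):
# expose only GPU 0
docker run --gpus '"device=0"' --rm -it nvidia/cuda:10.0-devel nvidia-smi
# expose any two GPUs
docker run --gpus 2 --rm -it nvidia/cuda:10.0-devel nvidia-smi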