Sunday 6 August 2017

CUDA in Docker

To use CUDA inside Docker and take advantage of parallel programming on your GPU, you need to expose the GPU inside the Docker container. You can do this with the nvidia-docker extension, which makes your NVIDIA graphics card and drivers available inside the container.

Requirements

NVIDIA Driver 

The first major requirement is an NVIDIA graphics card running the NVIDIA proprietary driver. On Ubuntu you can enable this from Software & Updates:



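If you prefer the command line, the ubuntu-drivers tool that ships with Ubuntu can do the same job. A minimal sketch (the driver version it recommends will depend on your card and Ubuntu release):

$ ubuntu-drivers devices            # list the detected card and the recommended driver
$ sudo ubuntu-drivers autoinstall   # install the recommended proprietary driver
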
If using a laptop (or an Intel chip with integrated graphics) you may also need to make sure that the NVIDIA graphics card is the one in use. Once the driver is installed you can select the card using the NVIDIA X Server Settings application:



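On hybrid-graphics systems this switch can also be made from the terminal with prime-select, a sketch assuming the nvidia-prime package is installed:

$ prime-select query          # show which GPU is currently selected
$ sudo prime-select nvidia    # switch to the NVIDIA card
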
If you had to change either of the above, restart your computer for them to take effect.

You can tell if your NVIDIA card is running by using the nvidia-smi command line tool:

$ nvidia-smi      
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.66                 Driver Version: 375.66                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 660M    Off  | 0000:01:00.0     N/A |                  N/A |
| N/A   62C    P0    N/A /  N/A |    236MiB /  1999MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
+-----------------------------------------------------------------------------+
$ 

Docker and NVIDIA Docker

Obviously, to use Docker you must first install it. Follow the instructions on the Docker site to download and install the latest version.
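
For example, on Ubuntu the convenience script from get.docker.com is a quick way to get the latest release, though the official instructions list the supported methods for each distribution:

$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh
$ sudo docker run hello-world    # confirm the daemon is working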

After installing Docker, you should then install the nvidia-docker extension. Installers and instructions are available on the linked GitHub page.
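
At the time of writing the extension is distributed as a package on its GitHub releases page. Something like the following should work on Ubuntu, though the version number here is only an example and you should check the releases page for the current one:

$ wget https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
$ sudo dpkg -i nvidia-docker_1.0.1-1_amd64.deb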

Running nvidia-docker

Once everything is installed you should be able to use the GPU in your container:

$ nvidia-docker run -it --rm nvidia/cuda nvidia-smi       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.66                 Driver Version: 375.66                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 660M    Off  | 0000:01:00.0     N/A |                  N/A |
| N/A   62C    P0    N/A /  N/A |    236MiB /  1999MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
+-----------------------------------------------------------------------------+
$ 
  
As you can see, this is the same output as running the tool directly on the host, outside of the container.
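
From here you can run anything that needs the GPU. For example, to check that the CUDA toolchain is available inside a container (the devel tag name is an assumption; see the nvidia/cuda tag list on Docker Hub for the exact tags):

$ nvidia-docker run -it --rm nvidia/cuda:8.0-devel nvcc --version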

Choosing Type of OS

There are a number of pre-built images available for different versions of CUDA and popular base operating systems, including:
  • Ubuntu 16.04
  • Ubuntu 14.04
  • CentOS 7
  • CentOS 6
It is also possible to view the Dockerfiles used to create these images and extract the required values to re-create your own images. For example, this is the Dockerfile for Ubuntu 16.04 and CUDA 8.
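
As a rough sketch of how you might base your own image on one of these pre-built ones (the tag name and packages below are assumptions to illustrate the idea):

$ cat > Dockerfile <<'EOF'
# Start from the pre-built CUDA 8 development image on Ubuntu 16.04
FROM nvidia/cuda:8.0-devel-ubuntu16.04
# Add whatever build tools your own project needs
RUN apt-get update && apt-get install -y --no-install-recommends build-essential \
    && rm -rf /var/lib/apt/lists/*
EOF
$ docker build -t my-cuda-image .
$ nvidia-docker run -it --rm my-cuda-image nvidia-smi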

