How to use Chainer with Container Station

What is Chainer?

Chainer is a powerful, flexible, and intuitive framework for neural networks that bridges the gap between algorithms and implementations of deep learning.

  • Powerful: Chainer supports CUDA computation. It only requires a few lines of code to leverage a GPU, and it runs on multiple GPUs with little effort.
  • Flexible: Chainer supports various network architectures, including feed-forward nets, convnets, recurrent nets, and recursive nets. It also supports per-batch architectures.
  • Intuitive: Forward computation can include any control flow statements of Python without sacrificing the ability to backpropagate. This makes code intuitive and easy to debug (see the sketch below).
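
A minimal sketch of these points, assuming Chainer v5 or later with CuPy installed (both ship in the official chainer/chainer images): the forward pass is plain Python, and a single call moves the model to a GPU.

import chainer
import chainer.functions as F
import chainer.links as L

class MLP(chainer.Chain):
    def __init__(self):
        super(MLP, self).__init__()
        with self.init_scope():
            self.l1 = L.Linear(None, 100)  # input size is inferred on first call
            self.l2 = L.Linear(100, 10)

    def forward(self, x):
        h = F.relu(self.l1(x))  # any Python control flow can appear here
        return self.l2(h)

model = MLP()
model.to_gpu(0)  # one line to run on the first GPU (requires CuPy)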

Installation Instructions

Recommended versions

QTS 4.3.5 and Nvidia Driver qpkg v1.3.5

Tag: chainer/chainer:v5.0.0b4-python3
Pull command: docker pull chainer/chainer:v5.0.0b4-python3


QTS 4.4.x and Nvidia Driver qpkg v2.0.0

Tag: chainer/chainer:v6.0.0b1
Pull command: docker pull chainer/chainer:v6.0.0b1

Before running the Chainer container, use the Docker pull command or click “Pull” to ensure the desired image is installed.

 

Go to Control Panel -> Hardware -> Graphics Card. Assign your GPUs to Container Station in the Resource Use column and click Apply.


 

Go to Container Station and click "Create". Search for "Chainer" and click "Install/Create".

 

Choose the version to install (see “Recommended versions” above) and click "Next".


 

Click "Advanced Settings".


 

To add one (or more) GPUs to the container, go to "Device" and click "Add".

 

Select GPUs from the drop-down menu to assign them to the container.



 

In  "Shared folder" you can assign shared folders to store training data sets, or the data to be inference. Once finished, click "Create".


 

Click "OK" and the container will be created.


 

The container’s creation progress can be viewed in Background Task.


 

After the container is created, you can enter the container's page and click to open the container's bash shell.
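
From the bash shell, one quick way to confirm that Chainer detects the GPU is to start Python and print the runtime information (chainer.print_runtime_info() is available in Chainer v4.2 and later):

import chainer
chainer.print_runtime_info()  # reports the Chainer, NumPy, and CuPy versions along with CUDA build info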




 


Monitor GPU resource usage

 

You can view GPU resource usage in Resource Monitor.
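
To see the graphs move, you can generate some load from inside the container. Below is a throwaway sketch using CuPy (assumed to be available in the Chainer images) that keeps the GPU busy for a few seconds:

import cupy as cp

a = cp.random.rand(4096, 4096, dtype=cp.float32)
b = cp.random.rand(4096, 4096, dtype=cp.float32)
for _ in range(100):
    c = a.dot(b)  # each matrix multiply runs on the GPU
cp.cuda.Stream.null.synchronize()  # wait for the queued work to finish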

 

Using Commands to mount NVIDIA GPUs

 

These commands are for use when you are connected to your NAS via SSH.

 

Include the following options when running Docker Run. This will use the first GPU:
 

--device /dev/nvidia0:/dev/nvidia0 \

--device /dev/nvidiactl:/dev/nvidiactl \

--device /dev/nvidia-uvm:/dev/nvidia-uvm \

 -v `/sbin/getcfg NVIDIA_GPU_DRV Install_Path -f /etc/config/qpkg.conf -d None`/usr/:/usr/local/nvidia

 

To use both the first and second GPUs, use the following options:
 

--device /dev/nvidia0:/dev/nvidia0 \

--device /dev/nvidia1:/dev/nvidia1 \

--device /dev/nvidiactl:/dev/nvidiactl \

--device /dev/nvidia-uvm:/dev/nvidia-uvm \

 -v `/sbin/getcfg NVIDIA_GPU_DRV Install_Path -f /etc/config/qpkg.conf -d None`/usr/:/usr/local/nvidia

 

To use only the second GPU, use the following options:
 

--device /dev/nvidia1:/dev/nvidia1 \

--device /dev/nvidiactl:/dev/nvidiactl \

--device /dev/nvidia-uvm:/dev/nvidia-uvm \

 -v `/sbin/getcfg NVIDIA_GPU_DRV Install_Path -f /etc/config/qpkg.conf -d None`/usr/:/usr/local/nvidia

 

Sample commands:
 

QTS 4.3.5/4.3.6 with Nvidia Driver qpkg v1.3.5:

docker run -it --name chainer \
--device /dev/nvidia0:/dev/nvidia0 \
--device /dev/nvidiactl:/dev/nvidiactl \
--device /dev/nvidia-uvm:/dev/nvidia-uvm \
-v `/sbin/getcfg NVIDIA_GPU_DRV Install_Path -f /etc/config/qpkg.conf -d None`/usr/:/usr/local/nvidia \
chainer/chainer:v5.0.0b4-python3

QTS 4.4.x with Nvidia Driver qpkg v2.0.0:

docker run -it --name chainer \
--device /dev/nvidia0:/dev/nvidia0 \
--device /dev/nvidiactl:/dev/nvidiactl \
--device /dev/nvidia-uvm:/dev/nvidia-uvm \
-v `/sbin/getcfg NVIDIA_GPU_DRV Install_Path -f /etc/config/qpkg.conf -d None`/usr/:/usr/local/nvidia \
chainer/chainer:v6.0.0b1

 