
Enabling Chainer on EC2 GPU instance (5 commands to install, 4 commands to test)

(This is a translated version of the previous Japanese post.)

Thank you to all the users who already enjoy Chainer, a Python-based deep learning framework developed by our companies, PFN and PFI.

f:id:shoheihido:20150809150103p:plain

Although you can start using Chainer easily even on a MacBook, a GPU environment is almost mandatory to experience Chainer's real performance. At the same time, we see that some users still fail to install Chainer on their GPU-enabled desktop machines (especially PyCUDA, one of its dependencies).

As a member of the development team, I would like to provide a turn-key environment that lets them use Chainer with no hassle. I found that GPU instances on AWS EC2 would be the most promising way to realize this *1.

Among the articles on "Chainer on EC2", the simplest example to date is described on Ryosuke Tajima's blog.

Unfortunately, in his article PyCUDA is still installed by building from source. This is because the sudo command overwrites $PATH in most environments, so the PyCUDA installer cannot find the CUDA installation directory on PATH.

There are a few solutions to this problem, as recently added to the official document: using pip's option to install packages under the user's home directory, or explicitly passing $PATH through to the sudo environment.
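For concreteness, here is a sketch of the two workarounds (pycuda is used as the example package; the sudo form assumes `env` is available, which it is on Amazon Linux):

```shell
# Workaround (a): install into the user's home directory. No sudo is
# involved, so $PATH is untouched and nvcc stays visible:
#   pip install --user pycuda
#
# Workaround (b): forward the caller's $PATH into the sudo environment:
#   sudo env PATH="$PATH" pip install pycuda
#
# The mechanism behind (b): `env` re-exports PATH to the child process,
# so the CUDA bin directory stays at the front of the search path:
env PATH="/opt/nvidia/cuda/bin:$PATH" sh -c 'echo "${PATH%%:*}"'
# -> /opt/nvidia/cuda/bin
```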

Therefore, this post introduces the (possibly) simplest way to set up the GPU-enabled Chainer on EC2.

The first step is to launch a GPU instance using the CUDA-preinstalled AMI (with Amazon Linux) provided by NVIDIA. You can find it in the AWS Marketplace.

f:id:shoheihido:20150809142133p:plain

After the instance comes up, you can log in as ec2-user. Then add and apply the environment variables for the CUDA installation directory (the first 3 commands).

$ echo 'export PATH=/opt/nvidia/cuda/bin:$PATH' >> .bash_profile
$ echo 'export LD_LIBRARY_PATH=/opt/nvidia/cuda/lib64:$LD_LIBRARY_PATH' >> .bash_profile
$ source .bash_profile
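To confirm the exports took effect, here is a quick sanity check (a sketch; the commented nvcc check only succeeds on the CUDA-installed AMI):

```shell
# Re-apply the exports so this snippet is self-contained, then check that
# the CUDA directories sit at the front of both search paths.
export PATH=/opt/nvidia/cuda/bin:$PATH
export LD_LIBRARY_PATH=/opt/nvidia/cuda/lib64:${LD_LIBRARY_PATH:-}
echo "${PATH%%:*}"              # -> /opt/nvidia/cuda/bin
echo "${LD_LIBRARY_PATH%%:*}"   # -> /opt/nvidia/cuda/lib64
# On the NVIDIA AMI, nvcc should now resolve:
#   command -v nvcc
```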

f:id:shoheihido:20150809143350p:plain
Then install Chainer via pip with the "--user" option (the 4th command). Installing NumPy takes a few minutes.

$ pip install --user chainer

f:id:shoheihido:20150809143429p:plain
(skipped)
f:id:shoheihido:20150809143447p:plain

OK, the installation of Chainer itself, along with NumPy and other required libraries such as six, is complete. You can already use Chainer in CPU-only mode, but let's proceed to the hardest part: enabling GPU mode.
As described above, run "pip install" with the "--user" option (the 5th command).

$ pip install --user --upgrade chainer-cuda-deps

f:id:shoheihido:20150809144031p:plain
(skipped: note that there is no error like nvcc not in path)
f:id:shoheihido:20150809144055p:plain
(skipped: wait for a moment...)
f:id:shoheihido:20150809144119p:plain

Surprise! (Yes, honestly, I knew it would work.) The installation finished successfully, even for PyCUDA.

Next, let's make sure everything works correctly. We will run the MNIST example after downloading the source code (tar.gz) of the latest release, which takes three more commands. Here we use v1.1.2, the latest version at the time of this post (Aug. 10th, 2015); make sure the version number you use is current.

$ wget https://github.com/pfnet/chainer/archive/v1.1.2.tar.gz
$ tar zxvf v1.1.2.tar.gz
$ cd chainer-1.1.2/examples/mnist/
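The release URL follows a fixed pattern, so the same three commands work for any tagged version (a sketch; check the GitHub releases page for the tags that actually exist):

```shell
# Build the download URL for an arbitrary Chainer release tag.
V=1.1.2   # replace with the version you want
URL="https://github.com/pfnet/chainer/archive/v${V}.tar.gz"
echo "$URL"   # -> https://github.com/pfnet/chainer/archive/v1.1.2.tar.gz
# Then:  wget "$URL" && tar zxvf "v${V}.tar.gz" && cd "chainer-${V}/examples/mnist/"
```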

f:id:shoheihido:20150809144506p:plain
(skipped)
f:id:shoheihido:20150809144702p:plain
Then run the example with the GPU option (the final command).

$ python train_mnist.py --gpu 0

f:id:shoheihido:20150809144854p:plain
(skipped: the data conversion of the MNIST dataset takes some time)
f:id:shoheihido:20150809145512p:plain
Finally, the MNIST example completed 20 epochs with a test accuracy of about 98.5%.

In summary, with GPU instances on EC2 and only the following 5+4 commands, anyone can have his or her own GPU-enabled Chainer playground. It actually takes only 10 to 15 minutes after launching the instance.

$ echo 'export PATH=/opt/nvidia/cuda/bin:$PATH' >> .bash_profile
$ echo 'export LD_LIBRARY_PATH=/opt/nvidia/cuda/lib64:$LD_LIBRARY_PATH' >> .bash_profile
$ source .bash_profile
$ pip install --user chainer
$ pip install --user --upgrade chainer-cuda-deps
---
$ wget https://github.com/pfnet/chainer/archive/v1.1.2.tar.gz
$ tar zxvf v1.1.2.tar.gz
$ cd chainer-1.1.2/examples/mnist/
$ python train_mnist.py --gpu 0

I hope this post helps you use Chainer for your research, work, or hobby.

Have a happy Chainer life!


#Disclaimer: Just in case: although NVIDIA's AMI itself is free, you will be charged AWS usage fees for running Chainer as introduced above. Please note that you are responsible for the usage, billing, and payment of AWS services.

*1: Some may think that an official AMI could solve this problem. However, maintaining such an official AMI seems a bit hard for the team, since new versions are released every other week.