
Artifact for CGO 2017

This repository is the artifact for the paper "Lift: A Functional Data-Parallel IR for High-Performance GPU Code Generation", accepted for publication at the International Symposium on Code Generation and Optimization (CGO) 2017.

This artifact consists of:

  • the paper;
  • the Lift compiler implementation;
  • the Lift IR expressions from which the OpenCL kernels used in the evaluation are generated;
  • the automatically generated OpenCL kernels used in the evaluation;
  • reference implementations of the benchmarks used for comparison.

Installation

Git clone

  • Because the artifact contains large files, you need git-lfs installed on your machine in addition to git. Deb and RPM packages for git-lfs can be found here: https://github.com/git-lfs/git-lfs/releases/tag/v1.5.3

  • If you have git and git-lfs installed, start by cloning this repository. When asked for a username and password, you can leave both empty by pressing the enter key.

    $ git version
    git version 2.1.4
    $ git lfs version
    git-lfs/1.4.4 (GitHub; linux amd64; go 1.7.3)
    $ git lfs install
    Git LFS initialized.
    $ git clone https://gitlab.com/michel-steuwer/cgo_2017_artifact.git
    Cloning into 'cgo_2017_artifact'...
    remote: Counting objects: 2572, done.
    remote: Compressing objects: 100% (122/122), done.
    remote: Total 2572 (delta 52), reused 0 (delta 0)
    Receiving objects: 100% (2572/2572), 36.57 MiB | 3.65 MiB/s, done.
    Resolving deltas: 100% (889/889), done.
    Checking connectivity... done.
    Downloading amd.tar.bz2 (52.49 MB)
    Username for 'https://gitlab.com': 
    Password for 'https://michel-steuwer@gitlab.com': 
    Downloading nvidia.tar.bz2 (51.65 MB) 
    Username for 'https://gitlab.com': 
    Password for 'https://michel-steuwer@gitlab.com': 
    Downloading rodinia_3.1.tar.bz2 (434.76 MB)
    Username for 'https://gitlab.com': 
    Password for 'https://michel-steuwer@gitlab.com': 
    Checking out files: 100% (2109/2109), done.
  • and then change into the directory:

    cd cgo_2017_artifact

    We will refer to this directory from now on with $ROOT.
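
To make the later commands copy-pasteable, you can define $ROOT as an actual shell variable; this minimal sketch also repeats the version checks shown above:

    # verify git and git-lfs are available
    git version
    git lfs version

    # remember the artifact root for the rest of this README
    cd cgo_2017_artifact
    export ROOT=$(pwd)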

Docker / Nvidia-Docker

We have prepared Dockerfiles to make it easy to set up a working software environment.

If you have docker installed, you can use the container to explore the Lift implementation and to reproduce the OpenCL kernels used in the evaluation of the paper. You will not, however, be able to perform performance measurements from inside this container.

If you have an Nvidia GPU and nvidia-docker installed, you can also perform some performance measurements.

There are two separate Dockerfiles, one for docker and one for nvidia-docker.

Get started using Docker

  • If you want to use docker and have it installed, run the following command from the $ROOT directory to create a docker image called lift:

    docker build -t lift docker

  • You can then run the image using the following command from the $ROOT directory:

    docker run -it --rm -v `pwd`:/lift lift

This will make the $ROOT directory accessible inside the container at /lift. Inside the container, change into it:

cd /lift

You are now ready to start the evaluation.
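
As a quick sanity check, you can verify that the Java/sbt toolchain is visible inside the container. This is only a sketch; it assumes the image built from the docker/ Dockerfile provides the dependencies listed below:

    docker run --rm lift sh -c 'java -version && which sbt'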

Get started using Nvidia-Docker

  • If you want to use nvidia-docker and have it installed, run the following command from the $ROOT directory to create a docker image called lift:

    docker build -t lift nvidia-docker

  • You can then run the image using the following command from the $ROOT directory:

    nvidia-docker run -it --rm -v `pwd`:/lift lift

This will make the $ROOT directory accessible inside the container at /lift. Inside the container, change into it:

cd /lift

You are now ready to start the evaluation.
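
To confirm that the GPU is visible inside the container, run nvidia-smi, the canonical nvidia-docker check (this assumes the nvidia-docker/ Dockerfile is based on an nvidia/cuda image, as nvidia-docker setups typically are):

    nvidia-docker run --rm lift nvidia-smi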

Installation without Docker on a Linux machine

Reviewers are welcome to skip Docker entirely and install Lift directly on their own Linux machine.

Dependencies

The Lift compiler is implemented in Scala and uses OpenCL to run the generated kernels on an OpenCL device, such as a GPU. Currently, we expect a Linux environment and have the following software dependencies:

  • OpenCL
  • Java 8 SDK
  • CMake
  • G++
  • LLVM / Libclang
  • OpenSSL
  • SBT

For executing scripts and plotting we require:

  • Z shell
  • Python
  • R with ggplot
  • dot

The reference implementations additionally require:

  • OpenGL (libmesa)
  • GLUT and GLEW

For an Ubuntu system these packages satisfy all dependencies (besides SBT):

sudo apt-get update && sudo apt-get install -y default-jdk cmake g++ wget unzip libclang-dev llvm libgl1-mesa-dev freeglut3-dev libglew-dev libz-dev libssl-dev opencl-headers zsh python finger r-base r-cran-ggplot2 apt-transport-https graphviz

These lines will install SBT:

echo "deb https://dl.bintray.com/sbt/debian /" | sudo tee -a /etc/apt/sources.list.d/sbt.list
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 2EE0EA64E40A89B84B2DF73499E82A75642AC823
sudo apt-get update
sudo apt-get install -y sbt
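
Afterwards you can verify the installation; note that the first sbt invocation downloads the sbt launcher and its dependencies, so it takes a while:

    java -version
    sbt sbtVersion   # prints the sbt version; slow on first run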

Building and Getting Started

Overview of folder structure

  • The lift directory contains the implementation of the Lift compiler
  • The kernels directory contains the OpenCL kernels, generated by the Lift compiler, that we used during our evaluation
  • The CLBlast, AMDAPP, NVIDIA_GPU_Computing_SDK, rodinia_3.1, gemm, gemv_N, gemv_T, parboil, and shoc directories contain reference implementations used in the performance comparison

Building Lift

  • To build Lift, change into the lift subdirectory (for the docker container this is /lift/lift) and execute sbt compile:

    cd $ROOT/lift
    sbt compile

This takes some time, as it first bootstraps Scala and then compiles the Lift compiler.
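
If you plan to rebuild repeatedly, the interactive sbt shell is faster because compilation is incremental; this is standard sbt usage, not specific to Lift:

    cd $ROOT/lift
    sbt            # start the interactive sbt shell
    > compile      # recompiles only what changed since the last build
    > exit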

Building Reference Implementations

  • To build the reference implementations used in the experiments, run:

    cd $ROOT
    ./build.zsh

This will unpack and compile the following reference implementations:

  • CLBlast and three standalone applications using it (gemv_N, gemv_T, gemm)
  • NVIDIA SDK
  • AMD SDK
  • Rodinia
  • Parboil
  • SHOC

Experiment Workflow

Overview Experiment Workflow

To evaluate the artifact the reviewers are invited to:

  1. inspect the Lift implementation and investigate how it matches the description in the paper;
  2. reproduce the main performance result from the paper (Figure 8);
  3. generate the OpenCL kernels used in the experimental evaluation.

Inspect the Lift Implementation

The Lift implementation is in the $ROOT/lift folder.

Lift IR Example

The partial dot-product example discussed in Section 4.2 and shown in Listing 1 can be found in $ROOT/lift/src/main/cgo_artifact/Listing1.scala. Running $ROOT/lift/scripts/Listing1 generates the OpenCL code at $ROOT/partialDot.cl and a figure similar to Figure 3 at $ROOT/partialDot.pdf.
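
A minimal invocation, assuming the script is run from $ROOT (the outputs are written to $ROOT as stated above):

    cd $ROOT
    ./lift/scripts/Listing1           # generate the OpenCL code and the IR figure
    ls partialDot.cl partialDot.pdf   # the two generated files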

Organization of classes

Most of the classes shown in the class diagram in Figure 2, such as Expr and FunDecl, can be found in $ROOT/lift/src/main/ir/ast/Expr.scala and $ROOT/lift/src/main/ir/ast/Decl.scala. Implementations of the further patterns discussed in the paper can be found in $ROOT/lift/src/main/ir/ast/ and $ROOT/lift/src/main/opencl/ir/pattern/.
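
A quick way to get an overview of the available patterns is to list these directories:

    ls $ROOT/lift/src/main/ir/ast/
    ls $ROOT/lift/src/main/opencl/ir/pattern/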

Reproduce Results from Figure 8

  • To reproduce the performance results, perform the following steps after completing the build steps above.

  • To select which OpenCL platform and device to use, edit the PLATFORM and DEVICE variables in the $ROOT/run_baseline.zsh and $ROOT/run_lift.zsh scripts. By default, platform 0 and device 0 are selected. (A scripted way to change these is sketched after this list.)

  • For running the reference implementations execute:

    cd $ROOT
    ./run_baseline.zsh

    The results will be recorded in the baseline_results.csv file.

  • For running the Lift implementations set the ARTIFACT_ROOT environment variable to point to the root directory and then execute:

    export ARTIFACT_ROOT=$ROOT
    cd $ARTIFACT_ROOT
    ./run_lift.zsh
    ./parse_lift.zsh

    The results will be recorded in the lift_results.csv file.

  • For plotting the results execute:

    cd $ROOT
    ./plot.r

    This will produce the plot.pdf file, which contains a plot for the selected platform similar to Figure 8.
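
If you prefer to script the platform/device selection from the second step above, a hypothetical one-liner such as the following works, provided the variables appear as plain PLATFORM=... and DEVICE=... assignments in the scripts (check the files first; the exact form may differ):

    # hypothetical: select OpenCL platform 1, device 0 in both run scripts
    sed -i 's/^PLATFORM=.*/PLATFORM=1/; s/^DEVICE=.*/DEVICE=0/' \
        $ROOT/run_baseline.zsh $ROOT/run_lift.zsh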

Generating the Kernels

We provide the kernels we used in the experimental evaluation in the directory $ROOT/kernels.

  • To reproduce the kernels used in the evaluation, perform the following steps after completing the build steps above.

  • For generating the kernels execute:

    cd $ROOT
    ./generate_lift.zsh

    The generated kernels can now be found in the $ROOT/generated_kernels directory.
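
To compare the freshly generated kernels against the ones shipped in $ROOT/kernels, a recursive diff is a reasonable first check; note that regenerated kernels may differ in harmless details such as generated variable names, so some noise is expected:

    cd $ROOT
    diff -r kernels generated_kernels | head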