This page will help you get started with Bitfusion Boost on AWS. We will show you how to distribute your GPU-accelerated application across multiple GPU instances without changing your code or doing any complicated cluster setup. You’ll be up and running in a jiffy!

Launch a Bitfusion Boost AWS Cluster


A Bitfusion Boost AWS Cluster can be launched directly from the AWS Marketplace. The following AMIs are presently Boost enabled:

To launch one of the above AMIs in cluster mode, simply select one of the recommended cluster configurations on the right-hand side of the screen, as shown in the image below.

Bitfusion Boost AWS Cluster

These configurations are just a starting point; they direct you to a predefined AWS CloudFormation (CFN) template, which you can then customize via several parameters:

| Parameter | Description | Default |
| --- | --- | --- |
| Stack Name | The name used to identify this particular cluster of machines. | SelectedBitfusionClusterName |
| Application Instance Type | The type of instance you will log into and use to run your application. | g2.8xlarge |
| How many additional GPUs do you want to attach to the instances? | The number of additional GPUs you would like to attach to your instance. Keep in mind that if your application instance already has GPUs, the total number of GPUs will be the GPUs in that instance plus the GPUs you specify here. | 4 |
| Use placement groups? | A placement group is a logical grouping of instances within a single Availability Zone. Placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. Keep in mind that availability is usually limited for larger and mixed system types, which may result in a capacity error and the cluster failing to launch. You can read more at https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html | no |
| Key Name | The key you will use to log into the application instances. If you have not created a key yet, you can do so here. This is a required field and a key must be selected. | (required) |
| Admin Location | The IP range from which you are allowed to access the instance. An entry of 0.0.0.0/0 will allow connections to the application instance from all IP addresses. This is a required field. | (required) |
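If you prefer the command line to the Marketplace console, the same stack can be launched with the AWS CLI. The sketch below is only illustrative: the template URL and the parameter key names are placeholders (assumptions, not the template’s actual keys), so substitute the values from the real Bitfusion CFN template.

aws cloudformation create-stack \
  --stack-name MyBoostCluster \
  --template-url https://s3.amazonaws.com/<bucket>/<bitfusion-boost-template>.json \
  --parameters ParameterKey=ApplicationInstanceType,ParameterValue=g2.8xlarge \
               ParameterKey=AdditionalGPUs,ParameterValue=4 \
               ParameterKey=UsePlacementGroups,ParameterValue=no \
               ParameterKey=KeyName,ParameterValue=<your key name> \
               ParameterKey=AdminLocation,ParameterValue=0.0.0.0/0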

The cluster creation process will take about 5–10 minutes, as we need to provision the nodes, configure the network, and configure Bitfusion Boost. Once the cluster is up and running, simply log into the application instance and run or deploy your application.
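You can also monitor the launch from the command line; once the stack status reaches CREATE_COMPLETE the cluster is ready. The stack name is whatever you chose above, and the exact output keys (for example, the application instance address) depend on the template.

# Poll the overall stack status
aws cloudformation describe-stacks --stack-name MyBoostCluster \
  --query 'Stacks[0].StackStatus' --output text

# List the stack outputs once creation has completed
aws cloudformation describe-stacks --stack-name MyBoostCluster \
  --query 'Stacks[0].Outputs' --output table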

Running Your Application


Running your application with Bitfusion Boost is a single-line effort. Simply run your application from the Boost Application Instance using the command below, and it will automatically take advantage of the Boost Cluster configuration that you created.

bfboost client <application with options>

For example, the command below will list all available GPU devices in the cluster:

bfboost client /usr/local/cuda/samples/bin/x86_64/linux/release/deviceQuery
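Any GPU-accelerated application is wrapped the same way; only the bfboost client prefix is Boost-specific. For example, assuming a hypothetical CUDA-based training script named train.py already present on the instance, you would run:

bfboost client python train.py --batch-size 128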

Boost Cluster Components


Bitfusion Boost cluster configurations can be decomposed into two components:

Boost Application Instances (Clients):
These are the instances you will log into and on which you will run your application. They can be, but do not have to be, instances that have GPUs.

Boost GPU Instances (Servers):
Based on the number of additional GPUs that you want to attach to the application instances, we will automatically provision and configure additional GPU instances on AWS to work together with your application instances.

Boost Cluster Configurations


There are many flexible configurations possible with Bitfusion Boost; however, most of them fall into three categories: One-to-Many, Many-to-One, and Many-to-Many.

One-to-Many: Single Boost Application Instance utilizes one or more additional GPUs
In a One-to-Many configuration, a Boost Application Instance utilizes GPUs present in additional instances. The Boost Application Instance may have local GPUs, but is not required to. Via Boost, all GPUs in the cluster are presented to the application as a single virtual instance. This configuration is best for maximum performance.

Boost One to Many Configuration

Many-to-One: Several Boost Application Instances share single GPU instance
In a Many-to-One configuration, there are multiple Boost Application Instances which utilize a single GPU resource. The Boost Application Instances may run different GPU applications; with Boost, the GPU resource can be shared by multiple applications concurrently. This configuration is best for achieving maximum utilization of high-performance resources.

Boost Many to One Configuration

Many-to-Many: Several Boost Application Instances share several GPU instances
In a Many-to-Many configuration, there are multiple Boost Application Instances which utilize multiple GPU resources. It is a combination of One-to-Many and Many-to-One and results in good performance as well as good utilization.

Boost Many to Many Configuration
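Which of these configurations you end up with is ultimately determined by the /etc/bitfusionio/adaptor.conf file on each Boost Application Instance (described in the manual setup section below): the file simply lists, one per line, the IP addresses of the GPU Instances that Application Instance should use. As a sketch using example private IPs, a One-to-Many setup is a single Application Instance whose adaptor.conf lists several GPU Instances:

10.0.0.12
10.0.0.13

whereas a Many-to-One setup is several Application Instances whose adaptor.conf files all contain the same single line (for example, 10.0.0.12), so they share that one GPU Instance.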

Launching and Configuring Bitfusion Boost Manually


All of the configurations above can be created using our CFN templates. However, a need may occasionally arise for a custom configuration. You can always configure a cluster manually using the following steps:

  • Start a g2.2xlarge or g2.8xlarge GPU instance with one of the AMIs listed at the top of this guide.
  • SSH into each GPU Instance:
ssh -i <path to your pem file> ubuntu@<Boost GPU Instance Public IP>
  • Start the Boost Server on each GPU Instance:
sudo service bfboost start

  • To ensure the Boost Server starts after a reboot, remove /etc/init/bfboost.override:
sudo rm /etc/init/bfboost.override
  • Start a Boost Application Instance with one of the AMIs listed at the top of this guide.
  • SSH into the Boost Application Instance:
ssh -i <path to your pem file> ubuntu@<Boost Application Instance Public IP>
  • Edit the server configuration file using your favorite editor:
sudo vi /etc/bitfusionio/adaptor.conf
  • Add the internal IP address of each Bitfusion Boost GPU Instance with which you want the Boost Application Instance to communicate to the adaptor.conf file. Make sure to list only one IP address per line, and then save the file (a scripted alternative is shown after this list). The IP addresses below are for example purposes only; grab the actual IP addresses of your servers from the AWS console.
52.3.209.127
52.3.209.128
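As an alternative to editing the file by hand, the same configuration can be written from the shell. This is only a sketch; the addresses below are placeholder private IPs for two GPU Instances.

printf '%s\n' 10.0.0.12 10.0.0.13 | sudo tee /etc/bitfusionio/adaptor.conf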

Ideally, the Application Instance and GPU Instances should be within the same VPC and you should use private IP addresses, thus incurring no bandwidth charges. If you specify public IP addresses, be aware that you may incur bandwidth charges.
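You can also look up the private IP addresses of your instances with the AWS CLI instead of the console; the filter below simply restricts the output to running instances, and you can add your own tag filters to narrow it further.

aws ec2 describe-instances \
  --filters "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].[InstanceId,PrivateIpAddress]' \
  --output table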

If you are adding the Application Instance itself to the cluster, ensure its IP is the first line in the file.
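For example, if the Application Instance’s own private IP were 10.0.0.5 (an example value) and it should contribute its local GPUs alongside two GPU Instances, the file would look like this:

10.0.0.5
10.0.0.12
10.0.0.13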

Test that the Boost Application Instance can see the GPUs in the specified GPU Instance(s) by running the following command:

bfboost client /usr/local/cuda/samples/bin/x86_64/linux/release/deviceQuery

You should see a list of all the remote GPUs which are available to the Boost Application Instance.
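If you just want a quick count of the GPUs Boost exposes, you can filter the output; this assumes the standard CUDA deviceQuery output format, which prints one "Device N:" header line per GPU.

bfboost client /usr/local/cuda/samples/bin/x86_64/linux/release/deviceQuery | grep -c '^Device '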