Announcing NeoPulse® on AWS SageMaker!

Just announced at re:Invent, the NeoPulse® Framework is now available on AWS SageMaker!

This documentation is designed to help users quickly build and deploy custom AI models using SageMaker with NeoPulse®.

There are two NeoPulse® offerings available in the SageMaker Marketplace: one for CPU-only instances and one for GPU-enabled instances.

AWS Prerequisites

An AWS account is required to use NeoPulse® with AWS SageMaker.

NOTE: AWS SageMaker is not available in the AWS Free Tier.

After setting up an AWS account if necessary, log in through the console:

AWS Console
Figure 1: AWS console login

Next, configure an IAM role with the appropriate permissions for SageMaker to manage AWS resources, as well as an S3 bucket to hold the data and scripts necessary to train the model.

  • IAM Role

    To create a role for SageMaker execution:

    1. Select "Roles" on the left of the IAM console screen, and then click the "Create role" button:

      Create Role
      Figure 2: Create SageMaker Role

    2. Choose SageMaker from the list of services:

      SageMaker Service
      Figure 3: Choose SageMaker Service

    3. Add tags and name the role.

    Once this is done, a role should be available that looks similar to this:

    Final SageMaker Role
    Figure 4: SageMaker Role
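The console steps above can also be performed programmatically. Below is a minimal sketch of the trust policy that lets the SageMaker service assume the role; the role name is a placeholder, and since creating the role requires boto3 and valid AWS credentials, the call itself is left as a comment:

```python
import json

# Trust policy allowing the SageMaker service to assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "sagemaker.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# With boto3 and valid credentials, the role could then be created with:
# import boto3
# iam = boto3.client("iam")
# iam.create_role(
#     RoleName="NeoPulseSageMakerRole",  # placeholder name
#     AssumeRolePolicyDocument=json.dumps(trust_policy),
# )
# and a managed policy such as AmazonSageMakerFullAccess attached to it.
print(json.dumps(trust_policy, indent=2))
```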

  • S3 Bucket

    Next, create an S3 bucket to hold the NML script, CSV file, and training data that will be used to build the model.

    1. From the S3 Service select "Create bucket":

      Create S3 Bucket
      Figure 5: Create S3 Bucket

    2. Name the bucket and select the region where SageMaker will do the training:

      Name S3 Bucket
      Figure 6: S3 Bucket, Name and Region

    3. Add tags and set options to comply with best practices and click "Next":

      Tag S3 Bucket
      Figure 7: S3 Bucket, Tags and Options

    4. Similarly, set the S3 bucket permissions in accordance with best practices, then select "Next".

    5. Review the selected options, then click "Create bucket".

    6. Now either upload data to the S3 bucket through the console or sync the data using the AWS CLI. In this example, the data is uploaded through the console:

      Upload S3 Bucket
      Figure 9: Console upload to S3 bucket

      This example uses the IMDB sentiment analysis dataset; the S3 bucket contains two files: training_data.csv and train.nml.

      • The file training_data.csv has two column headers: "Label" (the class identifier to be learned during training), and "Review" (the text of the movie review). Each line contains a single movie review, and there are 50,000 reviews in the file (25,000 for training, 25,000 for validation):

        training data
        Figure 10: First two lines of training_data.csv file

      • The bucket also MUST contain a file named train.nml. This is the master file describing the model to be built using NeoPulse®. See the documentation for a detailed discussion of how to use the NeoPulse® Modeling Language to build custom models with different data types including text, numeric, image, audio, and video. An NML script for the sentiment analysis example is shown here:

        train NML
        Figure 11: Example train.nml file
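The expected CSV layout can be sketched with the Python standard library. The two review rows and the label values below are invented placeholders, not actual IMDB data; only the two-column "Label"/"Review" structure comes from the description above:

```python
import csv

# Write a miniature training_data.csv with the same two-column layout:
# "Label" (the class to learn) and "Review" (free text).
rows = [
    {"Label": "positive", "Review": "An absolute delight from start to finish."},
    {"Label": "negative", "Review": "Two hours I will never get back."},
]

with open("training_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["Label", "Review"])
    writer.writeheader()
    writer.writerows(rows)

# Read it back to confirm the structure round-trips.
with open("training_data.csv", newline="") as f:
    reader = csv.DictReader(f)
    assert reader.fieldnames == ["Label", "Review"]
```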

  • Algorithm Subscription

    Once the IAM role and S3 bucket have been set up, and the data uploaded to S3, it's time to subscribe to the appropriate NeoPulse® offering. Determine whether training will be done on a CPU or GPU-enabled instance. In this example, a CPU instance will be used.

    1. Go to the appropriate offering in the AWS Marketplace and click "Continue to Subscribe":

      Figure 12: Subscribe to NeoPulse®

    2. Review the terms and conditions, and select "Accept Offer":

      Figure 13: Accept Terms and Conditions

    3. Click "Continue to configuration", select the region containing the S3 bucket configured above, and then click "View in SageMaker":

      Figure 14: Configure Subscription

    The algorithm should now appear in the SageMaker console under Algorithms: My Subscriptions:

    Figure 15: Successful Subscription

Training Jobs

Now that the prerequisites are satisfied, create a training job to build a custom AI model using the data in the S3 bucket.

  1. From the "Algorithms: My Subscriptions" tab of the SageMaker console, select the NeoPulse® subscription, and then select "Create training job" from the "Actions" drop-down menu:

    Figure 16: Create training job

  2. Give the training job a unique name, and select the IAM role created above. NOTE: In the Resource configuration section, make sure that the "Additional volume per instance (GB)" field is large enough to hold the intermediate build products. The amount of storage needed varies with a number of factors, including, but not limited to, the model architecture and the number of training epochs:

    Figure 17: Specify job settings.

    • If an IAM role was not created above, a new one can be created now by selecting "Create a new role" from the IAM role drop-down menu. This brings up the following dialog:

      Figure 18: Create new IAM role for training job

  3. In the "Input data configuration" box, specify the S3 location of the bucket holding the training data and NML script:

    Figure 19: Input data configuration

  4. In the "Output data configuration" box, specify the S3 location of the bucket. SageMaker will create a location there for the model artifacts when training completes.

    Figure 20: Output data configuration

  5. Finally, click the "Create training job" button to start training the model. The training job will now appear under the "Training jobs" section of the SageMaker console:

    Figure 21: Training job started

  6. When the training job is complete, the model artifacts will be located in a model.tar.gz file in the training_job_name/output folder of the S3 bucket that you specified:

    Figure 22: Model artifacts in S3 bucket
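The console steps above map onto the SageMaker `create_training_job` API. Below is a minimal sketch of the request parameters; every name, ARN, S3 URI, and instance type is a placeholder, and since issuing the call requires boto3 and AWS credentials, the call itself is left as a comment:

```python
# Request parameters mirroring the console fields above.
# All names, ARNs, and S3 URIs are placeholders.
training_job = {
    "TrainingJobName": "neopulse-sentiment-demo",
    "AlgorithmSpecification": {
        # ARN of the subscribed NeoPulse® Marketplace algorithm.
        "AlgorithmName": "arn:aws:sagemaker:us-east-1:123456789012:algorithm/neopulse-cpu",
        "TrainingInputMode": "File",
    },
    "RoleArn": "arn:aws:iam::123456789012:role/NeoPulseSageMakerRole",
    "InputDataConfig": [
        {
            "ChannelName": "training",
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    # Bucket holding training_data.csv and train.nml.
                    "S3Uri": "s3://my-neopulse-bucket/",
                    "S3DataDistributionType": "FullyReplicated",
                }
            },
        }
    ],
    "OutputDataConfig": {"S3OutputPath": "s3://my-neopulse-bucket/output/"},
    "ResourceConfig": {
        "InstanceType": "ml.m5.xlarge",
        "InstanceCount": 1,
        # "Additional volume per instance (GB)" in the console; size it
        # generously to hold the intermediate build products.
        "VolumeSizeInGB": 50,
    },
    "StoppingCondition": {"MaxRuntimeInSeconds": 24 * 60 * 60},
}

# With credentials configured, the job would be started with:
# import boto3
# boto3.client("sagemaker").create_training_job(**training_job)
```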

Inside model.tar.gz, there are three model artifacts:

  • neopulse.pim: The Portable Inference Model. This PIM file can be used on any machine that runs v3.0 of the NeoPulse® Query Runtime (to be released soon).

  • results.json: This is a JSON document containing metadata about the trained model. It includes all of the parameters chosen by the oracle when the keyword auto was used in the NML script, as well as the metrics (loss, validation loss, etc.) that were computed for the trained model.

  • logs/: This is the directory containing the Tensorboard logs written during training.

NOTE: While you can download the model artifacts and examine them (and even transfer the PIM to another machine running NPQR v3.0 or greater), you must leave them in the S3 bucket to use them for inference in SageMaker.
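A downloaded copy of model.tar.gz can be inspected with the standard library. Since this sketch has no real artifact to hand, it first synthesizes a stand-in archive; the results.json fields it contains are invented placeholders, not the actual NeoPulse® schema:

```python
import io
import json
import tarfile

# Synthesize a stand-in model.tar.gz so the sketch is runnable; a real
# archive would be downloaded from the training job's S3 output path.
fake_results = {"metrics": {"loss": 0.31, "val_loss": 0.35}}  # placeholder values
payload = json.dumps(fake_results).encode()
with tarfile.open("model.tar.gz", "w:gz") as tar:
    info = tarfile.TarInfo(name="results.json")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

# List the archive contents and read results.json back out.
with tarfile.open("model.tar.gz", "r:gz") as tar:
    names = tar.getnames()
    results = json.load(tar.extractfile("results.json"))

print(names)               # the artifact files in the archive
print(results["metrics"])  # trained-model metrics such as loss
```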

Create Model Package

Once training is complete, create a model package in SageMaker to be used for transform jobs (real-time or batch).

  1. From the completed training job, select "Create model package":

    Figure 23: Create model package

  2. Give the model package a unique name, a description, and click "Next":

    Figure 24: Model name and description

  3. SageMaker will ask whether or not to validate the model package. Validation is only necessary if you intend to list the model for sale in the Marketplace. Select "No", and click "Create model package" to finish creating the model:

    Figure 25: Finish model creation

  4. The model package should be successfully created and appear in the SageMaker console:

    Figure 26: Completed model package

Create Model for Inference

Once a model package has been created, it can be used to create a model for transform jobs (real-time or batch).

  1. From the SageMaker console, under Inference -> Models, select "Create model":

    Figure 27: Create model for inference

  2. Name the model, provide the appropriate IAM role, and click "Create model":

    Figure 28: Name model and provide IAM role

The completed model should now appear in the SageMaker console:

Figure 29: Successful model creation

Transform Jobs

Since the NeoPulse® Framework is designed to allow users to build models with virtually any data type, transform jobs (either real-time or batch) take input in the form of a .zip file. That zip file MUST contain a file named query.csv referencing any other data files present in the zip archive to be used for inference.
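Packaging an inference request can be sketched with the standard library. Only the query.csv file name is mandated above; the single "Review" column used here is an assumption modeled on the training CSV layout:

```python
import zipfile

# query.csv must be present in the archive; its single "Review" column
# is an assumption based on the training CSV described earlier.
query_csv = "Review\nA thoroughly enjoyable film.\n"

with zipfile.ZipFile("query.zip", "w") as zf:
    zf.writestr("query.csv", query_csv)
    # Any data files referenced by query.csv would be added alongside it,
    # e.g. zf.write("review_0001.txt")

# Verify the archive meets the requirement above.
with zipfile.ZipFile("query.zip") as zf:
    assert "query.csv" in zf.namelist()
```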

  • Batch Transform Jobs

    Batch transform jobs are completed asynchronously and the results are written directly to an S3 bucket. The input is a zip file, also present in an S3 bucket. To create a batch transform job using the model created above:

    1. From the SageMaker Batch transform console, select "Create batch transform job":

      Figure 30: Create batch transform job

    2. Name the job and select the instance type. In the Input section, select "S3 prefix" as the S3 data type, and specify "application/zip" as the Content type. Finally, specify the location of the .zip file in S3:

      Figure 31: Name batch transform job

    3. Specify the S3 bucket in which to write the output and click "Create job":

      Figure 32: Output location

    The batch transform job should be created successfully:

    Figure 33: Successful job creation

    The output product is a CSV file located in the S3 location that was specified above:

    Figure 34: Output S3 bucket
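The batch transform settings above map onto the SageMaker `create_transform_job` API. Below is a minimal sketch of the request; the job, model, bucket names, and instance type are placeholders, and the call itself needs boto3 and AWS credentials, so it is left as a comment:

```python
# Request parameters mirroring the batch transform console fields above.
# All names and S3 URIs are placeholders.
transform_job = {
    "TransformJobName": "neopulse-sentiment-batch",
    "ModelName": "neopulse-sentiment-model",
    "TransformInput": {
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://my-neopulse-bucket/query.zip",
            }
        },
        # NeoPulse® transform input is always a zip archive.
        "ContentType": "application/zip",
    },
    "TransformOutput": {"S3OutputPath": "s3://my-neopulse-bucket/batch-output/"},
    "TransformResources": {"InstanceType": "ml.m5.xlarge", "InstanceCount": 1},
}

# With credentials configured:
# import boto3
# boto3.client("sagemaker").create_transform_job(**transform_job)
```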

  • Real-time Transform Jobs

    For real-time transforms, SageMaker makes it possible to deploy an endpoint that accepts an HTTP POST request and returns the inference result in real time. To create such an endpoint:

    1. From the SageMaker model console, select the model for the endpoint and click "Create endpoint":

      Figure 35: Endpoint configuration creation

    2. Give the endpoint a name and select "Create endpoint configuration":

      Figure 36: Endpoint configuration

    3. Under "Production variants" scroll to the right and click on "Edit":

      Figure 37: Edit Production variants

    4. Configure the endpoint for the appropriate production environment and click "Save", then click "Create Endpoint Configuration":

      Figure 38: Endpoint configuration

    5. Finally click "Create endpoint":

      Figure 39: Endpoint configuration

    The endpoint should now appear in the SageMaker console:

    Figure 40: Endpoint successfully created

    Clicking on the endpoint will bring up detailed information, including the API:

    Figure 41: Endpoint details

    The endpoint can then be invoked per the AWS SageMaker endpoint documentation. For an endpoint that hosts a model built using NeoPulse®, the body should contain a binary zip file, and the "Content-Type" header should be set to "application/zip".
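Invoking the endpoint can be sketched as follows. The zip request body is built in memory with the standard library (the query.csv layout is an assumption modeled on the training CSV), and the actual `invoke_endpoint` call, which needs boto3, AWS credentials, and a real endpoint name, is left as a comment:

```python
import io
import zipfile

# Build the zip request body in memory; the query.csv column layout is
# an assumption based on the training CSV described earlier.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("query.csv", "Review\nA thoroughly enjoyable film.\n")
body = buf.getvalue()

# Zip archives begin with the "PK" local-file-header signature.
assert body[:2] == b"PK"

# With credentials configured, the endpoint would be invoked with:
# import boto3
# runtime = boto3.client("sagemaker-runtime")
# response = runtime.invoke_endpoint(
#     EndpointName="neopulse-sentiment-endpoint",  # placeholder
#     ContentType="application/zip",
#     Body=body,
# )
# print(response["Body"].read())
```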

    NOTE: You should delete endpoints when not in use to avoid unnecessary AWS charges.