The train construct defines how to compile the model, how long to train it, and whether to launch the dashboard for visualizing the model's performance during training. It contains three blocks, as shown in this example:
train:
    compile:
        optimizer = SGD: [lr = 0.001],
        loss = 'binary_crossentropy',
        metrics = [Accuracy:] ;
    run:
        epochs = 8 ;
    dashboard: ;
compile
This component tells the compiler how to build the model. It requires a comma-separated list of two arguments, and takes an optional third, terminated by a semi-colon.
The mandatory arguments are:
optimizer - The optimization algorithm to use. NeoPulse® AI Studio provides access to all of the optimizers in the Keras library. The optimizer can be set in two ways:
optimizer = 'optimizer' - sets the optimizer with its default parameters.
optimizer = optimizer: [param1=value, ...] - sets the optimizer and allows the user to alter the default parameters.
NOTE: the auto keyword may be used to let the oracle choose the optimizer.
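As a sketch, the three forms could be written as follows (the momentum parameter shown for SGD is assumed from the Keras SGD API):

    optimizer = 'adam'                            #"default parameters"#
    optimizer = SGD: [lr = 0.01, momentum = 0.9]  #"overriding defaults"#
    optimizer = auto                              #"let the oracle choose"#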
loss - The loss function to use. NeoPulse® AI Studio provides access to all of the loss functions in the Keras library, and also supports some additional loss functions beyond those in Keras. Loss functions can be set with:
loss = 'function_name'
NOTE: the auto keyword may be used to let the oracle choose the loss function.
metrics - OPTIONAL. The default metric for NeoPulse® AI Studio is just the training loss. You can also track the accuracy of the model by setting:
metrics = [Accuracy:]
in the compile component.
NOTE: You must specify an optimizer and a loss function, or ask the AI oracle to pick one by using the auto keyword.
run
This component has two arguments (with their default values shown):
epochs = 1 - Number of times to train on the data.
initial_epoch = 0 - Epoch at which to start training (useful when retraining or fine-tuning an already trained model).
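For example, to continue fine-tuning a model that has already been trained for 10 epochs, a run component might look like the following (assuming both arguments can be given together, comma-separated, in the same style as the compile component):

    run:
        epochs = 20,
        initial_epoch = 10 ;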
dashboard
This component enables NeoPulse® AI Studio to provide visualization of model training using TensorBoard. It does not take any arguments at this time.
train:
    compile:
        optimizer = SGD: [lr = 0.001],
        loss = auto,
        metrics = [Accuracy:] ;
    run:
        epochs = 8 ;
    dashboard: ;
This example uses the SGD optimizer with a learning rate of 0.001, asks the AI oracle to choose the loss function, tracks the accuracy of the model as well as the loss, and trains for eight epochs on the dataset.
Starting with version 2.0, NeoPulse® AI Studio makes available a state-of-the-art automatic hyper-parameter optimization algorithm: Spectral Optimization. Users can provide lists of hyper-parameter choices for the Spectral Optimization algorithm to search over during a pre-training optimization stage; the model is then trained with the best combination of hyper-parameters for the specified number of epochs. AI Studio currently supports optimization over the following hyper-parameters: optimizer, learning rate, momentum, decay rate, and batch size.
To use spectral hyper-parameter optimization, the NML must be defined a little differently than for regular model training.
First, set the oracle mode to "spectral_opt" at the beginning of the NML:
oracle("mode") = "spectral_opt"
Then declare the choices for any of the five supported hyper-parameters in the compile component of the train construct. To set a hyper-parameter to a single fixed value, declare it as a list with one choice. If no choices are given for a hyper-parameter, the system uses that hyper-parameter's default choices list.
- Optimizer choices are declared with the keyword "opt_options". The optimizers currently supported by AI Studio are "sgd", "rmsprop", "adam", and "adamax". The default choices list for opt_options is:
opt_options = ['sgd', 'rmsprop', 'adam', 'adamax'];
- Learning rate choices are declared with the keyword "lr_options"; each choice can be any positive number. The default choices list for lr_options is:
lr_options = [0.3, 0.1, 0.03, 0.01, 0.003, 0.001, 0.0003, 0.0001];
- Momentum choices are declared with the keyword "momentum_options"; each choice can be any number between 0 and 1 (inclusive). The default choices list for momentum_options is:
momentum_options = [0.99, 0.9, 0];
- Decay rate choices are declared with the keyword "decay_options"; each choice can be any non-negative number. The default choices list for decay_options is:
decay_options = [0.0001, 0];
- Batch size choices are declared with the keyword "batch_options"; each choice can be any positive integer. The default choices list for batch_options is:
batch_options = [32, 64, 128, 256];
NOTE: When optimizing over batch size, make sure that the batch sizes are all small enough so that an entire batch of data will fit in memory on the GPU.
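As a sketch, a compile component that fixes the optimizer to a single value while letting Spectral Optimization search over learning rate and batch size (falling back to the default choices lists for momentum and decay) might look like this (the option values are illustrative):

    compile:
        opt_options = ['adam'],
        lr_options = [0.01, 0.001, 0.0001],
        batch_options = [32, 64],
        loss = 'categorical_crossentropy',
        metrics = [Accuracy:] ;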
oracle("mode") = "spectral_opt"
oracle("gpus") = [0,2] #"If you only want to use the first and third GPUs, referenced by ID"#
oracle("gpus") = 2 #"Use the first two GPUs"#

source:
    bind = "training_data.csv" ;
    input:
        x ~ from "Image" -> image: [shape = [28,28], channels = 1] -> ImageDataGenerator: [rescale=0.00392156862745098] ;
    output:
        y ~ from "Label" -> flat: -> FlatDataGenerator: ;
    params:
        shuffle = True, shuffle_init = True ;

architecture:
    input: x1 ~ image: [shape = [28,28], channels = 1] ;
    output: y1 ~ flat: ;

    x1 -> Conv2D: [32,[3,3]] -> Activation: ['relu']
       -> Conv2D: [32,[3,3]] -> Activation: ['relu']
       -> MaxPooling2D: [pool_size=2]
       -> Conv2D: [64,[3,3]] -> Activation: ['relu']
       -> Conv2D: [64,[3,3]] -> Activation: ['relu']
       -> MaxPooling2D: [pool_size=2]
       -> Flatten: -> Dense: -> Activation: ['softmax'] -> y1 ;

train:
    compile:
        opt_options = ['sgd', 'adam', 'adamax'],
        lr_options = [0.03, 0.01, 0.003, 0.001, 0.0003],
        momentum_options = [0.99, 0.9, 0.0],
        decay_options = [0.0001, 0.0],
        batch_options = [32,64,128],
        loss = 'categorical_crossentropy',
        metrics = [Accuracy:] ;
    run:
        epochs = 2 ;
    dashboard: ;
At this time, NML does not support user-defined metrics.