Date: Oct 05, 2017
New in this release
Nervana Cloud 1.7.0 introduces support for TensorFlow training as well as additional enhancements and bug fixes.
TensorFlow training support: Users can now launch and manage TensorFlow training jobs using the same mechanisms currently offered for Neon, including uploading volume data, importing a model, running interactive sessions, and generating output and log files. TensorFlow models can be trained using ncloud or the web interface by specifying the environment as “tf_1.2.1”. Model logs, checkpoints, metafiles, etc. should be written to the /output directory so that they can be accessed with the model results; see here for an example using cifar10. Flags defined in a TensorFlow model can be passed as additional arguments when training the model. Inference and cloud metric callbacks will be added in a future release.
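As a sketch of the flag convention described above (the flag names here are illustrative assumptions, not flags from the actual cifar10 example), a training script might define its flags so that extra ncloud arguments are parsed at startup, and write its artifacts under /output so they are collected with the model results:

```python
# Hypothetical training-script sketch: flags passed as additional ncloud
# arguments are parsed with argparse; --batch_size and --epochs are
# assumed names for illustration only.
import argparse
import os


def parse_flags(argv=None):
    parser = argparse.ArgumentParser(description="toy TF-style trainer")
    parser.add_argument("--batch_size", type=int, default=128)  # hypothetical flag
    parser.add_argument("--epochs", type=int, default=10)       # hypothetical flag
    # The release notes say artifacts should go to /output; fall back to a
    # local directory when running outside Nervana Cloud.
    default_out = "/output" if os.path.isdir("/output") else "./output"
    parser.add_argument("--output_dir", default=default_out)
    return parser.parse_args(argv)


if __name__ == "__main__":
    flags = parse_flags(["--batch_size", "64"])
    print(flags.batch_size, flags.epochs, flags.output_dir)
```

On the cloud, such flags would be appended as additional arguments to the `ncloud model train` invocation after the script path.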
Multi-volume support: The ncloud command line interface can now run jobs such as train, interact, batch, and stream with multiple volumes. Users can provide more than one volume when launching a job by passing the -v argument with comma-separated volume IDs, as in:
ncloud model train ../neon/examples/mnist_mlp.py -v 3,4,5
Passwords for interactive sessions: When starting an interactive session, the user can now create a password to secure the connection to the Jupyter Notebook through the web user interface. If the password is lost or forgotten, it can be shown with ncloud interact show or ncloud interact list; there is no need to search the log file for it.
This release of Nervana Cloud includes a number of additional features and fixes:
- New header styling and efficiency improvements for the web user interface
- Improvements to volume upload for directories containing large files
- Available training environments can be listed with ncloud environment list
- When a batch job fails, the status is reported as an error
- ncloud volume ln can now be used when linking to volumes that reside locally
Known limitation: The datasets command line interface has been disabled. This functionality has been replaced by volumes, which support everything datasets provided, plus more. Users can still run training jobs against existing datasets.
For the complete history, see the ChangeLog.