Date: Nov 13, 2017
New in this release
Intel® Nervana™ Cloud 1.8.0 introduces beta support for TensorFlow* inference as well as additional enhancements and bug fixes.
TensorFlow Beta Support: In the previous release, we introduced the ability to launch and manage TensorFlow training jobs using the same mechanisms used for Neon. With 1.8.0, Nervana Cloud has been updated to support TensorFlow version 1.3. To train TensorFlow models, make sure you specify the tf_1.3.0 framework version. We encourage feedback on this functionality as we work to harden it from beta to gold quality. At this time, cloud metric callbacks are not supported.
TensorFlow Streaming and Batch Inference: With 1.8.0, you can run batch and streaming prediction with a TensorFlow model trained on Intel® Nervana™ Cloud. For the model to be loaded for inference, it must be exported to the /output directory as a saved_model.pb file, just as if you were using TensorFlow Serving. Then simply kick off inference jobs using the same mechanisms you would use for Neon. There is no need to add an additional --environment argument, since the environment used to train the model is already known. If your volume data contains images that are not the same size as those used for model training, you must use custom preprocessing.
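The release notes do not show what such custom preprocessing looks like; as an illustration only, here is a minimal NumPy sketch that center-crops and nearest-neighbor-resizes a mismatched image to the training input size. The function name, crop strategy, and resize method are assumptions for demonstration, not the platform's preprocessing API:

```python
import numpy as np

def preprocess(image, target_h, target_w):
    """Center-crop `image` (an HxWxC array) to the target aspect ratio,
    then nearest-neighbor resize it to (target_h, target_w)."""
    h, w = image.shape[:2]
    # Largest crop with the target aspect ratio that fits in the image.
    scale = min(h / target_h, w / target_w)
    crop_h, crop_w = int(target_h * scale), int(target_w * scale)
    top, left = (h - crop_h) // 2, (w - crop_w) // 2
    cropped = image[top:top + crop_h, left:left + crop_w]
    # Nearest-neighbor resize via integer index mapping.
    rows = np.arange(target_h) * crop_h // target_h
    cols = np.arange(target_w) * crop_w // target_w
    return cropped[rows][:, cols]
```

A real pipeline would typically also apply whatever normalization (mean subtraction, scaling) the model was trained with.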
This release includes a number of additional features and fixes:
- Ability to undeploy all running streaming predictions with ncloud stream undeploy -a
- Start viewing logs for interactive sessions with ncloud interact start -L
- Run batch predict on imported models within the web user interface
- Fixed the file count shown when uploading large datasets
- Improved styling and scrolling in the web user interface
Known Limitation: The datasets command-line interface is disabled in this release. Its functionality has been replaced by volumes, which support everything datasets provided, plus more. Users can still run training jobs against existing datasets.
For the complete history, see the ChangeLog.