Getting Started

ncloud is a command-line client that helps you use and manage Nervana’s deep learning cloud.

Before installing it, make sure you have the following software:

  • python (2.7 or higher supported, including python 3)
  • pip

ncloud is distributed as a Python wheel package, so installation is as simple as:

# (optional) first create a sandboxed python virtual environment:
#     virtualenv venv
#     . venv/bin/activate

$ pip install ncloud
# you may need to prefix the command above with sudo if you see permission
# issues when installing system-wide instead of inside a virtual environment.

This creates a new executable script called ncloud, which should now be available on your standard command PATH.

If you are running on Windows, please also read Windows Support.

Please contact Nervana Systems (info@nervanasys.com) to receive your login credentials. Once you receive them, enter them as prompted.

$ ncloud configure
INFO:root:Updating ~/.ncloudrc configuration.  Defaults are in '[]'
email [nervana@nervanasys.com]:
password:
tenant [Nervana Systems]:

You are now ready to start using the command line tool!

Once you have ncloud installed, you can upgrade it to the latest release at any time via: ncloud upgrade.

You can check which version of ncloud you are running via ncloud --version.

Note that the tutorial and command reference below are current as of the v2.0.0 release. For earlier ncloud releases, please see a prior version of this page.

ncloud Tutorial

The screencast video above walks new users through some of the core ncloud commands, several of which are also outlined below.

To begin, let’s go through the process of training a neon model. We’ll use a small, fully connected image classification network trained on the MNIST handwritten digit dataset. You should download a copy of mnist_mlp.py and save it in your current working directory.

$ ncloud model train mnist_mlp.py
------------------------------
|      id      |   status    |
---------------+--------------
|           1  |   Received  |
------------------------------

Folder Training

Another way to train a model is by passing in the folder that contains it. This gives you the flexibility to include custom dependencies alongside your training job. To indicate which file is your training script, name it __main__.py and include the -p parameter, or designate a different file with the -s option (see below).

$ ncloud model train -p mnist_mlp
------------------------------
|      id      |   status    |
---------------+--------------
|           1  |   Received  |
------------------------------

You can also pass the -s (--main-file) parameter to specify a file you’d like treated as the __main__.py of your folder. Doing so means you don’t need an actual __main__.py file in your training folder.

$ ncloud model train -p mnist_mlp --main-file train.py
------------------------------
|      id      |   status    |
---------------+--------------
|           1  |   Received  |
------------------------------

A third way to train a model is by passing a custom code url and command you wish to run. This works well when you have a git repository you’d like to be the basis of your training job. If the repo is private, you’ll need to contact us to get it whitelisted appropriately.

$ ncloud model train --custom-code-url myGitRepoUrl -- myCommandToRun
------------------------------
|      id      |   status    |
---------------+--------------
|           1  |   Received  |
------------------------------

Additionally, you can add custom neon parameters to any of your training jobs. To do so, type a double dash (--) and then your training script and custom args as shown below.

$ ncloud model train -- mnist_mlp.py --customFlag customValue --otherCustomFlag otherCustomValue
------------------------------
|      id      |   status    |
---------------+--------------
|           1  |   Received  |
------------------------------

If you’d like to train one of our neon examples but provide your own manifest or manifest_root files, you may do so by providing --manifest or --manifest_root args.

$ ncloud model train -- nmt --manifest "\"train:/myDataFolder/manifest.csv\"" --manifest "\"val:/myDataFolder/val_manifest.csv\"" --manifest_root "/myDataFolder"
------------------------------
|      id      |   status    |
---------------+--------------
|           1  |   Received  |
------------------------------
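The doubled, escaped quotes around the --manifest values are there because the shell strips one layer of quoting before ncloud ever sees the argument; the escaped inner quotes survive as part of the value. As a quick illustration of that shell word-splitting (plain Python, not part of ncloud), shlex.split shows what the command actually receives:

```python
import shlex

# The --manifest argument as typed at the shell prompt above:
# outer quotes for the shell, escaped inner quotes for ncloud.
typed = r'--manifest "\"train:/myDataFolder/manifest.csv\""'

# shlex.split mimics POSIX shell word-splitting: the outer quotes are
# consumed by the "shell", while the escaped inner quotes are preserved.
args = shlex.split(typed)
print(args)
# → ['--manifest', '"train:/myDataFolder/manifest.csv"']
```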

Congratulations! You’ve submitted your first model training job. Now you can view the details of this model by providing its ID.

$ ncloud model show 30611
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|    status    |                   train_command                   |  first_name  |  epochs_requested  |      train_start      |       train_end       |       name        |  batch_size  |    filename    |  epochs_completed  |  last_name   |      id      |        user_email        |     train_request     |
---------------+---------------------------------------------------+--------------+--------------------+-----------------------+-----------------------+-------------------+--------------+----------------+--------------------+--------------+--------------+--------------------------+------------------------
|   Completed  |  ncloud m t ../neon/examples/mnist_mlp.py -e1000  |        Kurt  |              1000  |  2017-02-28 21:57:41  |  2017-02-28 22:25:23  |  mnist_mlp-30611  |         128  |  mnist_mlp.py  |              1000  |      Cobain  |       30611  |  nervana@nervanasys.com  |  2017-02-28 21:57:30  |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

If you want to list the status of all of your models, use the model list command. Right now we only have a single submitted model. By default only the 10 most recent training jobs are shown, but this number can be increased or decreased by adding the -n parameter and specifying an integer. Also by default, all status types are displayed, but you can filter it to just the jobs that are currently running via --training or just the jobs that have finished running via --done.

$ ncloud model list
----------------------------------------------------------------------------------------------------------------------------------------------------------------
|    status    |                   train_command                   |  first_name  |  last_name   |       name        |      id      |        user_email        |
---------------+---------------------------------------------------+--------------+--------------+-------------------+--------------+---------------------------
|     Removed  |  ncloud m t ../neon/examples/mnist_mlp.py -e1000  |        Kurt  |      Cobain  |  mnist_mlp-30603  |       30603  |  nervana@nervanasys.com  |
----------------------------------------------------------------------------------------------------------------------------------------------------------------

You can also paginate these results. For example, if you have 1020 models, you can return a middle section of them using a combination of the count and offset parameters. The output also includes the next model id offset to use when fetching the next batch of models.

$ ncloud m ls -n 8 -o 1000
Model offset: 992
--------------------------------------------------------------------------------------------------------------------------------------
|    status    |  train_command  |    first_name    |  last_name   |     name     |      id      |            user_email             |
---------------+-----------------+------------------+--------------+--------------+--------------+------------------------------------
|     Removed  |           None  |  NervanaSupport  |     Support  |        None  |         993  |           nervana@nervanasys.com  |
|     Removed  |           None  |           Buddy  |       Holly  |        None  |         994  |           nervana@nervanasys.com  |
|     Removed  |           None  |  NervanaSupport  |     Support  |        None  |         995  |           nervana@nervanasys.com  |
|     Removed  |           None  |           Buddy  |       Holly  |        None  |         996  |           nervana@nervanasys.com  |
|     Removed  |           None  |            Kurt  |      Cobain  |        None  |         997  |           nervana@nervanasys.com  |
|     Removed  |           None  |           Fleek  |  Flenderson  |        None  |         998  |           nervana@nervanasys.com  |
|     Removed  |           None  |            Kurt  |      Cobain  |        None  |         999  |           nervana@nervanasys.com  |
|     Removed  |           None  |            Kurt  |      Cobain  |        None  |        1000  |           nervana@nervanasys.com  |
--------------------------------------------------------------------------------------------------------------------------------------
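Judging from the example above (an inference drawn only from the numbers shown), a request with -o 1000 -n 8 returns the eight models with ids 993 through 1000, and the reported next offset of 992 is simply the offset minus the count. That arithmetic can be sketched as:

```python
def page_bounds(offset, count):
    """Compute (first_id, last_id, next_offset) for one page, assuming the
    scheme the example output suggests: a page covers the `count` ids
    ending at `offset`, and the next page is fetched with offset - count."""
    return offset - count + 1, offset, offset - count

print(page_bounds(1000, 8))
# → (993, 1000, 992)
```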

If you are an admin user, you can use the -t flag to see models in any other tenants for which you are an admin.

$ ncloud m l -t 100 102 -n 5
----------------------------------------------------------------------------------------------------------------------------------------------------------------
|    status    |                   train_command                   |  first_name  |  last_name   |       name        |      id      |        user_email        |
---------------+---------------------------------------------------+--------------+--------------+-------------------+--------------+---------------------------
|     Removed  |  ncloud m t ../neon/examples/mnist_mlp.py -e1000  |        Kurt  |      Cobain  |  mnist_mlp-30608  |       30608  |  nervana@nervanasys.com  |
|     Removed  |  ncloud m t ../neon/examples/mnist_mlp.py -e1000  |        Kurt  |      Cobain  |  mnist_mlp-30609  |       30609  |  nervana@nervanasys.com  |
|     Removed  |  ncloud m t ../neon/examples/mnist_mlp.py -e1000  |        Kurt  |      Cobain  |  mnist_mlp-30610  |       30610  |  nervana@nervanasys.com  |
|   Completed  |  ncloud m t ../neon/examples/mnist_mlp.py -e1000  |        Kurt  |      Cobain  |  mnist_mlp-30611  |       30611  |  nervana@nervanasys.com  |
|     Removed  |  ncloud m t ../neon/examples/mnist_mlp.py -e1000  |        Kurt  |      Cobain  |  mnist_mlp-30612  |       30612  |  nervana@nervanasys.com  |
----------------------------------------------------------------------------------------------------------------------------------------------------------------

Upon receiving your Nervana cloud credentials, you should have been allocated a set of compute and memory resources. You can see a list of all available resources, and which ones your current models are using, via the resource list command.

$ ncloud resource list
-------------------------------------
|  units_available  |  units_total  |
--------------------+----------------
|                7  |            8  |
-------------------------------------

If, for whatever reason, you decide that you want to cancel your model training job, use the model stop command.

$ ncloud model stop 1
---------------------------------------------
|      id      |           status           |
---------------+-----------------------------
|           1  |  Model training cancelled  |
---------------------------------------------

Most models produce output files during training (learned model weights, callback data, logs), and you can transfer these to your machine via the model results command. While a training job is running, these represent the results as of the last serialization point (generally the last epoch, unless specified otherwise). In addition, you may want to create your own output files during training; you can do this by saving your desired files under the /output path, which makes them available alongside the autogenerated ones.

By default, the available files are listed on the console. To actually download the results, use one of the --objects, --zip, or --tar options. With --objects the files are transferred directly, whereas with --zip or --tar a single archive containing the results is transferred. For large files, the --tar option is preferable, as the results are sent in multiple parts. You can also pass --url to get a list of URLs for downloading the files. The set of files included can be refined by passing one or more --filter arguments with glob patterns.

$ ncloud model results 2
---------------------------------------------------------
|    filename    |     last_modified     |     size     |
-----------------+-----------------------+---------------
|   console.log  |  2016-02-24 17:25:31  |       19267  |
|       data.h5  |  2016-02-24 17:25:31  |       30384  |
|  launcher.log  |  2016-02-24 17:25:31  |       28793  |
|  mnist_mlp.py  |  2016-02-24 17:24:55  |        3458  |
|     model.prm  |  2016-02-24 17:25:31  |     1425688  |
|      neon.log  |  2016-02-24 17:25:31  |        2907  |
---------------------------------------------------------

$ ncloud model results --zip --filter "*.log" 2
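Assuming the --filter patterns use standard glob semantics, you can preview locally which result files a pattern would select before downloading, e.g. with Python’s fnmatch:

```python
import fnmatch

# File names from the `ncloud model results 2` listing above.
files = ["console.log", "data.h5", "launcher.log",
         "mnist_mlp.py", "model.prm", "neon.log"]

# Which files would a --filter "*.log" pattern select?
matches = fnmatch.filter(files, "*.log")
print(matches)
# → ['console.log', 'launcher.log', 'neon.log']
```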

Since training a model from scratch can be a time-consuming process, you can also import a previously trained neon model via the model import command. This will make it available for subsequent use as if you had trained it yourself. You can either import a locally saved model.prm file from disk, or pass the URL of a pre-trained file (which is handy in cases where you’d like to grab a file from our Model Zoo). Importing may take a few minutes to run as the file content is transferred to the cloud. You can optionally attach the original script used to train the model via --script, specify the number of epochs trained via --epochs, and give it a colloquial name using the --name parameter. These options only affect the details shown for the model and the results you later download.

$ ncloud model import -n alexnet https://s3-us-west-1.amazonaws.com/nervana-modelzoo/alexnet/alexnet.p
importing (may take some time)...
-------------------------------
|      id      |    status    |
---------------+---------------
|           3  |    Imported  |
-------------------------------

Once you have a trained model, you can deploy it to make it available for generating predictions in a streaming, one-at-a-time fashion. To do that, use the model deploy command.

$ ncloud model deploy 2
------------------------------------------------------------------
|      id      |         presigned_token          |    status    |
---------------+----------------------------------+---------------
|          23  |  8e517a6e6ed915103240c0c9ef0b4b  |   Deploying  |
------------------------------------------------------------------

Depending on the size and complexity of your model, this step may take a few minutes to complete. You can monitor the status of the model by calling ncloud stream list or ncloud stream show. Once deployed, the model will have a status value of Deployed.

Given a deployed model, you can now use it to generate predictions on new input data. To do that, use the stream predict command.

$ ncloud stream predict 8e517a6e6ed915103240c0c9ef0b4b ./img1.jpg
{"outputs": [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0], "shape": [10, 1]}

By default, you’ll see the raw output of the last layer of the network in JSON format, but this can be adjusted by specifying one of the formatters described in Canned Output Formatters.
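For example, a client could pick the predicted class out of the raw JSON with a few lines of plain Python (assuming, as in the MNIST example, that the ten outputs correspond to digit classes 0–9):

```python
import json

# The raw response from `ncloud stream predict` shown above.
raw = '{"outputs": [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0], "shape": [10, 1]}'

result = json.loads(raw)
outputs = result["outputs"]

# The index of the largest activation is the predicted class.
predicted = max(range(len(outputs)), key=outputs.__getitem__)
print(predicted)
# → 4
```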

When done, you can also undeploy a deployed model via the stream undeploy command.

$ ncloud stream undeploy 23
--------------------------------
|      id      |    status     |
---------------+----------------
|          23  |  Undeploying  |
--------------------------------

Again, you can monitor via ncloud stream list or ncloud stream show to see when the stream is completely undeployed. When this is done, the stream will show a status of Completed.

Finally, if you’d like to add your own datasets for training or predicting, you have a couple of ways of going about this, depending on whether your data resides locally on your own disk or already lives “in the cloud” somewhere. For the former case there’s the dataset upload command, and for the latter you can use dataset link. To see what datasets you have available, use dataset list or dataset show, and to remove datasets you can call dataset remove.

Note that you can append new records to an existing dataset via the --dataset-id flag to the dataset upload command. New records will end up as part of the dataset referenced by --dataset-id, and any existing records with the same filename will be overwritten.

You can also get a certain number of datasets by using the -n (count) parameter. In addition, dataset listing supports pagination via the -o (offset) and --asc (ascending) parameters.

$ ncloud dataset upload tiny_image_dataset
Created dataset with ID 3.
3/3 Uploaded. 0 Failed.
--------------------------------------------------------------
|    failed    |      id      |   success    |  total_files  |
---------------+--------------+--------------+----------------
|           0  |           3  |           3  |            3  |
--------------------------------------------------------------

$ ncloud dataset list
----------------------------------------------------------------------------------
|  data_type   |      id      |           location_path           |     name     |
---------------+--------------+-----------------------------------+---------------
|       image  |           1  |                            path1  |    dataset1  |
|       video  |           2  |                            path2  |    dataset2  |
|       image  |           3  |  s3://some-initial-bucket/102/3/  |    dataset3  |
----------------------------------------------------------------------------------

$ ncloud ds ls -n 10
-----------------------------------------------------------------------------------------------------
|               location_uri               |      id      |        user_email        |     name     |
-------------------------------------------+--------------+--------------------------+---------------
|  s3://helium-datasets-dev/steven/102/45  |          45  |  nervana@nervanasys.com  |   dataset45  |
|  s3://helium-datasets-dev/steven/102/46  |          46  |  nervana@nervanasys.com  |   dataset46  |
|  s3://helium-datasets-dev/steven/102/47  |          47  |  nervana@nervanasys.com  |   dataset47  |
|  s3://helium-datasets-dev/steven/102/48  |          48  |  nervana@nervanasys.com  |   dataset48  |
|  s3://helium-datasets-dev/steven/102/49  |          49  |  nervana@nervanasys.com  |   dataset49  |
|  s3://helium-datasets-dev/steven/102/50  |          50  |  nervana@nervanasys.com  |   dataset50  |
|  s3://helium-datasets-dev/steven/102/51  |          51  |  nervana@nervanasys.com  |   dataset51  |
|  s3://helium-datasets-dev/steven/102/52  |          52  |  nervana@nervanasys.com  |   dataset52  |
|  s3://helium-datasets-dev/steven/102/53  |          53  |  nervana@nervanasys.com  |   dataset53  |
|  s3://helium-datasets-dev/steven/102/54  |          54  |  nervana@nervanasys.com  |   dataset54  |
-----------------------------------------------------------------------------------------------------

$ ncloud ds ls -n 10 -o 50
------------------------------------------------------------------------------------------------------
|               location_uri               |      id      |        user_email        |     name      |
-------------------------------------------+--------------+--------------------------+----------------
|  s3://helium-datasets-dev/steven/102/41  |          41  |  nervana@nervanasys.com  |    dataset41  |
|  s3://helium-datasets-dev/steven/102/42  |          42  |  nervana@nervanasys.com  |  sampleDigit  |
|  s3://helium-datasets-dev/steven/102/43  |          43  |  nervana@nervanasys.com  |    dataset43  |
|  s3://helium-datasets-dev/steven/102/44  |          44  |  nervana@nervanasys.com  |    dataset44  |
|  s3://helium-datasets-dev/steven/102/45  |          45  |  nervana@nervanasys.com  |    dataset45  |
|  s3://helium-datasets-dev/steven/102/46  |          46  |  nervana@nervanasys.com  |    dataset46  |
|  s3://helium-datasets-dev/steven/102/47  |          47  |  nervana@nervanasys.com  |    dataset47  |
|  s3://helium-datasets-dev/steven/102/48  |          48  |  nervana@nervanasys.com  |    dataset48  |
|  s3://helium-datasets-dev/steven/102/49  |          49  |  nervana@nervanasys.com  |    dataset49  |
|  s3://helium-datasets-dev/steven/102/50  |          50  |  nervana@nervanasys.com  |    dataset50  |
------------------------------------------------------------------------------------------------------

$ ncloud dataset remove 3

You can also download previously uploaded datasets. To see the details about an uploaded dataset, run the command below. This mimics the functionality of ncloud model results in terms of options available.

$ ncloud ds results 41
---------------------------------------------------------------------
|     size     |     last_modified     |          filename          |
---------------+-----------------------+-----------------------------
|        3247  |  2016-11-17 00:07:46  |  /mnist_mlpSYNTAXERROR.py  |
---------------------------------------------------------------------

An example command to fetch your dataset with id 2 as a zip file is below:

$ ncloud dataset results -z 2

If you’d like to generate predictions on a large collection of examples, you can use a trained model in batch inference mode instead of the usual streaming deployment. You’ll first need to train the model and upload your prediction data through the regular dataset upload or link mechanism. Once complete, you can use the batch predict command to kick off a batch prediction job, then monitor its progress via batch list or batch show. You can download the resultant predictions via batch results, and if needed, halt an in-progress batch prediction job via batch stop.

$ ncloud batch predict 197 56
-------------------------------
|      id      |    status    |
---------------+---------------
|          21  |    Received  |
-------------------------------

$ ncloud batch list
----------------------------------------------
|      id      |   model_id   |    status    |
---------------+--------------+---------------
|          17  |         170  |   Completed  |
|          18  |         170  |     Removed  |
|          19  |         170  |     Removed  |
|          20  |         170  |   Completed  |
|          21  |         197  |      Queued  |
----------------------------------------------

$ ncloud b l -n 4 --asc -o 15
----------------------------------------------
|    status    |   model_id   |      id      |
---------------+--------------+---------------
|      Queued  |        1043  |          15  |
|   Completed  |        1048  |          16  |
|   Completed  |        1048  |          17  |
|   Completed  |        1048  |          18  |
----------------------------------------------

$ ncloud batch show 20
--------------------------------------------------------------------------------------------------------------------------
|  completed_predictions  |      id      |   model_id   |  requested_predictions  |      start_time       |    status    |
--------------------------+--------------+--------------+-------------------------+-----------------------+---------------
|                     13  |          20  |         170  |                     13  |  2016-07-27 16:08:09  |   Completed  |
--------------------------------------------------------------------------------------------------------------------------

$ ncloud batch results 20
-------------------------------------------------------
|     size     |     last_modified     |   filename   |
---------------+-----------------------+---------------
|          32  |  2016-11-18 19:40:57  |    metadata  |
|          35  |  2016-11-18 19:40:57  |  output.csv  |
-------------------------------------------------------

$ ncloud batch stop 21

In this tutorial we’ve focused primarily on launching long-running jobs, but for initial exploratory work (as you determine how best to construct your neon script and model definition) it is more productive to work interactively in a REPL environment that provides faster feedback. To enable this, we support launching jupyter notebooks via the interact start command. You can see what notebooks are running via interact list, and stop them when done via interact stop (this also frees up the compute/memory resources they occupy). There’s more info in the Interactive Notebook sidebar section.

There are three security options available: token, password, or neither. To use a token, run the first start command below and copy the token you receive from the ncloud interact show -l {id} log into the web GUI. To use a password, run the second start command and type the password you selected into the web GUI. To disable security (not recommended by Jupyter), run the third start command; you will be taken straight to your interactive session without having to enter anything.

$ ncloud interact start
--------------------------------------------------------------------------------------------------------------
|      id      |    status    |                                     url                                      |
---------------+--------------+-------------------------------------------------------------------------------
|          34  |   Launching  | https://helium.cloud.nervanasys.com/interact/067ec0cc198fc8092722c1096f4fc4  |
--------------------------------------------------------------------------------------------------------------

$ ncloud interact start -p myPassword
--------------------------------------------------------------------------------------------------------------
|      id      |    status    |                                     url                                      |
---------------+--------------+-------------------------------------------------------------------------------
|          34  |   Launching  | https://helium.cloud.nervanasys.com/interact/067ec0cc198fc8092722c1096f4fc4  |
--------------------------------------------------------------------------------------------------------------

$ ncloud interact start --unsecure
--------------------------------------------------------------------------------------------------------------
|      id      |    status    |                                     url                                      |
---------------+--------------+-------------------------------------------------------------------------------
|          34  |   Launching  | https://helium.cloud.nervanasys.com/interact/067ec0cc198fc8092722c1096f4fc4  |
--------------------------------------------------------------------------------------------------------------

$ ncloud interact list
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|    status    |  launch_command  |        name         |      id      |   user_email      |                              url                                              |
---------------+------------------+---------------------+--------------+-------------------+--------------------------------------------------------------------------------
|   Launching  |  ncloud i start  |  ipythonNotebook37  |          37  |  email@email.com  |  https://helium.cloud.nervanasys.com/interact/5d03c8d4dd6b7fd9f8b580cfc072f6  |
|   Launching  |  ncloud i start  |  ipythonNotebook38  |          38  |  email@email.com  |  https://helium.cloud.nervanasys.com/interact/bf04bb223382816f8ffd75e4fdfe50  |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------

$ ncloud i list -n 1 -o 37 --asc
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|    status    |  launch_command  |        name         |      id      |        user_email        |                              url                                              |
---------------+------------------+---------------------+--------------+--------------------------+--------------------------------------------------------------------------------
|   Launching  |  ncloud i start  |  ipythonNotebook37  |          37  |  nervana@nervanasys.com  |  https://helium.cloud.nervanasys.com/interact/5d03c8d4dd6b7fd9f8b580cfc072f6  |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

$ ncloud interact stop 34
-------------------------------
|      id      |    status    |
---------------+---------------
|          34  |     Stopped  |
-------------------------------

$ ncloud i res 38
---------------------------------------------------------
|     size     |     last_modified     |    filename    |
---------------+-----------------------+-----------------
|         107  |  2016-11-15 22:16:37  |   console.log  |
|        1361  |  2016-11-15 22:16:37  |  launcher.log  |
---------------------------------------------------------

If you want to see a history of your past ncloud commands, you can do so via the ncloud history (h, hist) command. Through it you can see the history of all the commands you’ve run, the history for a certain type of command, or that of an individual interactive session or model training job. If a model training or interactive session job failed to start, you will see (FAILED) next to the command.

$ ncloud h
---------------------------------------------------------------------------------------------------------
|     command      |  command_type  |    action    |           start_time            |  was_successful  |
-------------------+----------------+--------------+---------------------------------+-------------------
|     ncloud i ls  |      interact  |          ls  |  Thu, 10 Nov 2016 19:54:52 GMT  |            True  |
|     ncloud m ls  |         model  |          ls  |  Thu, 10 Nov 2016 02:45:32 GMT  |            True  |
|     ncloud m ls  |         model  |          ls  |  Thu, 10 Nov 2016 02:44:58 GMT  |            True  |
|  ncloud m s 908  |         model  |           s  |  Thu, 10 Nov 2016 02:44:53 GMT  |            True  |
|     ncloud m ls  |         model  |          ls  |  Thu, 10 Nov 2016 02:43:18 GMT  |            True  |
|     ncloud m ls  |         model  |          ls  |  Thu, 10 Nov 2016 02:42:00 GMT  |            True  |
|  ncloud m s 908  |         model  |           s  |  Thu, 10 Nov 2016 02:39:41 GMT  |            True  |
|  ncloud m s 908  |         model  |           s  |  Thu, 10 Nov 2016 02:39:25 GMT  |           False  |
|  ncloud m s 908  |         model  |           s  |  Thu, 10 Nov 2016 02:37:55 GMT  |           False  |
|     ncloud m ls  |         model  |          ls  |  Thu, 10 Nov 2016 02:37:49 GMT  |            True  |
---------------------------------------------------------------------------------------------------------

$ ncloud hist m
---------------------------------------------------------------------------------------------------------
|     command      |  command_type  |    action    |           start_time            |  was_successful  |
-------------------+----------------+--------------+---------------------------------+-------------------
|     ncloud m ls  |         model  |          ls  |  Thu, 10 Nov 2016 02:45:32 GMT  |            True  |
|     ncloud m ls  |         model  |          ls  |  Thu, 10 Nov 2016 02:44:58 GMT  |            True  |
|  ncloud m s 908  |         model  |           s  |  Thu, 10 Nov 2016 02:44:53 GMT  |            True  |
|     ncloud m ls  |         model  |          ls  |  Thu, 10 Nov 2016 02:43:18 GMT  |            True  |
|     ncloud m ls  |         model  |          ls  |  Thu, 10 Nov 2016 02:42:00 GMT  |            True  |
|  ncloud m s 908  |         model  |           s  |  Thu, 10 Nov 2016 02:39:41 GMT  |            True  |
|  ncloud m s 908  |         model  |           s  |  Thu, 10 Nov 2016 02:39:25 GMT  |           False  |
|  ncloud m s 908  |         model  |           s  |  Thu, 10 Nov 2016 02:37:55 GMT  |           False  |
|     ncloud m ls  |         model  |          ls  |  Thu, 10 Nov 2016 02:37:49 GMT  |            True  |
|     ncloud m ls  |         model  |          ls  |  Thu, 10 Nov 2016 01:41:19 GMT  |            True  |
---------------------------------------------------------------------------------------------------------

$ ncloud history model t
-------------------------------------------------------------------------------------------------------------------------------------------
|                      command                       |  command_type  |    action    |           start_time            |  was_successful  |
-----------------------------------------------------+----------------+--------------+---------------------------------+-------------------
|  ncloud m t ../private-neon/examples/mnist_mlp.py  |         model  |           t  |  Thu, 10 Nov 2016 00:42:18 GMT  |            True  |
|  ncloud m t ../private-neon/examples/mnist_mlp.py  |         model  |           t  |  Thu, 10 Nov 2016 00:34:41 GMT  |            True  |
|  ncloud m t ../private-neon/examples/mnist_mlp.py  |         model  |           t  |  Thu, 10 Nov 2016 00:32:28 GMT  |            True  |
-------------------------------------------------------------------------------------------------------------------------------------------

$ ncloud h -w 'm s' -n 5
---------------------------------------------------------------------------------------------------------
|     command      |  command_type  |    action    |           start_time            |  was_successful  |
-------------------+----------------+--------------+---------------------------------+-------------------
|  ncloud m s 908  |         model  |           s  |  Thu, 10 Nov 2016 02:44:53 GMT  |            True  |
|  ncloud m s 908  |         model  |           s  |  Thu, 10 Nov 2016 02:39:41 GMT  |            True  |
|  ncloud m s 908  |         model  |           s  |  Thu, 10 Nov 2016 02:39:25 GMT  |           False  |
|  ncloud m s 908  |         model  |           s  |  Thu, 10 Nov 2016 02:37:55 GMT  |           False  |
|  ncloud m s 908  |         model  |           s  |  Thu, 10 Nov 2016 01:21:33 GMT  |            True  |
---------------------------------------------------------------------------------------------------------

$ ncloud h -w 'm s' -n 5 -s no
---------------------------------------------------------------------------------------------------------
|     command      |  command_type  |    action    |           start_time            |  was_successful  |
-------------------+----------------+--------------+---------------------------------+-------------------
|  ncloud m s 908  |         model  |           s  |  Thu, 10 Nov 2016 02:39:25 GMT  |           False  |
|  ncloud m s 908  |         model  |           s  |  Thu, 10 Nov 2016 02:37:55 GMT  |           False  |
|  ncloud m s 908  |         model  |           s  |  Thu, 10 Nov 2016 00:58:39 GMT  |           False  |
|  ncloud m s 908  |         model  |           s  |  Thu, 10 Nov 2016 00:58:27 GMT  |           False  |
|  ncloud m s 908  |         model  |           s  |  Thu, 10 Nov 2016 00:58:10 GMT  |           False  |
---------------------------------------------------------------------------------------------------------

$ ncloud h -n 3 -w ls m
---------------------------------------------------------------------------------------------------------
|     command      |  command_type  |    action    |           start_time            |  was_successful  |
-------------------+----------------+--------------+---------------------------------+-------------------
|     ncloud m ls  |         model  |          ls  |  Thu, 10 Nov 2016 02:45:32 GMT  |            True  |
|     ncloud m ls  |         model  |          ls  |  Thu, 10 Nov 2016 02:44:58 GMT  |            True  |
|  ncloud m s 908  |         model  |           s  |  Thu, 10 Nov 2016 02:44:53 GMT  |            True  |
---------------------------------------------------------------------------------------------------------

$ ncloud h -a 2016-11-10 -n 2
------------------------------------------------------------------------------------------------------
|    command    |  command_type  |    action    |           start_time            |  was_successful  |
----------------+----------------+--------------+---------------------------------+-------------------
|  ncloud i ls  |      interact  |          ls  |  Thu, 10 Nov 2016 19:54:52 GMT  |            True  |
|  ncloud m ls  |         model  |          ls  |  Thu, 10 Nov 2016 02:45:32 GMT  |            True  |
------------------------------------------------------------------------------------------------------

As new features are added to the cloud, you’ll often need the latest release of ncloud to access them. You can upgrade to the latest release from within ncloud itself, via the upgrade command.

Note that this upgrade feature was introduced starting with the v1.1.0 release of ncloud so if you have an earlier version, you’ll first need to go through the installation steps described in the Getting Started section.

$ ncloud upgrade
Collecting ncloud==1.1.1 from
https://s3-us-west-1.amazonaws.com/nervana-ncloud/ncloud-1.1.1-py2.py3-none-any.whl
Using cached https://s3-us-west-1.amazonaws.com/nervana-ncloud/ncloud-1.1.1-py2.py3-none-any.whl
Collecting pip (from ncloud==1.1.1)
  Downloading pip-8.1.1-py2.py3-none-any.whl (1.2MB)
    100% |████████████████████████████████| 1.2MB 1.1MB/s
Installing collected packages: pip, ncloud
  Found existing installation: pip 8.1.0
    Uninstalling pip-8.1.0:
      Successfully uninstalled pip-8.1.0
  Found existing installation: ncloud 1.1.0
    Uninstalling ncloud-1.1.0:
      Successfully uninstalled ncloud-1.1.0
Successfully installed ncloud-1.1.1 pip-8.1.1

And there you have it. Most of the commands described above have additional option arguments that can be passed to specify other behaviors. These are outlined in the Commands section below.

Please feel free to contact us (info@nervanasys.com) with any questions.

ncloud Commands

Command Description
configure Update access credentials.
dataset Manage input datasets for training and batch inference.
model Train and deploy machine learning models.
interact Manage interactive ipython notebook environments.
stream Manage deployed models used for streaming inference.
batch Generate predictions against entire datasets in a batch.
resource List all resources assigned to your tenant.
stats Display usage and metric information.
tenant Manage tenants (needs admin privileges).
user Manage users (needs admin privileges).
group Manage user membership collections (needs admin privileges).
upgrade Upgrade ncloud to the latest version.
history List ncloud command history.

configure

Update stored configuration options like email, password, and tenant.

ncloud configure

Update stored configuration credentials.

Syntax

ncloud configure [-h]

Optional Arguments

-h, --help show this help message and exit

dataset

Subcommands for managing datasets used as input for training, batch inference, and interactive sessions.

dataset list

List available datasets.

Syntax

ncloud dataset ls [-h] [-n COUNT] [-o OFFSET] [--asc]

Optional Arguments

-h, --help show this help message and exit
-n COUNT, --count COUNT Show up to n objects. Without the --asc flag, returns the most recent objects. For unlimited, set n=0. Can be used alongside the offset argument to paginate the objects returned. Ex: ncloud m ls -o 25 -n 10 would return objects 16-25, and ncloud m ls -o 25 -n 10 --asc would return objects 25-34.
-o OFFSET, --offset OFFSET Where in the object pagination to start next. Represents an object id to return the next newest X results, where X = count. Used as a way to paginate object listings.
--asc Displays objects in ascending order instead of the default descending order.
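The interaction between the -n, -o, and --asc flags can be sketched as simple id arithmetic. The helper below is purely illustrative (it is not part of ncloud), and just reproduces the example ranges given above:

```python
def page_ids(offset, count, asc=False):
    """Object ids implied by `-o offset -n count`, per the example above."""
    if asc:
        # ascending: offset, offset+1, ..., offset+count-1
        return list(range(offset, offset + count))
    # descending (default): offset, offset-1, ..., offset-count+1
    return list(range(offset, offset - count, -1))

print(page_ids(25, 10))            # descending ids 25 down to 16
print(page_ids(25, 10, asc=True))  # ascending ids 25 up to 34
```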

dataset remove

Remove a linked or uploaded dataset.

Syntax

ncloud dataset remove [-h] dataset-id

Required Arguments

dataset-id ID of dataset to remove

Optional Arguments

-h, --help show this help message and exit

dataset show

Show details for a given dataset ID.

Syntax

ncloud dataset show [-h] [-r RENAME] dataset-id

Required Arguments

dataset-id ID of dataset to show details of.

Optional Arguments

-h, --help show this help message and exit
-r RENAME, --rename RENAME Update the colloquial name of this dataset.

dataset upload

Upload a custom dataset to Nervana Cloud. This will copy files from local disk and may take some time depending on the size of data to be transferred. See Custom Datasets for required directory structure and format.

Syntax

ncloud dataset upload [-h] [-n NAME] [--follow-symlinks]
                      [-i DATASET_ID] directory

Required Arguments

directory Local path of the data root directory.

Optional Arguments

-h, --help show this help message and exit
-n NAME, --name NAME Colloquial name of the dataset. Default name will be given if not provided.
--follow-symlinks Follow symlinks while enumerating files to upload.
-i DATASET_ID, --dataset-id DATASET_ID Append records to the existing specified dataset. Any identically named files are overwritten.

dataset results

Retrieve dataset files previously uploaded. Options are the same as for model results.

Syntax

ncloud dataset (ds) results
    [-h]
    [-d DIRECTORY]
    [--url | --zip | --tar | --objects]
    [--filter FILTER]
    object_id

Required Arguments

object_id ID of dataset to retrieve results of.

Optional Arguments

-h, --help show this help message and exit
-d DIR, --directory DIR Location to download files {DIR}/{model_id}/results_files. Defaults to current directory.
-u, --url Get URLs for file download instead.
-o, --objects Download results directly.
-z, --zip Retrieve a zip file of the results.
-t, --tar Retrieve a tarball of the results.
-f FILTER, --filter FILTER Only retrieve files with names matching glob pattern <FILTER>
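The --filter pattern behaves like a standard shell glob. Conceptually, the matching can be thought of as follows (an illustrative sketch using Python's fnmatch; the file names are made up):

```python
from fnmatch import fnmatch

# hypothetical results files for a run
files = ["model.prm", "neon.log", "data.pkl", "output.hdf5"]

# keep only files matching the glob pattern, as --filter '*.log' would
matched = [f for f in files if fnmatch(f, "*.log")]
print(matched)  # ['neon.log']
```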

interact

Subcommands for launching and managing interactive environments.

interact start

Launch a new interactive notebook environment.

Syntax

ncloud interact start
    [-h]
    [-d DATASET_ID]
    [-f FRAMEWORK_VERSION]
    [-i RESUME_MODEL_ID] [-g GPUS]
    [-u CUSTOM_CODE_URL] [-c CUSTOM_CODE_COMMIT]
    [-n NAME]
    [-p PASSWORD | --unsecure]

Optional Arguments

-h, --help show this help message and exit
-d DATASET_ID, --dataset-id DATASET_ID ID of dataset to mount in notebook
-f FRAMEWORK_VERSION, --framework-version FRAMEWORK_VERSION Neon tag, branch or commit to use
-i RESUME_MODEL_ID, --resume-model-id RESUME_MODEL_ID Start training a new model using the state parameters of a previous one.
-g GPUS, --gpus GPUS Number of GPUs to train this model with.
-u CUSTOM_CODE_URL, --custom-code-url CUSTOM_CODE_URL URL for codebase containing custom neon scripts and extensions.
-c CUSTOM_CODE_COMMIT, --custom-code-commit CUSTOM_CODE_COMMIT Commit ID or branch specifier for custom code repo.
-n NAME, --name NAME Name of this interactive ipython notebook. If not supplied, one will be provided for you.
-p PASSWORD, --password PASSWORD Password to enter once the notebook starts up. If no password is given, you will need to enter the token provided at the end of the ncloud i show {id} -l log. Incompatible with the --unsecure flag.
--unsecure Recent notebook versions strongly recommend authentication. If you wish to bypass this, pass this flag and you won’t have to enter a token or password on notebook start. Incompatible with the --password flag.

interact list

List interactive sessions.

Syntax

ncloud interact ls [-h] [-n COUNT] [-o OFFSET] [--asc] [-a]

Optional Arguments

-h, --help show this help message and exit
-n COUNT, --count COUNT Show up to n objects. Without the --asc flag, returns the most recent objects. For unlimited, set n=0. Can be used alongside the offset argument to paginate the objects returned. Ex: ncloud m ls -o 25 -n 10 would return objects 16-25, and ncloud m ls -o 25 -n 10 --asc would return objects 25-34.
-o OFFSET, --offset OFFSET Where in the object pagination to start next. Represents an object id to return the next newest X results, where X = count. Used as a way to paginate object listings.
--asc Displays objects in ascending order instead of the default descending order.
-a, --all Show sessions in all states.

interact stop

Stop a currently running interactive session.

Syntax

ncloud interact stop [-h] id [id ...]

Required Arguments

id one or more IDs of sessions to stop.

Optional Arguments

-h, --help show this help message and exit

interact results

Retrieve interactive session files. Options same as model results.

Syntax

ncloud interact results
    [-h]
    [-d DIRECTORY]
    [--url | --zip | --tar | --objects]
    [--filter FILTER]
    object_id

Required Arguments

object_id ID of interactive session to retrieve results of.

Optional Arguments

-h, --help show this help message and exit
-d DIR, --directory DIR Location to download files {DIR}/{model_id}/results_files. Defaults to current directory.
-u, --url Get URLs for file download instead.
-o, --objects Download results directly.
-z, --zip Retrieve a zip file of the results.
-t, --tar Retrieve a tarball of the results.
-f FILTER, --filter FILTER Only retrieve files with names matching glob pattern <FILTER>

model

Manage, train, and deploy machine learning models.

model deploy

Make a trained model available for generating streamed, one-off predictions.

Syntax

ncloud model deploy
    [-h]
    [-g GPUS]
    [-d DS]
    [-f EF]
    [--custom-code-url CUSTOM_CODE_URL]
    [--custom-code-commit CUSTOM_CODE_COMMIT]
    model_id

Required Arguments

model_id ID of model to deploy

Optional Arguments

-h, --help show this help message and exit
-g GPUS, --gpus GPUS Number of GPUs to expose in this notebook environment. Defaults to 1.
-d DS, --dataset-id DS ID of dataset to mount inside the notebook environment
-f EF, --extra-files EF Zip of extra files to include in the deployment.
--custom-code-url CUSTOM_CODE_URL URL for codebase containing custom neon scripts and extensions
--custom-code-commit CUSTOM_CODE_COMMIT Commit ID or branch specifier for custom code repository

model import

Import a previously trained neon deep learning model.

Syntax

ncloud model import
    [-h]
    [-n NAME]
    [-s SCRIPT]
    [-e EPOCHS]
    input

Required Arguments

input Serialized neon model filename or URL

Optional Arguments

-h, --help show this help message and exit
-n NAME, --name NAME Colloquial name of the model. Default name will be given if not provided.
-e EPOCHS, --epochs EPOCHS Number of epochs imported model trained (amount originally requested). Not required. Will default to 10.
-s SCRIPT, --script SCRIPT .py or .yaml script used to train the imported model. Note this isn’t required. Only useful for book-keeping, later download.

model list

List all submitted, queued, and running models.

Syntax

ncloud model ls [-h] [-n COUNT] [-o OFFSET] [--asc] [--done]
                   [--training] [-t TENANT_IDS [TENANT_IDS ...]]
                   [--details] [-z]

Optional Arguments

-h, --help show this help message and exit
--done show only completed or error status models
--training show only models that are currently running
-n COUNT, --count COUNT Limit display of results to COUNT most recent. Defaults to 10 if not specified.
--details show more details about models.
-z, --model-zoo List all models in the model zoo.
-t TENANT_IDS [TENANT_IDS ...], --tenant_ids TENANT_IDS [TENANT_IDS ...] Tenant(s) to view models for. Enter 0 for all tenants. Only applies to users with admin privileges.

model show

Show details for a given model ID.

Syntax

ncloud model show [-h] [-l] [-n] [-r RENAME] [-z] [-L] [-N] model_id

Required Arguments

model_id ID of model to show details of.

Optional Arguments

-h, --help show this help message and exit
-l, --console-log Show console log of model runtime.
-n, --neon-log Show neon log file.
-r RENAME, --rename RENAME Update the colloquial name of this model.
-z, --model-zoo Show model in the model zoo.
-L, --console-log-follow Show console log of model runtime as output grows (similar to tail -f).
-N, --neon-log-follow Show neon log data as output grows (similar to tail -f).

model stop

Stop a currently training model given its ID.

Syntax

ncloud model stop [-h] [--all-models] model_id

Required Arguments

model_id ID of model to stop.

Optional Arguments

-h, --help show this help message and exit
-a, --all-models stop all models you have access to.

model train

Start or resume training a deep learning model.

Syntax

ncloud model train
    [-h]
    [-n NAME]
    [-d DATASET_ID]
    [-v VALIDATION_PCT]
    [-e EPOCHS]
    [-z BATCH_SIZE]
    [-f FRAMEWORK_VERSION]
    [-m MGPU_VERSION]
    [-a ARGS]
    [-i RESUME_MODEL_ID]
    [-S SNAPSHOT_NUMBER]
    [-g GPUS]
    [-r REPLICAS]
    [-u CUSTOM_CODE_URL]
    [-c CUSTOM_CODE_COMMIT]
    [--model-count MC]
    [--log-immediately]
    [--package]
    [-s MAIN_FILE]
    filename [arguments [arguments ...]]

Required Arguments

filename .yaml or .py script file for Neon to execute.
arguments neon command line arguments. Prefix with --.

Optional Arguments

-h, --help show this help message and exit
-n NAME, --name NAME Colloquial name of the model. Default name will be given if not provided.
-d DS, --dataset-id DS ID of dataset to use.
-v VP, --validation-pct VP Percent of a dataset to use as hold out validation split.
-e EPOCHS, --epochs EPOCHS Number of epochs to train this model.
-z BATCH_SIZE, --batch-size BATCH_SIZE Mini-batch size to train this model.
-f FV, --framework-version FV neon tag, branch or commit to use. Defaults to the latest stable version.
-m MV, --mgpu-version MV mgpu tag, branch or commit to use, if ‘gpus’ > 1.
-a ARGS, --args ARGS neon command line arguments [deprecated].
-i RM, --resume-model-id RM Start a new training job using the state parameters of a previous model training.
-S SN, --resume-snapshot SN When resuming a model, start with this snapshot number.
-g GPUS, --gpus GPUS Number of GPUs to train this model with.
-r REPLICAS, --replicas REPLICAS Number of the main process to invoke. 1 uses parameter server, 2+ uses p2p.
--custom-code-url CUSTOM_CODE_URL URL for codebase containing custom neon scripts and extensions
--custom-code-commit CUSTOM_CODE_COMMIT Commit ID or branch specifier for custom code repository
--model-count MC The number of model trainings to create.
-L, --log-immediately Start tailing model training logs immediately. Triggers ncloud model show -L automatically.
-p, --package Run custom dependencies as if they were a python package. filename should be a folder with a python package structure. Needs a __main__.py in the folder alongside your other modules (e.g. utils.py).
-s MAIN_FILE, --main-file MAIN_FILE Optionally used with -p arg. Ignored otherwise. Treats passed file as the __main__.py file for training. Must exist inside training folder.
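As an illustration, a folder passed with -p might be laid out as follows (trainpkg and utils.py are hypothetical names; utils.py stands in for whatever supporting modules your __main__.py imports):

```text
trainpkg/
    __main__.py   # training entry point (or the file named via -s/--main-file)
    utils.py      # supporting module imported by __main__.py
```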

model results

Retrieve model training results files: model weights, callback outputs, and neon run logs.

Syntax

ncloud model results
    [-h]
    [-d DIRECTORY]
    [--url | --zip | --tar | --objects]
    [--filter FILTER]
    model_id

Required Arguments

model_id ID of model to retrieve training results of.

Optional Arguments

-h, --help show this help message and exit
-d DIR, --directory DIR Location to download files {DIR}/{model_id}/results_files. Defaults to current directory.
-u, --url Get URLs for file download instead.
-o, --objects Download results directly.
-z, --zip Retrieve a zip file of the results.
-t, --tar Retrieve a tarball of the results.
-f FILTER, --filter FILTER Only retrieve files with names matching glob pattern <FILTER>

stream

Subcommands for performing streaming, one-at-a-time predictions against deployed models.

stream predict

Generate predicted outcomes from a deployed model and input data.

Syntax

ncloud stream predict [-h] [-t TYPE] [-f FORM] [-a ARGS] token input

Required Arguments

token Pre-signed token for authentication. Returned as part of deploy call.
input Input data filename or URL to generate predictions for.

Optional Arguments

-h, --help show this help message and exit
-t TYPE, --in-type TYPE Type of input. Valid choices are “image” (default), “json”
-f FORM, --formatter FORM How to format predicted outputs from the network. “raw”, “classification”
-a ARGS, --args ARGS Additional arguments for the formatter. See Canned Output Formatters

stream show

Show prediction details for a given stream ID.

Syntax

ncloud stream show [-h] [-l] [-L] stream_id

Required Arguments

stream_id ID of stream to show prediction details of.

Optional Arguments

-h, --help show this help message and exit.
-l, --log Show console log of predict runtime.
-L, --log-follow Show console log of predict runtime as data grows (similar to tail -f).

stream list

List all stream prediction deployments.

Syntax

ncloud stream ls [-h] [-n COUNT] [-o OFFSET] [--asc]

Optional Arguments

-h, --help show this help message and exit
-n COUNT, --count COUNT Show up to n objects. Without the --asc flag, returns the most recent objects. For unlimited, set n=0. Can be used alongside the offset argument to paginate the objects returned. Ex: ncloud m ls -o 25 -n 10 would return objects 16-25, and ncloud m ls -o 25 -n 10 --asc would return objects 25-34.
-o OFFSET, --offset OFFSET Where in the object pagination to start next. Represents an object id to return the next newest X results, where X = count. Used as a way to paginate object listings.
--asc Displays objects in ascending order instead of the default descending order.

stream undeploy

Remove a deployed model.

Syntax

ncloud stream undeploy [-h] stream_id

Required Arguments

stream_id ID of stream to undeploy.

Optional Arguments

-h, --help show this help message and exit

batch

Subcommands for performing batch predictions on entire datasets.

batch predict

Generate a batch of predicted outcomes from a specified dataset and trained model.

Syntax

ncloud batch predict [-h] [-f EXTRA_FILES] model_id dataset_id

Required Arguments

model_id ID of model to generate predictions against. Model must have completed training.
dataset_id ID of dataset to generate predictions against.

Optional Arguments

-h, --help show this help message and exit
-f EXTRA_FILES, --extra-files EXTRA_FILES Zip of additional files to include in the deployment.

batch show

Show prediction details for a given batch attempt.

Syntax

ncloud batch show [-h] batch_id

Required Arguments

batch_id ID of batch to show prediction details of.

Optional Arguments

-h, --help show this help message and exit.

batch list

List all batch predictions.

Syntax

ncloud batch ls [-h] [-n COUNT] [-o OFFSET] [--asc]

Optional Arguments

-h, --help show this help message and exit
-n COUNT, --count COUNT Show up to n objects. Without the --asc flag, returns the most recent objects. For unlimited, set n=0. Can be used alongside the offset argument to paginate the objects returned. Ex: ncloud m ls -o 25 -n 10 would return objects 16-25, and ncloud m ls -o 25 -n 10 --asc would return objects 25-34.
-o OFFSET, --offset OFFSET Where in the object pagination to start next. Represents an object id to return the next newest X results, where X = count. Used as a way to paginate object listings.
--asc Displays objects in ascending order instead of the default descending order.

batch results

Retrieve batch prediction results. Options same as model results.

Syntax

ncloud batch results
    [-h]
    [-d DIRECTORY]
    [--url | --zip | --tar | --objects]
    [--filter FILTER]
    object_id

Required Arguments

object_id ID of batch prediction to retrieve results of.

Optional Arguments

-h, --help show this help message and exit
-d DIR, --directory DIR Location to download files {DIR}/{model_id}/results_files. Defaults to current directory.
-u, --url Get URLs for file download instead.
-o, --objects Download results directly.
-z, --zip Retrieve a zip file of the results.
-t, --tar Retrieve a tarball of the results.
-f FILTER, --filter FILTER Only retrieve files with names matching glob pattern <FILTER>

batch stop

Stop a currently running batch prediction job given its ID.

Syntax

ncloud batch stop [-h] batch_id

Required Arguments

batch_id ID of batch inference job to stop.

Optional Arguments

-h, --help show this help message and exit

resource

Subcommands for listing hardware resources assigned to users in the cloud.

resource list

List all compute resources assigned to your tenant.

Syntax

ncloud resource list [-h]

Optional Arguments

-h, --help show this help message and exit

stats

Subcommands for displaying analytics.

stats list

List usage metrics.

Syntax

ncloud stats list [-h]

Optional Arguments

-h, --help show this help message and exit

history

Subcommands for showing history of prior ncloud commands. Doesn’t include any of the ncloud history calls themselves.

ncloud history (h)

Show history of ncloud commands. Options include: list all commands regardless of type, list commands of a certain type, or list an individual model training or interactive session launch command. See the ncloud Tutorial section for examples of how to use this command.

Syntax

ncloud history [-h] [-n COUNT] [-s {yes | no}] [-b BEFORE] [-a AFTER]
                  [-w [WILDCARDS [WILDCARDS ...]]]
                  [{configure, dataset, ds, d, model, m, interact, i, stream, s,
                    batch, b, resource, res, r, stats, sta, tenant, t, user, u,
                    group, g, upgrade}]
                  [action]

Optional Arguments

-h, --help show this help message and exit
-n COUNT, --count COUNT Number of results to display. Defaults to 10 if not provided.
{configure, dataset, ds, d, model, m, interact, i, stream, s, batch, b, resource, res, r, stats, sta, tenant, t, user, u, group, g, upgrade} Command type to further filter by. Without this, query on all command types.
action Action to get history for. Usually the third argument in an ncloud command. Examples include show, list. Command type is required if action provided.
-s {yes | no}, --was_successful {yes | no} If set, returns only those commands that were or were not successful.
-b BEFORE, --before BEFORE Filter on commands before this time. Format: YYYY-MM-DD. Can be used in conjunction with the -a flag.
-a AFTER, --after AFTER Filter on commands after this time. Format: YYYY-MM-DD. Can be used in conjunction with the -b flag.
-w [WILDCARDS [WILDCARDS ...]], --wildcards [WILDCARDS [WILDCARDS ...]] Wildcard phrases to return all ncloud commands that contain those phrases. Ex: ncloud h -w model, custom-code-commit, ‘-e 1’ would return all ncloud model commands made with custom code and 1 epoch.
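Combined, the -w, -a/-b, and -s options amount to simple predicate checks over the stored history. A rough Python sketch of that filtering logic (the history entries and the inclusive date boundary here are assumptions for illustration, not ncloud internals):

```python
from datetime import date

# (command, start date, was_successful) -- made-up sample entries
history = [
    ("ncloud m s 908", date(2016, 11, 10), True),
    ("ncloud m ls",    date(2016, 11, 9),  True),
    ("ncloud m s 908", date(2016, 11, 10), False),
]

def keep(cmd, when, ok, wildcards=(), after=None, successful=None):
    # -w: every wildcard phrase must appear in the command text
    if any(w not in cmd for w in wildcards):
        return False
    # -a: only commands on or after this date (inclusive here for illustration)
    if after is not None and when < after:
        return False
    # -s: keep only successful (yes) or failed (no) commands
    if successful is not None and ok != successful:
        return False
    return True

kept = [c for c, d, ok in history
        if keep(c, d, ok, wildcards=["m s"], after=date(2016, 11, 10),
                successful=False)]
print(kept)  # ['ncloud m s 908']
```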

upgrade

Upgrade ncloud to the latest release.

ncloud upgrade

Upgrade ncloud to the latest release.

Syntax

ncloud upgrade [-h]

Optional Arguments

-h, --help show this help message and exit

user

Manage users. Admin privileges required.

tenant

Manage tenants. Admin privileges required.

group

Manage user membership collections. Admin privileges required.

Quick Reference

Here we demonstrate some of the common ncloud commands. To run a model with custom arguments, such as setting the random seed and computing the validation loss, use:

$ ncloud model train mnist_mlp.py --args="-r 0 --eval_freq 1"

Note that mnist_mlp.py is the path on your local machine. For other neon arguments that can be included, see here. Importantly, the epoch (-e) and batch size (-z) arguments should be passed directly to ncloud model train rather than inside the --args flag:

$ ncloud model train mnist_mlp.py --args="-r 0 --eval_freq 1" -e 50 -z 64

If you have custom code in a github repository that you would like to run on the cloud, a typical command would be:

$ ncloud model train mymodel.py --framework-version v1.5.4 --custom-code-url git@github.com:MyName/MyRepo.git --custom-code-commit myname/mybranch

The above command will use neon v1.5.4, pull down myname/mybranch of your repository, and then run the mymodel.py script in your github repository. Here mymodel.py is a path relative to the root of your repository.

To use our multi-GPU framework, the command

$ ncloud model train mnist_mlp.py -g 4 --mgpu-version v1.5.4

will launch a training job with 4 GPUs using the v1.5.4 version of our multi-gpu repository. The --mgpu-version flag is optional.

Windows Support

ncloud is currently optimized for use in Linux and OSX environments. Though not officially supported at this time, running ncloud under various flavors of Microsoft Windows should still be feasible, with the following caveats and workarounds (we will attempt to reduce and improve these in future releases):

  • all ncloud commands must be prefixed with python, since ncloud usually isn’t available in users’ executable search PATH. So ncloud model list becomes python ncloud model list, etc.
  • all ncloud commands must specify the complete path to the ncloud executable script. Unfortunately there’s no real standard for where python scripts reside on Windows; the location depends on the distribution and version of python you have installed. Note too that this location is often different from where the rest of the python package gets installed. If you installed Python via the official Windows installer and didn’t change the default installation paths, your scripts directory is likely C:\Python27\Scripts. In that case, running a command like
$ ncloud model list

becomes

C:\> python C:\Python27\Scripts\ncloud model list