Use charts instead of console logs

Instead of digging through console logs, see what's going on in your training at a glance by looking at the charts.

Use Jupyter Notebooks in the cloud

Prototype interactively using Jupyter Notebooks in the cloud. Neptune saves your code and outputs automatically.

Quickly find and compare your best experiments

Use your private “Kaggle leaderboard” like a dashboard to filter, explore and sort through your experiments. Find your top machine learning models based on your favorite metric.

Optimize hyperparameters automatically

Instead of tuning the hyperparameters of your machine learning models by hand, use cloud resources to find the optimal parameters automatically.
Neptune comes with a grid search optimization engine that can tune your code’s parameters regardless of the machine learning library you’re using.
$ neptune send
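To make the idea concrete, here is a generic Python sketch of what a grid-search engine does under the hood: enumerate every combination of the parameter grid, score each one, and keep the best. This is an illustration of the technique, not Neptune's internal implementation, and the toy objective function is made up for the example.

```python
# Generic grid search: try every combination of the parameter grid
# and keep the best-scoring one. Illustrative only.
from itertools import product

def grid_search(train_fn, param_grid):
    """Evaluate train_fn on every combination in param_grid;
    return the best (score, params) pair."""
    best_score, best_params = float("-inf"), None
    keys = list(param_grid)
    for values in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = train_fn(**params)
        if score > best_score:
            best_score, best_params = score, params
    return best_score, best_params

# Toy objective, maximized at lr=0.1 and batch_size=32:
score, params = grid_search(
    lambda lr, batch_size: -abs(lr - 0.1) - abs(batch_size - 32) / 100,
    {"lr": [0.01, 0.1, 1.0], "batch_size": [16, 32, 64]},
)
```

A hosted engine adds the parts this sketch leaves out: running the combinations in parallel on cloud workers and recording every run's metrics for comparison.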

Make long-running processes interactive

Don’t wait till your experiment ends - use “Actions”. Actions are functions in your code that you can invoke manually from Neptune’s UI to modify your training at runtime.
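The pattern behind such actions can be sketched as a simple function registry: functions are registered under a name so an external trigger (in Neptune's case the web UI, here simulated by a direct call) can dispatch to them while training runs. The decorator and registry below are illustrative assumptions, not Neptune's actual client interface.

```python
# Sketch of an action registry: functions registered by name so an
# external trigger can invoke them mid-training. Illustrative only;
# this is not Neptune's actual API.
ACTIONS = {}

def action(fn):
    """Register fn so it can be invoked by name at runtime."""
    ACTIONS[fn.__name__] = fn
    return fn

state = {"learning_rate": 0.1}

@action
def halve_learning_rate():
    """Example action: shrink the learning rate of a running job."""
    state["learning_rate"] /= 2
    return state["learning_rate"]

# The UI would dispatch by name; here we simulate one invocation:
ACTIONS["halve_learning_rate"]()  # state["learning_rate"] is now 0.05
```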

Convert machine learning models to APIs

All you need to do is specify the schema of your model's API and let Neptune do the DevOps work for you. In most cases it's enough to choose from a set of typical predefined schemas for binary or multiclass classification, regression, object detection and more.
$ neptune model build
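For a sense of what such a schema specifies, here is a hypothetical sketch of one for binary classification; the field names and structure are assumptions for illustration, not Neptune's exact schema format.

```python
# Hypothetical API schema for a binary classifier: the input features
# the endpoint accepts and the output it returns. Field names are
# illustrative, not Neptune's actual schema format.
schema = {
    "problem_type": "binary_classification",
    "input": {"features": {"type": "array", "items": "float"}},
    "output": {"probability": {"type": "float", "range": [0.0, 1.0]}},
}
```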

Deploy machine learning models in the cloud

Instantly bring to life any number of machine learning model instances in a public cloud or your local environment. Neptune governs the deployed models across multiple environments and helps you keep track of the development, staging and production phases of your machine learning models’ lifecycle.
$ neptune model deploy

Update and test deployed machine learning models

Need to maintain a lot of models in production? Neptune helps you manage their versioning and dependencies using Docker images and a queryable model repository.
Neptune also takes care of traffic management and lets you scale your models to multiple instances and customize their load balancing policy. A/B testing and rolling updates have never been easier.

Take advantage of leading-edge hardware

Want to use NVIDIA® Tesla® K80 GPUs to train your deep learning models? Or Google’s infrastructure for your computations? It’s a snap! Just select your machine type and send your computations to the cloud.
$ neptune send --worker gcp-medium
$ neptune send --worker gcp-gpu-large

Pay per second

Neptune comes with a data science-friendly, per-second billing model. Forget about expensive hourly rates and pay only for what you use - the exact time your experiment runs.
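The difference is easy to see with a short run. The machine rate below is illustrative, not an actual Neptune price:

```python
# Per-second vs. hourly billing for a 10-minute experiment on a
# machine priced at $0.90/hour (rate is illustrative).
rate_per_hour = 0.90
run_seconds = 10 * 60

per_second_cost = rate_per_hour / 3600 * run_seconds   # pay for 600 s
hourly_cost = rate_per_hour * -(-run_seconds // 3600)  # rounded up to 1 h

print(round(per_second_cost, 2), hourly_cost)  # 0.15 0.9
```

Under per-second billing the 10-minute run costs $0.15; under hourly billing the same run is rounded up to a full hour and costs $0.90.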

Use pre-configured compute environments

Tired of manually configuring your remote machines or installing missing libraries?
Don’t waste your time on DevOps work! Use one of Neptune’s pre-configured environments (Docker images) and focus on data science instead.
$ neptune send --environment tensorflow
$ neptune send --environment keras
$ neptune send --environment theano

Send your training processes to the cloud

Move training processes off your desktop and onto powerful machines in the cloud. Run more experiments at the same time.
$ neptune send

Share your ideas, results and infrastructure with your team

How about sharing your latest machine learning model with your team or participating in a Kaggle competition with your friends?
Use Neptune and start exchanging your experiments’ source code, Jupyter notebooks, metrics and datasets. Organize your work by project, invite your team members and use shared compute resources.

Track and reproduce your work

Neptune automatically tracks all your experiments, so you never lose work and can always reproduce your results. In addition to tracking the source code (git integration), Neptune tracks all dependencies between experiments, hyper-parameters, execution history, logs and environmental requirements.

Manage your machine learning models’ lifecycle in production

Want to understand how a particular machine learning model was created? Or see the source code or the dataset used to train it?
Nothing could be easier. Behind the scenes, Neptune tracks the history of all your machine learning models in production and the experiments that were conducted to train them.