We offer a new way to easily deploy your deep neural networks: Deep Learning DS Cloud Deployments. With Cloud Deployments, you can create an inference webservice from a trained neural network model with just a few clicks. Simply navigate to the new Deployments* category and create a deployment instance, specifying the network to deploy. Once it is created, click “run”: the neurons start firing in our cloud, and you receive a URL for accessing the running inference service.
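To give you an idea of what calling such a service could look like, here is a minimal sketch in Python using the requests library. The endpoint path, header name, and payload format shown here are only illustrative assumptions; the actual interface is defined by your deployed model and the Inference DS API.

```python
import requests

# URL shown by Deep Learning DS once the deployment is running
# (hypothetical example value)
DEPLOYMENT_URL = "https://deployments.example.com/my-model"

# Access token chosen when the deployment instance was created
ACCESS_TOKEN = "my-secret-token"

# Send an image to the inference service; the "/predict" path,
# the Authorization header, and the payload layout are assumptions.
with open("sample.png", "rb") as f:
    response = requests.post(
        f"{DEPLOYMENT_URL}/predict",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        files={"image": f},
        timeout=30,
    )

response.raise_for_status()
print(response.json())  # e.g. predicted classes and confidences
```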

What happens behind the scenes?

To offer Cloud Deployments, we rely on AWS Elastic Container Service (ECS) and automatically handle the creation of tasks, load balancer rules, and cluster scaling for you. Each cloud deployment is hosted in its own container and is only accessible with an access token that you specify when creating the deployment instance. The container launches a modified version of Inference DS that has been specifically adapted for automated cloud deployments.
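For the curious, the simplified sketch below illustrates the kind of ECS provisioning such automation involves, shown here with boto3. This is not our actual implementation; the container image, cluster name, region, and networking values are placeholders for illustration only.

```python
import boto3

ecs = boto3.client("ecs", region_name="eu-central-1")  # region is a placeholder

# Register a task definition that runs the (hypothetical) inference container.
task_def = ecs.register_task_definition(
    family="inference-ds-deployment",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="1024",
    memory="2048",
    containerDefinitions=[{
        "name": "inference-ds",
        "image": "registry.example.com/inference-ds:cloud",  # placeholder image
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
    }],
)

# Create a service that keeps one task running behind a load balancer.
ecs.create_service(
    cluster="deployments-cluster",  # placeholder cluster name
    serviceName="my-model-deployment",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    desiredCount=1,
    launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet
        "assignPublicIp": "ENABLED",
    }},
    loadBalancers=[{
        "targetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/example",
        "containerName": "inference-ds",
        "containerPort": 8080,
    }],
)
```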

What comes next?

Cloud Deployments is the first step towards making it easier to deploy deep neural networks for production use. In the future, we will add functionality to manage Inference DS instances running on-premises, allowing you to update the deployed models directly from Deep Learning DS.

* If you don’t see the Deployments category, please contact your Deep Learning DS representative or distributor to unlock the new features.