Viking Analytics is a Swedish company operating from Gothenburg, offering predictive maintenance and smart condition monitoring solutions. Daeploy is the culmination of more than 4 years of delivering machine learning solutions to industry. This is the story of our journey and of Daeploy!
A typical machine learning project for Industry 4.0 clients
Customers often come to us with pain points they want to relieve by leveraging their data and the power of machine learning. A typical project often starts with a discovery phase in which we look at customer data together to understand the data and evaluate the likelihood of finding a solution.
If this initial data readiness analysis shows potential, a proof-of-concept phase starts where the focus is on proving the feasibility of a solution. The tool of choice in this step is often Jupyter Notebooks.
A successful proof of concept is followed by a minimum viable product (MVP). This is where the challenge starts, as taking a project from data to an application is a demanding task that spans several skill sets. While the journey from data to proof of concept is machine-learning heavy, the MVP phase requires a wide range of other skill sets such as software development and DevOps.

Ideally, one needs a team of data scientists, software developers, DevOps engineers, and, if necessary, UI/UX designers to complete the journey to an application successfully. In practice, however, it is common that data scientists or ML engineers are tasked with converting their models and algorithms to applications.
Our mission: enabling data scientists to be data scientists!
We wanted to enable data scientists to focus on what they love to do: working with and understanding data, and training, evaluating, and optimizing models, while spending as little time and effort as possible on productizing and deploying their solutions. We identified two key problems that needed to be addressed:
1. Adding functionalities
As the famous “Hidden Technical Debt in Machine Learning Systems” paper showed, the ML code is often only a small part of an ML project. Many other functionalities are required to enable the ML code to deliver its value reliably.

Data scientists around the world spend days and weeks adding functionality to their models, such as REST APIs, configuration handling, and notifications.
To address this, we have developed the Daeploy software development kit (SDK), which provides loads of functionality out of the box.
2. Deployment
Software deployment is the process of turning code into a running application on a host and making it available for use. This is traditionally the domain of DevOps engineers. To deploy a solution successfully, one needs to understand concepts such as access control, security and authentication, proxies, SSL certificates, CI/CD pipelines, code containerization, service management, and more.
The variety of potential deployment targets adds another dimension to the complexity of ML deployment. The target can be in the cloud or on-premise. To deploy in the cloud, one must be familiar with a variety of cloud providers, their terminology, and their user interfaces. On-premise deployment depends on the customer's IT organization and its readiness for hosting software solutions.
We address the deployment challenge with the Daeploy manager, a Docker image that runs on the target machine of your applications. The target can be any machine that:
- Is able to run Docker
- Allows access to the Docker daemon
Given that the above requirements are satisfied, the target can be any machine with any operating system. It could be your personal computer, a server in a factory, a virtual machine in the cloud, or a Raspberry Pi. The Daeploy manager handles security and authentication and provides a secure API to deploy your code, extract logs from running services, get notifications, and much more.
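To give an idea of what this looks like in practice, here is a sketch of starting the manager on a target machine; the image tag and port mappings shown here are illustrative, and the exact command is described in the getting-started article linked at the end of this post:
$ docker run -d \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 80:80 -p 443:443 \
    daeploy/manager:latest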
Let us have a closer look at some of the features of Daeploy SDK and the Daeploy manager.
Daeploy SDK
The Daeploy SDK is used when writing the code that should run as a service. It is installed as a Python package in your Python environment and strives to make the process of creating these services as simple as possible.
$ pip install daeploy
Creating an API
One common use case is to convert a trained model to a prediction service with an API that can receive data and produce predictions. Using the Daeploy SDK, you can convert any Python function to an API with a single decorator.
# Let's import what we need from the Daeploy SDK
import logging
from daeploy import service

logger = logging.getLogger(__name__)

# Decorate a Python function with service.entrypoint to create an API endpoint!
@service.entrypoint
def hello(name: str) -> str:
    logger.info(f"Greeting someone with the name: {name}")
    return f"hello {name}"

if __name__ == "__main__":
    service.run()
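Once the service is deployed, hello is exposed as an HTTP endpoint that accepts a JSON body such as {"name": "world"}. The type hints on the function are used to validate the input and to generate interactive API documentation for the service.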
Alarms and notifications
Monitoring models, or running services in general, is an important aspect of creating a reliable solution, and the use cases are endless. One common pain point is the degradation of a trained model over time. The Daeploy SDK comes with built-in notification functionality, so you can get notifications on the manager dashboard and to your email.
# notify and Severity come from the communication module of the SDK
from daeploy.communication import notify, Severity

@service.entrypoint
def hello(name: str) -> str:
    if name == "world":
        notify(
            msg="Someone is trying to greet the World!!",
            severity=Severity.WARNING,
            emails=["your@email.com"],
        )
    logger.info(f"Greeting someone with the name: {name}")
    return f"hello {name}"
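With this in place, every call to hello with the name "world" triggers a WARNING-level notification that shows up on the manager dashboard and is emailed to the listed addresses.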
Building applications using a microservices architecture
Creating an application often requires multiple services, for example:
- Database connector service: a service that extracts or saves data from or to a dedicated database.
- Prediction service: a service that gets data as input and returns predictions.
- Business logic service: a service that contains the business-related logic and is responsible for converting the predictions to business-relevant actions.
- Dashboard service: a service that provides visualization or interaction with the application.
Using a microservices architecture has many benefits over a monolithic application, such as flexibility, scalability, and separation of concerns, to name a few. However, there are drawbacks too. Managing the communication between services is one example of the additional overhead a microservices architecture introduces.
If using REST, one needs to set up a server (for example, Flask) for each service, create and manage endpoints, and handle any exceptions that occur in the API calls. The Daeploy SDK makes communication between services as easy as a normal Python function call. Here is an example of using call_service to call the hello function from another service:
# Import call_service from the communication package
from daeploy.communication import call_service

def greet_the_world() -> str:
    # No need for a manual API call. Just call any entrypoint from other
    # services using the call_service function.
    response = call_service(
        service_name="greeting_service",
        entrypoint_name="hello",
        arguments={"name": "world"},
    )
    logger.info(f"{response}")
    return f"{response}"
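Under the hood, call_service takes care of making the corresponding authenticated HTTP request to the hello entrypoint of greeting_service, so no request-handling code needs to be written by hand.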
Daeploy manager
The Daeploy manager aims to simplify the deployment process. Let's look at a few of the ways it reduces the complexity of deploying a service.
Security and authentication
Handling access, security, and authentication is an essential part of deploying an application to a remote server, be it in the cloud or on a customer's premises, especially when working with business-sensitive data. The Daeploy manager comes with built-in token-based authentication and HTTPS functionality.
One can communicate directly with the Daeploy manager's REST API or use the command-line interface (CLI) that comes with the SDK. To use the CLI, all you need is a terminal with an activated Python environment in which the Daeploy SDK is installed. To communicate with the Daeploy manager, first use the login command to authenticate:
$ daeploy login
Enter daeploy host: https://your-host
Logging in to Daeploy instance at https://your-host
Username: admin
Password: ***************
Changed host to https://your-host
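After a successful login, the CLI remembers the active host and your session token, so subsequent commands are authenticated automatically.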
You can also use the CLI to create tokens with different validity periods. For example, you can create a token valid for five days as below:
$ daeploy token 5
Active host: https://your-host
Use the token in the header {"Authorization": "Bearer token"}, for further details see the docs
eyJ0eXAiOiJKV1QiLCJhb.....
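As an illustration, with such a token any HTTP client can call the entrypoints of a deployed service through the manager. Using the greeting service from the SDK examples above, a request could look roughly like this (the exact URL layout is described in the docs):
$ curl -X POST https://your-host/services/greeting_service/hello \
    -H "Authorization: Bearer <token>" \
    -H "Content-Type: application/json" \
    -d '{"name": "world"}'
"hello world"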
Containerization
It is becoming an industry standard to deploy services using container technologies such as Docker. Containers have several advantages, such as portability and consistent operation, and they are much more lightweight than virtual machines (VMs).
The Daeploy manager converts Python code to a Docker image, installs all the required dependencies, and runs it as a service, all with a single command:
$ daeploy deploy [options] <service-name> <version> <source>
The source can be your local development folder, a git repo, or even an existing Docker image!
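For example, deploying version 1.0.0 of the greeting service from a local folder could look like this (the service name and path are just placeholders):
$ daeploy deploy greeting_service 1.0.0 ./greeting_service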
Shadow deployment
Shadow deployment is a handy feature that allows testing new versions of an existing service without disrupting production. In a shadow deployment, a new version is deployed alongside the current version. The new version receives all the inbound traffic to the service, but its responses are ignored.

It is easy to switch the main version of the service with the assign command. For more information, see the documentation.
$ daeploy assign <service-name> <version>
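Putting the two commands together, a shadow-deployment workflow for the greeting service could look like this (versions are illustrative): deploying a second version under an existing service name runs it as a shadow, and assign then promotes it to the main version.
$ daeploy deploy greeting_service 2.0.0 ./greeting_service
$ daeploy assign greeting_service 2.0.0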
To learn more about starting the manager, check out the getting started article.
We are excited to release Daeploy and make it available for free to the community. We look forward to seeing how it helps you take your ideas to running applications in no time. Happy Daeploying!