
A360 Deploy provides a simple wizard that automatically packages ML models into Docker containers exposing REST APIs for model scoring and monitoring. The wizard generates YAML based on the Starpack specification, which is then used to deploy automatically via GitOps.
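To make that concrete, a Starpack manifest might look like the sketch below. The field names here are illustrative assumptions, not the published Starpack specification:

```yaml
# Hypothetical Starpack-style manifest -- field names are illustrative,
# not the actual Starpack schema.
apiVersion: starpack/v1
kind: ModelPackage
metadata:
  name: churn-classifier
  version: "1.4.0"
model:
  framework: scikit-learn==1.3.2     # pinned for training reproducibility
  artifact: models/churn-clf.joblib  # serialized model shipped with the package
serving:
  rest:
    path: /v1/score                  # REST scoring endpoint the container exposes
    port: 8080
  monitoring:
    metrics: [latency, drift]
```

Because the manifest is plain, human-readable YAML, it can be committed to Git and picked up by a GitOps pipeline like any other deployment artifact.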
Accelerate AI Pipeline Integration
Compute resource management, training reproducibility, and API integration errors are all commonly cited roadblocks businesses encounter on the road to fully integrating AI into their IT stack. A360 removes these roadblocks by integrating with your existing CI/CD pipeline tools.
Deploying AI applications to cloud and on-prem environments is simplified with A360 Deploy. Through the Deployment Hub interface, engineers can use a Model Deployment as Code approach to production model serving. Configuration is automated and repeatable via Kubernetes container orchestration.
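A360 generates and applies this configuration automatically, but as a sketch of the underlying primitive, a minimal Kubernetes Deployment for a packaged model container might look like this (the service name and image are assumptions for illustration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: churn-classifier            # hypothetical model service name
spec:
  replicas: 2                       # scale serving horizontally by raising this
  selector:
    matchLabels:
      app: churn-classifier
  template:
    metadata:
      labels:
        app: churn-classifier
    spec:
      containers:
        - name: model-server
          image: registry.example.com/models/churn-classifier:1.4.0
          ports:
            - containerPort: 8080   # REST scoring endpoint
```

Because the manifest is declarative, the same file produces an identical deployment in development, validation, and production clusters.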

Scale AI Application Serving to Production
80% of AI projects fail to move to a secure production environment. Those that make it usually take 6 to 9 months on average to go from development to deployment because Jupyter Notebooks must be recoded into Python microservices. A360 cuts the path from experiment to production from months to days with automated packaging and deployment.
Why Businesses Use A360 Deploy for Model Deployment
Centralized Deployment
Deploy models at scale with Starpack, which keeps your model artifacts with the model from training to serving. Easily audit training and test data sets, along with model performance metrics, across all model deployments in cloud, on-prem, and edge environments.
Open and Flexible
A360 Deploy with Starpack technology can be integrated into any enterprise CI/CD toolchain and ML toolchain, including data and ML platforms such as Snowflake, Databricks, C3.ai, H2O.ai, Domino, SageMaker, Colab, and Azure ML.
Versioning
A360 Deploy automatically tracks the model development environment along with ML package and framework versions. Roll back your deployments to any previous version.
GitOps Flow
Your team already uses GitOps for infrastructure and application deployments; now they can use it for machine learning models, too!
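The mechanics behind a GitOps model rollout and rollback are plain Git history operations. Here is a minimal self-contained demonstration; the manifest name and contents are hypothetical, and in a real setup a GitOps controller such as Argo CD or Flux watches the repository and syncs each commit to the cluster:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q

# Commit v1 of a (hypothetical) model deployment manifest.
echo "modelVersion: 1.0.0" > model.yaml
git add model.yaml
git -c user.name=ci -c user.email=ci@example.com commit -qm "deploy model 1.0.0"

# Promote v2: in a GitOps setup, merging this commit triggers the deployment.
echo "modelVersion: 2.0.0" > model.yaml
git -c user.name=ci -c user.email=ci@example.com commit -qam "deploy model 2.0.0"

# Roll back: reverting the commit restores the previous model version,
# and the controller syncs the cluster back to match.
git -c user.name=ci -c user.email=ci@example.com revert --no-edit HEAD
cat model.yaml   # modelVersion: 1.0.0
```

Every deployment and rollback leaves an auditable commit in history, which is what makes the flow attractive for regulated environments.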
Deploy AI Anywhere
Deploy models across development, validation, and production environments on cloud, on-prem, edge and hybrid (anywhere Kubernetes or Docker images can run). Model serving is managed through a central hub.
Seamless CI/CD Integration
We believe that ML engineers should be able to deploy and manage models as easily as DevOps manages the rest of your software. A360 uses Git and CI/CD tools to make ML operations faster, smarter, and cheaper.
Reproduce Models Easily
A360 Starpack, a human-readable YAML format, can be used to easily repackage and reproduce models and artifacts from any version, reducing downtime and minimizing the risk of application failure.
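Because the manifest records environment and package versions, reproducing a model comes down to reading those pins back out. A minimal sketch in Python, assuming hypothetical manifest fields (this is not the actual Starpack schema):

```python
# A parsed Starpack-style manifest. Field names are hypothetical;
# in practice this would be loaded from the versioned YAML file.
manifest = {
    "name": "churn-classifier",
    "version": "1.4.0",
    "environment": {
        "python": "3.10",
        "packages": {"scikit-learn": "1.3.2", "numpy": "1.26.4"},
    },
}

def requirements_from_manifest(m):
    """Rebuild a pinned requirements list from the versions the manifest recorded."""
    return [f"{pkg}=={ver}" for pkg, ver in sorted(m["environment"]["packages"].items())]

def image_tag(m, registry="registry.example.com"):
    """Derive the container tag used to repackage this exact model version."""
    return f"{registry}/models/{m['name']}:{m['version']}"

print(requirements_from_manifest(manifest))
# ['numpy==1.26.4', 'scikit-learn==1.3.2']
print(image_tag(manifest))
# registry.example.com/models/churn-classifier:1.4.0
```

Pinning every dependency this way is what lets any historical version be rebuilt byte-for-byte instead of approximated from memory.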
Manage Compute Resources
Managing the infrastructure for scientific computing is a hassle. A360 does the hard work for you and provides options to help select the optimal amount of resources needed for your unique AI workload.
Cost Effective
A360 AI gives data scientists and machine learning engineers visibility and control over compute resources as they deploy models into production. They can see their allocated capacity, how much has already been used, and how much remains for new model deployments. A360 also recommends an instance type and defaults to that selection automatically.
Scales to Production
It’s not enough to deploy a single AI model to production. Scaling AI to your business means being able to harness the power of AI for each problem you face. Deploy models when you need them, with speed.
Helps Meet Governance Requirements
Viewing all of your model deployments under a single pane of glass increases the visibility of AI within your organization. Automated tracking helps meet regulatory and compliance requirements.