The A360 Model Registry tracks every model across all of its versions, experiments, and runs, with full log capture of each Jupyter Notebook execution. Easily review and audit key model performance metrics and select the optimal experiment for packaging and deployment.
Import Models
Import your current models and immediately take advantage of A360's experimentation and collaboration features. Or start with cutting-edge, industry-leading open-source models to fine-tune and perform transfer learning.
Use the built-in workflows for model review, approvals, and auditing, which incorporate full Jupyter Notebook execution logs and track key model performance metrics.
Why Businesses Use A360 Build for Responsible and Cost-Effective Model Development
Environments Are Easy to Access
Seamless access to model development environments and data significantly reduces model development time. Spin up workspaces and Jupyter notebooks backed by powerful compute resources sized to your training needs with the click of a button. Data scientists don't have to worry about DevOps or infrastructure.
Advanced Model Development
Choose a Jupyter image that comes preinstalled with the necessary machine learning and deep learning libraries, such as PyTorch, TensorFlow, and scikit-learn. These images support Optuna hyperparameter search right out of the box, and parallel processing and Apache Spark support accelerate model training.
Model training code from Jupyter Notebooks and model artifacts are saved for every experiment and every run, ensuring organization and reproducibility. Never lose your place.
With the A360 Model Development Kit, you have a powerful toolset that automates the usual manual steps in building, experimenting with, and testing AI models. Execution scripts and the state of your data are automatically stored as snapshots.
Cloud Data Support
Big data isn't a big deal with A360's built-in Snowflake connectors. The A360 MDK connects to AWS S3 buckets and other cloud data stores without exposing credentials in notebooks.
Model Review
A model review workflow allows data scientists to collaborate with peers and business users to review their models prior to deployment. Ensure production-level software code review practices and look forward to quality deployments.
Tracking and Versioning
Full tracking and versioning of models and experiments allows data scientists to share work and review insights. Reproducibility means that you’ll never lose your work and can rewind at any time.
Security and Access
Project-based access controls allow you to assign individual collaborators to projects. Ensure the security of your data, workspaces, experiments, and models.
Makes AI Teams Dynamic
Get your team of data scientists working together in common workspaces to quickly iterate and improve the accuracy of AI models. One data scientist can continue exactly where another left off.
Share experiments and run results with other project collaborators to explore and iterate on findings within teams.