This page will help you quickly deploy Instill Core on your local machine.
#🔮 Instill Core
Instill Core is a full-stack AI platform that allows you to create versatile data pipelines with Pipeline, orchestrate unstructured data via Instill Artifact, and leverage Model to serve AI models on your local machine or remote instances. The following instructions will guide you through the setup process using Docker Compose.
#Prerequisites
Before getting started, please ensure you meet the following prerequisites:
- macOS or Linux - Instill Core is compatible with macOS and Linux.
- Windows - Instill Core can be set up on Windows via Windows Subsystem for Linux (WSL2). To learn more about setting up Instill Core on Windows, please refer to our Docker Compose guide.
- Docker and Docker Compose - Instill Core uses Docker Compose to manage services locally. See the official installation instructions and review the Docker Resource Requirements for optimal setup.
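Before launching, it can save time to confirm both tools are actually on your PATH. The snippet below is a minimal pre-flight sketch, not part of Instill Core's tooling; the require_cmd helper is a name chosen here purely for illustration.

```shell
#!/bin/sh
# Hypothetical pre-flight sketch (not part of Instill Core): verify the
# tools this quickstart relies on are installed before running make.
require_cmd() {
  # Succeed only if the named command is available on PATH.
  command -v "$1" >/dev/null 2>&1 || { echo "missing: $1" >&2; return 1; }
}

if require_cmd docker; then
  docker --version                                   # Docker Engine
  docker compose version 2>/dev/null \
    || echo "Docker Compose v2 plugin not found" >&2 # Compose v2 plugin
fi
```

If either check fails, install the missing tool via the official instructions linked above before continuing.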
#Launch
To fire up Instill Core on your local machine or remote instance, simply run the following commands:
git clone -b v0.50.3-beta https://github.com/instill-ai/instill-core.git && cd instill-core
make all
Once all services are up and running, the Console UI will be available at http://localhost:3000.
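If you are scripting the launch, a small polling loop can block until the Console responds before proceeding. This is a generic sketch, not an official Instill Core utility; the wait_for_url helper and the default URL are assumptions based on the address above.

```shell
#!/bin/sh
# Hypothetical helper (not part of Instill Core): poll a URL until it
# responds, so a script can wait for the Console UI to become reachable.
wait_for_url() {
  url="$1"
  tries="${2:-60}"   # give up after this many attempts
  i=0
  until curl -sSf -o /dev/null "$url" 2>/dev/null; do
    i=$((i + 1))
    [ "$i" -ge "$tries" ] && return 1
    sleep 5
  done
  return 0
}

# Default Console address from this quickstart; override CONSOLE_URL as needed.
wait_for_url "${CONSOLE_URL:-http://localhost:3000}" 1 \
  || echo "Console not reachable yet" >&2
```

A longer attempt count (the second argument) is more appropriate in CI, where the first launch can take several minutes while images are pulled.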
If you switch to a different version of Instill Core, you will need to rebuild the Docker images by running make build-latest instead of make all.
#Shutdown
To tear down and clean up all Instill Core resources, run:
make down
Please refer to our Deployment Guide for further information on how to deploy Instill Core using Docker Compose, Kubernetes with Helm, or our Instill CLI.
#Next Steps
Now that you're set up with Instill Core, you're ready to dive deeper into the platform's capabilities.
#Explore Our Key Services
The three core services that underpin our full-stack AI solution are:
- Artifact - Manage and orchestrate all stateful data in Catalog, our augmented data catalog for unstructured data
- Pipeline - Create versatile AI-powered pipelines that seamlessly integrate with your data and applications
- Model - Serve, orchestrate and monitor AI models
#See Our Examples
Explore, test, modify and draw inspiration from the diverse range of AI products you can build with our services on our Examples page. This includes:
- Pipelines that are API-ready for external integrations
- Servable models that are ready to be deployed on Model
- Tutorials that give you step-by-step guidance on how to build your own AI applications
- Instill AI Cookbooks that demonstrate how to solve real-world problems with our Python SDK
#Read Our Blog
Stay up-to-date with our latest product updates, AI insights, and tutorials by visiting our Blog.
#Support
Please see our Support page for more information on how to get help.