Overview
Spice.ai Enterprise is a portable (<150MB) compute engine built in Rust for data-intensive and intelligent applications. It accelerates SQL queries across databases, data warehouses, and data lakes using Apache Arrow, DataFusion, DuckDB, or SQLite. Integrated and co-deployed with data-intensive applications, Spice materializes and accelerates data from object storage, ensuring sub-second query performance and resilient AI applications. Deployable as a container on AWS ECS, EKS, or hybrid cloud & edge, it includes enterprise licensing, support, and SLAs.
Note: Spice.ai Enterprise requires an existing commercial license. For details, please contact sales@spice.ai.
Highlights
- Unified data query and AI engine accelerating SQL queries across databases, data warehouses, and data lakes. Delivers sub-second query performance while grounding mission-critical AI applications with real-time context to minimize errors and hallucinations.
- Advanced AI and retrieval tools, featuring vector and hybrid search, text-to-SQL, and LLM memory, for data-grounded AI applications; more than 25 data connectors enable federated queries and real-time applications.
- Deployable as a container on AWS ECS, EKS, or on-premises, with dedicated support and SLAs for scalable, secure integration into any architecture.
Pricing
Vendor refund policy
Refunds for Spice.ai Enterprise container subscriptions are not available after activation, as usage begins immediately upon deployment. Ensure compatibility with AWS ECS, EKS, or on-premises setups before purchase. For billing inquiries, contact AWS Marketplace support or Spice AI directly at support@spice.ai.
Delivery details
Container Deployment
- Amazon ECS
- Amazon EKS
- Amazon ECS Anywhere
Container image
Containers are lightweight, portable execution environments that wrap server application software in a filesystem that includes everything it needs to run. Container applications run on supported container runtimes and orchestration services, such as Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS). Both eliminate the need for you to install and operate your own container orchestration software by managing and scheduling containers on a scalable cluster of virtual machines.
Version release notes
Spice v1.8.0-enterprise (Oct 6, 2025)
Spice v1.8.0-enterprise delivers major advances in data writes, scalable vector search, and, now in preview, managed acceleration snapshots for fast cold starts. This release introduces write support for Iceberg tables using standard SQL INSERT INTO, partitioned S3 Vector indexes for petabyte-scale vector search, and a preview of the AI SQL function for direct LLM integration in SQL. It also includes reliability improvements and the v3.0.3 release of the Spice.js Node.js SDK.
What's New in v1.8.0-enterprise
Iceberg Table Write Support (Preview)
Append Data to Iceberg Tables with SQL INSERT INTO: Spice now supports writing to Iceberg tables and catalogs using standard SQL INSERT INTO statements. This enables data ingestion, transformation, and pipeline use cases - no Spark or external writer required.
- Append-only: Initial version targets appends; no overwrite or delete.
- Schema validation: Inserted data must match the target table schema.
- Secure by default: Writes are only enabled for datasets or catalogs explicitly marked with access: read_write.
Example Spicepod configuration:
catalogs:
  - from: iceberg:https://glue.ap-northeast-3.amazonaws.com/iceberg/v1/catalogs/111111/namespaces
    name: ice
    access: read_write
datasets:
  - from: iceberg:https://iceberg-catalog-host.com/v1/namespaces/my_namespace/tables/my_table
    name: iceberg_table
    access: read_write
Example SQL usage:
-- Insert from another table
INSERT INTO iceberg_table SELECT * FROM existing_table;

-- Insert with values
INSERT INTO iceberg_table (id, name, amount) VALUES (1, 'John', 100.0), (2, 'Jane', 200.0);

-- Insert into catalog table
INSERT INTO ice.sales.transactions VALUES (1001, '2025-01-15', 299.99, 'completed');
Note: Only Iceberg datasets and catalogs with access: read_write support writes. Internal Spice tables and other connectors remain read-only.
Learn more in the Iceberg Data Connector documentation.
Acceleration Snapshots for Fast Cold Starts (Preview)
Bootstrap Managed Accelerations from Object Storage: Spice now supports managed acceleration snapshots in preview, enabling datasets accelerated with file-based engines (DuckDB or SQLite) to bootstrap from a snapshot stored in object storage (such as S3) if the local acceleration file does not exist on startup. This dramatically reduces cold start times and enables ephemeral storage for accelerations with persistent recovery.
Key features:
- Rapid readiness: Datasets can become ready in seconds by downloading a pre-built snapshot, skipping lengthy initial acceleration.
- Hive-style partitioning: Snapshots are organized by month, day, and dataset for easy retention and management.
- Flexible bootstrapping: Configurable fallback and retry behavior if a snapshot is missing or corrupted.
Example Spicepod configuration:
snapshots:
  enabled: true
  location: s3://some_bucket/some_folder/  # Folder for storing snapshots
  bootstrap_on_failure_behavior: warn      # Options: warn, retry, fallback
  params:
    s3_auth: iam_role                      # All S3 dataset params accepted here
datasets:
  - from: s3://some_bucket/some_table/
    name: some_table
    params:
      file_format: parquet
      s3_auth: iam_role
    acceleration:
      enabled: true
      snapshots: enabled                   # Options: enabled, disabled, bootstrap_only, create_only
      engine: duckdb
      mode: file
      params:
        duckdb_file: /nvme/some_table.db
How it works:
- On startup, if the acceleration file does not exist, Spice checks the snapshot location for the latest snapshot and downloads it.
- Snapshots are stored as: s3://some_bucket/some_folder/month=2025-09/day=2025-09-30/dataset=some_table/some_table_<timestamp>.db
- If no snapshot is found, a new acceleration file is created as usual.
- Snapshots are written after each refresh (unless configured otherwise).
Supported snapshot modes:
- enabled: Download and write snapshots.
- bootstrap_only: Only download on startup, do not write new snapshots.
- create_only: Only write snapshots, do not download on startup.
- disabled: No snapshotting.
Note: This feature is only supported for file-based accelerations (DuckDB or SQLite) with dedicated files.
Why use acceleration snapshots?
- Faster cold starts: Skip waiting for full acceleration on startup.
- Ephemeral storage: Use fast local disks (e.g., NVMe) for acceleration, with persistent recovery from object storage.
- Disaster recovery: Recover from federated source outages by bootstrapping from the latest snapshot.
Learn more in the Acceleration Snapshots documentation.
Partitioned S3 Vector Indexes
Efficient, Scalable Vector Search with Partitioning: Spice now supports partitioning Amazon S3 Vector indexes and scatter-gather queries using a partition_by expression in the dataset vector engine configuration. Partitioned indexes enable faster ingestion, lower query latency, and scale to billions of vectors.
Example Spicepod configuration:
datasets:
  - name: reviews
    vectors:
      enabled: true
      engine: s3_vectors
      params:
        s3_vectors_bucket: my-bucket
        s3_vectors_index: base-embeddings
        partition_by:
          - 'bucket(50, PULocationID)'
    columns:
      - name: body
        embeddings:
          from: bedrock_titan
      - name: title
        embeddings:
          from: bedrock_titan
See the Amazon S3 Vectors documentation for details.
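Queries against a partitioned index go through the same search interfaces as any other vector-enabled dataset. As a minimal sketch (reusing the /v1/tools/search endpoint and payload shape shown in the Breaking Changes section below, and the default HTTP port 8090), a search over the reviews dataset configured above might look like:

# Hypothetical search against the partitioned "reviews" index defined above;
# the endpoint and payload follow the example in the Breaking Changes section.
curl -XPOST http://localhost:8090/v1/tools/search \
  -d '{ "datasets": ["reviews"], "text": "quiet rides near the airport" }'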
AI SQL function for LLM Integration (Preview)
LLMs Directly In SQL: A new asynchronous ai SQL function enables direct calls to LLMs from SQL queries for text generation, translation, classification, and more. This feature is released in preview and supports both default and model-specific invocation.
Example Spicepod model configuration:
models:
  - name: gpt-4o
    from: openai:gpt-4o
    params:
      openai_api_key: ${secrets:openai_key}
Example SQL usage:
-- basic usage with default model
SELECT ai('hi, this prompt is directly from SQL.');

-- basic usage with specified model
SELECT ai('hi, this prompt is directly from SQL.', 'gpt-4o');

-- Using row data as input to the prompt
SELECT ai(concat_ws(' ', 'Categorize the zone', Zone, 'in a single word. Only return the word.')) AS category
FROM taxi_zones
LIMIT 10;
Learn more in the SQL Reference AI documentation.
Spice.js v3.0.3 SDK
Spice.js v3.0.3 Released: The official Spice.ai Node.js/JavaScript SDK has been updated to v3.0.3, bringing cross-platform support, new APIs, and improved reliability for both Node.js and browser environments.
- Modern Query Methods: Use sql(), sqlJson(), and nsql() for flexible querying, streaming, and natural language to SQL.
- Browser Support: SDK now works in browsers and web applications, automatically selecting the optimal transport (gRPC or HTTP).
- Health Checks & Dataset Refresh: Easily monitor Spice runtime health and trigger dataset refreshes on demand.
- Automatic HTTP Fallback: If gRPC/Flight is unavailable, the SDK falls back to HTTP automatically.
- Migration Guidance: v3 requires Node.js 20+, uses camelCase parameters, and introduces a new package structure.
Example usage:
import { SpiceClient } from '@spiceai/spice';

const client = new SpiceClient(apiKey);
const table = await client.sql('SELECT * FROM my_table LIMIT 10');
console.table(table.toArray());
See the Spice.js SDK documentation for full details, migration tips, and advanced usage.
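The sqlJson() and nsql() methods mentioned above can be exercised similarly. A rough sketch, assuming both accept a query string like sql() does (check the Spice.js SDK documentation for exact signatures and return types):

import { SpiceClient } from '@spiceai/spice';

const client = new SpiceClient(apiKey);

// JSON results rather than an Arrow table (assumed signature; see SDK docs)
const rows = await client.sqlJson('SELECT id, name FROM my_table LIMIT 5');
console.log(rows);

// Natural language to SQL (assumed signature; see SDK docs)
const generated = await client.nsql('How many rows are in my_table?');
console.log(generated);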
Additional Improvements
- Reliability: Improved logging, error handling, and network readiness checks across connectors (Iceberg, Databricks, etc.).
- Vector search durability and scale: Refined logging, stricter default limits, safeguards against index-only scans and duplicate results, and always-accessible metadata for robust queryability at scale.
- Cache behavior: Tightened cache logic for modification queries.
- Full-Text Search: FTS metadata columns now usable in projections; max search results increased to 1000.
- RRF Hybrid Search: Reciprocal Rank Fusion (RRF) UDTF enhancements for advanced hybrid search scenarios.
Contributors
Breaking Changes
This release introduces two breaking changes related to search observability and tooling.
Firstly, the document_similarity tool has been renamed to search, with a corresponding change to the tracing of these tool calls:
## Old: v1.7.1
>> spice trace tool_use::document_similarity
>> curl -XPOST http://localhost:8090/v1/tools/document_similarity \
  -d '{ "datasets": ["my_tbl"], "text": "Welcome to another Spice release" }'

## New: v1.8.0
>> spice trace tool_use::search
>> curl -XPOST http://localhost:8090/v1/tools/search \
  -d '{ "datasets": ["my_tbl"], "text": "Welcome to another Spice release" }'

Secondly, the vector_search task in runtime.task_history has been renamed to search.
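Any queries filtering task history should be updated accordingly. A sketch, assuming the task name is recorded in a column named task:

-- Old: v1.7.1
SELECT * FROM runtime.task_history WHERE task = 'vector_search';

-- New: v1.8.0 (assumes the task name column is named "task")
SELECT * FROM runtime.task_history WHERE task = 'search';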
Cookbook Updates
- Added new AI SQL function recipe for invoking LLMs within SQL queries.
- Updated Iceberg Catalog Connector recipe for Iceberg Writes.
- Updated Spice.js JavaScript (Node.js) SDK for v3.0.3 with examples and v2 to v3 migration guide.
The Spice Cookbook now includes 80 recipes to help you get started with Spice quickly and easily.
Upgrading
To upgrade to v1.8.0-enterprise, use one of the following methods:
CLI:
spice upgrade
Homebrew:
brew upgrade spiceai/spiceai/spice
Docker:
Pull the spiceai/spiceai:1.8.0 image:
docker pull spiceai/spiceai:1.8.0
For available tags, see DockerHub.
Helm:
helm repo update
helm upgrade spiceai spiceai/spiceai
AWS Marketplace:
Spice is now available in the AWS Marketplace!
What's Changed
Dependencies
- iceberg-rust: Upgraded to v0.7.0-rc.1
- mimalloc: Upgraded from 0.1.47 to 0.1.48
- azure_core: Upgraded from 0.27.0 to 0.28.0
- Jimver/cuda-toolkit: Upgraded from 0.2.27 to 0.2.28
Changelog
- Add #[cfg(feature = "postgres")] to acceleration refresh tests by @Jeadie in https://github.com/spiceai/spiceai/pull/7241
- fix: Update benchmark snapshots by @github-actions[bot] in https://github.com/spiceai/spiceai/pull/7267
- fix: Update benchmark snapshots by @github-actions[bot] in https://github.com/spiceai/spiceai/pull/7268
- fix: Update benchmark snapshots by @github-actions[bot] in https://github.com/spiceai/spiceai/pull/7269
- Update the tpch benchmark snapshots for: federated/databricks[sql_warehouse].yaml by @github-actions[bot] in https://github.com/spiceai/spiceai/pull/7270
- EmbeddingInput cache keys to include model name by @mach-kernel in https://github.com/spiceai/spiceai/pull/7275
- Ensure FTS metadata columns can be used in projection by @Jeadie in https://github.com/spiceai/spiceai/pull/7282
- Use 8-core runners for Windows CUDA builds by @sgrebnov in https://github.com/spiceai/spiceai/pull/7284
- Make search test more robust by @krinart in https://github.com/spiceai/spiceai/pull/7283
- Post-release housekeeping by @sgrebnov in https://github.com/spiceai/spiceai/pull/7272
- fix: Use median cached response duration for test search cache by @peasee in https://github.com/spiceai/spiceai/pull/7286
- Bump dirs from 5.0.1 to 6.0.0 by @dependabot[bot] in https://github.com/spiceai/spiceai/pull/7244
- Bump indexmap from 2.11.0 to 2.11.4 by @dependabot[bot] in https://github.com/spiceai/spiceai/pull/7248
- Fix JOIN level filters not having columns in schema by @Jeadie in https://github.com/spiceai/spiceai/pull/7287
- use SessionContext::new_empty in RRF by @kczimm in https://github.com/spiceai/spiceai/pull/7291
- Use rust:1.89-slim-bookworm for build, more places to bump rust version by @sgrebnov in https://github.com/spiceai/spiceai/pull/7293
- Update openapi.json by @github-actions[bot] in https://github.com/spiceai/spiceai/pull/7290
- Enable chunking in SearchIndex by @Jeadie in https://github.com/spiceai/spiceai/pull/7143
- Add index name and remove duplicate records string to S3 Vectors log by @lukekim in https://github.com/spiceai/spiceai/pull/7260
- Use file-based fts index by @Jeadie in https://github.com/spiceai/spiceai/pull/7024
- Remove 'PostApplyCandidateGeneration' by @Jeadie in https://github.com/spiceai/spiceai/pull/7288
- RRF: Rank and recency boosting by @mach-kernel in https://github.com/spiceai/spiceai/pull/7294
- Update ROADMAP.md by removing v1.7 milestone by @sgrebnov in https://github.com/spiceai/spiceai/pull/7297
- RRF: Preserve base ranking when results differ -> FULL OUTER JOIN does not produce time column by @mach-kernel in https://github.com/spiceai/spiceai/pull/7300
- chore: remove unused Dataset methods by @kczimm in https://github.com/spiceai/spiceai/pull/7295
- fix removing embedding column by @Jeadie in https://github.com/spiceai/spiceai/pull/7302
- fix: Add feature flag for using object store in spicepod by @peasee in https://github.com/spiceai/spiceai/pull/7303
- Upgrade to iceberg-rust v0.7.0-rc1 by @sgrebnov in https://github.com/spiceai/spiceai/pull/7296
- Enable DML Update SQL operations for datasets configured as access: read_write by @sgrebnov in https://github.com/spiceai/spiceai/pull/7304
- Create and parse partitioned S3 vector index names by @kczimm in https://github.com/spiceai/spiceai/pull/7198
- RRF: Fix decay for disjoint result sets by @mach-kernel in https://github.com/spiceai/spiceai/pull/7305
- RRF: Project top scores, do not yield duplicate results by @mach-kernel in https://github.com/spiceai/spiceai/pull/7306
- RRF: Case sensitive column/ident handling by @mach-kernel in https://github.com/spiceai/spiceai/pull/7309
- For vector_search, use a default limit of 1000 if no limit specified by @lukekim in https://github.com/spiceai/spiceai/pull/7311
- Don't cache modification queries (DDL, DML, COPY) by @sgrebnov in https://github.com/spiceai/spiceai/pull/7316
- Fix Anthropic model regex and add validation tests by @ewgenius in https://github.com/spiceai/spiceai/pull/7319
- Enhancement: Implement before/after/lag metrics for acceleration refresh by @krinart in https://github.com/spiceai/spiceai/pull/7310
- Refactor chat model health check to lower tokens usage for reasoning models by @ewgenius in https://github.com/spiceai/spiceai/pull/7317
- Add support for writing into Iceberg tables by @sgrebnov in https://github.com/spiceai/spiceai/pull/7315
- Fix lint warnings by @lukekim in https://github.com/spiceai/spiceai/pull/7327
- Use logical plan in SearchQueryProvider. by @Jeadie in https://github.com/spiceai/spiceai/pull/7314
- FTS max search results 100 -> 1000 by @Jeadie in https://github.com/spiceai/spiceai/pull/7331
- Improve Databricks SQL Warehouse Error Handling by @sgrebnov in https://github.com/spiceai/spiceai/pull/7332
- Use spicepod embedding model name for 'model_name()' by @Jeadie in https://github.com/spiceai/spiceai/pull/7333
- Handle async queries for Databricks SQL Warehouse API by @phillipleblanc in https://github.com/spiceai/spiceai/pull/7335
- Enable DML (INSERT INTO) operations for catalogs configured as access:read_write by @sgrebnov in https://github.com/spiceai/spiceai/pull/7330
- Bump regex from 1.11.2 to 1.11.3 by @dependabot[bot] in https://github.com/spiceai/spiceai/pull/7336
- Update qa_analytics.csv with 1.7.0 release data by @sgrebnov in https://github.com/spiceai/spiceai/pull/7337
- RRF: Fix ident resolution for struct fields, autohashed join key for varying types by @mach-kernel in https://github.com/spiceai/spiceai/pull/7339
- v1.7.1 release notes by @kczimm in https://github.com/spiceai/spiceai/pull/7348
- Bump Jimver/cuda-toolkit from 0.2.27 to 0.2.28 by @dependabot[bot] in https://github.com/spiceai/spiceai/pull/7343
- Add support for writing into Glue (Iceberg) tables and catalogs by @sgrebnov in https://github.com/spiceai/spiceai/pull/7355
- Bump mimalloc from 0.1.47 to 0.1.48 by @dependabot[bot] in https://github.com/spiceai/spiceai/pull/7342
- Add ai async UDF by @lukekim in https://github.com/spiceai/spiceai/pull/7328
- Use self-hosted and spiceai-macos runners for workflows where possible by @lukekim in https://github.com/spiceai/spiceai/pull/7371
- Several updates for improved search testing by @Jeadie in https://github.com/spiceai/spiceai/pull/7358
- Update supported versions in SECURITY.md by @Jeadie in https://github.com/spiceai/spiceai/pull/7377
- 1.7.1 release analytics by @mach-kernel in https://github.com/spiceai/spiceai/pull/7380
- Add acceleration_file_path helper and refactor spice_sys to use Snafu errors by @phillipleblanc in https://github.com/spiceai/spiceai/pull/7376
- fix: Update benchmark snapshots by @github-actions[bot] in https://github.com/spiceai/spiceai/pull/7353
- Robust search test by @Jeadie in https://github.com/spiceai/spiceai/pull/7381
- [bug] Fix ai UDF bug of mismatched column length by @lukekim in https://github.com/spiceai/spiceai/pull/7383
- Add OpenOption to spice_sys acceleration tables by @phillipleblanc in https://github.com/spiceai/spiceai/pull/7379
- Add new snapshots Spicepod configuration by @phillipleblanc in https://github.com/spiceai/spiceai/pull/7384
- Update naming of tool_use::document_similarity and vector_search spans by @Jeadie in https://github.com/spiceai/spiceai/pull/7273
- fix: Update benchmark snapshots by @github-actions[bot] in https://github.com/spiceai/spiceai/pull/7354
- Make ai UDF a models only feature by @lukekim in https://github.com/spiceai/spiceai/pull/7387
- Add new runtime_acceleration crate; create SnapshotManager; implement SnapshotManager::download_latest_snapshot by @phillipleblanc in https://github.com/spiceai/spiceai/pull/7386
- Refactor 'VectorScanTableProvider' to use just 'VectorIndex::list_table_provider' by @Jeadie in https://github.com/spiceai/spiceai/pull/7318
- Fix embed logs by @Jeadie in https://github.com/spiceai/spiceai/pull/7382
- Enable spicepod dependencies in testoperator by @Jeadie in https://github.com/spiceai/spiceai/pull/7334
- ai UDF security and performance optimizations by @lukekim in https://github.com/spiceai/spiceai/pull/7392
- Wire up the snapshot download on dataset startup by @phillipleblanc in https://github.com/spiceai/spiceai/pull/7389
- Implement initial snapshot creation logic in SnapshotManager by @phillipleblanc in https://github.com/spiceai/spiceai/pull/7391
- Make tool_use::table_schema output model-friendly by @krinart in https://github.com/spiceai/spiceai/pull/7393
- Fix minor lint warnings by @lukekim in https://github.com/spiceai/spiceai/pull/7395
- Enable metadata columns in document-based object store datasets by @Jeadie in https://github.com/spiceai/spiceai/pull/7397
- Core dependencies of financebench by @Jeadie in https://github.com/spiceai/spiceai/pull/7400
- Add S3vector variant to financebench. by @Jeadie in https://github.com/spiceai/spiceai/pull/7399
- Set PostgreSQL unsupported_spice_action=string by default by @lukekim in https://github.com/spiceai/spiceai/pull/7398
- Use non-blocking connection check for verify_ns_lookup_and_tcp_connect by @phillipleblanc in https://github.com/spiceai/spiceai/pull/7401
- Bump moka from 0.12.10 to 0.12.11 by @dependabot[bot] in https://github.com/spiceai/spiceai/pull/7340
- Bump tokio-postgres from 0.7.13 to 0.7.14 by @dependabot[bot] in https://github.com/spiceai/spiceai/pull/7344
- Bump azure_core from 0.27.0 to 0.28.0 by @dependabot[bot] in https://github.com/spiceai/spiceai/pull/7338
- Forbid INSERT OVERWRITE DML operations by @sgrebnov in https://github.com/spiceai/spiceai/pull/7402
- Make database connection pool sizes consistent by @lukekim in https://github.com/spiceai/spiceai/pull/7403
- Disable vector index only scans by @Jeadie in https://github.com/spiceai/spiceai/pull/7405
- Make CLI --endpoint and --cloud args & table output consistent by @lukekim in https://github.com/spiceai/spiceai/pull/7396
- Write new snapshots at the end of an accelerated refresh. by @phillipleblanc in https://github.com/spiceai/spiceai/pull/7410
- Read and write partitioned S3 indexes by @kczimm in https://github.com/spiceai/spiceai/pull/7313
- Fix partial data writes in Iceberg data connector by @sgrebnov in https://github.com/spiceai/spiceai/pull/7411
- Remove nix by @phillipleblanc in https://github.com/spiceai/spiceai/pull/7414
- Use DataFusion JoinSetTracer for async context propagation by @lukekim in https://github.com/spiceai/spiceai/pull/7416
- Implement cache invalidation for DML (INSERT INTO) operations by @sgrebnov in https://github.com/spiceai/spiceai/pull/7394
- Make cleanup disk GH action; use in integration tests by @Jeadie in https://github.com/spiceai/spiceai/pull/7418
- Move S3Vector to 'search' crate by @Jeadie in https://github.com/spiceai/spiceai/pull/7373
- Use LogicalPlan builder API for LogicalPlans by @Jeadie in https://github.com/spiceai/spiceai/pull/7408
- Use hive-style partitioned paths for DB snapshots by @phillipleblanc in https://github.com/spiceai/spiceai/pull/7422
- Limit results from SearchIndex::query_table_provider by @Jeadie in https://github.com/spiceai/spiceai/pull/7421
- Delay initial readiness if snapshots are enabled with an append-mode refresh by @phillipleblanc in https://github.com/spiceai/spiceai/pull/7425
- Disable snapshots by default by @phillipleblanc in https://github.com/spiceai/spiceai/pull/7426
- Rewrite ChunkedNonIndexVectorGeneration to use LogicalPlanBuilder (instead of string formatting). by @Jeadie in https://github.com/spiceai/spiceai/pull/7413
- Fix for search field as metadata for chunked search indexes by @Jeadie in https://github.com/spiceai/spiceai/pull/7429
- Add feature is currently in preview warning for read_write access mode by @sgrebnov in https://github.com/spiceai/spiceai/pull/7440
- Add feature is currently in preview warning for snapshots by @sgrebnov in https://github.com/spiceai/spiceai/pull/7442
- Fix tracing so that ai_completions are parented under sql_query by @lukekim in https://github.com/spiceai/spiceai/pull/7415
- Disable acceleration refresh metrics by @krinart in https://github.com/spiceai/spiceai/pull/7450
- Enable snapshot acceleration by default by @phillipleblanc in https://github.com/spiceai/spiceai/pull/7451
- fix: partition name validation by @kczimm in https://github.com/spiceai/spiceai/pull/7452
Additional details
Usage instructions
Prerequisites
Ensure the following tools and resources are ready before starting:
- Docker: Install from https://docs.docker.com/get-docker/.
- AWS CLI: Install from https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html.
- AWS ECR Access: Authenticate to the AWS Marketplace registry: aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 709825985650.dkr.ecr.us-east-1.amazonaws.com
- Spicepod Configuration: Prepare a spicepod.yaml file in your working directory. A spicepod is a YAML manifest file that configures which components (e.g., datasets) are loaded; a minimal example follows this list. Refer to https://spiceai.org/docs/getting-started/spicepods for details.
- AWS ECS Prerequisites (for ECS deployment): An ECS cluster (Fargate or EC2) configured in your AWS account. An IAM role for ECS task execution (e.g., ecsTaskExecutionRole) with permissions for ECR, CloudWatch, and other required services. A VPC with subnets and a security group allowing inbound traffic on ports 8090 (HTTP) and 50051 (Flight).
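A minimal sketch of a spicepod.yaml (the manifest header values, bucket path, and dataset name are illustrative placeholders; consult the Spicepod documentation linked above for the full schema):

# Illustrative spicepod.yaml; adjust the header fields and dataset source to your environment.
version: v1
kind: Spicepod
name: my-app
datasets:
  - from: s3://my_bucket/my_table/
    name: my_table
    params:
      file_format: parquet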
Running the Container
- Ensure the spicepod.yaml is in the current directory (e.g., ./spicepod.yaml).
- Launch the container, mounting the current directory to /app and exposing HTTP and Flight endpoints externally:
docker run --name spiceai-enterprise \
  -v $(pwd):/app \
  -p 50051:50051 \
  -p 8090:8090 \
  709825985650.dkr.ecr.us-east-1.amazonaws.com/spice-ai/spiceai-enterprise-byol:1.8.0-enterprise-models \
  --http 0.0.0.0:8090 \
  --flight 0.0.0.0:50051
- The -v $(pwd):/app mounts the current directory to /app, where spicepod.yaml is expected.
- The --http and --flight flags set endpoints to listen on 0.0.0.0, allowing external access (default is 127.0.0.1).
- Ports 8090 (HTTP) and 50051 (Flight) are mapped for external access.
Verify and Monitor the Container
- Confirm the container is running:
docker ps
Look for spiceai-enterprise with a STATUS of Up.
- Inspect logs for troubleshooting:
docker logs spiceai-enterprise
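Optionally, confirm the runtime answers queries over the mapped HTTP port. This sketch assumes the runtime's HTTP SQL endpoint at /v1/sql; adjust the query to reference datasets defined in your spicepod.yaml:

# Quick smoke test against the HTTP endpoint exposed on port 8090 (assumes the /v1/sql endpoint).
curl -XPOST http://localhost:8090/v1/sql \
  -d 'SELECT 1'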
Deploying to AWS ECS
Create an ECS Task Definition and use this value for the image: 709825985650.dkr.ecr.us-east-1.amazonaws.com/spice-ai/spiceai-enterprise-byol:1.8.0-enterprise-models. Configure the port mappings for the HTTP and Flight ports (i.e. 8090 and 50051).
Override the command to expose the HTTP and Flight ports publicly and link to the Spicepod configuration hosted on S3:
"command": [
  "--http", "0.0.0.0:8090",
  "--flight", "0.0.0.0:50051",
  "s3://your_bucket/path/to/spicepod.yaml"
]
Register the task definition in your AWS account, e.g. aws ecs register-task-definition --cli-input-json file://spiceai-task-definition.json --region us-east-1
Then run the task as you normally would in ECS.
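As a rough sketch, a Fargate task definition combining the pieces above might look like the following (the family name, CPU/memory sizing, and execution role ARN are placeholders; the image, ports, and command come from the steps above):

{
  "family": "spiceai-enterprise",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "1024",
  "memory": "4096",
  "executionRoleArn": "arn:aws:iam::<your-account-id>:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "spiceai-enterprise",
      "image": "709825985650.dkr.ecr.us-east-1.amazonaws.com/spice-ai/spiceai-enterprise-byol:1.8.0-enterprise-models",
      "portMappings": [
        { "containerPort": 8090, "protocol": "tcp" },
        { "containerPort": 50051, "protocol": "tcp" }
      ],
      "command": [
        "--http", "0.0.0.0:8090",
        "--flight", "0.0.0.0:50051",
        "s3://your_bucket/path/to/spicepod.yaml"
      ]
    }
  ]
}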
Resources
Vendor resources
Support
Vendor support
Spice.ai Enterprise includes 24/7 support through a dedicated Slack or Teams channel, plus priority email and ticketing, ensuring critical issues are addressed per the Enterprise SLA.
Detailed enterprise support information is available in the Support Policy & SLA document provided at onboarding.
For general support, please email support@spice.ai.
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.