Overview
Spice.ai Enterprise is a portable (<150MB) compute engine built in Rust for data-intensive and intelligent applications. It accelerates SQL queries across databases, data warehouses, and data lakes using Apache Arrow, DataFusion, DuckDB, or SQLite. Integrated and co-deployed with data-intensive applications, Spice materializes and accelerates data from object storage, ensuring sub-second query performance and resilient AI applications. Deployable as a container on AWS ECS, EKS, or hybrid cloud & edge, it includes enterprise licensing, support, and SLAs.
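For illustration, a dataset can be materialized from object storage and accelerated locally with a few lines of Spicepod configuration (an illustrative sketch; the bucket path, dataset name, and engine choice are hypothetical placeholders):
datasets:
  - from: s3://my-bucket/events/ # hypothetical source path
    name: events
    acceleration:
      enabled: true
      engine: duckdb # or arrow, sqlite, postgres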
Note: Spice.ai Enterprise requires an existing commercial license. For details, please contact sales@spice.ai.
Highlights
- Unified data query and AI engine accelerating SQL queries across databases, data warehouses, and data lakes. Delivers sub-second query performance while grounding mission-critical AI applications with real-time context to minimize errors and hallucinations.
- Advanced AI and retrieval tools, featuring vector and hybrid search, text-to-SQL, and LLM memory, enabling data-grounded AI applications. More than 25 data connectors support federated queries and real-time applications.
- Deployable as a container on AWS ECS, EKS, or on-premises, with dedicated support and SLAs for scalable, secure integration into any architecture.
Pricing
Vendor refund policy
Refunds for Spice.ai Enterprise container subscriptions are not available after activation, as usage begins immediately upon deployment. Ensure compatibility with AWS ECS, EKS, or on-premises setups before purchase. For billing inquiries, contact AWS Marketplace support or Spice AI directly at support@spice.ai.
Delivery details
Container Deployment
- Amazon ECS
- Amazon EKS
- Amazon ECS Anywhere
Container image
Containers are lightweight, portable execution environments that wrap server application software in a filesystem that includes everything it needs to run. Container applications run on supported container runtimes and orchestration services, such as Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS). Both eliminate the need for you to install and operate your own container orchestration software by managing and scheduling containers on a scalable cluster of virtual machines.
Version release notes
Spice v1.5.2-enterprise (Aug 12, 2025)
Spice v1.5.2-enterprise introduces a new Amazon Bedrock models provider for Converse API (Nova)-compatible models, AWS Redshift support via the PostgreSQL data connector, and Hadoop catalog support for Iceberg tables, along with several bug fixes and improvements.
What's New in v1.5.2-enterprise
Amazon Bedrock Models Provider: Adds a new Amazon Bedrock LLM provider. Models compatible with the Converse API (Nova) are supported.
Amazon Bedrock provides access to a range of foundation models for generative AI. Spice supports using Bedrock-hosted models by specifying the bedrock prefix in the from field and configuring the required parameters.
Supported Model IDs:
- amazon.nova-lite-v1:0
- amazon.nova-micro-v1:0
- amazon.nova-premier-v1:0
- amazon.nova-pro-v1:0
Refer to the Amazon Bedrock documentation for details on available models and cross-region inference profiles.
Example Spicepod.yaml:
models:
  - from: bedrock:us.amazon.nova-lite-v1:0
    name: novash
    params:
      aws_region: us-east-1
      aws_access_key_id: ${ secrets:AWS_ACCESS_KEY_ID }
      aws_secret_access_key: ${ secrets:AWS_SECRET_ACCESS_KEY }
      bedrock_guardrail_identifier: arn:aws:bedrock:abcdefg012927:0123456789876:guardrail/hello
      bedrock_guardrail_version: DRAFT
      bedrock_trace: enabled
      bedrock_temperature: 42
For more information, see the Amazon Bedrock documentation.
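Once configured, the model can be invoked through the runtime's OpenAI-compatible chat completions endpoint. An illustrative sketch, assuming the default HTTP port 8090 and the novash model defined above:
curl -X POST http://localhost:8090/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "novash",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'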
AWS Redshift Support for Postgres Data Connector: Spice now supports connecting to Amazon Redshift using the PostgreSQL data connector. Redshift is a columnar OLAP database compatible with PostgreSQL, allowing you to use the same connector and configuration parameters.
To connect to Redshift, use the format postgres:schema.table in your Spicepod and set the connection parameters to match your Redshift cluster settings.
Example Spicepod.yaml:
# Example datasets for Redshift TPCH tables
datasets:
  - from: postgres:public.customer
    name: customer
    params:
      pg_host: ${secrets:PG_HOST}
      pg_port: 5439
      pg_sslmode: prefer
      pg_db: dev
      pg_user: ${secrets:PG_USER}
      pg_pass: ${secrets:PG_PASS}
  - from: postgres:public.lineitem
    name: lineitem
    params:
      pg_host: ${secrets:PG_HOST}
      pg_port: 5439
      pg_sslmode: prefer
      pg_db: dev
      pg_user: ${secrets:PG_USER}
      pg_pass: ${secrets:PG_PASS}
Redshift types are mapped to PostgreSQL types. See the PostgreSQL connector documentation for details on supported types and configuration.
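Once loaded, the Redshift-backed datasets can be queried like any other Spice dataset, for example via the runtime's HTTP SQL endpoint. An illustrative sketch using the customer dataset above (column names follow the TPCH schema):
curl -X POST http://localhost:8090/v1/sql \
  -d 'SELECT c_name, c_acctbal FROM customer ORDER BY c_acctbal DESC LIMIT 5'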
Hadoop Catalog Support for Iceberg: The Iceberg Data and Catalog connectors now support connecting to Hadoop catalogs on filesystem (file://) or S3 object storage (s3://, s3a://). This enables connecting to Iceberg catalogs without a separate catalog provider service.
Example Spicepod.yaml:
catalogs:
  - from: iceberg:file:///tmp/hadoop_warehouse/
    name: local_hadoop
  - from: iceberg:s3://my-bucket/hadoop_warehouse/
    name: s3_hadoop

# Example datasets
datasets:
  - from: iceberg:file:///data/hadoop_warehouse/test/my_table_1
    name: local_hadoop
  - from: iceberg:s3://my-bucket/hadoop_warehouse/test/my_table_2
    name: s3_hadoop
For more details, see the Iceberg Data Connector documentation and the Iceberg Catalog Connector documentation.
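Tables attached through a catalog are addressed as catalog.schema.table in SQL. An illustrative query against the s3_hadoop catalog from the example above, assuming the runtime's HTTP SQL endpoint on the default port 8090:
curl -X POST http://localhost:8090/v1/sql \
  -d 'SELECT * FROM s3_hadoop.test.my_table_2 LIMIT 10'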
Parquet Reader: Optional Parquet Page Index: Fixed an issue where the Parquet reader (via arrow-rs and DataFusion) errored on files missing page indexes, even though the Parquet spec makes these indexes optional. The Spice team contributed optional page index support to arrow-rs (PR #6) and configurable handling in DataFusion (PR #93). A new runtime parameter, parquet_page_index, makes Parquet page indexes configurable in Spice:
runtime:
  params:
    parquet_page_index: required # Options: required, skip, auto
- required: (Default) Errors if page indexes are absent.
- skip: Ignores page indexes, potentially reducing query performance.
- auto: Uses page indexes if available; skips otherwise.
This improves compatibility and query flexibility for Parquet datasets.
Additional details
Usage instructions
Prerequisites
Ensure the following tools and resources are ready before starting:
- Docker: Install from https://docs.docker.com/get-docker/.
- AWS CLI: Install from https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html.
- AWS ECR Access: Authenticate to the AWS Marketplace registry: aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 709825985650.dkr.ecr.us-east-1.amazonaws.com
- Spicepod Configuration: Prepare a spicepod.yaml file in your working directory. A spicepod is a YAML manifest that configures which components (e.g., datasets) are loaded; a minimal sketch follows this list. Refer to https://spiceai.org/docs/getting-started/spicepods for details.
- AWS ECS Prerequisites (for ECS deployment): An ECS cluster (Fargate or EC2) configured in your AWS account; an IAM role for ECS task execution (e.g., ecsTaskExecutionRole) with permissions for ECR, CloudWatch, and other required services; and a VPC with subnets and a security group allowing inbound traffic on ports 8090 (HTTP) and 50051 (Flight).
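A minimal spicepod.yaml sketch (the dataset source and names are hypothetical placeholders, and the manifest schema version may vary by release; check the Spicepod documentation linked above):
version: v1
kind: Spicepod
name: my-app
datasets:
  - from: s3://my-bucket/data/ # hypothetical source
    name: my_dataset
    acceleration:
      enabled: true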
Running the Container
- Ensure the spicepod.yaml is in the current directory (e.g., ./spicepod.yaml).
- Launch the container, mounting the current directory to /app and exposing HTTP and Flight endpoints externally:
docker run --name spiceai-enterprise \
  -v $(pwd):/app \
  -p 50051:50051 \
  -p 8090:8090 \
  709825985650.dkr.ecr.us-east-1.amazonaws.com/spice-ai/spiceai-enterprise-byol:1.5.2-enterprise-models \
  --http 0.0.0.0:8090 \
  --flight 0.0.0.0:50051
- The -v $(pwd):/app mounts the current directory to /app, where spicepod.yaml is expected.
- The --http and --flight flags set endpoints to listen on 0.0.0.0, allowing external access (default is 127.0.0.1).
- Ports 8090 (HTTP) and 50051 (Flight) are mapped for external access.
Verify and Monitor the Container
- Confirm the container is running:
docker ps
Look for spiceai-enterprise with a STATUS of Up.
- Inspect logs for troubleshooting:
docker logs spiceai-enterprise
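Optionally, verify the HTTP endpoint is responding (an illustrative check, assuming the runtime's health endpoint and the port mapping above):
curl http://localhost:8090/health
Expect a simple ok response once the runtime is ready.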
Deploying to AWS ECS
Create an ECS Task Definition and use this value for the image: 709825985650.dkr.ecr.us-east-1.amazonaws.com/spice-ai/spiceai-enterprise-byol:1.5.2-enterprise-models. Configure the port mappings for the HTTP and Flight ports (i.e. 8090 and 50051).
Override the command to expose the HTTP and Flight ports publicly and link to the Spicepod configuration hosted on S3:
"command": [ "--http", "0.0.0.0:8090", "--flight", "0.0.0.0:50051", "s3://your_bucket/path/to/spicepod.yaml" ]
Register the task definition in your AWS account, e.g.:
aws ecs register-task-definition --cli-input-json file://spiceai-task-definition.json --region us-east-1
Then run the task as you normally would in ECS.
Support
Vendor support
Spice.ai Enterprise includes 24/7 support with a dedicated Slack/Teams channel and priority email and ticketing, ensuring critical issues are addressed per the Enterprise SLA.
Detailed enterprise support information is available in the Support Policy & SLA document provided at onboarding.
For general support, please email support@spice.ai.
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.