
Overview
LFM-7B is optimized for response quality, accuracy, and usefulness. To assess its chat capabilities, we use a diverse jury of frontier LLMs to compare responses generated by LFM-7B against other models in the 7B-8B parameter category. This approach reduces individual judge biases and produces more reliable comparisons.
Highlights
- **Innovative Model Architecture**: Liquid AI's Foundation Models utilize a unique architecture that combines liquid neural networks and non-transformer designs, allowing these models to be efficient in memory usage and capable of handling sequential data, such as text, video, and real-time signals. This setup optimizes performance while minimizing computational demands.
- **Enhanced Adaptability and Real-Time Learning**: Unlike conventional models, LFMs can adapt their internal processes based on new inputs in real time, making them highly responsive.
- **Efficiency in Long-Context Processing**: Liquid AI's models can efficiently process extended input sequences without the steep memory and processing requirements typical of transformer-based models, supporting applications like document summarization and complex chatbot interactions with minimal hardware demands. With LFMs, it’s possible to fit up to 1 million tokens’ worth of data into 16 gigabytes of memory.
Details

Pricing
Free trial
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.g4dn.12xlarge Inference (Batch), Recommended | Model inference on the ml.g4dn.12xlarge instance type, batch mode | $10.00 |
| ml.g6e.2xlarge Inference (Real-Time), Recommended | Model inference on the ml.g6e.2xlarge instance type, real-time mode | $3.00 |
| ml.g6e.xlarge Inference (Real-Time) | Model inference on the ml.g6e.xlarge instance type, real-time mode | $3.00 |
| ml.g6e.4xlarge Inference (Real-Time) | Model inference on the ml.g6e.4xlarge instance type, real-time mode | $3.00 |
| ml.g6e.16xlarge Inference (Real-Time) | Model inference on the ml.g6e.16xlarge instance type, real-time mode | $3.00 |
| ml.g6e.8xlarge Inference (Real-Time) | Model inference on the ml.g6e.8xlarge instance type, real-time mode | $3.00 |
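Note that the rates above are the software cost only; AWS infrastructure charges for the underlying instances are billed separately. As a quick illustration of how the table translates into spend, here is a rough estimate using hypothetical usage figures (one always-on real-time endpoint, and a single 4-hour batch job):

```python
# Rough cost estimate from the pricing table above (software cost only;
# AWS infrastructure charges for the instances are billed separately).
REALTIME_RATE = 3.00   # $/host/hour for any ml.g6e real-time instance
BATCH_RATE = 10.00     # $/host/hour for ml.g4dn.12xlarge batch mode

HOURS_PER_MONTH = 24 * 30  # approximate 30-day month

monthly_realtime = REALTIME_RATE * HOURS_PER_MONTH  # one endpoint, 24/7
batch_job = BATCH_RATE * 4                          # hypothetical 4-hour batch job

print(f"Real-time endpoint, per month: ${monthly_realtime:.2f}")
print(f"Single 4-hour batch job:       ${batch_job:.2f}")
```

Scaling the instance count or hours up or down changes the estimate linearly, since all real-time instance sizes share the same $3.00 software rate.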
Vendor refund policy
We don’t offer refunds, but we’re happy to assist! Contact us anytime at support+aws@liquid.ai.
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
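As a rough sketch of the deployment step, a subscribed model package can be turned into a real-time endpoint with the SageMaker Python SDK. The package ARN, role ARN, and endpoint name below are placeholders to replace with your own values; the default instance type is taken from the pricing table on this listing:

```python
# Sketch of deploying a subscribed model package to a real-time endpoint.
# Default instance type taken from the pricing table on this listing.
DEFAULT_INSTANCE_TYPE = "ml.g6e.2xlarge"

def deploy_model_package(model_package_arn, role_arn,
                         instance_type=DEFAULT_INSTANCE_TYPE):
    # Requires AWS credentials and an active subscription to this listing;
    # the sagemaker SDK import is kept local to the helper since it is only
    # needed at deployment time.
    from sagemaker import ModelPackage, Session

    model = ModelPackage(
        role=role_arn,
        model_package_arn=model_package_arn,
        sagemaker_session=Session(),
    )
    # Creates a SageMaker endpoint and returns a Predictor bound to it.
    return model.deploy(
        initial_instance_count=1,
        instance_type=instance_type,
        endpoint_name="lfm-7b-endpoint",  # hypothetical name
    )
```

For batch processing, the same model package can instead back a SageMaker batch transform job on the ml.g4dn.12xlarge instance type listed above.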
Version release notes
Initial version
Additional details
Inputs
- Summary
The model uses OpenAI's chat completion format, as detailed in the OpenAI API documentation, with the following key specifics:
- Supports text-only interactions.
- Requires the model parameter to be explicitly set to /opt/ml/model for proper functionality.
- Input MIME type: application/json
Input data descriptions
The following table describes supported input data fields for real-time inference and batch transform.
| Field name | Description | Constraints | Required |
|---|---|---|---|
| Request | OpenAI-style chat completion request body (see https://platform.openai.com/docs/api-reference/chat) | Type: FreeText | Yes |
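Putting the input specifics together, a minimal sketch of building and sending a request might look like this; the endpoint name and message content are hypothetical, and the boto3 call requires AWS credentials and a live endpoint:

```python
import json

def build_request(messages):
    """Build an OpenAI-style chat request body; 'model' must be /opt/ml/model."""
    return {"model": "/opt/ml/model", "messages": messages}

# Example JSON payload matching the input description above.
payload = json.dumps(build_request(
    [{"role": "user", "content": "Summarize liquid neural networks in one line."}]
))

def invoke(endpoint_name, messages):
    # Sketch only: requires AWS credentials and a deployed endpoint.
    import boto3
    client = boto3.client("sagemaker-runtime")
    resp = client.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",  # matches the input MIME type above
        Body=json.dumps(build_request(messages)),
    )
    return json.loads(resp["Body"].read())
```

The same JSON body works for batch transform; each record in the input file is one such request object.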
Resources
Vendor resources
Support
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel staffed 24x7x365 with experienced technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.