As of June 26, 2024, MemoryDB delivers the fastest vector search performance at the highest recall rates among popular vector databases on AWS. With vector search for MemoryDB, you can store millions of vector embeddings and achieve single-digit millisecond vector search and update latencies at greater than 99% recall with the highest levels of throughput.
Vectors are numerical representations of unstructured data, such as text, images, and videos, generated by machine learning (ML) models to capture the semantic meaning of the underlying data. MemoryDB enables ML and generative AI models to work with data stored in MemoryDB in real time, without moving your data. With MemoryDB, you can store, index, retrieve, and search vector embeddings within Valkey and Redis OSS data structures. You can also store vector embeddings from AI/ML models, such as those from Amazon Bedrock and Amazon SageMaker, in your MemoryDB database. Read our documentation to learn more about vector search for MemoryDB.
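As a rough illustration of storing embeddings in hash data structures, vector fields are typically written as packed little-endian float32 byte strings. The sketch below shows the serialization round trip plus a local cosine-similarity check; the commented-out portion sketches how the same bytes might be indexed and queried with `redis-py` (the endpoint, index name, field names, and command parameters are illustrative assumptions, not a verified MemoryDB configuration):

```python
import struct
import math

def to_float32_bytes(vec):
    # Pack a list of floats into a little-endian float32 byte string,
    # the binary format commonly used for vector fields stored in hashes.
    return struct.pack(f"<{len(vec)}f", *vec)

def from_float32_bytes(buf):
    # Inverse of to_float32_bytes: 4 bytes per float32 value.
    n = len(buf) // 4
    return list(struct.unpack(f"<{n}f", buf))

def cosine_similarity(a, b):
    # Cosine similarity, the distance metric assumed in the sketch below.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical usage against a MemoryDB endpoint with the redis-py client
# (not executed here; host, index, and parameters are assumptions):
#
#   import redis
#   r = redis.Redis(host="my-cluster.memorydb.example.amazonaws.com",
#                   port=6379, ssl=True)
#   r.execute_command(
#       "FT.CREATE", "idx", "ON", "HASH", "PREFIX", "1", "doc:",
#       "SCHEMA", "embedding", "VECTOR", "HNSW", "6",
#       "TYPE", "FLOAT32", "DIM", "4", "DISTANCE_METRIC", "COSINE")
#   r.hset("doc:1", mapping={"embedding": to_float32_bytes([0.1, 0.2, 0.3, 0.4])})
#   r.execute_command(
#       "FT.SEARCH", "idx", "*=>[KNN 1 @embedding $vec]",
#       "PARAMS", "2", "vec", to_float32_bytes([0.1, 0.2, 0.3, 0.4]),
#       "DIALECT", "2")

embedding = [0.1, 0.2, 0.3, 0.4]
packed = to_float32_bytes(embedding)
roundtrip = from_float32_bytes(packed)
print(len(packed))  # 16 bytes: 4 float32 values
print(all(abs(a - b) < 1e-6 for a, b in zip(embedding, roundtrip)))
```

In practice the embedding would come from a model such as one hosted on Amazon Bedrock or Amazon SageMaker, and its dimension would match the `DIM` declared at index creation.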
Vector search for MemoryDB is suited to use cases where peak performance is the most important selection criterion. You can use vector search to power real-time ML and generative AI applications in use cases such as retrieval augmented generation (RAG) for chatbots, anomaly (fraud) detection, real-time recommendation engines, and document retrieval.