With vector search, you can store, index, retrieve, and search vector embeddings within MemoryDB alongside your data. First, you generate vector embeddings directly through embedding models such as Amazon Titan Embeddings, or through managed services such as Amazon Bedrock. Then, after initializing a vector index using the MemoryDB data plane APIs, you load the embeddings into MemoryDB, which stores them as JSON or hash data types.
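As a minimal sketch of the loading step, the snippet below builds the arguments for a RediSearch-compatible `FT.CREATE` call and serializes an embedding into the FLOAT32 blob stored in a hash field. The index name `idx`, key prefix `doc:`, field name `embedding`, and the tiny 4-dimensional vector are illustrative assumptions, not values from MemoryDB itself; real embedding models produce much larger vectors (for example, 1536 dimensions).

```python
import struct

# Hypothetical dimension for illustration; match this to your embedding model.
DIM = 4

def pack_vector(vec):
    """Serialize a float vector into the little-endian FLOAT32 blob
    stored in a MemoryDB hash field."""
    return struct.pack(f"<{len(vec)}f", *vec)

# Arguments for creating an HNSW vector index over hash keys with the
# (assumed) prefix "doc:". A Redis-compatible client would send this as
# r.execute_command(*create_cmd).
create_cmd = [
    "FT.CREATE", "idx", "ON", "HASH", "PREFIX", "1", "doc:",
    "SCHEMA", "embedding", "VECTOR", "HNSW", "6",
    "TYPE", "FLOAT32", "DIM", str(DIM), "DISTANCE_METRIC", "COSINE",
]

# Store one embedding; a real client call would be
# r.hset("doc:1", mapping={"embedding": blob}).
blob = pack_vector([0.1, 0.2, 0.3, 0.4])
print(len(blob))  # 4 floats x 4 bytes each
```

Each FLOAT32 component occupies 4 bytes, so the blob length must equal `DIM * 4` or the index will reject the field.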
Once loaded, MemoryDB builds the index over your vector embeddings. As you add new data, update existing data, or delete data, MemoryDB streams these updates to the vector index within single-digit milliseconds. MemoryDB supports efficient search queries, prefiltering, and multiple distance metrics (cosine, dot product, and Euclidean). For more information on how to use vector search for MemoryDB, see the documentation.
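A query against such an index can be sketched with the RediSearch-compatible `FT.SEARCH` syntax. The snippet below shows a plain KNN query and a prefiltered variant; the index name `idx`, vector field `embedding`, and the tag field `genre` are hypothetical names carried over from the assumptions above, not part of MemoryDB's API.

```python
import struct

def pack_vector(vec):
    """Serialize the query vector the same way as the stored embeddings."""
    return struct.pack(f"<{len(vec)}f", *vec)

query_vec = pack_vector([0.1, 0.2, 0.3, 0.4])

# Plain KNN query: the 5 nearest neighbors under the index's distance metric.
knn_query = "*=>[KNN 5 @embedding $vec]"

# Prefiltered query: first restrict candidates to documents whose
# (hypothetical) "genre" tag matches, then run KNN on that subset.
prefiltered_query = "(@genre:{scifi})=>[KNN 5 @embedding $vec]"

# A Redis-compatible client would send this as r.execute_command(*search_cmd).
search_cmd = [
    "FT.SEARCH", "idx", prefiltered_query,
    "PARAMS", "2", "vec", query_vec,
    "DIALECT", "2",
]
```

Prefiltering narrows the candidate set before the nearest-neighbor step runs, which is how a single query combines structured filters with vector similarity.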