Overview

LangSmith Observability and Evals is a unified observability & evals platform where teams can debug, test, and monitor AI app performance - whether building with LangChain or not.
Find failures fast with agent observability. Quickly debug and understand non-deterministic LLM app behavior with tracing. See what your agent is doing step by step, then fix issues to improve latency and response quality.
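The "see what your agent is doing step by step" idea can be sketched in plain Python. This is an illustrative stand-in for tracing, not the LangSmith SDK; the `traced` decorator, the `TRACE` span log, and the two agent steps below are hypothetical names:

```python
import functools
import time

TRACE = []  # collected spans, in call order


def traced(fn):
    """Record each call as a span: name, inputs, output, duration."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACE.append({
            "name": fn.__name__,
            "inputs": args,
            "output": result,
            "ms": (time.perf_counter() - start) * 1000,
        })
        return result
    return wrapper


@traced
def retrieve(query):
    return ["doc-1", "doc-2"]  # stand-in for a retrieval step


@traced
def generate(query, docs):
    # stand-in for an LLM call
    return f"answer to {query!r} from {len(docs)} docs"


docs = retrieve("pricing")
answer = generate("pricing", docs)
print([span["name"] for span in TRACE])  # → ['retrieve', 'generate']
```

With every step recorded as a span, a slow or wrong answer can be traced back to the specific call that produced it.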
Evaluate your agent's performance. Evaluate your app by saving production traces to datasets, then score performance with LLM-as-Judge evaluators. Gather human feedback from subject-matter experts to assess response relevance, correctness, harmfulness, and other criteria.
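An LLM-as-Judge evaluator is, at its core, a function that asks a second model to score an (input, output, reference) triple. The sketch below is a minimal illustration of that pattern, not LangSmith's evaluator API; `judge_model` is a stub that naively compares answer and reference in place of a real model call:

```python
def judge_model(prompt: str) -> str:
    # Stand-in for a real LLM call: parses the prompt's labeled lines
    # and scores 5 when the answer matches the reference, else 1.
    fields = dict(line.split(": ", 1) for line in prompt.splitlines() if ": " in line)
    return "5" if fields["Answer"] == fields["Reference"] else "1"


def correctness_evaluator(question: str, answer: str, reference: str) -> dict:
    """Score an answer against a reference, LLM-as-Judge style."""
    prompt = (
        f"Question: {question}\n"
        f"Reference: {reference}\n"
        f"Answer: {answer}\n"
        "Rate correctness from 1 (wrong) to 5 (correct). Reply with the number."
    )
    score = int(judge_model(prompt))
    return {"key": "correctness", "score": score}


result = correctness_evaluator("What is 2 + 2?", answer="4", reference="4")
print(result)  # → {'key': 'correctness', 'score': 5}
```

Swapping the stub for a real model call turns this into an automated grader that can be run over a whole dataset of saved traces.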
Experiment with models and prompts in the Playground, and compare outputs across different prompt versions. Any teammate can use the Prompt Canvas UI to directly recommend and improve prompts.
Track business-critical metrics like costs, latency, and response quality with live dashboards, then get alerted when problems arise and drill into root cause.
LangSmith Deployments is a purpose-built infrastructure and management layer for deploying and scaling long-running, stateful agents, offering:
- 1-click deployment to go live in minutes
- 30 API endpoints for designing custom user experiences that fit any interaction pattern
- Horizontal scaling to handle bursty, long-running traffic
- A persistence layer to support memory, conversational history, and async collaboration with human-in-the-loop or multi-agent workflows
- Native LangSmith Studio, the agent IDE, for easy debugging, visibility, and iteration
Highlights
- LangSmith Observability and Evals gives teams tracing to quickly debug, test, and monitor non-deterministic LLM app behavior - whether building with LangChain or not. See what your agent is doing step by step, then fix issues to improve latency and response quality.
- LangSmith Deployments is a purpose-built infrastructure and management layer for deploying and scaling long-running, stateful agents, offering (1) 1-click deployment to go live in minutes, (2) horizontal scaling to handle bursty, long-running traffic, and (3) a persistence layer to support memory, conversational history, and async collaboration with human-in-the-loop or multi-agent workflows.
- Please note: there is a minimum $100k annual usage commitment to access this package. To discuss enterprise pricing, or to activate your commitment and obtain your license key after signup, contact us at https://www.langchain.com/contact-sales. Alternatively, our self-serve cloud-based products are available at https://www.langchain.com
Details
Unlock automation with AI agent solutions

Pricing
| Dimension | Description | Cost/unit |
|---|---|---|
| Unit for LangSmith Observability & Evaluation | Per trace | $0.00625 |
| Unit for LangSmith Deployment | Per agent run | $0.00625 |
| Metered usage amount | Metered usage amount | $0.01 |
| Minimum annual usage commitment, billed in advance | Minimum annual usage commitment, billed in advance | $100,000.00 |
Vendor refund policy
Fees paid are non-refundable. See the vendor's terms of service: https://www.langchain.com/terms-of-service
Custom pricing options
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
LangSmith Helm Deployment
- Amazon EKS
Helm chart
Helm charts are Kubernetes YAML manifests combined into a single package that can be installed on Kubernetes clusters. The containerized application is deployed on a cluster by running a single Helm install command to install the seller-provided Helm chart.
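Assuming a typical chart workflow, that single install command looks roughly like the sketch below; the repository URL, chart and release names, and namespace are illustrative placeholders, so consult the listing's usage instructions for the actual values:

```shell
# Add the vendor's Helm chart repository (placeholder URL).
helm repo add langsmith-repo https://example.com/helm-charts
helm repo update

# Install the chart into its own namespace on the EKS cluster,
# overriding defaults (license key, ingress, etc.) via a values file.
helm install langsmith langsmith-repo/langsmith \
  --namespace langsmith --create-namespace \
  --values values.yaml

# Verify the release and its pods.
helm status langsmith --namespace langsmith
kubectl get pods --namespace langsmith
```

Upgrades follow the same pattern with `helm upgrade`, which applies a new chart version or changed values against the running release.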
Version release notes
This release brings alerting, UI-driven experiment workflows, end-to-end OpenTelemetry support, and a host of new capabilities, alongside several bug fixes. Under the hood, it also rolls out a beta Self-Hosted LangGraph Cloud Control Plane and full-text search, plus query performance and ingestion optimizations.
Additional details
Usage instructions
To use your instance, follow these instructions:
Resources
Vendor resources
Support
Vendor support
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.
Customer reviews
Great for agentic AI programming
Powerful Framework for Building AI Apps Quickly
LangChain Review - MLOps
- Integration with vector databases
- Pipeline composition, allowing me to separate data ingestion, transformation, and inference stages
- Reproducibility: it helps me build LLM-powered workflows for CI/CD deployment
- Its fast-evolving APIs make it difficult to track backward compatibility
- Prompt and memory management, LLM orchestration, data connectivity
Powerful AI orchestration framework with a learning curve
- Extensive integrations with various AI models and vector databases
- Active community and rapid development pace
- Flexibility in building complex AI workflows
- Good documentation with practical examples
- Memory management capabilities for conversational AI
- Built-in prompt templates and output parsers
- Frequent breaking changes between versions
- Can be overly complex for simple use cases
- Debugging can be challenging with nested chains
- Performance overhead compared to direct API calls
- Documentation sometimes lags behind new features
- Abstractions can sometimes hide important details