
    Langfuse Enterprise Edition (EE) - Self Hosting

    Sold by: Langfuse 
    Deployed on AWS
    Langfuse is an open-source LLM engineering platform that helps teams collaboratively debug, analyze, and iterate on their LLM applications.

    Overview

    Please contact marketplace-aws@langfuse.com to request a private offer.

    Langfuse is an open-source LLM engineering platform designed to streamline the development, monitoring, and testing of LLM-based applications. It addresses the unique challenges posed by LLMs, such as complex control flows, non-deterministic outputs, and mixed user intents, by offering robust tools for tracing, debugging, and evaluating these applications. With Langfuse, teams can collaboratively debug, analyze, and iterate on their LLM applications, making it easier to track all relevant logic, manage prompts, and monitor the performance and quality of their models over time.

    Core features of Langfuse include observability through detailed tracing of all LLM calls and relevant application logic, along with integrations for popular tools such as the OpenAI SDK, LangChain, and others (a minimal tracing sketch follows below). The platform provides a UI for inspecting and debugging logs, managing prompts, and conducting experiments to test application behavior before deployment. Additionally, Langfuse offers powerful analytics and evaluation tools to monitor LLM performance, track metrics like cost and latency, and gather user feedback, all of which contribute to a deeper understanding of application quality and user behavior.
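    As an illustration of the tracing workflow, the sketch below instruments a single OpenAI call with the Langfuse Python SDK. It assumes the v2-style decorator API and the SDK's OpenAI drop-in wrapper; the function, model name, and question are illustrative only, and the exact import paths should be verified against the current SDK documentation.

        # Minimal tracing sketch, assuming the v2-style Langfuse Python SDK decorator
        # API and its OpenAI drop-in wrapper; credentials (LANGFUSE_PUBLIC_KEY,
        # LANGFUSE_SECRET_KEY, LANGFUSE_HOST) are read from environment variables
        # and should point at your self-hosted instance.
        from langfuse.decorators import observe
        from langfuse.openai import openai  # drop-in wrapper that logs each OpenAI call

        @observe()  # records this function as a trace, including nested LLM calls
        def answer_question(question: str) -> str:
            completion = openai.chat.completions.create(
                model="gpt-4o-mini",  # illustrative model choice
                messages=[{"role": "user", "content": question}],
            )
            return completion.choices[0].message.content

        if __name__ == "__main__":
            print(answer_question("What does Langfuse trace?"))

    With a setup along these lines, each wrapped call would appear as an observation nested under the function's trace in the Langfuse UI, alongside its inputs, outputs, latency, and token usage.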

    Langfuse's open-source nature, model and framework agnosticism, and incremental adoptability make it ideal for teams building complex LLM applications. By capturing the full context of LLM executions and providing tools to classify and analyze user inputs, Langfuse helps developers maintain control over their applications, ensuring they can effectively manage and improve the performance and quality of their LLM systems.

    Highlights

    • Automated Evals -- Use Langfuse to automatically score the quality of your LLM application with an LLM-as-a-Judge approach or by collecting user and employee feedback (see the scoring sketch after this list).
    • Integration -- Langfuse provides robust integrations via Python and TypeScript SDKs as well as with frameworks such as LlamaIndex, LangChain, OpenAI, Dify, or LiteLLM.
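    As a rough illustration of the LLM-as-a-Judge highlight above, the sketch below asks a judge model to rate an answer and attaches the result to an existing trace as a score. It assumes the v2-style Python SDK client and its score method; the judge prompt, model, and score name are illustrative only, not part of the product description.

        # Minimal LLM-as-a-Judge scoring sketch, assuming the v2-style Langfuse
        # Python SDK client API; verify method names against the current SDK docs.
        from langfuse import Langfuse
        from openai import OpenAI

        langfuse = Langfuse()   # reads LANGFUSE_* environment variables
        judge = OpenAI()        # judge model client (illustrative)

        def judge_answer(trace_id: str, question: str, answer: str) -> None:
            """Ask a judge model to rate an answer, then attach the score to the trace."""
            verdict = judge.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{
                    "role": "user",
                    "content": (
                        "Rate from 0 to 1 how well the answer addresses the question. "
                        "Reply with a number only.\n"
                        f"Question: {question}\nAnswer: {answer}"
                    ),
                }],
            )
            value = float(verdict.choices[0].message.content.strip())
            langfuse.score(
                trace_id=trace_id,            # trace produced by the instrumented app
                name="llm_judge_quality",     # hypothetical score name
                value=value,
                comment="Scored by an LLM-as-a-Judge prompt",
            )

    Scores recorded this way would then show up next to cost and latency metrics for the trace, feeding the analytics and evaluation tools described above.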

    Details

    Delivery method

    Deployed on AWS


    Features and programs

    Financing for AWS Marketplace purchases

    AWS Marketplace now accepts line of credit payments through the PNC Vendor Finance program. This program is available to select AWS customers in the US, excluding NV, NC, ND, TN, & VT.

    Pricing

    Langfuse Enterprise Edition (EE) - Self Hosting

    Pricing is based on the duration and terms of your contract with the vendor. This entitles you to a specified quantity of use for the contract duration. If you choose not to renew or replace your contract before it ends, access to these entitlements will expire.
    Additional AWS infrastructure costs may apply. Use the AWS Pricing Calculator to estimate your infrastructure costs.

    1-month contract (1)

    Dimension: Langfuse Enterprise Edition (EE)
    Description: Langfuse Enterprise Edition (EE) - Self-Hosted
    Cost/month: $8,500.00

    Vendor refund policy

    Customers can cancel their contract at any time but remain liable for payment of the final billing period. By default, Langfuse enters into monthly contracts.


    Legal

    Vendor terms and conditions

    Upon subscribing to this product, you must acknowledge and agree to the terms and conditions outlined in the vendor's End User License Agreement (EULA).

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.

    Usage information


    Delivery details

    Software as a Service (SaaS)

    SaaS delivers cloud-based software applications directly to customers over the internet. You can access these applications through a subscription model. You will pay recurring monthly usage fees through your AWS bill, while AWS handles deployment and infrastructure management, ensuring scalability, reliability, and seamless integration with other AWS services.

    Resources

    Vendor resources

    Support

    Vendor support

    Customers have access to the Langfuse team via Slack or email.

    Customers can purchase additional support and SLAs.

    AWS infrastructure support

    AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.


    Accolades

    • Top 100 in Log Analysis
    • Top 100 in Testing

    Overview

    AI generated from product descriptions

    • Observability: Detailed tracing of all LLM calls and application logic with comprehensive logging capabilities
    • Integration Framework: Support for multiple SDKs and integrations including Python, TypeScript, LlamaIndex, LangChain, OpenAI, and other LLM frameworks
    • Performance Analytics: Advanced monitoring of metrics including execution performance, cost tracking, and latency measurement for LLM applications
    • Evaluation Mechanism: Automated quality scoring using an LLM-as-a-Judge approach and user/employee feedback collection
    • Debugging Interface: User interface for inspecting logs, managing prompts, and conducting experimental tests on LLM application behavior
    • AI Model Evaluation: Comprehensive testing and scoring of Large Language Model (LLM) performance across real-world scenarios
    • Adversarial Test Case Generation: Automated generation of test cases to identify potential vulnerabilities and unexpected behaviors in AI systems
    • Retrieval-Augmented Generation Testing: Verification of retrieval-based LLM systems to ensure consistent and reliable information delivery
    • Failure Monitoring: Real-time tracking and visualization of LLM system performance and potential failures in production environments
    • Multi-Environment Deployment: Support for cloud-hosted and on-premises self-hosted deployment architectures for flexible AI system evaluation
    • Observability Tracking: Detailed monitoring of metrics including token usage, latency, and comprehensive logging for AI application performance analysis
    • Prompt Management: Advanced collaborative platform for prompt templating, versioning, and conducting A/B testing for language model interactions (see the prompt sketch after this list)
    • Security Guardrails: Automated content filtering mechanisms to prevent toxic responses, detect prompt injections, and protect against potential data leakage
    • Model Evaluation Framework: Comprehensive testing suite for assessing language model performance with integration capabilities for continuous integration pipelines
    • AI Interaction Logging: Systematic recording of user interactions and feedback collection to enable iterative refinement of AI response strategies
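
    The prompt management capability listed above can be sketched with the Python SDK as follows. This assumes the v2-style create_prompt/get_prompt API with {{variable}} templating; the prompt name, label, and variables are hypothetical examples, so verify the exact signatures against the current SDK documentation.

        # Minimal prompt-management sketch, assuming the v2-style Langfuse Python SDK;
        # the prompt name, label, and variables below are hypothetical examples.
        from langfuse import Langfuse

        langfuse = Langfuse()  # reads LANGFUSE_* environment variables

        # Register (or update) a versioned prompt template in Langfuse.
        langfuse.create_prompt(
            name="support-answer",
            prompt="You are a support agent for {{product}}. Answer the question: {{question}}",
            labels=["production"],  # label used to pin the version served to the app
        )

        # At runtime, fetch the production version and fill in its variables.
        prompt = langfuse.get_prompt("support-answer", label="production")
        rendered = prompt.compile(product="Langfuse", question="How do I export traces?")
        print(rendered)

    Pinning a label in this way would let a new prompt version be promoted from the Langfuse UI without redeploying the application, which is the collaborative versioning workflow the feature list refers to.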

    Contract

    Standard contract: No

    Customer reviews

    Ratings and reviews

    0 ratings
    0 AWS reviews
    No customer reviews yet