    Dify Premium

    Deployed on AWS
    A cloud-native AI application development platform, empowering mid-sized teams to rapidly design, deploy, and manage scalable AI applications on AWS.

    Overview

    Dify Premium is a ready-to-use, cloud-native edition of Dify, built exclusively for AWS environments to help organizations accelerate innovation. Through an intuitive interface, Dify seamlessly integrates AI workflow orchestration, RAG pipelines, agent capabilities, model management, observability and more, enabling teams to efficiently design, launch, and manage AI-powered applications.

    With Dify Premium, you can deploy the platform on your chosen AWS EC2 instance after purchase via AWS Marketplace, giving you full control over your deployment while leveraging scalable AWS cloud resources. Compared to Dify Community, Dify Premium includes priority email support and branding customization, making it an ideal choice for mid-sized teams seeking greater flexibility and a more polished, professional experience.
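
    As an illustration of the post-purchase deployment step, the sketch below launches an EC2 instance from the Marketplace AMI with the AWS CLI. It is a minimal sketch: the AMI ID, key pair, and security group are placeholders, and the instance type simply matches the recommended c5.2xlarge dimension listed under Pricing.

    # Launch Dify Premium from the Marketplace AMI (placeholder IDs -- substitute your own).
    aws ec2 run-instances \
        --image-id ami-xxxxxxxxxxxxxxxxx \
        --instance-type c5.2xlarge \
        --key-name my-key-pair \
        --security-group-ids sg-xxxxxxxxxxxxxxxxx \
        --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=dify-premium}]'

    # Look up the instance's public IP once it is running (Dify listens on HTTP port 80 by default).
    aws ec2 describe-instances \
        --filters "Name=tag:Name,Values=dify-premium" "Name=instance-state-name,Values=running" \
        --query 'Reservations[].Instances[].PublicIpAddress' --output text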

    For organizations that require advanced customization, enterprise grade security, multi-tenant management, or deployment within private infrastructure, Dify Enterprise is available as an upgrade.

    Highlights

    • Model Management & Flexibility: Access 1,000+ models, including AWS Bedrock and SageMaker, with centralized management and side-by-side performance comparison. Empower teams to flexibly select and integrate the best models into AI applications, all within Dify's intuitive no-code/low-code environment.
    • Agentic Workflows & RAG: Design advanced agentic workflows with multi-step logic, context-aware agents, and cross-modal integration (LLM, TTS, STT). Leverage robust built-in RAG pipelines for seamless data extraction, transformation, and indexing across diverse sources and knowledge bases.
    • Distribution, Branding & LLMOps: Publish AI applications as WebApps, embed them into websites, or integrate via API (see the API sketch below). Apply custom branding for a professional user experience. Monitor performance, analyze metrics, and use LLMOps tools for ongoing experimentation, evaluation, and optimization.
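
    As a rough illustration of API-based distribution, the snippet below queries a published Dify application over HTTP. It is a minimal sketch, assuming an app-level API key created in the Dify console and the standard chat-messages endpoint; the base URL is your instance's address, and the exact path and payload can differ by application type.

    # Minimal sketch: call a published Dify chat application via its API.
    # DIFY_API_KEY is an app API key from the Dify console; endpoint and payload are assumptions.
    curl -X POST "http://<your-instance-ip>/v1/chat-messages" \
        -H "Authorization: Bearer $DIFY_API_KEY" \
        -H "Content-Type: application/json" \
        -d '{
              "inputs": {},
              "query": "What can this assistant do?",
              "response_mode": "blocking",
              "user": "example-user"
            }'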

    Details

    Delivery method

    Delivery option
    64-bit (x86) Amazon Machine Image (AMI)

    Latest version

    Operating system
    Ubuntu 22

    Features and programs

    Financing for AWS Marketplace purchases

    AWS Marketplace now accepts line of credit payments through the PNC Vendor Finance program. This program is available to select AWS customers in the US, excluding NV, NC, ND, TN, & VT.

    Pricing

    Dify Premium

    Pricing is based on actual usage, with charges varying according to how much you consume. Subscriptions have no end date and may be canceled at any time. Alternatively, you can pay upfront for a contract, which typically covers your anticipated usage for the contract duration. Any usage beyond the contract will incur additional usage-based costs.
    Additional AWS infrastructure costs may apply. Use the AWS Pricing Calculator to estimate your infrastructure costs.

    Usage costs (14)

    Dimension                    Cost/hour
    c5.2xlarge (Recommended)     $0.30
    m7i.xlarge                   $0.30
    m6i.xlarge                   $0.30
    c5.xlarge                    $0.30
    m7a.xlarge                   $0.30
    m6a.xlarge                   $0.30
    x1e.xlarge                   $0.30
    m7a.4xlarge                  $0.30
    m5a.2xlarge                  $0.30
    r5.xlarge                    $0.30
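
    For a rough sense of scale, the sketch below estimates a monthly bill at the flat $0.30/hour software rate listed above. The infrastructure rate used is a placeholder, not an actual AWS price; regional on-demand EC2 pricing varies, so use the AWS Pricing Calculator for real figures.

    # Back-of-the-envelope monthly estimate (730 hours/month).
    # The software rate comes from the usage cost table above; the infrastructure
    # rate is a placeholder assumption, not an AWS quote.
    awk 'BEGIN {
        hours = 730; software = 0.30; infra = 0.40
        printf "software:       $%.2f/month\n", hours * software
        printf "infrastructure: $%.2f/month (placeholder rate)\n", hours * infra
        printf "approx total:   $%.2f/month\n", hours * (software + infra)
    }'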

    Vendor refund policy

    No refund is available.

    Legal

    Vendor terms and conditions

    Upon subscribing to this product, you must acknowledge and agree to the terms and conditions outlined in the vendor's End User License Agreement (EULA).

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.

    Usage information

    Delivery details

    64-bit (x86) Amazon Machine Image (AMI)

    Amazon Machine Image (AMI)

    An AMI is a virtual image that provides the information required to launch an instance. Amazon EC2 (Elastic Compute Cloud) instances are virtual servers on which you can run your applications and workloads, offering varying combinations of CPU, memory, storage, and networking resources. You can launch as many instances from as many different AMIs as you need.

    Additional details

    Usage instructions

    [First-time Setup]: If this is your first time accessing Dify, enter the Admin initialization password (set to your EC2 instance's ID) to start the setup process.
    [Accessing Dify Premium]: After the AMI is deployed, access Dify via the instance's public IP, found in the EC2 console (HTTP port 80 is used by default).
    [Upgrading Dify Premium]: In the EC2 instance, run the following commands:
    1. git clone https://github.com/langgenius/dify.git /tmp/dify
    2. mv -f /tmp/dify/docker/* /dify/
    3. rm -rf /tmp/dify
    4. docker-compose down
    5. docker-compose pull
    6. docker-compose -f docker-compose.yaml -f docker-compose.override.yaml up -d
    [Customizing Dify Premium]: Refer to the help documentation: https://docs.dify.ai/getting-started/install-self-hosted/docker-compose#customize-dify
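
    Two short sketches follow. The first retrieves the instance ID (used as the Admin initialization password) from EC2 instance metadata, assuming IMDSv2 is enabled on the instance. The second consolidates the upgrade commands into a single sequence; it assumes the compose files moved in step 2 end up under /dify and therefore changes into that directory before running docker-compose, which the listing does not state explicitly.

    # Fetch the instance ID from instance metadata (IMDSv2).
    TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
        -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
        http://169.254.169.254/latest/meta-data/instance-id

    # Consolidated upgrade sequence (assumption: compose files live under /dify).
    set -euo pipefail
    git clone https://github.com/langgenius/dify.git /tmp/dify
    mv -f /tmp/dify/docker/* /dify/
    rm -rf /tmp/dify
    cd /dify
    docker-compose down
    docker-compose pull
    docker-compose -f docker-compose.yaml -f docker-compose.override.yaml up -d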

    Support

    Vendor support

    Priority email support is included with your subscription. For technical assistance, please contact support@dify.ai. To ensure we can assist you efficiently, please mention that you are a Dify Premium subscriber and include your AWS Account ID in your email.
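
    If you need to look up your AWS Account ID before writing in, one option is the AWS CLI, sketched below on the assumption that the CLI is installed and configured with your credentials.

    # Print the 12-digit AWS Account ID for the current credentials.
    aws sts get-caller-identity --query Account --output text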

    AWS infrastructure support

    AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.

    Product comparison

    Updated weekly
    By LangGenius
    By Dataloop AI/GenAI development platform

    Accolades

    Top 10 in AIOps, Generative AI
    Top 50 in Data Preparation
    Top 10 in Edit/Processing-Text, Natural Language Processing, Generative AI

    Overview

    AI generated from product descriptions
    Model Integration
    Supports access to over 1,000 AI models, including integration with AWS Bedrock and SageMaker, with centralized management and performance comparison capabilities
    Workflow Orchestration
    Enables design of advanced agentic workflows with multi-step logic, context-aware agents, and cross-modal integration across language models, text-to-speech, and speech-to-text systems
    Retrieval Augmented Generation
    Provides robust built-in RAG pipelines for comprehensive data extraction, transformation, and indexing across diverse knowledge sources and databases
    Application Distribution
    Supports publishing AI applications as web applications, website embeddings, and API integrations with customizable branding options
    Performance Monitoring
    Offers LLMOps tools for continuous performance analysis, metrics tracking, experimentation, and optimization of AI application workflows
    Data Management
    Advanced platform for exploring and analyzing unstructured data from diverse sources with automated preprocessing and embeddings
    AI Pipeline Orchestration
    Drag-and-drop and code-based interface for creating complex AI workflows with data, models, apps, and human feedback integration
    Model Management
    Capability to use pre-existing AI models, build custom models, deploy to production, and perform versioning, experimentation, and fine-tuning
    Enterprise Security Framework
    Comprehensive security controls including RBAC, SSO, 2FA, AES-256 encryption, compliance with GDPR, ISO 27001, ISO 27701, and SOC 2 Type II standards
    AI Application Development
    Function-as-a-service offering enabling custom code development for complex tasks with direct data and model access without infrastructure setup
    Multi-Agent LLM Orchestration
    Enables building and managing multiple generative AI agents with capability to assign, switch, and string together different large language models from a single platform
    Retrieval Augmented Generation (RAG)
    Provides customized prompt generation space for creating dynamic generative AI prompts against proprietary data without requiring coding
    Performance Governance
    Offers enterprise-level observability and control mechanisms to mitigate risks of LLM hallucination, misinformation, and sensitive data exposure
    Semantic Analytics
    Generates provenance tracking, coverage assessment, and confidence scoring for AI-generated outputs
    Dynamic LLM Management
    Supports real-time monitoring of token usage and enables comparative analysis and optimization of different large language models

    Contract

    Standard contract
    No

    Customer reviews

    Ratings and reviews

    0 ratings (0 AWS reviews)
    5 star: 0%
    4 star: 0%
    3 star: 0%
    2 star: 0%
    1 star: 0%
    No customer reviews yet
    Be the first to review this product. We've partnered with PeerSpot to gather customer feedback. You can share your experience by writing or recording a review, or scheduling a call with a PeerSpot analyst.