
    Comet - Licensing only

    Sold by: Comet ML 
    Comet's machine learning platform integrates with your existing infrastructure and tools so you can reproduce, debug, manage, visualize, and optimize models, from training runs to production monitoring. Add two lines of code to your notebook or script and automatically start tracking code, hyperparameters, metrics, and more, so you can compare and reproduce training runs.
    4.3

    Overview


    Comet's machine learning platform integrates with your existing infrastructure and tools so you can manage, visualize, and optimize models, from training runs to production monitoring.

    Add two lines of code to your notebook or script and automatically start tracking code, hyperparameters, metrics, and more, so you can compare and reproduce training runs.
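    The "two lines of code" quick start above can be sketched as follows. This is a minimal illustration, assuming the comet_ml Python package is installed and a COMET_API_KEY is configured in the environment; the project name, hyperparameters, and metric values are placeholders, not taken from this listing.

```python
# Minimal sketch of Comet's "two lines of code" quick start.
# Assumes comet_ml is installed and COMET_API_KEY is set in the environment;
# the project name below is a placeholder, not from the listing.
try:
    from comet_ml import Experiment
except ImportError:  # let the sketch degrade gracefully without comet_ml
    Experiment = None

params = {"lr": 1e-3, "batch_size": 32}  # hyperparameters to track
losses = [0.9, 0.5, 0.3]                 # stand-in training metrics

if Experiment is not None:
    experiment = Experiment(project_name="demo-project")  # line 1: start the run
    experiment.log_parameters(params)                     # line 2: log hyperparameters
    # From here Comet also auto-logs code, installed packages, and stdout.
    for step, loss in enumerate(losses):
        experiment.log_metric("train_loss", loss, step=step)
    experiment.end()
```

    Once the run is created, subsequent metric and parameter logging is incremental, so the integration cost really is concentrated in those first two calls.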

    Comet helps ML teams:
    • Track and share training run results in real time.
    • Build their own tailored, interactive visualizations.
    • Track and version datasets and artifacts.
    • Manage their models and trigger deployments.
    • Monitor their models in production.

    Comet's platform supports some of the world's most innovative enterprise teams deploying deep learning at scale and is used by ML teams at Uber, Zappos, Shopify, Affirm, Etsy, and Ancestry.com, and by ML leaders across all industries.

    For custom pricing, an MSA, or a private contract, please contact AWS-Marketplace@comet.com for a private offer.

    Highlights

    • Track and share training run results in real time: Comet's ML platform gives you visibility into training runs and models so you can iterate faster.
    • Manage your models and trigger deployments: Comet Model Registry allows you to keep track of your models ready for deployment. Thanks to the tight integration with Comet Experiment Management, you will have full lineage from training to production.
    • Monitor your models in production: The performance of models deployed to production degrades over time, due to drift or data-quality issues. Use Comet's machine learning platform to identify drift and track accuracy metrics using baselines automatically pulled from training runs.

    Details

    Sold by: Comet ML

    Delivery method: Software as a Service (SaaS)

    Deployed on AWS



    Pricing

    Comet - Licensing only

    Pricing is based on the duration and terms of your contract with the vendor. This entitles you to a specified quantity of use for the contract duration. If you choose not to renew or replace your contract before it ends, access to these entitlements will expire.
    Additional AWS infrastructure costs may apply. Use the AWS Pricing Calculator to estimate your infrastructure costs.

    12-month contract (1)

    Dimension: Advanced Package
    Description: Experiment Management, Model Registry, Monitoring
    Cost/12 months: $4,500.00

    Vendor refund policy

    Non-Refundable. Unless otherwise expressly provided for in this agreement or the applicable Order Form, (i) all fees are based on services purchased and not on actual use; and (ii) all fees paid under this agreement are non-refundable.


    Legal

    Vendor terms and conditions

    Upon subscribing to this product, you must acknowledge and agree to the terms and conditions outlined in the vendor's End User License Agreement (EULA).

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.

    Usage information


    Delivery details

    Software as a Service (SaaS)

    SaaS delivers cloud-based software applications directly to customers over the internet. You can access these applications through a subscription model. You will pay recurring monthly usage fees through your AWS bill, while AWS handles deployment and infrastructure management, ensuring scalability, reliability, and seamless integration with other AWS services.


    Support

    AWS infrastructure support

    AWS Support is a one-on-one, fast-response support channel staffed 24x7x365 with experienced technical support engineers. The service helps customers of all sizes and technical abilities successfully use the products and features provided by Amazon Web Services.


    Accolades

    Top 50 in Computer Vision
    Top 10 in Time-series Forecasting

    Customer reviews

    Sentiment is AI generated from actual customer reviews on AWS and G2. With only 2 reviews, there is insufficient data to rate functionality, ease of use, customer service, or cost effectiveness.

    Overview

    AI generated from product descriptions

    Model Tracking: "Automatically track code, hyperparameters, metrics, and training run details with minimal code integration"
    Experiment Management: "Enable real-time tracking and sharing of machine learning experiment results across team environments"
    Model Registry: "Maintain comprehensive model versioning and lineage tracking from training to production deployment"
    Production Monitoring: "Detect model performance degradation through drift identification and accuracy metric tracking"
    Visualization Support: "Create custom interactive visualizations for machine learning experiment analysis and comparison"
    Model Performance Monitoring: Comprehensive tracking and analysis of model performance across machine learning domains including tabular, deep learning, computer vision, natural language processing, and large language models
    Anomaly Detection: Advanced capabilities to identify and mitigate model drift, data integrity issues, hallucination, accuracy, safety, and security problems in AI deployments
    Advanced Analytics: 3D UMAP visualization for macro-level trend analysis and root cause diagnostics for micro-level model performance investigation
    Security Compliance: SOC 2 Type 2 security compliance with role-based access control (RBAC) for secure model operationalization and environment protection
    Model Validation: Comprehensive model validation and improvement mechanisms to enhance model outputs and optimize deployment outcomes before production
    Data Pipeline Management: Supports data sharding, dynamic resource optimization, and prevents data contamination with error correction mechanisms
    Model Authoring Capabilities: Provides deep learning features with custom reusable components and automatic dimensionality transformations
    Experiment Tracking: Enables hyperparameter tuning, model evaluation, and comprehensive model evolution tracking
    Model Registry and Deployment: Offers secure model storage with full traceability and one-click deployment across cloud, on-premises, and edge environments
    Security Infrastructure: Implements comprehensive security features to protect data and models throughout the machine learning lifecycle

    Contract

    Standard contract: No

    Customer reviews

    Ratings and reviews

    4.3 (14 ratings)

    5 star: 29%
    4 star: 57%
    3 star: 14%
    2 star: 0%
    1 star: 0%

    1 AWS review | 13 external reviews
    External reviews are from G2 and PeerSpot.
    reviewer2774574

    Streamlined experiment tracking has improved collaboration and accelerated data workflows

    Reviewed on Dec 19, 2025
    Review from a verified AWS customer

    What is our primary use case?

    In my current organization, we are using Comet for monitoring and automation purposes. We use Comet to monitor our data pipelines and automated workflows in real time, and it alerts us when a scheduled job fails or when performance drops below the threshold. This allows the team to quickly investigate the logs, identify the root cause, and trigger corrective actions without manual intervention.

    How has it helped my organization?

    Comet has had a very positive impact on my organization, mainly by bringing structure, visibility, and consistency to our workflow. The key improvements I have seen so far are faster experimental cycles. The team can spend less time tracking results manually and more time improving models, which has significantly reduced iteration time. Better reproducibility and reliability features are evident because every experiment is well documented, making it easy to reproduce results and avoid machine-related issues. The shared dashboard and experiment history gives everyone in my team a single source of truth, reducing back-and-forth communication and misalignment between teams. Additionally, visual comparisons and tracked metrics help my team and me confidently choose which models or approaches to move forward with.

    What is most valuable?

    The first Comet feature I use is experiment tracking: metrics, code versions, artifacts, and outputs are captured automatically. This makes it easy to compare runs and reproduce results. The second feature I appreciate most is the visual dashboards and charts, which provide interactive charts and graphs to visualize training curves, metrics over time, and parameter effects. They also help us quickly spot trends, anomalies, or performance regressions.

    Another valuable feature is the collaboration and sharing capability. The team can share experiments, dashboards, and results with links or permissions, which encourages transparency and faster iterations for our product.

    The reason behind this is that experiment tracking is central to our workflow because it automatically captures parameters, metrics, code versions, and outputs for everyone. This makes it easy to compare experiments, understand why one model performed better than another, and reproduce results without manual logging. It also helps catch errors and saves a lot of time when iterating quickly or handling handoffs between team members. Visual dashboards and collaboration are valuable, but experiment tracking is the foundation that everything else builds on.

    One small but really helpful thing is how easy it is to add context to experiments. Features such as tags, notes, and metadata might seem minor, but they make a significant difference over time. Being able to tag runs or add quick notes about why a change was made helps me tremendously when I revisit experiments weeks or months later. This also makes onboarding new team members much smoother.
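    The tags-and-notes workflow described above can be sketched as follows. This is a hedged illustration assuming the comet_ml Python package is installed and configured; the tag names, note text, and project name are hypothetical, not taken from the review.

```python
# Sketch of adding context (tags and a free-form note) to a Comet run.
# Assumes comet_ml is installed and configured; tags, note, and project
# name are illustrative placeholders.
try:
    from comet_ml import Experiment
except ImportError:  # allow the sketch to run without comet_ml installed
    Experiment = None

tags = ["baseline", "adamw-trial"]  # tags for filtering runs later
note = "Switched optimizer to AdamW to test convergence speed."

if Experiment is not None:
    experiment = Experiment(project_name="demo-project")
    for tag in tags:
        experiment.add_tag(tag)         # tag the run for later filtering
    experiment.log_other("note", note)  # attach free-form metadata to the run
    experiment.end()
```

    Tagging at run creation time is what makes the "revisit weeks later" scenario the reviewer mentions practical, since runs can then be filtered by tag in the UI.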

    What needs improvement?

    Overall, Comet has been very strong for us. There are a few areas where improvement can be made to make it even better. The first improvement I found is simpler onboarding for new users. While it is a powerful tool, some advanced features such as custom logging and alerts have a learning curve. More guided walkthroughs or product tips would help users become productive faster. Additionally, pre-built dashboards, alert rules, or experiment templates for common use cases would reduce setup time, especially for smaller teams.

    There is also a need for improvement in large-scale experiment navigation. As the number of experiments grows, filtering and organizing runs could be even more powerful, especially for long-running projects. I believe that more customization in visualization is needed because the dashboards are very useful, but additional options for fine-grain customization would help tailor views for different stakeholders.

    For how long have I used the solution?

    I have been working in my current field for the last five years.

    What do I think about the stability of the solution?

    In our experience, Comet has been very stable and reliable. We have not faced any significant downtime that impacted our workflow. The platform handles concurrent experiments with large data volumes without major issues. Logging dashboards and artifacts storage have been consistently responsive even during high load periods. Occasionally, retrieving very large experiment histories or artifacts can take a few extra seconds, but this has not affected productivity or caused failures for us. When minor issues do arise, Comet's support team responds promptly, which helps maintain reliability.

    What do I think about the scalability of the solution?

    Comet's scalability has been very effective for our organization. It handles more experiments, and we can run hundreds of experiments concurrently without noticeable slowdowns. It also handles large datasets and models. As our team members onboard, project-level permissions, shared dashboards, and collaboration features scale effectively, keeping everyone aligned. Comet's cloud-based architecture automatically scales with use, so we have not needed to worry about provisioning or capacity limits.

    How are customer service and support?

    In my experience with Comet customer support, they have been positive and helpful. When we have reached out with questions or issues, the support team has responded in a timely manner. They have been effective at guiding us through troubleshooting, especially for configuration questions or edge case behaviors. The support representatives seem knowledgeable about the product and able to provide actionable guidance rather than generic responses. In a few cases, support helped clarify gaps in the documentation and even pointed us to resources we had not discovered.

    How would you rate customer service and support?

    Positive

    Which solution did I use previously and why did I switch?

    Before Comet, we were using a combination of manual spreadsheets, local logging scripts, and some basic experiment tracking tools. We switched to Comet because our previous setup had scattered metrics, artifacts, and code across multiple tools and folders, making reproducibility and collaboration difficult. It was hard to compare experiments or track progress across the team, leading to slower iteration cycles. Logging results, visualizing metrics, and sharing updates consumed a lot of time, and sharing insights with teammates and onboarding new members was cumbersome and error-prone.

    What was our ROI?

    We have definitely seen a return on investment for using Comet in both tangible and intangible ways. Automating experiment tracking, logging, and reporting has freed up 30 to 40 percent of time that would otherwise be spent on manual documentation and comparisons. Teams can converge on optimal models more quickly, reducing overall project timelines by roughly 20 to 25 percent. Additionally, shared dashboards and notes reduce miscommunication, leading to smoother handoffs and less duplicated work. It provides us decision confidence, as data-driven insights from Comet allow us to make faster, more reliable decisions on model selections and deployment, which indirectly impacts project success and revenue.

    Which other solutions did I evaluate?

    Before choosing Comet, we evaluated a few more options in the market, including W&B, which is popular for experiment tracking and collaboration with strong visualization features. We also considered MLflow, an open-source platform for tracking experiments, models, and deployments that is flexible but requires more setup. We also experimented with Neptune.ai, which focuses on experiment logging and team collaboration and is lightweight and easy to use. We chose Comet because it offered a good balance between ease of use and advanced features, including experiment tracking, dashboards, and artifact management. It also has strong collaboration and access control capabilities for team and workflow. Additionally, it has reliable integration with our existing tech stack and major machine learning frameworks, which made it the preferred choice over the others.

    What other advice do I have?

    Collaboration in Comet is one of the strongest aspects for my team. Everyone in my team can access the same experiment dashboard and visualization, providing a single source of truth. Team members can leave context or explanation directly on runs, which helps avoid miscommunication and preserve knowledge over time. By tagging experiments, it becomes easier for multiple people to filter and find relevant runs quickly. Role-based permissions allow us to control who can add experiments versus who can only view them, which keeps collaboration secure.

    Comet's documentation and learning resources are quite helpful and generally well-organized. The step-by-step guides and quick start tutorials made onboarding straightforward, especially for integrating with popular machine learning frameworks. The detailed documentation for the Python SDK, REST APIs, and CLI makes it easier to implement custom logging, metrics, and artifact tracking. The knowledge base and community forums provide practical solutions for common issues, which helps reduce downtime.

    Comet handles version control for models, code, and data in a way that is very useful for my team. Every experiment run captures the code version, dataset version, and model checkpoints automatically, making it easy to reproduce results later. Models, datasets, and other artifacts are stored with clear lineage, so we can trace exactly which inputs produced a given output. We can compare different versions of models or experiments and, if required, roll back to previous stable versions without confusion. Additionally, Comet can link to Git commits, making code tracking seamless alongside experiments.

    The advice I would offer to others looking into Comet is to start with the basics and then expand. Organizing experiments with tags, metadata, and inline notes from the start saves a lot of time and makes collaboration much smoother. Connect Comet with existing code repositories, CI/CD pipelines, and collaboration tools to get the most value, and make full use of the shared dashboards. If expecting many experiments or large datasets, structure projects and metadata thoughtfully to maintain performance and organization as usage grows. I would rate this product an 8 out of 10.

    reviewer2751006

    Experiment and asset tracking enhance model development and ease of on-prem maintenance

    Reviewed on Aug 19, 2025
    Review provided by PeerSpot

    What is our primary use case?

    I use Comet for experiment and asset tracking during model development, as well as to support model reproducibility and transparency. I also appreciate the ability to perform an on-prem installation without the need to maintain the installation.

    How has it helped my organization?

    Previously, we had an on-prem installation that required frequent re-deployment due to internal security standards, which could cause downtime during model development. Using Comet within SageMaker streamlined the deployment process to require zero maintenance and also simplified billing.

    What is most valuable?

    Model metric tracking and comparison has been extremely beneficial. Comet's customer service has also been excellent. Any issue we've had, they have been able to help us resolve.

    What needs improvement?

    SageMaker itself has a cumbersome interface, which makes launching Comet somewhat of a hassle.

    For how long have I used the solution?

    I have used the solution for 3 months.

    Which deployment model are you using for this solution?

    On-premises

    If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

    Amazon Web Services (AWS)
    Shreyansh J.

    Comet.ml: Streamlining Machine Learning and Collaborative Experiment Tracking Platform

    Reviewed on Feb 09, 2023
    Review provided by G2
    What do you like best about the product?
    Comet.ml provides an easy-to-use interface for tracking experiments, comparing results, and reproducing past results. This helps data scientists and machine learning engineers to keep track of their progress and make informed decisions based on their experiments. Comet.ml integrates with popular version control systems like Git, allowing users to track changes in their code and experiments over time.
    What do you dislike about the product?
    Comet.ml may not be suitable for large-scale machine learning projects, as it has limited scalability compared to other solutions. Some users may find the platform's user interface and features to be limited, as it may not provide the level of customization they need for their projects.
    What problems is the product solving and how is that benefiting you?
    Machine learning projects can involve a large number of experiments and it can be difficult to keep track of all the results and make decisions based on them. Comet.ml provides a platform for tracking experiments, comparing results, and reproducing past results, making it easier to manage machine learning projects.
    Avi P.

    Solid platform overall but there's competition

    Reviewed on Jun 20, 2022
    Review provided by G2
    What do you like best about the product?
    Simplicity to integrate into my project. Nice UI and UX overall
    What do you dislike about the product?
    Expensive and not so customizable overall. There are platforms that compete with this one and have better offerings, which is why I switched.
    What problems is the product solving and how is that benefiting you?
    Helps me speed up building my neural networks and ML tests...
    Taha S.

    Easy to Use!! Great UI

    Reviewed on May 24, 2022
    Review provided by G2
    What do you like best about the product?
    User interface; easy to use; supports different views and easy text search.
    What do you dislike about the product?
    Price; time taken to pull data; small notification view.
    What problems is the product solving and how is that benefiting you?
    Code Debug
    Application monitoring