Statsig Developer Tier
Statsig reviews from AWS customers
0 AWS reviews
5 star: 0
4 star: 0
3 star: 0
2 star: 0
1 star: 0
External reviews
335 reviews
External reviews are not included in the AWS star rating for the product.
Super fast to build conversion dashboards from Segment events
What do you like best about the product?
The pre-made widgets are pretty nice
What do you dislike about the product?
Error messages should allow me to troubleshoot or at least know what's wrong.
What problems is the product solving and how is that benefiting you?
We use Statsig to build conversion dashboards for new features based on Segment events, run experiments, and quickly monitor impact. It shortens time-to-insight and makes iteration more efficient for our product teams.
Extremely flexible tool
What do you like best about the product?
I am a recent Statsig user, and my first impression of the tool is very positive. The tool is very user-friendly and has many useful functionalities that make my work as a product analyst easier and faster compared to the manual process I used before for every experiment configuration, cleaning, or post-hoc analysis. Also, having a power calculator integrated facilitates my work and makes processes more transparent and easier to share.
What do you dislike about the product?
Minor: I noticed that it is not possible to change the name of the experiment once it has been created. It would be great if one could quickly fix the working title without creating an entirely new experiment. Or maybe I missed how this can be done?
What problems is the product solving and how is that benefiting you?
In my opinion it allows faster and easier A/B test implementation; results are easy to monitor, and the diagnostics section is also very helpful for making sure the experiment is healthy without requiring manual inspection.
Very comprehensive experimentation platform
What do you like best about the product?
This is still early days for us and we're not using the targeting feature at the moment, but as we're onboarding onto Statsig for experiment analytics, we can already appreciate how sleek and exhaustive the platform is. Being able to get a full statistical analysis out of the box saves us a lot of time and effort and brings consistency across the business. The UI is intuitive, especially the results section, which features cool charts like days since first exposure as well as a full suite of diagnostics (crossover, pre-experimental bias, etc.). The account/support team have been very helpful so far and provide great training.
What do you dislike about the product?
There are many layers and customization options available, which makes it a bit hard to navigate at first. It's a bit of a learning curve for everyone, from engineering to analysts.
What problems is the product solving and how is that benefiting you?
It enables us to streamline the statistical analysis part of experimentation and brings more governance and consistency across teams. As it creates efficiency gains, it helps us to increase the pace of experimentation.
Amazing Analytics Tool
What do you like best about the product?
The simple yet detailed data analytics really helps us collect and analyze insights
What do you dislike about the product?
Some of their best features are quite low-key and hard to notice
What problems is the product solving and how is that benefiting you?
It is running A/B experiments and giving me insights into how users are reacting to certain features
Statsig review after real-world use
What do you like best about the product?
Low learning curve – Users don’t need a long tutorial or manual.
Efficiency – Tasks can be done quickly without unnecessary steps.
Error tolerance – Mistakes are easy to fix without penalty.
What do you dislike about the product?
Sometimes Statsig does not work on certain internet providers.
What problems is the product solving and how is that benefiting you?
It streamlines the process of running A/B tests, managing feature releases, and analyzing user behavior, enabling faster, data-driven decisions and accelerating product development cycles
Powerful experimentation and feature management platform
What do you like best about the product?
Statsig makes experimentation feel seamless. I like how easy it is to set up A/B tests and feature gates without slowing down development.
What do you dislike about the product?
I’d like to see more flexible visualization options for custom metrics.
What problems is the product solving and how is that benefiting you?
Statsig helps us evaluate machine learning models and new features before they go into full production. Instead of deploying blindly, we can run controlled experiments to measure real-world impact on key business metrics.
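The workflow this reviewer describes, putting a model or feature behind a flag and measuring it in a controlled experiment, can be sketched roughly as follows, assuming the legacy statsig-js browser SDK (initialize / checkGate / getExperiment / logEvent); the gate, experiment, parameter, and event names below are hypothetical, not taken from the review.

// Rough sketch in TypeScript: gate a new ranking model and read an experiment parameter.
// 'new_ranking_model', 'ranking_model_rollout', 'blend_weight', and 'items_ranked' are made-up names.
import statsig from 'statsig-js';

async function rankItems(items: string[], userID: string): Promise<string[]> {
  // Initialize once per session with a client-side SDK key and the current user.
  await statsig.initialize('client-YOUR_SDK_KEY', { userID });

  // Feature gate: keep the new model dark for users outside the rollout.
  if (!statsig.checkGate('new_ranking_model')) {
    return items; // control experience
  }

  // Experiment parameter: tune the treatment without redeploying.
  const blendWeight = statsig.getExperiment('ranking_model_rollout').get('blend_weight', 0.5);

  // Log a business metric so impact shows up in the experiment results.
  statsig.logEvent('items_ranked', items.length, { blend_weight: String(blendWeight) });
  return items; // a real implementation would re-rank items using blendWeight here
}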
Migrating from Snowplow to Statsig: A Practical Review
What do you like best about the product?
The implementation simplicity is a game-changer. Adding data attributes to track user interactions is incredibly straightforward compared to our previous setup with Snowplow. What used to require complex schema management and pipeline configuration now takes minutes. Having analytics and A/B testing capabilities in one unified platform also eliminates the need for multiple tool integrations, which should streamline our experimentation workflow once we expand beyond pure analytics.
What do you dislike about the product?
The main issue I've encountered is event duplication between Statsig's native automatic tracking and my explicit custom events. This creates noise in the data and requires careful configuration to avoid double-counting user actions. It's manageable once you understand the system, but it would be helpful if the platform had better conflict detection or clearer guidance on when to rely on native tracking versus custom implementation.
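To illustrate the duplication issue described above: if automatic event capture is enabled, an explicit log call for the same interaction can double-count it, so teams typically pick one source of truth per action. A minimal sketch, again assuming the statsig-js SDK, with a hypothetical event name:

// One explicit custom event per user action ('add_to_cart' is a made-up name).
// If automatic click/page tracking also records this button press, the same
// action is counted twice, which is the noise noted above.
import statsig from 'statsig-js';

function onAddToCart(sku: string, priceUsd: number): void {
  statsig.logEvent('add_to_cart', priceUsd, { sku });
}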
What problems is the product solving and how is that benefiting you?
Statsig is solving the complexity and cost burden we faced with our previous analytics setup. With Snowplow, even basic event tracking required extensive configuration, schema management, and pipeline maintenance that consumed significant engineering time. Statsig's streamlined implementation means we can instrument new features in minutes rather than hours or days.
The unified platform also eliminates the tool sprawl problem - instead of managing separate analytics and experimentation systems, we're consolidating everything into one place. This should dramatically simplify our A/B testing workflow when we expand beyond analytics. The cost savings were our initial driver, but the operational efficiency gains are proving to be the bigger long-term benefit, freeing up our team to focus on product development rather than data infrastructure maintenance.
Great product overall
What do you like best about the product?
- Ease of use
- Great Customer Support
- Quick new feature delivery
- Diagnostic tab
- Low latency
What do you dislike about the product?
- Gaps in documentation for the new feature release; some docs are outdated.
- Guarded Releases
What problems is the product solving and how is that benefiting you?
- Low latency
- Availability
- UI
- Analytics
Early in our journey but very promising
What do you like best about the product?
We're still early in our journey with Statsig, but the impact is already clear. Historically, we’ve had limited visibility into how users actually interact with our app - what screens they visit, what they skip, and how engagement varies across the experience. Most of our feedback has come from a vocal few, which isn’t always representative. Statsig is starting to give us the additional data we need to make more informed and balanced decisions.
Even without a demo or diving into the documentation, the setup experience has been smooth. It was quick and intuitive to get our first dashboards and graphs running, and it’s already clear that it’ll integrate well into our existing development process.
Right now, the real-time dashboards are what we're most excited about. Looking ahead, we’re planning to make good use of the experimentation features as we continue to scale. Combining this data with qualitative user feedback and focus groups will give us a strong foundation for future product decisions. Over the next six months, we expect Statsig to significantly improve our confidence in roadmap planning and help us better understand user behaviour across platforms.
What do you dislike about the product?
At this early stage, I don’t have any dislikes to report. Our experience so far has been wholly positive, and we’ve found the platform intuitive and easy to get started with.
What problems is the product solving and how is that benefiting you?
Statsig is helping us solve a key problem: limited visibility into how users interact with our app. Previously, our understanding of user behaviour was based largely on anecdotal feedback from a limited number of users, which made it difficult to make confident product decisions. With Statsig, we will be gaining data-driven insights into which screens users visit or ignore, how they engage with different features, and where improvements are needed.
This will benefit us by supporting more balanced, informed decision-making. It will also help us validate user feedback at scale and give us the foundation to run meaningful experiments in the future. Ultimately, it’s enabling us to build a clearer picture of user engagement across platforms, which will be critical as we refine existing features and plan future product launches.
Powerful, Integrated Experimentation with Room to Improve Usability
What do you like best about the product?
For me, Statsig’s biggest strength is its all‑in‑one approach: experimentation, feature flagging, and analytics in one intuitive platform. Setting up A/B tests is fast, the SDKs are solid, and the built‑in statistical engine does heavy lifting like p‑value and confidence interval calculations automatically. I also appreciate the transparent and generous pricing model and their support is consistently responsive and helpful.
What do you dislike about the product?
There’s a bit of a learning curve especially when configuring metrics or pulling from a data warehouse. The distinctions between experiments, feature gates, and dynamic configs can feel muddy until you’ve used them a few times in context. Additionally, the UI sometimes feels cluttered when managing many experiments—better filtering and customization options would help.
What problems is the product solving and how is that benefiting you?
Before Statsig, we often relied on intuition or manual analysis to assess the impact of new features, which led to slower decisions and uncertainty. Now, we can ship A/B tests quickly, measure impact across key metrics automatically, and roll out features behind flags all in one platform.
It also simplifies managing experiments at scale. The guardrail metrics, automated stats engine, and built-in dashboards save us hours of manual analysis.