
Reviews from AWS customers

0 AWS reviews
  • 5 star: 0
  • 4 star: 0
  • 3 star: 0
  • 2 star: 0
  • 1 star: 0

External reviews

251 reviews from G2
External reviews are not included in the AWS star rating for the product.


    Banking

Greatly speeds up experimentation setup & analysis

  • August 01, 2025
  • Review provided by G2

What do you like best about the product?
I use statsig to monitor product experiments almost every day.
- Ease of setting up new experiments
- Ease & speed of analysing experimental results
- Has all the features we need to cover 90% of the experiment setups we wish to run
- Speed of bug reporting, understanding & resolution
- Amazing documentation and further support when asking for it
What do you dislike about the product?
It can be complex to understand at first. There's a big learning curve upfront, especially for engineers, but once that's done, it becomes a superpower.
What problems is the product solving and how is that benefiting you?
- Experiment setup speed
- Results monitoring & finalisation (with statistical adjustments, e.g. early stopping)
- Metric calculations and statistical testing
- Experiment catalogue and record keeping
- Federation of running experiments
- Holdout groups


    Financial Services

Statsig makes my job so much easier

  • August 01, 2025
  • Review provided by G2

What do you like best about the product?
It's super easy to use (setting up experiments, deep diving into results) and has a wide range of useful features (user and metric breakouts, holdouts, dashboards)
What do you dislike about the product?
It's perhaps a little bit too stats-y for non-technical people to understand - it would be nice to have more in-situ information about what results mean, so that my stakeholders can self serve more easily
What problems is the product solving and how is that benefiting you?
Running experimentation at scale - Statsig allows me to run lots of experiments at once without drowning in setup & analysis


    Rodrigo B.

Statsig makes complex experimentation setup and analysis easy

  • August 01, 2025
  • Review provided by G2

What do you like best about the product?
I like Statsig's user interface and all the experimentation capabilities it supports.

It's easy to break down results based on specific customer demographics, or improve the estimates of impact with additional regressors. It takes a lot of the setup burden off the organisation, so we can focus on delivering insights.
What do you dislike about the product?
Statsig could improve how we extract the details of experiment performance programmatically, so that we can do meta-analysis offline.
What problems is the product solving and how is that benefiting you?
Statsig enables more self-serve capabilities for experiment analysis. We've had big success getting non-technical stakeholders to set up and analyse their own experiments, with us focusing only on providing guidelines and best practices.


    Ling L.

I use statsig at work for experimentation - both setup and basic analysis, as well as feature flags.

  • August 01, 2025
  • Review provided by G2

What do you like best about the product?
- Easy to set up
- Experiment/feature flag setup is straightforward
- UI is intuitive with respect to experiment setup and results
What do you dislike about the product?
There are some bugs with the client SDK, and we've had issues with documentation/debugging that required very close support from Statsig engineers.
What problems is the product solving and how is that benefiting you?
Doing experimentation at scale with analytics is cumbersome. At Notion, we previously had an in-house experimentation platform, but as we scaled up, Statsig proved easier to use and has more robust features and UI for our purposes.


    ling h.

Easy to use

  • August 01, 2025
  • Review provided by G2

What do you like best about the product?
It is very easy to use and integrates with Slack, so when colleagues post status updates I get notifications.

It is also very easy to set up experiment rules and overrides, and the graph for comparing different groups is easy to understand.
What do you dislike about the product?
Top of mind is that the first-time setup is not very straightforward; someone needs to write down very specific steps. But once we had those steps, it was very easy to analyze the data.
What problems is the product solving and how is that benefiting you?
It lists all the metrics I can use for my experiment, so once our Data Scientists set it up, I can easily use it.
And if I'm curious about something else, I can see the query to understand it better, or create my own query against the experiment, which helps with analysis.


    Zach S.

Attentive Review

  • August 01, 2025
  • Review provided by G2

What do you like best about the product?
It really made A/B testing at Attentive better. It standardized our processes, made our metrics more rigorous, and centralized A/B testing as a whole across the company.
What do you dislike about the product?
It's a bit complicated to use. At times it can be overwhelming.
What problems is the product solving and how is that benefiting you?
We use Statsig to A/B test new features.


    Computer Software

Many features for growth engineering teams to run effective tests

  • July 31, 2025
  • Review provided by G2

What do you like best about the product?
Lots of features for both engineers and data scientists to make educated decisions on A/B experiments, such as filtering out outliers, custom assignment sources, winsorization, etc. Analyzing experiments from within the web interface is easy, and there are many tools available, such as Explore queries where you can cut by custom dimensions, adjust the CI percentage, and add or remove metrics. Additionally, customer support is very helpful and friendly whenever we have questions or encounter issues.
What do you dislike about the product?
Some of the "edge cases" are not clear to the user. Segments are not supposed to exceed 10K rows, but the system doesn't stop you from going over that limit; it simply becomes unstable once you do. Another example: when calling the Statsig API from offline jobs (such as cron), there is a default 500-event batch limit and a 60-second flush interval, so if your job logs fewer than 500 exposures and finishes in under 60 seconds, those exposures get lost unless you explicitly call Statsig.flush() before the pod exits.
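To illustrate the flush issue the reviewer describes, here is a minimal sketch of a short-lived job, assuming Statsig's Node server SDK; the gate-free event, user ID, and environment variable are hypothetical, and exact method signatures may differ by SDK version.

```typescript
// Hypothetical short-lived cron job: without an explicit flush, queued
// exposures/events can be dropped if the process exits before the SDK's
// periodic flush fires and the batch limit is never reached.
import Statsig from 'statsig-node';

async function runCronJob(): Promise<void> {
  await Statsig.initialize(process.env.STATSIG_SERVER_SECRET ?? ''); // hypothetical env var

  const user = { userID: 'batch-job' }; // hypothetical user for illustration
  Statsig.logEvent(user, 'offline_job_completed'); // queued in memory, not yet sent

  // Force-send everything still queued before the pod exits;
  // Statsig.shutdown() also flushes and releases SDK resources.
  await Statsig.flush();
  await Statsig.shutdown();
}

runCronJob().catch((err) => {
  console.error(err);
  process.exit(1);
});
```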
What problems is the product solving and how is that benefiting you?
How new features perform on our platform, what kinds of user cohorts utilize vs. shy away from certain features. Also helps with gradual rollout / immediate rollback of risky changes.


    Fabricio N.

Great for FF and even more

  • July 31, 2025
  • Review provided by G2

What do you like best about the product?
The feature flag dashboard is very flexible and easy to use.
Managing different environments is accessible and quite flexible.
SDK makes integration a breeze.
What do you dislike about the product?
Organizing feature flags as their numbers grow is not as easy as I expected; it could be better.
What problems is the product solving and how is that benefiting you?
Mostly feature gating, but also product analytics.


    Ke W.

Great product, easy to use and insightful

  • July 31, 2025
  • Review provided by G2

What do you like best about the product?
Very easy to use; setting up experiments and feature gates is quite straightforward. Dynamic config is also quite practical for daily operations.
What do you dislike about the product?
It is a little hard to understand its limitations on throughput, e.g. whether we should put certain data into our own db or into a Statsig dynamic config. If the dashboards / insights were more powerful, we could save a lot by not needing other tools such as Mixpanel.
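For context on what "putting it into a Statsig dynamic config" looks like in practice, here is a minimal sketch assuming the Node server SDK; the config name, keys, and defaults are hypothetical, and low-volume operational settings (not high-throughput data) are the assumed use case.

```typescript
// Hypothetical example: reading operational settings from a dynamic config
// instead of a database table, so values can be changed from the Statsig
// console without a deploy. Assumes Statsig.initialize() ran at startup.
import Statsig from 'statsig-node';

async function getDailyOpsSettings(userID: string) {
  const user = { userID };
  const config = await Statsig.getConfig(user, 'daily_ops_settings'); // hypothetical config name

  return {
    maxRequestsPerMinute: config.get('max_requests_per_minute', 60), // fallback defaults
    maintenanceBanner: config.get('maintenance_banner', ''),
  };
}
```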
What problems is the product solving and how is that benefiting you?
Feature rollout and A/B testing.
It is much easier to do this in Statsig than building an in-house solution.


    Brian L.

A powerful and developer-friendly experimentation platform with room to grow

  • July 31, 2025
  • Review provided by G2

What do you like best about the product?
Statsig makes it easy to run experiments and manage feature flags with minimal setup. The SDKs are straightforward, the dynamic configs are flexible, and the event logging is well-structured. I especially appreciate how experimentation logic can stay on the backend and remain decoupled from UI, which fits perfectly with our architecture. The ability to evaluate flags and variants in real time using console rules is also incredibly powerful. Lastly, the team is responsive and open to feedback, which makes a real difference.

Very smooth. The SDK was easy to integrate into our backend service, and initial setup of feature gates and dynamic configs was quick. We were up and running with our first experiment in a matter of hours.

We use Statsig consistently for experimentation logic, feature rollout control, and real-time configuration updates. It’s now an essential part of our development and deployment workflow.

Straightforward. Backend integration (in our case, .NET) was well-supported. The SDK offers a clean API, and the event logging + variant evaluation flow was simple to embed into our existing services.
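As a rough illustration of the backend-only flow described here: the reviewer's stack is .NET, but this sketch uses the Node server SDK for consistency with the other examples, and the gate, experiment, and parameter names are hypothetical.

```typescript
// Hypothetical request handler: gate check, server-side variant assignment,
// and event logging all stay on the backend; the UI only receives the result.
import Statsig from 'statsig-node';

async function resolveCheckoutFlow(userID: string): Promise<string> {
  const user = { userID };

  // Feature gate controls whether the new flow is reachable at all
  const gateOn = await Statsig.checkGate(user, 'new_checkout'); // hypothetical gate
  if (!gateOn) {
    return 'legacy';
  }

  // Experiment variant is evaluated in real time on the server
  const experiment = await Statsig.getExperiment(user, 'checkout_copy_test'); // hypothetical
  const buttonLabel = experiment.get('button_label', 'Buy now');

  // Custom event tied to the same user so metrics line up with the exposure
  Statsig.logEvent(user, 'checkout_viewed', null, { button_label: buttonLabel });

  return buttonLabel;
}

// Assumed to run once at service startup:
// await Statsig.initialize(process.env.STATSIG_SERVER_SECRET ?? '');
```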
What do you dislike about the product?
The documentation could go a bit deeper in some areas — especially around advanced configuration and production-ready patterns. We also noticed that the console UI can feel a bit clunky at times when dealing with large numbers of configs or gates. Some limitations around segment targeting and rule flexibility required us to build custom logic on top.
What problems is the product solving and how is that benefiting you?
Statsig helps us decouple experimentation logic from frontend clients and manage feature rollouts safely and efficiently. It solves the complexity of running A/B tests by providing real-time evaluation, clear variant assignment, and automatic metric tracking — all without reinventing the wheel internally.

By using Statsig, we can confidently experiment with new features, validate assumptions with data, and gradually roll out changes with minimal risk. It’s also helping promote a culture of experimentation across teams by making the tooling accessible and reliable.