AWS Public Sector Blog
Change.org’s Ray-based recommender on Amazon EKS increases petition signatures by 30%
Change.org’s mission is to build better democracies through digital platforms that empower people to create change. More than 1,500 petitions are created every day on Change.org, covering topics ranging from public health to land zoning to early childhood education. With such a vast amount of content on the site, many important petitions never gain traction merely because people don’t know they exist. To address this discovery problem, Change.org decided to focus on maximizing engagement on their weekly email digest, which is sent to millions of users. By recommending more relevant petitions to these users, Change.org aimed to increase signatures and amplify the impact of citizen-driven campaigns.
In this post, we discuss how Change.org built a next-generation recommender on Amazon Web Services (AWS) that turned more awareness into action—boosting petition signatures by 30 percent.
From manual to machine learning
Historically, Change.org relied heavily on manual email marketing, in which communications and marketing staff would research trending topics and send mass emails. This process was labor-intensive and limited in its ability to provide personalized recommendations or capture rapidly trending topics. Nidhi Samuchiwal, senior data and AI manager at Change.org, explained, “It was kind of a missed opportunity. Sometimes a petition was not picked at the right time, and we were not able to catch it and make it available to other masses of supporters.”
Recognizing the limitations of this approach, the Change.org team began exploring various techniques to improve their recommendation system. Rather than immediately jumping to complex machine learning (ML) solutions, they first developed interpretable, content-based heuristic models. As Peter Winslow—senior staff data scientist at Change.org—described, the team focused initially on building something “simple, highly interpretable, easy to change quickly. Just to kind of keep the lights running and buy you that time so that you’re not trying to build something complex in a huge rush.” Although these models proved valuable as a first step, the significant breakthrough came when Change.org eventually transitioned to neural networks. “We’ve really driven it quite a bit further with the next-gen recommender, which is now ML-based,” Winslow emphasized.
Choosing a modern ML architecture
To build a more sophisticated recommender, Change.org considered several factors:
- Scale: They needed to train on hundreds of millions of examples and serve tens of millions of users weekly.
- Performance: They wanted one model that would dramatically increase signature rates globally, because the organization has active users in more than 100 countries and staff in 13 countries.
- Automation: The system needed to combine features, run inference, and retrain regularly with minimal manual effort.
After evaluating options, the team chose Ray, an open source framework for scaling AI applications. Some of the reasons for choosing Ray include its compatibility with Amazon Elastic Kubernetes Service (Amazon EKS), which Change.org had deep expertise in, and its ability to take advantage of smaller, single-GPU instances that are often more readily available than the large, multi-GPU machines. “With Ray, I feel like I can squeeze every last drop out of those GPUs,” Jesse Steinweg-Woods, staff AI engineer at Change.org, shared. Specifically, they are using Ray Train for training and Ray Data for processing and inference.
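To give a flavor of how these two pieces fit together, the sketch below pairs a minimal Ray Train job with a Ray Data batch-inference pass. The model, S3 paths, column names, and worker counts are illustrative assumptions rather than Change.org’s actual code, and exact arguments vary across Ray versions.

```python
# Minimal Ray Train + Ray Data sketch (placeholder model, paths, and columns;
# exact APIs and arguments vary by Ray version).
import ray
import torch
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer


def train_loop_per_worker(config):
    # Each worker receives a shard of the dataset and a DDP-wrapped model.
    model = ray.train.torch.prepare_model(torch.nn.Linear(128, 1))  # placeholder model
    optimizer = torch.optim.Adam(model.parameters(), lr=config["lr"])
    shard = ray.train.get_dataset_shard("train")
    for epoch in range(config["epochs"]):
        for batch in shard.iter_torch_batches(batch_size=4096, dtypes=torch.float32):
            preds = model(batch["features"]).squeeze(-1)   # assumes a "features" column
            loss = torch.nn.functional.mse_loss(preds, batch["label"])
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        ray.train.report({"epoch": epoch})


trainer = TorchTrainer(
    train_loop_per_worker,
    train_loop_config={"lr": 1e-3, "epochs": 2},
    scaling_config=ScalingConfig(num_workers=8, use_gpu=True),  # 8 single-GPU workers
    datasets={"train": ray.data.read_parquet("s3://example-bucket/features/")},
)
result = trainer.fit()


# Ray Data batch inference: score candidate rows with a pool of GPU actors.
class ScoreBatch:
    def __init__(self):
        self.model = torch.nn.Linear(128, 1)  # load the trained model weights here

    def __call__(self, batch):
        feats = torch.as_tensor(batch["features"], dtype=torch.float32)
        with torch.no_grad():
            batch["score"] = self.model(feats).squeeze(-1).numpy()
        return batch


scored = (
    ray.data.read_parquet("s3://example-bucket/candidates/")
    .map_batches(ScoreBatch, batch_size=1024, num_gpus=1, concurrency=4)
)
scored.write_parquet("s3://example-bucket/scores/")
```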
They decided to run Ray on Amazon EKS to enable rapid scaling of compute resources. “We wanted to be able to scale up and down the number of machines based on job requirements very quickly. It’s done exactly what we’ve needed it to do without issue,” Steinweg-Woods explained.
Lastly, they chose Amazon Managed Workflows for Apache Airflow (Amazon MWAA) to orchestrate components built with different tools. This meant they could run some pieces in Databricks (a data intelligence platform from the original creators of Apache Spark) and some on Amazon EKS, and automate all of the scheduling.
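A minimal Airflow DAG along these lines might look like the following sketch. The Databricks job, container image, namespace, and connection ID are hypothetical, and operator import paths depend on the installed provider package versions.

```python
# Hypothetical MWAA DAG stitching the weekly pipeline together:
# feature engineering on Databricks, then batch inference on Amazon EKS.
from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

with DAG(
    dag_id="weekly_digest_recommendations",
    start_date=datetime(2025, 1, 1),
    schedule="@weekly",
    catchup=False,
) as dag:
    build_features = DatabricksSubmitRunOperator(
        task_id="build_features",
        databricks_conn_id="databricks_default",
        json={
            "existing_cluster_id": "example-cluster-id",
            "notebook_task": {"notebook_path": "/jobs/build_user_petition_features"},
        },
    )

    run_inference = KubernetesPodOperator(
        task_id="run_ray_inference",
        name="ray-batch-inference",
        namespace="recommender",
        image="example.dkr.ecr.us-east-1.amazonaws.com/recsys-inference:latest",
        cmds=["python", "score_users.py"],
        get_logs=True,
    )

    build_features >> run_inference  # run inference only after features are ready
```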
The major components of the solution are Databricks, Amazon EKS, and Amazon MWAA. The formatted results are stored in Amazon ElastiCache to support the quick lookups required when sending the bulk emails. The following diagram shows the solution architecture of the inference pipeline.
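As a small sketch of the ElastiCache piece of that architecture, precomputed recommendation lists could be written to and read back from Redis roughly as follows. The endpoint, key scheme, and expiry used here are assumptions for illustration, not Change.org’s actual configuration.

```python
# Illustrative write/read of precomputed recommendations in ElastiCache for Redis,
# so the bulk email sender can look up each user's petitions in constant time.
import json

import redis

r = redis.Redis(host="example.cache.amazonaws.com", port=6379, decode_responses=True)


def store_recommendations(user_id: str, petition_ids: list[str]) -> None:
    # One key per user; expire after 8 days so stale digest results age out.
    r.set(f"digest:recs:{user_id}", json.dumps(petition_ids), ex=8 * 24 * 3600)


def fetch_recommendations(user_id: str) -> list[str]:
    raw = r.get(f"digest:recs:{user_id}")
    return json.loads(raw) if raw else []


store_recommendations("user-123", ["petition-9", "petition-42", "petition-7"])
print(fetch_recommendations("user-123"))
```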
Building the solution
The team spent about 8 months building the new PyTorch-based recommender system. They started with feature engineering using Apache Spark jobs in Databricks and designing the model’s architecture. The architecture uses a two-stage approach:
- Retrieval: A two-tower model embeds users and multilingual petitions independently into a shared vector space, enabling efficient similarity comparisons. During this phase, the system retrieves 500 candidate petitions for each user. Both models use transformer architectures to process the sequential nature of a user’s petition signing history.
- Ranking: A second model ranks the petitions from the candidate set. To put the scale in perspective: with 500 petitions from initial retrieval and tens of millions of users, the system processes 25 billion petition scores weekly. This second model refines the results by analyzing the relationship between users and petitions, ensuring more relevant recommendations. A simplified sketch of this two-stage pattern follows this list.
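The PyTorch sketch below illustrates the general two-tower retrieval plus ranking pattern described above; it is not Change.org’s production model, and the embedding sizes, transformer settings, feature shapes, and ranking head are placeholder assumptions.

```python
# Two-stage recommender sketch: a two-tower retrieval model plus a small ranking head.
# Dimensions, features, and layer choices are illustrative placeholders.
import torch
import torch.nn as nn

EMBED_DIM = 128


class UserTower(nn.Module):
    def __init__(self, num_petitions: int):
        super().__init__()
        self.petition_emb = nn.Embedding(num_petitions, EMBED_DIM)
        layer = nn.TransformerEncoderLayer(d_model=EMBED_DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, signed_history: torch.Tensor) -> torch.Tensor:
        # signed_history: (batch, seq_len) of petition IDs the user previously signed.
        h = self.encoder(self.petition_emb(signed_history))
        return h.mean(dim=1)  # (batch, EMBED_DIM) user embedding


class PetitionTower(nn.Module):
    def __init__(self, text_feature_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(text_feature_dim, EMBED_DIM), nn.ReLU(), nn.Linear(EMBED_DIM, EMBED_DIM)
        )

    def forward(self, petition_features: torch.Tensor) -> torch.Tensor:
        return self.proj(petition_features)  # (batch, EMBED_DIM) petition embedding


class RankingHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * EMBED_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, user_emb: torch.Tensor, petition_emb: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([user_emb, petition_emb], dim=-1)).squeeze(-1)


users = UserTower(num_petitions=10_000)
petitions = PetitionTower(text_feature_dim=384)
ranker = RankingHead()

u = users(torch.randint(0, 10_000, (1, 20)))             # one user, 20 signed petitions
p = petitions(torch.randn(10_000, 384))                  # embeddings for every petition
retrieval_scores = (u @ p.T).squeeze(0)                  # similarity to every petition
candidates = retrieval_scores.topk(500).indices          # stage one: 500 candidates
final_scores = ranker(u.expand(500, -1), p[candidates])  # stage two: rank the 500
```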
The result of this approach is a model that is language-agnostic, time- and compute-efficient, and effective in surfacing petitions that might represent niche causes and not receive as much internet traffic or press. For example, Steinweg-Woods explained, “If a user has signed a bunch of stuff about animal rights in the past, and we have a new animal rights petition, it’s going to be scored pretty high on the list of recommendations, even if it’s less popular.”
After solidifying their design, the team worked on implementing Ray on Amazon EKS to handle their massive dataset and experimented with different Amazon Elastic Compute Cloud (Amazon EC2) node instance types and sizes to dial in on the correct machines to use for this workload. Lastly, they integrated Amazon MWAA to stitch together the various components of their pipeline.
When the team had a model they had some confidence in, they slowly started A/B testing. Initially, they saw only modest improvements of 5-6 percent in petition signatures. “It wasn’t the big lifts everybody was expecting, so people were starting to get a little worried,” Steinweg-Woods said. They quickly realized that they were seeing the results of model drift (the decay of a model’s performance over time) because the content on Change.org is often very closely tied to rapidly shifting current events. After enabling more frequent retraining to keep pace with their platform’s dynamic nature, Change.org’s engagement metrics soared to the double-digit growth they had envisioned.
Results and future plans
On March 28, 2025, Change.org launched their recommender to 100 percent of emailable users for their weekly digests. The results were:
- 30 percent increase in petition signature rates globally, representing petitions in 8 different written languages
- 50 percent increase in petition signature rates in the United States and Great Britain
- Improved user satisfaction (“delighted”) scores
These increases directly impact Change.org’s primary mission of empowering ordinary people to create change. “If petitions get more signatures, they start to get more media attention, and then more people sign them,” explained Steinweg-Woods. “It’s a flywheel effect—the more people sign, the more likely someone is going to take action because of the public pressure.”
Looking ahead, Change.org plans to experiment with different retraining frequencies to keep models fresh, run trials with additional features and signals, and eventually expand to the website to serve live recommendations.
Lessons learned
The Change.org team shared some key takeaways for others building recommendation systems:
- Start simple. ML is not the solution for every problem, and even when it is, it’s often worth experimenting with the simplest options first: “Begin with basic, interpretable models to serve users while you build more complex systems,” advised Winslow. Your initial baseline might not require any ML knowledge at all, just counting (see the sketch after this list)!
- Balance performance and user experience. For Change.org, that meant moving beyond merely maximizing technical metrics and instead focusing on delivering relevant, timely recommendations that enhance rather than detract from the user experience. Consider a responsible AI framework to help structure your thinking. “We care a lot about our users,” emphasized Samuchiwal. “We want to make sure we get the user experience right, so they should get the right petition recommended at the right time, as opposed to us spamming them with all different kinds of petitions.”
- Consider AWS managed services where possible to reduce operational burden: “It’s nice to use managed services like EKS and MWAA,” Steinweg-Woods shared. “I really wouldn’t want to manage an Airflow cluster myself. We’d rather spend that time focusing on the problem we’re trying to solve.”
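As a concrete illustration of the “just counting” baseline from the first takeaway, a popularity-based recommender can be as simple as the following sketch. The signature records and field names are made up for illustration.

```python
# A deliberately simple, interpretable baseline along the lines of "just counting":
# recommend the most-signed recent petitions that a given user hasn't signed yet.
from collections import Counter

recent_signatures = [  # (user_id, petition_id) pairs from the last 7 days (illustrative)
    ("u1", "p1"), ("u2", "p1"), ("u3", "p2"), ("u1", "p3"), ("u4", "p1"),
]

popularity = Counter(petition for _, petition in recent_signatures)


def recommend(user_id: str, k: int = 3) -> list[str]:
    already_signed = {p for u, p in recent_signatures if u == user_id}
    ranked = [p for p, _ in popularity.most_common() if p not in already_signed]
    return ranked[:k]


print(recommend("u1"))  # ['p2'] -- the most popular petition u1 hasn't signed yet
```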
By using modern ML tools on AWS, Change.org built a powerful recommendation engine that amplifies citizen voices and drives meaningful change. Follow Change.org’s lead, and harness the combined power of Ray and Amazon EKS to build your ML infrastructure today.
If you’re interested in hearing from Change.org directly, attend their breakout session NPR301 at the AWS Summit in Washington, DC.
And if you’re inspired by Change.org’s mission, visit their website to see how millions of people are turning digital signatures into real-world impact.