
The Weather Company Uses Generative AI To Improve Forecast Comprehension

The company has combined human meteorological expertise with trusted technology to drive progress and deliver around 25 billion forecasts daily

Overview

The world’s most accurate forecaster according to ForecastWatch, and one of America’s most trusted brands, The Weather Company reaches 2 billion people in 178 countries and 83 languages and has leveraged high-performance computing (HPC), artificial intelligence (AI), and machine learning (ML) in its forecasting for decades. Weather is becoming more erratic, consumer behavior and expectations are shifting, and businesses and governments increasingly rely on accurate weather forecasts because weather affects at least one cost metric in their organizations. Since its inception over four decades ago, the company has combined human meteorological expertise with trusted technology to drive progress and deliver around 25 billion forecasts daily.

The Weather Company continues to advance weather forecasting, driven by AI and using generative AI to help people and businesses make more informed decisions based on weather—which starts with accurate weather data. “Building weather and climate resilience is a global challenge that requires working together with other experts in the weather community and across industries, such as AWS, NVIDIA, and more,” said The Weather Company CIO Dan Margulies. “We continue to innovate with cutting-edge technologies including generative AI to improve communications, messaging, and alerting of those forecasts. Because even the freshest, most accurate forecast can only be helpful if it gets in front of those who need it most during a weather event.”

About The Weather Company

As a leading global provider of weather data, forecasting, and insights, The Weather Company delivers scalable, proven solutions for consumers through its flagship consumer brand, The Weather Channel, as well as to businesses across the advertising, aviation, and media industries and more.

Challenges | Long Lag Times Between Model Training and Deployment

As weather patterns become more erratic and impactful, forecast accuracy matters more than ever. To continue delivering meaningful, actionable weather information, The Weather Company looked to scale its ML operations with more transparency and less complexity, and to effectively manage its growing data science team, by embracing MLOps: a set of practices that unifies ML application development (Dev) with ML system deployment and operations (Ops) and automates and standardizes processes across the ML lifecycle, including model development, testing, integration, release, and infrastructure management. The company had run into challenges running ML workflows in the cloud due to a lack of transparency into ML jobs, limited monitoring, and the absence of a feature store, all of which made it hard for users to collaborate.

The teams also struggled with their existing Kubeflow container-based MLOps environment, experiencing long lag times between model training and deployment. Additionally, patchwork management of the underlying environment was gradually becoming an operational burden, and the lack of operational efficiency between data scientists and ML engineers led to long lead times for model productization. The company also faced challenges in parsing massive amounts of data from numerous sources for last-mile delivery of information, including summarizing forecasts for consumers across digital properties and localized content on The Weather Channel app and weather.com, as well as for enterprise customers.

Solution | Switching from Containers to Amazon SageMaker AI

The Weather Company collaborated with the AWS Generative AI Innovation Center to expedite the implementation of MLOps, ensuring a repeatable and scalable migration of ML models from containers to Amazon SageMaker AI. The solution offered a versatile framework comprising model training, monitoring, and inferencing standards. Its purpose was to cater to various ML scenarios specific to first-party data models and advertising efforts, as well as to simplify collaboration between data scientists and ML engineers. And now, with services such as Amazon Bedrock, the company can use large language models (LLMs) to more easily parse massive amounts of data for last-mile delivery to consumers and enterprise customers.


“We’ve used AI and generative AI for years to create reliable, timely weather forecasts and to better contextualize those forecasts to help people make more informed, confident decisions to keep their families and businesses safe,” said Margulies. “Secure first-party data models on Amazon SageMaker AI help us deliver weather to our customers in a more relevant way in order to better explain how weather will impact them.”

The Weather Company’s ML models have two primary pipelines: 1) a training pipeline dedicated to model training, registration, and data quality checks, and 2) an inference pipeline tailored for on-demand batch inference, monitoring, and drift detection, which measures how new data points, individually or in groups, compare to a reference baseline. The architectural framework leverages Amazon SageMaker Pipelines, a series of interconnected steps in a directed acyclic graph (DAG) defined using the drag-and-drop user interface (UI) or the Pipelines SDK. The company uses Amazon Managed Workflows for Apache Airflow to orchestrate and schedule workflows as DAGs. It uses Amazon SageMaker Notebooks to provide an integrated development environment (IDE) for notebooks, code, and data, and Amazon SageMaker Feature Store, a purpose-built repository to store, share, and manage features for ML models. It uses Amazon SageMaker Model Registry to manage the entire lifecycle of ML models from training to inference.
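To illustrate how the training side of such a setup can be expressed, here is a minimal sketch of a SageMaker Pipelines definition with a training step followed by model registration. The IAM role, S3 buckets, algorithm choice, and model package group name are hypothetical placeholders, not The Weather Company’s actual configuration.

```python
# Sketch of a training pipeline in Amazon SageMaker Pipelines:
# train a model, then register it in the SageMaker Model Registry.
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep
from sagemaker.workflow.step_collections import RegisterModel

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical role

# Training step: fits a model on data staged in S3 (XGBoost used only as an example).
estimator = Estimator(
    image_uri=image_uris.retrieve("xgboost", session.boto_region_name, version="1.7-1"),
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://example-bucket/models/",  # hypothetical bucket
)
train_step = TrainingStep(
    name="TrainForecastModel",
    estimator=estimator,
    inputs={"train": TrainingInput("s3://example-bucket/train/")},  # hypothetical data
)

# Registration step: versions the trained model artifacts in the Model Registry.
register_step = RegisterModel(
    name="RegisterForecastModel",
    estimator=estimator,
    model_data=train_step.properties.ModelArtifacts.S3ModelArtifacts,
    content_types=["text/csv"],
    response_types=["text/csv"],
    inference_instances=["ml.m5.xlarge"],
    transform_instances=["ml.m5.xlarge"],
    model_package_group_name="weather-model-group",  # hypothetical group name
)

# The pipeline is the DAG of steps; SageMaker infers ordering from data dependencies.
pipeline = Pipeline(name="TrainingPipeline", steps=[train_step, register_step])
pipeline.upsert(role_arn=role)  # create or update the pipeline definition
# pipeline.start()              # kick off an execution, e.g., triggered from Airflow
```

A pipeline defined this way can then be started on a schedule from an Airflow DAG, which is where Amazon Managed Workflows for Apache Airflow fits into the orchestration described above.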

The Weather Company uses Amazon EC2 M5 instances for model inference: general-purpose instances powered by Intel Xeon® Platinum processors and built on the AWS Nitro System, a virtualization system based on custom AWS-designed chips that offloads networking, compute, and storage virtualization. The company continues to innovate with generative AI to improve the timeliness and localization of communications, messaging, and alerting of weather forecasts to consumers.
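As a concrete example of the on-demand batch inference step described above, the following sketch pulls a registered model from the SageMaker Model Registry and runs a batch transform job on ml.m5 instances. The model package ARN and S3 paths are hypothetical placeholders rather than the company’s actual resources.

```python
# Sketch of on-demand batch inference with a SageMaker batch transform job
# running on M5 (general-purpose) instances.
import sagemaker
from sagemaker.model import ModelPackage

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical role

# Create a deployable model from an entry in the Model Registry.
model = ModelPackage(
    role=role,
    model_package_arn=(
        "arn:aws:sagemaker:us-east-1:123456789012:"
        "model-package/weather-model-group/1"  # hypothetical model package
    ),
    sagemaker_session=session,
)

# Configure and run batch inference over input data staged in S3.
transformer = model.transformer(
    instance_count=2,
    instance_type="ml.m5.xlarge",
    output_path="s3://example-bucket/inference-output/",  # hypothetical bucket
)
transformer.transform(
    data="s3://example-bucket/inference-input/",  # hypothetical input prefix
    content_type="text/csv",
    split_type="Line",
)
transformer.wait()  # block until the transform job completes
```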

LLMs in Amazon Bedrock, including the Claude family of models, are instrumental in augmenting the capabilities of meteorologists when creating regional forecasts; of teams when creating content across marketing, customer service, and development; and of data scientists when explaining model predictions within the Weather Engine™, an enterprise-level offering that scales weather intelligence and insights by applying large-scale data analytics, machine learning, and AI-based weather forecasting to datasets.
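For instance, turning structured forecast data into a plain-language, consumer-facing summary might look like the following sketch using the Amazon Bedrock Converse API. The model choice, prompt, and forecast values are hypothetical placeholders, not The Weather Company’s actual prompts or data.

```python
# Sketch: summarize structured forecast data with an LLM in Amazon Bedrock.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Hypothetical structured forecast data to be summarized.
forecast = {
    "location": "Atlanta, GA",
    "high_f": 47,
    "low_f": 33,
    "precip_chance": 0.7,
    "conditions": "rain changing to snow after midnight",
}

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # hypothetical model choice
    messages=[{
        "role": "user",
        "content": [{
            "text": "Summarize this forecast for a consumer app in two sentences:\n"
                    + json.dumps(forecast)
        }],
    }],
    inferenceConfig={"maxTokens": 200, "temperature": 0.2},
)

# Print the generated plain-language summary.
print(response["output"]["message"]["content"][0]["text"])
```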

Outcome | Better Productivity

The Weather Company experienced a 90% reduction in infrastructure management time as a result of migrating from Kubeflow Pipelines to Amazon SageMaker Pipelines, along with a 20% improvement in model deployment time. The solution empowered the company’s data scientists and ML engineers to direct their efforts toward model development instead of managing infrastructure, and it enables them to deliver trained models to business clients, for example through future advancements to products for aviation clients and through the Weather Engine™ enterprise offering.

“The solutions produce actionable insights for the entire organization, which can help in lowering costs, increasing revenue, and ultimately driving growth, within a quick turnaround time. By combining these pipelines with the power of Bedrock, The Weather Company is poised to create additional GenAI products and features quickly and easily.”

Dan Margulies, The Weather Company CIO

In a time when weather patterns are increasingly turbulent and impactful, The Weather Company relies on generative AI to help deliver accurate, meaningful forecasts to consumers around the globe when they need them.