AWS Cloud Financial Management
Simplify Departmental Cost Allocation with AWS Organizations and Lambda
In this blog, we’ll explore a straightforward, automated approach to departmental cost allocation using AWS Organizations and AWS Lambda. The solution eliminates manual tracking and gives you clear visibility into departmental spending.
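The heart of any such solution is the allocation step itself. As a minimal sketch (the tag key `Department` and the row shape are illustrative assumptions, not details from the post), a Lambda function could aggregate per-department totals from cost rows fetched via the Cost Explorer `GetCostAndUsage` API grouped by a cost-allocation tag:

```python
from collections import defaultdict

# Illustrative allocation step: in a real deployment, a Lambda function would
# fetch these rows from the Cost Explorer GetCostAndUsage API, grouped by a
# cost-allocation tag. The tag key "Department" is an assumption for this sketch.
def allocate_by_department(cost_rows):
    """Aggregate (department, cost) rows into per-department totals."""
    totals = defaultdict(float)
    for department, cost in cost_rows:
        # Resources missing the tag fall into an "untagged" bucket for follow-up
        totals[department or "untagged"] += cost
    return dict(totals)

rows = [("engineering", 1200.50), ("marketing", 310.00),
        ("engineering", 89.50), (None, 45.00)]
print(allocate_by_department(rows))
# {'engineering': 1290.0, 'marketing': 310.0, 'untagged': 45.0}
```

The "untagged" bucket is worth surfacing explicitly: it is usually the fastest way to find resources that are escaping your tagging policy.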
AWS Price List Gets a Natural Language Upgrade: Introducing the AWS Pricing MCP Server
We are excited to release the aws-pricing-mcp-server, an open-source tool in the AWS Labs GitHub repository that brings natural language pricing queries to your favorite AI assistants through the Model Context Protocol (MCP). Now you can simply ask “What would it cost to run three m5.large instances and a MySQL RDS database in us-west-2?” and get instant pricing answers without leaving your workflow.
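MCP-compatible assistants pick up servers like this from their client configuration. A sketch of such an entry is below; the exact package name, command, and environment variables follow the usual AWS Labs MCP convention and should be verified against the repository README before use:

```json
{
  "mcpServers": {
    "awslabs.aws-pricing-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.aws-pricing-mcp-server@latest"],
      "env": { "AWS_REGION": "us-west-2" }
    }
  }
}
```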
Automating Budget Management Across Multi-Account Environments
Managing AWS spending across multiple accounts demands a sophisticated approach to budget control and monitoring. Our custom solution enables centralized budget management with automated email notifications, allowing organizations to set and enforce account-specific budgets from a central management account. This automated system tracks spending across individual accounts and delivers timely alerts when accounts approach or exceed their allocated budgets. The central management account serves as a single source of truth, where finance teams can configure unique budget thresholds for each account and receive notifications about spending patterns across the entire organization.
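The core alerting decision can be sketched in a few lines. The 80%/100% thresholds below are illustrative assumptions; in the actual solution, per-account budgets live in the central management account and alerts are delivered by email (e.g., via Amazon SNS):

```python
# Minimal sketch of the per-account alerting decision, assuming thresholds of
# 80% (warning) and 100% (exceeded). In the real solution, budgets come from
# the central management account and alerts are emailed via Amazon SNS.
def budget_status(actual_spend, budget, warn_ratio=0.8):
    """Classify one account's spend against its allocated budget."""
    if actual_spend >= budget:
        return "EXCEEDED"
    if actual_spend >= warn_ratio * budget:
        return "APPROACHING"
    return "OK"

accounts = {"111111111111": (950.0, 1000.0),   # (actual spend, budget)
            "222222222222": (400.0, 1000.0),
            "333333333333": (1200.0, 1000.0)}
for account_id, (actual, budget) in accounts.items():
    print(account_id, budget_status(actual, budget))
```

Keeping the classification pure like this makes it easy to unit test before wiring it to live Cost Explorer data and a notification topic.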
How to Set Up Automated Alerts for Newly Purchased AWS Savings Plans
As organizations expand, FinOps teams require a comprehensive overview of AWS Savings Plans commitments to maximize utilization efficiency. This solution involves implementing monitoring systems and automated alerts to identify underutilized Savings Plans within the eligible return period. In this blog post, we provide AWS CloudFormation templates that create an AWS Step Functions state machine, an Amazon Simple Notification Service (SNS) topic, an Amazon EventBridge scheduler, and the necessary AWS Identity and Access Management (IAM) roles to automate the monitoring of newly purchased Savings Plans and highlight those that are underutilized.
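The flagging logic at the center of that workflow can be sketched as follows. The 7-day return window and 80% utilization floor are assumed parameters for illustration; the actual Step Functions workflow would pull real utilization figures from Cost Explorer and publish flagged plans to the SNS topic:

```python
from datetime import date, timedelta

# Sketch of the flagging step, with assumed parameters: a 7-day return window
# and an 80% utilization floor. The real workflow would read utilization from
# Cost Explorer and publish flagged plan IDs to an SNS topic.
def flag_returnable_underutilized(plans, today, window_days=7, min_util=0.80):
    """Return IDs of plans still inside the return window whose utilization
    is below the floor."""
    flagged = []
    for plan_id, purchase_date, utilization in plans:
        within_window = (today - purchase_date) <= timedelta(days=window_days)
        if within_window and utilization < min_util:
            flagged.append(plan_id)
    return flagged

plans = [("sp-001", date(2025, 6, 1), 0.55),   # new and underutilized -> flag
         ("sp-002", date(2025, 6, 2), 0.95),   # new but well utilized
         ("sp-003", date(2025, 5, 1), 0.40)]   # underutilized, window passed
print(flag_returnable_underutilized(plans, today=date(2025, 6, 5)))
# ['sp-001']
```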
Navigating GPU Challenges: Cost Optimizing AI Workloads on AWS
Navigating GPU resource constraints requires a multi-faceted approach spanning procurement strategies, leveraging AWS AI accelerators, exploring alternative compute options, utilizing managed services like SageMaker, and implementing best practices for GPU sharing, containerization, monitoring, and cost governance. By adopting these techniques holistically, organizations can efficiently and cost-effectively execute AI, ML, and GenAI workloads on AWS, even amidst GPU scarcity. Importantly, these optimization strategies will remain valuable long after GPU supply chains recover, as they establish foundational practices for sustainable AI infrastructure that maximizes performance while controlling costs—an enduring priority for organizations scaling their AI initiatives into the future.
Optimizing cost for deploying Amazon Q
Building on our previous discussions about AWS generative AI cost optimization, the fourth blog of the five-part blog series focuses on maximizing value from Amazon Q, AWS’s generative AI-powered assistant. While our earlier posts covered custom model development with Amazon EC2 and SageMaker AI and foundation models with Amazon Bedrock, today we’ll explore strategies to optimize costs when implementing Amazon Q. From selecting the right pricing tier and implementing strategic user management to optimizing content indexing and improving cost predictability, we’ll share practical approaches that help you balance functionality with cost efficiency. Whether you’re using Amazon Q Business for your generative AI–powered assistant or Amazon Q Developer to enhance developer productivity, these best practices will help you make informed decisions about your Q implementation.
Optimize Your AWS Spend with New Cost Savings Features in AWS Trusted Advisor
In response to customer requests for a more consistent cost savings experience and broader set of recommendations, AWS Trusted Advisor is expanding its capabilities. We’re excited to announce the integration of 16 new checks from AWS Cost Optimization Hub into Trusted Advisor. This significant update provides more actionable insights to help you optimize your AWS spend.
Quick MoM Cost Analysis with Cost Comparison in AWS Cost Explorer
As organizations scale cloud usage, understanding cost variations becomes increasingly complex. Many of you have told us that you sometimes spend hours analyzing why costs changed from one month to the next. To address this, we’re excited to announce a new cost comparison feature in AWS Cost Explorer that provides automated month-over-month cost change analysis. This feature enables you to quickly identify, understand, and explain variations in your AWS spending. With this new feature, you can pinpoint the largest cost changes across any cost dimension, such as services, accounts, and regions, and drill down within seconds into detailed explanations of these changes, including shifts in usage patterns, changes in commitment-based discounts, and applied credits.
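The arithmetic behind a month-over-month comparison is simple to sketch. The service names and dollar amounts below are made up; the Cost Explorer feature computes and explains these deltas for you, and a script could fetch the underlying per-service totals with the `GetCostAndUsage` API:

```python
# Sketch of the underlying month-over-month arithmetic with made-up numbers.
# The Cost Explorer feature computes and explains these deltas automatically;
# a script could fetch the per-service totals via GetCostAndUsage.
def month_over_month_changes(prev, curr):
    """Return (dimension, delta) pairs sorted by largest absolute change."""
    keys = set(prev) | set(curr)
    deltas = {k: curr.get(k, 0.0) - prev.get(k, 0.0) for k in keys}
    return sorted(deltas.items(), key=lambda kv: abs(kv[1]), reverse=True)

april = {"AmazonEC2": 5000.0, "AmazonS3": 800.0, "AWSLambda": 120.0}
may   = {"AmazonEC2": 6500.0, "AmazonS3": 790.0, "AWSLambda": 150.0}
print(month_over_month_changes(april, may))
# [('AmazonEC2', 1500.0), ('AWSLambda', 30.0), ('AmazonS3', -10.0)]
```

Sorting by absolute delta surfaces the biggest movers first, whether they are increases or decreases, which mirrors how you would triage a surprising bill.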
The authenticated AWS Pricing Calculator is now generally available
Today, we’re excited to announce the general availability of the authenticated AWS Pricing Calculator in the AWS Billing and Cost Management Console. The new capability improves the accuracy of cost estimates for new workloads or modifications to your existing AWS usage by incorporating eligible discounts and commitment savings. You can now easily model cost changes for things such as migrating workloads between regions, modifying existing or planning new workloads, and planning for commitment purchases.
AWS Compute Optimizer now supports Aurora I/O-Optimized Recommendations
Starting today, AWS Compute Optimizer delivers new recommendations for your Amazon Aurora DB clusters. Compute Optimizer analyzes the cost of your clusters and identifies opportunities to adopt the Aurora I/O-Optimized cluster storage configuration to reduce costs and improve price predictability for your most I/O-intensive workloads.
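The trade-off Compute Optimizer evaluates can be sketched with a back-of-the-envelope comparison: Standard clusters bill per I/O request, while I/O-Optimized removes I/O charges but prices compute and storage higher. The uplift factors and dollar amounts below are illustrative assumptions, not actual Aurora rates (which vary by instance class, storage, and Region); Compute Optimizer runs this analysis against your real bill:

```python
# Illustrative breakeven check with assumed uplift factors -- NOT real Aurora
# prices. Standard clusters pay per I/O request; I/O-Optimized drops I/O
# charges but prices compute and storage higher.
def io_optimized_estimate(compute, storage, io,
                          compute_uplift=1.30, storage_uplift=2.25):
    """Return (standard_total, io_optimized_total) monthly cost estimates."""
    standard = compute + storage + io
    io_optimized = compute * compute_uplift + storage * storage_uplift
    return standard, io_optimized

# An I/O-heavy cluster: I/O is ~44% of the Standard bill
standard, optimized = io_optimized_estimate(compute=1000.0, storage=100.0, io=850.0)
print(f"Standard: ${standard:.2f}  I/O-Optimized: ${optimized:.2f}")
# Standard: $1950.00  I/O-Optimized: $1525.00
```

The intuition matches AWS's general guidance that I/O-Optimized tends to pay off when I/O charges make up a large share (roughly a quarter or more) of total Aurora spend.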