AWS Storage Blog
Prime Video improved stream analytics performance with Amazon S3 Express One Zone
Amazon Prime Video provides a selection of original content and licensed movies and TV shows that you can stream or download as part of the Amazon Prime subscription. Prime Video’s telemetry platform serves as the backbone for monitoring playback performance, saving data snapshots for failure recovery, providing business analytics, and generating real-time insights across its […]
Reduce costs with customized delete protection for Amazon EBS Snapshots and EBS-backed AMIs
Safeguarding business-critical cloud resources against accidental loss and external threats such as ransomware is a top priority for modern organizations. Privacy-enhancing technologies, malware scanning, and protection against accidental deletion form key pillars of a strong data security posture. This combination helps ensure that data remains secure, protected from […]
Streamlining access to tabular datasets stored in Amazon S3 Tables with DuckDB
As businesses continue to rely on data-driven decision-making, there’s an increasing demand for tools that streamline and accelerate the process of data analysis. Efficiency and simplicity in application architecture can serve as a competitive edge when driving high-stakes decisions. Developers are seeking lightweight, flexible tools that seamlessly integrate with their existing application stack, specifically solutions […]
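The kind of lightweight access the excerpt describes can be sketched as a few DuckDB statements. The sketch below, in Python, only composes those statements as strings; the table-bucket ARN, namespace, and table name are hypothetical placeholders, and in a real session each string would be executed with `duckdb.sql()` against a DuckDB build whose Iceberg extension supports S3 Tables:

```python
# Sketch of DuckDB statements for querying Amazon S3 Tables.
# The table-bucket ARN, namespace, and table name are hypothetical.
table_bucket_arn = "arn:aws:s3tables:us-east-1:111122223333:bucket/analytics-bucket"

setup = [
    "INSTALL iceberg;",  # DuckDB's Iceberg extension
    "LOAD iceberg;",
    # Pick up AWS credentials from the default provider chain
    "CREATE SECRET (TYPE s3, PROVIDER credential_chain);",
]

# Attach the table bucket as an Iceberg catalog, then query a table in it
attach = (
    f"ATTACH '{table_bucket_arn}' AS s3_tables "
    "(TYPE iceberg, ENDPOINT_TYPE s3_tables);"
)
query = "SELECT COUNT(*) FROM s3_tables.my_namespace.my_table;"

# In a real session:
#   import duckdb
#   for stmt in setup + [attach]:
#       duckdb.sql(stmt)
#   duckdb.sql(query).show()
```

The appeal for the use case above is that there is no cluster to provision: a single embedded process attaches the table bucket as a catalog and queries Iceberg tables directly.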
Seamless streaming to Amazon S3 Tables with StreamNative Ursa Engine
Organizations are modernizing data platforms to use generative AI by centralizing data from various sources and streaming real-time data into data lakes. A strong data foundation, such as scalable storage, reliable ingestion pipelines, and interoperable formats, is critical for businesses to discover, explore, and consume data. As organizations modernize their platforms, they often turn to […]
Connect Snowflake to S3 Tables using the SageMaker Lakehouse Iceberg REST endpoint
Organizations today seek data analytics solutions that provide maximum flexibility and accessibility. Customers need their data to be readily available to their preferred query engines and want to break down barriers across different computing environments. At the same time, they want a single copy of data to be used across these solutions, to track lineage, be cost […]
Build a managed Apache Iceberg data lake using Starburst and Amazon S3 Tables
Managing large-scale data analytics across diverse data sources has long been a challenge for enterprises. Data teams often struggle with complex data lake configurations, performance bottlenecks, and the need to maintain consistent data governance while enabling broad access to analytics capabilities. Today, Starburst announces a powerful solution to these challenges by extending their Apache Iceberg […]
Build a data lake for streaming data with Amazon S3 Tables and Amazon Data Firehose
Businesses are increasingly adopting real-time data processing to stay ahead of user expectations and market changes. Industries such as retail, finance, manufacturing, and smart cities are using streaming data for everything from optimizing supply chains to detecting fraud and improving urban planning. The ability to use data as it is generated has become a critical […]
Access data in Amazon S3 Tables using PyIceberg through the AWS Glue Iceberg REST endpoint
Modern data lakes integrate with multiple engines to meet a wide range of analytics needs, from SQL querying to stream processing. A key enabler of this approach is the adoption of Apache Iceberg as the open table format for building transactional data lakes. However, as the Iceberg ecosystem expands, the growing variety of engines and languages has […]
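As a concrete illustration of the pattern this post's title names, a minimal PyIceberg catalog configuration for the AWS Glue Iceberg REST endpoint might look as follows. The Region, account ID, and table-bucket name are placeholders, and the `load_catalog` call is left in a comment because it requires AWS credentials; treat this as a sketch, not the post's exact setup:

```python
# Hypothetical PyIceberg configuration for the AWS Glue Iceberg REST endpoint.
# Region, account ID, and table-bucket name below are placeholders.
region = "us-east-1"
account_id = "111122223333"
table_bucket = "analytics-bucket"

catalog_props = {
    "type": "rest",
    # Glue's Iceberg REST endpoint for the Region
    "uri": f"https://glue.{region}.amazonaws.com/iceberg",
    # S3 Tables surface through Glue as <account>:s3tablescatalog/<bucket>
    "warehouse": f"{account_id}:s3tablescatalog/{table_bucket}",
    # Requests must be SigV4-signed with the 'glue' service name
    "rest.sigv4-enabled": "true",
    "rest.signing-name": "glue",
    "rest.signing-region": region,
}

# With AWS credentials available, the catalog would be loaded like:
#   from pyiceberg.catalog import load_catalog
#   catalog = load_catalog("s3tables", **catalog_props)
#   table = catalog.load_table("my_namespace.my_table")
```

Because the endpoint speaks the standard Iceberg REST protocol, the same properties work for any REST-capable client, not just PyIceberg.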
Enhancing resource-level permission for creating an Amazon EBS volume from a snapshot
Businesses use Amazon Elastic Block Store (Amazon EBS) snapshots to capture point-in-time copies of application data volumes that can serve as baseline standards when creating new volumes. This enables them to quickly launch application workloads in different AWS Regions or meet data protection and disaster recovery requirements. Security and regulatory compliance remain top priorities as […]
Optimizing data transfers for high throughput life science instruments using AWS DataSync
Healthcare and life sciences (HCLS) customers are generating more data than ever as they integrate the use of omics data with applications in drug discovery, clinical development, molecular diagnostics, and population health. The rate and volume of data that HCLS laboratories generate are a reflection of their lab instrumentation and day-to-day lab operations. Efficiently moving […]