AWS Partner Network (APN) Blog

Category: Amazon Simple Storage Service (S3)


Change Data Capture from On-Premises SQL Server to Amazon Redshift Target

Change Data Capture (CDC) is the technique of systematically tracking incremental changes in data at the source and subsequently applying those changes at the target to keep the two in sync. You can implement CDC in diverse scenarios using a variety of tools and technologies. Here, Cognizant uses a hypothetical retailer with a customer loyalty program to demonstrate how CDC can synchronize incremental changes in customer activity with the main body of data already stored about a customer.
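
As a minimal sketch of the pattern (not Cognizant's actual pipeline), the loop below polls a SQL Server CDC change table and applies each change to Amazon Redshift; the capture instance, table, and connection details are hypothetical placeholders.

```python
# Minimal CDC polling sketch; every table, host, and credential is a placeholder.
import pyodbc    # on-premises SQL Server source
import psycopg2  # Amazon Redshift target (speaks the PostgreSQL wire protocol)

src = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                     "SERVER=onprem-sql;DATABASE=loyalty;UID=etl;PWD=********")
tgt = psycopg2.connect(host="my-cluster.example.us-east-1.redshift.amazonaws.com",
                       port=5439, dbname="dw", user="etl", password="********")

# Fetch all changes captured for the hypothetical dbo.customer_activity table.
# __$operation codes: 1 = delete, 2 = insert, 3/4 = before/after image of an update.
rows = src.execute("""
    DECLARE @from binary(10) = sys.fn_cdc_get_min_lsn('dbo_customer_activity');
    DECLARE @to   binary(10) = sys.fn_cdc_get_max_lsn();
    SELECT __$operation, customer_id, points
    FROM cdc.fn_cdc_get_all_changes_dbo_customer_activity(@from, @to, 'all');
""").fetchall()

with tgt, tgt.cursor() as cur:
    for op, customer_id, points in rows:
        if op in (1, 2, 4):  # remove any old row for deletes, inserts, and updates
            cur.execute("DELETE FROM customer_activity WHERE customer_id = %s",
                        (customer_id,))
        if op in (2, 4):     # re-insert the current image for inserts and updates
            cur.execute("INSERT INTO customer_activity (customer_id, points) "
                        "VALUES (%s, %s)", (customer_id, points))
```

A production pipeline would checkpoint the last LSN it applied and bulk-load batches through Amazon S3 and the Redshift COPY command rather than issuing row-by-row DML.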


How to Use AWS Glue to Prepare and Load Amazon S3 Data for Analysis by Teradata Vantage

Customers want to use Teradata Vantage to analyze the data they have stored in Amazon S3, but AWS Glue, the AWS service that prepares and loads S3 data for analytics, does not natively support Teradata Vantage. To use AWS Glue to prepare and load data for analysis by Teradata Vantage, you need to rely on AWS Glue custom database connectors. Follow step-by-step instructions and learn how to set up Vantage and AWS Glue to perform Teradata-level analytics on the data you have stored in Amazon S3.
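
The post walks through the connector setup step by step; as a hedged sketch of the final load, a Glue PySpark job can read from S3 and write to Vantage over JDBC along these lines (the bucket, host, table, and credentials are placeholders):

```python
# Sketch of an AWS Glue PySpark job that reads raw data from S3 and writes it to
# Teradata Vantage over JDBC. Assumes the Teradata JDBC driver jar is attached to
# the job; the bucket, host, table, and credentials are placeholders.
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue = GlueContext(SparkContext.getOrCreate())

# Read raw CSV files from S3 into a DynamicFrame, then convert to a Spark DataFrame
dyf = glue.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://my-analytics-bucket/raw/"]},
    format="csv",
    format_options={"withHeader": True},
)
df = dyf.toDF()

# Push the prepared data into Vantage through the Teradata JDBC driver
(df.write.format("jdbc")
    .option("url", "jdbc:teradata://vantage.example.com/DATABASE=analytics")
    .option("driver", "com.teradata.jdbc.TeraDriver")
    .option("dbtable", "s3_import")
    .option("user", "glue_user")
    .option("password", "********")
    .mode("append")
    .save())
```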


Protecting Your Amazon EBS Volumes at Scale with Clumio

Many AWS customers who use Amazon EBS to store persistent data need to back up that data, sometimes for long periods of time. Clumio’s SaaS solution protects Amazon EBS volumes from multiple AWS accounts through a single tag-based policy. Backups are stored securely outside of your AWS account in the Clumio service, itself built on AWS, where they are protected by end-to-end encryption and kept in an immutable format.
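
On the AWS side, tag-based selection only requires that your volumes carry the right tags; here is a minimal boto3 sketch, where the tag key and value are illustrative rather than a Clumio-defined schema:

```python
# Tag in-use EBS volumes so a tag-based backup policy (such as one defined in
# Clumio) can select them. The tag key and value are illustrative only.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Find every attached volume; the filter is an example and can be narrowed further
volumes = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["in-use"]}]
)["Volumes"]

if volumes:
    ec2.create_tags(
        Resources=[v["VolumeId"] for v in volumes],
        Tags=[{"Key": "backup-policy", "Value": "gold"}],
    )
```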


Improving Dataset Query Time and Maintaining Flexibility with Amazon Athena and Amazon Redshift

Analyzing large datasets can be challenging, especially if you aren’t thinking about certain characteristics of the data and what you’re ultimately looking to achieve. There are a number of factors organizations need to consider in order to build systems that are flexible, affordable, and fast. Here, experts from CloudZero walk through how to use AWS services to analyze customer billing data and provide value to end users.
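
As one concrete piece of that approach, a partition-pruned Amazon Athena query over billing data can be launched with boto3; the database, table, columns, and bucket names below are assumptions, not CloudZero's schema:

```python
# Minimal sketch: run a partition-pruned Athena query over billing data.
# The database, table, columns, and result bucket are assumptions.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

resp = athena.start_query_execution(
    QueryString="""
        SELECT line_item_usage_account_id,
               SUM(line_item_unblended_cost) AS cost
        FROM billing.cur_data
        WHERE year = '2020' AND month = '06'  -- prune partitions to limit S3 scanned
        GROUP BY line_item_usage_account_id
    """,
    QueryExecutionContext={"Database": "billing"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print(resp["QueryExecutionId"])
```

Because Athena bills per byte scanned, partition pruning like this is the main lever for keeping queries both fast and affordable.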


Turning Data into a Key Enterprise Asset with a Governed Data Lake on AWS

Data and analytics success relies on providing analysts and data end users with quick, easy access to accurate, quality data. Enterprises need a high-performing, cost-efficient data architecture that supports demand for data access while providing the data governance and management capabilities required by IT. A governed data lake on AWS delivers that data management excellence, capturing quality data and making it available to analysts quickly and cost-effectively.


MongoDB Atlas Data Lake Lets Developers Create Value from Rich Modern Data 

With the proliferation of cost-effective storage options such as Amazon S3, there is little reason you can’t keep your data forever; the challenge is creating value from that much data in a timely and efficient way. MongoDB’s Atlas Data Lake enables developers to mine their data for insights with more storage options and the speed and agility of the AWS Cloud. It provides a serverless, parallelized compute platform that gives you a powerful and flexible way to analyze and explore your data on Amazon S3.
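
Because Atlas Data Lake speaks the standard MongoDB wire protocol, querying S3-backed data looks like ordinary driver code; in this minimal sketch the connection string, database, collection, and field names are all placeholders:

```python
# Query S3-backed data through an Atlas Data Lake endpoint with a standard
# MongoDB driver. The connection string, names, and fields are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:********@mydatalake.query.mongodb.net/")
events = client["datalake"]["clickstream"]  # virtual collection mapped to S3 files

# An ordinary aggregation; the Data Lake service parallelizes it over the S3 objects
pipeline = [
    {"$match": {"event": "purchase"}},
    {"$group": {"_id": "$productId", "orders": {"$sum": 1}}},
    {"$sort": {"orders": -1}},
    {"$limit": 10},
]
for doc in events.aggregate(pipeline):
    print(doc)
```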

How to Create a Continually Refreshed Amazon S3 Data Lake in Just One Day

Data management architectures have evolved drastically from the traditional data warehousing model to today’s more flexible systems that use pay-as-you-go cloud computing models for big data workloads. Learn how AWS services like Amazon EMR can be used with Bryte Systems to deploy an Amazon S3 data lake in one day. We’ll also detail how AWS and the BryteFlow solution can automate modern data architecture to significantly accelerate the delivery of business insights at scale.
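
For a sense of the building blocks involved (this is plain boto3, not the BryteFlow product itself), a transient EMR cluster that runs one Spark step against the S3 lake and then terminates can be launched like so; every name below is a placeholder:

```python
# Launch a transient EMR cluster that runs a Spark job against the S3 data lake
# and terminates when done. Cluster sizing and all names are placeholders.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

resp = emr.run_job_flow(
    Name="s3-datalake-refresh",
    ReleaseLabel="emr-5.30.0",
    Applications=[{"Name": "Spark"}],
    LogUri="s3://my-datalake-logs/emr/",
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,  # terminate after the steps finish
    },
    Steps=[{
        "Name": "refresh-lake",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://my-datalake-code/refresh.py"],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(resp["JobFlowId"])
```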


How to Reduce AWS Storage Costs for Splunk Deployments Using SmartStore

It can be overwhelming for organizations to keep pace with the amount of data being generated by machines every day. There’s a great deal of meaningful information that can be extracted from data, but companies need software vendors to develop tools that help. In this post, learn about Splunk SmartStore and how it helps customers reduce storage costs in a Splunk deployment on AWS. Many customers are using SmartStore to reduce the size of their Amazon EBS volumes and move data to Amazon S3.
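
At its core, SmartStore is enabled in indexes.conf by defining an S3-backed remote volume and pointing an index's remotePath at it; a minimal sketch, with the bucket name and endpoint as placeholders:

```ini
# indexes.conf sketch: define an S3-backed remote volume and point an index at it.
# The bucket name and endpoint are placeholders.
[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket/indexes
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

[main]
remotePath = volume:remote_store/$_index_name
```

Warm data then lives in S3, so the EBS volumes only need to hold hot buckets and a local cache of recently searched data.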

How Cloud Backup for Mainframes Cuts Costs with BMC AMI Cloud Data and AWS

Mainframe cold storage based on disks and tapes is typically expensive and rigid. BMC AMI Cloud Data improves the economics and flexibility of mainframe data protection by using AWS storage for archival, backup, and recovery of mainframe data. As a software-defined solution, it lets mainframe customers apply modern cloud technologies and economics to reduce data recovery risks and improve application availability by archiving, backing up, and recovering data directly from AWS.


Using Amazon CloudFront with Multi-Region Amazon S3 Origins

By leveraging services like Amazon S3 to host content, AWS Competency Partner Cloudar has a cost-effective way to build websites that are highly available. If content is stored in a single Amazon S3 bucket, all of it lives in a single AWS Region. To serve content from other Regions, you need to route requests to different Amazon S3 buckets. In this post, explore how to accomplish this by using Amazon CloudFront as a content delivery network and Lambda@Edge as a router.
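
As a hedged sketch of the routing piece, a Lambda@Edge origin-request handler can repoint each request at a regional S3 origin; the bucket domain names and country mapping below are examples, not Cloudar's exact code:

```python
# Sketch of a Lambda@Edge origin-request handler that routes each request to a
# regional S3 origin. The bucket domains and country mapping are examples only.
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]

    # CloudFront adds this header when it is whitelisted on the distribution
    country = headers.get("cloudfront-viewer-country", [{"value": ""}])[0]["value"]

    regional_origins = {  # example mapping; extend per deployment
        "DE": "my-site-eu-central-1.s3.eu-central-1.amazonaws.com",
        "FR": "my-site-eu-west-1.s3.eu-west-1.amazonaws.com",
    }
    domain = regional_origins.get(
        country, "my-site-us-east-1.s3.us-east-1.amazonaws.com")

    # Repoint the S3 origin and keep the Host header in sync with it
    request["origin"]["s3"]["domainName"] = domain
    headers["host"] = [{"key": "Host", "value": domain}]
    return request
```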