AWS Announces New Capabilities for Amazon Aurora and Amazon DynamoDB, Introduces Amazon Neptune Graph Database
New Multi-Master capability makes Amazon Aurora the first relational database service to scale out both reads and writes across multiple data centers; customers can sign up for the preview today
Amazon Aurora Serverless auto-scales database capacity for applications with infrequent or cyclical usage; customers can sign up for the preview today
With new Global Tables capability, Amazon DynamoDB becomes the first fully managed multi-master, multi-region database, offering fast local performance to globally dispersed users
Amazon Neptune, a new fast, reliable graph database, makes it easy for customers to build applications on highly connected datasets
SEATTLE--(BUSINESS WIRE)--Nov. 29, 2017-- Today at AWS re:Invent, Amazon Web Services Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), announced new database capabilities for Amazon Aurora and Amazon DynamoDB, and introduced Amazon Neptune, a new fully managed graph database service. Amazon Aurora now includes the ability to scale out database reads and writes across multiple data centers for even higher performance and availability. Amazon Aurora Serverless is a new deployment option that makes it easy and cost-effective to run applications with unpredictable or cyclical workloads by auto-scaling capacity with per-second billing. With Global Tables, Amazon DynamoDB is now the first fully managed database service that provides true multi-master, multi-region reads and writes, offering high performance and low latency for globally distributed applications and users. Amazon Neptune is AWS's new fast, reliable, and fully managed graph database service that makes it easy for developers to build and run applications that work with highly connected datasets. To get started with Amazon Aurora and Amazon DynamoDB, and to learn more about Amazon Neptune, visit: https://aws.amazon.com/products/databases.
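The Global Tables model described above joins existing per-region DynamoDB tables into a single multi-master table. A minimal sketch of the request, as it would be passed to the AWS SDK for Python (boto3) `create_global_table` call, is shown below; the table name and regions are hypothetical placeholders, and the tables are assumed to already exist with streams enabled in each region.

```python
# Sketch: joining per-region DynamoDB tables into one Global Table.
# "user-sessions" and the region list are illustrative placeholders.
request = {
    "GlobalTableName": "user-sessions",
    "ReplicationGroup": [
        {"RegionName": "us-east-1"},
        {"RegionName": "eu-west-1"},
        {"RegionName": "ap-northeast-1"},
    ],
}

# With boto3 installed and AWS credentials configured, the call would be:
#   import boto3
#   client = boto3.client("dynamodb", region_name="us-east-1")
#   client.create_global_table(**request)

print([r["RegionName"] for r in request["ReplicationGroup"]])
```

After the global table is created, an item written in any one of the listed regions is replicated to the others, which is what gives globally dispersed users fast local reads and writes.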
The days of the one-size-fits-all database are over. For many years, the relational database was the only option available to application developers. And, while relational databases are great for applications that log transactions and store up to terabytes of structured data, today's developers need a variety of databases to serve the needs of modern applications. These applications need to store petabytes of unstructured data, access it with sub-millisecond latency, process millions of requests per second, and scale to support millions of users all around the world. It's not only common for modern companies to use multiple database types across their various applications, but also to use multiple database types within a single application. Since introducing Amazon Relational Database Service (Amazon RDS) in 2009, AWS has expanded its database offerings to provide customers the right database for the right job. This includes the ability to run six relational database engines with Amazon RDS (including Amazon Aurora, a fully MySQL- and PostgreSQL-compatible database engine with durability and availability at least as strong as commercial-grade databases, at one-tenth the cost); a highly scalable and fully managed NoSQL database service with Amazon DynamoDB; and a fully managed in-memory data store and cache in Amazon ElastiCache. Now, with the introduction of Amazon Neptune, developers can extend their applications to work with highly connected data such as social feeds, recommendations, drug discovery, or fraud detection.
"Nobody provides a better, more varied selection of databases than AWS, and it's part of why hundreds of thousands of customers have embraced AWS database services, with hundreds more migrating every day," said Raju Gulabani, Vice President, Databases, Analytics, and Machine Learning, AWS. "These customers are moving to our built-for-the-cloud database services because they scale better, are more cost-effective, are well integrated with AWS's other services, provide customers relief (and freedom) from onerous old guard database providers, and free them from the constraints of a one-database-for-every-workload model. We will continue to listen to what customers tell us they want to solve, and relentlessly innovate and iterate on their behalf so they have the right tool for each job."
Amazon Aurora Multi-Master scales reads and writes across multiple data centers for applications with stringent performance and availability needs
Tens of thousands of customers are using Amazon Aurora because it delivers the performance and availability of the highest-grade commercial databases at a cost more commonly associated with open source, making it the fastest-growing service in AWS history. Amazon Aurora's scale-out architecture lets customers seamlessly add up to 15 low-latency read replicas across three Availability Zones (AZs), achieving millions of reads per second. With its new Multi-Master capability, Amazon Aurora now supports multiple write master nodes across multiple AZs. Amazon Aurora Multi-Master is designed to allow applications to transparently tolerate failures of any master--or even a service-level disruption in a single AZ--with zero application downtime and sub-second failovers. This means customers can scale out performance and minimize downtime for applications with the most demanding throughput and availability requirements. Amazon Aurora Multi-Master will add multi-region support for globally distributed database deployments in 2018.
Expedia.com is one of the world's largest full-service travel sites, helping millions of travelers per month easily plan and book travel. "Expedia's high-volume data needs were met easily with Amazon Aurora by scaling out while maintaining high performance," said Gurmit Singh Ghatore, Principal Database Engineer, Expedia. "Amazon Aurora Multi










