When Andy Jassy, then head of Amazon Web Services, announced Amazon Aurora in 2014, the pitch was bold but measured: Aurora would be a relational database built for the cloud. As such, it would provide access to cost-effective, fast, and scalable compute infrastructure.
In essence, he explained, Aurora would combine the cost-effectiveness and simplicity of MySQL with the speed and availability of high-end commercial databases, the kind companies typically relied on. In numbers: Aurora promised five times the throughput (e.g., the number of transactions, queries, and read/write operations) of MySQL at one-tenth the price of commercial database solutions, all while relieving customers of expensive management challenges around durability and availability.
AWS re:Invent 2014 | Announcing Amazon Aurora for RDS
Aurora launched a year later, in 2015. Notably, it decoupled compute from storage, a clear contrast to traditional database architectures where the two are intertwined. This basic innovation, along with automated backups, replication, and other improvements, enabled easy scaling of both compute and storage while meeting reliability requirements.
“Aurora’s design preserves the core transactional consistency strengths of relational databases. It innovates at the storage layer to create a database built for the cloud that can support modern workloads without sacrificing performance,” explained Werner Vogels, Amazon’s CTO, in 2019.
“To start tackling the limitations of relational databases, we reimagined the stack by deconstructing the system into its basic building blocks,” Vogels said. “We recognized that the caching and logging layers were ripe for innovation. We could move these layers into a purpose-built, scale-out, self-healing, multitenant, database-optimized storage service. When we began building the distributed storage system, Amazon Aurora was born.”
Within two years, Aurora became the fastest-growing service in AWS history. Tens of thousands of customers, including financial services companies, gaming companies, healthcare providers, educational institutions, and startups, turned to Aurora to help carry their workloads.
In the intervening years, Aurora has continued to keep pace with the needs of a changing digital landscape. Most recently, in 2024, Amazon announced Aurora DSQL, a major step forward: a serverless approach designed for global scale and improved adaptability to variable workloads.
Today, research from International Data Corporation (IDC) estimates that companies using Aurora see a three-year return on investment of 434 percent and a 42 percent reduction in operating costs compared to other database solutions.
But what lies behind these numbers? How did Aurora become so valuable to its users? To understand, it is useful to consider what came before.
A time for reinvention
In 2015, as cloud computing gained popularity, established companies began migrating workloads away from on-premises data centers to save money on capital investments and internal maintenance. At the same time, startups building mobile and web apps needed highly reliable databases that could scale in an instant. The theme was clear: compute and storage should be elastic and reliable. The reality was that, at the time, most databases simply had not adapted to these needs.
Amazon engineers recognized that the cloud could enable nearly unlimited networked storage and separately scalable compute.
This rigidity makes sense considering the origin of databases and the problems they were invented to solve. In the 1960s, databases saw one of their earliest uses: NASA engineers had to navigate a complex list of parts, components, and systems as they built spacecraft for lunar exploration. This need inspired the creation of the Information Management System, or IMS, a hierarchically structured solution that enabled engineers to find recurring information, such as the sizes or compatibilities of different parts and components. While IMS was a blessing at the time, it was also limited. Finding parts meant that engineers had to write batches of specially coded queries that would then move through a tree-like data structure, a relatively slow and specialized process.
In 1970, the relational database made its public debut when E. F. Codd introduced the concept. Relational databases organized data according to how it was related: customers and their purchases, for example, or students in a class. Relational databases meant faster searches, since data was stored in structured tables, and queries did not require deep coding knowledge. With programming languages such as SQL, relational databases became the dominant model for storing and retrieving structured data.
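The relational model's appeal is easy to see in miniature. Here is a small illustrative sketch (the table and column names are invented for the example) using Python's built-in SQLite module:

```python
import sqlite3

# In-memory database: data lives in structured tables related by keys.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE purchases (customer_id INTEGER REFERENCES customers(id),
                            item TEXT);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO purchases VALUES (1, 'keyboard'), (1, 'monitor'), (2, 'mouse');
""")

# A single declarative query replaces the hand-coded tree traversals of
# hierarchical systems like IMS: you state which relationship you want,
# not how to walk the data structure.
rows = conn.execute("""
    SELECT c.name, p.item
    FROM customers c JOIN purchases p ON p.customer_id = c.id
    ORDER BY c.name, p.item
""").fetchall()

print(rows)  # [('Ada', 'keyboard'), ('Ada', 'monitor'), ('Grace', 'mouse')]
```

The same join would work unchanged against MySQL or PostgreSQL; that declarative portability is what made the relational model dominant.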
In the 1990s, however, this approach began to show its limits. Companies that needed more compute capacity typically had to buy and physically install more on-site servers. They also needed specialists to manage new demands, such as surges in transaction workloads, as when growing numbers of customers added more and more items to virtual shopping carts. When AWS arrived in 2006, these legacy databases were the most brittle, least elastic component of a company’s IT stack.
The emergence of cloud computing promised a better way forward, with more flexibility and remotely managed solutions. Amazon engineers recognized that the cloud could enable nearly unlimited networked storage and separately scalable compute.
Amazon Relational Database Service (Amazon RDS) debuted in 2009 to help customers create, operate, and scale a MySQL database in the cloud. And although this service expanded to include Oracle, SQL Server, and PostgreSQL, as Jeff Barr noted in a 2014 blog post, these database engines were “designed to function in a constrained and somewhat simplistic hardware environment.”
AWS researchers challenged themselves to investigate these limitations and “quickly realized that they had a unique opportunity to create an efficient, integrated design that encompassed storage, network, compute, system software, and database software.”
“The central constraint in high throughput data processing has moved from compute and storage to the network,” wrote the authors of a SIGMOD 2017 paper describing Aurora’s architecture. Aurora’s designers approached this constraint via “a novel, service-oriented architecture,” one that offered meaningful advantages over traditional approaches. These include “building storage as an independent fault-tolerant and self-healing service across multiple data centers … protecting the database from performance variance and transient or permanent failures in either the network or storage tiers.”
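The SIGMOD 2017 paper describes a concrete instance of this fault-tolerant design: each database volume is replicated six ways across three data centers (Availability Zones), with a write quorum of 4/6 and a read quorum of 3/6. A minimal sketch of the quorum arithmetic behind those numbers (the helper function is our illustration, not AWS code):

```python
def quorum_ok(copies: int, write_quorum: int, read_quorum: int) -> bool:
    """Classic quorum conditions: every read quorum must overlap every
    write quorum (so reads see the latest write), and any two write
    quorums must overlap (so conflicting writes are detected)."""
    reads_see_writes = read_quorum + write_quorum > copies
    writes_overlap = 2 * write_quorum > copies
    return reads_see_writes and writes_overlap

# Aurora's scheme from the SIGMOD 2017 paper: 6 copies, write 4/6, read 3/6.
print(quorum_ok(copies=6, write_quorum=4, read_quorum=3))  # True

# Losing an entire data center removes at most 2 of the 6 copies,
# leaving 4: still enough to satisfy the 3-copy read quorum.
surviving = 6 - 2
print(surviving >= 3)  # True: reads stay available after an AZ failure
```

Note that a naive 3/6 write quorum would fail the overlap check, which is why writes require the larger 4/6 majority.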
The serverless era is now
In the years since its debut, Amazon engineers and researchers have ensured that Aurora has kept pace with customer needs. In 2018, Aurora Serverless introduced an on-demand, auto-scaling configuration that allowed customers to adjust compute capacity up and down based on their needs. Later versions optimized this process further with automatic scaling driven by customer demand. Customers no longer need to explicitly manage database capacity; they only need to specify minimum and maximum levels.
Achieving that kind of “resource elasticity at high efficiency levels” meant that Aurora Serverless had to tackle several challenges, wrote the authors of a VLDB 2024 paper. These include policies for how to define “heat” (i.e., the resource-usage features on which to base decision making) and how to determine whether remediation may be required. Aurora Serverless meets these challenges, the authors noted, by adapting and extending well-established ideas related to resource oversubscription and reactive control informed by recent measurements.
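To make the idea of reactive control informed by recent measurements concrete, here is a toy sketch (our illustration, not the actual Aurora Serverless policy): capacity is nudged up when recent utilization runs hot, nudged down when it runs cold, and always clamped to the customer's configured minimum and maximum.

```python
def next_capacity(current: float, recent_utilization: list[float],
                  min_cap: float, max_cap: float,
                  high: float = 0.7, low: float = 0.3,
                  step: float = 0.5) -> float:
    """Reactive scaling heuristic: grow capacity when recent utilization
    is hot, shrink it when cold, staying within [min_cap, max_cap].
    Thresholds and step size are invented for illustration."""
    avg = sum(recent_utilization) / len(recent_utilization)
    if avg > high:        # recent measurements say we're running hot
        current += step
    elif avg < low:       # running cold: release capacity to save cost
        current -= step
    return max(min_cap, min(max_cap, current))

# Hot workload: capacity steps up from 2.0 to 2.5 units.
print(next_capacity(2.0, [0.9, 0.8, 0.85], min_cap=0.5, max_cap=16.0))  # 2.5
# Idle workload: capacity shrinks, but never below the customer's minimum.
print(next_capacity(1.0, [0.1, 0.2], min_cap=0.5, max_cap=16.0))        # 0.5
```

The min/max clamp mirrors what the article describes: the customer specifies only the bounds, and the control loop operates freely between them.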
As of May 2025, all of Aurora’s offerings are serverless. Customers no longer need to choose a specific server type or size, or worry about the underlying hardware, operating system, patching, or backups; all of that is managed by AWS. “One of the things that we have tried to design from the start is a database where you do not have to worry about the internals,” said Marc Brooker, AWS vice president and distinguished engineer, at AWS re:Invent in 2024.
These are exactly the capabilities that Arizona State University needs, says John Rome, deputy chief information officer at ASU. Each fall, the university’s data usage surges when classes for its more than 73,000 students are in session across several campuses. Aurora lets ASU pay only for the compute and storage it uses, and helps it adapt on the fly.
We see Amazon Aurora Serverless as a next step in our cloud maturity.
John Rome, deputy chief information officer at ASU
“We see Amazon Aurora Serverless as a next step in our cloud maturity,” says Rome, “to help us improve agility while reducing costs on rarely used systems, optimizing our overall operations further.”
And what might the next steps in maturity look like for the now 10-year-old Aurora service? The authors of the VLDB 2024 paper outlined several potential paths, including adding predictive elements to techniques such as live migration, exploiting the statistical multiplexing opportunities that arise from complementary resource needs, and applying sophisticated ML/RL-based techniques to workload prediction and decision making.