Golden Hammer Part 1 - Which AWS Database?

It's important to have a diverse toolkit to tackle each problem with the best solution possible.


Have you ever heard of the "Golden Hammer"? No, it's not a tool made of gold (although that would be quite the showstopper in a handyman's toolkit). It's a term used in the software development world to describe a situation we've all probably experienced at some point. Imagine you've found the perfect tool that can solve all your problems, like a magic wand. And every time you come across a problem, you reach for this tool without even thinking twice. Sound familiar? Well, my friend, you've fallen prey to the "Golden Hammer" syndrome! In this blog, we'll take a closer look at this software development phenomenon, learn about its drawbacks and see why it's important to have a diverse toolkit to tackle each problem with the best solution possible. Let's get started!

Today's Golden Hammer: SQL in RDS

In AWS, when you are choosing a database for your data store, which one is right for the task at hand? Let's look at some of the prime suspects and their strengths and weaknesses, starting with the golden hammer itself...

🔍 Amazon Relational Database Service (RDS)

Managed relational database service

Amazon Relational Database Service (RDS) is a managed service that makes it easy to set up, operate, and scale relational databases in the cloud. You would choose Amazon RDS over other options if you need:

  • A fully managed relational database solution: RDS takes care of routine database management tasks, such as backups, software patching, and monitoring, so you don't have to.

  • Support for multiple database engines: RDS supports popular relational database engines, such as Amazon Aurora, MySQL, MariaDB, PostgreSQL, Oracle, and Microsoft SQL Server, so you can choose the engine that best fits your needs.

  • Scalability and high availability: RDS makes it straightforward to scale compute and storage (instance class changes are a managed operation, though not instant), and provides high availability options, such as Multi-AZ deployments, to help keep your database available.

  • Cost-effectiveness: RDS provides a cost-effective solution for running relational databases in the cloud, with the ability to pay only for the resources you actually use, and with no upfront costs or long-term commitments.

  • Integration with other AWS services: RDS integrates with other AWS services, such as Amazon S3, Amazon EC2, and Amazon CloudWatch, making it easy to build and run sophisticated applications.

  • Easy migration: RDS makes it easy to migrate your existing relational databases to the cloud, with minimal downtime and little or no change to your application.

  • Security: RDS provides robust security features, such as network isolation, encryption at rest and in transit, and managed access control, to help keep your data safe.
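To make the points above concrete, here is a minimal sketch of provisioning a Multi-AZ PostgreSQL instance with boto3. Every identifier, size, and credential below is an illustrative placeholder, not a recommendation:

```python
# Minimal sketch: provisioning a Multi-AZ PostgreSQL instance with boto3.
# All names, sizes, and credentials below are illustrative placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="demo-postgres",   # hypothetical identifier
    Engine="postgres",
    DBInstanceClass="db.t3.micro",          # small general-purpose class
    AllocatedStorage=20,                    # in GiB
    MasterUsername="demo_admin",
    MasterUserPassword="change-me",         # use AWS Secrets Manager in practice
    MultiAZ=True,                           # standby replica in a second AZ
    StorageEncrypted=True,                  # encryption at rest
)
```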

Some of the downsides of using Amazon Relational Database Service (RDS) include:

  • Limited customization: RDS is a managed service, which means that you have limited control over the underlying infrastructure. This can limit your ability to customize the database and access low-level configuration options.

  • Performance limitations: RDS performance is bounded by the instance class and storage type you select, which can become a constraint for resource-intensive workloads.

  • Latency: RDS can have higher latency compared to running a database on dedicated infrastructure, especially for high-traffic applications with large amounts of data.

  • Cost: Although RDS is cost-effective, the cost of running a relational database on RDS can still add up, especially if you have a large amount of data or if you need to run multiple instances.

  • Limited scalability: Although RDS provides automatic scalability, it may not always scale as quickly as you need, especially during sudden spikes in traffic.

  • Complexity: RDS can be complex to set up and manage, especially for inexperienced users or for those who are unfamiliar with relational databases.

  • Compliance limitations: RDS may not meet every regulatory and compliance requirement, particularly in highly regulated industries such as healthcare and finance, or for applications with strict data privacy requirements.

🔍 Amazon Aurora

High-performance relational database

Amazon Aurora is a relational database service optimized for high performance, compatibility with MySQL and PostgreSQL, and scalability. It is designed for applications that require a low-latency, high-throughput database with fast read and write performance.

  • Compatibility with MySQL and PostgreSQL - If your application uses either of these database management systems, Aurora can provide a drop-in replacement with improved performance and scalability.

  • Low latency and high throughput - Aurora is designed to provide fast read and write performance, making it ideal for applications that require real-time data access.

  • Scalability - Aurora can automatically scale up or down based on the workload, providing a highly available and scalable database for growing applications. Storage can grow to a maximum of 128 tebibytes (TiB), which is frankly insane. Replica lag within a cluster is typically under 100 milliseconds, also insane.

  • Cost-effectiveness - Aurora can lower costs compared to running an equivalent database on-premises or in a virtual machine, as it reduces database administration work and provides automatic backups and software patching.
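For a feel of how Aurora separates the cluster (which owns the shared, auto-growing storage volume) from the instances that attach to it, here is a hedged boto3 sketch; all identifiers, credentials, and the instance class are assumptions:

```python
# Sketch: creating an Aurora MySQL-compatible cluster, then attaching a writer.
# Identifiers, credentials, and instance class are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# The cluster owns the shared, auto-growing storage volume.
rds.create_db_cluster(
    DBClusterIdentifier="demo-aurora",
    Engine="aurora-mysql",
    MasterUsername="demo_admin",
    MasterUserPassword="change-me",
)

# Instances (writers and readers) attach to the cluster.
rds.create_db_instance(
    DBInstanceIdentifier="demo-aurora-writer",
    DBClusterIdentifier="demo-aurora",
    Engine="aurora-mysql",
    DBInstanceClass="db.r6g.large",
)
```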

Tradeoffs

  1. Cost: More expensive compared to other cloud-based database options, especially for small or infrequent workloads.

  2. Complexity: Complex to set up and manage, and requires a deeper understanding of database administration and performance tuning.

  3. Compatibility: Differences in behaviour or feature availability compared to community MySQL and PostgreSQL can make migrating existing applications more challenging. Aurora does not support Microsoft SQL Server.

  4. Scalability: Requires careful planning and management to ensure seamless scalability as the workload increases. In my experience, I have found that Aurora, in performance testing, has slightly slower write speeds than RDS with a large SSD.

  5. Flexibility: May not be as flexible as some other cloud-based NoSQL databases for handling highly unstructured or hierarchical data.

🔍 Amazon DynamoDB

Fast and flexible NoSQL database

Amazon DynamoDB is best suited for real-time, high-performance, flexible NoSQL data storage for applications that need low operational overhead and global reach.

  • High performance: DynamoDB is designed to provide fast and predictable performance, with the ability to scale up and down as needed.

  • Serverless: DynamoDB is a fully managed service that does not require any server provisioning, maintenance, or administration, making it ideal for applications that require low operational overhead.

  • Global reach: DynamoDB supports global tables, which replicate your data across multiple AWS Regions for low-latency access close to your users.

Tradeoffs for DynamoDB:

The tradeoff for using Amazon DynamoDB is a limited querying capability compared to a traditional relational database. DynamoDB is optimized for fast key-value and document lookups, and while it does support some querying and filtering options, it is not as flexible or powerful as a SQL-based relational database. Additionally, DynamoDB charges for read and write capacity units, which can become costly for high-traffic applications, and requires developers to carefully design their data model and partition key strategy to ensure scalability and performance.
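A short sketch makes this tradeoff tangible: point reads and writes are trivial, but queries must follow the key design. The table name and key schema below are hypothetical:

```python
# Sketch: key-value access and a partition-scoped query in DynamoDB.
# Assumes an existing "Orders" table keyed on customer_id (partition) and order_id (sort).
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("Orders")

# Point writes and reads are fast and predictable.
table.put_item(Item={"customer_id": "c-42", "order_id": "o-1001", "total": 59})
item = table.get_item(Key={"customer_id": "c-42", "order_id": "o-1001"})["Item"]

# But queries must follow the key schema: you can only query within one partition.
orders = table.query(KeyConditionExpression=Key("customer_id").eq("c-42"))["Items"]
```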

🔍 Amazon Redshift

Data warehousing service

Amazon Redshift is optimized for large-scale, high-performance analysis of data. The main reasons to choose Redshift over other databases are:

  • Performance: Redshift is designed for fast querying and analysis of large datasets, making it ideal for complex data warehousing and business intelligence workloads.

  • Scalability: Redshift is highly scalable and can handle petabyte-scale datasets, making it suitable for big data analytics use cases.

  • Columnar storage: Redshift uses columnar storage, which is optimized for high-performance analytics, allowing you to analyze large amounts of data quickly and efficiently.

  • Advanced analytics: Redshift provides advanced analytics capabilities, including support for advanced data processing frameworks such as Apache Spark and support for SQL-based analytics.
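To illustrate the SQL-based analytics point, here is a hedged sketch using the Redshift Data API via boto3; the cluster, database, user, and table names are assumptions:

```python
# Sketch: an analytical query through the Redshift Data API (asynchronous).
# Cluster, database, user, and table names are placeholders.
import time
import boto3

client = boto3.client("redshift-data", region_name="us-east-1")

resp = client.execute_statement(
    ClusterIdentifier="demo-cluster",
    Database="analytics",
    DbUser="demo_user",
    Sql="SELECT region, SUM(revenue) FROM sales GROUP BY region;",
)

# The Data API is asynchronous: poll until the statement completes.
while client.describe_statement(Id=resp["Id"])["Status"] not in ("FINISHED", "FAILED", "ABORTED"):
    time.sleep(1)

result = client.get_statement_result(Id=resp["Id"])
```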

Amazon Redshift may not be a good fit for the following workloads:

Amazon Redshift is not optimized for real-time or very low-latency workloads; it is designed for large, complex data warehousing jobs. It may not perform as well as other databases for applications that require highly concurrent access to many small, randomly distributed data elements, and it is a poor fit for OLTP (Online Transactional Processing) applications, which typically require high rates of small, random I/O operations, because Redshift's design prioritizes sequential scans and bulk data retrieval.

  • Low-latency, OLTP-style applications: Redshift is optimized for high-performance analytics and may not provide the low-latency response times required for transactional workloads.

  • Small datasets: For small datasets, Redshift may not provide the cost-effectiveness of other database options, such as Amazon RDS or Amazon DynamoDB.

  • Dynamic, real-time data: Redshift is optimized for batch processing and may not provide the real-time data ingestion and processing capabilities required for some real-time data use cases.

  • Simple data access patterns: Redshift's columnar storage and advanced analytics capabilities may not be necessary for simple data access patterns and may result in increased complexity and cost.

  • High write-intensity workloads: Redshift is optimized for read-intensive workloads and may not provide the high write-throughput required for some high write-intensity workloads.

  • Cost sensitivity: While Redshift is cost-effective for large-scale data warehousing workloads, it may not be the most cost-effective option for smaller data warehousing or simple analytics use cases.

🔍 Amazon Athena

Amazon Athena is a serverless, interactive query service that lets you analyze data stored in Amazon S3 using standard SQL, with no infrastructure to set up, patch, or manage.

In comparison to Amazon RDS and Amazon Aurora, Athena is designed for data analysis, while RDS and Aurora are relational databases that support transactional workloads. If you need a relational database for transactional workloads, RDS and Aurora may be a better fit. If you need to analyze large amounts of data stored in S3 and you prefer using SQL for querying, Athena may be a good choice. Athena is also cost-effective compared to RDS and Aurora, as you only pay for the queries you run.
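Here is a minimal sketch of that pay-per-query model; the database, table, and results bucket are placeholders:

```python
# Sketch: an ad-hoc Athena query over data already sitting in S3.
# Database, table, and output bucket are placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

resp = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status;",
    QueryExecutionContext={"Database": "web_logs"},
    ResultConfiguration={"OutputLocation": "s3://demo-athena-results/"},
)

# Athena is asynchronous: poll the query state, then fetch the result rows.
qid = resp["QueryExecutionId"]
while athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"] in ("QUEUED", "RUNNING"):
    time.sleep(1)

rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
```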

Amazon Athena is used for a variety of big data use cases, including:

  1. Data lake analytics: Athena is ideal for querying and analyzing large amounts of data stored in S3 data lakes.

  2. Log analysis: Athena can be used to analyze logs generated by various AWS services, such as CloudFront, CloudTrail, and AWS Elastic Beanstalk.

  3. Business intelligence and data warehousing: Athena provides a way to query large amounts of structured and semi-structured data stored in S3, making it suitable for business intelligence and data warehousing.

  4. Ad-hoc analysis: Athena allows you to quickly query data stored in S3, making it an ideal solution for ad-hoc analysis.

  5. Data migration: Athena can be used as a source and target for data migration from various sources, such as RDBMS, NoSQL databases, and data warehouses, to S3.

Amazon Athena is not well-suited for certain use cases, including:

  1. Transactional workloads: Athena is designed for read-intensive analytics workloads and is not optimized for transactional workloads that require fast, low-latency writes. For these types of workloads, Amazon RDS or Amazon Aurora may be a better fit.

  2. Relational database management: Athena cannot enforce relationships between tables and does not support transactions or row-level updates the way a relational database does, making it unsuitable for use cases that require a traditional relational database management system.

  3. Streaming data analysis: Athena is designed to analyze batch data stored in S3 and is not optimized for analyzing streaming data in real time. For real-time data analysis, you may consider using Amazon Kinesis Data Analytics or Amazon Redshift.

  4. High-performance computing: Athena is designed for interactive, ad-hoc analytics, but it may not be the best choice for high-performance computing use cases that require a high degree of parallelism and fast processing times.

  5. Graphical user interfaces: Athena does not provide a graphical user interface for data visualization and exploration, so it may not be the best choice for use cases that require a user-friendly interface for data analysis. You will need to provide your own UI for graphing.

🔍 Amazon Keyspaces

Managed Apache Cassandra database service

Amazon Keyspaces is a managed Apache Cassandra database service designed for large-scale, globally distributed applications. It provides a fast and highly available database that can handle massive amounts of data with low latency.

Amazon Keyspaces (for Apache Cassandra) is best used as a highly scalable, low-latency, and highly available NoSQL data store for large amounts of unstructured and semi-structured data. It is well-suited to applications that need high read and write throughput, such as real-time analytics, mobile and gaming applications, IoT, and other user-facing applications. Keyspaces scales automatically and on demand, making it easy to handle sudden spikes in traffic, and provides robust data management and durability features, making it a fit for mission-critical applications. Additionally, Keyspaces is compatible with the Apache Cassandra CQL API, so many existing Cassandra applications, drivers, and tools work with little or no code change.
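Because Keyspaces speaks CQL, you can reach it with the standard open-source Cassandra driver. Below is a hedged sketch assuming service-specific credentials, TLS on port 9142, and the Starfield root certificate that AWS publishes; all names and the keyspace schema are placeholders:

```python
# Sketch: connecting to Amazon Keyspaces with the open-source Cassandra driver.
# Assumes TLS on port 9142, service-specific credentials, and the Starfield root cert.
import ssl
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ssl_context.load_verify_locations("sf-class2-root.crt")  # downloaded from AWS

auth = PlainTextAuthProvider(username="demo-user-at-123456789012",  # placeholder
                             password="service-specific-password")

cluster = Cluster(["cassandra.us-east-1.amazonaws.com"], port=9142,
                  ssl_context=ssl_context, auth_provider=auth)
session = cluster.connect()

rows = session.execute("SELECT * FROM demo_ks.events WHERE device_id = %s", ("sensor-1",))
```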

Some tradeoffs for AWS Keyspaces are:

  • Data Modeling: Keyspaces requires a different approach to data modelling compared to traditional relational databases, and can be challenging for some use cases that require complex data relationships.

  • Query Performance: The query performance of Keyspaces can be slower compared to other NoSQL databases, especially for complex queries or complex data structures.

  • Latency: Keyspaces latency can be higher than in-memory data stores such as Redis, especially for high-traffic applications with complex data access patterns.

🔍 Amazon Neptune

Graph database for connected data

Amazon Neptune is a fast, reliable, and fully managed graph database service. It is designed to make it easy to build and run applications that work with highly connected data. The main use cases for Amazon Neptune are:

  • Social Networking: To build and run social networking applications that manage relationships and connections between users, groups, and objects.

  • Fraud Detection: To detect and prevent fraudulent activities by analyzing relationships between entities, such as customers, transactions, and products.

  • Recommendation Engines: To build and run recommendation engines that suggest products, content, or other items to users based on their relationships and interactions.

  • Knowledge Graphs: To build and manage knowledge graphs to represent relationships between entities, such as people, places, and things, and to enable advanced search and discovery.

  • Network & IT Operations: To analyze and manage large-scale network and IT operations data, such as configuration information, performance metrics, and security logs.

  • Life Sciences: To manage and analyze relationships between proteins, genes, diseases, and drugs in life sciences research.

Neptune supports the Property Graph and W3C's Resource Description Framework (RDF) models, providing flexible and versatile data modelling options, and integrates with other AWS services, making it easy to build and run sophisticated applications.
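To give a flavour of the Property Graph side, here is a minimal Gremlin sketch using the open-source gremlinpython driver; the endpoint, labels, and property names are assumptions:

```python
# Sketch: a Gremlin traversal against Neptune over websockets (port 8182).
# Endpoint, vertex labels, and property names are placeholders.
from gremlin_python.driver import client

gremlin = client.Client(
    "wss://demo-neptune.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/gremlin",
    "g",
)

# Friends-of-friends: the kind of hop-based query graphs are built for.
names = gremlin.submit(
    "g.V().has('person','name','Alice').out('knows').out('knows').values('name')"
).all().result()

gremlin.close()
```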

Some tradeoffs for AWS Neptune are:

Amazon Neptune is a graph database designed for high-performance graph applications. It is not optimized for transactional workloads that demand very high write throughput, as its primary focus is on low-latency, high-throughput graph queries. Neptune does support ACID (Atomicity, Consistency, Isolation, Durability) transactions, but it may not be the best choice for applications that require complex data validation, schema enforcement, or relational-style data normalization, since those are the strengths of a relational database. If your application requires very high write performance or strict schema-level validation, a different database technology may be more suitable.

🔍 Amazon Timestream

Time-series database

Amazon Timestream is designed specifically for time-series data and provides several benefits over other databases, including:

  • Time-series optimized: Timestream is optimized for storing, querying, and analyzing time-series data, making it ideal for use cases where time-series data is a central component, such as IoT data, financial data, and application logs.

  • Scalability and performance: Timestream provides automatic scaling and high performance for time-series data, with fast query speeds and low latency.

  • Cost-effective: Timestream is cost-effective for storing and analyzing time-series data, especially compared to running a custom solution or using another database service.

  • Ease of use: Timestream provides a simple, easy-to-use interface that makes it easy to manage and analyze time-series data, even for users with limited technical expertise.
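Ingestion is a narrow, purpose-built API. Here is a hedged sketch of writing a single IoT reading; the database, table, and dimension names are assumptions:

```python
# Sketch: writing a single sensor measurement into Timestream.
# Database, table, and dimension names are placeholders.
import time
import boto3

ts = boto3.client("timestream-write", region_name="us-east-1")

ts.write_records(
    DatabaseName="iot_demo",
    TableName="sensor_readings",
    Records=[{
        "Dimensions": [{"Name": "device_id", "Value": "sensor-1"}],
        "MeasureName": "temperature",
        "MeasureValue": "21.5",
        "MeasureValueType": "DOUBLE",
        "Time": str(int(time.time() * 1000)),  # milliseconds since the epoch
    }],
)
```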

Amazon Timestream is optimized for time-series data, so it may not be the best choice for other types of workloads, such as:

  • Non-time-series data: If your data is not time-series data, such as relational data or document data, other database services may be better suited for your use case.

  • Complex data structures: If your data has complex relationships or structures, a relational database service like Amazon RDS may be a better choice.

  • High write loads: Timestream ingests data at high rates, but extremely write-heavy workloads, or very large volumes that must be written in real time, may push past what it handles cost-effectively.

  • Large and complex queries: Timestream is optimized for fast time-series queries, but may not be the best choice for complex queries with multiple joins or for data that requires complex data processing.

  • Cost: While Timestream is cost-effective for time-series data, it may not be the most cost-effective option for other types of data, especially if your workload requires a large amount of storage or if you have a low volume of time-series data.

🔍 Amazon DocumentDB

MongoDB-compatible document database

Amazon DocumentDB is a document database service that is compatible with MongoDB and provides a fast and flexible way to store, query, and analyze JSON-like documents. It is designed for applications that require high performance and scalability and provides automatic failover, continuous backup, and encryption at rest.

We will deviate slightly here and compare DocumentDB with another service, DynamoDB, to help explain the difference.

Speed

Both Amazon DocumentDB and DynamoDB are designed for high performance and low latency, but the specific performance characteristics can vary depending on the use case. DocumentDB has a document-oriented data model and supports secondary indexes, which can make it faster for complex queries and filtering operations. DynamoDB, on the other hand, is designed for highly scalable key-value and document data storage, and has built-in support for fast and predictable read and write performance.

Price

The cost of using either Amazon DocumentDB or DynamoDB depends on factors such as the amount of storage used, the number of requests, and the level of throughput required. In general, DynamoDB may be more cost-effective for simple key-value or document data storage, while DocumentDB may be more cost-effective for more complex data models that require secondary indexes.

Scalability

Both Amazon DocumentDB and DynamoDB are highly scalable, with the ability to automatically scale storage and throughput to meet the demands of your application. DynamoDB provides automatic and seamless scaling, while DocumentDB provides the ability to scale storage and compute capacity independently.

Functionality

Amazon DocumentDB is a document database that implements the MongoDB API, allowing for a flexible data model and supporting a wide range of use cases. DynamoDB is a key-value and document database that provides fast and predictable performance but has a more limited data model compared to DocumentDB.
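Because DocumentDB speaks the MongoDB wire protocol, the ordinary pymongo client works against it. A hedged sketch, with the endpoint, credentials, and CA bundle path as placeholders:

```python
# Sketch: connecting to DocumentDB with pymongo over TLS.
# Endpoint, credentials, and the CA bundle path are placeholders.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://demo_user:change-me@demo-docdb.cluster-abc123.us-east-1.docdb.amazonaws.com:27017/"
    "?tls=true&tlsCAFile=global-bundle.pem&replicaSet=rs0&readPreference=secondaryPreferred"
)

db = client["app"]
db.profiles.insert_one({"user_id": "u-1", "tags": ["gamer", "beta"], "score": 42})

# Secondary indexes enable richer filtering than a pure key-value store.
db.profiles.create_index("score")
top = db.profiles.find({"score": {"$gt": 10}}).sort("score", -1)
```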

Here is a MongoDB tutorial: https://www.w3schools.com/mongodb/

🔍 Amazon CloudSearch

Managed full-text search service

Amazon CloudSearch is designed to provide scalable search functionality to applications that have outgrown basic search capabilities and need more sophisticated search and analysis capabilities. Some of the primary use cases where Amazon CloudSearch excels compared to other databases include:

  1. Text search: CloudSearch provides advanced full-text search capabilities, including faceted search, geospatial search, and hit highlighting.

  2. Auto-complete and suggest functionality: CloudSearch allows you to easily add auto-complete and suggest functionality to your search applications.

  3. Customizable search relevance: With CloudSearch, you can fine-tune search relevance for your specific use case, improving the accuracy and usefulness of search results.

  4. Large-scale search: CloudSearch can handle large amounts of search data and can scale up to handle millions of queries per day, making it ideal for use cases with a high volume of search traffic.
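Querying a domain is a single call against its dedicated search endpoint. A minimal sketch with boto3; the endpoint URL is a placeholder for your own domain's:

```python
# Sketch: querying a CloudSearch domain with boto3.
# The search endpoint URL is a placeholder for your domain's endpoint.
import boto3

search = boto3.client(
    "cloudsearchdomain",
    endpoint_url="https://search-demo-abc123.us-east-1.cloudsearch.amazonaws.com",
)

resp = search.search(
    query="wireless headphones",
    queryParser="simple",
    size=10,
)

for hit in resp["hits"]["hit"]:
    print(hit["id"])
```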

Reasons to choose another data store:

  1. Complex data analysis: CloudSearch is designed primarily for search and text analytics, and may not be the best solution for complex data analysis tasks that require significant processing power.

  2. Real-time data processing: CloudSearch is optimized for search and analytics, and may not be well-suited for real-time data processing tasks that require low latency.

  3. Structured data storage: CloudSearch is designed for unstructured data and may not be the best solution for structured data storage tasks, such as those requiring complex relational data structures or transactions.

  4. Graph or NoSQL data: If your application requires graph or NoSQL data, CloudSearch may not be the best fit as it is primarily designed for text search and analysis.

  5. Cost: While CloudSearch is cost-effective, it can still be expensive for use cases with lower levels of search traffic. If your application requires only basic search functionality, there may be more cost-effective solutions available.

🔍 Elasticsearch and OpenSearch

Elasticsearch, and OpenSearch (its open-source fork led by AWS), are highly scalable search and analytics engines designed to help organizations search, analyze, and visualize large amounts of data in real time. AWS also recently started offering a serverless OpenSearch option, meaning no infrastructure management and simpler scaling.

Elasticsearch and OpenSearch are used in a variety of data patterns and use cases, including:

  1. Text search: Elasticsearch and OpenSearch are commonly used for full-text search, providing fast and accurate search results for unstructured data.

  2. Log analytics: Elasticsearch and OpenSearch are used to store and analyze log data, providing real-time insights into system and application performance.

  3. Business intelligence: Elasticsearch and OpenSearch are used for data analysis and reporting, allowing organizations to make data-driven decisions.

  4. Metrics and time-series data: Elasticsearch and OpenSearch are used for storing and analyzing time-series data, such as metrics, performance data, and sensor data.

  5. Geospatial data: Elasticsearch and OpenSearch are used for geospatial data analysis, allowing organizations to analyze and visualize data based on location.
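Here is a sketch of the document-in, search-out model using the opensearch-py client; the host and index names are placeholders, and authentication is omitted for brevity:

```python
# Sketch: indexing a log line and running a full-text match query with opensearch-py.
# Host and index names are placeholders; authentication is omitted for brevity.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "demo-domain.us-east-1.es.amazonaws.com", "port": 443}],
    use_ssl=True,
)

client.index(index="app-logs", body={"level": "ERROR", "msg": "timeout calling payments"})

hits = client.search(
    index="app-logs",
    body={"query": {"match": {"msg": "timeout"}}},
)["hits"]["hits"]
```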

While Elasticsearch is a powerful and versatile search and analytics engine, it may not be well-suited to some use cases:

  1. Relational data: Elasticsearch is designed for unstructured and semi-structured data, and may not be the best solution for storing and querying complex relational data.

  2. Transactions: Elasticsearch does not provide transactional guarantees, and may not be the best solution for applications that require strong consistency and atomicity.

  3. Graph data: Elasticsearch is not designed for graph data and may not be the best solution for use cases that require graph data processing and analysis.

  4. Real-time data processing: While Elasticsearch provides real-time search and analytics capabilities, it may not be well-suited for applications that require low latency and high throughput for real-time data processing.

  5. High write loads: While Elasticsearch can handle large amounts of data, it may not be the best solution for applications that require sustained high write loads, as it is designed primarily for read-intensive use cases. Above a certain ingest rate you will need to buffer writes, for example with a queue.

Conclusion

We all have a golden hammer when it comes to database choice. Whatever yours is, weigh the use cases and counter-use cases against your data type, your access patterns, and even the price.

One size does not fit all in a highly scalable world where time and money can mean the difference between success and failure for an organisation.

Technical debt is your worst enemy, and it can be almost impossible to escape the clutches of a bad data storage and access solution that was chosen as the marteau du jour (the hammer of the day).
