EXPERT ADVICE

The Scalable, Available Cloud

With the evolution and proliferation of cloud computing, and as more and more applications are migrated to the cloud, many organizations are considering moving their databases to the cloud as well.

Two main concerns on users’ minds are database scalability and availability. How scalable/elastic is a database in the cloud, and will the data be highly available?

Scalability in the Cloud

In terms of scalability, there are several things users need to think about when choosing the right solution, most importantly their priorities around capacity and throughput.

Scaling capacity is the most natural scaling scenario: the data has grown, and the disk or attached storage is simply too small to hold it. Scaling throughput, by contrast, is needed when application usage has grown and performance has declined. Throughput is influenced by latency and concurrency, meaning the user can either improve response time for every task (latency), perform tasks in parallel (concurrency), or use a combination of both.
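
As a rough rule of thumb (Little's law), throughput equals concurrency divided by latency. A minimal sketch in Python, with purely illustrative numbers:

    # Back-of-the-envelope throughput model (Little's law):
    # throughput = concurrency / latency. Numbers are made up.
    latency_s = 0.050      # 50 ms per query
    concurrency = 8        # queries served in parallel
    print(concurrency / latency_s)            # 160.0 queries/sec

    # Halving latency or doubling concurrency both double throughput:
    print(concurrency / (latency_s / 2))      # 320.0 queries/sec
    print((concurrency * 2) / latency_s)      # 320.0 queries/sec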

Scaling can take place offline or online. Offline scaling requires downtime: applications are stopped until the newly scaled database has been restored and synced with the backed-up data. Online scaling requires no downtime; applications may suffer some performance degradation during the process, but overall service is maintained.

Now for the nitty-gritty: What is the difference between scaling in and out, or up and down?

Scaling up and down is the most common way to scale. If a user needs more capacity and/or better performance, all he or she needs to do is buy a bigger, faster machine. The cloud makes this approach very easy from an operational standpoint: you simply spawn the next virtual machine class up, and voila. The advantages of this approach are that it is simple to deploy, it is supported by most databases in use today, and it requires no code changes. The disadvantages include the price of high-end hardware, limited headroom (you can never scale beyond the largest single machine the provider offers) and cloud provider restrictions. On top of that, scaling up and down is usually done offline, and you know what that means: downtime.
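
As a sketch of what that offline resize involves, here is a toy Python model. FakeCloud and its methods are hypothetical stand-ins for a real provider SDK, and the instance class names are just examples; the point is the stop/resize/start sequence and the downtime between the first and last steps:

    # FakeCloud is a toy stand-in for a provider SDK; it exists only to
    # illustrate the stop/resize/start sequence of an offline scale-up.
    class FakeCloud:
        def __init__(self):
            self.state, self.vm_class = "running", "m1.large"

        def stop_instance(self, instance_id):
            self.state = "stopped"              # downtime begins here

        def set_instance_class(self, instance_id, vm_class):
            assert self.state == "stopped", "resizing requires downtime"
            self.vm_class = vm_class

        def start_instance(self, instance_id):
            self.state = "running"              # downtime ends here

    def scale_up(cloud, instance_id, new_class):
        cloud.stop_instance(instance_id)
        cloud.set_instance_class(instance_id, new_class)
        cloud.start_instance(instance_id)

    cloud = FakeCloud()
    scale_up(cloud, "db-1", "m1.xlarge")        # bigger machine, same code
    print(cloud.vm_class, cloud.state)          # m1.xlarge running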

Scale Me Out, Scale Me In

Scaling out and in is a bit more complex to achieve, because it requires a database designed for scale-out. There are various ways to scale out, but the two we'll address here are read replicas and sharding.

Read replicas are the most common way to scale out. In this configuration, all writes are performed on a single master server and then propagated to the read replicas. Propagation can be synchronous, where the write blocks until it reaches every replica (ensuring data consistency), or asynchronous, where the write returns immediately at the cost of consistency, since a subsequent read may not see the most up-to-date data.
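
Here is a toy Python sketch of that trade-off. ReplicatedDB and its methods are invented for illustration; in practice replication is handled by the database engine, not application code:

    import random

    class ReplicatedDB:
        """Toy master/replica store, invented for illustration."""

        def __init__(self, n_replicas=2):
            self.master = {}
            self.replicas = [{} for _ in range(n_replicas)]

        def write(self, key, value, synchronous=True):
            self.master[key] = value
            if synchronous:
                # Block until every replica has the write: reads stay
                # consistent, but write latency grows with replica count.
                for replica in self.replicas:
                    replica[key] = value
            # Asynchronous: return immediately. A real engine would ship the
            # write in the background; this toy simply leaves replicas stale.

        def read(self, key):
            # Reads are spread across replicas; that is the scale-out win.
            return random.choice(self.replicas).get(key)

    db = ReplicatedDB()
    db.write("user:1", "alice", synchronous=True)
    print(db.read("user:1"))   # alice, guaranteed current
    db.write("user:1", "bob", synchronous=False)
    print(db.read("user:1"))   # still alice: the stale read the text warns about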

Sharding is another way to achieve scalability. It usually means splitting the data according to some logic derived from the application. One approach is to select a key in the data and distribute rows by hashing that key. Another is to identify the application's needs and place different tables or data sets in different databases (splitting North America sales data from EMEA sales data, for example).
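
Either style boils down to a routing function in front of the databases. A minimal Python sketch, with made-up shard and database names:

    import hashlib

    # Hash-based sharding: a stable hash of the shard key picks the database.
    SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2", "db-shard-3"]

    def shard_for(key):
        digest = hashlib.md5(key.encode()).hexdigest()
        return SHARDS[int(digest, 16) % len(SHARDS)]

    print(shard_for("customer:42"))   # the same key always routes the same way

    # Application-driven sharding: route by a business attribute instead.
    REGION_DBS = {"NA": "db-sales-na", "EMEA": "db-sales-emea"}
    print(REGION_DBS["EMEA"])         # db-sales-emea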

The Inherent Availability Problem

There’s been a fair share of high-profile cloud outages recently, such as the crash of Amazon’s EC2 data center in the U.S. last April, or the more recent outage at its Dublin data center this August.

While the downtime of an entire data center is a major (and less common) event, anybody who has ever worked in a cloud environment knows that day-to-day cloud operations are full of smaller availability “blips.” These are more contained than a complete data-center meltdown, yet just as harmful to each user’s business and service continuity.

Unlike in traditional on-premises data centers, high availability in the cloud isn’t just about hardware resiliency anymore. Cloud users can’t simply plug in an extra power supply or network card, or swap out a failed hard drive.

Availability in the cloud depends both on the availability of “more of the same” resources and on the ability to dynamically provision them across any configuration: within the same data center, across regions, across availability zones, and even across cloud providers.

In the cloud, there isn’t much that can be done to lower the chances of a machine failing. But if a machine does fail (or, more accurately, when it fails), you want to be able to seamlessly provision a new one on the fly and maintain service.
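
Conceptually, that is a monitor-and-replace loop. In the Python sketch below, is_healthy and provision_replacement are hypothetical hooks standing in for real health checks and provider APIs, and the node names are made up:

    import time

    # Monitor-and-replace loop: don't repair a failed node, replace it.
    def monitor(node, is_healthy, provision_replacement, checks=3, interval_s=1):
        for _ in range(checks):
            if not is_healthy(node):
                node = provision_replacement(node)  # provision on the fly
            time.sleep(interval_s)
        return node

    # Demo with stubs: "db-old" has failed, so it gets replaced seamlessly.
    health = {"db-old": False, "db-new": True}
    node = monitor("db-old",
                   is_healthy=lambda n: health[n],
                   provision_replacement=lambda n: "db-new")
    print(node)   # db-new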

Your Data and the Cloud

Since the database tier is the most sensitive and critical part of the application (and also the hardest to scale), many enterprises are opting for a cloud Database as a Service (DBaaS) solution — specifically, one that has been designed for the unique and dynamic cloud environment, such as Xeround or Salesforce’s Database.com.

In a DBaaS, the backend database tier is overseen by a management layer responsible for continuously monitoring and configuring the database to achieve optimized scaling, high availability, performance and effective resource allocation in the cloud. These solutions spare developers and IT managers much of the hassle of tedious, ongoing database management tasks and operations, which are handled automatically by the service itself.

Razi Sharir is CEO of Xeround.
