Just like you diversify income sources, diversify your cloud infrastructure providers!

Intro

We are always taught that we “should not put all our eggs in one basket” (🥚🥚🥚 + 🧺 = ☠) to avoid the risk of losing everything at once. This rule applies to many areas: skills, investing, income sources, etc… We always need to diversify.

Should we also apply this rule and diversify when it comes to cloud providers? Let’s find out.

Photo by Autumn Mott Rodeheaver on Unsplash

Story

Last December, on Christmas Day (disasters usually happen on the holidays for some reason 🧐 when everyone is off), one of our clients had their AWS account suspended. Because of the suspension, the cause of which we did not know at the time, their production servers, databases and storage completely stopped. Connections to the servers and databases were timing out; nothing could be reached.

Quick Background

They were using the compute service (EC2) for multiple load-balanced servers and a central caching server, the relational database service (RDS) as a central database serving all applications, and the storage service (S3) as a CDN plus an object store for everything else. Luckily the DNS was not managed by Route53, which gave some hope of restoring backups on another cloud until the issue was resolved…

More Digging…

We wanted to dig deeper into the AWS account suspension to see why it happened and whether it was possible to resolve it and get everything up and running quickly. While checking the account billing (since that’s the only thing you can do for a suspended AWS account), we noticed heavy usage of massively large Windows instances that had incurred tremendous charges we knew nothing about.

The server instances we saw on the bill were the most powerful ones to date (Windows running on p3dn.24xlarge), which had actually just been unveiled by Amazon that same month:

“p3dn.24xlarge has 2.5 GHz (base) and 3.1 GHz (sustained all-core turbo) Intel Xeon P-8175M processors and supports Intel AVX-512.”

Amazon states the following use cases for these machines:

“Machine/Deep learning, high performance computing, computational fluid dynamics, computational finance, seismic analysis, speech recognition, autonomous vehicles, drug discovery.”

The mentioned instances ran for a couple of days on the client’s AWS account before the suspension. What the client knew for sure was that neither they nor anyone with authorized access to the account had launched these instances, which left us with two possible scenarios:

  1. The AWS account was hacked and someone created these server instances
  2. A billing error, unlikely but possible, where AWS mistakenly added these charges to the bill

Unfortunately, solving the problem was taking some time, so it made sense to take more than one action at the same time.

Temporary Solution / Hope

The client had always had regular file and database backups 👍 taken hourly, daily and weekly. We concluded that it was time to temporarily deploy all servers and databases on another cloud provider from the most recent backups.

It all looked good until we realized that all backups were stored on Amazon S3 😱. That was the exact moment the last hope vanished: we could not even restore the backups because S3 was suspended along with the rest of the account. We learned, in practice, that we should apply this saying:

“Don’t put all your eggs in one basket”

Lesson

Having regular backups is just not enough; you are still not safe!

You need regular backups stored in more than one cloud basket, since a single cloud account can simply disappear for one reason or another.

Here are some of the possible options and whether they would be sufficient for a quick recovery:

  • ❌ Raw backups stored on a single provider (S3) only — insufficient if the account is suspended
  • ❌ Full server snapshots/images at your cloud provider — insufficient if the account is suspended
  • ✅ Parallel cloud (servers, database, CDN, storage) running on another provider in addition to AWS (or on stand-by) — more expensive and higher overhead, but mostly sufficient
  • ✅ Raw backups stored on multiple storage providers, say S3 plus another storage service (Google Cloud Storage, DigitalOcean Spaces, etc…) — sufficient for restoring application files and databases in case one account is suspended (see the sketch right after this list)
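As a rough illustration of that last option, here is a minimal sketch of pushing the same backup archive to AWS S3 and to a second, S3-compatible provider with the AWS SDK for PHP. The bucket names, keys, file path and the DigitalOcean Spaces endpoint are placeholders, not values from the story above.

<?php

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$backupFile = '/backups/db-backup.tar.gz'; // placeholder path to the backup archive

// Primary copy on AWS S3
$aws = new S3Client([
    'version'     => 'latest',
    'region'      => 'us-east-1',
    'credentials' => ['key' => 'AWS_KEY', 'secret' => 'AWS_SECRET'],
]);
$aws->putObject([
    'Bucket'     => 'my-aws-backups',
    'Key'        => basename($backupFile),
    'SourceFile' => $backupFile,
]);

// Second copy on an S3-compatible provider (DigitalOcean Spaces in this sketch)
$spaces = new S3Client([
    'version'     => 'latest',
    'region'      => 'nyc3',
    'endpoint'    => 'https://nyc3.digitaloceanspaces.com',
    'credentials' => ['key' => 'SPACES_KEY', 'secret' => 'SPACES_SECRET'],
]);
$spaces->putObject([
    'Bucket'     => 'my-do-backups',
    'Key'        => basename($backupFile),
    'SourceFile' => $backupFile,
]);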

Tips

  • Expect the worst — be prepared
  • Enable two-factor authentication for all your accounts
  • Deploy your web infrastructure on more than one cloud if possible
  • Always have regular backups of your application files, databases and static resources (assets, user content and uploads) stored on more than one storage provider — or at least on a different provider than your cloud infrastructure

If setting up backups for every project is too much manual work (and indeed it can be), then give SimpleBackups a try.

Backup Job Overview

SimpleBackups makes it a breeze to schedule automated backups of all your website files and databases in a simple dashboard. You will get alerts if any of your backups fail and you can store your backups on different storage providers like AWS S3, Google Cloud Storage, DigitalOcean Spaces and more.

Connect Storage Options

How to whitelist a particular IP and allow it to connect to your RDS database.

What This Is About

By default, if you create an Amazon RDS MySQL database you won’t be able to connect to it unless you specifically whitelist inbound traffic sources.

In this post, I will show you step by step in the easiest way possible how to allow an IP to connect to your RDS instance (in other words, open port 3306). I am assuming this will be helpful for developers using RDS for the first time and wondering why they can’t just connect! 🤯

The same instructions will also pretty much work for opening firewall ports on your AWS EC2 instances, since both rely on security groups.

Note: when creating your RDS instance, make sure you choose to make it publicly accessible (it’s an option shown to you when creating the database).

Illustration Designed by Vectorpouch

Steps To Whitelist an IP

Step 1

Choose your RDS database from the list of instances.

Step 1

Step 2

Scroll to the “Details” section, find “Security groups”, and click on the active security group link. This will take you directly to the security group where you need to whitelist the IP address.

Step 2

Step 3

Make sure the security group that belongs to your RDS database is selected/highlighted. If you are not sure which one it is, you can match them by the VPC ID (in this case it’s the one ending in 0bc0) or the Group ID (ending in 6cbf).

Step 4

Click on “Inbound” at the bottom (you can also right click the highlighted item and click “Edit inbound rules”). Then click “Edit”.

Step 3 and 4

Step 5

In this step you just need to select the port to whitelist. If you are using the default MySQL port, then selecting the “MYSQL/Aurora” option works. If you are using a custom port for your database, then under the “Type” dropdown select “Custom TCP Rule” and type the port number in the “Port Range” field.

Step 6

Under “Source” we finally add the IP address or IP range we need to whitelist. Note: the IP addresses you enter here must be in CIDR (range) format, which means that you need to append /32 to the end of a single IP address.

Example: to whitelist 8.8.8.8 you need to enter 8.8.8.8/32 in the source field.

Don’t forget to click “Save” then you are done ✅

Step 5 and 6
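If you prefer scripting this instead of clicking through the console, roughly the same inbound rule can be added with the AWS SDK for PHP. This is only a minimal sketch, not the exact setup from the screenshots; the security group ID and region below are placeholders.

<?php

require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client([
    'version' => 'latest',
    'region'  => 'us-east-1', // use the region of your RDS instance
]);

// Open port 3306 to a single IP (note the /32 suffix, exactly as in Step 6)
$ec2->authorizeSecurityGroupIngress([
    'GroupId'       => 'sg-0123456789abcdef0', // placeholder security group ID
    'IpPermissions' => [[
        'IpProtocol' => 'tcp',
        'FromPort'   => 3306,
        'ToPort'     => 3306,
        'IpRanges'   => [['CidrIp' => '8.8.8.8/32', 'Description' => 'Whitelisted IP']],
    ]],
]);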

Verify You Can Connect

Personally, I like using telnet to check for open ports. In our case we can do the following to check if we can connect to the database instance after whitelisting the IP:

$ telnet hostname_or_endpoint_or_database_ip port

A successful connection to port 3306

In the screenshot above, seeing the “Connected….” means that we can successfully connect to the RDS instance. If you only see the “Trying ….” line then you are still unable to access the instance.
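If you would rather check from code than from a terminal, a quick equivalent test can be done in PHP with fsockopen. The endpoint below is a placeholder; use your own RDS endpoint.

<?php

$host = 'your-database-endpoint.rds.amazonaws.com'; // placeholder RDS endpoint
$port = 3306;

// Try to open a TCP connection with a 5 second timeout
$connection = @fsockopen($host, $port, $errno, $errstr, 5);

if ($connection) {
    echo "Port {$port} is open, the instance is reachable\n";
    fclose($connection);
} else {
    echo "Could not connect: {$errstr} ({$errno})\n";
}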

If you are still unable to connect

  • Repeat the steps and make sure you followed all instructions
  • Make sure that your RDS instance is set to “Publicly Accessible”
  • Verify you are trying to connect from the same IP address you whitelisted


One of the cool things that Google Cloud Storage supports is an AWS S3 interoperability mode. This mode allows you to use almost the same API as AWS S3, including authentication, to work with Google Cloud Storage.

It relies on the same variables needed for S3:

  • Access Key
  • Secret Key
  • Bucket
  • Region

While most operations work fine in an S3-like way, signing URLs won’t, since Google uses a different URL signing method. This becomes problematic if you want to use Google Cloud Storage as a Laravel filesystem using the S3 driver.
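For context, such a setup usually means pointing the Laravel S3 disk at Google’s endpoint with the interoperability credentials. This is only a hedged sketch of a config/filesystems.php entry; the env variable names and bucket are placeholders.

'gcs' => [
    'driver'   => 's3',
    'key'      => env('GCS_ACCESS_KEY'),
    'secret'   => env('GCS_SECRET_KEY'),
    'region'   => 'auto', // the S3 driver requires a value; GCS's interoperability mode accepts 'auto'
    'bucket'   => env('GCS_BUCKET'),
    'endpoint' => 'https://storage.googleapis.com',
],

Regular reads and writes go through such a disk as usual; generating signed URLs is where the workaround below comes in.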

I am going to show how you can create a signed URL using a PHP function that has no dependencies, no service account needed, and no key file.

Creating a signed download URL

<?php

$filepath = 'path/to/file.tar.gz'; // do not include the bucket name, no slash in the beginning
$bucket = 'my-bucket-name';
$key = 'GOOGSOMEKEYHERE'; // interoperability access key
$secret = 'aNdTHEsecretGoesRitghtHere'; // interoperability secret

function getSignedGoogleCloudStorageUrl($filepath, $bucket, $secret, $key, $duration = 50)
{
    // Expiration is an absolute Unix timestamp
    $expires = new DateTime('+ ' . $duration . ' seconds');
    $seconds = $expires->format('U');

    // URL-encode each path segment of the object name
    $objectPieces = explode('/', $filepath);
    array_walk($objectPieces, function (&$piece) {
        $piece = rawurlencode($piece);
    });
    $objectName = implode('/', $objectPieces);

    // Canonicalized resource: /bucket/object
    $resource = sprintf(
        '/%s/%s',
        $bucket,
        $objectName
    );

    $headers = []; // optional canonical x-goog-* headers; left empty here

    $toBeSignedArray = [
        'GET',
        '', // contentMd5, can be left blank
        '', // contentType, can be left blank
        $seconds,
        implode("\n", $headers) . $resource,
    ];

    // Sign with the interoperability secret: HMAC-SHA1, base64-encoded, then URL-encoded
    $toBeSignedString = implode("\n", $toBeSignedArray);
    $encodedSignature = urlencode(base64_encode(hash_hmac('sha1', $toBeSignedString, $secret, true)));

    $query   = [];
    $query[] = 'GoogleAccessId=' . $key;
    $query[] = 'Expires=' . $seconds;
    $query[] = 'Signature=' . $encodedSignature;

    // Use the encoded object name so the URL matches the signed resource
    return "https://storage.googleapis.com/{$bucket}/{$objectName}?" . implode('&', $query);
}

This is the same signing function used by Google’s PHP SDK, simplified to only support the GET method for file downloads. Additionally, it relies on the little-documented fact that you can use the interoperability Access Key as the GoogleAccessId and sign the payload with the Secret Key, so no service account or key file is needed.
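Calling it is then a one-liner; the expiry of 300 seconds is just an example value:

$url = getSignedGoogleCloudStorageUrl($filepath, $bucket, $secret, $key, 300);

// https://storage.googleapis.com/my-bucket-name/path/to/file.tar.gz?GoogleAccessId=...&Expires=...&Signature=...
echo $url;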

You can create a free SimpleBackups account and effortlessly back up your databases, servers and websites with the ability to choose Google Cloud Storage and other providers as a storage option! Try it out.

In this little piece I am going to highlight some of the reasons why backups may fail.

While many of the reasons below are common and apply to a wide range of different backup methods, I am specifically assuming you are using a backup service like SimpleBackups.io to back up your servers and databases.

Server-related causes:

  • Not enough disk space
  • Server runs out of memory
  • Server has been placed behind a firewall and cannot be accessed
  • The backup is taking too long to be created and eventually times out
  • Trying to back up a non-existent directory or one that has been deleted
  • Trying to back up a directory which you don’t have permissions to read
  • Invalid/changed server credentials (host, port, username, ssh key, or password)

Storage-related causes:

  • A problem uploading backup to remote storage
  • Invalid/changed storage credentials (key, secret, region, or bucket)

Database-related causes:

  • Trying to back up an empty database
  • Invalid/changed database credentials (db host, db port, db username, db password, or db name)