If you are building an application that needs to store customers’ data in the cloud, you have a few options to think about.

In this post I will compare two common options we have evaluated and list the pros and cons of each based on our trials, using AWS S3 and DigitalOcean Spaces as references.


👉 The options we compared were:

A. Separate buckets per customer, where each bucket acts as that customer’s folder.
B. Separate customer folders in one bucket (recommended most of the time).

A. Separate buckets

👍 The pros of using separate buckets:

  1. More organization, security and data separation, since each customer’s data lives in its own bucket.
  2. The ability to allow customers to access their data by giving them access to their own buckets.
  3. It is easy to block access to, delete, or measure the size of a customer’s bucket.

👎 The cons of using separate buckets:

  1. If your app relies on a naming convention for each customer’s bucket and a bucket name is already taken or unavailable, you have to break the naming convention.
  2. Backing up customer data becomes extremely complex (you may have to back up hundreds or thousands of buckets).
  3. Migrating to another storage provider is almost impossible (imagine moving hundreds or thousands of customers’ buckets from S3 to DigitalOcean Spaces, for example).
  4. There is high overhead if you need to make global changes to the buckets’ settings (for example, changing the storage class of all buckets).

B. Separate folders in one bucket

👍 The pros of using separate folders all in one bucket:

  1. Very easy to manage the bucket settings for all customers, no overhead.
  2. Easy to migrate data and easy to back up the whole bucket into another bucket or even back it up on another storage provider (back up an S3 bucket into DigitalOcean Spaces).
  3. You can fully control the naming convention of customer folders, no worries about unavailable names (bucket-name/customer-01, bucket-name/customer-02).

👎 The cons of using separate folders all in one bucket:

  1. You need to rely on the application layer to block access or measure the size of a customer’s folder. This can be done, but it is not as easy as in option A.
  2. Full access to the bucket means full access to all customers’ folders.

⚠️ Notes:

  • On AWS S3, you can create credentials that grant access to a particular folder in a bucket, making it function as a virtual bucket (a pro for option B if you are using S3).
  • On DigitalOcean Spaces, at the time of writing, any credentials you create grant access to all Spaces buckets and all their folders (a con for option A if you are using DigitalOcean).
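To illustrate the S3 note above, here is a sketch of the kind of IAM policy that scopes a user to a single customer folder. The bucket name and folder prefix are hypothetical; adapt them to your own naming convention before attaching the policy to a user or role.

```python
import json

BUCKET = "acme-app-data"   # hypothetical bucket name
PREFIX = "customer-01/"    # hypothetical customer folder

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Allow listing only the keys under this customer's prefix
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": [f"arn:aws:s3:::{BUCKET}"],
            "Condition": {"StringLike": {"s3:prefix": [f"{PREFIX}*"]}},
        },
        {
            # Allow object operations only inside this customer's prefix
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": [f"arn:aws:s3:::{BUCKET}/{PREFIX}*"],
        },
    ],
}

print(json.dumps(policy, indent=2))
```

With a policy like this attached, the folder behaves like a virtual bucket: the credentials can see and touch nothing outside `customer-01/`.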

Let me know if that’s helpful and what you end up doing!

Just like you diversify income sources, diversify your cloud infrastructure providers!


We are always taught that we “should not put all our eggs in one basket” (🥚🥚🥚 + 🧺 = ☠) to avoid the risk of losing everything at once. This applies to many verticals: skills, investing, income sources, and so on. We always need to diversify.

Should we also apply this rule and diversify when it comes to cloud providers? Let’s find out.



Last December, on Christmas Day (disasters usually happen over the holidays for some reason 🧐, when everyone is off), one of our clients had their AWS account suspended. Because of the suspension — which we had no idea about the cause of in the first place — their production servers, databases and storage stopped completely. Connections to the servers and databases were timing out; nothing could be reached.

Quick Background

They were using the compute service (EC2) for multiple load-balanced servers and a central caching server, the relational database service (RDS) as a central database serving all applications, and the storage service (S3) as a CDN plus an object store for everything else. Luckily the DNS was not managed by Route 53 — that gave some hope of restoring backups on another cloud until the issue was resolved…

More Digging…

We wanted to dig deeper into the AWS account suspension to see why it happened and whether it was possible to resolve it and get everything up and running quickly. While checking the account billing (since that’s the only thing you can do for a suspended AWS account), we noticed high usage of massively large Windows instances that had incurred tremendous charges we knew nothing about.

The server instances we saw on the bill were the most powerful available at the time (Windows running on p3dn.24xlarge) — these had actually been unveiled by Amazon that same month:

“p3dn.24xlarge has 2.5 GHz (base) and 3.1 GHz (sustained all-core turbo) Intel Xeon P-8175M processors and supports Intel AVX-512.”

Amazon states the following use cases for these machines:

“Machine/Deep learning, high performance computing, computational fluid dynamics, computational finance, seismic analysis, speech recognition, autonomous vehicles, drug discovery.”

The instances in question ran for a couple of days on the client’s AWS account before the suspension. What the client knows for sure is that neither they nor anyone with authorized access to the account launched them. That leaves us with two possible scenarios:

  1. The AWS account was hacked and someone created these server instances
  2. It could be (though unlikely) a billing error where AWS mistakenly added these charges to the bill

Unfortunately, solving the problem was taking some time, so it made sense to pursue more than one course of action at the same time.

Temporary Solution / Hope

The client had always kept regular file and database backups 👍, taken hourly, daily and weekly. We concluded that it was time to temporarily deploy all servers and databases on another cloud provider from the most recent backups.

It all looked good until we realized that all backups were stored on Amazon S3 😱 — and that was the exact moment the last hope vanished, since we could not even restore the backups: S3 was suspended too. We learned in practice that we should apply this saying:

“Don’t put all your eggs in one basket”


Having regular backups alone is not enough; you are still not safe!

You need regular backups stored in more than one cloud basket, since a single cloud account can simply disappear for one reason or another.

Let’s look at some of the possible options to see whether they would have been sufficient for a quick recovery:

  • ❌ Raw backups stored on a single provider (S3) only — insufficient if the account is suspended
  • ❌ Full server snapshots/images at your cloud provider — insufficient if the account is suspended
  • ✅ Parallel cloud (servers, database, CDN, storage) running on another provider in addition to AWS (or on stand-by) — more expensive and higher overhead, but mostly sufficient
  • ✅ Raw backups stored on multiple storage providers, say S3 and another storage (Google Cloud Storage, DigitalOcean Spaces, etc…) — sufficient in restoring application files and databases in case one account is suspended
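The last option can be enforced with a simple audit. Here is a minimal sketch (provider names and the backup inventory are illustrative) that flags any backup living in only one basket — exactly the situation that burned us when the S3 account was suspended:

```python
# Given where each backup is stored, flag backups that exist on fewer than
# two providers: a single suspended account would wipe those out entirely.
def single_provider_backups(backups):
    """Return names of backups stored on fewer than two distinct providers."""
    return [name for name, providers in backups.items() if len(set(providers)) < 2]

backups = {
    "db-2019-01-01.sql.gz": ["aws-s3", "do-spaces"],  # safe: two baskets
    "files-2019-01-01.tar.gz": ["aws-s3"],            # at risk: one basket
}

print(single_provider_backups(backups))  # ['files-2019-01-01.tar.gz']
```

Running a check like this against your real backup inventory turns “don’t put all your eggs in one basket” from a saying into an alert.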


⚠️ Takeaways:

  • Expect the worst — be prepared
  • Enable two-factor authentication for all your accounts
  • Deploy your web infrastructure on more than one cloud if possible
  • Always have regular backups of your application files, databases and static resources (assets, user content and uploads) stored on more than one storage provider — or at least on a different provider than your cloud infrastructure

If setting up backups for every project is too much manual work (and indeed it can be), then give SimpleBackups a try.

Backup Job Overview

SimpleBackups makes it a breeze to schedule automated backups of all your website files and databases in a simple dashboard. You will get alerts if any of your backups fail and you can store your backups on different storage providers like AWS S3, Google Cloud Storage, DigitalOcean Spaces and more.

Connect Storage Options

How to whitelist a particular IP and allow it to connect to your RDS database.

What This Is About

By default, if you create an Amazon RDS MySQL database you won’t be able to connect to it unless you specifically whitelist inbound traffic sources.

In this post, I will show you step by step in the easiest way possible how to allow an IP to connect to your RDS instance (in other words, open port 3306). I am assuming this will be helpful for developers using RDS for the first time and wondering why they can’t just connect! 🤯

The same instructions will also work for opening firewall ports on your AWS EC2 instances, since EC2 uses the same security group mechanism.

Note: when creating your RDS instance, make sure you choose to make it publicly accessible (an option presented during database creation).


Steps To Whitelist an IP

Step 1

Choose your RDS database from the list of instances.

Step 1

Step 2

Scroll to the “Details” section, find “Security groups”, and click the active security group link. This will redirect you directly to the security group in which you need to whitelist the IP address.

Step 2

Step 3

Make sure the security group that belongs to your RDS database is selected/highlighted. If you are not sure which one it is, you can match them by the VPC ID (in this case, the one ending in 0bc0) or the Group ID (ending in 6cbf).

Step 4

Click on “Inbound” at the bottom (you can also right-click the highlighted item and choose “Edit inbound rules”). Then click “Edit”.

Step 3 and 4

Step 5

In this step you just need to select the port to whitelist. If you are using the default MySQL port, selecting the “MYSQL/Aurora” option works. If you are using a custom port for your database, select “Custom TCP Rule” under the “Type” dropdown and type the port number in the “Port Range” field.

Step 6

Under “Source” we finally add the IP address or IP range we need to whitelist. Note: the addresses you enter here must be in CIDR range format, which means you need to append /32 to the end of a single IP address.

Example: to whitelist a single IP address, enter it followed by /32 in the source field.

Don’t forget to click “Save” then you are done ✅

Step 5 and 6
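If you script your infrastructure, the /32 conversion is easy to automate and validate. A minimal sketch using Python’s `ipaddress` module (the IP address here is just an illustrative one from the documentation range):

```python
import ipaddress

def to_single_host_cidr(ip):
    """Append the /32 suffix the AWS console expects for a single IPv4 address,
    raising ValueError if the input is not a valid IP."""
    return str(ipaddress.ip_network(f"{ip}/32"))

print(to_single_host_cidr("203.0.113.4"))  # 203.0.113.4/32
```

Using `ip_network` instead of plain string concatenation also catches typos in the address before they ever reach your security group.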

Verify You Can Connect

Personally, I like using telnet to check for open ports. In our case we can do the following to check if we can connect to the database instance after whitelisting the IP:

$ telnet hostname_or_endpoint_or_database_ip port

A successful connection to port 3306

In the screenshot above, seeing “Connected….” means we can successfully connect to the RDS instance. If you only see the “Trying ….” line, you are still unable to access the instance.
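If telnet is not installed, a short script can perform the same reachability check. A sketch in Python (the RDS endpoint in the comment is a placeholder for your own):

```python
import socket

def port_is_open(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, and DNS failures
        return False

# e.g. port_is_open("mydb.xxxx.us-east-1.rds.amazonaws.com", 3306)
print(port_is_open("127.0.0.1", 1))  # almost certainly False: nothing listens on port 1
```

A `True` here corresponds to telnet’s “Connected….” line; a `False` means the port is still filtered, closed, or unreachable.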

If you are still unable to connect

  • Repeat the steps and make sure you followed all instructions
  • Make sure that your RDS instance is set to “Publicly Accessible”
  • Verify you are trying to connect from the same IP address you whitelisted

One of the cool things that Google Cloud Storage supports is the AWS S3 interoperability mode. This mode allows you to use almost the same API used for AWS S3 to deal with Google Cloud Storage including authentication.

It relies on the same variables needed for S3:

  • Access Key
  • Secret Key
  • Bucket
  • Region

While most operations work fine in an S3-like way, signing URLs won’t, since Google uses a different URL-signing method. This becomes problematic if you want to use Google Cloud Storage as a Laravel filesystem via the S3 driver.

I am going to show how you can create a signed URL using a PHP function with no dependencies, no service account, and no key file.

Creating a signed download URL


$filepath = 'path/to/file.tar.gz'; // do not include the bucket name, no slash in the beginning
$bucket = 'my-bucket-name';
$key = 'GOOGSOMEKEYHERE'; // interoperability access key
$secret = 'aNdTHEsecretGoesRitghtHere'; // interoperability secret

function getSignedGoogleCloudStorageUrl($filepath, $bucket, $secret, $key, $duration = 50)
{
    $expires = new DateTime('+ ' . $duration . ' seconds');
    $seconds = $expires->format('U');

    // URL-encode each path segment while keeping the slashes intact
    $objectPieces = explode('/', $filepath);
    array_walk($objectPieces, function (&$piece) {
        $piece = rawurlencode($piece);
    });
    $objectName = implode('/', $objectPieces);

    $resource = sprintf('/%s/%s', $bucket, $objectName);

    $headers = []; // you may add any canonicalized extension headers here

    $toBeSignedArray = [
        'GET', // HTTP verb, downloads only
        '', // contentMd5, can be left blank
        '', // contentType, can be left blank
        $seconds, // expiration timestamp
        implode("\n", $headers) . $resource,
    ];

    $toBeSignedString = implode("\n", $toBeSignedArray);
    $encodedSignature = urlencode(base64_encode(hash_hmac('sha1', $toBeSignedString, $secret, true)));

    $query   = [];
    $query[] = 'GoogleAccessId=' . $key;
    $query[] = 'Expires=' . $seconds;
    $query[] = 'Signature=' . $encodedSignature;

    return "https://storage.googleapis.com/{$bucket}/{$objectName}?" . implode('&', $query);
}

This is the same signing function used by Google’s PHP SDK, simplified to support only the GET method for file downloads. It also relies on the little-known fact that you can use the interoperability Access Key as the GoogleAccessId and sign the payload with the Secret Key.
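For readers outside the PHP ecosystem, the same V2 string-to-sign construction can be sketched in Python with only the standard library (the key, secret, and bucket values below are placeholders, not real credentials):

```python
import base64, hashlib, hmac, time
from urllib.parse import quote

def signed_gcs_url(filepath, bucket, secret, key, duration=50):
    """Build a V2-signed GCS download URL: sign GET + expiry + resource with HMAC-SHA1."""
    expires = int(time.time()) + duration
    # URL-encode each path segment while keeping the slashes intact
    object_name = "/".join(quote(p, safe="") for p in filepath.split("/"))
    resource = f"/{bucket}/{object_name}"
    # Verb, contentMd5 (blank), contentType (blank), expiry, canonicalized resource
    to_sign = "\n".join(["GET", "", "", str(expires), resource])
    signature = base64.b64encode(
        hmac.new(secret.encode(), to_sign.encode(), hashlib.sha1).digest()
    ).decode()
    return (
        f"https://storage.googleapis.com/{bucket}/{object_name}"
        f"?GoogleAccessId={key}&Expires={expires}&Signature={quote(signature, safe='')}"
    )

url = signed_gcs_url("path/to/file.tar.gz", "my-bucket-name", "aSecret", "GOOGKEY")
print(url)
```

The string-to-sign mirrors the PHP version above: verb, blank MD5 and content type, expiry, then the canonicalized resource, joined by newlines.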

You can create a free SimpleBackups account and effortlessly back up your databases, servers and websites with the ability to choose Google Cloud Storage and other providers as a storage option! Try it out.

In this short piece I am going to highlight some of the reasons why backups may fail.

While many of the reasons below are common and apply to a wide range of different backup methods, I am specifically assuming you are using a backup service like SimpleBackups.io to back up your servers and databases.

Server-related causes:

  • Not enough disk space
  • Server runs out of memory
  • Server has been placed behind a firewall and cannot be accessed
  • The backup is taking too long to be created and eventually times out
  • Trying to back up a non-existent directory or one that has been deleted
  • Trying to back up a directory which you don’t have permissions to read
  • Invalid/changed server credentials (host, port, username, ssh key, or password)
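Several of the server-related causes above can be caught before the backup even starts. A sketch of a disk-space pre-check in Python, covering the most common one (not enough room to write the dump or archive locally):

```python
import shutil

def enough_disk_space(path, required_bytes):
    """Return True if the filesystem containing `path` has at least
    `required_bytes` of free space for the backup to be written."""
    return shutil.disk_usage(path).free >= required_bytes

# Check there is at least 1 KiB free before starting (real backups would
# estimate the dump size and add a safety margin)
print(enough_disk_space("/", 1024))
```

Running a check like this (and a similar one for read permissions on the target directory) turns a mid-backup failure into an actionable error message up front.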

Storage-related causes:

  • A problem uploading backup to remote storage
  • Invalid/changed storage credentials (key, secret, region, or bucket)

Database-related causes:

  • Trying to back up an empty database
  • Invalid/changed database credentials (db host, db port, db username, db password, or db name)

A few months ago a friend of mine called me and I could hear in his voice that he was in trouble:

I’m ****** I thought we had backups running but we didn’t … really.
– Mister Smith, Web Agency Manager

He was managing a Web Agency in Bordeaux and had everything in place to manage web projects from their creation to their long term maintenance.

Part of the maintenance job is to handle a website’s potential crashes and be ready to deploy a backup quickly.
Like probably a lot of other agencies and developers, he was relying on the hosting company’s systems, which are supposed to handle those backups, without ever really testing them.

And what had to happen, happened: one server got deleted together with all its backups… that’s it… no turning back.
They were using a DigitalOcean server (which, by the way, we love) together with the paid option ($4/month) for daily droplet backups, and Cloudways to manage their servers and deployments.
A single click on a button got the entire droplet deleted, and with it all its backups.

I was surprised to hear that removing a droplet actually deletes all its backups too but that’s what happened.

This is the kind of thing that is usually not tested, and that’s why having a third-party solution dedicated to website backups on a separate server is a must-have for any professional agency.

The moral of the story:

  • Don’t simply rely on your hosting backups (don’t put all your eggs in the same basket)
  • Have an actual recovery process in place with simple instructions (we’re working on a simple “recovery procedure” that’ll be released soon)
  • Test your backups on a regular basis (go through your recovery process and validate each and every step)

If you’re looking for a simple backup solution for web agencies, visit simplebackups.io and try our tool; we have a free plan.