Speed is everything on the Internet. Studies repeatedly show that the load speed of a website, online application, or game has a dramatic impact on end-user satisfaction, sales, and usage. Slow loading leads directly to fewer page views, lower engagement, and higher abandonment.

The typical solution for accelerating a site or application is to leverage a content delivery network. But while CDNs, like the StackPath CDN, can offer a great deal of configurability, some workloads need something very specific and unique, if not total power and control.

Some workloads need a private CDN.

Building a CDN yourself isn’t that simple. (Trust us. We’ve built a few.) You could rent servers in data centers close to your clients, but this quickly becomes difficult to manage. Or you could disperse your application across multiple cloud regions, but you’re then limited by the data center locations your cloud provider offers. Even if you do set up application servers around the world, how do you geographically locate a user and automatically serve them content from your closest server?

Fortunately, our edge platform has all of the parts you need. Virtual machines located close to end-users, as opposed to public cloud providers whose servers live in more centralized data centers. Edge computing locations around the world, so you can always serve content to end users from nearby. A private network backbone connecting them, to avoid the turbulence and bottlenecks of public internet routes.

In this article, we walk through creating a three-node, multi-continent CDN on StackPath edge compute VMs. These VMs will serve files from persistent storage with the NGINX web server on an anycast IP address.

If you expect your needs would be better served by a configurable HTTP cache like Varnish, you could easily install that instead. We’re using NGINX in this guide because it’s an extremely fast general purpose web server that can handle nearly any task you might want your CDN to perform.

Prerequisites

Before you begin creating the VM fleet, you should have the following:

  • An SSH key pair: you will use SSH to log into your Edge Compute virtual machines and you need to generate a key pair before you spin up the virtual machines
  • A domain name for which you can create DNS A records
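If you don’t already have a key pair, you can generate one locally before you begin. This sketch uses an Ed25519 key at the default OpenSSH path; adjust the type or path to suit your setup:

```shell
# Generate an Ed25519 key pair; you will paste the public key into the
# workload's First Boot SSH Key(s) field later.
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -C "stackpath-cdn"

# Print the public key so you can copy it into the control panel.
cat ~/.ssh/id_ed25519.pub
```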

Spin Up a 3-Node Edge Compute Workload

First, sign up for a StackPath account. The first step of the signup process asks which service you want to use. Select the Edge Compute service, then complete the signup process.

When you complete the signup process, you will be logged into your StackPath control panel on the Create Workload page. Workload is the StackPath term for a VM (or container) deployment that uses the same operating system (OS) image and is managed and billed as a single unit.

Fill in the Create Workload page as follows:

  1. Name – Give your workload a short, appropriate and memorable name
  2. Workload Type – Select VM from the drop down options
  3. OS Image – Choose the Linux distribution that works for you

Click Continue to Settings to proceed to the Workload Settings page.

Complete the form as follows:

  1. Add Anycast IP Address – Check this option, as you will serve content over the anycast IP address
  2. Public Ports – Add the ports used to communicate with your servers over the anycast and public IP addresses of the VMs
    1. 80 – The HTTP port that will serve the website assets
    2. 22 – The SSH port, which must be open so you can log in to the VMs
  3. First Boot SSH Key(s) – Enter the public SSH key of your SSH key pair
    1. Note that you must enter at least one key as password logins are disabled

The VPC is a virtual private network that StackPath creates between your VMs. Each VM is allocated a private IP address over which the VMs can communicate securely.

Click Continue to Spec to move to the Spec, Storage, & Deployment page.

  1. Spec – Choose the number of vCPUs and amount of RAM and storage space for the VMs in this workload
  2. Persistent Storage – Storage space that is retained across reboots. Here you set the:
    1. Mount Point – The directory in the Linux file system where the persistent storage is located
    2. Size – The size of the persistent storage space
  3. Deployment Target – A subset of VMs that can span PoPs. For example, you might have one deployment with a larger minimum number of VMs in PoPs close to the bulk of your customers, and other deployments with smaller minimums near smaller groups of clients
    1. Name – Name the deployment
    2. PoPs – Select all the PoPs where you want your VMs
    3. Instances Per PoP – The number of VMs to start in each PoP

Click Create Workload to spin up the VMs.

Provisioning will take a couple of minutes. When the VMs are online, you will see them on the Workloads Overview page along with their public and private IP addresses.

You will also find the anycast IP address on this page. Since you will need this when you start configuring NGINX, create a DNS A record pointing your domain name at the anycast IP address now so it’s ready.
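Once the A record has propagated, you can confirm it resolves to the anycast address from your terminal. In this sketch, cdn.example.com stands in for your own domain:

```shell
# Query the A record; the answer should be your anycast IP address.
dig +short cdn.example.com A
```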

Repeat the following install and configuration steps on all your VMs. Note that if you plan to create a production-ready custom CDN, you should consider using provisioning and configuration management tools like Terraform and Ansible to create and configure your VMs.

Install NGINX

First, log in to your VM using SSH. The default usernames are as follows:

Distribution    User
Ubuntu          ubuntu
Debian          debian
CentOS          centos
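For example, on an Ubuntu image (203.0.113.10 here is a placeholder for one of your VM’s public IP addresses):

```shell
# Connect with the private half of the key pair you registered at
# workload creation; password logins are disabled on these VMs.
ssh -i ~/.ssh/id_ed25519 ubuntu@203.0.113.10
```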

Next, perform a system update and reboot to ensure your VM is running the latest packages:

Debian and Ubuntu:

sudo apt update
sudo apt upgrade
sudo systemctl reboot

CentOS:

sudo dnf update
sudo systemctl reboot

Log back into the server when the update and reboot are complete. Then, install the NGINX web server as follows:

Debian and Ubuntu:

sudo apt install nginx

CentOS:

On CentOS we’ll install both NGINX and the nano text editor and then start the NGINX service:

sudo dnf install nginx nano
sudo systemctl enable --now nginx

Configure NGINX

Before you configure NGINX, create a folder under the persistent storage mount point (set to /var/lib/data earlier) to contain the files you want to serve:

sudo mkdir /var/lib/data/http/

Next, create a simple HTML file to identify each server by its location. First, open the file in a text editor:

sudo nano /var/lib/data/http/index.html

Then, paste in the following:

<body>
<h1>Tokyo</h1>
</body>

Replace the city name with the VM location. This will help identify your servers when you test them.
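Alternatively, if you would rather skip the editor (handy when repeating this step on every node), you can write the file in a single command, substituting the appropriate city:

```shell
# Write the marker page non-interactively; replace Tokyo per server.
echo '<body><h1>Tokyo</h1></body>' | sudo tee /var/lib/data/http/index.html
```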

Next, set the new directory and file ownership so that NGINX can read it with the following commands:

Debian and Ubuntu:

sudo chown -R www-data:www-data /var/lib/data/http

CentOS:

sudo chown -R nginx:nginx /var/lib/data/http

Next, create the NGINX configuration that serves your files from /var/lib/data/http.

Debian and Ubuntu:

On Debian and Ubuntu, first remove the default configuration file:

sudo rm /etc/nginx/sites-enabled/default

Then, create a new configuration file with nano:

sudo nano /etc/nginx/sites-available/anycast.conf

Copy and paste the following into the new file:

server {
       listen 80;
       server_name <DOMAIN>;
       root /var/lib/data/http;
       index index.html;
       location / {
               try_files $uri $uri/ =404;
       }
}

Next, create a symlink from /etc/nginx/sites-available/anycast.conf to /etc/nginx/sites-enabled/anycast.conf so NGINX loads the new site configuration:

sudo ln -s /etc/nginx/sites-available/anycast.conf /etc/nginx/sites-enabled/anycast.conf
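Before reloading, it is worth validating the configuration. nginx -t parses the config files and reports syntax errors without affecting the running service:

```shell
# Check the configuration for syntax errors before applying it.
sudo nginx -t
```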

Finally, reload NGINX:

sudo systemctl reload nginx

CentOS:

For CentOS, open a new site configuration file at /etc/nginx/conf.d/:

sudo nano /etc/nginx/conf.d/anycast.conf

Then, copy and paste the following:

server {
       listen 80;
       server_name <DOMAIN>;
       root /var/lib/data/http;
       index index.html;
       location / {
                  try_files $uri $uri/ =404;
       }
}

Finally, restart NGINX:

sudo systemctl restart nginx

Your VMs are now configured and ready to test.

Testing

You should confirm the CDN cluster is working as expected.

First, copy each of the public IP addresses from the StackPath Workloads Overview page and paste them into your browser. This will load the HTML file you created containing that server’s city name.

Next, browse to the domain name that you resolved to the anycast IP address. This will connect to the server closest to you.

Testing that the anycast domain name is working from other physical locations is a little more challenging. If you have access to a VPN, set your endpoint to be close to your other PoPs and test again. Alternatively, use a website like GeoPeeker that displays your website as it appears from other locations around the world.
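You can also test from the command line. curl’s --resolve flag forces a domain to a specific IP address, letting you check each VM and the anycast route without touching DNS. In this sketch, cdn.example.com and the IP addresses are placeholders for your own domain, a VM’s public IP, and the anycast IP:

```shell
# Hit a specific VM directly by its public IP address.
curl http://203.0.113.10/

# Hit the anycast address while sending the expected Host header.
curl --resolve cdn.example.com:80:198.51.100.1 http://cdn.example.com/
```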

Conclusion

You have now created a 3-node cluster of VMs that takes advantage of StackPath’s edge computing platform to serve your assets from the PoP closest to your clients. Your new VMs are fully capable Linux servers, which means you can use their locations for other latency-sensitive applications, such as video conferencing. To do this, install a conferencing server on the node closest to your office and access it via the VM’s public (rather than anycast) IP address.

From here, your next step may be to get an SSL/TLS certificate for your anycast domain name. If so, the regular HTTP-based validation method will not work, as you cannot guarantee which of your VMs the certificate authority’s validation requests will reach. Instead, use the DNS-based validation that most certificate authorities offer.
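For example, with Let’s Encrypt you could use certbot’s manual DNS-01 challenge, which asks you to publish a TXT record instead of serving an HTTP validation file. This is a sketch: cdn.example.com is a placeholder, and it assumes certbot is installed on the machine where you run it:

```shell
# Request a certificate via the DNS-01 challenge; certbot will prompt
# you to add a TXT record under _acme-challenge.cdn.example.com.
sudo certbot certonly --manual --preferred-challenges dns -d cdn.example.com
```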

To learn more about deploying flexible, low latency workloads closer to your users than ever before, request a free demo from a StackPath edge expert.

The post Build a 3-Continent CDN with Edge VMs in One Hour appeared first on Articles for Developers Building High Performance Systems.