I never get tired of seeing customer-driven innovation in action! When AWS customers told us that they needed an easy way to move petabytes of data in and out of AWS, we responded with the AWS Snowball. Later, when they told us that they wanted to do some local data processing and filtering (often at disconnected sites) before sending the devices and the data back to AWS, we launched the AWS Snowball Edge, which allowed them to use AWS Lambda functions for local processing. Earlier this year we added support for EC2 Compute Instances, with six instance sizes and the ability to preload up to 10 AMIs onto each device.
Great progress, but we are not done yet!
More Compute Power and a GPU
I’m happy to tell you that we are getting ready to give you two new Snowball Edge options: Snowball Edge Compute Optimized and Snowball Edge Compute Optimized with GPU (the original Snowball Edge is now called Snowball Edge Storage Optimized). Both options include 42 TB of S3-compatible storage and 7.68 TB of NVMe SSD storage, and allow you to run any combination of instances that consume up to 52 vCPUs and 208 GiB of memory. The additional processing power gives you the ability to do even more types of processing at the edge.
Here are the specs for the instances:
| Instance Name (Compute / GPU)   |
|---------------------------------|
| sbe-c.small / sbe-g.small       |
| sbe-c.medium / sbe-g.medium     |
| sbe-c.large / sbe-g.large       |
| sbe-c.xlarge / sbe-g.xlarge     |
| sbe-c.2xlarge / sbe-g.2xlarge   |
| sbe-c.4xlarge / sbe-g.4xlarge   |
| sbe-c.8xlarge / sbe-g.8xlarge   |
| sbe-c.12xlarge / sbe-g.12xlarge |
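To make the shared budget concrete: the 52 vCPUs and 208 GiB of memory are a device-wide pool that all running instances draw from. Here is a minimal sketch (not an official AWS API) of that additive check; the example instance shapes are illustrative only:

```python
# Device-wide limits for Snowball Edge Compute Optimized.
MAX_VCPUS = 52
MAX_MEMORY_GIB = 208

def fits_on_device(instances):
    """Return True if a proposed mix of instances fits on one device.

    `instances` is a list of (vcpus, memory_gib) tuples, one per instance.
    All running instances share a single pool of 52 vCPUs and 208 GiB.
    """
    total_vcpus = sum(vcpus for vcpus, _ in instances)
    total_memory = sum(mem for _, mem in instances)
    return total_vcpus <= MAX_VCPUS and total_memory <= MAX_MEMORY_GIB

# Three hypothetical (16 vCPU, 64 GiB) instances: 48 vCPUs, 192 GiB -> fits.
print(fits_on_device([(16, 64)] * 3))   # True
# A fourth one would push the total past both limits.
print(fits_on_device([(16, 64)] * 4))   # False
```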
The Snowball Edge Compute Optimized with GPU includes an on-board GPU that you can use to do real-time full-motion video analysis & processing, machine learning inferencing, and other highly parallel compute-intensive work. You can launch an sbe-g instance to gain access to the GPU.
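Since the device exposes an EC2-compatible endpoint, launching an sbe-g instance looks much like a regular EC2 launch pointed at the device. A hedged sketch, assuming the AWS CLI; the endpoint address and AMI ID below are placeholders, not real values:

```shell
# Launch a GPU-capable instance against the device's EC2-compatible endpoint.
# The IP address and the s.ami-... ID are placeholders; substitute the
# values for your own device and preloaded AMI.
aws ec2 run-instances \
    --image-id s.ami-0123456789abcdef0 \
    --instance-type sbe-g.4xlarge \
    --endpoint http://192.0.2.0:8008
```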
You will be able to select the option you need using the console, as always:
The Compute Optimized device is just a tad bigger than the Storage Optimized device. Here they are, sitting side-by-side on an Amazon door desk:
I’ll have more information to share soon, so stay tuned!