Christmas is coming, but you don’t have a present on hand for your (grand)parents (Mom, Dad, if you’re reading this – I promise this post isn’t drawn from real life!). Looking for a solution? If your loved ones happened to live through the era of monochrome photography, keep reading. You can work some magic with Intel® Distribution of OpenVINO™ Toolkit on Ubuntu containers to give their old pictures new life. Hopefully, this blog will save Christmas!
And if you’re simply curious about AI/ML and what you can do with OpenVINO on Ubuntu containers, this blog is an excellent read for you too.
OpenVINO on Ubuntu containers: making developers’ lives easier
Docker image security isn’t only about provenance and supply chains; it’s also about the user experience. More specifically, the developer experience.
Removing toil and friction from your app development, containerisation, and deployment processes keeps developers from turning to untrusted sources or bad practices in the name of getting things done. As AI/ML development often requires complex dependencies, it’s the perfect proof point for secure and stable container images.
Why Ubuntu Docker images?
As the most popular container image in its category, the Ubuntu base image provides a seamless, easy-to-set-up experience. From public cloud hosts to IoT devices, the Ubuntu experience is consistent and loved by developers.
One of the main reasons for adopting Ubuntu-based container images is the software ecosystem. More than 30,000 packages are available in one `install` command, with the option to subscribe to enterprise support from Canonical. It just makes things easier.
In the next and final blog (coming soon, stay tuned…), you’ll see that using Ubuntu Docker images greatly simplifies containerising components. We even use a prebuilt, preconfigured container image for the NGINX web server from the LTS image portfolio maintained by Canonical for up to 10 years.
Beyond providing a secure, stable, and consistent experience across container images, Ubuntu is a safe choice from bare metal servers to containers. Additionally, it comes with hardware optimisation on clouds and on-premises, including Intel hardware.
When you’re ready to deploy deep learning inference in production, binary size and memory footprint are key considerations – especially when deploying at the edge. OpenVINO provides a lightweight Inference Engine with a binary size of just over 40MB for CPU-based inference. It also provides a Model Server for serving models at scale and managing deployments.
OpenVINO includes open-source developer tools to improve model inference performance. The first step is to convert a deep learning model (trained with TensorFlow, PyTorch, etc.) to an Intermediate Representation (IR) using the Model Optimizer. Converting the model from FP32 to FP16 precision at this stage cuts its memory usage in half. You can unlock additional performance by using low-precision tools from OpenVINO. The Post-training Optimisation Tool (POT) and Neural Network Compression Framework (NNCF) provide quantisation, binarisation, filter pruning, and sparsity algorithms. As a result, throughput increases on Intel devices: CPUs, integrated GPUs, VPUs, and other accelerators.
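To see why the FP32-to-FP16 conversion matters, here is a back-of-the-envelope sketch of the storage saving. The parameter count is a made-up example, not a figure from any specific model:

```python
def model_size_mb(num_params: int, bytes_per_param: int) -> float:
    """Approximate on-disk size of a model's weights in megabytes."""
    return num_params * bytes_per_param / (1024 ** 2)

params = 30_000_000  # a hypothetical 30-million-parameter network
fp32_mb = model_size_mb(params, 4)  # single precision: 4 bytes per weight
fp16_mb = model_size_mb(params, 2)  # half precision: 2 bytes per weight

print(f"FP32: {fp32_mb:.1f} MB -> FP16: {fp16_mb:.1f} MB")
```

Halving the bytes per weight halves the weight storage, which is exactly the memory saving the IR conversion delivers before any quantisation or pruning is applied.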
Open Model Zoo provides pre-trained models that work for real-world use cases to get you started quickly. Additionally, sample code in Python and C++ demonstrates how to interact with the models. More than 280 pre-trained models are available to download, covering speech recognition, natural language processing, and computer vision.
For this blog, we will use the pre-trained colourisation models from Open Model Zoo and serve them with Model Server.
OpenVINO and Ubuntu container images
The Model Server – by default – ships with the latest Ubuntu LTS, providing a consistent development environment and an easy-to-layer base image. The OpenVINO tools are also available as prebuilt development and runtime container images.
To learn more about Canonical LTS Docker Images and OpenVINO™, read:
- Intel and Canonical to secure containers software supply chain – Ubuntu blog
- OpenVINO Documentation – OpenVINO™
- Webinar: Secure AI deployments at the edge – Canonical and Intel
Neural networks to colourise a black & white image
Now, back to the matter at hand: how will we colourise grandma and grandpa’s old pictures? Thanks to Open Model Zoo, we won’t have to train a neural network ourselves and will only focus on the deployment. (You can still read about it.)
Our architecture consists of three microservices: a backend, a frontend, and the OpenVINO Model Server (OVMS) to serve the neural network predictions. The Model Server component hosts two different demonstration neural networks to compare their results (V1 and V2). These components all use the Ubuntu base image for a consistent software ecosystem and containerised environment.
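For a feel of how these three microservices fit together, here is an illustrative docker-compose-style sketch. The image names, build paths, and ports are placeholder assumptions, not the article’s actual configuration (serving both the V1 and V2 models would additionally require a Model Server configuration file):

```yaml
# Illustrative only: service names, images, paths, and ports are assumptions.
services:
  ovms:
    image: openvino/model_server:latest
    command: ["--model_path", "/models/colorization",
              "--model_name", "colorization", "--port", "9000"]
    volumes:
      - ./models:/models
  backend:
    build: ./backend        # calls ovms for predictions
    depends_on: [ovms]
  frontend:
    build: ./frontend       # serves the web UI, talks to the backend
    ports:
      - "8080:80"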
A few reads if you’re not familiar with this type of microservices architecture:
- What are container images?
- What is Kubernetes?
gRPC vs REST APIs
The OpenVINO Model Server provides inference as a service via HTTP/REST and gRPC endpoints for serving models in OpenVINO IR or ONNX format. It also offers centralised model management to serve multiple models, multiple versions of the same model, and model pipelines.
The server offers two sets of APIs to interface with it: REST and gRPC. Both APIs are compatible with TensorFlow Serving and expose endpoints for prediction, checking model metadata, and monitoring model status. For use cases where low latency and high throughput matter, you’ll probably want to interact with the model server via the gRPC API: it introduces significantly less overhead than REST. (Read more about gRPC.)
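As a concrete illustration of the TensorFlow Serving-compatible REST API, the sketch below builds a “predict” request. The host, port, model name, and input tensor are all placeholder assumptions for illustration; sending the request is left as a comment:

```python
import json

# Placeholder assumptions: host, port, model name, and input values.
model_name = "colorization"
url = f"http://localhost:9000/v1/models/{model_name}:predict"

payload = {"instances": [[0.1, 0.2, 0.3]]}  # dummy input tensor
body = json.dumps(payload)

# Sending it is a single HTTP POST, e.g.:
#   requests.post(url, data=body)
print(url)
```

The gRPC API carries the same prediction semantics but over a binary protocol, which is where the latency and throughput advantage comes from.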
OpenVINO Model Server is distributed as a Docker image with minimal dependencies. For this demo, we will use the Model Server container image deployed to a MicroK8s cluster. This combination of lightweight technologies is suitable for small deployments. It suits edge computing devices, performing inferences where the data is being produced – for increased privacy, low latency, and low network usage.
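On MicroK8s, deploying that container image could look like the minimal manifest below. The image tag, model path, and port are illustrative assumptions, not a tested production configuration:

```yaml
# Hedged sketch of a minimal Model Server Deployment; values are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ovms
spec:
  replicas: 1
  selector:
    matchLabels: {app: ovms}
  template:
    metadata:
      labels: {app: ovms}
    spec:
      containers:
        - name: ovms
          image: openvino/model_server:latest
          args: ["--model_path", "/models/colorization",
                 "--model_name", "colorization", "--port", "9000"]
          ports:
            - containerPort: 9000
```

A Service in front of this Deployment would then expose the gRPC and REST endpoints to the backend microservice.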
Ubuntu minimal container images
Since 2019, the Ubuntu base images have been minimal by default, with no need for separate “slim” flavours. While there’s room for improvement (stay tuned), the Ubuntu Docker image is less than 30MB to download, making it one of the smallest Linux distributions available as a container.
In terms of Docker image security, reducing the attack surface by trimming size is a fair investment. However, as is often the case, size isn’t everything: maintenance is the most critical aspect. The Ubuntu base image, with its rich and active software ecosystem and community, is usually a safer bet than smaller distributions.
A common trap is to start smaller and install loads of dependencies from many different sources. The end result will have poor performance, use non-optimised dependencies, and not be secure. You probably don’t want to end up effectively maintaining your own Linux distribution… So, let us do it for you.
Colourise black & white pictures: what’s next?
In the next and final blog in this series, we’ll start coding. I promise you’ll come out of reading it with a concrete solution for a Christmas present. You can already begin scanning those old photo albums to craft their colourised versions.
In the final blog, we will:
- Prepare the backend and frontend code (source code included!)
- Craft Dockerfiles based on Ubuntu and LTS images for each component
- Give an overview of container images best practices using multi-stage builds
- Deploy the OpenVINO Model Server on MicroK8s
- Connect all these microservices to complete the picture
- Colourise some black and white photos!
You can also sign up for the on-demand version of our joint OpenVINO demo webinar with Intel. The most advanced (or impatient) readers can get started with the architecture diagram above and use the documentation to implement it themselves.
See you soon…
- Follow the tutorial “Install a local Kubernetes with MicroK8s”
- Read last year’s holiday season blog about running EKS locally