Azure Confidential Computing vs Google Cloud Confidential Computing
I rarely write product comparisons, especially between two giants like Microsoft and Google. But the recent announcement of Google Confidential Computing at the Cloud Next ’20 event caught my attention, not least because it bears practically the same name as Microsoft’s equivalent offering: Azure Confidential Computing.
What is Confidential Computing (and why we need it)
Confidential computing adds new data security capabilities to your applications by introducing trusted execution environments (TEEs) and encryption mechanisms to protect your data while in use. TEEs, also known as enclaves, are hardware or software implementations that safeguard data being processed from access outside the enclave. An enclave provides a protected container by securing a portion of the processor and memory. Only authorized code is permitted to run and access data, so code and data are protected against viewing and modification from outside of the TEE.
With the announcement in October 2018, Microsoft became the first cloud provider to offer protection of data in use, in a service aptly called Azure Confidential Computing. As the official post on the Azure blog describes it, “[..] Azure confidential computing protects your data while it’s in use. It is the final piece to enable data protection through its lifecycle whether at rest, in transit, or in use. It is the cornerstone of our ‘Confidential Cloud’ vision, which aims to make data and code opaque to the cloud provider.”
The concept of “opaque data and code” is revolutionary. For the first time, we can trust the cloud because no one, including the cloud provider, can read our data. It is encrypted at every stage, and only authorized applications hold the key to decrypt and access it. This is achieved in two ways:
· Hardware: Thanks to a partnership with Intel, Azure can offer hardware-protected virtual machines that run on Intel SGX technology. Intel Software Guard Extensions (SGX) is a set of extensions to the Intel CPU architecture that aims to provide integrity and confidentiality guarantees for sensitive computation performed on a computer where all the privileged software (kernel, hypervisor, etc.) might potentially be compromised (a quick capability-check sketch follows this list).
· Hypervisor: Virtualization Based Security (VBS) is a software-based TEE that’s implemented by Hyper-V in Windows 10 and Windows Server 2016. Hyper-V prevents administrator code running on the computer or server, as well as local administrators and cloud service administrators, from viewing the contents of the enclave or modifying its execution.
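As a concrete illustration of the hardware option, here is a minimal sketch (my own, not taken from Microsoft’s documentation) that asks the processor, via the CPUID instruction, whether it advertises SGX support. Note that CPUID only reports CPU capability: SGX must still be enabled in firmware, and in the cloud you would simply pick an SGX-capable VM size such as Azure’s DC-series.

```cpp
// sgx_check.cpp - minimal sketch: query CPUID for Intel SGX support.
// Build (Linux/x86-64, GCC or Clang): g++ -O2 -o sgx_check sgx_check.cpp
// CPUID reports what the CPU supports; SGX must still be enabled in
// firmware, and inside a VM the hypervisor must expose it to the guest.
#include <cpuid.h>
#include <cstdio>

int main() {
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

    // Structured Extended Feature Flags: leaf 7, sub-leaf 0.
    // EBX bit 2 indicates SGX support.
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        std::puts("CPUID leaf 7 not available on this CPU.");
        return 1;
    }
    bool sgx = (ebx >> 2) & 1;
    std::printf("SGX supported by CPU: %s\n", sgx ? "yes" : "no");

    if (sgx) {
        // SGX capability enumeration: leaf 0x12, sub-leaf 0.
        // EAX bit 0 = SGX1 instruction set, bit 1 = SGX2 (dynamic memory management).
        if (__get_cpuid_count(0x12, 0, &eax, &ebx, &ecx, &edx)) {
            std::printf("  SGX1: %s, SGX2: %s\n",
                        (eax & 1) ? "yes" : "no",
                        (eax & 2) ? "yes" : "no");
        }
    }
    return 0;
}
```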
The potential applications of confidential computing are virtually unlimited. Whenever there is a requirement to protect sensitive data, trusted execution environments are the building blocks on top of which it’s possible to enable new secure business scenarios and use cases. Many industries and technologies can benefit from Azure Confidential Computing. In finance, for example, personal portfolio data and wealth management strategies would no longer be visible outside of a TEE. Healthcare organizations can collaborate by sharing their private patient data, like genomic sequences, to gain deeper insights from machine learning across multiple data sets without risk of data being leaked to other organizations. Combining multiple data sources to support secure multi-party machine learning scenarios allows organizations to share their datasets confidentially: the machine learning service obtains higher prediction accuracy by working with a model trained on the larger combined data set, while each organization still preserves the privacy of its own customers’ information (data is shared in encrypted form, visible only to the machine learning service). In oil and gas, and in IoT scenarios, sensitive seismic data that represents the core intellectual property of a corporation can be moved to the cloud for processing, with the protection of encrypted-in-use technology.
Confidential Computing can help you transform the way your organization processes data in the cloud while preserving confidentiality and privacy.
Google Confidential Computing
Google recently announced its newest cloud security offering: Confidential Virtual Machines. The idea is simple: as we put more and more of our work and data in the cloud, we need data to be encrypted not just at rest and in transit, but also in memory while being processed. Where Azure Confidential Computing works with Intel SGX, Google Cloud collaborated with AMD on the Secure Encrypted Virtualization (SEV) technology that makes it all possible. SEV is hardware-accelerated memory encryption for data-in-use protection that takes advantage of new security components available in AMD EPYC processors. It provides:
· AES-128 encryption engine embedded in the memory controller to automatically encrypt and decrypt data in the main memory when an appropriate key is provided.
· AMD Secure Processor for cryptographic functionality to secure key generation and key management.
The way it works is that AMD Secure Memory Encryption, the technology that encrypts system memory, is enabled at the BIOS or operating system level. At boot time, a key is generated by the AMD Secure Processor. SEV then uses one key per virtual machine to isolate guests and the hypervisor from one another. Encryption happens at two levels: (i) specific memory pages, and (ii) register content when a VM stops running. This prevents the leakage of information in CPU registers to components like the hypervisor, and can even detect malicious modifications to a CPU register state.
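To mirror the SGX check earlier, the following sketch (again my own illustration, not Google or AMD code) queries AMD’s extended CPUID leaf 0x8000001F, which reports whether the processor supports SME, SEV and SEV-ES (the register-state protection just mentioned), plus the position of the C-bit used to mark pages as encrypted.

```cpp
// sev_check.cpp - minimal sketch: query CPUID for AMD SME/SEV capabilities.
// Build (Linux/x86-64, GCC or Clang): g++ -O2 -o sev_check sev_check.cpp
// This reports what the processor (or what the hypervisor exposes to a
// guest) advertises; whether SEV is actually active for a given VM is
// controlled by the hypervisor and firmware.
#include <cpuid.h>
#include <cstdio>

int main() {
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

    // AMD encrypted-memory capabilities live in extended leaf 0x8000001F.
    if (!__get_cpuid(0x8000001F, &eax, &ebx, &ecx, &edx)) {
        std::puts("CPUID leaf 0x8000001F not available (not an AMD EPYC-class CPU?).");
        return 1;
    }

    bool sme    = eax & (1u << 0);  // Secure Memory Encryption
    bool sev    = eax & (1u << 1);  // Secure Encrypted Virtualization
    bool sev_es = eax & (1u << 3);  // SEV Encrypted State (register protection)
    unsigned cbit = ebx & 0x3F;     // Position of the C-bit in page-table entries

    std::printf("SME: %s, SEV: %s, SEV-ES: %s, C-bit position: %u\n",
                sme ? "yes" : "no", sev ? "yes" : "no",
                sev_es ? "yes" : "no", cbit);
    return 0;
}
```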
Based on this technology, Google’s Compute Engine will start to offer new Confidential Virtual Machines (cVMs) built upon SEV. The initial offering consists of Ubuntu 18.04/20.04, COS v81, and RHEL 8.2. All of these confidential VMs are built on top of GCP’s Shielded VMs. These VMs are enhanced with security controls that help defend against rootkits and bootkits. This is done by hardening the operating system image and verifying firmware, kernel binaries, and drivers’ integrity.
Other operating system images will be available in due course. Google is the first cloud provider to offer SEV-enabled VMs. Google is also promoting the use of its Asylo open-source framework for confidential computing, promising to make deployment easy while keeping performance high.
Google says it believes the future of cloud computing will increasingly shift to private, encrypted services that give users confidence that they are always in control over the confidentiality of their data.
Asylo
Asylo is Google’s open-source framework for confidential computing. The project works with emerging trusted execution environments (TEEs) to lock down systems. Asylo provides:
· The ability to execute trusted workloads in an untrusted environment, inheriting the confidentiality and integrity guarantees from the security backend, i.e. the underlying enclave technology.
· Ready-to-use containers, an open-source API, libraries, and tools so you can develop and run applications that use one or more enclaves (a minimal host-side sketch follows this list).
· A choice of security backends.
· Portability of your application’s source code across security backends.
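To give a feel for the programming model, here is a hedged host-side sketch modeled loosely on Asylo’s public quickstart: load an enclave binary, enter it with a protobuf message, and finalize it. Class and method names (EnclaveManager, SgxLoader, EnterAndRun, and so on) follow the quickstart I used and may differ in later Asylo releases; the loader class in particular depends on the chosen security backend.

```cpp
// asylo_host_sketch.cpp - hedged sketch of the host side of an Asylo
// application, modeled loosely on Asylo's public quickstart. Names are
// assumptions and may vary across Asylo versions.
#include <iostream>
#include <string>

#include "asylo/client.h"      // EnclaveManager, EnclaveClient
#include "asylo/enclave.pb.h"  // EnclaveInput, EnclaveOutput, EnclaveFinal

int main(int argc, char* argv[]) {
  if (argc < 2) {
    std::cerr << "usage: " << argv[0] << " <path-to-enclave-binary>\n";
    return 1;
  }
  const std::string enclave_path = argv[1];

  // 1. Initialize the enclave manager, which owns all enclave instances.
  asylo::EnclaveManager::Configure(asylo::EnclaveManagerOptions());
  auto manager_result = asylo::EnclaveManager::Instance();
  if (!manager_result.ok()) {
    std::cerr << "Could not obtain the EnclaveManager\n";
    return 1;
  }
  asylo::EnclaveManager* manager = manager_result.ValueOrDie();

  // 2. Load the enclave binary with a backend-specific loader.
  //    SgxLoader is an assumption here; the simulated backend, for
  //    example, uses a different loader class.
  asylo::SgxLoader loader(enclave_path, /*debug=*/true);
  asylo::Status status = manager->LoadEnclave("demo_enclave", loader);
  if (!status.ok()) { std::cerr << "LoadEnclave failed\n"; return 1; }

  // 3. Enter the enclave: input and output travel as protobuf messages,
  //    so only code inside the enclave ever sees the plaintext payload.
  asylo::EnclaveClient* client = manager->GetClient("demo_enclave");
  asylo::EnclaveInput input;   // real apps attach data via proto extensions
  asylo::EnclaveOutput output;
  status = client->EnterAndRun(input, &output);
  if (!status.ok()) { std::cerr << "EnterAndRun failed\n"; return 1; }

  // 4. Finalize and destroy the enclave.
  asylo::EnclaveFinal final_input;
  manager->DestroyEnclave(client, final_input);
  return 0;
}
```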
The Asylo framework allows developers to easily build portable applications that can be deployed on a variety of software and hardware backends. This video goes into detail on what Asylo is:
I’m a strong advocate of ACC — Azure Confidential Computing. I have used this technology in a number of projects, and shared my experience with the community in a variety of contributions:
· Video on my YouTube channel: Azure Confidential Computing
· A couple of articles for the Microsoft MSDN Magazine:
o Protect Your Data with Azure Confidential Computing
o Secure Multi-Party Machine Learning with Azure Confidential Computing
· An open source contribution on GitHub about the multi-party Machine Learning use case
ACC is built on top of Intel SGX, and SGX is built around the idea of creating “enclaves” of protected code and data. One or more ranges of physical memory are set aside as the enclave page cache; the contents of that memory, whether data or code, are only accessible to code that is, itself, located within the enclave. That code is callable from outside the enclave, but only via a set of entry points defined when the enclave is set up. Memory within the enclave is encrypted using an engine built into the processor itself; the key that is used is generated at power-on and is not available to any running code. As a result, according to Intel, the contents of the enclave are “protected even when the BIOS, VM, OS, and drivers are compromised.”
For comparison, AMD Secure Memory Encryption is, in a sense, a simpler mechanism. Rather than establishing enclaves, a system with SME simply marks a range of memory for encryption by setting a bit in the relevant page-table entries. The memory controller will then encrypt all data stored to those pages using a key generated at power-on time; all data read from the range will be transparently decrypted. No code running on the processor, not even the kernel, has access to the encryption key. Enabling encryption is said to slightly increase memory latency, but AMD and Google suggest that the performance impact will normally be quite small.
Open Enclave SDK
Microsoft has also released the Open Enclave SDK, an open-source project aimed at creating a single, unified enclave abstraction for developers building TEE-based applications in C and C++. The Open Enclave SDK exposes an API set that allows developers to build their application once and deploy it on multiple platforms (Linux and Windows) and environments, from cloud to hybrid to edge. The intention is to be a vendor-agnostic solution that supports enclave applications on both Linux and Windows. The current implementation of Open Enclave supports Intel SGX, with preview support for OP-TEE OS on ARM TrustZone.
An enclave is a protected memory region that provides confidentiality for data and code execution.
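To show what “authorized code” and “entry points” look like in practice, here is a hedged sketch of an Open Enclave host program, modeled on the SDK’s public helloworld sample. The names helloworld_u.h, oe_create_helloworld_enclave and enclave_helloworld are generated by the SDK’s oeedger8r tool from a hypothetical helloworld.edl file that declares a single ECALL; treat them as placeholders for your own enclave definition.

```cpp
// host.cpp - hedged sketch of an Open Enclave host, modeled on the SDK's
// public "helloworld" sample. The *_u.h header and the oe_create_*_enclave
// and enclave_helloworld functions are generated from a hypothetical EDL
// file and are placeholders, not part of the SDK itself.
#include <openenclave/host.h>
#include <cstdio>

#include "helloworld_u.h"  // untrusted (host-side) bridge generated from the EDL

int main(int argc, const char* argv[]) {
    if (argc != 2) {
        std::fprintf(stderr, "usage: %s <enclave-image-path>\n", argv[0]);
        return 1;
    }

    oe_enclave_t* enclave = nullptr;

    // Create the enclave from the signed enclave image. OE_ENCLAVE_FLAG_DEBUG
    // is for development only; production enclaves drop it.
    oe_result_t result = oe_create_helloworld_enclave(
        argv[1], OE_ENCLAVE_TYPE_AUTO, OE_ENCLAVE_FLAG_DEBUG,
        nullptr, 0, &enclave);
    if (result != OE_OK) {
        std::fprintf(stderr, "oe_create_helloworld_enclave: %s\n", oe_result_str(result));
        return 1;
    }

    // Transition into the enclave through the ECALL declared in the EDL.
    // Only this generated entry point can cross the trust boundary.
    result = enclave_helloworld(enclave);
    if (result != OE_OK)
        std::fprintf(stderr, "enclave_helloworld: %s\n", oe_result_str(result));

    // Tear the enclave down and release its protected memory.
    oe_terminate_enclave(enclave);
    return result == OE_OK ? 0 : 1;
}
```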
Conclusion
The concepts of confidential computing, trusted execution environments, and enclaves are appealing in principle, but the lack of an industry standard has hampered the adoption of this emerging technology, despite its promise. Dependence on specific hardware, complexity, and the scarcity of development tools for building applications that run in confidential computing environments have not helped broader adoption either.
It’s still too early to say which offering is best. The hardware manufacturers also don’t necessarily work together to ensure their technologies are interoperable, which makes any comparison even more challenging.
Nor is security 100% guaranteed. With Confidential Computing turned on, data is decryptable on the chip itself but remains encrypted to everyone else, including the cloud provider, since no one, not even the system admin of the virtual machine, can access the decryption keys, which are stored only on the chip. All of this could make the chip’s own security a single point of failure, though. Last year, a new form of cyber attack called Plundervolt gave attackers access to sensitive data stored in an Intel SGX secure enclave. The Plundervolt web site nicely describes how a little undervolting of a CPU can cause a lot of problems.
AMD is not immune either, and this is by design. Because Secure Memory Encryption is meant to transparently encrypt memory pages, it does not defend against a compromised OS kernel the way Intel SGX does: an attacker who takes over the kernel can still read data in the clear. SME is designed instead to protect against cold-boot attacks, snooping on the memory bus, and the disclosure of transient data left in memory pages.
The direction that hardware development is taking offers some encouragement. Those of us who have despaired of ever truly securing our software may well be right; we need levels of defence that come into play when the software has failed. Done right, hardware-based defences can come to the rescue without taking away our power to secure and control our own systems.