Yesterday, June Yang, director of product management at Google, announced the beta of E2 VMs, a new machine type family for Google Compute Engine. E2 VMs feature dynamic resource management, which delivers reliable performance and flexible configurations at a lower total cost of ownership (TCO) than any other VM in Google Cloud.
According to Yang, “E2 VMs are a great fit for a broad range of workloads including web servers, business-critical applications, small-to-medium sized databases, and development environments.” Yang further adds, “For all but the most demanding workloads, we expect E2 to deliver similar performance to N1, at a significantly lower cost.”
What are the key features offered by E2 VMs?
E2 VMs are built to offer 31% savings compared to N1, giving them the lowest total cost of ownership of any VM in Google Cloud. The VMs thus deliver sustained performance at a consistently low price point. Unlike comparable options from other cloud providers, E2 VMs can support a high CPU load without complex pricing.
E2 VMs can be tailored with up to 16 vCPUs and 128 GB of memory, provisioning only the resources the user needs, either through predefined configurations or custom machine types. Custom machine types are ideal for workloads that need more processing power or more memory than one machine type level provides but don’t need all of the upgrades that come with the next level.
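For illustration, here is a minimal sketch of how an E2 custom shape could be requested with the google-cloud-compute Python client. The 4 vCPU / 8 GB shape (e2-custom-4-8192), the Debian boot image, and the default network are placeholder choices for the example, not anything prescribed by the announcement:

```python
from google.cloud import compute_v1


def create_e2_custom_vm(project_id: str, zone: str, name: str) -> compute_v1.Instance:
    """Create a VM with an E2 custom shape (here 4 vCPUs and 8 GB of memory)."""
    # Boot disk from a public Debian image (placeholder choice).
    disk = compute_v1.AttachedDisk()
    disk.boot = True
    disk.auto_delete = True
    disk.initialize_params = compute_v1.AttachedDiskInitializeParams(
        source_image="projects/debian-cloud/global/images/family/debian-11",
        disk_size_gb=10,
    )

    # Attach the project's default VPC network.
    nic = compute_v1.NetworkInterface()
    nic.network = "global/networks/default"

    instance = compute_v1.Instance()
    instance.name = name
    # E2 custom shapes follow the pattern e2-custom-<vCPUs>-<memory in MB>.
    instance.machine_type = f"zones/{zone}/machineTypes/e2-custom-4-8192"
    instance.disks = [disk]
    instance.network_interfaces = [nic]

    client = compute_v1.InstancesClient()
    # Wait for the create operation to finish, then return the new instance.
    client.insert(project=project_id, zone=zone, instance_resource=instance).result()
    return client.get(project=project_id, zone=zone, instance=name)
```

The same shape can also be requested from the gcloud CLI by passing --machine-type=e2-custom-4-8192 to gcloud compute instances create.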
How E2 VMs achieve optimal efficiency
Large, efficient physical servers
E2 VMs automatically take advantage of continual improvements in hardware by flexibly scheduling across a zone’s available CPU platforms. As new hardware is deployed, E2 VMs are live migrated to newer and faster servers, allowing them to benefit from these new resources automatically.
Intelligent VM placement
For E2 VMs, Borg, Google’s cluster management system, predicts how a newly added VM will perform on a physical server by observing the CPU, RAM, memory bandwidth, and other resource demands of the VMs already running there. Borg then searches across thousands of servers to find the best location to add the VM. These observations ensure that a newly placed VM is compatible with its neighbors and does not experience interference from them.
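Borg’s actual models are far more sophisticated, but as a loose toy sketch of the general idea, a placement step of this kind can be thought of as scoring candidate hosts by how much headroom they would retain after accepting the new VM and choosing the one least likely to suffer interference (all names and numbers below are invented for illustration):

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Host:
    name: str
    cpu_free: float      # free CPU, as a fraction of the host's capacity
    ram_free: float      # free RAM, as a fraction of the host's capacity
    mem_bw_free: float   # free memory bandwidth, as a fraction of capacity


@dataclass
class VmDemand:
    cpu: float
    ram: float
    mem_bw: float


def pick_host(hosts: list[Host], vm: VmDemand) -> Host | None:
    """Return the host where the VM fits with the most worst-case headroom,
    i.e. the placement least likely to interfere with its neighbors."""

    def fits(h: Host) -> bool:
        return h.cpu_free >= vm.cpu and h.ram_free >= vm.ram and h.mem_bw_free >= vm.mem_bw

    def headroom(h: Host) -> float:
        # Smallest leftover resource after placement; bigger is safer.
        return min(h.cpu_free - vm.cpu, h.ram_free - vm.ram, h.mem_bw_free - vm.mem_bw)

    candidates = [h for h in hosts if fits(h)]
    return max(candidates, key=headroom, default=None)
```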
Performance-aware live migration
After a VM is placed on a host, its performance is continuously monitored so that, if demand for its resources increases, live migration can be used to transparently shift E2 load to other hosts in the data center.
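Continuing the toy model from the placement sketch above (again purely illustrative, with an invented 90% pressure threshold rather than any documented GCP value), the monitoring-and-migration loop amounts to flagging hosts under resource pressure and asking the placement logic for a better home for some of their VMs:

```python
def plan_migrations(
    hosts: list[Host],
    placements: dict[str, list[VmDemand]],  # host name -> VMs currently running there
    cpu_pressure_threshold: float = 0.9,    # illustrative trigger, not a documented value
) -> list[tuple[VmDemand, str, str]]:
    """Return (vm, source_host, destination_host) moves for hosts under CPU pressure.

    A host is treated as under pressure when its free CPU drops below
    1 - threshold; the hungriest VM on it is offered to whichever other
    host pick_host() prefers.
    """
    by_name = {h.name: h for h in hosts}
    moves: list[tuple[VmDemand, str, str]] = []
    for name, vms in placements.items():
        src = by_name[name]
        if src.cpu_free >= 1.0 - cpu_pressure_threshold or not vms:
            continue  # enough headroom, or nothing to move
        vm = max(vms, key=lambda v: v.cpu)  # move the hungriest VM first
        dest = pick_host([h for h in hosts if h.name != name], vm)
        if dest is not None:
            moves.append((vm, name, dest.name))
    return moves
```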
A new hypervisor CPU scheduler
In order to meet E2 VMs’ performance goals, Google has built a custom CPU scheduler with better latency and co-scheduling behavior than Linux’s default scheduler. The new scheduler yields sub-microsecond average wake-up latencies and fast context switching, which keeps the overhead of dynamic resource management negligible for nearly all workloads.
This will be very popular: if your workload can tolerate a *slightly* worse latency SLA, you can get the same VM up to 31% cheaper. E2s should work for the majority of IT workloads. Enjoy. https://t.co/ubVSiFtczD
— Urs Hölzle (@uhoelzle) December 12, 2019
Read the official announcement to learn about the custom VM shapes and predefined configurations offered by E2 VMs. You can also read part 2 of the announcement to learn more about dynamic resource management in E2 VMs.
Read Next
Why use JVM (Java Virtual Machine) for deep learning
Brad Miro talks TensorFlow 2.0 features and how Google is using it internally
EU antitrust regulators are investigating Google’s data collection practices, reports Reuters
Google will not support Cloud Print, its cloud-based printing solution starting 2021
Google Chrome ‘secret’ experiment crashes browsers of thousands of IT admins worldwide