Before reading the conclusions, we recommend reading the previous posts in this series for more details about the environment and the test scenarios: Introduction, Scenario 1, Scenario 2, Scenario 3.
It’s time to discuss all the results obtained so far. The table below shows the average time in seconds for each scenario (green: better, red: worse):
| | KVM | Hyper-V on Windows Server 2012 R2 | Hyper-V on Windows Server 2016 |
|---|---|---|---|
| Scenario 3 (part 1) | 119,247 | 114,310 | 114,700 |
| Scenario 3 (part 2) | 221,638 | 219,237 | 218,941 |
The first two scenarios focus mostly on IaaS operations (booting VMs, SSHing into the VMs over software-defined tenant networks, deleting the VMs, etc.), so the performance differences are more visible there, with Hyper-V Server 2016 coming out on top.
The third scenario focuses mostly on guest workloads (Hadoop in this case), so the differences in IaaS operations have little impact on the overall result. It’s good to see that the performance of Linux guests on KVM and Hyper-V is almost identical.
Performance is a feature, so hopefully these results will help you decide which hypervisor to choose for your next OpenStack deployment. One of the great advantages of OpenStack is that different hypervisors can be mixed in the same infrastructure, so our recommendation is to benchmark more than one option based on your preferences (KVM, Hyper-V, ESXi, etc.) and then decide. Hopefully our Rally scenarios will ease your work!
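To give an idea of what a Rally benchmark looks like, here is a minimal task definition sketch for one of the built-in scenarios (`NovaServers.boot_and_delete_server`). The flavor and image names, iteration counts, and tenant layout are illustrative assumptions, not the exact configuration used in this series:

```json
{
  "NovaServers.boot_and_delete_server": [
    {
      "args": {
        "flavor": {"name": "m1.small"},
        "image": {"name": "cirros"}
      },
      "runner": {
        "type": "constant",
        "times": 10,
        "concurrency": 2
      },
      "context": {
        "users": {"tenants": 2, "users_per_tenant": 2}
      }
    }
  ]
}
```

A task like this can be launched with `rally task start <file>`, and running the same definition against clouds backed by different hypervisors gives directly comparable timings.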
As for Hyper-V, we’ve come a long way here at Cloudbase with our work on the OpenStack compute driver, and we’re really happy with the results, especially considering that OpenStack is traditionally tied to Linux and KVM.
Hypervisors are mostly seen as a commodity these days, but there are still significant differences that matter depending on the scenario. For example, the unique features that Hyper-V Server 2016 provides (Failover Clustering, Shielded VMs, RemoteFX, etc.) make it particularly well suited for enterprise deployments.
Here are a few recommended blog posts about Hyper-V and OpenStack for further reading:
To begin with, we’re going to publish VMware vSphere / ESXi benchmarks and see how those compare against KVM and Hyper-V, followed by some Windows guest workload benchmarks. After that, what about some hyper-converged storage performance comparisons, for example Ceph + KVM vs Storage Spaces Direct + Hyper-V? And networking, NFV? It looks like this performance series is going to go on for quite some time!