As you may already know, OpenStack Mitaka was recently released. It has been quite an eventful cycle, full of new features, performance increases, and overall reliability improvements in our Windows and Hyper-V OpenStack drivers.
In this blog post, we’d like to present some of the most meaningful changes and updates that come with this release:
- Fibre Channel Storage Support
- Nova Driver iSCSI / Fibre Channel MPIO
- Hyper-V Cluster
- Open vSwitch (OVS) 2.5
- New Features in Windows / Hyper-V Server 2016
- Hyper-V Shielded VMs
- RemoteFX support for Windows / Hyper-V Server 2016
- Nano Server support
- Hyper-V Driver and Neutron vif plug events
- Winstackers and os-win
- PyMI
- Performance & Rally
- Removed support for Windows / Hyper-V Server 2008 R2
- Fuel + Hyper-V
Fibre Channel
Users owning a Fibre Channel (FC) infrastructure have shown their interest in exposing it to Hyper-V instances in OpenStack. We are pleased to announce that this feature is now available!
The Hyper-V driver can now attach FC volumes as pass-through disks, transparently to the guest, and also boot instances from FC-based volumes.
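As a rough sketch of how this is consumed, attaching or booting from an FC-backed volume follows the standard OpenStack workflow once the Cinder backend exposes Fibre Channel; the volume type, image, flavor and network names below are placeholders:
# Create a bootable volume on an FC-backed Cinder backend (names are examples)
openstack volume create --image cirros --size 20 --type fc-backend fc-boot-vol
# Boot a Hyper-V instance directly from the FC volume
openstack server create --flavor m1.small --volume fc-boot-vol --nic net-id=$NET_ID fc-instance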
Nova Driver iSCSI / Fibre Channel MPIO
Starting with this release, the Hyper-V driver can establish multipath iSCSI sessions, a requirement for highly available storage. Depending on the MPIO policy, load balancing can also be provided.
The deployer can now configure which HBAs will be used for the iSCSI sessions.
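As a hedged sketch, the relevant settings typically live in the [hyperv] section of nova.conf on the compute node; the option names below reflect our understanding of the Mitaka driver and should be checked against your release:
[hyperv]
# Establish multipath (MPIO) connections when attaching iSCSI / FC volumes (assumed option name).
use_multipath_io = True
# Optionally restrict which initiator HBAs are used for iSCSI sessions
# (assumed option name and value format), e.g.:
# iscsi_initiator_list = ROOT\ISCSIPRT\0000_0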
Hyper-V Cluster
If you want instances with host-level fault tolerance, the new Hyper-V Cluster Driver is the perfect solution for you. Thanks to the Windows Server Failover Clustering feature available for Hyper-V, instances can automatically migrate from faulty compute nodes to healthy ones within the same cluster and continue working. The driver ensures that Nova is aware of the migration and updates the instance status accordingly. From an SDN perspective, the Hyper-V Cluster Driver works with both L2 agents: neutron-hyperv-agent and openvswitch-agent.
For more details about this feature, requirements and configuration, check out this blog post.
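Configuration-wise, switching to the cluster driver is mostly a matter of pointing Nova at it in nova.conf; the class path below is an assumption based on the compute-hyperv project layout for Mitaka, so please verify it against the blog post linked above:
[DEFAULT]
# Use the clustered Hyper-V driver instead of the regular one
# (class path assumed from the Mitaka compute-hyperv layout).
compute_driver = hyperv.nova.cluster.driver.HyperVClusterDriver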
Open vSwitch (OVS) 2.5
The recently released OVS 2.5 is now available for Microsoft Windows / Hyper-V Server 2012, 2012 R2, and 2016, thanks to the joint effort of Cloudbase Solutions, VMware, and the rest of the Open vSwitch community. It includes all the OVS CLI tools and services, and an updated version of the OVS Hyper-V virtual switch forwarding extension. OVS offers full OVSDB and OpenFlow support, along with native VXLAN, GRE, STT and MPLS encapsulation, fully interoperable with Linux hosts and SDN controllers like OpenDaylight, OVN or NSX.
For more details on the release and how to install it, you can read this blog post.
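To give an idea of what this looks like in practice, here is a minimal, illustrative VXLAN tunnel set up with the standard OVS CLI (the bridge name and remote IP are placeholders; the same commands work on the Windows build):
# Create a tunnel bridge and a VXLAN port towards a peer host
ovs-vsctl add-br br-tun
ovs-vsctl add-port br-tun vxlan-1 -- set interface vxlan-1 type=vxlan options:remote_ip=192.168.100.10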
New Features in Windows / Hyper-V Server 2016
Windows / Hyper-V Server 2016 is just around the corner. We ensure that every Technical Preview released by Microsoft is compatible with OpenStack, including the recently released TP5.
There are quite a few important updates and features that come with this new release. Some of the features we’re most excited about are:
Shielded VMs
Highly secure VMs which guarantee that only users with the proper guest OS credentials can access their encrypted content. Host administrators cannot access a shielded VM's content or console, thus providing significantly better protection against compromised hosts. Jump to the Shielded VMs section.
Nano Server
An extremely thin Windows image which greatly reduces deployment times. It can be used in various OpenStack scenarios: Hyper-V Nova compute nodes, Cinder storage servers, Windows Containers, Manila SMB3 file servers and many others. More about this in the Nano Server section.
RemoteFX new capabilities
This feature allows you to share host GPUs across virtual instances by adding virtual graphics devices, which is particularly useful for VDI scenarios. Windows / Hyper-V Server 2016 greatly improves this feature. More about it in the RemoteFX section.
Windows / Hyper-V Containers
We already have OpenStack support for Windows Containers in nova-docker and are working towards adding support for Windows Containers in Magnum in the upcoming cycle.
Nested Virtualization
This feature is also present in the latest versions of Windows 10. It allows you to create VMs in which you can run nested Hyper-V VMs (seen Inception yet?).
New Networking Controller
Windows Server 2016 comes with a new SDN stack that includes many new capabilities, including VXLAN support. Other features include Software Load Balancer Management, RAS Gateway Management, new Firewall Management, and so on. We will add support for this networking stack during the Newton cycle, with the intent of offering it as an alternative to Open vSwitch in OpenStack deployments. You can read more about this here.
Linux Secure Boot VMs
Previously, only Windows guests were able to boot with Secure Boot enabled; this is now also supported on Linux VMs and implemented in OpenStack.
ReFS
Microsoft's Resilient File System (ReFS) is recommended for Hyper-V hosts in Server 2016. It is much faster for certain operations used by Hyper-V: creating fixed-size VHDx files, file merges, snapshots, etc.
Larger VMs
Pimp your flavors! 🙂 VMs can now be spawned with up to 64 vCPUs, 1 TB RAM, 64 TB virtual hard drives and up to 256 virtual SCSI disks.
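As a rough illustration of those new limits, a flavor approaching them could be created with the standard CLI (the flavor name is a placeholder; --ram is in MB and --disk in GB):
openstack flavor create --vcpus 64 --ram 1048576 --disk 2048 giant.vm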
A comprehensive list of the features and updates that come with the new Windows Server can be found here.
Hyper-V Shielded VMs
Shielded VMs are a unique new feature added in Windows / Hyper-V Server 2016 to provide a secure context for virtual machines in scenarios where the underlying Hyper-V hosts cannot be fully trusted. This is implemented by taking advantage of various Windows Server OS features, including Isolated User Mode, BitLocker and TPM support, along with support for virtual TPM (vTPM) in Hyper-V and new Windows Server roles and features.
This is an amazing feature that could not be left out of our OpenStack Hyper-V driver, which is why we have a dedicated blog post coming out soon with all the details!
RemoteFX support for Windows / Hyper-V Server 2016
We added support for RemoteFX on Windows / Hyper-V Server 2012 R2 back in Kilo, but the highly anticipated Windows / Hyper-V Server 2016 comes with some nifty new features which we're very excited about! Hyper-V remains by far one of the best choices when it comes to VDI deployments.
In case you are not familiar with this feature, RemoteFX allows you to virtualize your GPUs and share them across instances by adding virtual graphics devices. This leads to a richer RDP experience, as well as the benefit of having a GPU on your instances, enhancing GPU-intensive applications.
Some of the new features for RemoteFX in Windows / Hyper-V Server 2016 are:
- 4K resolution option
- 1GB dedicated VRAM (available choices: 64MB, 128MB, 256MB, 512MB, 1GB) and up to another 1GB shared VRAM
- Support for Generation 2 VMs
- OpenGL and OpenCL API support
- H.264/AVC codec investment
- Improved performance
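As a hedged example of how RemoteFX is typically wired into OpenStack on Hyper-V, the driver is switched on in nova.conf and the virtual GPU characteristics are requested through flavor extra specs; the option name and extra spec keys below are assumptions based on the Hyper-V RemoteFX support and may differ in your release:
# nova.conf on the Hyper-V compute node (assumed option name)
[hyperv]
enable_remotefx = True

# Flavor extra specs requesting a RemoteFX vGPU (assumed keys; vram in MB)
openstack flavor set vdi.flavor --property os:resolution=1920x1200 --property os:monitors=1 --property os:vram=1024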
Nano Server support
Nano Server is an extremely thin Windows OS install option available in Windows Server 2016. It has a very small disk footprint (a few hundred MB), which greatly reduces deployment times. It can be used in OpenStack in various ways: Hyper-V Nova compute nodes, Cinder storage servers, Windows Containers, Manila SMB3 file servers and many others. For more information, click here.
You can also build Nano Server images for KVM, ESXi, or bare metal. Go here to learn how to do it.
Hyper-V Driver and Neutron vif plug events
Until now, nova-compute and the neutron-hyperv-agent worked completely asynchronously. In Mitaka, we've enabled the Hyper-V Compute Driver to wait until its vNICs are properly bound. Neutron generates network-vif-plugged events when the neutron-hyperv-agent reports a port as being bound. This does not impact performance, mainly thanks to recent changes to the neutron-hyperv-agent which enable it to bind ports with improved parallelism.
This behaviour can be controlled in the Hyper-V Compute Driver with the following config options in your compute nodes' nova.conf:
[DEFAULT]
# How many seconds to wait for Neutron network-vif-plugged events.
# The Hyper-V Compute Driver will not wait for Neutron events if this option is set to 0.
vif_plugging_timeout = 60
# If set to True, an error will be generated if the Hyper-V Compute Driver does not
# receive the needed Neutron events within the vif_plugging_timeout time limit.
vif_plugging_is_fatal = False
Winstackers and os-win
During the Mitaka cycle, the OpenStack Foundation created an official team under OpenStack's governance called Winstackers, led by our development team. Its mission is to facilitate the integration of Hyper-V, Windows and related components into OpenStack.
os-win is the first project we created under the Winstackers umbrella. It's a library containing Windows / Hyper-V related code that is used in various OpenStack projects such as nova, cinder, neutron, networking_hyperv, compute_hyperv, and ceilometer; it will also be included in future projects. This library is meant to simplify adding new features to OpenStack projects, as well as making it easier for us to maintain the existing ones.
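To give a feel for how the consuming projects use it, here is a minimal sketch based on os-win's utils-factory pattern (assuming os-win is installed and the code runs on a Hyper-V host; check the exact method names against the os-win docs):
# Minimal os-win usage sketch: list the VMs present on the local Hyper-V host.
from os_win import utilsfactory

# The factory returns the utils implementation suited to the local OS version.
vmutils = utilsfactory.get_vmutils()
for instance_name in vmutils.list_instances():
    print(instance_name)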
PyMI
PyMI is a library we've added in this cycle as a drop-in replacement for the old WMI Python module. It maintains the same interface while providing better error handling, better multithreading support, better performance and higher reliability. It is based on the Windows Management Infrastructure (MI) API introduced in Windows Server 2012.
PyMI has been officially accepted as an OpenStack requirement and is required by os-win.
Performance-wise, it is very easy to observe the gains brought by PyMI (see more below). For example, when binding VMs to neutron networks, the neutron-hyperv-agent has to process a series of operations: connect the vNICs to vSwitches, bind VLANs, and bind security group rules to the vSwitch ports; all of which take time, especially adding the security group rules. By simply using PyMI instead of wmi, everything is processed roughly 6 times faster, which is very helpful!
Because PyMI offers the same interface as the old wmi module, it can even be used on previous OpenStack releases (e.g. Juno, Kilo, Liberty)! All you need to do is to run this simple command on your compute nodes:
pip install -U PyMI
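Since PyMI exposes a wmi-compatible module, existing code keeps working unchanged; here is a small illustrative snippet (the namespace and class names are standard Hyper-V WMI ones, but treat the exact compatibility surface as an assumption):
# Query the Hyper-V WMI namespace through PyMI's wmi compatibility layer.
import wmi

conn = wmi.WMI(moniker='root/virtualization/v2')
# Msvm_ComputerSystem instances include the host and the VMs defined on it.
for vm in conn.Msvm_ComputerSystem():
    print(vm.ElementName)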
Performance & Rally
We have updated the way the neutron-hyperv-agent works: it is now able to bind and process neutron ports in parallel using workers. This leads to a significant increase in performance. Including the benefits of using PyMI, the overall performance gain over the previous implementation is roughly 12x.
[Graph: port binding performance with WMI vs. PyMI and native thread workers]
As you can see from the graph, the difference between the old implementation using WMI and the new one that uses PyMI and Native Thread workers is quite impressive!
The number of workers is configurable; you will have to set the following configuration option in your compute node's neutron_hyperv_agent.conf file:
[AGENT]
worker_count = 12
The recommended value for this config option is the number of CPUs on your compute node. Our studies on a compute node with 32 processors show that there is no significant performance gain beyond 12-16 workers.
Additionally, Windows provides power configuration options. The power plan is set to "Balanced" by default, but setting it to "High Performance" will result in an overall performance gain, including improved VM performance. This has been observed in particular when running Rally Hadoop scenarios, where the execution time was lowered by ~10-15%. In order to set the "High Performance" power plan, all you need to do is run the following command on your nodes:
PowerCfg.exe /S 8C5E7FDA-E8BF-4A96-9A85-A6E23A8C635C
Significant performance improvements have been made to the Hyper-V Compute Driver as well. Fewer MI queries are being executed, and the ones that remain are now a lot more efficient.
After including all the above improvements, the execution time for IaaS intensive Rally scenarios has been greatly reduced, with the result that Hyper-V is now one of the fastest OpenStack compute options available.
We have also tested Hadoop Rally scenarios. We're happy to see that Hyper-V comes out on top in those tests as well!
Removed support for Windows / Hyper-V Server 2008 R2
Support for Windows / Hyper-V Server 2008 R2 compute nodes has been deprecated in previous releases, with Liberty being the last one to still support this hypervisor version.
Fortunately, all you have to do is update the compute nodes to Windows / Hyper-V Server 2012 or above (2012 R2 or 2016 is highly recommended).
Fuel + Hyper-V
We recently announced a new partnership with Mirantis which allows enterprises to use the OpenStack Hyper-V Compute Driver on Mirantis OpenStack, with complete interoperability and support.
You can read more about the announcement here and you should also check out this blog post about how you can add Hyper-V nodes to a Fuel deployment.
It is our belief that open source is the best way for cloud computing and software in general to evolve into something bigger and better, and the way we show it is through our dedicated products and services for the world of OpenStack on Windows. We recommend checking out our Hyper-Converged OpenStack Hyper-V product offer to take full advantage of Hyper-V, including all the new features that come with Mitaka!