:author:
mcgillij
:category:
Linux
:date:
2020-12-28 14:52
:tags:
Linux, VFIO, Virtualization, Tutorial, BIOS, #100DaysToOffload
:slug:
vfio-part4
:cover_image:
oddsandends.jpg
:summary:
BIOS and optimizations
While most of the other steps are fairly universal, there are a couple of places where the configuration or settings may diverge based on your goals or the hardware you have available.
======================================================================
I'll start with BIOS settings. I'm using an x570 Aorus Master from Gigabyte, so the settings I outline will be most pertinent to those using similar boards. However, it's just a matter of finding where your vendor is hiding the equivalent settings in their UEFI UI.
First things first, you'll need to enable your CPU-specific virtualization instructions; for me that's SVM (on Intel boards the equivalent is VT-x).
Next we'll want to hunt down the IOMMU settings, which live in a couple of places in the BIOS.
[image: IOMMU in NBIO settings]
Make sure these settings are set to "Enabled" and not "Auto", as "Auto" is a workaround for Windows machines and doesn't fully enable them.
Depending on how your GPUs are laid out across your PCIe slots and which one you're going to pass through, you may want to change which PCIe slot is used for the initial display; this helps keep the passthrough GPU from being bound to your kernel's driver.
Lastly, we'll want to disable CSM support, as it will likely interfere with booting your machine fully in UEFI mode.
From here, this should be your baseline configuration for working GPU passthrough on the x570 Aorus Master or any other Gigabyte board that shares the same BIOS. You can save this as a profile, then toggle whatever other settings you want to get working on your board (XMP, overclocking, etc.). But for the sake of getting passthrough working, this is the baseline I work from before tweaking anything else.
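Back on the Linux side, it can help to confirm the IOMMU actually came up after these BIOS changes. On most AMD systems recent kernels enable it automatically, but as a sketch, some distros want it forced on via kernel parameters (this assumes GRUB as your bootloader; adjust for whatever your distro uses):

```
# /etc/default/grub -- assumes GRUB; regenerate your config afterwards
# (e.g. grub-mkconfig -o /boot/grub/grub.cfg, or update-grub on Debian-likes)
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
```

After rebooting, `dmesg | grep -i iommu` should show the IOMMU being initialized, and /sys/kernel/iommu_groups/ should be populated.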
======================================================================
There are a number of host-level optimizations you can apply to your VM once it's up and running and you've validated that everything works properly.
CPU pinning is one of them, and the folks over at the Arch Wiki have a great section on this that's worth reading. Don't worry about the information being Arch-specific; most of the directions are portable to just about any distro around. You can go even further and isolate the CPUs as well, but it all depends on the workloads you're going to be dealing with, as there's no silver bullet for "optimal" performance in every case.
There are some AMD CPU-specific optimizations available as well. Generally you'll want to start by configuring the CPU section in virt-manager with settings similar to this:
To enable AMD SMT (hyperthreading), you'll need to manually edit the XML file for your virtual machine ("virsh edit win10") and add an extension to the cpu block. Add "<feature policy='require' name='topoext'/>" below the topology line, as seen below.
```
<cpu mode='host-passthrough' check='none'>
  <topology sockets='1' cores='8' threads='2'/>
  <feature policy='require' name='topoext'/>
</cpu>
```
Once that's in place, you can change your configuration to look like:
Not only does this more accurately correspond to your actual CPU topology, you should also get a bit of a bump in performance.
There are a ton of different ways to configure your networking: various bridge devices, actual hardware passthrough, NATing, etc. However, I personally find the best way to get networking on a Windows guest in particular is to use the VirtIO driver from Red Hat.
Selecting this option before you create your VM has the added benefit of leaving your Windows 10 VM without any networking right after installation (until you install the VirtIO drivers). That gives you a bit of breathing room to clean up and block things like Windows Update before Microsoft downloads a whole bunch of garbage onto your machine.
[image: Actual footage from Windows 10 installation]
This gives you a chance to set up mitigations and turn off the services that generally install that stuff during installation. The VirtIO driver provided by Red Hat is also very well optimized. Alternatively, if your motherboard has multiple NICs, you can pass one through directly to the VM; however, the VirtIO driver will give it a run for its money when it comes to performance.
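For reference, a minimal sketch of what a VirtIO NIC looks like in the domain XML (this assumes libvirt's stock "default" NAT network; a bridge would use a different source):

```
<interface type='network'>
  <!-- 'default' is libvirt's built-in NAT network; swap in your own -->
  <source network='default'/>
  <model type='virtio'/>
</interface>
```

In virt-manager this is just the NIC's "Device model" dropdown set to virtio.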
If you still have (or want) the ability to dual-boot into your Windows disk, you have a couple of options. Passing through the entire PCIe controller for your NVMe device works great. Or you can pass it in as a block device; this will, however, require you to install the VirtIO drivers mentioned previously during the Windows installation process. There is very little performance loss from doing this, and you get the added benefit of easier snapshots/backups and better support for moving the VM around.
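The block-device route looks something like this in the domain XML. The device path here is a placeholder; using a stable /dev/disk/by-id/ path for your actual drive is safer than a bare /dev/nvmeXnY name, which can change between boots.

```
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <!-- placeholder path: point this at your real Windows disk -->
  <source dev='/dev/nvme0n1'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

The virtio bus on the target is what makes the VirtIO storage drivers necessary during Windows setup.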
======================================================================
Most of the guest-level optimizations are pretty standard Windows things.
Here are a couple of great resources for removing the garbage from Windows; these are best applied before putting Windows on the network and letting it run any updates. However, I highly recommend going over the configuration of these applications, as they could leave your VM in a bad state. Like anything else, don't blindly run things from the internet without reviewing them first.
Either of these works pretty well for getting rid of most of the telemetry Microsoft puts in its products. Since we're running in a VM, we can limit this even further by applying our own firewall rules to the VM, but that's out of scope for this post.