Bug 1669178
| Field | Value | Field | Value |
|---|---|---|---|
| Summary: | [RFE] Q35 SecureBoot - Add ability to preserve variable store certificates. | | |
| Product: | [oVirt] ovirt-engine | Reporter: | Nisim Simsolo <nsimsolo> |
| Component: | BLL.Virt | Assignee: | Milan Zamazal <mzamazal> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Nisim Simsolo <nsimsolo> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.3.0 | CC: | ahadas, berrange, bugs, hunter86_bg, lersek, michal.skrivanek, mprivozn, mzamazal, nsimsolo, tgolembi |
| Target Milestone: | ovirt-4.4.6 | Keywords: | FutureFeature |
| Target Release: | 4.4.6.4 | Flags: | pm-rhel: ovirt-4.4? michal.skrivanek: exception? rule-engine: planning_ack? pm-rhel: devel_ack+ pm-rhel: testing_ack+ |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | ovirt-engine-4.4.6.4 | Doc Type: | Enhancement |
| Doc Text: | The Secure Boot process relies on keys that are normally stored in the VM's NVRAM. However, previous versions of oVirt did not persist NVRAM, so it was newly initialized on every start of a VM. This prevented the use of any custom drivers (e.g. for NVIDIA devices, or PLDP drivers in SUSE) on VMs with Secure Boot enabled. To make Secure Boot VMs usable, oVirt now persists the NVRAM content of UEFI VMs. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-05-06 12:14:09 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Virt | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1933974 | | |
| Bug Blocks: | | | |
Description Nisim Simsolo 2019-01-24 14:27:05 UTC
An alternative could be to write a systemd service that loads custom pubkeys (certificates) from regular files into the system keyring with keyctl. This would have to happen early enough, before the proprietary NVIDIA modules are loaded. The service and its dependencies (the certificate files) might have to be built into the initrd as well (because the NVIDIA modules presumably end up in the initrd too). Thanks.

Comment 2 (Shmuel Melamud):

The following procedure allows installing the NVIDIA driver with Secure Boot enabled:

1. Start the VM.
2. Download and install the NVIDIA driver. During the installation, ask the installer to generate the signing keys and to sign the module.
3. Import the public key into the system keyring:
   mokutil --import /usr/share/nvidia/nvidia-modsign-crt-*.der
4. Reboot the VM from the console (do not power it off).
5. Finish the key enrollment process from the menu that is displayed during boot.
6. Continue booting and ensure that the driver loads normally.
7. While the VM is still running, copy its NVRAM image, located at /var/lib/libvirt/qemu/nvram/<VM_ID>.fd (the VM ID is displayed in the VM details in WebAdmin).
8. Copy the image to /usr/share/OVMF/OVMF_VARS.secboot.fd on all hosts where the VM or its copies may be running (you may want to back up the original file first).
9. If you want to install the driver in other VMs, ask the installer in step 2 to preserve the private key used for signing. Then pass the same keys to the installer in the other VMs using the --module-signing-secret-key= and --module-signing-public-key= options. In this case you will be able to use the same NVRAM image everywhere.

Comment 3 (Laszlo Ersek):

(In reply to Shmuel Melamud from comment #2)
> 7. While the VM is still running, copy its NVRAM image, located at
> /var/lib/libvirt/qemu/nvram/<VM_ID>.fd (VM ID is displayed in the VM
> details in WebAdmin).

I don't recommend this, for the same reason we don't snapshot a read-write mounted, non-fs-frozen disk image without also snapshotting guest RAM, processor, and device state. Capturing a varstore file for use as a template is a valid technique (we use it in the build script of the edk2 package), but the originating domain must be powered down first.

> 8. Copy the image to /usr/share/OVMF/OVMF_VARS.secboot.fd (you may
> want to backup the original file) on all hosts where the VM or its
> copies may be running.

I don't recommend this either. It is possible to offer new variable store template files without corrupting existing package contents, and the new varstore template can be exposed to libvirtd and other management applications through additional firmware metadata descriptor files. (See the examples in "/usr/share/qemu/firmware/".) The new files (the varstore template and its matching descriptor) can be collected into a new RPM, to be installed on RHV hosts. The new descriptor may be given a double-digit priority prefix that causes it to take priority over the descriptor files from the edk2-ovmf package. The varstore files of newly defined UEFI domains will then be instantiated from the extra varstore template that carries the NVIDIA module signing certificate. Additionally, when defining a new domain, the @template attribute of the nvram element can still be pointed, as before, at a specific varstore template file; libvirtd will then create the domain's own varstore from that specific template. (A sketch of this descriptor approach follows this comment.)

The above steps are acceptable as a proof of concept, but please don't use them in production. Overwriting installed package contents (except config files) makes debugging very difficult. Thanks.
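For illustration, here is a minimal sketch of the descriptor approach from comment 3, assuming the firmware descriptor schema documented in QEMU's docs/interop/firmware.json. The file names OVMF_VARS.nvidia.fd and 10-ovmf-nvidia.json are made-up examples, not files shipped by any package:

    # Install the captured varstore template next to the stock templates,
    # without overwriting anything owned by edk2-ovmf.
    install -m 0644 captured_VARS.fd /usr/share/OVMF/OVMF_VARS.nvidia.fd

    # Register it via a descriptor whose numeric prefix sorts ahead of the
    # edk2-ovmf descriptors, so firmware auto-selection prefers it.
    cat > /etc/qemu/firmware/10-ovmf-nvidia.json <<'EOF'
    {
        "description": "OVMF with enrolled NVIDIA module-signing certificate",
        "interface-types": [ "uefi" ],
        "mapping": {
            "device": "flash",
            "executable": {
                "filename": "/usr/share/OVMF/OVMF_CODE.secboot.fd",
                "format": "raw"
            },
            "nvram-template": {
                "filename": "/usr/share/OVMF/OVMF_VARS.nvidia.fd",
                "format": "raw"
            }
        },
        "targets": [
            { "architecture": "x86_64", "machines": [ "pc-q35-*" ] }
        ],
        "features": [ "requires-smm", "secure-boot", "verbose-dynamic" ],
        "tags": []
    }
    EOF

With such a descriptor in place, newly defined UEFI domains get their private varstore instantiated from the custom template, while a domain can still point the @template attribute of its nvram element at any other template explicitly.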
Comment 4 (Shmuel Melamud):

(In reply to Laszlo Ersek from comment #3)
> Capturing a varstore file for use as a template is a valid technique (we
> use it in the build script of the edk2 package), but the originating
> domain must be powered down first.

It is not possible, because the varstore file is removed immediately after VM shutdown.

Comment 5 (Michal Privoznik):

(In reply to Shmuel Melamud from comment #4)
> (In reply to Laszlo Ersek from comment #3)
> > Capturing a varstore file for use as a template is a valid technique (we
> > use it in the build script of the edk2 package), but the originating
> > domain must be powered down first.
>
> It is not possible, because the varstore file is removed immediately after
> VM shutdown.

Libvirt will execute a hook script, if you have one, on domain shutdown - that might be a good place to copy the NVRAM store to a safe location (a minimal sketch of such a hook appears at the end of this report):

https://www.libvirt.org/hooks.html

It would be great to have a unified mechanism to report both the NVRAM store and TPM data [1] from the host to the engine.

[1] https://github.com/oVirt/ovirt-site/pull/2298

*** Bug 1727987 has been marked as a duplicate of this bug. ***

Comment 8 (Arik):

(In reply to Michal Privoznik from comment #5)
> Libvirt will execute a hook script, if you have one, on domain shutdown - that
> might be a good place to copy the NVRAM store to a safe location.
>
> https://www.libvirt.org/hooks.html

Thanks Michal. Is the NVRAM store cleared on domain shutdown, or when the domain is undefined?

(In reply to Arik from comment #8)
> Thanks Michal. Is the NVRAM store cleared on domain shutdown, or when the
> domain is undefined?

The hook script is run on domain shutdown; removal of the NVRAM file is done on domain undefine.

The Vdsm part is done. I'm going to implement the Engine part now, so I'm taking over the bug.

The changes were already made in 4.4.5; they will be verified on 4.4.6.

Verified with:
ovirt-engine-4.4.6.6-0.10.el8ev
vdsm-4.40.60.6-1.el8ev.x86_64
qemu-kvm-5.2.0-15.module+el8.4.0+10650+50781ca0.x86_64
libvirt-daemon-7.0.0-13.module+el8.4.0+10604+5608c2b4.x86_64

Host NVIDIA drivers: NVIDIA-vGPU-rhel-8.4-460.73.02.x86_64
VM NVIDIA drivers (Windows and Linux): GRID 12.0 GA

Verification scenario: Polarion test case added to "Links".

This bugzilla is included in the oVirt 4.4.6 release, published on May 4th 2021. Since the problem described in this bug report should be resolved in oVirt 4.4.6, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.
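For reference, a minimal sketch of the hook-script approach from comment 5, assuming the standard libvirt qemu hook interface (the guest name and the operation arrive as the first two arguments; see https://www.libvirt.org/hooks.html). The NVRAM path pattern and the backup directory are assumptions for illustration, not fixed paths:

    #!/bin/sh
    # /etc/libvirt/hooks/qemu
    # Copy the guest's varstore aside when the guest stops, so it
    # survives a later undefine that removes the NVRAM file.
    guest="$1"
    op="$2"

    if [ "$op" = "stopped" ]; then
        for nvram in /var/lib/libvirt/qemu/nvram/"$guest"*.fd; do
            [ -e "$nvram" ] || continue
            mkdir -p /var/lib/libvirt/nvram-backup
            cp -p "$nvram" /var/lib/libvirt/nvram-backup/
        done
    fi

Per the hooks documentation, such a script must not call back into libvirt; plain file operations like the copy above are safe.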