Bug 1887434
Summary: | LVM IDs and Machine ID are same for all new VMs created from sealed template | | |
---|---|---|---|
Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Marian Jankular <mjankula> |
Component: | ovirt-engine | Assignee: | Shmuel Melamud <smelamud> |
Status: | CLOSED ERRATA | QA Contact: | Nisim Simsolo <nsimsolo> |
Severity: | medium | Docs Contact: | |
Priority: | unspecified | | |
Version: | 4.4.1 | CC: | ahadas, dfodor, fgarciad, mavital, nsimsolo, rjones, smelamud |
Target Milestone: | ovirt-4.4.7 | | |
Target Release: | --- | | |
Hardware: | Unspecified | | |
OS: | Unspecified | | |
Whiteboard: | | | |
Fixed In Version: | ovirt-engine-4.4.7.4 | Doc Type: | If docs needed, set a value |
Doc Text: | | Story Points: | --- |
Clone Of: | | Environment: | |
Last Closed: | 2021-07-22 15:12:18 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | --- |
oVirt Team: | Virt | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | | |
Description (Marian Jankular, 2020-10-12 12:51:24 UTC)
I am not able to reproduce that with current RHV and RHEL 8.2. Both the machine ID and the LVM UUIDs for PVs and VGs do change. Does this happen only for some specific RHEL 8.x version or some specific guest configuration? Also, what is the version of the libguestfs-tools-c RPM on the host used to seal the template?

(In reply to Tomáš Golembiovský from comment #1)
> I am not able to reproduce that with current RHV and RHEL 8.2. Both machine
> ID and LVM UUIDs for PVs and VGs do change. Does this happen only for some
> specific RHEL 8.x version or some specific guest configuration? Also what is
> the version of libguestfs-tools-c RPM on the host used to seal the template?

The hypervisors are RHEL 8.2 and RHV 4.4, and the guest VM is also RHEL 8.2, but it is also reproducible with RHV 4.3 and RHEL 7.7. The version is libguestfs-tools-c-1.40.2-24.module+el8.2.1+7154+47ffd890.x86_64.

Ok, I understand now. The problem is not that the IDs don't change. The problem is that they change only once (when creating the template), so all new VMs have the same IDs, albeit different from the original VM.

* LVM IDs: given the way this works, we would either need to run virt-sysprep when creating a new VM (as opposed to when we create a template), or add first-boot scripts to perform the change (possibly followed by a reboot, which could be tricky to do right).

* machine ID: this is a regression in libguestfs (commit d5ce659e2c1). The ID is first properly removed, but any customize command that runs afterwards will re-initialize it. This should be fixed in libguestfs.

Changing LVM UUIDs is very complex. virt-sysprep claims to do it, but I'm not sure it does it correctly in every case. As for /etc/machine-id, can you describe how you're using virt-customize/virt-sysprep and how it's wrong? Because so much stuff (e.g. kernel updates) doesn't work without a valid machine-id, we currently set it to a random value when we see that /etc/machine-id exists but has zero length; otherwise we don't touch it.
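The behaviour described above (re-seed /etc/machine-id only when the file exists but is empty, otherwise leave it alone) can be sketched roughly like this. This is a minimal illustration, not the actual libguestfs code; the function name and return convention are mine.

```python
# Sketch of the machine-id re-seeding rule described in the thread:
# re-seed only when /etc/machine-id exists with zero length, otherwise
# leave the file untouched. Illustrative only, not the libguestfs code.
import os
import uuid

def maybe_reseed_machine_id(path="/etc/machine-id"):
    if os.path.exists(path) and os.path.getsize(path) == 0:
        with open(path, "w") as f:
            # machine-id is a 128-bit value as 32 lowercase hex digits
            f.write(uuid.uuid4().hex + "\n")
        return True   # re-seeded with a random value
    return False      # missing or already populated: don't touch it
```

This matches the rationale given above: a valid machine-id is needed for things like kernel updates, so an empty file is repopulated, while a missing or non-empty file is left as-is.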
(In reply to Richard W.M. Jones from comment #6)
> As for the /etc/machine-id, can you describe how you're using
> virt-customize/virt-sysprep
> and how it's wrong?

The man page says for the 'machine-id' operation: "Remove the local machine ID." I would argue that this is not happening, as there is still a machine ID configured when virt-sysprep finishes.

> Because so much stuff (eg. kernel updates) doesn't work
> without a
> valid machine-id, we currently set it to a random value when we see that
> /etc/machine-id
> exists but has zero length, and otherwise we don't touch it.

The solution would be to run the machine-id operation last, just before the filesystems are unmounted. That would make sure no other operation recreates it later with a new value. Of course, running virt-sysprep on newly created VMs (instead of on templates) would help us solve both issues.

(In reply to Tomáš Golembiovský from comment #7)
> The man page says for 'machine-id' operation:
>
> Remove the local machine ID.

As this is a default operation, I'm tempted to change its description to "Change the local machine ID to a new random value". However, it would be worth having a new, non-default operation which really removes /etc/machine-id (or maybe leaves it as an empty file). It would suppress the default action of recreating /etc/machine-id. There are a couple of bugs already for this:

https://bugzilla.redhat.com/show_bug.cgi?id=1554546
https://bugzilla.redhat.com/show_bug.cgi?id=1557042

As I understand it, there is only one way to solve all these problems (LVM IDs, machine ID, and there may be others): run virt-sysprep just after a VM is created from a template. So let's go forward with this approach.
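The ordering fix suggested above (run the machine-id operation last, so no later customize step re-creates the file) can be sketched as a simple reordering of an operation list. The operation names and function are illustrative, not the virt-sysprep implementation.

```python
# Sketch of the suggested fix: move the machine-id operation to the end
# of the sysprep pipeline, just before the filesystems are unmounted,
# so no later operation re-creates /etc/machine-id with a new value.
# Operation names are illustrative, not the real virt-sysprep internals.
def order_operations(ops):
    ops = list(ops)          # don't mutate the caller's list
    if "machine-id" in ops:
        ops.remove("machine-id")
        ops.append("machine-id")   # always last
    return ops
```

With this ordering, a pipeline like `["machine-id", "ssh-hostkeys", "customize"]` becomes `["ssh-hostkeys", "customize", "machine-id"]`, so whatever the customize step does to the file, the removal wins.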
There should be an option for 'sealing VM' in the VM creation dialog, and it should be turned off by default, correct? What about VM pools?

(In reply to Shmuel Melamud from comment #9)
> There should be an option for 'sealing VM' in the VM creation dialog and it
> should be turned off by default, correct?

The downside of this approach is that it requires clients to make changes on their side in order to get this. It may also raise the question of what the difference is between sealing the template and sealing the VM that is based on it, unless we deprecate the former. I would rather apply it (i.e., sealing the VM) by default also when clients ask to seal the template using the existing API, so existing clients like the backup providers would get this without making any change.

> What about VM pools?

Yes, that's the problematic part of what I'm suggesting: it can add insignificant overhead to the creation of large VM pools. As create-template is not an operation that is done frequently, how about the following procedure:

1. When the user asks to seal a template, we would still execute virt-sysprep on it and mark it as sealed.
2. By default, we'll run virt-sysprep when creating a VM from a sealed template, unless asked otherwise.
3. By default, we won't run virt-sysprep when creating a VM pool from a sealed template, unless asked otherwise.

The rationale behind this is that VM pools, especially (or maybe only) the stateless ones, are not subject to backups, and you want to create them rather fast (for testing, for classrooms, etc.). In other use cases, we can probably spend a bit more time to achieve "better" sealing of the VM at the expense of higher overhead in its creation. Would that make sense?

(In reply to Arik from comment #10)
> > What about VM pools?
>
> Yes, that's the problematic part with what I'm suggesting, that it can add
> insignificant overhead to the creation of large VM pools.
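The three-step default policy proposed above can be sketched as a small decision function. The parameter names and the explicit-override mechanism are mine, for illustration; this is not the ovirt-engine code.

```python
# Sketch of the proposed defaults for running virt-sysprep at creation
# time: on by default for a VM from a sealed template, off by default
# for a VM pool, and an explicit client request always wins.
# Names are illustrative, not the ovirt-engine implementation.
def should_seal_on_create(template_sealed, is_pool, override=None):
    if override is not None:
        return override        # explicit request from the client wins
    if not template_sealed:
        return False           # unsealed template: nothing to re-seal
    return not is_pool         # default: seal VMs, skip pools
```

This keeps existing clients working unchanged (sealing the template still triggers sealing of VMs created from it) while avoiding the per-VM overhead for large pools unless a client asks for it.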
What I meant to say here is "not negligible" (not "insignificant").

Verified:
ovirt-engine-4.4.7.4-0.9.el8ev
vdsm-4.40.70.4-1.el8ev.x86_64
qemu-kvm-5.2.0-16.module+el8.4.0+11536+725e25d9.2.x86_64
libvirt-daemon-7.0.0-14.1.module+el8.4.0+11095+d46acebf.x86_64

Verification scenario: see https://bugzilla.redhat.com/show_bug.cgi?id=1887434#c14

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: RHV Manager (ovirt-engine) security update [ovirt-4.4.7]), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2865