Previously, it was possible for a host to send prepareVolume for a logical volume that had not finished being created, causing the operation to fail. An initial tag has been added to the lvcreate command so that other hosts can identify the volume as incomplete and ignore it.
Created attachment 644816: logs
Description of problem:
createVolume has not finished on the SPM when we send prepareVolume, so prepareVolume fails because the volume does not exist yet.
Version-Release number of selected component (if applicable):
si24.1
How reproducible:
100%
Steps to Reproduce:
1. Create 3 iSCSI storage domains.
2. Create a template with a 15 GB thin-provisioned disk and an OS installed, then create 20 pool VMs on one domain and 2 more cloned VMs on a second domain.
3. Run the VMs on 2 hosts and have them write to disk (opening Explorer in the VMs is enough; no heavy writing is needed).
4. From the VMs tab -> Disks, move each of the VMs' disks to the third domain.
Actual results:
Some of the VMs fail to create the live snapshot because prepareVolume was sent to the HSM before createVolume finished creating the volume.
Expected results:
We should not send prepareVolume before confirming that createVolume completed successfully.
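One way to express this ordering: poll the status of the asynchronous SPM task that createVolume started and only issue prepareVolume once it reports success. This is a minimal sketch; the client object and its method names (get_task_status, prepare_volume) are hypothetical stand-ins, not the actual engine or VDSM API.

import time

def prepare_after_create(client, task_id, vol_id, timeout=300, interval=2):
    # Wait for the createVolume task to finish before preparing the volume.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = client.get_task_status(task_id)  # hypothetical call
        if status == "finished":
            return client.prepare_volume(vol_id)  # hypothetical call
        if status in ("failed", "aborted"):
            raise RuntimeError("createVolume task %s ended as %s" % (task_id, status))
        time.sleep(interval)
    raise TimeoutError("createVolume task %s did not finish in time" % task_id)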
Additional info: logs
Since there are a lot of tasks running at the same time and we have already debugged this, here is the relevant information:
In the SPM log, the createVolume is on Thread-4416; the task is 11b6930d-5f8c-435e-9de4-46c6cc34684f.
The prepareVolume for the same action in the HSM log is on Thread-4275.
This is the error for the volume that failed:
VolumeMetadataReadError: Error while processing volume meta data: ('missing offset tag on volume 710f7022-29ae-40cd-9b41-3a8a22fd8cc1',)
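The "missing offset tag" error is consistent with the HSM having cached the LV before its tags were added. A minimal sketch of the kind of lookup that then fails, assuming metadata-offset tags carry an "MD_" prefix (an assumption for illustration, not taken from this report):

def get_meta_offset(lv_tags):
    # Search the LV's tags for the metadata-offset tag; "MD_" is an assumed
    # prefix used only for this sketch.
    for tag in lv_tags:
        if tag.startswith("MD_"):
            return int(tag[len("MD_"):])
    # An LV cached before its tags were written has no tags yet, so the
    # lookup fails, matching the VolumeMetadataReadError above.
    raise ValueError("missing offset tag on volume")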
Engine log:
This is the SnapshotVDSCommand for the VM:
2012-11-14 12:15:36,360 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (pool-4-thread-23) START, SnapshotVDSCommand(HostName = gold-vdsd, HostId = 2d81a26a-2c20-11e2-aeab-001a4a169741, vmId=3d393cd1-666e-4283-a2b6-ce99e74656f4), log id: 57fa069c
And here is the failure in the engine log:
2012-11-14 12:15:47,105 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (pool-4-thread-23) FINISH, SnapshotVDSCommand, log id: 57fa069c
2012-11-14 12:15:47,105 ERROR [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand] (pool-4-thread-23) Wasnt able to live snpashot due to error: VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, rolling back.
From the description here, it seems as though the bug is between the two components of Live Snapshots, and not necessarily specific to Live Storage Migration.
The problem is a race between block volume creation and VDSM's LVM cache refresh on another host.
The block volume creation process has two stages:
1. lvcreate - creates the volume
2. lvchange - adds the metadata tags
If another host refreshes its LVM cache between stages 1 and 2, it ends up with an incomplete LV in its cache, and the next time it needs that LV it will not refresh the cache.
The solution is to add an initial tag in lvcreate so that other hosts can identify such LVs as partial and ignore them; see the sketch below.
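A minimal sketch of the described fix, assuming it drives the LVM command line directly; the tag name and helper names below are illustrative assumptions, not VDSM's actual implementation.

import subprocess

# Assumed name for the init tag added at lvcreate time (hypothetical).
UNINITIALIZED_TAG = "OVIRT_VOL_UNINITIALIZED"

def create_block_volume(vg, lv, size_mb, metadata_tags):
    # Stage 1: create the LV already carrying the init tag, so a host that
    # refreshes its LVM cache at this point sees the LV marked as incomplete.
    subprocess.check_call([
        "lvcreate", "-L", "%dm" % size_mb, "-n", lv,
        "--addtag", UNINITIALIZED_TAG, vg,
    ])
    # Stage 2: add the metadata tags, then drop the init tag only once the
    # volume is fully set up.
    if metadata_tags:
        cmd = ["lvchange"]
        for tag in metadata_tags:
            cmd += ["--addtag", tag]
        cmd.append("%s/%s" % (vg, lv))
        subprocess.check_call(cmd)
    subprocess.check_call(
        ["lvchange", "--deltag", UNINITIALIZED_TAG, "%s/%s" % (vg, lv)])

def usable_lvs(vg):
    # Consumer side: list the LVs in the VG and skip any still marked partial.
    out = subprocess.check_output(
        ["lvs", "--noheadings", "-o", "lv_name,lv_tags", vg]).decode()
    for line in out.splitlines():
        fields = line.split()
        if not fields:
            continue
        tags = fields[1].split(",") if len(fields) > 1 else []
        if UNINITIALIZED_TAG not in tags:
            yield fields[0]

With this in place, a host that refreshes its cache between the two stages still caches the LV, but the init tag tells it to treat the volume as partial and ignore it until the tag is removed.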
Comment 6 Eduardo Warszawski 2012-11-18 22:19:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
http://rhn.redhat.com/errata/RHSA-2012-1508.html