Bug 876558 - 3.1 - engine: live snapshot fails due to race on multiple move of disks (live storage migration)
Summary: 3.1 - engine: live snapshot fails due to race on multiple move of disks (live...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: vdsm
Version: 6.3
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: 6.3
Assignee: Eduardo Warszawski
QA Contact: Dafna Ron
URL:
Whiteboard: storage,
Depends On:
Blocks: 896507
 
Reported: 2012-11-14 13:10 UTC by Dafna Ron
Modified: 2022-07-09 05:40 UTC
CC List: 13 users

Fixed In Version: vdsm-4.9.6-44.0
Doc Type: Bug Fix
Doc Text:
Previously it was possible for a host to send prepareVolume to a logical volume that had not finished being created, resulting in failure. An initial tag has been added to the lvcreate command so that other hosts are able to identify the volume as incomplete and ignore it.
Clone Of:
: 891609 891610 896507
Environment:
Last Closed: 2012-12-04 19:14:06 UTC
Target Upstream Version:
Embargoed:


Attachments
logs (1.15 MB, application/x-gzip) - 2012-11-14 13:10 UTC, Dafna Ron
engine.log.1 (526.24 KB, application/x-xz) - 2012-11-14 15:29 UTC, Dafna Ron


Links
Red Hat Product Errata RHSA-2012:1508 (normal, SHIPPED_LIVE): Important: rhev-3.1.0 vdsm security, bug fix, and enhancement update. Last updated 2012-12-04 23:48:05 UTC.

Description Dafna Ron 2012-11-14 13:10:53 UTC
Created attachment 644816 [details]
logs

Description of problem:

createVolume has not finished on the SPM when we send prepareVolume, which fails because the volume does not exist yet.

Version-Release number of selected component (if applicable):

si24.1

How reproducible:

100%

Steps to Reproduce:
1. create 3 iSCSI domains
2. create a template with a 15GB thin-provisioned disk and an OS installed, then create 20 pool VMs on one domain and 2 more VMs as clones on a second domain
3. run the VMs on 2 hosts and have them write (opening Explorer in the VMs is enough; no heavy writing needed)
4. from the VMs tab -> Disks -> move each VM's disks to the 3rd domain
  
Actual results:

some of the VMs fail to create the live snapshot because prepareVolume was sent to the HSM before createVolume finished creating the volume.

Expected results:

we should not send prepareVolume before confirming that createVolume completed successfully. 
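A minimal sketch of that ordering, in Python: wait for the SPM's createVolume task to report completion before sending prepareVolume to the HSM. The client objects and method names here (get_task_status, prepare_volume) are hypothetical stand-ins, not the actual engine/vdsm API.

import time

def wait_for_spm_task(spm, task_id, timeout=300, poll=2):
    # Poll the SPM until the createVolume task finishes (or time out).
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = spm.get_task_status(task_id)   # hypothetical call
        if status == "finished":
            return
        if status == "failed":
            raise RuntimeError("createVolume task %s failed" % task_id)
        time.sleep(poll)
    raise RuntimeError("createVolume task %s did not finish in time" % task_id)

def live_snapshot_disk(spm, hsm, task_id, vol_id):
    # Only send prepareVolume once the SPM confirms createVolume is done.
    wait_for_spm_task(spm, task_id)
    hsm.prepare_volume(vol_id)                  # hypothetical call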

Additional info: logs

Since there are a lot of tasks running at the same time and we have already debugged this, here is the relevant info:

In the SPM log, createVolume runs on Thread-4416; the task is 11b6930d-5f8c-435e-9de4-46c6cc34684f.

prepareVolume for the same action in the HSM log is on Thread-4275.

This is the error for the volume that failed:

VolumeMetadataReadError: Error while processing volume meta data: ('missing offset tag on volume 710f7022-29ae-40cd-9b41-3a8a22fd8cc1',)

engine log: 

this is the SnapshotVDSCommand for the VM:

2012-11-14 12:15:36,360 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (pool-4-thread-23) START, SnapshotVDSCommand(HostName = gold-vdsd, HostId = 2d81a26a-2c20-11e2-aeab-001a4a169741, vmId=3d393cd1-666e-4283-a2b6-ce99e74656f4), log id: 57fa069c


and here is the failure in engine log:

2012-11-14 12:15:47,105 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (pool-4-thread-23) FINISH, SnapshotVDSCommand, log id: 57fa069c
2012-11-14 12:15:47,105 ERROR [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand] (pool-4-thread-23) Wasnt able to live snpashot due to error: VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, rolling back.

Comment 1 Allon Mureinik 2012-11-14 15:22:37 UTC
Dafna, the attached engine.log only starts at 2012-11-14 12:46:48,744.
Can you please attach the rotated log too?

Comment 2 Allon Mureinik 2012-11-14 15:27:06 UTC
From the description here, it seems as though the bug is between the two components of Live Snapshots, and not necessarily specific to Live Storage Migration.

Comment 3 Dafna Ron 2012-11-14 15:29:19 UTC
Created attachment 644960 [details]
engine.log.1

engine log rotated - adding engine.log.1

Comment 5 Ayal Baron 2012-11-18 19:31:52 UTC
The problem is a race between block volume creation and vdsm's LVM cache refresh on another host.
Block volume creation has 2 stages:
1. lvcreate - creates the volume
2. lvchange - adds tags with metadata

If another host refreshes its cache between stages 1 and 2, it will have an incomplete LV in its cache, and the next time it needs the LV it will not refresh the cache.

The solution is to add an init tag in lvcreate so that other hosts are able to identify such LVs as partial and ignore them.
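A minimal sketch of the init-tag approach, driving plain LVM command lines; the tag name and helper functions below are illustrative, not the actual vdsm implementation.

import subprocess

INIT_TAG = "OVIRT_VOL_INITIALIZING"   # assumed placeholder tag name

def create_block_volume(vg, lv, size_mb, metadata_tags):
    # Stage 1: create the LV already carrying the init tag, so a host
    # that refreshes its LVM cache right now sees it as incomplete.
    subprocess.check_call(["lvcreate", "--addtag", INIT_TAG,
                           "-n", lv, "-L", "%dm" % size_mb, vg])
    # Stage 2: add the real metadata tags and drop the init tag,
    # marking the volume as complete.
    cmd = ["lvchange", "--deltag", INIT_TAG]
    for tag in metadata_tags:
        cmd += ["--addtag", tag]
    subprocess.check_call(cmd + ["%s/%s" % (vg, lv)])

def usable_lvs(lv_records):
    # Other hosts skip LVs that still carry the init tag.
    return [rec for rec in lv_records if INIT_TAG not in rec["tags"]]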

Comment 6 Eduardo Warszawski 2012-11-18 22:19:49 UTC
I40cd67e563935de663d938cbc1bc9cf152802448

Comment 10 Dafna Ron 2012-11-22 16:55:51 UTC
verified on si24.4 with vdsm-4.9.6-44.0.el6_3.x86_64

Comment 12 errata-xmlrpc 2012-12-04 19:14:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2012-1508.html

