
Bug 876558

Summary: 3.1 - engine: live snapshot fails due to race on multiple move of disks (live storage migration)
Product: Red Hat Enterprise Linux 6
Component: vdsm
Version: 6.3
Hardware: x86_64
OS: Linux
Status: CLOSED ERRATA
Severity: high
Priority: high
Reporter: Dafna Ron <dron>
Assignee: Eduardo Warszawski <ewarszaw>
QA Contact: Dafna Ron <dron>
CC: abaron, aburden, amureini, bazulay, dyasny, hateya, iheim, ilvovsky, lpeer, Rhev-m-bugs, sgrinber, yeylon, ykaul
Target Milestone: rc
Target Release: 6.3
Keywords: ZStream
Whiteboard: storage
Fixed In Version: vdsm-4.9.6-44.0
Doc Type: Bug Fix
Doc Text: Previously it was possible for a host to send prepareVolume to a logical volume that had not finished being created, resulting in failure. An initial tag has been added to the lvcreate command so that other hosts are able to identify the volume as incomplete and ignore it.
Clones: 891609, 891610, 896507 (view as bug list)
Bug Blocks: 896507
Type: Bug
Last Closed: 2012-12-04 19:14:06 UTC

Attachments:
logs (flags: none)
engine.log.1 (flags: none)

Description Dafna Ron 2012-11-14 13:10:53 UTC
Created attachment 644816 [details]
logs

Description of problem:

createVolume has not finished on the SPM when we send prepareVolume, which fails because the volume does not exist yet.

Version-Release number of selected component (if applicable):

si24.1

How reproducible:

100%

Steps to Reproduce:
1. Create 3 storage domains on iSCSI.
2. Create a template with a 15 GB thin-provisioned disk and an OS installed, then create 20 pool VMs on one domain and 2 more VMs as clones on a second domain.
3. Run the VMs on 2 hosts and have them write (opening Explorer in the VMs is enough; no need for heavy writing).
4. From the VMs tab -> Disks, move each of the VMs' disks to the third domain.
  
Actual results:

Some of the VMs fail to create the live snapshot because prepareVolume was sent to the HSM before createVolume finished creating the volume.

Expected results:

We should not send prepareVolume before confirming that createVolume completed successfully; a hypothetical sketch of that ordering follows.
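As a purely illustrative sketch (the helper and method names below, such as get_task_status and prepare_volume, are hypothetical stand-ins, not the real engine or vdsm API), the caller would poll the SPM task behind createVolume and only issue prepareVolume once it reports success:

    import time

    def wait_for_task(conn, task_id, timeout=300, interval=2):
        # Poll the SPM task until it leaves the queued/running states.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            status = conn.get_task_status(task_id)  # hypothetical call
            if status not in ("queued", "running"):
                return status
            time.sleep(interval)
        raise TimeoutError("task %s did not finish within %ss" % (task_id, timeout))

    def prepare_after_create(conn, task_id, sd_uuid, img_uuid, vol_uuid):
        # Only touch the new volume once the SPM reports the create task done.
        if wait_for_task(conn, task_id) != "finished":
            raise RuntimeError("createVolume task %s failed" % task_id)
        conn.prepare_volume(sd_uuid, img_uuid, vol_uuid)  # hypothetical call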

Additional info: logs

Since there are a lot of tasks running at the same time and we have already debugged this, here is the relevant info:

In the SPM log, createVolume runs on Thread-4416; its task ID is 11b6930d-5f8c-435e-9de4-46c6cc34684f.

prepareVolume for the same action in the HSM log is on Thread-4275.

This is the error for the volume that failed:

VolumeMetadataReadError: Error while processing volume meta data: ('missing offset tag on volume 710f7022-29ae-40cd-9b41-3a8a22fd8cc1',)
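
To make the error concrete, here is a minimal sketch of the kind of check that could raise it, assuming the metadata offset is carried in an "MD_<n>" LV tag (the tag name and code are assumptions inferred from the error text, not vdsm's actual implementation):

    class VolumeMetadataReadError(Exception):
        pass

    def metadata_offset(vol_uuid, lv_tags):
        # The offset into the metadata volume is encoded as an MD_<n> tag.
        for tag in lv_tags:
            if tag.startswith("MD_"):
                return int(tag[len("MD_"):])
        # An LV that exists but has not been tagged yet lands here.
        raise VolumeMetadataReadError(
            "Error while processing volume meta data: "
            "('missing offset tag on volume %s',)" % vol_uuid)

A half-created LV (created but not yet tagged) has no MD_ tag, so reading its metadata raises exactly this error.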

Engine log:

This is the SnapshotVDSCommand for the VM:

2012-11-14 12:15:36,360 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (pool-4-thread-23) START, SnapshotVDSCommand(HostName = gold-vdsd, HostId = 2d81a26a-2c20-11e2-aeab-001a4a169741, vmId=3d393cd1-666e-4283-a2b6-ce99e74656f4), log id: 57fa069c


And here is the failure in the engine log:

2012-11-14 12:15:47,105 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (pool-4-thread-23) FINISH, SnapshotVDSCommand, log id: 57fa069c
2012-11-14 12:15:47,105 ERROR [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand] (pool-4-thread-23) Wasnt able to live snpashot due to error: VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, rolling back.

Comment 1 Allon Mureinik 2012-11-14 15:22:37 UTC
Dafna, the attached engine.log only starts at 2012-11-14 12:46:48,744.
Can you please attach the rotated log too?

Comment 2 Allon Mureinik 2012-11-14 15:27:06 UTC
From the description here, it seems as though the bug is between the two components of Live Snapshots, and not necessarily specific to Live Storage Migration.

Comment 3 Dafna Ron 2012-11-14 15:29:19 UTC
Created attachment 644960 [details]
engine.log.1

Engine log rotated - adding engine.log.1

Comment 5 Ayal Baron 2012-11-18 19:31:52 UTC
The problem is a race between block volume creation and vdsm's LVM cache refresh on another host.
The block volume create process has 2 stages:
1. lvcreate - creates the volume
2. lvchange - adds tags with metadata

If another host refreshes its cache between stages 1 and 2, it will have an incomplete LV in its cache, and the next time it needs the LV it will not refresh the cache.

The solution is to add an init tag in lvcreate so that other hosts are able to identify such LVs as partial and ignore them (see the sketch below).
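
A minimal sketch of that approach, assuming a marker tag named OVIRT_VOL_INITIALIZING (the tag name and the subprocess plumbing below are illustrative; the actual vdsm change differs in detail):

    import subprocess

    INIT_TAG = "OVIRT_VOL_INITIALIZING"  # assumed marker name

    def create_block_volume(vg, lv, size_mb, md_slot):
        # Stage 1: the LV is born already carrying the marker tag, so there
        # is no window in which it exists untagged.
        subprocess.check_call(
            ["lvcreate", "--addtag", INIT_TAG, "-L", "%dm" % size_mb, "-n", lv, vg])
        # Stage 2: add the real metadata tags and drop the marker in a
        # single lvchange invocation.
        subprocess.check_call(
            ["lvchange", "--addtag", "MD_%d" % md_slot, "--deltag", INIT_TAG,
             "%s/%s" % (vg, lv)])

    def usable_lvs(lv_records):
        # Hosts refreshing their LVM cache skip LVs still marked as partial.
        return [lv for lv in lv_records if INIT_TAG not in lv["tags"]]

Any host whose cache refresh races the creation now sees the marker and ignores the LV instead of caching it as a broken volume.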

Comment 6 Eduardo Warszawski 2012-11-18 22:19:49 UTC
I40cd67e563935de663d938cbc1bc9cf152802448

Comment 10 Dafna Ron 2012-11-22 16:55:51 UTC
Verified on si24.4 with vdsm-4.9.6-44.0.el6_3.x86_64.

Comment 12 errata-xmlrpc 2012-12-04 19:14:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2012-1508.html