Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1103844

Summary: [scale][storage] Use a single lock to update multiple OVFs on the master domain
Product: Red Hat Enterprise Virtualization Manager
Component: vdsm
Version: 3.4.0
Target Milestone: ---
Target Release: 3.4.1
Hardware: Unspecified
OS: Unspecified
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: medium
Keywords: ZStream
Whiteboard: storage
oVirt Team: Storage
Doc Type: Bug Fix
Type: Bug
Reporter: Marina Kalinin <mkalinin>
Assignee: Federico Simoncelli <fsimonce>
QA Contact: Yuri Obshansky <yobshans>
CC: amureini, bazulay, eedri, fsimonce, gklein, iheim, lpeer, scohen, tdosek, yeylon
Last Closed: 2014-08-04 14:41:56 UTC
Attachments: vdsm.log

Description Marina Kalinin 2014-06-02 17:51:45 UTC
Description of problem:
Can we please backport to 3.4.z the commit:
"hsm: unify vm ovf management lock"
gerrit.ovirt.org/#/c/28068/


This bug comes from a customer case (related bug 1100527).
It would make sense to backport the fix to 3.4.z.

Comment 1 Allon Mureinik 2014-06-10 16:25:29 UTC
Fede, can you please add a suggestion how QA can verify this?

Comment 2 Federico Simoncelli 2014-06-11 09:55:48 UTC
(In reply to Allon Mureinik from comment #1)
> Fede, can you please add a suggestion how QA can verify this?

After adding several new VMs to the cluster, verify that when the updateVM command is called (to sync the VMs' OVFs), vdsm takes only one exclusive lock, named "vms_sdUUID" (where sdUUID is the UUID of the master domain).

After removing several VMs, verify that when the removeVM command is called, vdsm takes only two locks: one in shared mode, "vms_sdUUID" (as before), and one in exclusive mode, "vms_vmUUID_sdUUID" (where vmUUID is the UUID of the VM whose OVF is being removed).
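The expected locking behavior above can be sketched in Python. This is an illustrative model only, not vdsm's actual resourceManager API; the function names and the (name, mode) tuples are assumptions made for clarity.

```python
# Illustrative sketch of the unified OVF lock scheme described above.
# Function names and return shapes are hypothetical, not vdsm's API.

def ovf_store_lock(sd_uuid):
    # Single lock protecting all OVF updates on the master domain.
    return "vms_%s" % sd_uuid

def ovf_vm_lock(vm_uuid, sd_uuid):
    # Per-VM lock used only on removal.
    return "vms_%s_%s" % (vm_uuid, sd_uuid)

def locks_for_update(sd_uuid):
    # updateVM: one exclusive lock on the whole OVF store.
    return [(ovf_store_lock(sd_uuid), "exclusive")]

def locks_for_remove(vm_uuid, sd_uuid):
    # removeVM: store lock in shared mode, plus the per-VM lock
    # in exclusive mode for the OVF being removed.
    return [(ovf_store_lock(sd_uuid), "shared"),
            (ovf_vm_lock(vm_uuid, sd_uuid), "exclusive")]
```

The key point of the fix is that concurrent updateVM calls on the same master domain serialize on a single lock name instead of taking many per-VM locks.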

Comment 4 Yuri Obshansky 2014-07-10 11:18:25 UTC
I tried to reproduce the bug on version:
RHEVM: 3.4.1-0.25.el6ev
OS Version: RHEV Hypervisor - 6.5 - 20140707.0.el6ev
Kernel Version: 2.6.32 - 431.20.3.el6.x86_64
KVM Version: 0.12.1.2 - 2.415.el6_5.10
LIBVIRT Version: libvirt-0.10.2-29.el6_5.9
VDSM Version: vdsm-4.14.7-5.el6ev

Sorry, but the description of how to verify this is not clear to me.
What should I be looking for?
Thread-34::INFO::2014-07-10 09:34:45,206::logUtils::44::dispatcher::(wrapper) Run and protect: updateVM(spUUID='4581f8c5-5ac7-42af-8f9f-90e9e6a8bbee', vmList=[{'imglist': '9cf413b7-7f14-4a61-8c71-91
3e5b0ea9c0', 'ovf'....................

or

Thread-34::INFO::2014-07-10 09:34:45,210::sp::1237::Storage.StoragePool::(updateVM) spUUID=4581f8c5-5ac7-42af-8f9f-90e9e6a8bbee sdUUID=9e1aa540-1f4e-4ee3-a152-be381ef6c3bd
Thread-34::INFO::2014-07-10 09:34:45,210::sp::1248::Storage.StoragePool::(updateVM) vmUUID=0670c2d7-3c1c-4b42-800c-060445cb5530 imgList=['9cf413b7-7f14-4a61-8c71-913e5b0ea9c0']
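Given log lines like the two above, one way to check for the unified lock is to scan vdsm.log for lock names of the form "vms_…". A minimal sketch, assuming the lock name appears verbatim somewhere in the log line (the exact resource-lock log format varies between vdsm versions, so the pattern is an assumption):

```python
import re

# Matches "vms_<sdUUID>" and "vms_<vmUUID>_<sdUUID>" style names.
# The pattern is illustrative, not vdsm's guaranteed log format.
LOCK_RE = re.compile(r"vms_[0-9a-f-]+(?:_[0-9a-f-]+)*")

def find_ovf_locks(lines):
    # Return the distinct OVF lock names mentioned in the log lines,
    # in order of first appearance.
    names = []
    for line in lines:
        for m in LOCK_RE.finditer(line):
            if m.group(0) not in names:
                names.append(m.group(0))
    return names
```

If the fix is in place, an updateVM burst should yield a single "vms_sdUUID" name rather than one lock name per VM.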

I'm going to attach vdsm.log.
Allon/Federico, please take a look at it
and check whether the bug is fixed or not.

Thank you

Comment 5 Yuri Obshansky 2014-07-10 11:19:58 UTC
Created attachment 917046 [details]
vdsm.log

Comment 6 Allon Mureinik 2014-07-10 11:32:58 UTC
Looks OK to me.

Comment 7 Tomas Dosek 2014-08-04 12:18:52 UTC
This Bugzilla was mistakenly omitted from the official RHEV 3.4.1 errata announcement. Please consider this BZ as released by the relevant errata.

More information is available via the following article:
https://access.redhat.com/solutions/1155243

Comment 8 Eyal Edri 2014-08-04 14:41:56 UTC
Closing, as this is already in 3.4.1.