Bug 1127117

Summary: VDSM: Require newer lvm version (2.02.100-8) and certify fix for "Concurrent activations of same LV race against each other with 'Device or resource busy'"
Product: Red Hat Enterprise Virtualization Manager
Reporter: Allon Mureinik <amureini>
Component: vdsm
Assignee: Tal Nisan <tnisan>
Status: CLOSED ERRATA
QA Contact: Kevin Alon Goldblatt <kgoldbla>
Severity: low
Docs Contact:
Priority: medium
Version: 3.5.0
CC: acanan, agk, amureini, bazulay, cmarthal, coughlan, cpelland, dron, dwysocha, gklein, heinzm, jbrassow, jkurik, lpeer, lvm-team, msnitzer, mspqa-list, nobody, nsoffer, pm-eus, prajnoha, prockai, scohen, srevivo, thornber, tnisan, yeylon, ykaul, ylavi, zkabelac
Target Milestone: ovirt-3.6.0-rc
Flags: amureini: Triaged+
Target Release: 3.6.0
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1112137
Environment:
Last Closed: 2016-03-09 19:23:49 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Bug Depends On: 878948, 1112137
Bug Blocks:

Comment 1 Nir Soffer 2014-08-06 13:36:42 UTC
We don't have any issue with the lvm versions that contained the "bug", since we have our own locking in our lvm module.

Removing our own locking mechanism and depending on lvm's locking could be a nice feature, but it requires a lot of testing, and is certainly not 3.5 work.

I suggest moving this to 3.6 - right now we should be working only on critical bug fixes, not design improvements.

Comment 2 Nir Soffer 2014-08-06 14:15:43 UTC
This lvm bug has *no* effect on vdsm, so I'm removing the severity and priority.

Comment 3 Allon Mureinik 2014-08-07 09:23:57 UTC
(In reply to Nir Soffer from comment #1)
> We don't have any issue with lvm version that contained the "bug", since we
> have our own locking in our lvm module.
This LVM bug was encountered in a VDSM flow - running Live Storage Migration while attempting to extend the source volume (see bug 878948).

Comment 4 Nir Soffer 2014-08-07 13:34:11 UTC
Waiting for a fixed version on all supported platforms.

Comment 5 Tal Nisan 2015-01-14 15:15:25 UTC
Nir, I see that the two blocking bugs are now closed, what do we need to do on our side to solve this bug in light of those fixes?

Comment 6 Allon Mureinik 2015-01-14 15:26:17 UTC
(In reply to Tal Nisan from comment #5)
> Nir, I see that the two blocking bugs are now closed, what do we need to do
> on our side to solve this bug in light of those fixes?
We need to modify vdsm.spec.in to require the newer lvm version ("Requires: lvm2 >= 2.02.100-8", presumably) - this still needs to be checked for all supported platforms.
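A minimal sketch of what that spec change might look like (the exact version string is an assumption here and, as noted above, still needs to be confirmed per platform):

```
# vdsm.spec.in (illustrative fragment)
# Pull in the lvm2 build that fixes the concurrent-activation race,
# so a yum install/upgrade of vdsm drags it in automatically.
Requires: lvm2 >= 2.02.100-8
```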

Comment 7 Yaniv Lavi 2015-01-14 15:28:14 UTC
(In reply to Allon Mureinik from comment #6)
> We need to modify vdsm.spec.in to require the newer lvm version ("Requires:
> lvm2 >= 2.02.100-8", presumably - need to check for all supported platforms.

Could a customer pull in this fix by running yum update on the host?

Comment 8 Allon Mureinik 2015-01-14 15:31:18 UTC
(In reply to Yaniv Dary from comment #7)
> Could a customer pull this package via yum update on host that will include
> this fix?
Sure. The point of this bug is to get it automagically when you yum-update vdsm, so you don't have to go over release notes and related manual procedures.

If I understand the underlying intent behind this question - yes, this can be pushed out to 3.6.0.

Comment 9 Yaniv Lavi 2015-01-14 15:55:46 UTC
(In reply to Allon Mureinik from comment #8)
> Sure. The point of this bug is to get it automagically when you yum-update
> vdsm, so you don't have to go over release notes and related manual
> procedures.
> 
> If I understand the underlying intent behind this question - yes, this can
> be pushed out to 3.6.0.

Can we prevent people from using this in 3.5.0? Can this update cause any issues?

Comment 10 Allon Mureinik 2015-01-14 16:04:31 UTC
(In reply to Yaniv Dary from comment #9)
> Can we prevent people from using this in 3.5.0?
No.

> Can this update cause any issues?
No.

This bug has been in there since RHEV 3.1 - there's nothing urgent about it, but we do need to move forward with the times, and consume fixes from newer LVM versions.

Comment 12 Kevin Alon Goldblatt 2015-11-11 07:47:42 UTC
Please confirm the steps to reproduce this bz.

My understanding is that it is as described in bug 878948, as mentioned in comment 3.

1. Create a VM with a thin NFS disk and install an OS
2. Start I/O (using dd)
3. Start LSM
Is this correct?

a) Expected result - the LSM is successful?
b) What must I look for regarding the newer lvm version (2.02.100-8), and where do I find it?
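Step 2 above can be sketched roughly as follows (the target path and sizes are hypothetical - any sustained write inside the guest will do):

```shell
# Sketch of the I/O step: keep the guest disk busy by writing zeroes
# while the live storage migration runs. 16 MiB in 1 MiB blocks,
# fsync'd at the end so the data actually hits the disk.
dd if=/dev/zero of=/tmp/lsm-io-test.bin bs=1M count=16 conv=fsync 2>/dev/null
```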

Comment 13 Allon Mureinik 2015-11-11 08:21:41 UTC
Let's divide and conquer. The platform's QA group should have (and presumably has) tested that the patch in LVM does what it's supposed to.
Our side is to test that yum install/upgrade pulls in the relevant lvm2 rpm.

Makes sense?
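As a sketch, the host-side part of that check (that the installed lvm2 meets the minimum vdsm requires) can be done by comparing version strings with `sort -V`; the version values below are illustrative, not taken from a real host:

```shell
# Sketch: verify the installed lvm2 meets the required minimum.
# On a real host the installed version would come from:
#   rpm -q --qf '%{VERSION}-%{RELEASE}' lvm2
required="2.02.100-8"
installed="2.02.130-5"

# sort -V orders version strings; if the required version sorts first
# (or equals the installed one), the requirement is satisfied.
lowest=$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
    echo "lvm2 requirement satisfied"
else
    echo "lvm2 too old"
fi
```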

Comment 14 Kevin Alon Goldblatt 2015-11-11 14:23:43 UTC
Versions used to verify:
----------------------------
rhevm-3.6.0.3-0.1.el6.noarch
vdsm-4.17.10.1-0.el7ev.noarch

Verified with the following scenario:
------------------------------------------
Output of rpm -qa | grep lvm2 on the host:

lvm2-2.02.130-5.el7.x86_64
lvm2-libs-2.02.130-5.el7.x86_64

Also ran on the host:
-----------------------------------------
repoquery --requires vdsm |grep lvm
Repository rhel-7.2 is listed more than once in the configuration
Repository rhel-72-optional is listed more than once in the configuration
Repository rhev-72-hypervisor is listed more than once in the configuration
lvm2 >= 2.02.107


Moving to Verified!

Comment 16 errata-xmlrpc 2016-03-09 19:23:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0362.html