We don't have any issue with the lvm versions that contained the "bug", since we have our own locking in our lvm module. Removing our own locking mechanism and depending on lvm's locking could be a nice feature, but it requires a lot of testing and is certainly not 3.5 work. I suggest moving this to 3.6 - for now we should work only on critical bug fixes, not design improvements.
This lvm bug has *no* effect on vdsm, so removing severity and priority.
(In reply to Nir Soffer from comment #1)
> We don't have any issue with lvm version that contained the "bug", since we
> have our own locking in our lvm module.

This LVM bug was encountered in a VDSM flow - running Live Storage Migration while attempting to extend the source volume (see bug 878948).
Waiting for a fixed version on all supported platforms.
Nir, I see that the two blocking bugs are now closed, what do we need to do on our side to solve this bug in light of those fixes?
(In reply to Tal Nisan from comment #5)
> Nir, I see that the two blocking bugs are now closed, what do we need to do
> on our side to solve this bug in light of those fixes?

We need to modify vdsm.spec.in to require the newer lvm version (presumably "Requires: lvm2 >= 2.02.100-8" - need to check for all supported platforms).
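For reference, the ordering that a "Requires: lvm2 >= 2.02.100-8" line implies can be sanity-checked from a shell. This is only a rough sketch: `sort -V` approximates rpm version ordering and ignores epochs and tildes, so `rpmdev-vercmp` remains the authoritative tool.

```shell
#!/bin/sh
# version_ge A B: succeeds if A >= B under sort -V ordering.
# Rough approximation of rpm version comparison (no epoch/tilde handling).
version_ge() {
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

required="2.02.100-8"
for installed in "2.02.130-5" "2.02.99-1"; do
    if version_ge "$installed" "$required"; then
        echo "$installed satisfies lvm2 >= $required"
    else
        echo "$installed is too old"
    fi
done
```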
(In reply to Allon Mureinik from comment #6)
> (In reply to Tal Nisan from comment #5)
> > Nir, I see that the two blocking bugs are now closed, what do we need to do
> > on our side to solve this bug in light of those fixes?
> We need to modify vdsm.spec.in to require the newer lvm version (presumably
> "Requires: lvm2 >= 2.02.100-8" - need to check for all supported platforms).

Could a customer pull in this fix simply by running yum update on the host?
(In reply to Yaniv Dary from comment #7)
> (In reply to Allon Mureinik from comment #6)
> > (In reply to Tal Nisan from comment #5)
> > > Nir, I see that the two blocking bugs are now closed, what do we need to do
> > > on our side to solve this bug in light of those fixes?
> > We need to modify vdsm.spec.in to require the newer lvm version (presumably
> > "Requires: lvm2 >= 2.02.100-8" - need to check for all supported platforms).
>
> Could a customer pull in this fix simply by running yum update on the host?

Sure. The point of this bug is to get it automagically when you yum-update vdsm, so you don't have to go over release notes and related manual procedures.

If I understand the underlying intent behind this question - yes, this can be pushed out to 3.6.0.
(In reply to Allon Mureinik from comment #8)
> (In reply to Yaniv Dary from comment #7)
> > (In reply to Allon Mureinik from comment #6)
> > > (In reply to Tal Nisan from comment #5)
> > > > Nir, I see that the two blocking bugs are now closed, what do we need to do
> > > > on our side to solve this bug in light of those fixes?
> > > We need to modify vdsm.spec.in to require the newer lvm version (presumably
> > > "Requires: lvm2 >= 2.02.100-8" - need to check for all supported platforms).
> >
> > Could a customer pull in this fix simply by running yum update on the host?
> Sure. The point of this bug is to get it automagically when you yum-update
> vdsm, so you don't have to go over release notes and related manual
> procedures.
>
> If I understand the underlying intent behind this question - yes, this can
> be pushed out to 3.6.0.

Can we prevent people from using this in 3.5.0? Can this update cause any issues?
(In reply to Yaniv Dary from comment #9)
> Can we prevent people from using this in 3.5.0?
No.

> Can this update cause any issues?
No. This bug has been around since RHEV 3.1 - there's nothing urgent about it, but we do need to move forward with the times and consume fixes from newer LVM versions.
Please confirm the steps to reproduce this bz. My understanding is that it is as found in bug 878948, as mentioned in comment 3:

1. Create a VM with a thin nfs disk and install an OS
2. Start I/O (using dd)
3. Start LSM

Is this correct?

a) Expected result - the LSM is successful?
b) What must I look for regarding the newer lvm version (2.02.100-8), and where do I find it?
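Step 2 above can be sketched as follows; the file path and size are illustrative assumptions, not taken from the original report:

```shell
#!/bin/sh
# Generate sustained write I/O inside the guest so the thin source
# volume keeps growing while live storage migration is running.
# Path and size are illustrative; any file on the thin disk will do.
dd if=/dev/zero of=/tmp/lsm_io_load bs=1M count=64 conv=fdatasync
```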
Let's divide and conquer. The platform's QA group should have (and presumably has) tested that the patch in LVM does what it's supposed to. Our side is to test that yum install/upgrade pulls in the relevant lvm2 rpm. Makes sense?
Code used to verify:
----------------------------
rhevm-3.6.0.3-0.1.el6.noarch
vdsm-4.17.10.1-0.el7ev.noarch

Verified with the following scenario:
------------------------------------------
rpm -qa | grep lvm2 from the host:
lvm2-2.02.130-5.el7.x86_64
lvm2-libs-2.02.130-5.el7.x86_64

Also ran on the host:
-----------------------------------------
repoquery --requires vdsm | grep lvm
Repository rhel-7.2 is listed more than once in the configuration
Repository rhel-72-optional is listed more than once in the configuration
Repository rhev-72-hypervisor is listed more than once in the configuration
lvm2 >= 2.02.107

Moving to Verified!
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0362.html