Bug 149813 - vgreducing while pvmoving causes inconsistency after pvmove completes
Status: CLOSED ERRATA
Alias: None
Product: Fedora
Classification: Fedora
Component: lvm-obsolete
Version: rawhide
Hardware: All Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Alasdair Kergon
QA Contact: Brian Brock
URL:
Whiteboard:
Keywords:
Depends On:
Blocks:
 
Reported: 2005-02-27 17:01 UTC by Alexandre Oliva
Modified: 2007-11-30 22:11 UTC (History)
0 users

Fixed In Version: RHBA-2005-192
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2005-05-24 13:57:34 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
Cloudforms Team: ---


Attachments: none


External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2005:192 low SHIPPED_LIVE lvm2 bug fix and enhancement update 2005-06-09 04:00:00 UTC

Description Alexandre Oliva 2005-02-27 17:01:54 UTC

Description of problem:
When freeing up multiple physical volumes by pvmoving their contents to
other PVs, if you vgreduce one of them out of the volume group as soon
as its pvmove completes, but before the other pvmove finishes, then the
second pvmove will, on completion, load a stale copy of the VG metadata
that brings the just-removed PV back into the volume group.  If you've
already done something else with that block device, the volume group
ends up missing one PV, wreaking havoc in, say, yet another pvmove
operation running in parallel.

Version-Release number of selected component (if applicable):
lvm2-2.01.05-1.0

How reproducible:
Didn't try

Steps to Reproduce:
1. Start two pvmoves in parallel, say from /dev/hda7 to /dev/sdb7 and
from /dev/hda8 to /dev/sdb8.  Run them in the foreground, and arrange
for the one that frees up, say, /dev/hda7 to complete first.
2. As soon as the pvmove from hda7 completes, vgreduce it out of the
VG, and immediately create another PV there, such that the UUID changes.
3. Wait for the second pvmove to complete.
4. Run vgscan.
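
The steps above can be sketched as a shell script.  All device and VG
names here are illustrative (taken from the report's example layout),
the timing in step 1 is arranged by hand rather than guaranteed, and
the script rewrites PV metadata, so it must only ever be run against
scratch devices:

```shell
#!/bin/sh
# Sketch of the race: vgreduce + pvcreate on one PV while another
# pvmove in the same VG is still in flight.
set -e

VG=vg0   # example VG containing hda7, hda8, sdb7 and sdb8

# Step 1: two pvmoves in parallel (backgrounded here only so the
# script can wait on each; pvmove itself runs in its normal
# foreground mode, not with -b).  Arrange for the hda7 move to
# finish first, e.g. by giving it less data to move.
pvmove /dev/hda7 /dev/sdb7 &
MOVE1=$!
pvmove /dev/hda8 /dev/sdb8 &
MOVE2=$!

# Step 2: the moment the first move completes, drop the PV from the
# VG and re-create it so the same block device gets a new UUID.
wait $MOVE1
vgreduce $VG /dev/hda7
pvcreate -ff /dev/hda7    # new PV, new UUID, same device

# Step 3: let the second pvmove finish; on the buggy version its
# final metadata write-back references the old hda7 PV again.
wait $MOVE2

# Step 4: rescan and inspect -- the VG now reports a missing PV.
vgscan
vgdisplay $VG
```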

Actual Results:  the volume group will report a missing PV

Expected Results:  pvmove shouldn't bring the removed PV back in

Additional info:

Comment 1 Alasdair Kergon 2005-03-08 17:31:31 UTC
Presumably a locking error somewhere.

If you get the chance, please turn on full debug logging and repeat the test (eg
to file or syslog in lvm.conf) and attach the output.

Then we can check through it for any place that updates metadata while not
holding the appropriate VG lock(s), or for somewhere that acquires a VG lock but
uses an historic copy of the VG metadata instead of re-reading it after getting
the lock.
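
For anyone repeating the test: full debug logging can be enabled
through the log section of /etc/lvm/lvm.conf.  A minimal sketch (the
log file path is only an example):

```
log {
    syslog = 0                          # keep debug spam out of syslog
    file = "/var/log/lvm2-debug.log"    # example path
    level = 7                           # 7 = most verbose debug level
    activation = 1                      # also log activation steps
}
```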


Comment 2 Alasdair Kergon 2005-03-08 17:40:19 UTC
It's also worth seeing whether the 2.01.07 cache change makes any
difference: if it's not a locking problem, it could be a UUID caching
problem.


Comment 3 Alasdair Kergon 2005-03-16 23:28:07 UTC
May well be related to bug 138396

Comment 4 Alasdair Kergon 2005-03-22 17:13:27 UTC
A variety of cache improvements went into 2.01.08, which I hope fix this.

Comment 5 Alasdair Kergon 2005-05-24 13:57:34 UTC
Assuming this is now fixed; reopen if not.

Comment 6 Tim Powers 2005-06-09 12:30:00 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2005-192.html


