Bug 989174 - lvm2app lvm_vg_reduce() can delete a volume group
Summary: lvm2app lvm_vg_reduce() can delete a volume group
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.6
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Tony Asleson
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-07-28 06:41 UTC by benscott
Modified: 2014-10-14 08:24 UTC (History)
CC List: 13 users

Fixed In Version: lvm2-2.02.107-1.el6
Doc Type: Bug Fix
Doc Text:
The lvm_vg_reduce() function in the lvm2app library gains some additional validation to bring it into line with its command-line equivalent.
Clone Of:
Environment:
Last Closed: 2014-10-14 08:24:34 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
Test to remove PVs from VG (2.12 KB, text/x-c++src)
2013-08-05 20:34 UTC, Tony Asleson
no flags
Test case with ability to specify a device to not remove from VG (2.26 KB, text/x-c++src)
2013-08-07 20:16 UTC, Tony Asleson
no flags


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2014:1387 0 normal SHIPPED_LIVE lvm2 bug fix and enhancement update 2014-10-14 01:39:47 UTC

Description benscott 2013-07-28 06:41:12 UTC
Description of problem:

I have run into a serious problem with the function
lvm_vg_reduce() in the lvm2app library. If I use it to
remove most, but not all, physical volumes from a volume
group, it often destroys the group completely.

For example, I created a group called vg1 and
then put a logical volume on /dev/sdh while
leaving the other physical volumes empty as
follows:

~# pvs
  PV         VG   Fmt  Attr PSize   PFree  
  /dev/sdc   vg1  lvm2 a--    5.12g   5.12g
  /dev/sdd   vg1  lvm2 a--    5.12g   5.12g
  /dev/sde   vg1  lvm2 a--    5.12g   5.12g
  /dev/sdf   vg1  lvm2 a--    5.12g   5.12g
  /dev/sdg   vg1  lvm2 a--    9.23g   9.23g
  /dev/sdh   vg1  lvm2 a--    9.23g   5.73g
  /dev/sdi   vg1  lvm2 a--    9.23g   9.23g


~# vgs
  VG   #PV #LV #SN Attr   VSize  VFree 
  vg1    7   1   0 wz--n- 48.20g 44.70g

~# lvs -o +devices 
  LV    VG   Attr      LSize Pool Origin Data%  Devices    
  lvol0 vg1  -wi-a---- 3.51g                    /dev/sdh(0)


Then I ran the following code:

    #include <lvm2app.h>
    #include <QDebug>

    vg_t vg_dm = nullptr;
    lvm_t lvm = lvm_init(0);

    lvm_scan(lvm);

    if ((vg_dm = lvm_vg_open(lvm, "vg1", "w", 0))) {

        if (lvm_vg_reduce(vg_dm, "/dev/sdc"))
            qDebug() << lvm_errmsg(lvm);

        if (lvm_vg_reduce(vg_dm, "/dev/sdd"))
            qDebug() << lvm_errmsg(lvm);

        if (lvm_vg_reduce(vg_dm, "/dev/sde"))
            qDebug() << lvm_errmsg(lvm);

        if (lvm_vg_reduce(vg_dm, "/dev/sdf"))
            qDebug() << lvm_errmsg(lvm);

        if (lvm_vg_reduce(vg_dm, "/dev/sdg"))
            qDebug() << lvm_errmsg(lvm);

        if (lvm_vg_reduce(vg_dm, "/dev/sdi"))
            qDebug() << lvm_errmsg(lvm);

        if (lvm_vg_write(vg_dm))
            qDebug() << lvm_errmsg(lvm);

        lvm_vg_close(vg_dm);

    } else {
        qDebug() << lvm_errmsg(lvm);
    }

This deleted the volume group completely, even though
"/dev/sdh" was never passed to lvm_vg_reduce().
No error output was generated.

~# vgs
  No volume groups found


Version-Release number of selected component (if applicable):

~# lvs --version
  LVM version:     2.02.98(2) (2012-10-15)
  Library version: 1.02.77 (2012-10-15)
  Driver version:  4.24.0

Comment 2 Tony Asleson 2013-08-02 22:45:46 UTC
I have been unable to reproduce this using RHEL 6.4.  I have also not been able to reproduce it using the latest build from source.

This was reported against RHEL6, but the driver version looks to be too new.  Please clarify which version you are running.

Comment 3 benscott 2013-08-03 17:29:10 UTC
Sorry, I should have mentioned I am running Debian Sid:

linux-image-3.10-1-amd64     3.10.3-1

  LVM version:     2.02.98(2) (2012-10-15)
  Library version: 1.02.77 (2012-10-15)
  Driver version:  4.24.0

Comment 4 Tony Asleson 2013-08-05 20:31:11 UTC
Tested on Debian stable (wheezy) and was unable to reproduce.  Stable is using:

LVM version:     2.02.95(2) (2012-03-06)
Library version: 1.02.74 (2012-03-06)
Driver version:  4.22.0

I created a simple test case which iterates through all the PVs in a VG, tries to remove each of them, and then writes out the changes.  In all testing so far I get an error when trying to remove the PV that is in use, which is expected.

My test run:

+ pvs
  PV         VG   Fmt  Attr PSize PFree
  /dev/sda        lvm2 a--  8.00g 8.00g
  /dev/sdb        lvm2 a--  8.00g 8.00g
  /dev/sdc        lvm2 a--  8.00g 8.00g
  /dev/sdd        lvm2 a--  8.00g 8.00g
  /dev/sde        lvm2 a--  8.00g 8.00g
  /dev/sdf        lvm2 a--  8.00g 8.00g
+ vgcreate vgtest /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
  Volume group "vgtest" successfully created
+ lvcreate -L100M vgtest
  Logical volume "lvol0" created
+ ./vgreducetest vgtest
  Physical volume /dev/sda still in use.
  Unable to remove physical volume '/dev/sda' from volume group 'vgtest'.
Removal error Physical volume /dev/sda still in use.
Unable to remove physical volume '/dev/sda' from volume group 'vgtest'.
+ lvs -o +devices
  LV    VG     Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert Devices    
  lvol0 vgtest -wi-ao-- 100.00m                                            /dev/sda(0)

Not sure if/when I will have a sid system to try on.

Comment 5 Tony Asleson 2013-08-05 20:34:18 UTC
Created attachment 783007 [details]
Test to remove PVs from VG

This isn't exactly the same test as the one the bug reporter describes, but I have tried both removing and not removing the PV that the actual LV resides on, and I get the same behavior.

Comment 6 Tony Asleson 2013-08-05 20:38:04 UTC
Please review my test case and see if it re-creates what you are seeing.  If not, please supply a complete sample or a modification so I can try to re-create the issue.

FYI: We will be adding some type of additional debug capability to a future version to help debug these types of errors better.

Thanks!

Comment 7 Tony Asleson 2013-08-07 20:14:36 UTC
I was able to get a VM updated to sid.

Linux sid 3.10-1-amd64 #1 SMP Debian 3.10.3-1 (2013-07-27) x86_64 GNU/Linux
LVM version:     2.02.98(2) (2012-10-15)
Library version: 1.02.77 (2012-10-15)
Driver version:  4.24.0

Still unable to re-create.

Comment 8 Tony Asleson 2013-08-07 20:16:26 UTC
Created attachment 784132 [details]
Test case with ability to specify a device to not remove from VG

Comment 9 benscott 2013-08-10 19:35:15 UTC
Now I feel a little stupid. There is a bug, but it isn't what I thought. The reason the volume group was disappearing is that a couple of my physical volumes were created (long ago) with --metadatacopies 0. When the group was reduced to just those PVs, it vanished.

I think the library should check for available MDAs before allowing the reduction of a group. However, there is also this bug:

https://bugzilla.redhat.com/show_bug.cgi?id=880395

The vgcreate and vgextend library calls only create a new PV label if there isn't already one on the device, and there is no way to specify a new layout. In my case, reusing the ancient volume label was more of a bug than a feature.

Thank you, and sorry for the misleading report.

Comment 10 Alasdair Kergon 2013-08-10 21:18:42 UTC
The command-line vgreduce gives the error:
   Cannot remove final metadata area 

Obviously the library should do the same thing.
- If this check was missing, how many other checks are also missing?

Comment 11 Tony Asleson 2013-11-22 20:20:56 UTC
Fixed upstream, see: http://www.redhat.com/archives/lvm-devel/2013-November/msg00080.html

Will be in next release (2.02.105)

Comment 13 Nenad Peric 2014-03-31 14:41:09 UTC
Acking this like the other API BZs - SanityOnly.

Comment 15 Corey Marthaler 2014-07-24 21:05:37 UTC
Marking Verified (SanityOnly), no major issues found yet during regression testing on the latest rpms.


2.6.32-492.el6.x86_64
lvm2-2.02.107-2.el6    BUILT: Fri Jul 11 08:47:33 CDT 2014
lvm2-libs-2.02.107-2.el6    BUILT: Fri Jul 11 08:47:33 CDT 2014
lvm2-cluster-2.02.107-2.el6    BUILT: Fri Jul 11 08:47:33 CDT 2014
udev-147-2.56.el6    BUILT: Fri Jul 11 09:53:07 CDT 2014
device-mapper-1.02.86-2.el6    BUILT: Fri Jul 11 08:47:33 CDT 2014
device-mapper-libs-1.02.86-2.el6    BUILT: Fri Jul 11 08:47:33 CDT 2014
device-mapper-event-1.02.86-2.el6    BUILT: Fri Jul 11 08:47:33 CDT 2014
device-mapper-event-libs-1.02.86-2.el6    BUILT: Fri Jul 11 08:47:33 CDT 2014
device-mapper-persistent-data-0.3.2-1.el6    BUILT: Fri Apr  4 08:43:06 CDT 2014
cmirror-2.02.107-2.el6    BUILT: Fri Jul 11 08:47:33 CDT 2014

Comment 16 errata-xmlrpc 2014-10-14 08:24:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1387.html

