Bug 431760

Summary: lvm delete message calculates the number of volumes incorrectly
Product: Red Hat Enterprise Linux 5
Component: lvm2
Version: 5.2
Hardware: All
OS: Linux
Status: CLOSED ERRATA
Severity: low
Priority: low
Target Milestone: rc
Reporter: Corey Marthaler <cmarthal>
Assignee: Peter Rajnoha <prajnoha>
QA Contact: Corey Marthaler <cmarthal>
CC: agk, dwysocha, edamato, heinzm, jbrassow, mbroz, prockai
Doc Type: Bug Fix
Last Closed: 2009-09-02 11:57:35 UTC

Description Corey Marthaler 2008-02-06 20:01:35 UTC
Description of problem:

[root@grant-03 ~]# vgremove mirror_sanity
Do you really want to remove volume group "mirror_sanity" containing 8 logical
volumes? [y/n]:          

There are really only 2 volumes, not 8; they just both happen to be mirrors. Each mirror consists of its top-level LV plus two hidden mimage sub-LVs and one hidden mlog sub-LV, which is how the prompt arrives at 2 x 4 = 8.

[root@grant-03 ~]# lvs
  LV             VG            Attr   LSize  Origin Snap%  Move Log                  Copy%  Convert
  LogVol00       VolGroup00    -wi-ao 72.44G
  LogVol01       VolGroup00    -wi-ao  1.94G
  resync_nosync  mirror_sanity mwi-a-  2.00G                    resync_nosync_mlog   100.00
  resync_regular mirror_sanity mwi-a-  2.00G                    resync_regular_mlog   10.55


[root@grant-03 ~]# lvs -a -o +devices
  LV                        VG            Attr   LSize  Origin Snap%  Move Log                  Copy%  Convert Devices
  LogVol00                  VolGroup00    -wi-ao 72.44G                                                        /dev/sda2(0)
  LogVol01                  VolGroup00    -wi-ao  1.94G                                                        /dev/sda2(2318)
  resync_nosync             mirror_sanity mwi-a-  2.00G                    resync_nosync_mlog   100.00         resync_nosync_mimage_0(0),resync_nosync_mimage_1(0)
  [resync_nosync_mimage_0]  mirror_sanity iwi-ao  2.00G                                                        /dev/sdd1(0)
  [resync_nosync_mimage_1]  mirror_sanity iwi-ao  2.00G                                                        /dev/sdb5(0)
  [resync_nosync_mlog]      mirror_sanity lwi-ao  4.00M                                                        /dev/sdb3(0)
  resync_regular            mirror_sanity mwi-a-  2.00G                    resync_regular_mlog   10.55         resync_regular_mimage_0(0),resync_regular_mimage_1(0)
  [resync_regular_mimage_0] mirror_sanity Iwi-ao  2.00G                                                        /dev/sdd1(512)
  [resync_regular_mimage_1] mirror_sanity Iwi-ao  2.00G                                                        /dev/sdb2(0)
  [resync_regular_mlog]     mirror_sanity lwi-ao  4.00M                                                        /dev/sdb5(512)


Version-Release number of selected component (if applicable):
lvm2-2.02.32-1.el5
lvm2-cluster-2.02.32-1.el5

How reproducible:
Every time.

Comment 1 Dave Wysochanski 2008-02-11 06:14:48 UTC
Could probably fix this easily by adding a "count_visible_lv()" function that
iterates over the LVs and calls "lv_is_visible()". We should also examine the
code for other instances of this incorrect LV counting.
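
A minimal, self-contained sketch of the approach suggested above follows. The
struct layout, the list linkage, and the VISIBLE_LV value here are illustrative
stand-ins; the real LVM2 code walks a dm_list of struct lv_list entries inside
struct volume_group, so treat this as a sketch of the idea, not the actual patch.

#include <stdint.h>
#include <stdio.h>

#define VISIBLE_LV 0x40U  /* illustrative flag value, not the real LVM2 constant */

struct logical_volume {
    const char *name;
    uint64_t status;              /* bit flags; VISIBLE_LV set on user-facing LVs */
    struct logical_volume *next;  /* simplified list link */
};

/* An LV is visible if it is not a hidden sub-LV (mimage, mlog, ...). */
static int lv_is_visible(const struct logical_volume *lv)
{
    return (lv->status & VISIBLE_LV) ? 1 : 0;
}

/* Count only the LVs the user actually sees, e.g. for the vgremove prompt. */
static unsigned count_visible_lv(const struct logical_volume *lvs)
{
    unsigned count = 0;

    for (const struct logical_volume *lv = lvs; lv; lv = lv->next)
        if (lv_is_visible(lv))
            count++;

    return count;
}

int main(void)
{
    /* One mirror: the top-level LV is visible, its sub-LVs are hidden. */
    struct logical_volume mlog   = { "resync_nosync_mlog",     0,          NULL   };
    struct logical_volume mimg1  = { "resync_nosync_mimage_1", 0,          &mlog  };
    struct logical_volume mimg0  = { "resync_nosync_mimage_0", 0,          &mimg1 };
    struct logical_volume mirror = { "resync_nosync",          VISIBLE_LV, &mimg0 };

    /* Prints 1: four LVs in the list, only one visible to the user. */
    printf("visible LVs: %u\n", count_visible_lv(&mirror));
    return 0;
}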

Comment 2 Peter Rajnoha 2008-11-26 09:53:38 UTC
If we have, for example, one snapshot volume in a VG, lv_is_visible() will take all 3 LVs into account and mark them as visible. I think that could confuse the user, because only 2 LVs are actually visible (origin + snapshot). I would probably do this by checking the VISIBLE_LV flag directly instead of calling lv_is_visible().
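
As a hedged illustration of that direct flag check, reusing the simplified
struct and the illustrative VISIBLE_LV value from the sketch in comment 1 (the
real LVM2 flag value and list handling differ):

/* Count only LVs whose VISIBLE_LV flag is set, bypassing lv_is_visible()
 * and the extra snapshot-related LVs it would report as visible. */
static unsigned count_flagged_visible(const struct logical_volume *lvs)
{
    unsigned count = 0;

    for (const struct logical_volume *lv = lvs; lv; lv = lv->next)
        if (lv->status & VISIBLE_LV)  /* direct flag test */
            count++;

    return count;
}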

Comment 3 Peter Rajnoha 2008-12-04 09:17:43 UTC
*** Bug 465168 has been marked as a duplicate of this bug. ***

Comment 4 Peter Rajnoha 2008-12-04 16:46:33 UTC
There are other places in the code where a similar inconsistency in counting visible LVs (from the user's perspective!) can be found, for example in the output of the "vgdisplay -c" command. Snapshot volumes should be counted in the open-LV statistics in the output of "vgdisplay", "vgremove" should likewise count snapshot volumes when displaying the message about the number of LVs being deleted, and renaming of hidden snapshot volumes should not be allowed for a user.

All these cases involve testing the visibility and counting of LVs from the user's perspective, so two new functions have been proposed: displayable_lvs_in_vg, which counts the LVs in a given VG that are displayable to the user in command output, and lv_is_displayable, which tests the actual visibility of an LV. These should be used instead of the lv_is_visible function whenever a test for visibility or a count from the user's perspective is needed (e.g. in listings and in the output of user commands).
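
A self-contained sketch of how those two helpers might look is below. It uses a
simplified data model: the is_snapshot field, the VISIBLE_LV value, and the
list linkage are assumptions made for illustration; the actual LVM2 signatures
and snapshot test differ.

#include <stdint.h>
#include <stdio.h>

#define VISIBLE_LV 0x40U  /* illustrative flag value only */

struct logical_volume {
    uint64_t status;              /* bit flags; VISIBLE_LV set on flagged-visible LVs */
    int is_snapshot;              /* stand-in for LVM2's snapshot test */
    struct logical_volume *next;  /* simplified list link */
};

struct volume_group {
    struct logical_volume *lvs;   /* simplified list of the VG's LVs */
};

/* Displayable from the user's perspective: flagged visible, or a snapshot
 * volume that user commands present by name even though it carries no
 * VISIBLE_LV flag itself. */
static int lv_is_displayable(const struct logical_volume *lv)
{
    return (lv->status & VISIBLE_LV) || lv->is_snapshot;
}

/* Count the LVs that listings and messages (lvs, vgdisplay, vgremove)
 * should report for a VG. */
static unsigned displayable_lvs_in_vg(const struct volume_group *vg)
{
    unsigned count = 0;

    for (const struct logical_volume *lv = vg->lvs; lv; lv = lv->next)
        if (lv_is_displayable(lv))
            count++;

    return count;
}

int main(void)
{
    /* Origin + snapshot are what the user sees; the internal LV is hidden. */
    struct logical_volume internal = { 0,          0, NULL      };
    struct logical_volume snap     = { 0,          1, &internal };
    struct logical_volume origin   = { VISIBLE_LV, 0, &snap     };
    struct volume_group vg         = { &origin };

    /* Prints 2: three LVs in the VG, two displayable to the user. */
    printf("displayable LVs: %u\n", displayable_lvs_in_vg(&vg));
    return 0;
}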

The fix has been uploaded to CVS.

Comment 6 Milan Broz 2009-05-21 09:22:01 UTC
Fix in version lvm2-2.02.46-1.el5.

Comment 8 Corey Marthaler 2009-05-26 19:26:49 UTC
Fix verified in lvm2-2.02.46-2.el5.

Comment 10 errata-xmlrpc 2009-09-02 11:57:35 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2009-1393.html