Bug 1115004 - Wrong LV name (internal LV name) presented as an error in vgsplit
Summary: Wrong LV name (internal LV name) presented as an error in vgsplit
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Alasdair Kergon
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-07-01 12:23 UTC by Nenad Peric
Modified: 2015-09-30 20:42 UTC
CC List: 9 users

Fixed In Version: lvm2-2.02.107-2.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-09-30 20:42:10 UTC
Target Upstream Version:
Embargoed:



Description Nenad Peric 2014-07-01 12:23:11 UTC
Description of problem:

When trying to vgsplit a VG that contains a RAID 5 LV, the error presented points to an internal LVM LV name rather than an existing, user-visible LV.

Version-Release number of selected component (if applicable):

lvm2-2.02.107-1.el6.x86_64


How reproducible:
Every time

Steps to Reproduce:

[root@virt-015 ~]# vgcreate seven /dev/sd{a..e}1
  Volume group "seven" successfully created
[root@virt-015 ~]# lvcreate --alloc anywhere --type raid5 -n raid -i 2 -L 100M seven /dev/sdc1 /dev/sdd1
  Using default stripesize 64.00 KiB
  Rounding size (25 extents) up to stripe boundary size (26 extents).
  Logical volume "raid" created
[root@virt-015 ~]# vgsplit -n raid seven ten
  Logical volume "raid_rimage_0" must be inactive

[root@virt-015 ~]# lvs
  LV      VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  raid    seven      rwa-a-r--- 104.00m                                    100.00          
  lv_root vg_virt015 -wi-ao----   6.71g                                                    
  lv_swap vg_virt015 -wi-ao---- 816.00m                                 


[root@virt-015 ~]# lvs -a
  LV              VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  raid            seven      rwa-a-r--- 104.00m                                    100.00          
  [raid_rimage_0] seven      iwa-aor---  52.00m                                                    
  [raid_rimage_1] seven      iwa-aor---  52.00m                                                    
  [raid_rimage_2] seven      iwa-aor---  52.00m                                                    
  [raid_rmeta_0]  seven      ewa-aor---   4.00m                                                    
  [raid_rmeta_1]  seven      ewa-aor---   4.00m                                                    
  [raid_rmeta_2]  seven      ewa-aor---   4.00m                                                    
  lv_root         vg_virt015 -wi-ao----   6.71g                                                    
  lv_swap         vg_virt015 -wi-ao---- 816.00m   


Actual results:

The error says that LV raid_rimage_0 must be inactive, but a quick check with lvs does not show it in the list of LVs (as expected, since it is a hidden LV). Only lvs -a reveals it as an internal LVM LV.


Expected results:

If a user is presented with an error, it should point to a user-controllable LV, not one of the internal LV names.
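For reference, the only object the user can actually operate on here is the top-level LV. A minimal sketch of the workaround, assuming the reproducer VG above (output omitted; allocation and rounding messages may differ):

  # Deactivate the top-level RAID LV; its hidden raid_rimage_*/raid_rmeta_*
  # sub-LVs go inactive along with it (they cannot be targeted individually).
  lvchange -an seven/raid

  # With the LV inactive, split the PVs backing it into a new VG "ten".
  vgsplit -n raid seven ten

  # Reactivate the LV in its new VG.
  lvchange -ay ten/raid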

Comment 2 Alasdair Kergon 2014-07-03 19:21:18 UTC
Thanks for reporting this.

While the matter itself is quite trivial, it has turned up some surprises in the vgsplit/vgmerge code, including a couple of mistakes that cancelled each other out!

https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=1e1c2769a7092959e5c0076767b4973d4e4dc37c

Comment 4 Alasdair Kergon 2014-07-04 00:26:00 UTC
https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=ac60c876c43d0ebc7e642dcc92528b974bd7b9f5

The vgsplit code needs an overhaul, but this is all I'm doing for now.

Comment 6 Nenad Peric 2014-07-14 12:42:12 UTC
Why doesn't the error message contain only the parent LV name?
Can a user actually deactivate just parts of a RAID LV (see the sketch after this comment)?
If not, this additional information is still confusing (albeit less than before).

I'd say that only the LV name which is controllable by the user should be displayed.

Meaning that, instead of:

[root@virt-122 ~]# vgsplit -n raid_LV seven ten
  Logical volume seven/raid_LV_rimage_0 (part of raid_LV) must be inactive.
[root@virt-122 ~]# 


we could have just:

[root@virt-122 ~]# vgsplit -n raid_LV seven ten
  Logical volume seven/raid_LV must be inactive.
[root@virt-122 ~]# 

(if that is not too big of a change now, of course)
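As an aside on the question above: a user normally cannot deactivate only parts of a RAID LV, since lvm2 refuses to operate on hidden sub-LVs directly. A minimal sketch (the exact refusal wording varies between lvm2 versions):

  # Acting on a hidden sub-LV directly is rejected by lvm2.
  lvchange -an seven/raid_LV_rimage_0

  # The supported operation targets the parent LV and takes the
  # sub-LVs down with it.
  lvchange -an seven/raid_LV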

Comment 7 Nenad Peric 2014-08-18 14:10:43 UTC
Additional testing showed that, with a more layered structure, wrong (internal) LV names are still being displayed.
It would still be better if only the topmost layer were displayed to the user, without any mention of the underlying LV names.


Here is an example of another failure (actually displaying two internal devices); a sketch of the likely setup follows the transcript below:



[root@virt-063 ~]# lvconvert --thinpool /dev/test/raid1 
  WARNING: Converting logical volume test/raid1 to pool's data volume.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert test/raid1? [y/n]: y
  Logical volume "lvol0" created
  Logical volume "lvol0" created
  Converted test/raid1 to thin pool.

[root@virt-063 ~]# vgsplit -n raid1 test new
  Logical volume test/raid1_tdata_rimage_0 (part of raid1_tdata) must be inactive.


[root@virt-063 ~]# lvs
  LV      VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  raid1   test       twi-a-tz--   1.00g             0.00   0.98                            
  lv_root vg_virt063 -wi-ao----   6.71g                                                    
  lv_swap vg_virt063 -wi-ao---- 816.00m
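For completeness, a rough sketch of how the layered stack above was presumably built (the raid1 creation step is an assumption; sizes are illustrative and lvm2 may round or prompt differently):

  # Create a two-leg raid1 LV in VG "test".
  lvcreate --type raid1 -m 1 -L 1G -n raid1 test

  # Convert it into a thin pool; raid1 becomes the pool's hidden data LV,
  # so the stack is now raid1 -> [raid1_tdata] -> [raid1_tdata_rimage_N].
  lvconvert --thinpool test/raid1

  # The split again names an internal sub-LV two layers down.
  vgsplit -n raid1 test new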

Comment 9 Nenad Peric 2014-08-20 10:29:23 UTC
The original problem is still present with layered LVs.

