Bug 1115004
| Summary: | Wrong lv name (internal lvname) presented as an error in vgsplit | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Nenad Peric <nperic> |
| Component: | lvm2 | Assignee: | Alasdair Kergon <agk> |
| lvm2 sub component: | Mirroring and RAID (RHEL6) | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED WONTFIX | Docs Contact: | |
| Severity: | unspecified | | |
| Priority: | unspecified | CC: | agk, cmarthal, heinzm, jbrassow, lvm-team, msnitzer, prajnoha, prockai, zkabelac |
| Version: | 6.6 | | |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | lvm2-2.02.107-2.el6 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-09-30 20:42:10 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Thanks for reporting this. While the matter itself is quite trivial, it has shown up some surprises in the vgsplit/vgmerge code, including a couple of mistakes that cancelled each other out!

https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=1e1c2769a7092959e5c0076767b4973d4e4dc37c
https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=ac60c876c43d0ebc7e642dcc92528b974bd7b9f5

The vgsplit code needs an overhaul, but this is all I'm doing for now.

---

Why doesn't the output of the error message contain the parent LV name only? Can a user actually deactivate just parts of a RAID LV? If not, this additional information is still confusing (albeit less than before). I'd say that only the LV name which is controllable by the user should be displayed. Meaning that instead of:

```
[root@virt-122 ~]# vgsplit -n raid_LV seven ten
  Logical volume seven/raid_LV_rimage_0 (part of raid_LV) must be inactive.
[root@virt-122 ~]#
```

we could have just:

```
[root@virt-122 ~]# vgsplit -n raid_LV seven ten
  Logical volume seven/raid_LV must be inactive.
[root@virt-122 ~]#
```

(if that is not too big of a change now, of course)

---

Additional testing showed that with a more layered structure, wrong (internal) LV names are still displayed. It would still be better if only the topmost layer, so to speak, were shown to the user, without any mention of the underlying LV names. Here is an example of another failure (this one actually displays two internal device names):

```
[root@virt-063 ~]# lvconvert --thinpool /dev/test/raid1
  WARNING: Converting logical volume test/raid1 to pool's data volume.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert test/raid1? [y/n]: y
  Logical volume "lvol0" created
  Logical volume "lvol0" created
  Converted test/raid1 to thin pool.
[root@virt-063 ~]# vgsplit -n raid1 test new
  Logical volume test/raid1_tdata_rimage_0 (part of raid1_tdata) must be inactive.
[root@virt-063 ~]# lvs
  LV      VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  raid1   test       twi-a-tz--   1.00g             0.00   0.98
  lv_root vg_virt063 -wi-ao----   6.71g
  lv_swap vg_virt063 -wi-ao---- 816.00m
```

The original problem is still present with layered LVs.
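To illustrate the point about user-controllable names, here is a minimal sketch of what a user can actually do in the layered example above, assuming the names from that transcript (test/raid1, target VG new). This is illustrative only and not output captured from the bug:

```
# The internal LVs named in the error (raid1_tdata, raid1_tdata_rimage_0, ...)
# cannot be deactivated individually; deactivating the user-visible pool LV
# takes the whole stack down with it.
lvchange -an test/raid1

# With the stack inactive, the "must be inactive" check should no longer be
# reported against internal LV names (other vgsplit constraints still apply).
vgsplit -n raid1 test new
```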
Description of problem:

When trying to vgsplit a VG with a RAID 5 LV in it, the error presented points to an internal LVM LV name, and not to an existing user-visible LV.

Version-Release number of selected component (if applicable):

lvm2-2.02.107-1.el6.x86_64

How reproducible:

Every time

Steps to Reproduce:

```
[root@virt-015 ~]# vgcreate seven /dev/sd{a..e}1
  Volume group "seven" successfully created
[root@virt-015 ~]# lvcreate --alloc anywhere --type raid5 -n raid -i 2 -L 100M seven /dev/sdc1 /dev/sdd1
  Using default stripesize 64.00 KiB
  Rounding size (25 extents) up to stripe boundary size (26 extents).
  Logical volume "raid" created
[root@virt-015 ~]# vgsplit -n raid seven ten
  Logical volume "raid_rimage_0" must be inactive
[root@virt-015 ~]# lvs
  LV      VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  raid    seven      rwa-a-r--- 104.00m                                    100.00
  lv_root vg_virt015 -wi-ao----   6.71g
  lv_swap vg_virt015 -wi-ao---- 816.00m
[root@virt-015 ~]# lvs -a
  LV               VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  raid             seven      rwa-a-r--- 104.00m                                    100.00
  [raid_rimage_0]  seven      iwa-aor---  52.00m
  [raid_rimage_1]  seven      iwa-aor---  52.00m
  [raid_rimage_2]  seven      iwa-aor---  52.00m
  [raid_rmeta_0]   seven      ewa-aor---   4.00m
  [raid_rmeta_1]   seven      ewa-aor---   4.00m
  [raid_rmeta_2]   seven      ewa-aor---   4.00m
  lv_root          vg_virt015 -wi-ao----   6.71g
  lv_swap          vg_virt015 -wi-ao---- 816.00m
```

Actual results:

The error says that LV raid_rimage_0 must be inactive, but a quick check with lvs does not show it (as expected) in the list of LVs. Only `lvs -a` shows it, as an internal LVM LV name.

Expected results:

If a user is presented with an error, it should point to a user-controllable LV and not to one of the internal LV names.
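For completeness, a hedged sketch of the only user-level action that satisfies the check behind this message, using the names from the reproduction above (seven/raid, target VG ten). This is illustrative and not output captured from the report:

```
# Show the hidden rimage/rmeta sub-LVs that make up the RAID 5 LV; the
# bracketed names are internal and cannot be managed directly by the user.
lvs -a -o lv_name,lv_attr,lv_size,devices seven

# Deactivating the top-level, user-visible LV also deactivates its internal
# sub-LVs, which is what the error message is really asking for.
lvchange -an seven/raid

# Retry the split once the whole LV stack is inactive.
vgsplit -n raid seven ten
```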