Description of problem:
This worked in the previous build (2.02.108-1.el6 BUILT: Thu Jul 24 10:29:50 CDT 2014)
[root@host-049 ~]# lvs -o lv_volume_type
Current build (2.02.109-1.el6 BUILT: Tue Aug 5 10:36:23 CDT 2014)
[root@host-025 ~]# lvs -o lv_volume_type
Unrecognised field: lv_volume_type
[root@host-025 ~]# echo $?
That field was not properly completed.
We're currently reworking this field. It will provide more detailed information than the previous implementation, and it will also cover combinations with the underlying layout used for the volume (for example, raid + thin pool metadata when the thin pool metadata volume is itself a raid volume; previously this was reported only as "thin pool metadata", i.e. only the top-level view).
I expect this change to appear in the next RHEL6 build (...so by next Tuesday).
There are new lv_layout and lv_type fields now that replace the original lv_volume_type field. These two fields make it easier to identify the underlying layout used and the exact type of the LV.
The two new fields are defined as string lists, which also makes them easier to use with -S/--select (see the comment in the commit).
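To make the "string list" idea concrete, here is a rough illustration (my own sketch, not LVM's code) of what matching a single item against such a field involves: the value is a comma-separated list, so a match means "the list contains this item", not a plain substring test — "thin" must not match "thinorigin", hence the anchors around the item.

```shell
# has_item <list> <item> — succeeds if the comma-separated list contains item.
# (^|,) and (,|$) anchor the item to list-element boundaries.
has_item() {
    echo "$1" | grep -Eq "(^|,)$2(,|\$)"
}

has_item "origin,thinorigin" "origin" && echo "match"     # prints "match"
has_item "origin,thinorigin" "thin"   || echo "no match"  # prints "no match"
```

Selection criteria (-S) apply this kind of per-item matching natively; see lvm(8) for the exact selection syntax on string list fields.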
I'll write up some notes about possible combinations...
An enhancement for this would be to have all the types/layouts also reported as a combination of individual fields, which is probably better for machine parsing of the complete report without using selection criteria. But this "per-property/individual field" approach is a bit more tricky to make unique, so that the layout and type are properly identified without a chance of incorrectly inferring a different type by combining these separate fields (it needs to properly mark layout and type somehow, like the lv_layout and lv_type fields do). So I'd leave this for the next update as an enhancement, as we're running out of time for 6.6.
We've refined this a bit more; the scheme we use is (as described in the commit message https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=f4e56b28299680783b8375235bdd2bd48a9934e5):
LAYOUTS ("how the LV is laid out"):
[linear] (all segments have number of stripes = 1)
[striped] (all segments have number of stripes > 1)
[linear,striped] (mixed linear and striped)
raid (raid layout always reported together with raid level, raid layout == image + metadata LVs underneath that make up raid LV)
[raid,raid5] (exact sublayout not specified during creation - default one used - raid5_ls)
[raid,raid6] (exact sublayout not specified during creation - default one used - raid6_zr)
[mirror] (mirror layout == log + image LVs underneath that make up mirror LV)
thin (thin layout always reported together with sublayout)
[thin,sparse] (thin layout == allocated out of thin pool)
[thin,pool] (thin pool layout == data + metadata volumes underneath that make up thin pool LV, not intended for direct use!!!)
[cache] (cache layout == allocated out of cache pool in conjunction with cache origin)
[cache,pool] (cache pool layout == data + metadata volumes underneath that make up cache pool LV, not intended for direct use!!!)
[virtual] (virtual layout == not hitting disk underneath, currently this layout denotes only 'zero' device used for origin,thickorigin role)
[unknown] (either error state or missing recognition for such layout)
ROLES ("what's the purpose or use of the LV - what is its role"):
- each LV has at least one of these two roles:
[public] (public LV that users may use freely to write their data to)
[private] (private LV that LVM maintains; not supposed to be directly used by user to write his data to)
- and then some special-purpose roles in addition to that:
[origin,thickorigin] (origin for thick-style snapshot; "thick" as opposed to "thin")
[origin,multithickorigin] (there are more than 2 thick-style snapshots for this origin)
[origin,thinorigin] (origin for thin snapshot)
[origin,multithinorigin] (there are more than 2 thin snapshots for this origin)
[origin,extorigin] (external origin for thin snapshot)
[origin,multiextorigin] (there are more than 2 thin snapshots using this external origin)
[origin,cacheorigin] (cache origin)
[snapshot,thicksnapshot] (thick-style snapshot; "thick" as opposed to "thin")
[snapshot,thinsnapshot] (thin-style snapshot)
[raid,metadata] (raid metadata LV)
[raid,image] (raid image LV)
[mirror,image] (mirror image LV)
[mirror,log] (mirror log LV)
[pvmove] (pvmove LV)
[thin,pool,data] (thin pool data LV)
[thin,pool,metadata] (thin pool metadata LV)
[cache,pool,data] (cache pool data LV)
[cache,pool,metadata] (cache pool metadata LV)
[pool,spare] (pool spare LV - a common role for an LV that is used for both thin and cache pool repairs)
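The role list makes compound queries straightforward. As a sketch of the kind of query -S is meant to serve (e.g. "all private metadata LVs"), here is the same filter done by hand over hypothetical sample data; the LV names and roles below are illustrative, not from a real system:

```shell
# Hypothetical sample resembling `lvs --noheadings -o lv_name,lv_role` output.
sample="lvol0_pmspare private,pool,spare
thin_pool_tmeta private,thin,pool,metadata
mirror_mimage_0 private,mirror,image
root public"

# Print LVs whose role list contains both "private" and "metadata";
# anchors keep the match on whole list items only.
echo "$sample" | awk '$2 ~ /(^|,)metadata(,|$)/ && $2 ~ /(^|,)private(,|$)/ {print $1}'
# prints: thin_pool_tmeta
```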
The new reporting fields are "lv_layout" and "lv_role".
[root@virt-147 ~]# lvs -a -olv_layout
[root@virt-147 ~]# lvs -a -olv_role
[root@virt-147 ~]# lvs -a -o+lv_role
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Role
[lvol0_pmspare] vg ewi------- 4.00m private,pool,spare
mirror vg mwi-a-m--- 1.00g mirror_mlog 70.31 public
[mirror_mimage_0] vg Iwi-aom--- 1.00g private,mirror,image
[mirror_mimage_1] vg Iwi-aom--- 1.00g private,mirror,image
[mirror_mlog] vg lwi-aom--- 4.00m private,mirror,log
thin_pool vg twi---tz-- 1.00g private
[thin_pool_tdata] vg Twi------- 1.00g private,thin,pool,data
[thin_pool_tmeta] vg ewi------- 4.00m private,thin,pool,metadata
lv_root vg_virt147 -wi-ao---- 6.71g public
lv_swap vg_virt147 -wi-ao---- 816.00m public
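For machine parsing, a report like the one above splits cleanly when requested with --noheadings and a fixed --separator: each line becomes name / layout / role, the latter two being comma-separated string lists. A minimal sketch, with sample data standing in for real `lvs --noheadings --separator ';' -o lv_name,lv_layout,lv_role` output (the LVs below are illustrative):

```shell
# Hypothetical separator-delimited report: lv_name;lv_layout;lv_role
report="mirror;mirror;public
mirror_mimage_0;linear;private,mirror,image
thin_pool;thin,pool;private
lv_root;linear;public"

# List the public LVs by matching "public" as a whole item of the role list.
echo "$report" | awk -F';' '$3 ~ /(^|,)public(,|$)/ {print $1}'
# prints: mirror
#         lv_root
```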
Marking VERIFIED with:
lvm2-2.02.111-2.el6 BUILT: Mon Sep 1 13:46:43 CEST 2014
lvm2-libs-2.02.111-2.el6 BUILT: Mon Sep 1 13:46:43 CEST 2014
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.