Red Hat Bugzilla – Bug 858117
[RFE] provide a column for pvs to display if the PV is thin device.
Last modified: 2016-01-22 10:16:41 EST
Description of problem:
provide a column for pvs to display if the PV is thin device.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
Did you mean 'lvs' ?
Since the 'pvs' command shows only things like the UUID of the device - we do not display any 'lvm2' properties here - you only get info about the device having lvm2 metadata on it.
If you are using an LV as a 'source' for a PV (i.e. you put LVs on top of LVs), you should use the 'lvs' command - e.g. 'lvs -o+segtype' would display the proper types of individual LVs.
Please specify a use-case you are trying to solve, and why it is important that 'pvs' detects this.
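For illustration, a minimal sketch of the 'lvs -o+segtype' route suggested above: it classifies LVs by segment type. The LV names and the piped-in sample lines are hypothetical stand-ins; on a real system you would feed in the actual output of 'lvs --noheadings -o lv_name,segtype'.

```shell
# Hedged sketch: classify LVs by segment type, as 'lvs -o+segtype'
# would report it. The two sample lines stand in for real lvs output.
flag_thin() {
    while read -r name segtype; do
        case "$segtype" in
            thin|thin-pool) echo "$name: thin ($segtype)" ;;
            *)              echo "$name: not thin ($segtype)" ;;
        esac
    done
}

printf '%s\n' 'lvol0 thin' 'lvol1 linear' | flag_thin
```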
(In reply to comment #3)
> Did you mean 'lvs'?
> Please specify a use-case you are trying to solve,
> and why it is important that 'pvs' detects this.
Actually I mean 'pvs'.
LVM now supports discards and thin provisioning. If the raw device is a thin disk, users can enable 'issue_discards' and set the discards mode to 'passdown' when using thin provisioning.
Currently, users have to check /sys to decide whether the device is a thin disk. I think it would be convenient for users if 'pvs' could check whether the disk is thin.
Anyway, it is low priority.
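For context, a sketch of the kind of /sys check being described. A common indicator is queue/discard_granularity (non-zero means the device accepts discards); the path is passed as an argument here, with a temporary file standing in for the real sysfs entry, so the logic can be shown without specific hardware.

```shell
# Hedged sketch: decide discard support from a sysfs-style file.
# On a real system the file would be e.g.
#   /sys/block/sdb/queue/discard_granularity
supports_discard() {
    gran=$(cat "$1" 2>/dev/null || echo 0)
    if [ "$gran" -gt 0 ]; then echo yes; else echo no; fi
}

tmp=$(mktemp)              # stand-in for the real sysfs file
echo 512 > "$tmp"
supports_discard "$tmp"
rm -f "$tmp"
```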
Exactly what properties are you proposing that we would benefit from exposing through pvs?
How do we obtain each of them? /sys? ioctls?
I know we've been asked to detect SSDs too.
We have proposed creating a new class of tags to hold properties of devices such as these.
Extra information that is available
That document is missing:
0 == SSD, 1 == spinning disk
The bigger question is out of the available information, what would be useful for a user to make an informed decision with regards to lvm and how could it be simplified or automated for the user.
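The 0/1 mapping mentioned above comes from the sysfs 'rotational' flag; a small sketch of reading it, with the path parameterized (the real file is /sys/block/<dev>/queue/rotational) so it can be demonstrated against a stand-in file.

```shell
# Hedged sketch: map the sysfs 'rotational' flag to a disk kind
# (0 == SSD, 1 == spinning disk).
disk_kind() {
    case "$(cat "$1" 2>/dev/null)" in
        0) echo ssd ;;
        1) echo spinning ;;
        *) echo unknown ;;
    esac
}

tmp=$(mktemp)              # stand-in for queue/rotational
echo 0 > "$tmp"
disk_kind "$tmp"           # an SSD reports 0 here
rm -f "$tmp"
```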
Recently, we've added two new reporting fields: lv_layout and lv_role. These help with identification of the LV (in addition to the incomplete lv_attr field, which is quite cryptic, and lv_target_type, which we removed in the end because it did not provide the complete information we needed).
I can imagine we could reuse this logic when an LV is used as a PV - if we detected that the PV is an LV, we'd look at the LV layout identification and report it in a new 'pv_layout' field.
However, such a scheme means looking down the stack, and the stack can be composed of several layers, so there's a question of whether we should identify it completely or just one level down.
The SSD/spinning-disk distinction is a property of the physical disk, while "thin LV used as a PV" (like any other LV layout used for a PV in the stack) is a property of block-device virtualization. We need to differentiate properly between the two.
That suggests another field, e.g. "PV disk type" with values "virtualized/ssd/spinning disk/<any other possible physical disk type>", plus the "pv_layout" field to further describe the "virtualized" types (the LV stacks would be identified there).
Now, one more question emerges here: if we report an LV within the stack used as a PV, shouldn't we also identify MD, mpath, crypt devices, or anything else too? From the user's point of view, I'd say yes, I want this reported and visible in the pvs output. But from the development point of view, this is a kind of duplication of work - we already have tools like lsblk, which is built directly on blkid code and shows the structure pretty nicely.
Though I can also imagine we could read this information from the udev db or reuse libblkid code here (we already link with libblkid and libudev; there's also another RFE to reuse this info and avoid duplicating the work - bug #1094162).
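To illustrate the lsblk point: lsblk's PKNAME column already names the parent kernel device, so the stack beneath an LVM device is directly visible. The device names below are hypothetical; a real invocation would be 'lsblk --noheadings -o NAME,TYPE,PKNAME', and this sketch parses captured sample lines of that shape.

```shell
# Hedged sketch: pick out LVM devices and their parent from
# lsblk-style NAME TYPE PKNAME columns (sample lines, not real devices).
parents_of_lvm() {
    awk '$2 == "lvm" { print $1 " sits on " $3 }'
}

printf '%s\n' \
    'sdb disk -' \
    'vg-thinlv lvm sdb' | parents_of_lvm
```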
Another thing is transport type - iSCSI, FCoE - do we want to report these as well? (That is going too far, I think.)
- do we want only SSD/spinning disk/LV identification, or anything that blkid can identify or that is written in the udev db (i.e. reuse the blkid/udev db information)?
- do we want to reflect the whole stack of devices, or just one level down from the PV?
- if we detect the stack, what makes other block devices unimportant if we decide to report only LVs used as PVs?
- aren't we duplicating code/work that is done elsewhere?
- and finally, is this really something LVM should bother with? Shouldn't it be part of a more general tool that covers block devices fully, concentrating information from several sources, with LVM being just one of them?
I'm still not sure what is being asked for here. Following a strict reading of the request, a column is wanted that displays when a PV is on top of a thin LV. Is this really what is wanted, and what is the use-case? It isn't that often that we do layering in this way.
OTOH, if you are asking for a way to determine all the PVs /under/ a thinLV, then there are ways to do this using the new select code in LVM, like this:
# lvs -S 'lv_role=pool && lv_role=thin' -a -o name,devices
(In reply to Jonathan Earl Brassow from comment #14)
> I'm still not sure what is being asked for here. Following a strict reading
> of the request, a column is wanted to display when a PV is on top of a
> thinLV. Is this really what is wanted and what is the use-case? It isn't
> that often that we do layering in this way.
Yes, and my argument in comment #10 was that if we display whether a PV is on a thin LV, then we should just as well display *any* type of LV the PV is on; it should not be limited to thin LV (the PV may be on top of a mirror/raid LV or any other top-level LV type).
But for that, we'd need the ability to report the stack properly (at least one level down). We don't do that at the moment - it would require enhancement.
> OTOH, if you are asking for a way to determine all the PVs /under/ a thinLV,
> then there are ways to do this using the new select code in LVM, like this:
> # lvs -S 'lv_role=pool && lv_role=thin' -a -o name,devices
> LV Devices
> [thinpool_tdata] /dev/sdb1(27652)
> [thinpool_tmeta] /dev/sdi1(0)
Nope, this BZ asks for the opposite - we have a PV and we need to know what is below that PV (if there's a stack of LVs below, we'd like to report the LV type at least one level down the stack).
If we wanted to be more general, we could report even more of what's beneath the PV (via udev/libblkid) - e.g. whether this PV is on MD, iSCSI... But I'm not sure this makes sense when we already have tools like lsblk that display this info clearly.
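For the "one level down" idea, the kernel already exposes this for device-mapper devices: /sys/block/<dm-name>/slaves/ holds one entry per device directly beneath. A sketch with the directory parameterized, using a temporary stand-in directory instead of a real dm device:

```shell
# Hedged sketch: list the devices one level below a dm device by
# reading its sysfs 'slaves' directory.
one_level_down() {
    ls "$1" 2>/dev/null
}

d=$(mktemp -d)             # stand-in for /sys/block/dm-0/slaves
touch "$d/sda1" "$d/sdb1"
one_level_down "$d" | sort
rm -rf "$d"
```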
We will not be providing methods that allow users to peer through PVs to determine the layering below. If you are looking for a way to display the PVs in a thin-LV, then refer to comment 14.