Bug 820646 - should support reporting the PE ranges and devices of a thin LV
Summary: should support reporting the PE ranges and devices of a thin LV
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Fedora
Classification: Fedora
Component: lvm2
Version: 17
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Zdenek Kabelac
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-05-10 15:04 UTC by Xiaowei Li
Modified: 2015-01-27 00:10 UTC
CC List: 12 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-08-01 18:14:30 UTC
Type: Bug
Embargoed:



Description Xiaowei Li 2012-05-10 15:04:52 UTC
Description of problem:
lvs should support displaying the PE ranges and devices of a thin LV.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Create the thin pool and thin LV:
# lvcreate -V8m -T -l10 vg/pool -n lv1
2. Allocate the thin LV by filling it with data:
# dd if=/dev/zero of=/dev/vg/lv1
3. Check the seg_pe_ranges and devices fields:
# lvs vg -a -olv_name,pool_lv,lv_size,data_percent,seg_pe_ranges,devices
lvs  LV           Pool LSize  Data%  PE Ranges      Devices      
lvs  lv1          pool  8.00m 100.00                             
lvs  pool              40.00m  20.00 pool_tdata:0-9 pool_tdata(0)
lvs  [pool_tdata]      40.00m        /dev/vdb:0-9   /dev/vdb(0)  
lvs  [pool_tmeta]       4.00m        /dev/vde:0-0   /dev/vde(0)  

  
Actual results:
The PE ranges and devices of lv1 are not reported.

Expected results:
The PE ranges and devices of lv1 should be reported.
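
For example, one possible presentation (a hypothetical mock-up, not actual lvs output) would be to repeat the pool's values on the lv1 line:
lvs  LV           Pool LSize  Data%  PE Ranges      Devices
lvs  lv1          pool  8.00m 100.00 pool_tdata:0-9 pool_tdata(0)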

Additional info:
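For what it's worth, the thin target itself does expose how much of lv1 is mapped, even though lvs leaves the PE Ranges/Devices fields empty. A minimal sketch, assuming the usual vg-lv device-mapper naming:
# dmsetup status vg-lv1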

Comment 1 Alasdair Kergon 2012-05-10 15:27:58 UTC
All it could do would be to copy the values from the 'pool' line, but I think that could mislead people, and it's better to leave the fields empty.

(Fine-grained volume allocations will be available via the persistent data tools package eventually.)
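
For reference, a rough sketch of how those tools can expose the per-device block mappings (the pool name, dm paths and options below are assumptions for illustration, not taken from this report): reserve a metadata snapshot on the live pool, dump it as XML, then release it.
# dmsetup message /dev/mapper/vg-pool-tpool 0 reserve_metadata_snap
# thin_dump --metadata-snap /dev/mapper/vg-pool_tmeta
# dmsetup message /dev/mapper/vg-pool-tpool 0 release_metadata_snap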

Comment 2 Zdenek Kabelac 2012-05-10 16:12:24 UTC
While comparing this output with a virtual snapshot, there are some things that look maybe a bit 'incompatible':

# lvs -a -olv_name,pool_lv,lv_size,data_percent,seg_pe_ranges,devices
LV              Pool LSize   Data%  PE Ranges          Devices        
lvol0           thin 100,00M   0,00                                   
lvol1                 10,00M   0,00 /dev/loop0:192-351 /dev/loop0(192)
[lvol1_vorigin]        1,00G                                          
thin                  10,00M   0,00 thin_tdata:0-159   thin_tdata(0)  
[thin_tdata]          10,00M        /dev/loop0:0-159   /dev/loop0(0)  
[thin_tmeta]           2,00M        /dev/loop0:160-191 /dev/loop0(160)

# lvs -a
LV              VG   Attr     LSize   Pool Origin          Data%  
lvol0           mvg  Vwi-a-tz 100,00M thin                   0,00 
lvol1           mvg  swi-a-s-  10,00M      [lvol1_vorigin]   0,00 
[lvol1_vorigin] mvg  owi-a-s-   1,00G                             
thin            mvg  twa-a-tz  10,00M                        0,00 
[thin_tdata]    mvg  Twi-aot-  10,00M                                                    
[thin_tmeta]    mvg  ewi-aot-   2,00M 


So while the device lvol1 is usable by the user, it's reported with size 10MB, but the user may use 1GB (i.e. this is the size reported by fdisk), which might be a source of confusion.

(The other way around would be to display 1G for lvol1 and 10MB for the hidden lvol1_vorigin.)

But comparing the output here, we report used devices and PE ranges for lvol1, and nothing for the zero origin lvol1_vorigin.

But it's probably hard to treat these two cases as equal here.

For now I'd prefer to keep the lvs output as is. As a side note, a device which is 0% in use does not actually consume any device space or any PEs.
If there were a good reason to copy the 'pool' parameters to the thin line, it would be easy to add, but IMHO the dense output is more readable for the user.
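
For completeness, a rough sketch of the commands that could produce a layout like the one above (the VG name and sizes are taken from the output; the exact extent allocation may differ):
# lvcreate -L 10M -T mvg/thin
# lvcreate -V 100M -T mvg/thin -n lvol0
# lvcreate -s -L 10M --virtualsize 1G -n lvol1 mvg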

Comment 3 Xiaowei Li 2012-05-11 02:57:09 UTC
(In reply to comment #2)
> While comparing this output with  virtual snapshot - there are some thing that
> looks maybe a bit 'incompatible':
> 
> # lvs -a -olv_name,pool_lv,lv_size,data_percent,seg_pe_ranges,devices
> LV              Pool LSize   Data%  PE Ranges          Devices        
> lvol0           thin 100,00M   0,00                                   
> lvol1                 10,00M   0,00 /dev/loop0:192-351 /dev/loop0(192)
> [lvol1_vorigin]        1,00G                                          
> thin                  10,00M   0,00 thin_tdata:0-159   thin_tdata(0)  
> [thin_tdata]          10,00M        /dev/loop0:0-159   /dev/loop0(0)  
> [thin_tmeta]           2,00M        /dev/loop0:160-191 /dev/loop0(160)
> 
> # lvs -a
> LV              VG   Attr     LSize   Pool Origin          Data%  
> lvol0           mvg  Vwi-a-tz 100,00M thin                   0,00 
> lvol1           mvg  swi-a-s-  10,00M      [lvol1_vorigin]   0,00 
> [lvol1_vorigin] mvg  owi-a-s-   1,00G                             
> thin            mvg  twa-a-tz  10,00M                        0,00 
> [thin_tdata]    mvg  Twi-aot-  10,00M                                           
> [thin_tmeta]    mvg  ewi-aot-   2,00M 
> 
> 
> So while device lvol1 is usable by user - it's reported with size 10MB, but
> user may use 1GB (i.e. this size is report by fdisk) - which might be a source
> of user's confusion.
> 
> (Other way around would be to display 1G for lvol1 and for hidden lvol1_vorigin
> use 10MB)
> 
> But comparing the output here - we report used devices and PE ranges for lvol1
> - and nothing for zero origin lvol1_vorigin. 
> 
> But it's probably hard to put this 2 case equal here.
>
> For now I'd prefer to keep the lvs as is - as a side note - device which is 0%
> in use actually does not eat any device and any PE.
> If there would be good reason to copy 'pool' parameter to thin line, it's easy
> to add - but IMHO dense output makes it more readable for the user.

I prefer to report lvol1_vorigin as 10M (the actual allocated size) and lvol1 as 1G (the virtual size), since lvol1 is the LV used by users.

Currently the output is as below:
# lvs -a vg -olv_name,lv_size,data_percent,origin,origin_size
lvs  LV            LSize  Data%  Origin        OSize 
lvs  lv2            8.00m  22.66 [lv2_vorigin] 20.00m
lvs  [lv2_vorigin] 20.00m                      20.00m

but it would be better if it looked like this:
# lvs -a vg -olv_name,lv_size,data_percent,origin,origin_size
lvs  LV            LSize  Data%  Origin        OSize 
lvs  lv2           20.00m 9.00   [lv2_vorigin] 20.00m
lvs  [lv2_vorigin] 8.00m  22.66               

Also, if there were another field, such as origin_data_percent, to display the 22.66%, users would not need to use lvs -a to monitor whether the origin is full.
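
Put differently, without such a field the monitoring keeps looking something like this (a minimal sketch; the interval is arbitrary):
# watch -n 60 'lvs -a --noheadings -o lv_name,data_percent vg'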

Comment 4 Xiaowei Li 2012-05-11 03:07:43 UTC
(In reply to comment #1)
> All it could do would be copy the values from the 'pool' line, but I think that
> could mislead people and it's better to leave the fields empty.
> 
> (Fine-grained volume allocations will be available via the persistent data
> tools package eventually.)

Currently I also don't have a good reason why lvs must display the seg_pe_ranges of a thin LV.

So let's mark this BZ as low priority.

Comment 5 Alasdair Kergon 2012-05-11 12:00:20 UTC
(In reply to comment #2)

> So while device lvol1 is usable by user - it's reported with size 10MB, but
> user may use 1GB (i.e. this size is report by fdisk) - which might be a source
> of user's confusion.

That's just how that field has always been defined - the amount of actual data that can be written to the device.  (Look at the original snapshots.  Look at virtual devices.)  We can't change lv_size.  But I think we should consider adding a display field corresponding to --virtualsize, lv_virtual_size.
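
Until such a field exists, the virtual size can at least be read from the block device itself, for example (a sketch using the lvol1 example from comment #2; --getsize64 prints the device size in bytes):
# blockdev --getsize64 /dev/mvg/lvol1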

Comment 6 Fedora End Of Life 2013-07-04 06:38:04 UTC
This message is a reminder that Fedora 17 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 17. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '17'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 17's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that
we may not be able to fix it before Fedora 17 reaches end of life. If you
would still like to see this bug fixed and are able to reproduce it
against a later version of Fedora, you are encouraged to change the
'version' to a later Fedora version prior to Fedora 17's end of life.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 7 Fedora End Of Life 2013-08-01 18:14:36 UTC
Fedora 17 changed to end-of-life (EOL) status on 2013-07-30. Fedora 17 is 
no longer maintained, which means that it will not receive any further 
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of 
Fedora please feel free to reopen this bug against that version.

Thank you for reporting this bug and we are sorry it could not be fixed.

