Bug 908792 - Add support to lvs to display more status information about thin pools
Summary: Add support to lvs to display more status information about thin pools
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.5
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: rc
Assignee: Peter Rajnoha
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On: 1310661
Blocks:
 
Reported: 2013-02-07 14:36 UTC by Zdenek Kabelac
Modified: 2016-05-11 01:14 UTC (History)
CC List: 12 users

Fixed In Version: lvm2-2.02.143-1.el6
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-05-11 01:14:27 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:0964 0 normal SHIPPED_LIVE lvm2 bug fix and enhancement update 2016-05-10 22:57:40 UTC

Description Zdenek Kabelac 2013-02-07 14:36:29 UTC
Description of problem:

lvs should be capable of displaying the actual kernel information about the discards mode in use by the kernel target, and also the read-only status of the thin pool.


Version-Release number of selected component (if applicable):
2.02.98


Comment 5 Peter Rajnoha 2016-01-14 16:00:46 UTC
We already have new characters in the lv_attr field which signify various thin pool states. It's the "health status" part there, from the lvs man page:

 Related to thin pool Logical Volumes: (F)ailed, out of (D)ata space, (M) read only.
                 (F)ailed is set if the thin pool encounters serious failures and hence no further I/O is permitted at all. Out of (D)ata space is set if the thin pool has run out of data space. (M) read only
                 signifies that the thin pool encountered certain types of failures; reads are still possible, but no metadata changes are allowed.
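Since the health-status flag is the 9th character of the lv_attr string, it can be pulled out of scripted lvs output with standard tools. A minimal sketch (the sample attr string "twi-aotzD-" is taken from the verification output later in this bug; the character position is as documented in lvs(8)):

```shell
# Extract the health-status character (9th position) from an lv_attr string.
# "twi-aotzD-" is the attr of a thin pool that has run out of (D)ata space.
attr="twi-aotzD-"
health=$(printf '%s' "$attr" | cut -c9)
case "$health" in
  F) echo "thin pool failed" ;;
  D) echo "thin pool out of data space" ;;
  M) echo "thin pool metadata read only" ;;
  -) echo "healthy" ;;
esac
```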


As for the discards mode actually used in the kernel, we have a new "kernel_discards" field now to distinguish between the value stored in metadata and the value the kernel is actually using:

https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=b82d5ee0926acee37356f5a322edbb4694081699
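In a script, the two values can be compared directly from `--noheadings` output. A minimal sketch, parsing one captured line (the sample values match the POOL row in the verification output later in this bug; the field names assume `lvs --noheadings -o lv_name,discards,kernel_discards`):

```shell
# Compare the metadata discards mode against the mode the kernel is using,
# for one captured `lvs --noheadings -o lv_name,discards,kernel_discards` line.
line="POOL passdown nopassdown"
set -- $line
lv=$1; meta=$2; kernel=$3
if [ "$meta" != "$kernel" ]; then
  echo "$lv: metadata says $meta, kernel uses $kernel"
fi
```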

Comment 11 Zdenek Kabelac 2016-02-18 17:38:30 UTC
OK - there have been a couple of related bugs.

First - whenever you see 'X', it's a sign that something is going wrong - so getting 'X' is likely a reason to open a new BZ.


Now - I've posted a patchset which fixes some parsing issues for the thin and thin-pool targets - so now the 'F' can be properly reported.

The way I trigger this in the lvm2 test suite is to map 'error' blocks right after the header of the metadata device (at the 4th sector, using 2 error sectors).

So here are links for patches:

https://www.redhat.com/archives/lvm-devel/2016-February/msg00078.html
https://www.redhat.com/archives/lvm-devel/2016-February/msg00079.html
https://www.redhat.com/archives/lvm-devel/2016-February/msg00080.html
https://www.redhat.com/archives/lvm-devel/2016-February/msg00081.html
https://www.redhat.com/archives/lvm-devel/2016-February/msg00083.html


Here is the link for actual test:

https://www.redhat.com/archives/lvm-devel/2016-February/msg00082.html


So now - do we need a new BZ, or will we squeeze the fixes into the current release?

Comment 14 Roman Bednář 2016-03-11 10:24:26 UTC
Verified using latest packages.

2.6.32-625.el6.x86_64

lvm2-2.02.143-1.el6    BUILT: Wed Feb 24 14:59:50 CET 2016
lvm2-libs-2.02.143-1.el6    BUILT: Wed Feb 24 14:59:50 CET 2016
lvm2-cluster-2.02.143-1.el6    BUILT: Wed Feb 24 14:59:50 CET 2016
udev-147-2.72.el6    BUILT: Tue Mar  1 13:14:05 CET 2016
device-mapper-1.02.117-1.el6    BUILT: Wed Feb 24 14:59:50 CET 2016
device-mapper-libs-1.02.117-1.el6    BUILT: Wed Feb 24 14:59:50 CET 2016
device-mapper-event-1.02.117-1.el6    BUILT: Wed Feb 24 14:59:50 CET 2016
device-mapper-event-libs-1.02.117-1.el6    BUILT: Wed Feb 24 14:59:50 CET 2016
device-mapper-persistent-data-0.6.2-0.1.rc5.el6    BUILT: Wed Feb 24 14:07:09 CET 2016
cmirror-2.02.143-1.el6    BUILT: Wed Feb 24 14:59:50 CET 2016


Results:

Test "kernel_discards" attribute:

# lvs -o +discards,kernel_discards
  LV         VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Discards KDiscards 
  POOL       vg         twi-aotz--   1.00g             0.00   41.41                            passdown nopassdown
  test_lv1   vg         Vwi-a-tz--   1.00m POOL        0.00                                    passdown           
           



Test "out of (D)ata space" lv_attr:

# dd if=/dev/zero of=/dev/vg/test_lv2 bs=4M count=100
100+0 records in
100+0 records out
419430400 bytes (419 MB) copied, 61.7342 s, 6.8 MB/s

# lvs -a
  WARNING: /dev/vg/test_lv: Thin's thin-pool needs inspection.
  WARNING: /dev/vg/test_lv2: Thin's thin-pool needs inspection.
  LV              VG         Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  POOL            vg         twi-aotzD-    1.00g             100.00 13.77                           
  [POOL_tdata]    vg         Twi-ao----    1.00g                                                    
  [POOL_tmeta]    vg         ewi-ao----    4.00m                                                    
  [lvol0_pmspare] vg         ewi-------    4.00m                                                    
  test_lv         vg         Vwi-a-tz-- 1004.00m POOL        99.20                                  
  test_lv2        vg         Vwi-a-tz-- 1000.00m POOL        2.80                                   
  lv_root         vg_virt024 -wi-ao----    6.79g                                                    
  lv_swap         vg_virt024 -wi-ao----  824.00m                                                    




Test "(M) read only" lv_attr:

---> reached thin pool metadata capacity

# lvcreate -V 1M --name test_lv2 --thin vg/POOL
  device-mapper: message ioctl on (253:4) failed: Operation not supported
  Failed to process thin pool message "delete 488".
  Failed to suspend and send message vg/POOL.
# lvs -a
  LV              VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  POOL            vg         twi-aotzM-   1.00g             0.00   98.83                           
  [POOL_tdata]    vg         Twi-ao----   1.00g    

                                             


Test "(F)ailed" lv_attr:

# lvs -a -o +devices
  LV              VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices        
  POOL            vg         twi-aotz--  10.00g             0.00   0.65                             POOL_tdata(0)  
  [POOL_tdata]    vg         Twi-ao----  10.00g                                                     /dev/sda(3)    
  [POOL_tmeta]    vg         ewi-ao----  12.00m                                                     /dev/sda(2563) 
  [lvol0_pmspare] vg         ewi-------  12.00m                                                     /dev/sda(0)    
  lvol1           vg         Vwi-a-tz--   5.00g POOL        0.00                                                   
  lv_root         vg_virt025 -wi-ao----   6.79g                                                     /dev/vda2(0)   
  lv_swap         vg_virt025 -wi-ao---- 824.00m                                                     /dev/vda2(1737)
# dmsetup suspend vg-POOL_tmeta
# dmsetup load vg-POOL_tmeta --table "0 24576 error 8:0 2048"
# dmsetup resume vg-POOL_tmeta
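The 24576-sector error table covers the whole 12.00m POOL_tmeta LV listed above, since device-mapper tables are expressed in 512-byte sectors. A quick sanity check of that arithmetic:

```shell
# 24576 sectors * 512 bytes/sector = 12 MiB, the size of POOL_tmeta above.
sectors=24576
echo "$(( sectors * 512 / 1024 / 1024 )) MiB"
```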
# lvs -a -o +devices
  LV              VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices        
  POOL            vg         twi-aotzF-  10.00g                                                     POOL_tdata(0)  
  [POOL_tdata]    vg         Twi-ao----  10.00g                                                     /dev/sda(3)    
  [POOL_tmeta]    vg         ewi-ao----  12.00m                                                     /dev/sda(2563) 
  [lvol0_pmspare] vg         ewi-------  12.00m                                                     /dev/sda(0)    
  lvol1           vg         Vwi-a-tzF-   5.00g POOL                                                               
  lv_root         vg_virt025 -wi-ao----   6.79g                                                     /dev/vda2(0)   
  lv_swap         vg_virt025 -wi-ao---- 824.00m  

NOTE: putting the device into a failed state as shown above might result in a kernel panic, as described in bug 1305983 and bug 1310661

Comment 16 errata-xmlrpc 2016-05-11 01:14:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0964.html

