Bug 1533334 - lvdisplay command with -m switch produces segfault for cached logical volumes and cached pool LVs
Summary: lvdisplay command with -m switch produces segfault for cached logical volumes and cached pool LVs
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.4
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-01-11 05:29 UTC by Nitin Yewale
Modified: 2021-09-03 12:54 UTC
CC List: 10 users

Fixed In Version: lvm2-2.02.177-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-10 15:23:49 UTC
Target Upstream Version:
Embargoed:


Attachments:


Links:
  Red Hat Product Errata RHEA-2018:0853 (last updated 2018-04-10 15:24:41 UTC)

Description Nitin Yewale 2018-01-11 05:29:43 UTC
Description of problem:

The lvdisplay command with the -m switch produces a segfault for cached logical volumes and cached pool LVs.


Version-Release number of selected component (if applicable):
$ grep lvm2 installed-rpms 
lvm2-2.02.171-8.el7.x86_64                                  Fri Dec  1 10:08:36 2017
lvm2-libs-2.02.171-8.el7.x86_64                             Fri Dec  1 10:08:25 2017



How reproducible:
Every time. The issue was also reproduced in our environment; details follow in the next comment. From the customer environment we have an lvdisplay core file for the cache LV, but not for the cache pool LV.

Steps to Reproduce:
1. Create a cache LV
2. Run `lvdisplay -vm /dev/vg/lv`


Actual results:
A segfault occurs and a core file is generated.

Expected results:
No segfault should occur.

Additional info:

Analysis of core file:




Program terminated with signal 11, Segmentation fault.
#0  0x00005641a155c144 in _cache_display (seg=0x5641a3277360) at cache_segtype/cache.c:58
58		if ((n = pool_seg->policy_settings->child))
(gdb) 



(gdb) bt
#0  0x00005641a155c144 in _cache_display (seg=0x5641a3277360) at cache_segtype/cache.c:58
#1  0x00005641a14dbaee in lvdisplay_segments (lv=lv@entry=0x5641a3275580) at display/display.c:663
#2  0x00005641a148d078 in _lvdisplay_single (cmd=cmd@entry=0x5641a30e5020, lv=0x5641a3275580, 
    handle=handle@entry=0x5641a312efb8) at lvdisplay.c:29
#3  0x00005641a14aa578 in process_each_lv_in_vg (cmd=cmd@entry=0x5641a30e5020, vg=vg@entry=0x5641a325aaf0, 
    arg_lvnames=arg_lvnames@entry=0x7ffe1f89bdd0, tags_in=tags_in@entry=0x7ffe1f89bd80, 
    stop_on_error=stop_on_error@entry=0, handle=handle@entry=0x5641a312efb8, 
    check_single_lv=check_single_lv@entry=0x0, 
    process_single_lv=process_single_lv@entry=0x5641a148cff0 <_lvdisplay_single>) at toollib.c:3144
#4  0x00005641a14ab9c4 in _process_lv_vgnameid_list (process_single_lv=0x5641a148cff0 <_lvdisplay_single>, 
    check_single_lv=0x0, handle=0x5641a312efb8, arg_tags=0x7ffe1f89bd80, arg_lvnames=0x7ffe1f89bda0, 
    arg_vgnames=0x7ffe1f89bd90, vgnameids_to_process=0x7ffe1f89bdc0, read_flags=0, cmd=0x5641a30e5020)
    at toollib.c:3639
#5  process_each_lv (cmd=cmd@entry=0x5641a30e5020, argc=argc@entry=1, argv=argv@entry=0x7ffe1f89c2b8, 
    one_vgname=one_vgname@entry=0x0, one_lvname=one_lvname@entry=0x0, read_flags=read_flags@entry=0, 
    handle=0x5641a312efb8, handle@entry=0x0, check_single_lv=check_single_lv@entry=0x0, 
    process_single_lv=process_single_lv@entry=0x5641a148cff0 <_lvdisplay_single>) at toollib.c:3791
#6  0x00005641a148d1ea in lvdisplay (cmd=0x5641a30e5020, argc=1, argv=0x7ffe1f89c2b8) at lvdisplay.c:61
#7  0x00005641a1493595 in lvm_run_command (cmd=cmd@entry=0x5641a30e5020, argc=1, argc@entry=3, argv=0x7ffe1f89c2b8, 
    argv@entry=0x7ffe1f89c2a8) at lvmcmdline.c:2954
#8  0x00005641a14944d3 in lvm2_main (argc=3, argv=0x7ffe1f89c2a8) at lvmcmdline.c:3485
#9  0x00007f5a6479fc05 in __libc_start_main (main=0x5641a1472f80 <main>, argc=3, ubp_av=0x7ffe1f89c2a8, 
    init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffe1f89c298)
    at ../csu/libc-start.c:274
#10 0x00005641a1472fae in _start ()
(gdb) 


The crash happened while executing the following code snippet:

     46 static void _cache_display(const struct lv_segment *seg)
     47 {
     48         const struct dm_config_node *n;
     49         const struct lv_segment *pool_seg =
     50                 seg_is_cache_pool(seg) ? seg : first_seg(seg->pool_lv);
     51 
     52         log_print("  Chunk size\t\t%s",
     53                   display_size(seg->lv->vg->cmd, pool_seg->chunk_size));
     54         log_print("  Metadata format\t%u", pool_seg->cache_metadata_format);
     55         log_print("  Mode\t\t%s", get_cache_mode_name(pool_seg));
     56         log_print("  Policy\t\t%s", pool_seg->policy_name);
     57 
     58         if ((n = pool_seg->policy_settings->child))  <-----
     59                 dm_config_write_node(n, _cache_out_line, NULL);
     60 
     61         log_print(" ");
     62 }
     63 


(gdb) p ((struct lv_segment  *) 0x55cdb9b34400)->policy_settings
$6 = (struct dm_config_node *) 0x0
(gdb) p ((struct lv_segment  *) 0x55cdb9b34400)->policy_settings->child
Cannot access memory at address 0x18


   3490 struct dm_config_node {
   3491         const char *key;
   3492         struct dm_config_node *parent, *sib, *child;
   3493         struct dm_config_value *v;
   3494         int id;
   3495 };
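
This matches the struct layout: on LP64, key, parent, and sib occupy the first 24 bytes of struct dm_config_node, so child sits at offset 0x18; with policy_settings == NULL, pool_seg->policy_settings->child reads address 0x18, exactly the fault shown above. A minimal sketch of the kind of NULL guard that avoids the crash (an illustration against the _cache_display() context shown above, not the verbatim upstream patch):

        /* Illustrative guard only: policy_settings can legitimately be NULL
         * for a cache pool whose policy was never set, so test it before
         * dereferencing ->child. */
        if (pool_seg->policy_settings &&
            (n = pool_seg->policy_settings->child))
                dm_config_write_node(n, _cache_out_line, NULL);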

Comment 2 Nitin Yewale 2018-01-11 05:38:48 UTC
From the test environment we see:


[root@vm252-103 ~]# pvs
  PV         VG            Fmt  Attr PSize   PFree 
  /dev/sda2  rhel_vm253-88 lvm2 a--   74.00g     0 
  /dev/sdb   example_vg    lvm2 a--   <2.00g <2.00g
  /dev/sdd1  example_vg    lvm2 a--  <10.00g <6.00g

[root@vm252-103 ~]# lvcreate -L 100M -n lv_cache_meta example_vg /dev/sdb
  Logical volume "lv_cache_meta" created.

[root@vm252-103 ~]# lvcreate -L 800M -n lv_cache example_vg /dev/sdb
  Logical volume "lv_cache" created.

[root@vm252-103 ~]# lvconvert --type cache-pool --poolmetadata example_vg/lv_cache_meta example_vg/lv_cache
  WARNING: Converting logical volume example_vg/lv_cache and example_vg/lv_cache_meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert example_vg/lv_cache and example_vg/lv_cache_meta? [y/n]: y
  Converted example_vg/lv_cache_cdata to cache pool.

[root@vm252-103 ~]# lvs -a -o +devices
  LV               VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices          
  lv1              example_vg    -wi-a-----   4.00g                                                     /dev/sdd1(0)     
  lv_cache         example_vg    Cwi---C--- 800.00m                                                     lv_cache_cdata(0)
  [lv_cache_cdata] example_vg    Cwi------- 800.00m                                                     /dev/sdb(25)     
  [lv_cache_cmeta] example_vg    ewi------- 100.00m                                                     /dev/sdb(0)      
  [lvol0_pmspare]  example_vg    ewi------- 100.00m                                                     /dev/sdd1(1024)  
  root             rhel_vm253-88 -wi-ao----  70.00g                                                     /dev/sda2(1024)  
  swap             rhel_vm253-88 -wi-ao----   4.00g                                                     /dev/sda2(0)     
[root@vm252-103 ~]# 



[root@vm252-103 ~]# lvdisplay -am /dev/example_vg/lv_cache   <---- cache pool LV
  --- Logical volume ---
  LV Path                /dev/example_vg/lv_cache
  LV Name                lv_cache
  VG Name                example_vg
  LV UUID                yvnuJT-6Jya-2gld-u1WO-7tTR-1ckX-dWbQ0Q
  LV Write Access        read/write
  LV Creation host, time vm252-103.gsslab.pnq2.redhat.com, 2018-01-10 18:57:49 +0530
  LV Pool metadata       lv_cache_cmeta
  LV Pool data           lv_cache_cdata
  LV Status              NOT available
  LV Size                800.00 MiB
  Current LE             200
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
   
  --- Segments ---
  Logical extents 0 to 199:
    Type    cache-pool
    Chunk size    64.00 KiB
    Metadata format 0
  Internal error: Cache pool example_vg/lv_cache has undefined cache mode, using writethrough instead.
    Mode    writethrough
    Policy    (null)
Segmentation fault (core dumped)   <-----------------


[root@vm252-103 ~]# lvdisplay -a /dev/example_vg/lv_cache  <----- cache pool LV     
  --- Logical volume ---
  LV Path                /dev/example_vg/lv_cache
  LV Name                lv_cache
  VG Name                example_vg
  LV UUID                yvnuJT-6Jya-2gld-u1WO-7tTR-1ckX-dWbQ0Q
  LV Write Access        read/write
  LV Creation host, time vm252-103.gsslab.pnq2.redhat.com, 2018-01-10 18:57:49 +0530
  LV Pool metadata       lv_cache_cmeta
  LV Pool data           lv_cache_cdata
  LV Status              NOT available
  LV Size                800.00 MiB
  Current LE             200
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
   
[root@vm252-103 ~]# lvdisplay -am /dev/example_vg/lv1  <-- not yet attached with the cache pool
  --- Logical volume ---
  LV Path                /dev/example_vg/lv1
  LV Name                lv1
  VG Name                example_vg
  LV UUID                LfgUSi-VQLP-Nueh-PYXp-ysCp-GFzL-92flsq
  LV Write Access        read/write
  LV Creation host, time vm252-103.gsslab.pnq2.redhat.com, 2018-01-10 18:41:45 +0530
  LV Status              available
  # open                 0
  LV Size                4.00 GiB
  Current LE             1024
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2
   
  --- Segments ---
  Logical extents 0 to 1023:
    Type    linear
    Physical volume /dev/sdd1
    Physical extents  0 to 1023
   
   
[root@vm252-103 ~]# 


[root@vm252-103 ~]# lvconvert --type cache --cachepool example_vg/lv_cache example_vg/lv1 
Do you want wipe existing metadata of cache pool example_vg/lv_cache? [y/n]: y
  Logical volume example_vg/lv1 is now cached.


[root@vm252-103 ~]# lvdisplay -am /dev/example_vg/lv1
  --- Logical volume ---
  LV Path                /dev/example_vg/lv1
  LV Name                lv1
  VG Name                example_vg
  LV UUID                LfgUSi-VQLP-Nueh-PYXp-ysCp-GFzL-92flsq
  LV Write Access        read/write
  LV Creation host, time vm252-103.gsslab.pnq2.redhat.com, 2018-01-10 18:41:45 +0530
  LV Cache pool name     lv_cache
  LV Cache origin name   lv1_corig
  LV Status              available
  # open                 1
  LV Size                4.00 GiB
  Cache used blocks      0.07%
  Cache metadata blocks  0.14%
  Cache dirty blocks     0.00%
  Cache read hits/misses 0 / 47
  Cache wrt hits/misses  0 / 6
  Cache demotions        0
  Cache promotions       9
  Current LE             1024
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2
   
  --- Segments ---
  Logical extents 0 to 1023:
    Type    cache
    Chunk size    64.00 KiB
    Metadata format 2
    Mode    writethrough
    Policy    smq
Segmentation fault (core dumped)   <-----------------
[root@vm252-103 ~]# 



# grep use_lvmetad /etc/lvm/lvm.conf 
	# See the use_lvmetad comment for a special case regarding filters.
	#     This is incompatible with lvmetad. If use_lvmetad is enabled,
	# Configuration option global/use_lvmetad.
	# while use_lvmetad was disabled, it must be stopped, use_lvmetad
	use_lvmetad = 1


#pvscan --cache ; vgscan --cache ; lvscan --cache

[root@vm252-103 ~]# lvdisplay -vm /dev/example_vg/lv1 
  --- Logical volume ---
  LV Path                /dev/example_vg/lv1
  LV Name                lv1
  VG Name                example_vg
  LV UUID                LfgUSi-VQLP-Nueh-PYXp-ysCp-GFzL-92flsq
  LV Write Access        read/write
  LV Creation host, time vm252-103.gsslab.pnq2.redhat.com, 2018-01-10 18:41:45 +0530
  LV Cache pool name     lv_cache
  LV Cache origin name   lv1_corig
  LV Status              available
  # open                 1
  LV Size                4.00 GiB
  Cache used blocks      0.14%
  Cache metadata blocks  0.14%
  Cache dirty blocks     0.00%
  Cache read hits/misses 10 / 49
  Cache wrt hits/misses  18 / 37
  Cache demotions        0
  Cache promotions       18
  Current LE             1024
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2
   
  --- Segments ---
  Logical extents 0 to 1023:
    Type		cache
    Chunk size		64.00 KiB
    Metadata format	2
    Mode		writethrough
    Policy		smq
Segmentation fault (core dumped)   <-----------------

Comment 4 Zdenek Kabelac 2018-01-11 08:55:04 UTC
This has already been fixed in upstream version 2.02.172:

Commit: 58e075f5fb12d8bce4ebb1c19c9f20b10d984e57

cache: fix lvdisplay output

Comment 8 Roman Bednář 2018-02-12 08:00:45 UTC
Verified.

# lvdisplay -am /dev/vg/lv_cache
  --- Logical volume ---
  LV Path                /dev/vg/lv_cache
  LV Name                lv_cache
  VG Name                vg
  LV UUID                r2EO1I-P5au-yzuV-OwiM-pFKH-r6Nn-mkZhx7
  LV Write Access        read/write
  LV Creation host, time virt-379.cluster-qe.lab.eng.brq.redhat.com, 2018-02-12 08:28:16 +0100
  LV Pool metadata       lv_cache_cmeta
  LV Pool data           lv_cache_cdata
  LV Status              NOT available
  LV Size                800.00 MiB
  Current LE             200
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
   
  --- Segments ---
  Logical extents 0 to 199:
    Type		cache-pool
    Chunk size		64.00 KiB
   

# lvconvert --type cache --cachepool vg/lv_cache vg/lv1
Do you want wipe existing metadata of cache pool vg/lv_cache? [y/n]: y
  Logical volume vg/lv1 is now cached.

# lvdisplay -am /dev/vg/lv1
  --- Logical volume ---
  LV Path                /dev/vg/lv1
  LV Name                lv1
  VG Name                vg
  LV UUID                TthFnP-aUKE-h3G8-3rTe-lR8Z-L6CV-0q4ul7
  LV Write Access        read/write
  LV Creation host, time virt-379.cluster-qe.lab.eng.brq.redhat.com, 2018-02-12 08:56:52 +0100
  LV Cache pool name     lv_cache
  LV Cache origin name   lv1_corig
  LV Status              available
  # open                 0
  LV Size                100.00 MiB
  Cache used blocks      0.02%
  Cache metadata blocks  0.15%
  Cache dirty blocks     0.00%
  Cache read hits/misses 3 / 52
  Cache wrt hits/misses  0 / 0
  Cache demotions        0
  Cache promotions       2
  Current LE             25
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2
   
  --- Segments ---
  Logical extents 0 to 24:
    Type		cache
    Chunk size		64.00 KiB
    Metadata format	2
    Mode		writethrough
    Policy		smq



3.10.0-847.el7.x86_64

lvm2-2.02.177-2.el7    BUILT: Wed Feb  7 17:39:26 CET 2018
lvm2-libs-2.02.177-2.el7    BUILT: Wed Feb  7 17:39:26 CET 2018
lvm2-cluster-2.02.177-2.el7    BUILT: Wed Feb  7 17:39:26 CET 2018
device-mapper-1.02.146-2.el7    BUILT: Wed Feb  7 17:39:26 CET 2018
device-mapper-libs-1.02.146-2.el7    BUILT: Wed Feb  7 17:39:26 CET 2018
device-mapper-event-1.02.146-2.el7    BUILT: Wed Feb  7 17:39:26 CET 2018
device-mapper-event-libs-1.02.146-2.el7    BUILT: Wed Feb  7 17:39:26 CET 2018

Comment 11 errata-xmlrpc 2018-04-10 15:23:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0853

