Bug 1609914 - re-configuring storage out from under can result in all lvm cmds segfaulting
Summary: re-configuring storage out from under can result in all lvm cmds segfaulting
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.6
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: David Teigland
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-07-30 19:26 UTC by Corey Marthaler
Modified: 2021-09-03 12:55 UTC (History)
CC List: 8 users

Fixed In Version: lvm2-2.02.187-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-09-29 19:55:48 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)


Links
System:       Red Hat Product Errata
ID:           RHBA-2020:3927
Private:      0
Priority:     None
Status:       None
Summary:      None
Last Updated: 2020-09-29 19:56:28 UTC

Description Corey Marthaler 2018-07-30 19:26:13 UTC
Description of problem:
I can't remember exactly what I did here, but I believe it had to do with a test re-partitioning storage that had existing lvmlockd volumes on it. I believe I then tore down the cluster, force-removed the lvm storage, and was left with this state. The next step will be to dd zeros over the volume headers...


[root@mckinley-03 ~]# lvs
  WARNING: lvmlockd process is not running.
  Reading without shared global lock.
  Couldn't find device with uuid hBcXA2-HEOe-5WeV-Sm0e-iX8Y-JSa2-I8AaKO.
  Couldn't find device with uuid iFk5mI-OX2H-Q18A-7h9A-4LG1-UPXE-gL3S0H.
  Couldn't find device with uuid WqtKhP-AIOO-N0o5-Gq6B-LR1L-Dc5U-XpTgYj.
  Couldn't find device with uuid QjSZ1B-PY78-41u8-8f6O-EHL0-PvHH-oXgzdy.
  Couldn't find device with uuid uWYlQ0-nZMd-1Tk8-z4ek-kkBa-xIbn-iBhynX.
  WARNING: Device for PV hBcXA2-HEOe-5WeV-Sm0e-iX8Y-JSa2-I8AaKO not found or rejected by a filter.
  WARNING: Device for PV iFk5mI-OX2H-Q18A-7h9A-4LG1-UPXE-gL3S0H not found or rejected by a filter.
  WARNING: Device for PV WqtKhP-AIOO-N0o5-Gq6B-LR1L-Dc5U-XpTgYj not found or rejected by a filter.
  WARNING: Device for PV QjSZ1B-PY78-41u8-8f6O-EHL0-PvHH-oXgzdy not found or rejected by a filter.
  WARNING: Device for PV uWYlQ0-nZMd-1Tk8-z4ek-kkBa-xIbn-iBhynX not found or rejected by a filter.
Segmentation fault (core dumped)

[  320.094926] pvscan[3098]: segfault at 10 ip 000055b00888e179 sp 00007ffe2d45adf0 error 4 in lvm[55b0087d9000+1da000]

Core was generated by `pvscan'.
Program terminated with signal 11, Segmentation fault.
#0  _drop_bad_aliases (dev=0x0) at label/label.c:561
561             int major = (int)MAJOR(dev->dev);
(gdb) bt
#0  _drop_bad_aliases (dev=0x0) at label/label.c:561
#1  _scan_list (f=f@entry=0x55b009724f60, devs=devs@entry=0x7ffe2d45b0c0, failed=failed@entry=0x0, cmd=0x55b0096e9020) at label/label.c:727
#2  0x000055b00888e8b9 in label_scan_devs (cmd=cmd@entry=0x55b0096e9020, f=0x55b009724f60, devs=devs@entry=0x7ffe2d45b0c0) at label/label.c:905
#3  0x000055b00890a6f4 in _lvmetad_pvscan_vg (fmt=0x55b0097100f0, vgid=0x55b0097b8b90 "4L5amDJiMfFGqdlX681IRpjnRkfKuoys", vg=0x55b0097bb1e0, cmd=0x55b0096e9020) at cache/lvmetad.c:1883
#4  lvmetad_vg_lookup (cmd=cmd@entry=0x55b0096e9020, vgname=vgname@entry=0x55b0097b8bb8 "raid_sanity", vgid=vgid@entry=0x55b0097b8b90 "4L5amDJiMfFGqdlX681IRpjnRkfKuoys") at cache/lvmetad.c:1096
#5  0x000055b0088bd2b8 in _vg_read (cmd=cmd@entry=0x55b0096e9020, vgname=vgname@entry=0x55b0097b8bb8 "raid_sanity", vgid=vgid@entry=0x55b0097b8b90 "4L5amDJiMfFGqdlX681IRpjnRkfKuoys", 
    write_lock_held=write_lock_held@entry=0, lockd_state=lockd_state@entry=4, warn_flags=warn_flags@entry=1, consistent=consistent@entry=0x7ffe2d45b50c, precommitted=precommitted@entry=0)
    at metadata/metadata.c:3783
#6  0x000055b0088bdf6c in vg_read_internal (cmd=cmd@entry=0x55b0096e9020, vgname=vgname@entry=0x55b0097b8bb8 "raid_sanity", vgid=vgid@entry=0x55b0097b8b90 "4L5amDJiMfFGqdlX681IRpjnRkfKuoys", 
    write_lock_held=write_lock_held@entry=0, lockd_state=lockd_state@entry=4, warn_flags=warn_flags@entry=1, consistent=consistent@entry=0x7ffe2d45b50c) at metadata/metadata.c:4519
#7  0x000055b0088becd0 in _vg_lock_and_read (lockd_state=4, read_flags=262144, status_flags=0, lock_flags=33, vgid=0x55b0097b8b90 "4L5amDJiMfFGqdlX681IRpjnRkfKuoys", 
    vg_name=0x55b0097b8bb8 "raid_sanity", cmd=0x55b0096e9020) at metadata/metadata.c:5523
#8  vg_read (cmd=cmd@entry=0x55b0096e9020, vg_name=vg_name@entry=0x55b0097b8bb8 "raid_sanity", vgid=vgid@entry=0x55b0097b8b90 "4L5amDJiMfFGqdlX681IRpjnRkfKuoys", read_flags=read_flags@entry=262144, 
    lockd_state=4) at metadata/metadata.c:5631
#9  0x000055b008842007 in _process_pvs_in_vgs (cmd=cmd@entry=0x55b0096e9020, read_flags=read_flags@entry=262144, all_vgnameids=all_vgnameids@entry=0x7ffe2d45b8a0, 
    all_devices=all_devices@entry=0x7ffe2d45b8b0, arg_devices=arg_devices@entry=0x7ffe2d45b880, arg_tags=arg_tags@entry=0x7ffe2d45b860, process_all_pvs=process_all_pvs@entry=1, 
    handle=handle@entry=0x55b00972d988, process_single_pv=process_single_pv@entry=0x55b008836490 <_pvscan_single>, process_all_devices=0) at toollib.c:4383
#10 0x000055b008846bcc in process_each_pv (cmd=cmd@entry=0x55b0096e9020, argc=argc@entry=0, argv=argv@entry=0x7ffe2d45c120, only_this_vgname=only_this_vgname@entry=0x0, 
    all_is_set=all_is_set@entry=0, read_flags=262144, read_flags@entry=0, handle=handle@entry=0x55b00972d988, process_single_pv=process_single_pv@entry=0x55b008836490 <_pvscan_single>)
    at toollib.c:4544
#11 0x000055b008837750 in pvscan (cmd=0x55b0096e9020, argc=<optimized out>, argv=0x7ffe2d45c120) at pvscan.c:718
#12 0x000055b00882d9ab in lvm_run_command (cmd=0x55b0096e9020, argc=0, argv=0x7ffe2d45c120) at lvmcmdline.c:3004
#13 0x000055b00882eb1e in lvm2_main (argc=1, argv=0x7ffe2d45c118) at lvmcmdline.c:3581
#14 0x00007f88137063d5 in __libc_start_main (main=0x55b00880ad90 <main>, argc=1, argv=0x7ffe2d45c118, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffe2d45c108)
    at ../csu/libc-start.c:266
#15 0x000055b00880adbe in _start ()


Version-Release number of selected component (if applicable):
3.10.0-927.el7.x86_64

lvm2-2.02.180-1.el7    BUILT: Fri Jul 20 12:21:35 CDT 2018
lvm2-libs-2.02.180-1.el7    BUILT: Fri Jul 20 12:21:35 CDT 2018
lvm2-cluster-2.02.180-1.el7    BUILT: Fri Jul 20 12:21:35 CDT 2018
lvm2-lockd-2.02.180-1.el7    BUILT: Fri Jul 20 12:21:35 CDT 2018
lvm2-python-boom-0.9-4.el7    BUILT: Fri Jul 20 12:23:30 CDT 2018
cmirror-2.02.180-1.el7    BUILT: Fri Jul 20 12:21:35 CDT 2018
device-mapper-1.02.149-1.el7    BUILT: Fri Jul 20 12:21:35 CDT 2018
device-mapper-libs-1.02.149-1.el7    BUILT: Fri Jul 20 12:21:35 CDT 2018
device-mapper-event-1.02.149-1.el7    BUILT: Fri Jul 20 12:21:35 CDT 2018
device-mapper-event-libs-1.02.149-1.el7    BUILT: Fri Jul 20 12:21:35 CDT 2018
device-mapper-persistent-data-0.7.3-3.el7    BUILT: Tue Nov 14 05:07:18 CST 2017

Comment 2 Corey Marthaler 2018-09-14 16:10:41 UTC
Reproduced w/ the latest rpms as well.

[root@harding-03 ~]# pvremove -ff --config global/use_lvmlockd=0 /dev/mapper/mpath[abcdefgh]1

[root@harding-03 ~]# lvs
  WARNING: lvmlockd process is not running.
  Reading without shared global lock.
  Couldn't find device with uuid 534TzZ-83Fq-eABd-inv0-aloj-5THq-PuE7yN.
  Couldn't find device with uuid gpMdby-smdx-qYU7-1tiL-JYs2-zS7t-wOcePY.
  Couldn't find device with uuid v4nNfA-qXiq-j0Eo-NWb9-lZZw-eRxD-5Edxcl.
  Couldn't find device with uuid kzHlb5-OWrX-xWwk-6b2R-psnD-2RBE-2SqLl4.
  WARNING: Device for PV kzHlb5-OWrX-xWwk-6b2R-psnD-2RBE-2SqLl4 not found or rejected by a filter.
Segmentation fault
[root@harding-03 ~]# pvs
  WARNING: lvmlockd process is not running.
  Reading without shared global lock.
  Couldn't find device with uuid 534TzZ-83Fq-eABd-inv0-aloj-5THq-PuE7yN.
  Couldn't find device with uuid gpMdby-smdx-qYU7-1tiL-JYs2-zS7t-wOcePY.
  Couldn't find device with uuid v4nNfA-qXiq-j0Eo-NWb9-lZZw-eRxD-5Edxcl.
  Couldn't find device with uuid kzHlb5-OWrX-xWwk-6b2R-psnD-2RBE-2SqLl4.
  WARNING: Device for PV kzHlb5-OWrX-xWwk-6b2R-psnD-2RBE-2SqLl4 not found or rejected by a filter.
Segmentation fault



lvm2-2.02.180-8.el7    BUILT: Mon Sep 10 04:45:22 CDT 2018
lvm2-libs-2.02.180-8.el7    BUILT: Mon Sep 10 04:45:22 CDT 2018
lvm2-cluster-2.02.180-8.el7    BUILT: Mon Sep 10 04:45:22 CDT 2018
lvm2-lockd-2.02.180-8.el7    BUILT: Mon Sep 10 04:45:22 CDT 2018
lvm2-python-boom-0.9-11.el7    BUILT: Mon Sep 10 04:49:22 CDT 2018
cmirror-2.02.180-8.el7    BUILT: Mon Sep 10 04:45:22 CDT 2018
device-mapper-1.02.149-8.el7    BUILT: Mon Sep 10 04:45:22 CDT 2018
device-mapper-libs-1.02.149-8.el7    BUILT: Mon Sep 10 04:45:22 CDT 2018
device-mapper-event-1.02.149-8.el7    BUILT: Mon Sep 10 04:45:22 CDT 2018
device-mapper-event-libs-1.02.149-8.el7    BUILT: Mon Sep 10 04:45:22 CDT 2018
device-mapper-persistent-data-0.7.3-3.el7    BUILT: Tue Nov 14 05:07:18 CST 2017

Comment 3 Corey Marthaler 2019-08-14 15:57:22 UTC
This doesn't require lvmlockd. 

I think this may have to do with mpath, or with partitioned devices that hold multiple PVs (e.g. /dev/mapper/mpatha1 and /dev/mapper/mpatha2) where only one of them is in a VG; when /dev/mapper/mpatha fails, lvm loses a PV in the VG as well as an unused PV. I'm not positive, though.

============================================================
Iteration 1 of 1 started at Wed Aug 14 10:17:01 CDT 2019
============================================================
SCENARIO (raid1) - [degraded_upconversion_attempt]
Create a raid, fail one of the legs to enter a degraded state, and then attempt an upconversion
lvcreate  --type raid1 -m 1 -n degraded_upconvert -L 100M raid_sanity /dev/mapper/mpatha1 /dev/mapper/mpathg1

primary fail=/dev/mapper/mpatha1
paths=sdl sdt sdab sdd
Failing path sdl on harding-03
Failing path sdt on harding-03
Failing path sdab on harding-03
Failing path sdd on harding-03
Verifying that this VG is now corrupt
pvs /dev/mapper/mpatha1
  Couldn't find device with uuid 8Ayfsv-cKiD-UVvX-ctka-u4pz-Utlq-MIWQDc.
  Error reading device /dev/mapper/mpatha at 0 length 512.
  Error reading device /dev/mapper/mpatha at 0 length 4.
  Error reading device /dev/mapper/mpatha at 4096 length 4.
  Error reading device /dev/mapper/mpatha1 at 0 length 512.
  Error reading device /dev/mapper/mpatha1 at 0 length 4.
  Error reading device /dev/mapper/mpatha1 at 4096 length 4.
  Error reading device /dev/mapper/mpatha2 at 0 length 512.
  Error reading device /dev/mapper/mpatha2 at 0 length 4.
  Error reading device /dev/mapper/mpatha2 at 4096 length 4.
  WARNING: Device for PV 8Ayfsv-cKiD-UVvX-ctka-u4pz-Utlq-MIWQDc not found or rejected by a filter.
  Couldn't find device with uuid 8Ayfsv-cKiD-UVvX-ctka-u4pz-Utlq-MIWQDc.
  WARNING: Couldn't find all devices for LV raid_sanity/degraded_upconvert_rimage_0 while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV raid_sanity/degraded_upconvert_rmeta_0 while checking used and assumed devices.
  Error reading device /dev/mapper/mpatha at 0 length 512.
  Error reading device /dev/mapper/mpatha at 0 length 4.
  Error reading device /dev/mapper/mpatha at 4096 length 4.
  Error reading device /dev/mapper/mpatha1 at 0 length 512.
  Error reading device /dev/mapper/mpatha1 at 0 length 4.
  Error reading device /dev/mapper/mpatha1 at 4096 length 4.
  Error reading device /dev/mapper/mpatha2 at 0 length 512.
  Error reading device /dev/mapper/mpatha2 at 0 length 4.
  Error reading device /dev/mapper/mpatha2 at 4096 length 4.
  WARNING: Device for PV 8Ayfsv-cKiD-UVvX-ctka-u4pz-Utlq-MIWQDc not found or rejected by a filter.
  Couldn't find device with uuid 8Ayfsv-cKiD-UVvX-ctka-u4pz-Utlq-MIWQDc.
  Failed to find physical volume "/dev/mapper/mpatha1".
  WARNING: Device for PV 8Ayfsv-cKiD-UVvX-ctka-u4pz-Utlq-MIWQDc not found or rejected by a filter.
  Couldn't find device with uuid 8Ayfsv-cKiD-UVvX-ctka-u4pz-Utlq-MIWQDc.

VG reduce to removing failed device and put into degraded raid mode (vgreduce --removemissing -f raid_sanity)
  WARNING: Device for PV 8Ayfsv-cKiD-UVvX-ctka-u4pz-Utlq-MIWQDc not found or rejected by a filter.
  Couldn't find device with uuid 8Ayfsv-cKiD-UVvX-ctka-u4pz-Utlq-MIWQDc.
  device-mapper: create ioctl on raid_sanity-degraded_upconvert_rmeta_0 LVM-p40YvfvuIE9jV8X31k0HUa9HtDSXwEfOXydkacfYSE64RT3ZvVv8ND6FiGfctXne failed: Device or resource busy
  Failed to lock logical volume raid_sanity/degraded_upconvert.
unable to write out consistent VG VG



[root@harding-03 ~]# lvs -a -o +devices
  WARNING: Device for PV 8Ayfsv-cKiD-UVvX-ctka-u4pz-Utlq-MIWQDc not found or rejected by a filter.
  Couldn't find device with uuid 8Ayfsv-cKiD-UVvX-ctka-u4pz-Utlq-MIWQDc.
  WARNING: Reading VG raid_sanity from disk because lvmetad metadata is invalid.
Segmentation fault
Aug 14 10:23:59 harding-03 kernel: lvs[13886]: segfault at 10 ip 0000560ad4c3f489 sp 00007ffe90655400 error 4 in lvm[560ad4b89000+1dc000]



3.10.0-1057.el7.x86_64

lvm2-2.02.185-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-libs-2.02.185-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-cluster-2.02.185-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-lockd-2.02.185-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-1.02.158-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-libs-1.02.158-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-event-1.02.158-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-event-libs-1.02.158-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-persistent-data-0.8.5-1.el7    BUILT: Mon Jun 10 03:58:20 CDT 2019

Comment 4 Corey Marthaler 2019-10-15 17:57:53 UTC
Reproduced in the latest 7.8 build as well.

3.10.0-1101.el7.x86_64
lvm2-2.02.186-2.el7    BUILT: Tue Sep 24 06:20:17 CDT 2019
lvm2-libs-2.02.186-2.el7    BUILT: Tue Sep 24 06:20:17 CDT 2019
lvm2-cluster-2.02.186-2.el7    BUILT: Tue Sep 24 06:20:17 CDT 2019
lvm2-lockd-2.02.186-2.el7    BUILT: Tue Sep 24 06:20:17 CDT 2019
lvm2-python-boom-0.9-20.el7    BUILT: Tue Sep 24 06:18:20 CDT 2019
cmirror-2.02.186-2.el7    BUILT: Tue Sep 24 06:20:17 CDT 2019
device-mapper-1.02.164-2.el7    BUILT: Tue Sep 24 06:20:17 CDT 2019
device-mapper-libs-1.02.164-2.el7    BUILT: Tue Sep 24 06:20:17 CDT 2019
device-mapper-event-1.02.164-2.el7    BUILT: Tue Sep 24 06:20:17 CDT 2019
device-mapper-event-libs-1.02.164-2.el7    BUILT: Tue Sep 24 06:20:17 CDT 2019
device-mapper-persistent-data-0.8.5-1.el7    BUILT: Mon Jun 10 03:58:20 CDT 2019



Core was generated by `pvscan'.
Program terminated with signal 11, Segmentation fault.
#0  _drop_bad_aliases (dev=0x0) at label/label.c:571
571             int major = (int)MAJOR(dev->dev);
(gdb) 

(gdb) bt full
#0  _drop_bad_aliases (dev=0x0) at label/label.c:571
        strl2 = <optimized out>
        sbuf = {st_dev = 94873635118832, st_ino = 94873635118832, st_nlink = 140732684536720, st_mode = 2102517296, st_uid = 22089, st_gid = 2101731360, __pad0 = 22089, st_rdev = 94873610809701, 
          st_size = 206158430248, st_blksize = 140732684536480, st_blocks = 140732684536288, st_atim = {tv_sec = -9200783834314888960, tv_nsec = 94873635118640}, st_mtim = {tv_sec = 1, 
            tv_nsec = 94873635118896}, st_ctim = {tv_sec = 94873610947665, tv_nsec = 94873635118896}, __unused = {94873635119040, 94873635135896, 2102012712}}
        major = <optimized out>
        minor = <optimized out>
        bad = <optimized out>
        strl = <optimized out>
        name = <optimized out>
#1  _scan_list (f=f@entry=0x56497d49a0b0, devs=devs@entry=0x7ffee1ab8b90, failed=failed@entry=0x0, cmd=0x56497d45e020) at label/label.c:737
        wait_devs = {n = 0x7ffee1ab8960, p = 0x7ffee1ab8960}
        done_devs = {n = 0x56497d4a30b0, p = 0x56497d4a30b0}
        reopen_devs = {n = 0x56497d4a2fd0, p = 0x56497d4a3090}
        devl = 0x56497d4a2fd0
        devl2 = 0x56497d4a2ff0
        bb = 0x56497d4f8d20
        retried_open = 1
        scan_read_errors = 0
        scan_process_errors = 0
        scan_failed_count = 0
        rem_prefetches = <optimized out>
        submit_count = <optimized out>
        scan_failed = <optimized out>
        is_lvm_device = 1
        error = <optimized out>
        ret = <optimized out>
#2  0x000056497bdeb3e9 in label_scan_devs (cmd=cmd@entry=0x56497d45e020, f=0x56497d49a0b0, devs=devs@entry=0x7ffee1ab8b90) at label/label.c:1071
        devl = 0x7ffee1ab8b90
#3  0x000056497be67c84 in _lvmetad_pvscan_vg (fmt=0x56497d485240, vgid=0x56497d4a2b28 "HgQYNQK0EfB7Ou1I2OSzo0EbK4Y4MIaI", vg=0x56497d51de30, cmd=0x56497d45e020) at cache/lvmetad.c:1896
        save_seqno = 0
        info = 0x0
        save_vg = 0x0
        found_new_pvs = 0
        retried_reads = 0
        found = <optimized out>
        pvl = 0x56497d51def0
        pvl_new = <optimized out>
        devlsafe = <optimized out>
        fic = {type = 0, context = {pv_id = 0x0, vg_ref = {vg_name = 0x0, vg_id = 0x0}, private = 0x0}}
        fid = <optimized out>
        baton = {cmd = 0x770000006e, vg = 0x0, fid = 0x7ffee1ab8c2f}
        pvid_s = "\000\216\253\341\376\177\000\000\000\000\004\000\000\000\000\000 \340E}IV\000\000e\361\336{IV\000\000("
        uuid = "\n\000\000\000\000\000\000\000\372z|\354 \177\000\000\020\215\253\341\376\177\000\000`%P}IV\000\000\060+n}IV\000\000\205\034_\355 \177\000\000\000\000\000\000\000\000\000\000\240$P}IV\000"
        vgmeta = <optimized out>
        devl = <optimized out>
        pvs_scan = {n = 0x7ffee1ab8b90, p = 0x7ffee1ab8b90}
        pvs_drop = {n = 0x7ffee1ab8ba0, p = 0x7ffee1ab8ba0}
        vginfo = 0x0
        save_meta = 0x0
        save_dev = 0x0
#4  lvmetad_vg_lookup (cmd=cmd@entry=0x56497d45e020, vgname=vgname@entry=0x56497d4a2b50 "centipede", vgid=vgid@entry=0x56497d4a2b28 "HgQYNQK0EfB7Ou1I2OSzo0EbK4Y4MIaI") at cache/lvmetad.c:1109
        vg = 0x56497d51de30
        vg2 = 0x0
        reply = {error = 0, buffer = {allocated = 4224, used = 3263, 
            mem = 0x56497d4fda10 "response=\"OK\"\nname=\"centipede\"\nmetadata {\n\tid=\"HgQYNQ-K0Ef-B7Ou-1I2O-Szo0-EbK4-Y4MIaI\"\n\tseqno=7\n\tformat=\"lvm2\"\n\tstatus=[\"RESIZEABLE\",\"READ\"]\n\tflags=\"WRITE_LOCKED\"\n\tlock_type=\"dlm\"\n\tlock_args=\"1.0.0:HA"...}, cft = 0x56497d521e40}
        found = 1
        uuid = "HgQYNQ-K0Ef-B7Ou-1I2O-Szo0-EbK4-Y4MIaI\000\000\374\006|\354 \177\000\000\200\223\253\341\376\177\000\000P+J}IV\000"
        fid = 0x56497d6e26e0
        fic = {type = 6, context = {pv_id = 0x56497d521fb8 "centipede", vg_ref = {vg_name = 0x56497d521fb8 "centipede", vg_id = 0x56497d4a2b28 "HgQYNQK0EfB7Ou1I2OSzo0EbK4Y4MIaI"}, 
            private = 0x56497d521fb8}}
        top = <optimized out>
        name = <optimized out>
        diag_name = <optimized out>
        fmt_name = <optimized out>
        fmt = 0x56497d485240
        pvcn = <optimized out>
        pvl = <optimized out>
        rescan = 1
#5  0x000056497be1aa58 in _vg_read (cmd=cmd@entry=0x56497d45e020, vgname=vgname@entry=0x56497d4a2b50 "centipede", vgid=vgid@entry=0x56497d4a2b28 "HgQYNQK0EfB7Ou1I2OSzo0EbK4Y4MIaI", 
    write_lock_held=write_lock_held@entry=0, lockd_state=lockd_state@entry=4, warn_flags=warn_flags@entry=1, consistent=consistent@entry=0x7ffee1ab8fdc, precommitted=precommitted@entry=0)
    at metadata/metadata.c:3825
        fid = 0x0
        fic = {type = 0, context = {pv_id = 0x7ffee1ab8e50 "centipede", vg_ref = {vg_name = 0x7ffee1ab8e50 "centipede", vg_id = 0x21 <Address 0x21 out of bounds>}, private = 0x7ffee1ab8e50}}
        fmt = <optimized out>
        vg = <optimized out>
        correct_vg = 0x0
        mda = <optimized out>
        info = <optimized out>
        inconsistent = 0
        inconsistent_vgid = 0
        inconsistent_pvs = 0
        inconsistent_mdas = 0
        inconsistent_mda_count = 0
        strip_historical_lvs = 0
        update_old_pv_ext = 0
        use_precommitted = 0
        pvids = <optimized out>
        pvl = <optimized out>
        all_pvs = {n = 0x7ffee1ab8e50, p = 0x21}
        uuid = "HgQYNQ-K0Ef-B7Ou-1I2O-Szo0-EbK4-Y4MIaI\000\000\000\000\000\000\000\000\000\000P+J}IV\000\000\200\323\336{IV\000"
        skipped_rescan = 0
        reappeared = 0
        vg_fmtdata = 0x0
        use_previous_vg = 0
#6  0x000056497be1b70c in vg_read_internal (cmd=cmd@entry=0x56497d45e020, vgname=vgname@entry=0x56497d4a2b50 "centipede", vgid=vgid@entry=0x56497d4a2b28 "HgQYNQK0EfB7Ou1I2OSzo0EbK4Y4MIaI", 
    write_lock_held=write_lock_held@entry=0, lockd_state=lockd_state@entry=4, warn_flags=warn_flags@entry=1, consistent=consistent@entry=0x7ffee1ab8fdc) at metadata/metadata.c:4561
        vg = <optimized out>
        lvl = <optimized out>
#7  0x000056497be1c470 in _vg_lock_and_read (lockd_state=4, read_flags=262144, status_flags=0, lock_flags=33, vgid=0x56497d4a2b28 "HgQYNQK0EfB7Ou1I2OSzo0EbK4Y4MIaI", vg_name=0x56497d4a2b50 "centipede", 
    cmd=0x56497d45e020) at metadata/metadata.c:5565
        consistent = 0
        consistent_in = <optimized out>
        is_shared = 0
        write_lock_held = 0
        vg = 0x0
        failure = 0
        already_locked = 0
        warn_flags = 1
#8  vg_read (cmd=cmd@entry=0x56497d45e020, vg_name=vg_name@entry=0x56497d4a2b50 "centipede", vgid=vgid@entry=0x56497d4a2b28 "HgQYNQK0EfB7Ou1I2OSzo0EbK4Y4MIaI", read_flags=read_flags@entry=262144, 
    lockd_state=4) at metadata/metadata.c:5673
        status_flags = 0
        lock_flags = 33
#9  0x000056497bd9cc27 in _process_pvs_in_vgs (cmd=cmd@entry=0x56497d45e020, read_flags=read_flags@entry=262144, all_vgnameids=all_vgnameids@entry=0x7ffee1ab9370, 
    all_devices=all_devices@entry=0x7ffee1ab9380, arg_devices=arg_devices@entry=0x7ffee1ab9350, arg_tags=arg_tags@entry=0x7ffee1ab9330, process_all_pvs=process_all_pvs@entry=1, 
    handle=handle@entry=0x56497d4a2ad8, process_single_pv=process_single_pv@entry=0x56497bd90a70 <_pvscan_single>, process_all_devices=0) at toollib.c:4441
        saved_log_report_state = {report = 0x0, context = LOG_REPORT_CONTEXT_PROCESSING, object_type = LOG_REPORT_OBJECT_TYPE_PV, object_name = 0x0, object_id = 0x0, object_group = 0x0, 
          object_group_id = 0x0}
        uuid = "HgQYNQ-K0Ef-B7Ou-1I2O-Szo0-EbK4-Y4MIaI\000\000\001\000\000\000\000\000\000\000\001\000\000\000\000\000\000\000\000)\201\274\331?P\200"
        vg = <optimized out>
        vgnl = 0x56497d4a2b00
        vg_name = 0x56497d4a2b50 "centipede"
        vg_uuid = 0x56497d4a2b28 "HgQYNQK0EfB7Ou1I2OSzo0EbK4Y4MIaI"
        lockd_state = 4
        ret_max = 1
        skip = 0
        notfound = 0
        already_locked = 0
#10 0x000056497bda1895 in process_each_pv (cmd=cmd@entry=0x56497d45e020, argc=argc@entry=0, argv=argv@entry=0x7ffee1ab9c80, only_this_vgname=only_this_vgname@entry=0x0, all_is_set=all_is_set@entry=0, 
    read_flags=262144, read_flags@entry=0, handle=handle@entry=0x56497d4a2ad8, process_single_pv=process_single_pv@entry=0x56497bd90a70 <_pvscan_single>) at toollib.c:4602
        saved_log_report_state = {report = 0x0, context = LOG_REPORT_CONTEXT_PROCESSING, object_type = LOG_REPORT_OBJECT_TYPE_NULL, object_name = 0x0, object_id = 0x0, object_group = 0x0, 
          object_group_id = 0x0}
        arg_tags = {n = 0x7ffee1ab9330, p = 0x7ffee1ab9330}
        arg_pvnames = {n = 0x7ffee1ab9340, p = 0x7ffee1ab9340}
        arg_devices = {n = 0x7ffee1ab9350, p = 0x7ffee1ab9350}
        arg_missed = {n = 0x7ffee1ab9360, p = 0x7ffee1ab9360}
        all_vgnameids = {n = 0x56497d4a2b00, p = 0x56497d4a2c38}
        all_devices = {n = 0x56497d4a2c70, p = 0x56497d4a2f88}
        dil = <optimized out>
        process_all_pvs = <optimized out>
        process_all_devices = <optimized out>
        orphans_locked = 0
        ret_max = <optimized out>
        ret = <optimized out>
#11 0x000056497bd922d2 in pvscan (cmd=0x56497d45e020, argc=<optimized out>, argv=0x7ffee1ab9c80) at pvscan.c:835
        params = {new_pvs_found = 0, pvs_found = 0, size_total = 0, size_new = 0, pv_max_name_len = 0, vg_max_name_len = 0, pv_tmp_namelen = 0, pv_tmp_name = 0x0}
        handle = 0x56497d4a2ad8
        reason = 0x0
        ret = <optimized out>
#12 0x000056497bd87ef5 in lvm_run_command (cmd=0x56497d45e020, argc=0, argv=0x7ffee1ab9c80) at lvmcmdline.c:3018
        config_string_cft = <optimized out>
        config_profile_command_cft = <optimized out>
        config_profile_metadata_cft = <optimized out>
        reason = 0x0
        ret = <optimized out>
        locking_type = <optimized out>
        monitoring = 1
        arg_new = <optimized out>
        arg = <optimized out>
        i = <optimized out>
        skip_hyphens = <optimized out>
        refresh_done = <optimized out>
#13 0x000056497bd890ee in lvm2_main (argc=1, argv=0x7ffee1ab9c78) at lvmcmdline.c:3595
        base = <optimized out>
        ret = <optimized out>
        alias = <optimized out>
        custom_fds = {out = -1, err = -1, report = -1}
        cmd = 0x56497d45e020
        run_shell = 0
        run_script = 0
        run_name = <optimized out>
        run_command_name = 0x7ffee1abb5ff "pvscan"
#14 0x00007f20ec75d555 in __libc_start_main (main=0x56497bd656f0 <main>, argc=1, argv=0x7ffee1ab9c78, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffee1ab9c68)
    at ../csu/libc-start.c:266
        result = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0, -6975843667435942521, 94873610245877, 140732684541040, 0, 0, 6975249487573978503, 7021406279004884359}, mask_was_saved = 0}}, priv = {pad = {0x0, 
              0x0, 0x7f20eda189a3 <_dl_init+275>, 0x7f20edc2c150}, data = {prev = 0x0, cleanup = 0x0, canceltype = -308180573}}}
        not_first_call = <optimized out>
#15 0x000056497bd6571e in _start ()
No symbol table info available.

Comment 5 David Teigland 2019-10-16 18:36:12 UTC
Pushed a fix to stable (I was not able to reproduce this, but the gdb backtrace makes it obvious that there's a null dev pointer that we should just skip).

https://sourceware.org/git/?p=lvm2.git;a=commitdiff;h=f50af80199f723f7b1970ee33ddf959ea79fcbef
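
As an aside, here is a minimal stand-alone C sketch of the crash pattern and the kind of guard the backtrace implies. The struct layout and the names drop_bad_aliases()/scan_one() are simplified stand-ins invented for this example, not lvm2's real definitions and not the verbatim patch in the commit above; the point is only that dereferencing a NULL device pointer at a small member offset is consistent with the "segfault at 10" kernel messages, and that skipping NULL entries avoids the crash.

#include <stdio.h>
#include <sys/sysmacros.h>   /* major(), minor(), makedev() */
#include <sys/types.h>

/* Simplified stand-in for lvm2's struct device: the 16-byte padding only
 * exists so that 'dev' lands near offset 0x10, consistent with the
 * "segfault at 10" lines in the kernel logs when the pointer is NULL. */
struct device {
        char padding[16];
        dev_t dev;
};

/* Analogue of _drop_bad_aliases(): dereferences dev unconditionally,
 * like the line shown at label/label.c:561 / :571 in the backtraces. */
static void drop_bad_aliases(struct device *dev)
{
        int maj = (int)major(dev->dev);
        int min = (int)minor(dev->dev);
        printf("checking aliases for %d:%d\n", maj, min);
}

/* Analogue of the guarded call in _scan_list(): a device that vanished
 * while storage was reconfigured underneath lvm shows up as a NULL
 * entry, so skip it instead of dereferencing it. */
static void scan_one(struct device *dev)
{
        if (!dev) {
                fprintf(stderr, "device missing, skipping\n");
                return;
        }
        drop_bad_aliases(dev);
}

int main(void)
{
        struct device d = { .dev = makedev(253, 3) };

        scan_one(&d);     /* normal case */
        scan_one(NULL);   /* previously segfaulted; now skipped */
        return 0;
}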

Comment 10 Corey Marthaler 2020-04-29 02:05:20 UTC
Marking this verified in the latest rpms.
Although we weren't able to consistently reproduce this, we have not seen it in any of our 7.9 regression testing to date.

3.10.0-1136.el7.x86_64

lvm2-2.02.187-2.el7    BUILT: Thu Apr 16 11:56:15 CDT 2020
lvm2-libs-2.02.187-2.el7    BUILT: Thu Apr 16 11:56:15 CDT 2020
lvm2-cluster-2.02.187-2.el7    BUILT: Thu Apr 16 11:56:15 CDT 2020
lvm2-lockd-2.02.187-2.el7    BUILT: Thu Apr 16 11:56:15 CDT 2020
lvm2-python-boom-0.9-27.el7    BUILT: Thu Apr 16 12:10:50 CDT 2020
cmirror-2.02.187-2.el7    BUILT: Thu Apr 16 11:56:15 CDT 2020
device-mapper-1.02.170-2.el7    BUILT: Thu Apr 16 11:56:15 CDT 2020
device-mapper-libs-1.02.170-2.el7    BUILT: Thu Apr 16 11:56:15 CDT 2020
device-mapper-event-1.02.170-2.el7    BUILT: Thu Apr 16 11:56:15 CDT 2020
device-mapper-event-libs-1.02.170-2.el7    BUILT: Thu Apr 16 11:56:15 CDT 2020
device-mapper-persistent-data-0.8.5-3.el7    BUILT: Mon Apr 20 09:49:16 CDT 2020

Comment 12 errata-xmlrpc 2020-09-29 19:55:48 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:3927

