Bug 1379426 - pvscan segfault when a zeroed-out or invalid PV label is found
Summary: pvscan segfault when a zeroed-out or invalid PV label is found
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: rc
Assignee: David Teigland
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-09-26 17:07 UTC by Corey Marthaler
Modified: 2021-09-03 12:40 UTC
CC List: 9 users

Fixed In Version: lvm2-2.02.169-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-01 21:47:18 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:2222 0 normal SHIPPED_LIVE lvm2 bug fix and enhancement update 2017-08-01 18:42:41 UTC

Description Corey Marthaler 2016-09-26 17:07:01 UTC
Description of problem:
I saw this right after a fresh install, before the proper debug packages for a trace were in place, but I wanted to file this in hopes that I'll be able to reproduce it again...


I was seeing errors that I couldn't get rid of with pvcreate/pvremove, so I decided to just blank the entire device. While doing that, I ran a pvscan on another node in the cluster and it segfaulted. Blanking out the device did allow me to recreate the problematic device and move on with testing...


[root@harding-03 ~]# pvscan
  PV /dev/sda2             VG rhel_harding-03   lvm2 [92.16 GiB / 0    free]
  PV /dev/sdb1             VG rhel_harding-03   lvm2 [93.16 GiB / 0    free]
  PV /dev/sdc1             VG rhel_harding-03   lvm2 [93.16 GiB / 0    free]
  WARNING: Device for PV JJU1JF-43jm-YdlN-nLbD-OD4o-4d9j-l9rTnH not found or rejected by a filter.
  WARNING: Device for PV l0C5YY-OaVB-ZzcQ-rURm-ulT4-pGEI-0QBnlA not found or rejected by a filter.
  WARNING: Device for PV aadF1W-x1M5-Sj5J-gBAJ-5bz2-Q3ys-EOKnqE not found or rejected by a filter.
  WARNING: Device for PV lY1iny-SBM0-gFVd-dxSw-20jp-WUyX-kGzvjN not found or rejected by a filter.
  WARNING: Device for PV 3EOJ4P-aWT8-0igy-7yyM-rWwN-bfJh-I0PET9 not found or rejected by a filter.
  WARNING: Device for PV gKihvZ-tO7d-E2JS-sdT4-FxVK-NrEM-a0pCeq not found or rejected by a filter.
  PV /dev/mapper/mpathf1                        lvm2 [250.00 GiB]
  PV /dev/mapper/mpathg1                        lvm2 [250.00 GiB]
  PV /dev/mapper/mpathc1                        lvm2 [250.00 GiB]
  PV /dev/mapper/mpatha1                        lvm2 [250.00 GiB]
  PV /dev/mapper/mpathd1                        lvm2 [250.00 GiB]
  PV /dev/mapper/mpathh1                        lvm2 [250.00 GiB]
  PV /dev/mapper/mpathe1                        lvm2 [250.00 GiB]
  Total: 10 [1.98 TiB] / in use: 3 [278.47 GiB] / in no VG: 7 [1.71 TiB]



[root@harding-02 ~]# dd if=/dev/zero of=/dev/mapper/mpathb1
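(Aside: the dd invocation above zeroes the entire multipath device, which can take a long time. To invalidate just the PV label, zeroing the first MiB, where the LVM label and metadata area live, is enough. A minimal sketch on a scratch file rather than a real block device, with `/tmp/fake_pv.img` as a hypothetical stand-in:)

```shell
# Create a 100 MiB scratch file standing in for the PV.
truncate -s 100M /tmp/fake_pv.img

# Zero only the first MiB (label + metadata area); conv=notrunc
# keeps the file at its original size, as a real device would be.
dd if=/dev/zero of=/tmp/fake_pv.img bs=1M count=1 conv=notrunc 2>/dev/null

ls -l /tmp/fake_pv.img
```

On a real PV, `wipefs -a <device>` is the more targeted way to remove LVM signatures, but whole-device dd is what the reporter used here.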


[root@harding-03 ~]# pvscan
  PV /dev/sda2             VG rhel_harding-03   lvm2 [92.16 GiB / 0    free]
  PV /dev/sdb1             VG rhel_harding-03   lvm2 [93.16 GiB / 0    free]
  PV /dev/sdc1             VG rhel_harding-03   lvm2 [93.16 GiB / 0    free]
  WARNING: Device for PV JJU1JF-43jm-YdlN-nLbD-OD4o-4d9j-l9rTnH not found or rejected by a filter.
  WARNING: Device for PV l0C5YY-OaVB-ZzcQ-rURm-ulT4-pGEI-0QBnlA not found or rejected by a filter.
  WARNING: Device for PV aadF1W-x1M5-Sj5J-gBAJ-5bz2-Q3ys-EOKnqE not found or rejected by a filter.
  WARNING: Device for PV lY1iny-SBM0-gFVd-dxSw-20jp-WUyX-kGzvjN not found or rejected by a filter.
  WARNING: Device for PV 3EOJ4P-aWT8-0igy-7yyM-rWwN-bfJh-I0PET9 not found or rejected by a filter.
  WARNING: Device for PV gKihvZ-tO7d-E2JS-sdT4-FxVK-NrEM-a0pCeq not found or rejected by a filter.
Segmentation fault



[  180.179214] pvscan[3261]: segfault at 34 ip 00007f8bf78130af sp 00007fff3db6f690 error 4 in lvm[7f8bf76ff000+19c000]





Version-Release number of selected component (if applicable):
3.10.0-510.el7.x86_64

lvm2-2.02.165-4.el7    BUILT: Thu Sep 22 01:47:19 CDT 2016
lvm2-libs-2.02.165-4.el7    BUILT: Thu Sep 22 01:47:19 CDT 2016
lvm2-cluster-2.02.165-4.el7    BUILT: Thu Sep 22 01:47:19 CDT 2016
device-mapper-1.02.134-4.el7    BUILT: Thu Sep 22 01:47:19 CDT 2016
device-mapper-libs-1.02.134-4.el7    BUILT: Thu Sep 22 01:47:19 CDT 2016
device-mapper-event-1.02.134-4.el7    BUILT: Thu Sep 22 01:47:19 CDT 2016
device-mapper-event-libs-1.02.134-4.el7    BUILT: Thu Sep 22 01:47:19 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7    BUILT: Fri Jul 22 05:29:13 CDT 2016

Comment 2 Corey Marthaler 2016-09-28 17:16:30 UTC
I believe this is the same issue.

Core was generated by `pvscan'.
Program terminated with signal 11, Segmentation fault.
#0  lvmetad_pvscan_vg (vg=0x7f85b4183a70, cmd=0x7f85b40c2020) at cache/lvmetad.c:2064
2064            log_debug_lvmetad("Rescan VG %s done (seqno %u).", vg_ret->name, vg_ret->seqno);
Missing separate debuginfos, use: debuginfo-install bzip2-libs-1.0.6-13.el7.x86_64 elfutils-libelf-0.166-2.el7.x86_64 elfutils-libs-0.166-2.el7.x86_64 glibc-2.17-157.el7.x86_64 libattr-2.4.46-12.el7.x86_64 libblkid-2.23.2-33.el7.x86_64 libcap-2.22-8.el7.x86_64 libgcc-4.8.5-11.el7.x86_64 libselinux-2.5-6.el7.x86_64 libsepol-2.5-6.el7.x86_64 libuuid-2.23.2-33.el7.x86_64 ncurses-libs-5.9-13.20130511.el7.x86_64 pcre-8.32-15.el7_2.1.x86_64 readline-6.2-9.el7.x86_64 systemd-libs-219-30.el7.x86_64 xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-17.el7.x86_64
(gdb) bt
#0  lvmetad_pvscan_vg (vg=0x7f85b4183a70, cmd=0x7f85b40c2020) at cache/lvmetad.c:2064
#1  lvmetad_vg_lookup (cmd=cmd@entry=0x7f85b40c2020, vgname=vgname@entry=0x7f85b410d248 "global", vgid=vgid@entry=0x7f85b410d220 "Aee8i57ZItcraiSuOfkbezHiexEC3pk0") at cache/lvmetad.c:1092
#2  0x00007f85b317da6d in lvmcache_get_vg (cmd=cmd@entry=0x7f85b40c2020, vgname=vgname@entry=0x7f85b410d248 "global", vgid=vgid@entry=0x7f85b410d220 "Aee8i57ZItcraiSuOfkbezHiexEC3pk0", precommitted=precommitted@entry=0) at cache/lvmcache.c:1238
#3  0x00007f85b31d582a in _vg_read (cmd=cmd@entry=0x7f85b40c2020, vgname=vgname@entry=0x7f85b410d248 "global", vgid=vgid@entry=0x7f85b410d220 "Aee8i57ZItcraiSuOfkbezHiexEC3pk0", warn_flags=warn_flags@entry=1, 
    consistent=consistent@entry=0x7ffeff6b4560, precommitted=precommitted@entry=0) at metadata/metadata.c:4167
#4  0x00007f85b31d6c1a in vg_read_internal (cmd=cmd@entry=0x7f85b40c2020, vgname=vgname@entry=0x7f85b410d248 "global", vgid=vgid@entry=0x7f85b410d220 "Aee8i57ZItcraiSuOfkbezHiexEC3pk0", warn_flags=warn_flags@entry=1, 
    consistent=consistent@entry=0x7ffeff6b4560) at metadata/metadata.c:4792
#5  0x00007f85b31d886c in _vg_lock_and_read (lockd_state=4, read_flags=262144, status_flags=0, lock_flags=33, vgid=0x7f85b410d220 "Aee8i57ZItcraiSuOfkbezHiexEC3pk0", vg_name=0x7f85b410d248 "global", cmd=0x7f85b40c2020) at metadata/metadata.c:5815
#6  vg_read (cmd=cmd@entry=0x7f85b40c2020, vg_name=vg_name@entry=0x7f85b410d248 "global", vgid=vgid@entry=0x7f85b410d220 "Aee8i57ZItcraiSuOfkbezHiexEC3pk0", read_flags=read_flags@entry=262144, lockd_state=4) at metadata/metadata.c:5918
#7  0x00007f85b3163e47 in _process_pvs_in_vgs (cmd=cmd@entry=0x7f85b40c2020, read_flags=read_flags@entry=262144, all_vgnameids=all_vgnameids@entry=0x7ffeff6b48f0, all_devices=all_devices@entry=0x7ffeff6b4900, 
    arg_devices=arg_devices@entry=0x7ffeff6b48d0, arg_tags=arg_tags@entry=0x7ffeff6b48b0, process_all_pvs=process_all_pvs@entry=1, handle=handle@entry=0x7f85b410bd48, process_single_pv=process_single_pv@entry=0x7f85b3159240 <_pvscan_single>, 
    process_all_devices=0) at toollib.c:3487
#8  0x00007f85b316788c in process_each_pv (cmd=cmd@entry=0x7f85b40c2020, argc=argc@entry=0, argv=argv@entry=0x7ffeff6b4e30, only_this_vgname=only_this_vgname@entry=0x0, all_is_set=all_is_set@entry=0, read_flags=262144, read_flags@entry=0, 
    handle=handle@entry=0x7f85b410bd48, process_single_pv=process_single_pv@entry=0x7f85b3159240 <_pvscan_single>) at toollib.c:3644
#9  0x00007f85b315a6ca in pvscan (cmd=0x7f85b40c2020, argc=<optimized out>, argv=0x7ffeff6b4e30) at pvscan.c:644
#10 0x00007f85b314ffb8 in lvm_run_command (cmd=cmd@entry=0x7f85b40c2020, argc=0, argc@entry=1, argv=0x7ffeff6b4e30, argv@entry=0x7ffeff6b4e28) at lvmcmdline.c:1723
#11 0x00007f85b3150db6 in lvm2_main (argc=1, argv=0x7ffeff6b4e28) at lvmcmdline.c:2249
#12 0x00007f85b1e4bb35 in __libc_start_main () from /lib64/libc.so.6
#13 0x00007f85b313352e in _start ()
(gdb)

Comment 3 Zdenek Kabelac 2016-11-04 15:25:13 UTC
I believe it's been fixed by this upstream commit:

https://www.redhat.com/archives/lvm-devel/2016-September/msg00103.html

in version 2.02.167

Comment 7 Corey Marthaler 2017-05-18 19:53:34 UTC
Marking verified (SanityOnly) as this was never reliably reproducible.

I blanked out many PVs from one node in the cluster and scanned from others w/o any issues.


3.10.0-657.el7.x86_64
lvm2-2.02.171-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
lvm2-libs-2.02.171-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
lvm2-cluster-2.02.171-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
device-mapper-1.02.140-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
device-mapper-libs-1.02.140-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
device-mapper-event-1.02.140-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
device-mapper-event-libs-1.02.140-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
device-mapper-persistent-data-0.7.0-0.1.rc6.el7    BUILT: Mon Mar 27 10:15:46 CDT 2017
cmirror-2.02.171-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017

Comment 8 errata-xmlrpc 2017-08-01 21:47:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2222

