Bug 1758347 - pvcreate --test not yet supported with dlm (segfault _process_pvs_in_vgs)
Summary: pvcreate --test not yet supported with dlm (segfault _process_pvs_in_vgs)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: lvm2
Version: 8.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 8.0
Assignee: David Teigland
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-10-03 21:23 UTC by Corey Marthaler
Modified: 2021-09-07 11:55 UTC
CC List: 9 users

Fixed In Version: lvm2-2.03.07-1.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-04-28 16:58:59 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-38093 0 None None None 2021-09-07 11:55:26 UTC
Red Hat Product Errata RHEA-2020:1881 0 None None None 2020-04-28 16:59:15 UTC

Description Corey Marthaler 2019-10-03 21:23:23 UTC
Description of problem:
This was found while attempting to create vdo volumes on exclusively activated LVs on shared cluster volumes. 


[root@host-083 ~]# pcs status
Cluster name: CVIRTSB
Stack: corosync
Current DC: host-083 (version 2.0.2-3.el8-744a30d655) - partition with quorum
Last updated: Thu Oct  3 16:01:15 2019
Last change: Wed Oct  2 13:09:15 2019 by root via cibadmin on host-083

2 nodes configured
5 resources configured

Online: [ host-083 host-084 ]

Full list of resources:

 smoke-apc      (stonith:fence_apc):    Started host-083
 Clone Set: locking-clone [locking]
     Started: [ host-083 host-084 ]

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

[root@host-083 ~]# vgs
  VG            #PV #LV #SN Attr   VSize    VFree
  black_bird      7   1   0 wz--ns <209.92g 197.90g

[root@host-083 ~]# lvs -a -o +devices
  LV                                      VG            Attr       LSize   Cpy%Sync Devices
  synced_primary_raid1_3legs_1            black_bird    rwi-a-r---   3.00g 100.00   synced_primary_raid1_3legs_1_rimage_0(0),synced_primary_raid1_3legs_1_rimage_1(0),synced_primary_raid1_3legs_1_rimage_2(0),synced_primary_raid1_3legs_1_rimage_3(0)
  [synced_primary_raid1_3legs_1_rimage_0] black_bird    iwi-aor---   3.00g          /dev/sdb1(1)                         
  [synced_primary_raid1_3legs_1_rimage_1] black_bird    iwi-aor---   3.00g          /dev/sdf1(1)                         
  [synced_primary_raid1_3legs_1_rimage_2] black_bird    iwi-aor---   3.00g          /dev/sde1(1)                         
  [synced_primary_raid1_3legs_1_rimage_3] black_bird    iwi-aor---   3.00g          /dev/sdd1(1)                         
  [synced_primary_raid1_3legs_1_rmeta_0]  black_bird    ewi-aor---   4.00m          /dev/sdb1(0)                         
  [synced_primary_raid1_3legs_1_rmeta_1]  black_bird    ewi-aor---   4.00m          /dev/sdf1(0)                         
  [synced_primary_raid1_3legs_1_rmeta_2]  black_bird    ewi-aor---   4.00m          /dev/sde1(0)                         
  [synced_primary_raid1_3legs_1_rmeta_3]  black_bird    ewi-aor---   4.00m          /dev/sdd1(0)                         

[root@host-083 ~]# vdo create --name synced_primary_raid1_3legs_1_vdo --vdoSlabSize 128M --device /dev/black_bird/synced_primary_raid1_3legs_1
Creating VDO synced_primary_raid1_3legs_1_vdo
vdo: ERROR - Test mode is not yet supported with lock type dlm.


[  272.895091] pvcreate[3993]: segfault at 80 ip 000055c7ec53cbc2 sp 00007ffe101c77a0 error 4 in lvm[55c7ec4c9000+20f000]
[  272.899330] Code: 00 8b 8c 24 84 00 00 00 89 d8 8b bc 24 d0 00 00 00 39 d9 0f 4d c1 89 84 24 84 00 00 00 85 ff 0f 85 d3 f5 ff ff 48 8b 44 24 38 <48> 8b b8 80 00 00 00 e8 82 81 06 00 85 c0 0f 85 aa 03 00 00 48 8b
Oct  3 16:15:03 host-083 kernel: pvcreate[3993]: segfault at 80 ip 000055c7ec53cbc2 sp 00007ffe101c77a0 error 4 in lvm[55c7ec4c9000+20f000]
Oct  3 16:15:03 host-083 kernel: Code: 00 8b 8c 24 84 00 00 00 89 d8 8b bc 24 d0 00 00 00 39 d9 0f 4d c1 89 84 24 84 00 00 00 85 ff 0f 85 d3 f5 ff ff 48 8b 44 24 38 <48> 8b b8 80 00 00 00 e8 82 81 06 00 85 c0 0f 85 aa 03 00 00 48 8b
Oct  3 16:15:04 host-083 systemd[1]: Created slice system-systemd\x2dcoredump.slice.
Oct  3 16:15:04 host-083 systemd[1]: Started Process Core Dump (PID 3994/UID 0).
Oct  3 16:15:05 host-083 vdo[3991]: ERROR - Test mode is not yet supported with lock type dlm.
Oct  3 16:15:05 host-083 systemd-coredump[3995]: Process 3993 (pvcreate) of user 0 dumped core.#012#012Stack trace of thread 3993:#012#0  0x000055c7ec53cbc2 _process_pvs_in_vgs (lvm)#012#1  0x000055c7ec53db1f pvcreate_each_device (lvm)#012#2  0x000055c7ec527b73 pvcreate (lvm)#012#3  0x000055c7ec51f435 lvm_run_command (lvm)#012#4  0x000055c7ec520763 lvm2_main (lvm)#012#5  0x00007eff1c749873 __libc_start_main (libc.so.6)#012#6  0x000055c7ec4fcc9e _start (lvm)


[root@host-083 ~]# pvcreate --test  /dev/black_bird/synced_primary_raid1_3legs_1
  TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
  Test mode is not yet supported with lock type dlm.
Segmentation fault (core dumped)


Version-Release number of selected component (if applicable):
kernel-4.18.0-146.el8    BUILT: Tue Sep 24 15:45:19 CDT 2019

lvm2-2.03.05-5.el8    BUILT: Thu Sep 26 01:40:57 CDT 2019
lvm2-libs-2.03.05-5.el8    BUILT: Thu Sep 26 01:40:57 CDT 2019
lvm2-dbusd-2.03.05-5.el8    BUILT: Thu Sep 26 01:43:33 CDT 2019
lvm2-lockd-2.03.05-5.el8    BUILT: Thu Sep 26 01:40:57 CDT 2019

device-mapper-1.02.163-5.el8    BUILT: Thu Sep 26 01:40:57 CDT 2019
device-mapper-libs-1.02.163-5.el8    BUILT: Thu Sep 26 01:40:57 CDT 2019
device-mapper-event-1.02.163-5.el8    BUILT: Thu Sep 26 01:40:57 CDT 2019
device-mapper-event-libs-1.02.163-5.el8    BUILT: Thu Sep 26 01:40:57 CDT 2019
device-mapper-persistent-data-0.8.5-2.el8    BUILT: Wed Jun  5 10:28:04 CDT 2019

vdo-6.2.1.134-11.el8    BUILT: Fri Aug  2 10:39:03 CDT 2019
kmod-kvdo-6.2.1.138-57.el8    BUILT: Fri Sep 13 11:00:16 CDT 2019


How reproducible:
Every time

Comment 1 David Teigland 2019-10-04 15:11:41 UTC
Missing a flag; pushed a fix to master:
https://sourceware.org/git/?p=lvm2.git;a=commitdiff;h=a68258339da7e56910a2a3f6f98e43424ac219b6
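
The commit itself is not quoted here, but a plausible reading of "missing a flag" together with the _process_pvs_in_vgs frame in the core dump above is that the lockd test-mode refusal was reported without marking the VG as skipped, so the PV-processing loop went on to dereference a structure that was never filled in. The following C sketch uses purely hypothetical names (lock_and_read_vg, vg_entry, skip -- none of them are lvm2's) to illustrate that failure pattern and a flag-based fix of the kind the commit message suggests:

/* Illustrative sketch only -- hypothetical names, not lvm2 source.
 * It shows how a missing flag on an error path can turn a clean
 * refusal into a NULL-pointer crash in the later PV-processing loop. */
#include <stdio.h>
#include <string.h>

struct volume_group {
    const char *name;
    const char *pv_names[8];
    int pv_count;
};

struct vg_entry {
    const char *vg_name;
    struct volume_group *vg;   /* stays NULL if the VG was never read */
    int skip;                  /* the flag an error path has to set */
};

/* Stand-in for the shared-lock / test-mode check that prints
 * "Test mode is not yet supported with lock type dlm." */
static int lock_and_read_vg(struct vg_entry *e, int test_mode, const char *lock_type)
{
    if (test_mode && strcmp(lock_type, "none") != 0) {
        fprintf(stderr, "Test mode is not yet supported with lock type %s.\n", lock_type);
        e->skip = 1;           /* the kind of flag the fix adds: mark the VG as skipped */
        return 0;
    }
    /* real code would take the lock, read the metadata, and fill e->vg here */
    return 1;
}

/* Stand-in for _process_pvs_in_vgs(): walk the PVs of each readable VG. */
static void process_pvs_in_vgs(struct vg_entry *entries, int n)
{
    for (int i = 0; i < n; i++) {
        if (entries[i].skip)   /* without this check, entries[i].vg is NULL below */
            continue;
        struct volume_group *vg = entries[i].vg;
        for (int p = 0; p < vg->pv_count; p++)
            printf("%s: %s\n", vg->name, vg->pv_names[p]);
    }
}

int main(void)
{
    struct vg_entry e = { .vg_name = "black_bird", .vg = NULL, .skip = 0 };

    lock_and_read_vg(&e, /* test_mode = */ 1, "dlm");
    process_pvs_in_vgs(&e, 1);   /* crashes here if skip is never set or never checked */
    return 0;
}

Compiled and run as-is, the sketch prints the refusal and exits cleanly; dropping either the e->skip = 1 assignment or the skip check reproduces a NULL dereference analogous to the crash signature reported in this bug.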

Comment 4 Marian Csontos 2020-03-04 11:00:36 UTC
This bug is already fixed in the package included in 8.2.0. Let's get this verified.

Comment 7 Corey Marthaler 2020-03-04 16:27:37 UTC
Fix verified in the latest 8.2 rpms.

kernel-4.18.0-184.el8    BUILT: Tue Feb 25 21:37:02 CST 2020

lvm2-2.03.08-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
lvm2-libs-2.03.08-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
lvm2-lockd-2.03.08-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
device-mapper-1.02.169-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
device-mapper-libs-1.02.169-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
device-mapper-event-1.02.169-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
device-mapper-event-libs-1.02.169-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020

sanlock-lib-3.8.0-2.el8    BUILT: Wed Jun 12 15:50:27 CDT 2019
vdo-6.2.2.117-13.el8    BUILT: Tue Feb 11 10:04:28 CST 2020
kmod-kvdo-6.2.2.117-63.el8    BUILT: Tue Feb 11 10:04:51 CST 2020


[root@host-083 ~]# vgs
  VG            #PV #LV #SN Attr   VSize    VFree   
  black_bird      7   0   0 wz--ns <209.92g <209.92g

[root@host-083 ~]# lvcreate -aye --type raid1 -m 2 -n synced_primary_raid1_2legs_1 -L 3G black_bird /dev/sdc1:0-2400 /dev/sdh1:0-2400 /dev/sdb1:0-2400
  Logical volume "synced_primary_raid1_2legs_1" created.

[root@host-083 ~]# lvs -a -o +devices
  LV                                      VG            Attr       LSize   Pool   Origin Data%  Cpy%Sync Devices
  synced_primary_raid1_2legs_1            black_bird    rwi-a-r---   3.00g                      0.00     synced_primary_raid1_2legs_1_rimage_0(0),synced_primary_raid1_2legs_1_rimage_1(0),synced_primary_raid1_2legs_1_rimage_2(0)
  [synced_primary_raid1_2legs_1_rimage_0] black_bird    Iwi-aor---   3.00g                               /dev/sdc1(1)
  [synced_primary_raid1_2legs_1_rimage_1] black_bird    Iwi-aor---   3.00g                               /dev/sdh1(1)
  [synced_primary_raid1_2legs_1_rimage_2] black_bird    Iwi-aor---   3.00g                               /dev/sdb1(1)
  [synced_primary_raid1_2legs_1_rmeta_0]  black_bird    ewi-aor---   4.00m                               /dev/sdc1(0)
  [synced_primary_raid1_2legs_1_rmeta_1]  black_bird    ewi-aor---   4.00m                               /dev/sdh1(0)
  [synced_primary_raid1_2legs_1_rmeta_2]  black_bird    ewi-aor---   4.00m                               /dev/sdb1(0)

[root@host-083 ~]# pvcreate --test /dev/black_bird/synced_primary_raid1_2legs_1
  TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
  Test mode is not yet supported with lock type dlm.
  Device /dev/black_bird/synced_primary_raid1_2legs_1 excluded by a filter.

[root@host-083 ~]# vdo create --name synced_primary_raid1_3legs_1_vdo --vdoSlabSize 128M --device /dev/black_bird/synced_primary_raid1_2legs_1
Creating VDO synced_primary_raid1_3legs_1_vdo
      Logical blocks defaulted to 64810 blocks.
      The VDO volume can address 256 MB in 2 data slabs, each 128 MB.
      It can grow to address at most 1 TB of physical storage in 8192 slabs.
      If a larger maximum size might be needed, use bigger slabs.
Starting VDO synced_primary_raid1_3legs_1_vdo
Starting compression on VDO synced_primary_raid1_3legs_1_vdo
VDO instance 1 volume is ready at /dev/mapper/synced_primary_raid1_3legs_1_vdo

[root@host-083 ~]# vdo list
synced_primary_raid1_3legs_1_vdo

Comment 9 errata-xmlrpc 2020-04-28 16:58:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:1881

