Bug 1881235 - dm-event.service segfaults when creating a thin snapshot
Summary: dm-event.service segfaults when creating a thin snapshot
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.8
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: unspecified
Target Milestone: rc
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1887598
 
Reported: 2020-09-21 21:50 UTC by bugzilla
Modified: 2021-09-03 12:54 UTC
CC List: 9 users

Fixed In Version: lvm2-2.02.187-6.el7_9.2
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1887598 (view as bug list)
Environment:
Last Closed: 2020-12-15 11:23:10 UTC
Target Upstream Version:
Embargoed:


Attachments
lvm.conf (59.68 KB, text/plain), 2020-09-21 21:50 UTC, bugzilla
dmeventd debug output (7.91 KB, text/plain), 2020-09-23 19:21 UTC, bugzilla
lvmdump -a (617.15 KB, application/gzip), 2020-10-02 23:23 UTC, bugzilla


Links
Red Hat Product Errata RHBA-2020:5461 (last updated 2020-12-15 11:23:22 UTC)

Description bugzilla 2020-09-21 21:50:13 UTC
Created attachment 1715601 [details]
lvm.conf

Description of problem:

dmeventd segfaults when creating a thin snapshot


Version-Release number of selected component (if applicable):

lvm2-2.02.186-7.el7_8.2.x86_64
lvm2-libs-2.02.186-7.el7_8.2.x86_64

device-mapper-1.02.164-7.el7_8.2.x86_64
device-mapper-event-1.02.164-7.el7_8.2.x86_64
device-mapper-event-libs-1.02.164-7.el7_8.2.x86_64
device-mapper-libs-1.02.164-7.el7_8.2.x86_64



How reproducible:

very


Steps to Reproduce:
1. Create a snapshot of a thin volume (one possible command sequence is sketched below)
2. Observe the dmeventd segfault in the logs
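
A minimal reproduction sketch on an existing volume group; the VG and LV names ("vg", "my_pool", "my_origin", "my_snap") are placeholders, and the commands mirror the ones used later in comment 19:

# create a thin pool and a thin origin volume in the placeholder VG "vg"
lvcreate -ay --thinpool my_pool -L 10G vg
lvcreate -ay --virtualsize 20G -T vg/my_pool -n my_origin
# create a thin snapshot of the origin; with monitoring enabled this is the step that triggers the dmeventd segfault
lvcreate -ay -y -k n -s /dev/vg/my_origin -n my_snap
# check the kernel log for the segfault message
journalctl -k | grep -i 'dmeventd.*segfault'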

Actual results:

A snapshot is created, but dmeventd segfaults with the following logs:

kernel: dmeventd[19894]: segfault at 7fe317d89578 ip 00007fe3145fbb57 sp 00007fe317d89580 error 6 in liblvm2cmd.so.2.02[7fe31453f000+1d7000]
kernel: Code: f1 31 c9 ba d0 00 00 00 bf 84 00 00 00 31 c0 e8 4f 9f fa ff e9 2e ff ff ff 49 8d 44 24 1e 48 83 e0 f0 48 29 c4 48 8d 5c 24 0f <e8> 04 fc ff ff 48 83 e3 f0 48 63 d0 4a 8d 44 23 f8 48 39 c3 0f 83
dmeventd[19905]: dmeventd ready for processing.



Expected results:

A snapshot should be created without a segfault in dmeventd


Additional info:

I ran dmeventd with gdb and got the below result when creating a thin snapshot. The only memory setting we have changed in our lvm.conf is the following:
    reserved_memory = 131072

Note that I have tried commenting out the above setting and I still get the same result. See the attached file for our full lvm.conf.
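
For reference, a quick way to confirm which values lvm actually picks up from lvm.conf (a sketch; lvmconfig is part of the lvm2 package on this release):

lvmconfig activation/reserved_memory activation/reserved_stack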


==== gdb attached to dmeventd process ====

~]# gdb -p 26066
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-119.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Attaching to process 26066
Reading symbols from /usr/sbin/dmeventd...Reading symbols from /usr/lib/debug/usr/sbin/dmeventd.debug...done.
done.
Reading symbols from /lib64/libdl.so.2...Reading symbols from /usr/lib/debug/usr/lib64/libdl-2.17.so.debug...done.
done.
Loaded symbols for /lib64/libdl.so.2
Reading symbols from /lib64/libdevmapper-event.so.1.02...Reading symbols from /usr/lib/debug/usr/lib64/libdevmapper-event.so.1.02.debug...done.
done.
Loaded symbols for /lib64/libdevmapper-event.so.1.02
Reading symbols from /lib64/libdevmapper.so.1.02...Reading symbols from /usr/lib/debug/usr/lib64/libdevmapper.so.1.02.debug...done.
done.
Loaded symbols for /lib64/libdevmapper.so.1.02
Reading symbols from /lib64/libpthread.so.0...Reading symbols from /usr/lib/debug/usr/lib64/libpthread-2.17.so.debug...done.
done.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Loaded symbols for /lib64/libpthread.so.0
Reading symbols from /lib64/libgcc_s.so.1...Reading symbols from /usr/lib/debug/usr/lib64/libgcc_s-4.8.5-20150702.so.1.debug...done.
done.
Loaded symbols for /lib64/libgcc_s.so.1
Reading symbols from /lib64/libc.so.6...Reading symbols from /usr/lib/debug/usr/lib64/libc-2.17.so.debug...done.
done.
Loaded symbols for /lib64/libc.so.6
Reading symbols from /lib64/ld-linux-x86-64.so.2...Reading symbols from /usr/lib/debug/usr/lib64/ld-2.17.so.debug...done.
done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
Reading symbols from /lib64/libselinux.so.1...Reading symbols from /usr/lib/debug/usr/lib64/libselinux.so.1.debug...done.
done.
Loaded symbols for /lib64/libselinux.so.1
Reading symbols from /lib64/libsepol.so.1...Reading symbols from /usr/lib/debug/usr/lib64/libsepol.so.1.debug...done.
done.
Loaded symbols for /lib64/libsepol.so.1
Reading symbols from /lib64/libudev.so.1...Reading symbols from /usr/lib/debug/usr/lib64/libudev.so.1.6.2.debug...done.
done.
Loaded symbols for /lib64/libudev.so.1
Reading symbols from /lib64/libm.so.6...Reading symbols from /usr/lib/debug/usr/lib64/libm-2.17.so.debug...done.
done.
Loaded symbols for /lib64/libm.so.6
Reading symbols from /lib64/libpcre.so.1...Reading symbols from /usr/lib/debug/usr/lib64/libpcre.so.1.2.0.debug...done.
done.
Loaded symbols for /lib64/libpcre.so.1
Reading symbols from /lib64/librt.so.1...Reading symbols from /usr/lib/debug/usr/lib64/librt-2.17.so.debug...done.
done.
Loaded symbols for /lib64/librt.so.1
Reading symbols from /lib64/libcap.so.2...Reading symbols from /usr/lib/debug/usr/lib64/libcap.so.2.22.debug...done.
done.
Loaded symbols for /lib64/libcap.so.2
Reading symbols from /lib64/libdw.so.1...Reading symbols from /usr/lib/debug/usr/lib64/libdw-0.176.so.debug...done.
done.
Loaded symbols for /lib64/libdw.so.1
Reading symbols from /lib64/libattr.so.1...Reading symbols from /usr/lib/debug/usr/lib64/libattr.so.1.1.0.debug...done.
done.
Loaded symbols for /lib64/libattr.so.1
Reading symbols from /lib64/libelf.so.1...Reading symbols from /usr/lib/debug/usr/lib64/libelf-0.176.so.debug...done.
done.
Loaded symbols for /lib64/libelf.so.1
Reading symbols from /lib64/libz.so.1...Reading symbols from /usr/lib/debug/usr/lib64/libz.so.1.2.7.debug...done.
done.
Loaded symbols for /lib64/libz.so.1
Reading symbols from /lib64/liblzma.so.5...Reading symbols from /usr/lib/debug/usr/lib64/liblzma.so.5.2.2.debug...done.
done.
Loaded symbols for /lib64/liblzma.so.5
Reading symbols from /lib64/libbz2.so.1...Reading symbols from /usr/lib/debug/usr/lib64/libbz2.so.1.0.6.debug...done.
done.
Loaded symbols for /lib64/libbz2.so.1
0x00007f49a3ad0983 in __select_nocancel () at ../sysdeps/unix/syscall-template.S:81
81    T_PSEUDO (SYSCALL_SYMBOL, SYSCALL_NAME, SYSCALL_NARGS)
(gdb) continue
Continuing.
[New Thread 0x7f49a48e2700 (LWP 26440)]
[New Thread 0x7f49a1cd4700 (LWP 26441)]

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f49a48e2700 (LWP 26440)]
_allocate_memory () at mm/memlock.c:172
172            _touch_memory(stack_mem, _size_stack);
(gdb) bt
#0  _allocate_memory () at mm/memlock.c:172
#1  0x00007f49a10d42df in _lock_mem (cmd=0x7f499c0008e0) at mm/memlock.c:513
#2  _lock_mem_if_needed (cmd=0x7f499c0008e0) at mm/memlock.c:587
#3  0x00007f49a10d4a67 in memlock_inc_daemon (cmd=cmd@entry=0x7f499c0008e0) at mm/memlock.c:667
#4  0x00007f49a115027e in lvm2_run (handle=0x7f499c0008e0, cmdline=<optimized out>,
    cmdline@entry=0x7f49a16665b0 "_memlock_inc") at lvmcmdlib.c:84
#5  0x00007f49a166612a in dmeventd_lvm2_init () at dmeventd_lvm.c:89
#6  0x00007f49a1869d5f in register_device (device=0x7f499c004a10 "data-pool0-tpool", uuid=<optimized out>,
    major=<optimized out>, minor=<optimized out>, user=0x5653fb950118) at dmeventd_thin.c:352
#7  0x00005653fa1423ca in _do_register_device (thread=0x5653fb9500a0) at dmeventd.c:916
#8  _monitor_thread (arg=0x5653fb9500a0) at dmeventd.c:1006
#9  0x00007f49a3fc6ea5 in start_thread (arg=0x7f49a48e2700) at pthread_create.c:307
#10 0x00007f49a3ad98dd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
(gdb)

Comment 2 Zdenek Kabelac 2020-09-22 07:55:39 UTC
The purpose of reserved memory is to pre-allocate all the RAM the process will ever need during
the locked 'activation' section, when a device (including e.g. a swap device) can be suspended,
and lvm2 tries to keep this size reasonably 'minimal'. The current default limit should
be mostly OK - unless you are managing very large metadata for concurrently active LVs.

So while we normally ask for 8M, you are going with 128M - presumably there is some good reason
for this. That said, this code has not changed for quite a while, so it is
interesting why it is now causing any sort of problem.

I'd estimate this is likely a symptom of memory trashing in some other part of the code.

So please attach 'lvmdump -a' output from the failing system, so we can try to reproduce it internally.

Also add 'ulimit -a' output.

And if you can, you may also try running 'dmeventd' in debug mode - just kill the
existing one (while no devices are being monitored) and restart a new dmeventd from the command line:

dmeventd -flddd &>/tmp/log

This should give us some idea of what is going on inside dmeventd (which internally runs an lvm command to resize your thin-pool).

Running this under valgrind may also help complete the picture.
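
A possible sequence for this on a RHEL 7 system (the systemd unit names and the valgrind invocation are assumptions, not part of the comment above):

# stop the running daemon first (assumes no device currently relies on monitoring)
systemctl stop dm-event.service dm-event.socket
# restart dmeventd in the foreground with verbose debugging, capturing all output
dmeventd -flddd &>/tmp/dmeventd-debug.log
# optionally, the same run under valgrind for additional memory diagnostics
valgrind --log-file=/tmp/dmeventd-valgrind.log dmeventd -flddd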

Comment 3 bugzilla 2020-09-23 19:19:40 UTC
Is there a way we can send you the lvmdump privately? We are a Red Hat partner if that helps facilitate transferring the information privately.

As for your other requests, see below for ulimits and see attached for dmeventd debug output (dmeventd-debug.log).

~]# ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 257396
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 257396
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Comment 4 bugzilla 2020-09-23 19:21:03 UTC
Created attachment 1716124 [details]
dmeventd debug output

Comment 5 Zdenek Kabelac 2020-09-23 19:42:23 UTC
Such private files can likely be submitted via an open support case at access.redhat.com.

The dmeventd trace suggests it is crashing very early, at the start of the first lvm command.

Assuming this is common x86_64 hardware - does the crash not happen with 8M?

And does it start crashing with 128M (so 64M is still OK)?

If you disable monitoring (monitoring=0) in lvm.conf, do lvm2 commands (e.g. lvchange --refresh vg)
have any problem with memory locking on this machine with 128M reserved memory?
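
One way to run that check (a sketch; the VG name "vg" is a placeholder, and the per-command --config override is an alternative to editing lvm.conf):

# either set "monitoring = 0" in the activation section of /etc/lvm/lvm.conf
# and refresh the VG while reserved_memory is still 131072:
lvchange --refresh vg
# or override monitoring for a single command without editing lvm.conf:
lvchange --refresh --config 'activation { monitoring = 0 }' vg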

Comment 6 bugzilla 2020-09-24 23:59:23 UTC
If you can post your GPG key or point me where I can find it, then I can send you an encrypted tar with the other information you are looking for. We will test the other memory and monitoring settings and get back to you.

Comment 7 bugzilla 2020-09-28 17:36:15 UTC
Regarding your tests, here are my results:
* It does not matter what I set reserved_memory to; dmeventd crashes on LV creation (note that it does not crash on removal).
* Setting monitoring=0 stops dmeventd from crashing.
* Running `lvchange --refresh vg` results in many segfaults in the logs.

If you can provide your GPG key, I can provide the requested dump.

Comment 8 bugzilla 2020-10-02 23:22:54 UTC
In the interest of getting this issue resolved sooner rather than later, I am uploading the lvmdump to this bug report rather than waiting for a GPG key to encrypt it.

Comment 9 bugzilla 2020-10-02 23:23:29 UTC
Created attachment 1718549 [details]
lvmdump -a

Comment 10 Zdenek Kabelac 2020-10-03 06:43:59 UTC
The problem is likely not GPG - but we still haven't found a reason why dmeventd could be crashing there.

When you set 'use_mlockall = 1' in lvm.conf, does it change anything for your crashing problem?

Also, does changing your 'reserved_stack' to 64 (instead of the current 512) help?
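
A sketch of the two settings to test, as they would appear in the activation section of /etc/lvm/lvm.conf (the comments are mine, not from this bug):

activation {
    use_mlockall = 1      # try locking all memory with mlockall()
    reserved_stack = 64   # KiB; the default value, instead of the current 512
}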

The lvmdump has no info about boot & memory mapping - so is this some kind of unusual hardware?

The code where it is crashing on your machine is quite 'time-proven' - so we are still unclear how
it could possibly be crashing there.

Basically, it tries to touch every allocated memory page the process has, to confirm that the memory
belongs to the process and physically exists.

So if it is getting an exception, it looks like the kernel VM core has no mapping for that memory??

But since this is bare metal, we are puzzled how this is possible.

Comment 11 bugzilla 2020-10-05 16:55:42 UTC
Setting reserved_stack = 64 fixes the issue; we no longer get segfaults. This is true even with use_mlockall = 0. Hopefully that helps you determine the possible cause.

We are not using any unusual hardware configuration as far as I am aware. I am not onsite so I cannot get you the exact hardware, but this is the processor we are using:
* Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz

I can get you more specific hardware specs later today if you think it will help.

Comment 12 bugzilla 2020-10-05 22:45:40 UTC
The motherboard is nothing special; it is a Supermicro X9DBL-3F with 2 sockets and the CPUs noted above.

Comment 13 Zdenek Kabelac 2020-10-07 12:36:13 UTC
OK, this has helped - now it's clearer, as the problem is not 'reserved_memory' but 'reserved_stack'.

dmeventd internally sets a 'per thread' stack size of 300KiB, and the lvm2 command is executed within
one such thread. So when lvm.conf asks that thread to reserve (and pre-touch) 512KiB of stack, the
stack-touching code walks past the end of the 300KiB thread stack and faults.

So for now, keep reserved_stack ideally at the default 64KiB.

Note that dmeventd as such doesn't read lvm.conf - it's only one of its threads, executed later, that
is influenced by this config. So if a user sets some high value there, the threads have already been
'limited' from the dmeventd side.

We will think about the best approach here.

For now, users shouldn't touch reserved_stack - the code has generally been changed to put large
data into memory pools rather than on the stack, so the default value of 64K should be OK.

Comment 15 Corey Marthaler 2020-10-13 01:29:41 UTC
FWIW, this doesn't require thin snaps. Old-style snapshots also hit this.

[root@hayes-03 ~]# lvcreate --yes -L 300M snapper -n origin
  Logical volume "origin" created.
[root@hayes-03 ~]# lvcreate --yes -s /dev/snapper/origin -n altered_memory -L 100M
  WARNING: Monitoring snapper/altered_memory failed.
  WARNING: Monitoring snapper/origin failed.
  Logical volume "altered_memory" created.

Comment 16 Zdenek Kabelac 2020-10-20 21:31:21 UTC
Resolved by this commit: https://www.redhat.com/archives/lvm-devel/2020-October/msg00168.html
The threaded dmeventd now ignores the reserved_stack setting (we already mostly cleaned up the lvm
code base in the past to avoid pushing large things onto the stack, so 64K should work).

Comment 19 Corey Marthaler 2020-10-29 18:53:05 UTC
Fix verified in the latest 7.9.z rpms.

3.10.0-1160.el7.x86_64

lvm2-2.02.187-6.el7_9.2    BUILT: Mon Oct 26 15:21:13 CDT 2020
lvm2-libs-2.02.187-6.el7_9.2    BUILT: Mon Oct 26 15:21:13 CDT 2020
lvm2-cluster-2.02.187-6.el7_9.2    BUILT: Mon Oct 26 15:21:13 CDT 2020


[root@mckinley-01 ~]# grep reserved_ /etc/lvm/lvm.conf
        # Configuration option activation/reserved_stack.
        reserved_stack = 512
        # Configuration option activation/reserved_memory.
        reserved_memory = 131072

[root@mckinley-01 ~]# lvcreate -ay --thinpool my_pool -L 10G vg
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  Logical volume "my_pool" created.
[root@mckinley-01 ~]# lvcreate -ay --virtualsize 20G -T vg/my_pool -n my_origin
  WARNING: Sum of all thin volume sizes (20.00 GiB) exceeds the size of thin pool vg/my_pool (10.00 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "my_origin" created.
[root@mckinley-01 ~]# lvcreate -ay -y -k n -s /dev/vg/my_origin -n my_snap
  WARNING: Sum of all thin volume sizes (40.00 GiB) exceeds the size of thin pool vg/my_pool (10.00 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "my_snap" created.

Oct 29 13:44:59 mckinley-01 dmeventd[21105]: No longer monitoring thin pool vg-my_pool-tpool.
Oct 29 13:44:59 mckinley-01 lvm[21105]: Monitoring thin pool vg-my_pool-tpool.

[root@mckinley-01 ~]# ps -ef | grep dmeventd
root     21105     1  0 13:22 ?        00:00:00 /usr/sbin/dmeventd -f

Comment 28 errata-xmlrpc 2020-12-15 11:23:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5461

