Bug 1022605 - multipath: system runs out of memory while creating mpath devices
Summary: multipath: system runs out of memory while creating mpath devices
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: iscsi-initiator-utils
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Chris Leech
QA Contact: Martin Hoyer
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-10-23 15:33 UTC by Bruno Goncalves
Modified: 2021-08-19 03:34 UTC
CC List: 13 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-12-06 12:30:47 UTC
Target Upstream Version:
Embargoed:


Attachments

Description Bruno Goncalves 2013-10-23 15:33:21 UTC
Description of problem:
Trying to assign 510 LUNs to a system with 1 GB of RAM causes the system to panic after running out of memory.

Version-Release number of selected component (if applicable):
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
2.6.32-358.el6.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create 2 targets with 255 LUNs each using tgtd on the server; this results in 510 LUNs with 1 path each (see the sketch after these steps).

2. Discover the targets:

iscsiadm -m discovery -I default -p 127.0.0.1 -t st
Starting iscsid: [  OK  ]
[  OK  ]
127.0.0.1:3260,1 iqn.2009-10.com.redhat:storage-1
127.0.0.1:3260,1 iqn.2009-10.com.redhat:storage-2

3. Log in to the targets and wait for the devices to be created:
iscsiadm -m node -l 
Logging in to [iface: default, target: iqn.2009-10.com.redhat:storage-2, portal: 127.0.0.1,3260] (multiple)
Logging in to [iface: default, target: iqn.2009-10.com.redhat:storage-1, portal: 127.0.0.1,3260] (multiple)
Login to [iface: default, target: iqn.2009-10.com.redhat:storage-2, portal: 127.0.0.1,3260] successful.
Login to [iface: default, target: iqn.2009-10.com.redhat:storage-1, portal: 127.0.0.1,3260] successful.

modprobe -q scsi_wait_scan
modprobe -r -q scsi_wait_scan
udevadm settle
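
For reference, a minimal sketch of how the two 255-LUN targets from step 1 could be set up with tgtadm. The IQNs match the discovery output, but the backing-file path and size are illustrative assumptions, not details from the original report:

# tgtd must already be running (service tgtd start)
for tid in 1 2; do
    tgtadm --lld iscsi --op new --mode target --tid $tid \
        -T iqn.2009-10.com.redhat:storage-$tid
    tgtadm --lld iscsi --op bind --mode target --tid $tid -I ALL
    for lun in $(seq 1 255); do
        # sparse 100 MB backing file per LUN (arbitrary size for illustration)
        dd if=/dev/zero of=/var/tmp/tgt-$tid-$lun.img bs=1M count=0 seek=100 2>/dev/null
        tgtadm --lld iscsi --op new --mode logicalunit --tid $tid \
            --lun $lun -b /var/tmp/tgt-$tid-$lun.img
    done
done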

Actual results:
The server starts killing processes:
udevd invoked oom-killer: gfp_mask=0xd0, order=1, oom_adj=-17, oom_score_adj=-1000 
udevd cpuset=/ mems_allowed=0 
Pid: 566, comm: udevd Not tainted 2.6.32-358.el6.x86_64 #1 
Call Trace: 
 [<ffffffff810cb5d1>] ? cpuset_print_task_mems_allowed+0x91/0xb0 
 [<ffffffff8111cd10>] ? dump_header+0x90/0x1b0 
 [<ffffffff81510017>] ? _spin_unlock_irqrestore+0x17/0x20 
 [<ffffffff8111d192>] ? oom_kill_process+0x82/0x2a0 
 [<ffffffff8111d0d1>] ? select_bad_process+0xe1/0x120 
 [<ffffffff8111d5d0>] ? out_of_memory+0x220/0x3c0 
 [<ffffffff8112c27c>] ? __alloc_pages_nodemask+0x8ac/0x8d0 
 [<ffffffff8116087a>] ? alloc_pages_current+0xaa/0x110 
 [<ffffffff81129c5e>] ? __get_free_pages+0xe/0x50 
 [<ffffffff8106bef4>] ? copy_process+0xe4/0x1450 
 [<ffffffff81438d3c>] ? lock_sock_nested+0xac/0xc0 
 [<ffffffff8106d2f4>] ? do_fork+0x94/0x460 
 [<ffffffff8143b1ed>] ? sock_setsockopt+0x16d/0x6f0 
 [<ffffffff81009598>] ? sys_clone+0x28/0x30 
 [<ffffffff8100b393>] ? stub_clone+0x13/0x20 
 [<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b 

This continues until the kernel panics:
Kernel panic - not syncing: Out of memory and no killable processes...

Comment 4 Ben Marzinski 2014-05-13 03:46:07 UTC
Would it be possible to get me access to a system that can reproduce this?  I can't hit it on my systems.  It seems that multipathd uses about 84 KB to 160 KB per path; it varies depending on what you've done.  For instance, running

# service multipathd reload

causes the per-path usage to jump. Valgrind doesn't detect any memory leaks, and by commenting out bits of code it appears that this extra memory is almost entirely due to pthreads library calls.  Multipathd definitely isn't leaving extra threads around or failing to de-allocate memory used by the threads.  After about 3 reloads, the memory usage stops increasing, and usually decreases some.  I've duplicated this effect with a simple program that just creates and cancels a number of pthreads.  It may be possible to mitigate this, but I'm not positive.

Also, on all my systems, iscsid memory usage is always significantly higher than multipathd memory usage.
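
A rough way to reproduce the measurement described above is to snapshot multipathd's resident memory around a reload; this is only an illustrative sketch, not commands taken from the original comment:

# RSS in KB before and after a reload
ps -o rss= -p $(pidof multipathd)
service multipathd reload
ps -o rss= -p $(pidof multipathd)
# rough per-path figure: divide the difference by the number of paths
multipathd -k"show paths" | wc -l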

Comment 6 Ben Marzinski 2014-05-16 04:09:23 UTC
So, I've been able to recreate the oom-killer on hp-ml310g5-01, but I don't need multipathd to be running at all.  I can just disable multipathd, load all the SCSI devices, and then run the multipath command to try to create all the devices.

When I hit the OOM killer, the biggest memory users are iscsiuio at 5701k and iscsid at 1022k. Multipath's memory usage is at 172k. It's not in the top 10, and it's behind things like bash (182k) and sshd (231k).

But the biggest memory user overall is udev.  There are 240 udevd processes, averaging around 100k apiece.  This is after I disabled the udev rule that runs kpartx on every created device.  With that rule enabled, there were fewer udevd processes, but each of them needed to fire off a kpartx process.

So it seems like the memory issue is that devices are getting created too fast, and udev tries to keep up regardless of the memory pressure of doing so.

So far, I've been unable to recreate the memory issue using multipathd; instead I'm getting udev timeouts, which are causing devices not to get created.  But I'm going to keep trying.
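
Illustrative commands (not from the original comment) for ranking resident-memory users and counting udevd workers while the devices are being created:

# largest resident-memory consumers, biggest first (RSS in KB)
ps -eo rss,comm --sort=-rss | head
# number of udevd processes at that moment
ps -C udevd --no-headers | wc -l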

Comment 7 Ben Marzinski 2014-06-04 02:39:48 UTC
I've hit this on my machines as well.  I can't hit an out-of-memory issue, but I certainly can max out the CPU, slow everything to a crawl, time out the iscsi devices and udev workers, and have everything end up in a mess.

If I use tgtd to serve up 1024 iscsi devices, and then use iscsid to log into them on the same machine with multipathd running, I see udevd and hald both spike to 100% CPU and sit there for minutes; eventually things start failing.

Here are snippets from /var/log/messages from when things start going wrong:

** These are the first sign that things are going wrong

Jun  3 07:42:35 ask-06 udevd[537]: worker [19022] unexpectedly returned with status 0x0100
Jun  3 07:42:35 ask-06 udevd[537]: worker [19022] failed while handling '/devices/platform/host558/session552/target558:0:0/558:0:0:1/scsi_disk/558:0:0:1'
Jun  3 07:42:35 ask-06 udevd[537]: worker [20668] unexpectedly returned with status 0x0100

** Once these messages start, the errors just keep snowballing

Jun  3 07:52:16 ask-06 iscsid: Kernel reported iSCSI connection 18:0 error (1011 - ISCSI_ERR_CONN_FAILED: iSCSI connection failed) state (3)
Jun  3 07:52:16 ask-06 iscsid: Kernel reported iSCSI connection 17:0 error (1011 - ISCSI_ERR_CONN_FAILED: iSCSI connection failed) state (3)
Jun  3 07:52:16 ask-06 iscsid: Kernel reported iSCSI connection 230:0 error (1011 - ISCSI_ERR_CONN_FAILED: iSCSI connection failed) state (3)


Jun  3 07:52:20 ask-06 kernel: connection787:0: ping timeout of 5 secs expired, recv timeout 5, last rx 6590901304, last ping 6590906304, now 6590911304
Jun  3 07:52:20 ask-06 kernel: connection787:0: detected conn error (1011)
Jun  3 07:52:20 ask-06 kernel: connection469:0: ping timeout of 5 secs expired, recv timeout 5, last rx 6590901304, last ping 6590906304, now 6590911304
Jun  3 07:52:20 ask-06 kernel: connection469:0: detected conn error (1011)
Jun  3 07:52:20 ask-06 kernel: connection473:0: ping timeout of 5 secs expired,


Jun  3 07:52:39 ask-06 tgtd: conn_close(101) connection closed, 0x41599c8 2
Jun  3 07:52:39 ask-06 tgtd: conn_close(107) sesson 0x50354d0 1
Jun  3 07:52:39 ask-06 tgtd: conn_close(138) Forcing release of tx task 0x52282c0 0 0
Jun  3 07:52:39 ask-06 tgtd: conn_close(101) connection closed, 0x4200f48 2
Jun  3 07:52:39 ask-06 tgtd: conn_close(107) sesson 0x5042c00 1
Jun  3 07:52:39 ask-06 tgtd: conn_close(138) Forcing release of tx task 0x5228570 0 0
Jun  3 07:52:39 ask-06 tgtd: conn_close(101) connection closed, 0x42da808 2
Jun  3 07:52:39 ask-06 tgtd: conn_close(107) sesson 0x5054690 1
Jun  3 07:52:39 ask-06 tgtd: conn_close(138) Forcing release of tx task 0x5228820 0 0

** Eventually, I start getting these messages, but it seems that paths keep failing again as soon as they come back up.

Jun  3 07:52:55 ask-06 iscsid: connection302:0 is operational after recovery (1 attempts)
Jun  3 07:52:55 ask-06 iscsid: connection348:0 is operational after recovery (1 attempts)
Jun  3 07:52:55 ask-06 iscsid: connection286:0 is operational after recovery (1 attempts)

** After the iscsi connections start failing, even the multipath devices that did successfully get created start failing.

Jun  3 07:52:55 ask-06 multipathd: checker failed path 8:128 in map 1IET     00030001
Jun  3 07:52:55 ask-06 kernel: device-mapper: multipath: Failing path 8:128.
Jun  3 07:52:55 ask-06 multipathd: 1IET     00030001: remaining active paths: 0
Jun  3 07:52:55 ask-06 kernel: device-mapper: multipath: Failing path 65:0.
Jun  3 07:52:55 ask-06 multipathd: 1IET     000b0001: sdq - directio checker reports path is down
Jun  3 07:52:55 ask-06 multipathd: checker failed path 65:0 in map 1IET     000b0001

** Finally we get into a state where mostly I am getting checker errors on the existing multipath devices, with occasional bursts of scsi timeout messages.

Jun  3 07:53:49 ask-06 multipathd: 1IET     00bc0001: sdgj - directio checker reports path is down
Jun  3 07:53:49 ask-06 multipathd: 1IET     00a20001: sdfv - directio checker reports path is down
Jun  3 07:53:49 ask-06 multipathd: 1IET     00b20001: sdfn - directio checker reports path is down
Jun  3 07:53:49 ask-06 multipathd: 1IET     00ae0001: sdfs - directio checker reports path is down


Jun  3 07:53:51 ask-06 kernel: sd 504:0:0:1: timing out command, waited 7s
Jun  3 07:53:51 ask-06 kernel: sd 549:0:0:1: timing out command, waited 7s
Jun  3 07:53:51 ask-06 kernel: sd 559:0:0:1: timing out command, waited 7s
Jun  3 07:53:51 ask-06 kernel: sd 563:0:0:1: timing out command, waited 7s
Jun  3 07:53:51 ask-06 kernel: sd 520:0:0:1: timing out command, waited 7s


Jun  3 07:55:32 ask-06 kernel: sd 30:0:0:1: timing out command, waited 180s
Jun  3 07:55:32 ask-06 kernel: sd 30:0:0:1: [sdad] Unhandled error code
Jun  3 07:55:32 ask-06 kernel: sd 30:0:0:1: [sdad] Result: hostbyte=DID_IMM_RETRY driverbyte=DRIVER_OK
Jun  3 07:55:32 ask-06 kernel: sd 30:0:0:1: [sdad] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00
Jun  3 07:55:32 ask-06 kernel: end_request: I/O error, dev sdad, sector 0

** At no point in all of this was the memory of my machine ever dangerously low.  This is similar to what I saw on the rhts machine.  What I'm seeing doesn't look like a multipath issue.  It looks like the machine simply isn't powerful enough to have 1024 iscsi devices and thus 1024 multipath devices being started up at the same time, with a software iscsi server running on the same machine.  We may need to file this one under "If it hurts, stop doing it."

Comment 11 Ben Marzinski 2014-09-25 15:34:10 UTC
I'm pushing this out to rhel-6.7 so we can see if there is an iscsi way to rate-limit device discovery, although I'm still not against closing this WONTFIX.
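
Purely as an illustration of the rate-limiting idea, and not a fix chosen in this bug, the initiator side could log into the targets one at a time and let udev settle between logins instead of using a single "iscsiadm -m node -l":

iscsiadm -m node | while read portal target; do
    # ${portal%,*} drops the ",<tpgt>" suffix that -m node prints after the portal
    iscsiadm -m node -T "$target" -p "${portal%,*}" -l
    udevadm settle
done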

Comment 15 Jan Kurik 2017-12-06 12:30:47 UTC
Red Hat Enterprise Linux 6 is in the Production 3 Phase. During the Production 3 Phase, Critical impact Security Advisories (RHSAs) and selected Urgent Priority Bug Fix Advisories (RHBAs) may be released as they become available.

The official life cycle policy can be reviewed here:

http://redhat.com/rhel/lifecycle

This issue does not meet the inclusion criteria for the Production 3 Phase and will be marked as CLOSED/WONTFIX. If this remains a critical requirement, please contact Red Hat Customer Support to request a re-evaluation of the issue, citing a clear business justification. Note that a strong business justification will be required for re-evaluation. Red Hat Customer Support can be contacted via the Red Hat Customer Portal at the following URL:

https://access.redhat.com/

