Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
After enabling the 'all_devs' option, multipathd crashes - segfault at 0 ip 00007f4bd8b51551 sp 00007ffd37e89eb0 error 4 in libc-2.17.so[7f4bd8ab2000+1b7000]
Description
Milan P. Gandhi, 2017-06-16 09:42:29 UTC
Description of problem:
The "all_devs" option was added to the "/etc/multipath.conf" file to set the no_path_retry count for all devices to 5. This option worked fine with device-mapper-multipath-0.4.9-85.el7_2.5.
However, once the system is updated to any of the following device-mapper-multipath versions, rescanning multipath device maps or restarting the multipathd service fails with a "segmentation fault":
Affected versions of device-mapper-multipath:
device-mapper-multipath-0.4.9-99.el7
device-mapper-multipath-0.4.9-99.el7_3.1
device-mapper-multipath-0.4.9-99.el7_3.3
$ less /etc/multipath.conf
[...]
# Remove device entries when an overrides section is available.
devices {
    device {
        # These settings override built-in device settings. They do not apply
        # to devices without built-in settings (those use the settings in the
        # "defaults" section), or to devices defined in the "devices" section.
        # Note: This is not available yet on Fedora 21. For more info see
        # https://bugzilla.redhat.com/1253799
        all_devs      yes
        no_path_retry 5
    }
[...]
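The stanza above can be exercised in isolation. A minimal sketch, which writes the same fragment to a temporary file and prints back the two option values multipathd would parse (the temp-file path is illustrative; the real file is /etc/multipath.conf):

```shell
# Recreate the quoted "all_devs" stanza in a temporary file and echo back
# the option values.  Illustrative temp path only.
conf=$(mktemp)
cat > "$conf" <<'EOF'
devices {
    device {
        all_devs yes
        no_path_retry 5
    }
}
EOF
parsed=$(awk '/all_devs|no_path_retry/ {print $1 "=" $2}' "$conf")
echo "$parsed"
rm -f "$conf"
```

This prints `all_devs=yes` and `no_path_retry=5`, matching the configuration that triggers the crash.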
After updating the device-mapper-multipath package to any of the affected versions above, the following "segmentation fault" errors occur:
[root@testsystem ~]# multipath -v2
Segmentation fault
Jun 15 17:10:06 testsystem kernel: multipath[15506]: segfault at 0 ip 00007f60c204a551 sp 00007ffcd1bbee50 error 4 in libc-2.17.so[7f60c1fab000+1b7000]
[root@testsystem ~]# systemctl restart multipathd
[root@testsystem ~]#
/var/log/messages:
Jun 15 17:11:42 testsystem systemd: Starting Device-Mapper Multipath Device Controller...
Jun 15 17:11:42 testsystem kernel: multipath[15514]: segfault at 0 ip 00007f4bd8b51551 sp 00007ffd37e89eb0 error 4 in libc-2.17.so[7f4bd8ab2000+1b7000]
Jun 15 17:11:42 testsystem kernel: multipathd[15520]: segfault at 0 ip 00007f8efd097551 sp 00007ffd99598e70 error 4 in libc-2.17.so[7f8efcff8000+1b7000]
Jun 15 17:11:42 testsystem systemd: Started Device-Mapper Multipath Device Controller.
Jun 15 17:11:42 testsystem systemd: multipathd.service: main process exited, code=killed, status=11/SEGV
Jun 15 17:11:42 testsystem systemd: Unit multipathd.service entered failed state.
Jun 15 17:11:42 testsystem systemd: multipathd.service failed.
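For reference, "segfault at 0" means the faulting address was NULL, i.e. a NULL pointer was dereferenced inside a libc routine, and "error 4" on x86-64 indicates a user-mode read of an unmapped address. A sketch for counting these records in a saved log excerpt (the here-doc stands in for /var/log/messages from the report):

```shell
# Count the NULL-pointer segfault records in a saved excerpt of
# /var/log/messages (here-doc used in place of the real log file).
hits=$(grep -c 'segfault at 0' <<'EOF'
Jun 15 17:11:42 testsystem kernel: multipath[15514]: segfault at 0 ip 00007f4bd8b51551 sp 00007ffd37e89eb0 error 4 in libc-2.17.so[7f4bd8ab2000+1b7000]
Jun 15 17:11:42 testsystem kernel: multipathd[15520]: segfault at 0 ip 00007f8efd097551 sp 00007ffd99598e70 error 4 in libc-2.17.so[7f8efcff8000+1b7000]
Jun 15 17:11:42 testsystem systemd: multipathd.service: main process exited, code=killed, status=11/SEGV
EOF
)
echo "segfault records: $hits"
```

Both the `multipath` command and the `multipathd` daemon hit the same fault, consistent with a shared code path in the config handling.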
Version-Release number of selected component (if applicable):
Affected versions of device-mapper-multipath:
device-mapper-multipath-0.4.9-99.el7
device-mapper-multipath-0.4.9-99.el7_3.1
device-mapper-multipath-0.4.9-99.el7_3.3
How reproducible:
o Always
Steps to Reproduce:
1. Update device-mapper-multipath package to one of the affected versions listed above.
2. Set a "no_path_retry" count for all devices using the "all_devs" option as shown below:
$ less /etc/multipath.conf
[...]
# Remove device entries when an overrides section is available.
devices {
    device {
        all_devs      yes
        no_path_retry 5
    }
[...]
3. Run "multipath -v2" or "systemctl restart multipathd".
This results in "segmentation fault" errors in the terminal and in the logs.
Actual results:
o multipathd crashes while processing the "no_path_retry" / "queue_if_no_path" options when "all_devs yes" is set
Expected results:
o multipathd does not crash
Additional info:
o Currently this issue can be worked around with the following two steps:
1. Set a custom "no_path_retry" count in the device sections
corresponding to the specific SAN/storage arrays.
2. Set "no_path_retry fail" with the "all_devs" option to disable
queueing for all remaining devices:
devices {
    device {
        all_devs      yes
        no_path_retry fail
    }
}
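A sketch for confirming the workaround stanza is in place before restarting the daemon (the temp-file path is illustrative; the real file is /etc/multipath.conf, and the restart command is shown only as a comment):

```shell
# Write the workaround stanza to a temp file and confirm "no_path_retry fail"
# is present before reloading multipathd.  Illustrative temp path only.
conf=$(mktemp)
cat > "$conf" <<'EOF'
devices {
    device {
        all_devs yes
        no_path_retry fail
    }
}
EOF
found=$(grep -c 'no_path_retry fail' "$conf")
echo "workaround entries: $found"
# Then, after editing the real /etc/multipath.conf:
#   systemctl restart multipathd
rm -f "$conf"
```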