Created attachment 1127985
logs for multipath and /var/log/messages
Description of problem:
Multipath is unable to add new paths and fails in domap for FC volumes
Version-Release number of selected component (if applicable):
device-mapper-multipath-libs-0.4.9-87.el6.x86_64
device-mapper-multipath-0.4.9-87.el6.x86_64
How reproducible:
Easily reproducible
Steps to Reproduce:
1. There are 2 active paths to the volume via 2 different FC interfaces. At any given time one interface is up.
2. There is a script which brings down an FC interface while the other interface is up. After 30 seconds the interface is brought back up, and the other interface goes through the same process. This is repeated for many iterations.
3. After a few iterations of FC port flapping, some of the paths do not recover.
4. Our multipath dev_loss_tmo is 150 seconds. There is a udev add event for the path, but multipath is unable to add it (one way to confirm the dev_loss_tmo setting is sketched below).
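As a quick sanity check, the effective dev_loss_tmo for each FC remote port can be read from sysfs; each entry should report 150 if the multipath.conf setting was applied:
# cat /sys/class/fc_remote_ports/rport-*/dev_loss_tmo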
Actual results:
Feb 11 19:34:32 hitdev-rhel67 multipathd: delay_wait_checks = DISABLED (internal default)
Feb 11 19:34:32 hitdev-rhel67 multipathd: mpathe: failed in domap for addition of new path sdk
Feb 11 19:34:32 hitdev-rhel67 multipathd: mpathe: giving up reload
Feb 11 19:34:32 hitdev-rhel67 kernel: device-mapper: table: 253:5: multipath: error getting device
Feb 11 19:34:32 hitdev-rhel67 kernel: device-mapper: ioctl: error adding target to table
Expected results:
The paths should be recovered
Additional info:
The following are attached:
1. /var/log/messages
2. multipath.conf
3. multipath -ll
Hello Raunak,
Because we have no Nimble Storage in our lab, could you help provide test results once the package is available? We will do sanity-only testing.
thanks!
Once this happens, does running
# multipath
add the path? If not, then you can run
# multipath -v4
to see why device-mapper is failing the reload. The output should contain something like:
Feb 19 10:07:37 | libdevmapper: ioctl/libdm-iface.c(1786): device-mapper: reload ioctl on mpathb failed: Device or resource busy
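If the -v4 output is too large to scan by hand, one way (just a suggestion) to pull out the relevant lines is:
# multipath -v4 2>&1 | grep -i libdevmapper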
If not, are you able to reproduce this with
verbosity 4
added to the defaults section of multipath.conf? This causes a huge number of messages to be logged, but you are still looking for a libdevmapper message like the one above. If these don't work, I can send you a test package that cuts down on all the extra messages and prints just the important ones, so we can figure out why device-mapper can't grab the path device.
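For reference, the change would look roughly like this in multipath.conf (any other options you already have in the defaults section stay as they are):
defaults {
        verbosity 4
}
Assuming multipathd is logging to syslog as on a stock RHEL 6 box, something like
# grep -i libdevmapper /var/log/messages
after reproducing the failure should turn up the reason device-mapper rejected the reload, if one was logged.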
My assumption is that something has it opened exclusively, and you are going to get a "Device or resource busy" message. The question is whether whatever has it open exclusively keeps it open, or only has it open temporarily, so that a retry could fix this.
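When the add fails, it may also be worth checking what has the path device open; a rough check (using sdk from the log above as the example) would be:
# lsof /dev/sdk
# ls /sys/block/sdk/holders/
lsof lists userspace processes with the device open, and the holders directory shows any device-mapper devices that have already claimed it; neither is conclusive for a transient exclusive open, but they can narrow down the suspects.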