device-mapper is unable to load tables that include offlined devices. So far, I have only been able to trigger this by manually offlining the device with:
# echo offline > /sys/block/<devname>/device/state
Unfortunately, when multipathd tries to add a path to a multipath device that has an offlined path device, it needs to reload the device's table. This reload will fail. Worse, multipathd will retry the reload indefinitely, and will livelock.
If you run multipath, it will segfault. When multipath examines the existing multipath devices, it will see the offlined path. However, since the path is offline, multipath will not have gathered any of the path info, and it will crash trying to select a checker without the sysfs info it needs.
I have a fix for this. The ideal fix would be for the device-mapper kernel code to allow new tables with offlined devices, if the old table already included the devices. Instead, I fixed the resulting problems in the multipath code.
multipathd will still not be able to reload the table, but it will add the new path device to the pathlist, where it can be adopted later, and multipathd won't livelock.
Also multipath will now grab the sysfs pathinfo if necessary before trying to select a checker.
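The checker-selection fix amounts to fetching the sysfs pathinfo on demand before a checker is chosen. The following is a simplified sketch in C with hypothetical struct and function names, not the actual multipath-tools code:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical, simplified stand-ins for multipath-tools structures. */
struct path {
	char dev[32];
	int  has_sysfs_info;	/* set once sysfs pathinfo has been read */
	char checker[32];	/* selected path checker, empty if none  */
};

/* Pretend to read the sysfs attributes. An offlined path previously
 * never got this far, leaving the struct unpopulated. */
static int sysfs_pathinfo(struct path *pp)
{
	pp->has_sysfs_info = 1;
	return 0;
}

/* Selecting a checker uses sysfs-derived fields; calling it without
 * that info is what caused the segfault described above. */
static int select_checker(struct path *pp)
{
	if (!pp->has_sysfs_info)
		return -1;	/* would have crashed before the fix */
	strcpy(pp->checker, "directio");
	return 0;
}

/* The fix: grab the sysfs info if necessary, then choose a checker. */
static int checker_for_path(struct path *pp)
{
	if (!pp->has_sysfs_info && sysfs_pathinfo(pp) != 0)
		return -1;
	return select_checker(pp);
}
```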
To reproduce this bug:
1. Set up a multipath device with multiple paths
2. # echo offline > /sys/block/<path1>/device/state
3. # echo 1 > /sys/block/<path2>/device/delete
This should fail to remove the path from the multipath table, because you can't reload the table with path1 offlined. Then
4. Re-add path2 (for instance, by rescanning so the device reappears)
This will make multipathd go into a livelock trying to readd the new path2 device. To see the multipath error
5. kill multipathd
6. run multipath
This will segfault, since multipath doesn't get the sysfs info from path1 before it tries to select a checker for it.
If this can actually be hit without manually offlining the device, this fix will still have the problem that multipathd won't be able to reload the table to add the path back to the multipath device when it adds the path to its list of available paths. This was true in RHEL5 as well. However, it never really bothered people, since path devices didn't get removed when they failed. To work around this issue in RHEL6, users can set dev_loss_tmo and fast_io_fail_tmo to high enough values that their devices never get removed when they fail. This will make RHEL6 work like RHEL5.
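Such a workaround might look like the following multipath.conf fragment. The values shown are illustrative examples, not recommendations, and should be tuned for your environment:

```
defaults {
	# Keep path devices around long after they fail, so they are
	# never removed from the system (dev_loss_tmo is in seconds).
	dev_loss_tmo		2147483647
	# Fail I/O over to other paths quickly even though the device
	# itself is kept; must be smaller than dev_loss_tmo.
	fast_io_fail_tmo	5
}
```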
However, I have not been able to recreate this without manually offlining the device, and no testers have reported this problem, so I'm not sure that it will ever come up in normal operations.
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release. Product Management has requested further
review of this request by Red Hat Engineering, for potential inclusion in a Red
Hat Enterprise Linux major release. This request is not yet committed for
inclusion.
multipathd now only tries to reload 3 times in ev_add_path if the load fails. Also, multipath will now pull in the sysfs information for devices that lack it. This allows multipath and multipathd to cope with this situation without segfaulting or getting stuck in infinite loops.
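The bounded-retry behavior can be sketched roughly as follows. This is an illustration only; the function names and the reload call are stand-ins, not the actual ev_add_path code:

```c
#include <assert.h>

#define MAX_RELOAD_RETRIES 3

/* Stand-in for the device-mapper table reload, which keeps failing
 * while a path in the table is offlined. */
static int dm_reload(const char *map)
{
	(void)map;
	return -1;	/* simulate the persistent failure */
}

/* Before the fix, multipathd retried the reload forever and
 * livelocked. Now it gives up after a fixed number of attempts and
 * keeps the path on its pathlist so it can be adopted later. */
static int add_path_with_retries(const char *map, int *orphaned)
{
	for (int i = 0; i < MAX_RELOAD_RETRIES; i++) {
		if (dm_reload(map) == 0)
			return 0;
	}
	*orphaned = 1;	/* path stays known, but not in the live table */
	return -1;
}
```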
You can pick up packages with this fix at http://people.redhat.com/bmarzins/device-mapper-multipath/rpms/RHEL6/
*** Bug 608797 has been marked as a duplicate of this bug. ***
Verified by partner.
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.