Description of problem: The default hardware table entry for Nimble Storage devices is missing from device-mapper-multipath. Please pull the following patches to add a default entry for Nimble devices; we are fine with the settings below.

https://www.spinics.net/lists/dm-devel/msg28308.html
https://patchwork.kernel.org/patch/9377039/

device {
        vendor "Nimble"
        product "Server"
        path_grouping_policy "group_by_prio"
        features "1 queue_if_no_path"
        hardware_handler "1 alua"
        prio "alua"
        failback immediate
}

However, we are currently recommending that users select the "round-robin" path selector to get a predictable (uniform) I/O load on each path, and we are working on adding a finite timer for queuing. In the meantime, since this patch has been posted upstream, please pull the changes into the 7.3 updates. I will post an updated patch to add the new entries (path_selector and no_path_retry) once our internal teams have signed off on the changes.
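Until a build with the new default ships, users could carry the same settings locally. A minimal sketch, assuming the standard /etc/multipath.conf location and the usual devices-section syntax from multipath.conf(5) (neither is stated in this report):

    devices {
        device {
            vendor "Nimble"
            product "Server"
            path_grouping_policy "group_by_prio"
            features "1 queue_if_no_path"
            hardware_handler "1 alua"
            prio "alua"
            failback immediate
        }
    }

After editing the file, the running daemon can pick up the change with multipathd -k"reconfigure" (or by restarting the multipathd service).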
Hello shivamerla1, there is no Nimble Storage array in our lab, so could you help verify this and provide the test results? Thanks in advance!
Yes, we will help validate the changes. Please provide us with the test package. Thanks.
(In reply to shivamerla1 from comment #0)
> However, we are currently recommending users to select "round-robin" path
> selector to get predictable(uniform) I/O load on each path and working on
> having finite timer for queuing.
>
> But meanwhile as this patch is posted upstream, please pull the changes in
> 7.3 updates. I will post an updated patch to add the new
> entries(path_selector and no_path_retry) after internal teams are fine with
> the changes.

Do you have an updated config yet?
Ben, below is the recommended configuration for Nimble arrays; please add the same to the RHEL 7.3 updates. We would like to use the round-robin path selector as the default, as it has been tested and proven optimal for our storage. We also need dev_loss_tmo set to infinity, since device addition/removal tends to be slow with large configurations (standby paths). fast_io_fail_tmo is set to 1. We don't want to add a "no_path_retry" setting by default yet; users can add it themselves if they need I/O to fail after a finite time (cluster scenarios, etc.). Thanks.

device {
        vendor "Nimble"
        product "Server"
        path_grouping_policy "group_by_prio"
        features "1 queue_if_no_path"
        hardware_handler "1 alua"
        prio "alua"
        failback immediate
        path_selector "round-robin 0"
        dev_loss_tmo infinity
        fast_io_fail_tmo 1
}
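Before submitting a stanza like this, a quick sanity check can catch typos. A hedged sketch (the scratch-file path is arbitrary, not part of this report): write the stanza to a file, then verify that the braces pair up and that the key settings are present, using only standard tools:

```shell
#!/bin/sh
# Write the recommended Nimble stanza to a scratch file (path is arbitrary).
cat > /tmp/nimble-device.conf <<'EOF'
device {
    vendor "Nimble"
    product "Server"
    path_grouping_policy "group_by_prio"
    features "1 queue_if_no_path"
    hardware_handler "1 alua"
    prio "alua"
    failback immediate
    path_selector "round-robin 0"
    dev_loss_tmo infinity
    fast_io_fail_tmo 1
}
EOF

# Count lines containing opening and closing braces; they should match.
opens=$(grep -c '{' /tmp/nimble-device.conf)
closes=$(grep -c '}' /tmp/nimble-device.conf)
[ "$opens" -eq "$closes" ] && echo "braces balanced"

# Confirm the round-robin path selector made it into the file.
grep -o 'round-robin 0' /tmp/nimble-device.conf
```

This only checks the file's shape; the authoritative check is still `multipathd show config` on a host running the new package, as done in the verification comment below.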
I've added this to the default configurations.
Thanks a lot.
Just adding a description to fit the Release Notes format of title/description. (It may be a little redundant.)
Verified on device-mapper-multipath-0.4.9-111.el7:

[root@storageqe-84 ~]# rpm -qa | grep multipath
device-mapper-multipath-debuginfo-0.4.9-111.el7.x86_64
device-mapper-multipath-libs-0.4.9-111.el7.x86_64
device-mapper-multipath-0.4.9-111.el7.x86_64
device-mapper-multipath-sysvinit-0.4.9-111.el7.x86_64
device-mapper-multipath-devel-0.4.9-111.el7.x86_64

[root@storageqe-84 ~]# multipathd show config | grep -C 10 Nimble
        features "0"
        hardware_handler "0"
        prio "alua"
        failback 30
        rr_weight "priorities"
        no_path_retry "fail"
        flush_on_last_del "yes"
        dev_loss_tmo 30
}
device {
        vendor "Nimble"
        product "Server"
        path_grouping_policy "group_by_prio"
        path_selector "round-robin 0"
        features "1 queue_if_no_path"
        hardware_handler "1 alua"
        prio "alua"
        failback immediate
        fast_io_fail_tmo 1
        dev_loss_tmo "infinity"
}
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:1961