Bug 1406226 - Please update the default hardware table with Nimble device section
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: device-mapper-multipath
Version: 7.3
Hardware: x86_64 Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assigned To: Ben Marzinski
QA Contact: Lin Li
Docs Contact: Steven J. Levine
Depends On:
Blocks:
 
Reported: 2016-12-19 20:52 EST by shivamerla1
Modified: 2017-08-01 12:34 EDT
CC List: 7 users

See Also:
Fixed In Version: device-mapper-multipath-0.4.9-101.el7
Doc Type: Enhancement
Doc Text:
Multipath now has a built-in default configuration for Nimble Storage devices. The multipath default hardware table now includes an entry for Nimble Storage arrays.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-08-01 12:34:26 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID: Red Hat Product Errata RHBA-2017:1961
Priority: normal
Status: SHIPPED_LIVE
Summary: device-mapper-multipath bug fix and enhancement update
Last Updated: 2017-08-01 13:56:09 EDT

Description shivamerla1 2016-12-19 20:52:06 EST
Description of problem:
A default hardware table entry for Nimble Storage devices is missing from device-mapper-multipath.

Please pull the following patches to add a default table entry for Nimble devices. We are fine with the settings below.

https://www.spinics.net/lists/dm-devel/msg28308.html
https://patchwork.kernel.org/patch/9377039/

	device {
		vendor "Nimble"
		product "Server"
		path_grouping_policy "group_by_prio"
		features "1 queue_if_no_path"
		hardware_handler "1 alua"
		prio "alua"
		failback immediate
	}


However, we are currently recommending that users select the "round-robin" path selector to get predictable (uniform) I/O load on each path, and we are working on adding a finite timer for queuing.

In the meantime, since this patch has been posted upstream, please pull the changes into the 7.3 updates. I will post an updated patch to add the new entries (path_selector and no_path_retry) once the internal teams have approved the changes.
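Until the default entry ships, the recommended settings can be applied locally through /etc/multipath.conf. A minimal sketch, assuming the settings from the patch above plus the recommended round-robin selector (not an official Nimble or Red Hat configuration):

	devices {
		device {
			vendor "Nimble"
			product "Server"
			path_grouping_policy "group_by_prio"
			features "1 queue_if_no_path"
			hardware_handler "1 alua"
			prio "alua"
			failback immediate
			# recommended for uniform I/O load across paths
			path_selector "round-robin 0"
		}
	}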
Comment 2 Lin Li 2017-01-03 01:36:36 EST
Hello shivamerla1,
There is no Nimble Storage in our lab, so could you help verify this and provide the test results?
Thanks in advance!
Comment 3 shivamerla1 2017-01-03 11:56:43 EST
Yes, we will help validate the changes. Please provide us with the test package. Thanks.
Comment 4 Ben Marzinski 2017-01-17 16:53:05 EST
(In reply to shivamerla1 from comment #0)

> However, we are currently recommending users to select "round-robin" path
> selector to get predictable(uniform) I/O load on each path and working on
> having finite timer for queuing. 
> 
> But meanwhile as this patch is posted upstream, please pull the changes in
> 7.3 updates. I will post an updated patch to add the new
> entries(path_selector and no_path_retry) after internal teams are fine with
> the changes.

Do you have an updated config yet?
Comment 5 shivamerla1 2017-01-17 17:03:13 EST
Ben, below is the recommended configuration for Nimble arrays. Please add it to the RHEL 7.3 updates. We would like to use the round-robin path selector as the default, as it has been tested and proven to be optimal for our storage. Also, we need dev_loss_tmo to be set to infinity, because device addition/removal tends to be slow with large configurations (stand-by paths). fast_io_fail_tmo is set to 1.

We don't want to add the "no_path_retry" setting by default yet; users can add it themselves if they need I/O to fail after a finite time (cluster scenarios, etc.) — see the sketch after the configuration below. Thanks.

	device {
		vendor "Nimble"
		product "Server"
		path_grouping_policy "group_by_prio"
		features "1 queue_if_no_path"
		hardware_handler "1 alua"
		prio "alua"
		failback immediate
		path_selector "round-robin 0"
		dev_loss_tmo infinity
		fast_io_fail_tmo 1
	}
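For sites that do need I/O to fail after a finite time (for example, cluster failover scenarios), a user-supplied device section with the same vendor/product is merged over the built-in defaults, so only the overriding attribute needs to be listed. A minimal sketch; the retry count of 12 is purely illustrative:

	devices {
		device {
			vendor "Nimble"
			product "Server"
			# retry path checks for 12 polling intervals, then fail
			# queued I/O (illustrative value; tune to the cluster's
			# failover timeout)
			no_path_retry 12
		}
	}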
Comment 6 Ben Marzinski 2017-02-16 19:13:51 EST
I've added this to the default configurations.
Comment 7 shivamerla1 2017-02-16 19:32:58 EST
Thanks a lot.
Comment 9 Steven J. Levine 2017-05-09 16:54:40 EDT
Just adding a description to fit the Release Notes format of title/description. (It may be a little redundant.)
Comment 10 Lin Li 2017-05-22 05:33:58 EDT
Verified on device-mapper-multipath-0.4.9-111.el7

[root@storageqe-84 ~]# rpm -qa | grep multipath 
device-mapper-multipath-debuginfo-0.4.9-111.el7.x86_64
device-mapper-multipath-libs-0.4.9-111.el7.x86_64
device-mapper-multipath-0.4.9-111.el7.x86_64
device-mapper-multipath-sysvinit-0.4.9-111.el7.x86_64
device-mapper-multipath-devel-0.4.9-111.el7.x86_64

[root@storageqe-84 ~]# multipathd show config |grep  -C 10 Nimble
		features "0"
		hardware_handler "0"
		prio "alua"
		failback 30
		rr_weight "priorities"
		no_path_retry "fail"
		flush_on_last_del "yes"
		dev_loss_tmo 30
	}
	device {
		vendor "Nimble"
		product "Server"
		path_grouping_policy "group_by_prio"
		path_selector "round-robin 0"
		features "1 queue_if_no_path"
		hardware_handler "1 alua"
		prio "alua"
		failback immediate
		fast_io_fail_tmo 1
		dev_loss_tmo "infinity"
	}
Comment 11 errata-xmlrpc 2017-08-01 12:34:26 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1961
