Bug 1406226 - Please update the default hardware table with Nimble device section
Summary: Please update the default hardware table with Nimble device section
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: device-mapper-multipath
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Ben Marzinski
QA Contact: Lin Li
Docs Contact: Steven J. Levine
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-12-20 01:52 UTC by shivamerla1
Modified: 2017-08-01 16:34 UTC (History)
CC: 7 users

Fixed In Version: device-mapper-multipath-0.4.9-101.el7
Doc Type: Enhancement
Doc Text:
Multipath now has a built-in default configuration for Nimble Storage devices. The multipath default hardware table now includes an entry for Nimble Storage arrays.
Clone Of:
Environment:
Last Closed: 2017-08-01 16:34:26 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:1961 normal SHIPPED_LIVE device-mapper-multipath bug fix and enhancement update 2017-08-01 17:56:09 UTC

Description shivamerla1 2016-12-20 01:52:06 UTC
Description of problem:
Default hardware table for Nimble Storage devices is missing in device-mapper-multipath.

Please pull the following patches to add a default table for Nimble devices. We are fine with the settings below.

https://www.spinics.net/lists/dm-devel/msg28308.html
https://patchwork.kernel.org/patch/9377039/

	device {
		vendor "Nimble"
		product "Server"
		path_grouping_policy "group_by_prio"
		features "1 queue_if_no_path"
		hardware_handler "1 alua"
		prio "alua"
		failback immediate
	}


However, we are currently recommending that users select the "round-robin" path selector to get a predictable (uniform) I/O load on each path, and we are working on having a finite timer for queuing.

In the meantime, since this patch has been posted upstream, please pull the changes into the 7.3 updates. I will post an updated patch to add the new entries (path_selector and no_path_retry) once our internal teams have signed off on the changes.
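For users on builds that predate the updated hardware table, an equivalent entry can be carried locally in /etc/multipath.conf. This is a sketch based on the settings above, with path_selector added per the round-robin recommendation; a user-supplied device entry matching the same vendor/product is merged with (and overrides) the compiled-in one:

	# /etc/multipath.conf -- local entry until the built-in default ships
	devices {
		device {
			vendor "Nimble"
			product "Server"
			path_grouping_policy "group_by_prio"
			path_selector "round-robin 0"
			features "1 queue_if_no_path"
			hardware_handler "1 alua"
			prio "alua"
			failback immediate
		}
	}

After editing, "multipathd reconfigure" applies the change without a restart, and "multipathd show config" confirms the merged entry.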

Comment 2 Lin Li 2017-01-03 06:36:36 UTC
hello shivamerla1,
There is no Nimble Storage in our lab, so could you help verify it and provide test result?
thanks in advance!

Comment 3 shivamerla1 2017-01-03 16:56:43 UTC
Yes, we will help validate the changes. Please provide us the test package. thanks.

Comment 4 Ben Marzinski 2017-01-17 21:53:05 UTC
(In reply to shivamerla1 from comment #0)

> However, we are currently recommending users to select "round-robin" path
> selector to get predictable(uniform) I/O load on each path and working on
> having finite timer for queuing. 
> 
> But meanwhile as this patch is posted upstream, please pull the changes in
> 7.3 updates. I will post an updated patch to add the new
> entries(path_selector and no_path_retry) after internal teams are fine with
> the changes.

Do you have an updated config yet?

Comment 5 shivamerla1 2017-01-17 22:03:13 UTC
Ben, below is the recommended configuration for Nimble arrays. Please add the same to the RHEL 7.3 updates. We would like to use the round-robin path selector as the default, as it has been tested and proven optimal for our storage. Also, we need dev_loss_tmo set to infinity, as device addition/removal tends to be slow with large configurations (standby paths). fast_io_fail_tmo is set to 1.

We don't want to add the "no_path_retry" setting by default yet; users can add it themselves if they need I/O to fail after a finite time (cluster scenarios, etc.). Thanks.

	device {
		vendor "Nimble"
		product "Server"
		path_grouping_policy "group_by_prio"
		features "1 queue_if_no_path"
		hardware_handler "1 alua"
		prio "alua"
		failback immediate
		path_selector "round-robin 0"
		dev_loss_tmo infinity
		fast_io_fail_tmo 1
	}
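For completeness, a user who does need I/O to fail after a finite time (for example in a cluster) could layer no_path_retry on top of this entry in /etc/multipath.conf. The sketch below assumes the usual merging behavior, where a matching user entry only needs to list the options it overrides; the retry count of 12 is illustrative, not a Nimble recommendation:

	devices {
		device {
			vendor "Nimble"
			product "Server"
			# Check for path recovery 12 more times, one check per
			# polling_interval, then fail queued I/O instead of
			# queuing it indefinitely.
			no_path_retry 12
		}
	}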

Comment 6 Ben Marzinski 2017-02-17 00:13:51 UTC
I've added this to the default configurations.

Comment 7 shivamerla1 2017-02-17 00:32:58 UTC
Thanks a lot.

Comment 9 Steven J. Levine 2017-05-09 20:54:40 UTC
Just adding a description to fit the Release Notes format of title/description. (It may be a little redundant.)

Comment 10 Lin Li 2017-05-22 09:33:58 UTC
Verified on device-mapper-multipath-0.4.9-111.el7

[root@storageqe-84 ~]# rpm -qa | grep multipath 
device-mapper-multipath-debuginfo-0.4.9-111.el7.x86_64
device-mapper-multipath-libs-0.4.9-111.el7.x86_64
device-mapper-multipath-0.4.9-111.el7.x86_64
device-mapper-multipath-sysvinit-0.4.9-111.el7.x86_64
device-mapper-multipath-devel-0.4.9-111.el7.x86_64

[root@storageqe-84 ~]# multipathd show config |grep  -C 10 Nimble
		features "0"
		hardware_handler "0"
		prio "alua"
		failback 30
		rr_weight "priorities"
		no_path_retry "fail"
		flush_on_last_del "yes"
		dev_loss_tmo 30
	}
	device {
		vendor "Nimble"
		product "Server"
		path_grouping_policy "group_by_prio"
		path_selector "round-robin 0"
		features "1 queue_if_no_path"
		hardware_handler "1 alua"
		prio "alua"
		failback immediate
		fast_io_fail_tmo 1
		dev_loss_tmo "infinity"
	}

Comment 11 errata-xmlrpc 2017-08-01 16:34:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1961

