Bug 1300415 - add PURE to multipath-tools on RHEL; it is added upstream
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: device-mapper-multipath
Version: 7.4
Hardware: All
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: Ben Marzinski
QA Contact: Zhang Yi
Docs Contact: Steven J. Levine
Keywords: OtherQA
Depends On:
Blocks:
 
Reported: 2016-01-20 12:56 EST by Brian Bunker
Modified: 2018-01-12 18:33 EST (History)
8 users

See Also:
Fixed In Version: device-mapper-multipath-0.4.9-88.el7
Doc Type: Enhancement
Doc Text:
Support added for PURE FlashArray. With this release, multipath adds built-in configuration support for the PURE FlashArray.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-11-04 04:17:27 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Brian Bunker 2016-01-20 12:56:55 EST
Description of problem:
PURE has been added to upstream multipath-tools

Version-Release number of selected component (if applicable):
a3d5602e785a5ee3de68b1dba62cc3c354f81938

How reproducible:
100%

Steps to Reproduce:
1. Pull in patch to support PURE FlashArray in multipath-tools

Actual results:


Expected results:


Additional info:
Comment 2 Lin Li 2016-02-16 01:11:47 EST
Hello Brian,
Because there is no PURE FlashArray in our lab, could you provide test results once the package is available?
Thanks.
Comment 3 Brian Bunker 2016-02-16 17:59:46 EST
Hello Lin,
We can help with whatever you would need.
Comment 5 Ben Marzinski 2016-03-29 23:07:40 EDT
Built-in configuration added.
Comment 8 Zhang Yi 2016-08-16 12:31:26 EDT
Hello Brian
Could you provide test results for this bug?

Thanks
Yi
Comment 9 Brian Bunker 2016-08-16 12:40:04 EDT
Yi,
How do I get this version of Red Hat to test with PURE?
Thanks,
Brian
Comment 11 Ben Marzinski 2016-08-24 17:19:45 EDT
You can download the latest RHEL7 packages at:

http://people.redhat.com/~bmarzins/device-mapper-multipath/rpms/RHEL7/bz1300415/
Comment 12 Zhang Yi 2016-08-24 22:55:11 EDT
(In reply to Brian Bunker from comment #9)
> Yi,
> How do I get this version of RedHat to test with PURE?
> Thanks,
> Brian

Hello Brian

Could you please download the package from comment 11 and test it in your PURE environment?

Thanks,
Yi
Comment 13 Brian Bunker 2016-08-25 12:32:33 EDT
Yi,

Thanks for the RPMs. What version of RHEL will we need to satisfy the library dependencies? Is any RHEL7 enough?
Comment 14 Zhang Yi 2016-08-25 22:49:33 EDT
(In reply to Brian Bunker from comment #13)
> Yi,
> 
> Thanks for the RPMs. What version of RHEL will we need to satisfy the
> library dependencies? Is any RHEL7 enough?

RHEL 7.3 Beta (which includes this fix) is OK; if you don't have that, RHEL 7.2 is OK too.

Thanks
Yi
Comment 15 Brian Bunker 2016-09-02 12:41:23 EDT
Yi,
This looks good on Red Hat 7.2 with our testing. Is this a RHEL 7.3 or 7.4 target?
Thanks,
Brian
Comment 16 Ben Marzinski 2016-09-04 11:30:06 EDT
This is already in the rhel-7.3 build.
Comment 17 Zhang Yi 2016-09-18 07:04:40 EDT
(In reply to Brian Bunker from comment #15)
> Yi,
> This looks good on RedHat 7.2 with our testing. Is this a RHEL 7.3 or 7.4
> target?
> Thanks,
> Brian

Hi Brian

The fixed package is already in the RHEL 7.3 build; is it possible for you to try it on 7.3? Thanks.

Yi
Comment 19 Brian Bunker 2016-09-30 12:50:25 EDT
Yi,
We have tried it and we are happy with it.
Brian
Comment 21 errata-xmlrpc 2016-11-04 04:17:27 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2536.html
Comment 22 tienh.nguyen 2018-01-12 13:44:54 EST
Hi 

  We have set up DM Multipath on PURE Storage with Red Hat Enterprise Linux 7.3.
We would like to verify whether multipath is configured as "active/active".

How do we verify whether it is "active/active" or "active/passive"?

Your advice is appreciated.

Below is the output:
[root@new ~]# multipath -ll | grep -i PURE
3624a93707c07ea4dc93341320001f55b dm-5 PURE    ,FlashArray
3624a93707c07ea4dc93341320001f55a dm-4 PURE    ,FlashArray
3624a93707c07ea4dc93341320001f559 dm-3 PURE    ,FlashArray
[root@new ~]#

 multipath -ll
3624a93707c07ea4dc93341320001f55b dm-5 PURE    ,FlashArray
size=2.0T features='0' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=enabled
  |- 1:0:0:3  sdd 8:48   active ready running
  |- 1:0:1:3  sdj 8:144  active ready running
  |- 1:0:2:3  sdq 65:0   active ready running
  |- 1:0:3:3  sdw 65:96  active ready running
  |- 13:0:0:3 sdi 8:128  active ready running
  |- 13:0:1:3 sdn 8:208  active ready running
  |- 13:0:2:3 sdt 65:48  active ready running
  `- 13:0:3:3 sdy 65:128 active ready running
3624a93707c07ea4dc93341320001f55a dm-4 PURE    ,FlashArray
size=3.0T features='0' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=enabled
  |- 1:0:0:2  sdc 8:32   active ready running
  |- 1:0:1:2  sdh 8:112  active ready running
  |- 1:0:2:2  sdo 8:224  active ready running
  |- 1:0:3:2  sdu 65:64  active ready running
  |- 13:0:0:2 sdg 8:96   active ready running
  |- 13:0:1:2 sdm 8:192  active ready running
  |- 13:0:2:2 sdr 65:16  active ready running
  `- 13:0:3:2 sdx 65:112 active ready running
3624a93707c07ea4dc93341320001f559 dm-3 PURE    ,FlashArray
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=enabled
  |- 1:0:0:1  sdb 8:16   active ready running
  |- 1:0:1:1  sdf 8:80   active ready running
  |- 1:0:2:1  sdl 8:176  active ready running
  |- 1:0:3:1  sds 65:32  active ready running
  |- 13:0:0:1 sde 8:64   active ready running
  |- 13:0:1:1 sdk 8:160  active ready running
  |- 13:0:2:1 sdp 8:240  active ready running
  `- 13:0:3:1 sdv 65:80  active ready running
Comment 23 Ben Marzinski 2018-01-12 14:19:34 EST
Multipath has the concept of pathgroups. A pathgroup contains all of the paths that can be used at the same time and load balanced across. By an active/active configuration, I assume you mean that you want multipath to load balance across all of the paths to the storage. This is what

path_grouping_policy "multibus"

does in your configuration.
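
For reference, the stanza that applies this policy in /etc/multipath.conf would look something like the sketch below. This is illustrative only; the vendor/product strings are assumed to match the PURE built-in defaults, so verify them against the output of "multipath -t" on your own system:

```
devices {
    device {
        vendor               "PURE"
        product              "FlashArray"
        path_grouping_policy "multibus"
    }
}
```

With a built-in configuration present, you normally only need such a stanza to override defaults, not to enable basic support.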

In the "multipath -l" output, the pathgroups look like this:

-+- policy=<path_selector> prio=<path_group_priority> status=<path_group_status>

So looking at the output for 3624a93707c07ea4dc93341320001f559 above, you see

-+- policy='queue-length 0' prio=1 status=enabled

as the only path group, with all the paths under it. This means that all of the paths are being load balanced over.

If you had multiple pathgroups, so that only some of the paths could be used at any one time, the output would look more like this:

mpathd (353333330000007d0) dm-6 Linux   ,scsi_debug      
size=8.0M features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| `- 29:0:0:0 sdf 8:80 active undef unknown
`-+- policy='service-time 0' prio=0 status=enabled
  `- 30:0:0:0 sdg 8:96 active undef unknown

With multiple pathgroup lines and separate paths under each.

So, in short, the output you pasted into Comment 22 corresponds to an active/active setup.
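
To make the check concrete, here is a small illustrative sketch (not part of multipath-tools; the function name and regex are my own) that counts the path group header lines in "multipath -ll" output for one map. One group containing every path means all paths are load balanced, i.e. the active/active layout described above:

```python
import re

def count_path_groups(multipath_ll_output: str) -> int:
    """Count path group header lines (the lines carrying
    policy=/prio=/status=) in `multipath -ll` output for one map."""
    return len(re.findall(r"policy='[^']+' prio=\S+ status=\S+",
                          multipath_ll_output))

# Abbreviated version of the output pasted in comment 22 for WWID ...559.
sample = """\
3624a93707c07ea4dc93341320001f559 dm-3 PURE    ,FlashArray
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=enabled
  |- 1:0:0:1  sdb 8:16   active ready running
  `- 13:0:3:1 sdv 65:80  active ready running
"""
print(count_path_groups(sample))  # prints 1: a single group -> active/active
```

A result greater than 1 would indicate multiple pathgroups, i.e. a layout where only the paths in the active group carry I/O at any one time.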
Comment 24 tienh.nguyen 2018-01-12 15:20:11 EST
Hi Brian,

   After uncommenting those lines and rebooting, it now shows status=active.
   Can you please confirm that the output below shows "active/active" mode now that those lines are uncommented?

   [root@svrhbq-new ~]# grep path_grouping_policy  /etc/multipath.conf
        path_grouping_policy    multibus
                path_grouping_policy    multibus
                path_grouping_policy    multibus
                path_grouping_policy    multibus
[root@svrhbq-new ~]#

[root@svrhbq-new ~]# multipath -ll | grep active
`-+- policy='queue-length 0' prio=1 status=active
  |- 1:0:0:3  sdd 8:48   active ready running
  |- 13:0:0:3 sdi 8:128  active ready running
  |- 1:0:1:3  sdj 8:144  active ready running
  |- 1:0:2:3  sdo 8:224  active ready running
  |- 13:0:1:3 sdp 8:240  active ready running
  |- 1:0:3:3  sdu 65:64  active ready running
  |- 13:0:2:3 sdv 65:80  active ready running
  `- 13:0:3:3 sdy 65:128 active ready running
`-+- policy='queue-length 0' prio=1 status=active
  |- 1:0:0:2  sdc 8:32   active ready running
  |- 13:0:0:2 sdh 8:112  active ready running
  |- 1:0:1:2  sdg 8:96   active ready running
  |- 1:0:2:2  sdm 8:192  active ready running
  |- 13:0:1:2 sdn 8:208  active ready running
  |- 1:0:3:2  sds 65:32  active ready running
  |- 13:0:2:2 sdt 65:48  active ready running
  `- 13:0:3:2 sdx 65:112 active ready running
`-+- policy='queue-length 0' prio=1 status=active
  |- 1:0:0:1  sdb 8:16   active ready running
  |- 1:0:1:1  sde 8:64   active ready running
  |- 13:0:0:1 sdf 8:80   active ready running
  |- 1:0:2:1  sdl 8:176  active ready running
  |- 13:0:1:1 sdk 8:160  active ready running
  |- 1:0:3:1  sdq 65:0   active ready running
  |- 13:0:2:1 sdr 65:16  active ready running
  `- 13:0:3:1 sdw 65:96  active ready running

Your advice is appreciated!

Thanks Brian,
