Red Hat Bugzilla – Bug 1300415
Add PURE to multipath-tools on RHEL; it has been added upstream
Last modified: 2018-01-12 18:33:27 EST
Description of problem:
PURE has been added to upstream multipath-tools.

Version-Release number of selected component (if applicable):
a3d5602e785a5ee3de68b1dba62cc3c354f81938

How reproducible:
100%

Steps to Reproduce:
1. Pull in the patch to support the PURE FlashArray in multipath-tools (sketched below).

Actual results:

Expected results:

Additional info:
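A minimal sketch of step 1, assuming a local clone of the upstream multipath-tools tree (the commit hash is the one given in the Version-Release field above; everything else here is illustrative):

cd multipath-tools
# Cherry-pick the upstream commit that adds the PURE FlashArray entry;
# resolve any conflicts by hand if the trees have diverged.
git cherry-pick a3d5602e785a5ee3de68b1dba62cc3c354f81938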
Hello Brian,
Because there is no PURE FlashArray in our lab, could you provide test results once the package is available?
Thanks.
Hello Lin,
We can help with whatever you need.
Built-in configuration added.
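For reference, a sketch of what the new built-in entry looks like. The path_selector and path_grouping_policy values match the multipath -ll output later in this bug; the checker and timeout attributes are assumptions taken from the upstream commit, so verify them against a patched build:

# Inspect the compiled-in defaults on a patched build:
multipathd -k'show config' | grep -A 8 PURE

# Expected built-in entry, roughly (path_checker and fast_io_fail_tmo
# values below are assumptions, not verified output):
device {
        vendor "PURE"
        product "FlashArray"
        path_selector "queue-length 0"
        path_grouping_policy multibus
        path_checker tur
        fast_io_fail_tmo 10
}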
Hello Brian,
Could you give us feedback with your test results for this bug?
Thanks,
Yi
Yi,
How do I get this version of Red Hat to test with PURE?
Thanks,
Brian
You can download the latest RHEL7 packages at: http://people.redhat.com/~bmarzins/device-mapper-multipath/rpms/RHEL7/bz1300415/
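For anyone following along, an install sketch (the package file names are illustrative; use the actual rpm names from that directory):

# Install the downloaded test packages, then restart multipathd:
yum localinstall device-mapper-multipath-*.rpm device-mapper-multipath-libs-*.rpm
systemctl restart multipathd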
(In reply to Brian Bunker from comment #9)
> Yi,
> How do I get this version of Red Hat to test with PURE?
> Thanks,
> Brian

Hello Brian,
Could you please download the package from comment 11 and test it in your PURE environment?
Thanks,
Yi
Yi,
Thanks for the RPMs. What version of RHEL will we need to satisfy the library dependencies? Is any RHEL 7 release enough?
(In reply to Brian Bunker from comment #13)
> Yi,
>
> Thanks for the RPMs. What version of RHEL will we need to satisfy the
> library dependencies? Is any RHEL 7 release enough?

RHEL 7.3 Beta (which includes the fix) is fine; if you don't have that, RHEL 7.2 is fine too.
Thanks,
Yi
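Before installing, a quick way to confirm the release and the currently installed multipath packages (standard commands; nothing specific to this fix):

cat /etc/redhat-release
rpm -q device-mapper-multipath device-mapper-multipath-libs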
Yi,
This looks good on Red Hat 7.2 in our testing. Is this a RHEL 7.3 or 7.4 target?
Thanks,
Brian
This is already in the rhel-7.3 build.
(In reply to Brian Bunker from comment #15)
> Yi,
> This looks good on Red Hat 7.2 in our testing. Is this a RHEL 7.3 or 7.4
> target?
> Thanks,
> Brian

Hi Brian,
The fixed package is already in the RHEL 7.3 build; is it possible for you to try it on 7.3?
Thanks,
Yi
Yi,
We have tried it and we are happy with it.
Brian
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-2536.html
Hi,
We have set up DM Multipath on PURE Storage on Red Hat Enterprise Linux 7.3. We would like to verify whether multipath is configured as "active/active". How do we verify whether it is "active/active" or "active/passive"? Your advice is appreciated.

Below is the output:

[root@new ~]# multipath -ll | grep -i PURE
3624a93707c07ea4dc93341320001f55b dm-5 PURE    ,FlashArray
3624a93707c07ea4dc93341320001f55a dm-4 PURE    ,FlashArray
3624a93707c07ea4dc93341320001f559 dm-3 PURE    ,FlashArray

[root@new ~]# multipath -ll
3624a93707c07ea4dc93341320001f55b dm-5 PURE    ,FlashArray
size=2.0T features='0' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=enabled
  |- 1:0:0:3  sdd 8:48   active ready running
  |- 1:0:1:3  sdj 8:144  active ready running
  |- 1:0:2:3  sdq 65:0   active ready running
  |- 1:0:3:3  sdw 65:96  active ready running
  |- 13:0:0:3 sdi 8:128  active ready running
  |- 13:0:1:3 sdn 8:208  active ready running
  |- 13:0:2:3 sdt 65:48  active ready running
  `- 13:0:3:3 sdy 65:128 active ready running
3624a93707c07ea4dc93341320001f55a dm-4 PURE    ,FlashArray
size=3.0T features='0' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=enabled
  |- 1:0:0:2  sdc 8:32   active ready running
  |- 1:0:1:2  sdh 8:112  active ready running
  |- 1:0:2:2  sdo 8:224  active ready running
  |- 1:0:3:2  sdu 65:64  active ready running
  |- 13:0:0:2 sdg 8:96   active ready running
  |- 13:0:1:2 sdm 8:192  active ready running
  |- 13:0:2:2 sdr 65:16  active ready running
  `- 13:0:3:2 sdx 65:112 active ready running
3624a93707c07ea4dc93341320001f559 dm-3 PURE    ,FlashArray
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=enabled
  |- 1:0:0:1  sdb 8:16   active ready running
  |- 1:0:1:1  sdf 8:80   active ready running
  |- 1:0:2:1  sdl 8:176  active ready running
  |- 1:0:3:1  sds 65:32  active ready running
  |- 13:0:0:1 sde 8:64   active ready running
  |- 13:0:1:1 sdk 8:160  active ready running
  |- 13:0:2:1 sdp 8:240  active ready running
  `- 13:0:3:1 sdv 65:80  active ready running
Multipath has the concept of pathgroups. A pathgroup contains all of the paths that can be used at the same time and load balanced across. By an active/active configuration, I assume you mean that you want multipath to load balance across all of the paths to the storage. This is what path_grouping_policy "multibus" does in your configuration.

In the "multipath -l" output, the pathgroups look like this:

-+- policy=<path_selector> prio=<path_group_priority> status=<path_group_status>

So looking at the output for 3624a93707c07ea4dc93341320001f559 above, you see

`-+- policy='queue-length 0' prio=1 status=enabled

as the only path group, with all the paths under it. This means that all of the paths are being load balanced over. If you had multiple pathgroups, so that only some of the paths could be used at any one time, the output would look more like this:

mpathd (353333330000007d0) dm-6 Linux   ,scsi_debug
size=8.0M features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| `- 29:0:0:0 sdf 8:80 active undef unknown
`-+- policy='service-time 0' prio=0 status=enabled
  `- 30:0:0:0 sdg 8:96 active undef unknown

with multiple pathgroup lines and separate paths under each. So, in short, the output you pasted into Comment 22 corresponds to an active/active setup.
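As a quick shell check of the above (a sketch using the WWID from the earlier output):

# Count the path-group ("policy=") lines for one map. A count of 1 means
# a single path group containing every path, i.e. active/active; 2 or
# more means only a subset of the paths is used at any one time.
multipath -ll 3624a93707c07ea4dc93341320001f559 | grep -c "policy="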
Hi Brian,
After uncommenting those lines and rebooting, the status is now "active". Does this mean it is now active/active? Can you please confirm that the output below shows "active/active" mode after those lines were uncommented?

[root@svrhbq-new ~]# grep path_grouping_policy /etc/multipath.conf
        path_grouping_policy multibus
        path_grouping_policy multibus
        path_grouping_policy multibus
        path_grouping_policy multibus

[root@svrhbq-new ~]# multipath -ll | grep active
`-+- policy='queue-length 0' prio=1 status=active
  |- 1:0:0:3  sdd 8:48   active ready running
  |- 13:0:0:3 sdi 8:128  active ready running
  |- 1:0:1:3  sdj 8:144  active ready running
  |- 1:0:2:3  sdo 8:224  active ready running
  |- 13:0:1:3 sdp 8:240  active ready running
  |- 1:0:3:3  sdu 65:64  active ready running
  |- 13:0:2:3 sdv 65:80  active ready running
  `- 13:0:3:3 sdy 65:128 active ready running
`-+- policy='queue-length 0' prio=1 status=active
  |- 1:0:0:2  sdc 8:32   active ready running
  |- 13:0:0:2 sdh 8:112  active ready running
  |- 1:0:1:2  sdg 8:96   active ready running
  |- 1:0:2:2  sdm 8:192  active ready running
  |- 13:0:1:2 sdn 8:208  active ready running
  |- 1:0:3:2  sds 65:32  active ready running
  |- 13:0:2:2 sdt 65:48  active ready running
  `- 13:0:3:2 sdx 65:112 active ready running
`-+- policy='queue-length 0' prio=1 status=active
  |- 1:0:0:1  sdb 8:16   active ready running
  |- 1:0:1:1  sde 8:64   active ready running
  |- 13:0:0:1 sdf 8:80   active ready running
  |- 1:0:2:1  sdl 8:176  active ready running
  |- 13:0:1:1 sdk 8:160  active ready running
  |- 1:0:3:1  sdq 65:0   active ready running
  |- 13:0:2:1 sdr 65:16  active ready running
  `- 13:0:3:1 sdw 65:96  active ready running

Your advice is appreciated!
Thanks Brian,