Bug 1477203
Summary: | disable-multipath.sh script does not add the lines required to blacklist devices in /etc/multipath.conf file. | |||
---|---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | RamaKasturi <knarra> | |
Component: | gdeploy | Assignee: | Sachidananda Urs <surs> | |
Status: | CLOSED ERRATA | QA Contact: | RamaKasturi <knarra> | |
Severity: | unspecified | Docs Contact: | ||
Priority: | unspecified | |||
Version: | rhgs-3.3 | CC: | amukherj, asrivast, knarra, nsoffer, rcyriac, rhs-bugs, sabose, sasundar, smohan, storage-qa-internal | |
Target Milestone: | --- | Keywords: | ZStream | |
Target Release: | RHGS 3.3.1 | |||
Hardware: | Unspecified | |||
OS: | Unspecified | |||
Whiteboard: | ||||
Fixed In Version: | gdeploy-2.0.2-18 | Doc Type: | If docs needed, set a value | |
Doc Text: | Story Points: | --- | ||
Clone Of: | ||||
: | 1487532 (view as bug list) | Environment: | ||
Last Closed: | 2017-11-29 03:27:19 UTC | Type: | Bug | |
Regression: | --- | Mount Type: | --- | |
Documentation: | --- | CRM: | ||
Verified Versions: | Category: | --- | ||
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
Cloudforms Team: | --- | Target Upstream Version: | ||
Embargoed: | ||||
Bug Depends On: | ||||
Bug Blocks: | 1475688, 1487532 |
Description
RamaKasturi
2017-08-01 13:10:04 UTC
Hi Nir, do you know the steps to blacklist devices, or can you point me to
someone who would?

What is currently in place, as per
https://github.com/gluster/gdeploy/blob/2.0.2/extras/scripts/blacklist_all_disks.sh:

1. Call 'multipath -F'.
2. Edit /etc/multipath.conf to blacklist devices.

This errors out in step 1.

Comment 4 (SATHEESARAN):

With all the testing, 'multipath -F' fails in two cases:

1. When the multipath kernel module is not loaded.
2. When the mpath maps are in use.

Case 1: fresh installation
--------------------------
On a fresh installation, blacklist_all_disks.sh fails at 'multipath -F'
because the multipath kernel module is not loaded. The script should check
whether the module is loaded and, if not, load it before proceeding any
further.

Case 2: mpath names already exist
---------------------------------
If mpath names already exist on the system before the RHHI deployment, and
'multipath -F' fails to remove them, then blacklist_all_disks.sh fails,
which in turn fails the RHHI deployment. The onus is then on the user to
remove the mpath names before deployment.

@Sahina, what do you think?
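The module check suggested above could be sketched as a small shell helper. This is an illustrative sketch, not the actual blacklist_all_disks.sh code; the function name is hypothetical.

```shell
#!/bin/bash
# Hypothetical helper (not part of blacklist_all_disks.sh): make sure the
# dm_multipath kernel module is loaded before 'multipath -F' is run, so the
# flush does not fail on a fresh installation (case 1 above).
ensure_multipath_module() {
    if ! lsmod | grep -q '^dm_multipath'; then
        modprobe dm_multipath
    fi
}
```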
Comment 5 (Sahina Bose):

(In reply to SATHEESARAN from comment #4)
> blacklist_all_disks.sh should check whether the multipath kernel module
> is loaded, if not then load the kernel module, before proceeding any
> further with the script.

I think we can add this as a step in gdeploy.conf before invoking the
script.

> If the mpath names already exist in the system before the RHHI
> deployment, and 'multipath -F' fails to remove them, then the
> blacklist_all_disks.sh script fails, which in turn fails the RHHI
> deployment. The onus is then on the user to remove the mpath names
> before deployment.

Does it show a proper error message, so that the user knows what needs to
be done?

Comment 6 (SATHEESARAN):

(In reply to Sahina Bose from comment #5)
> Does it show a proper error message, so that the user knows what needs
> to be done?

No, it doesn't. We should figure out a way to give the user a proper hint.

Note: my comments in comment 4 are valid with respect to the
blacklist_all_disks.sh script for RHHI 1.1. The effect of, and need for,
blacklisting local disks is outside the scope of this bug.

Comment 7 (Sachidananda Urs):

(In reply to SATHEESARAN from comment #6)
> No, it doesn't. We should figure out a way to give the user a proper
> hint.

sas, these are the current changes:

https://github.com/gluster-deploy/gdeploy/commit/40a0c02a46ef14f4635359daf4c5c7c8e7955d69

For pre-existing multipath devices, I'm waiting for your input.
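The "proper hint" idea from comments 5 and 6 could be sketched as follows. The function name and message wording are illustrative assumptions, not the actual gdeploy code.

```shell
#!/bin/bash
# Illustrative sketch of failing with an actionable hint when
# 'multipath -F' cannot flush maps that are still in use (case 2 above).
flush_multipath_maps() {
    if ! multipath -F; then
        echo "ERROR: 'multipath -F' could not flush the multipath maps." >&2
        echo "Hint: pre-existing mpath devices may still be in use;" >&2
        echo "remove them (e.g. 'multipath -f <map>') and re-run." >&2
        return 1
    fi
}
```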
This can be targeted for 3.3.1; we have an agreement:

* We will blacklist all multipath devices
* Add a note in the RHHI guide about this behavior
* modprobe the multipath modules
* Add comments that will be parsed by vdsm

(In reply to Sachidananda Urs from comment #7)
> sas, these are the current changes:
>
> https://github.com/gluster-deploy/gdeploy/commit/40a0c02a46ef14f4635359daf4c5c7c8e7955d69
>
> I'm waiting for your input for pre-existing multipath devices.

The script changes look good. We can validate them with the build, though.

Commit is posted in comment 7.

We do not need both of these lines in the disable-multipath.sh script:

    modprobe multipath
    modprobe dm_multipath

Having just 'modprobe dm_multipath' is sufficient, and it works fine on
RHVH systems. Running 'modprobe multipath' on RHVH systems fails with the
error below:

    modprobe multipath
    modprobe: FATAL: Module multipath not found.

Verified, and works fine with build gdeploy-2.0.2-18.el7rhgs.noarch. In
the generated /etc/multipath.conf file there is an entry created to
blacklist all devices:

    # inserted by disable-multipath.sh
    blacklist {
            devnode "*"
    }

In the disable-multipath.sh script, 'modprobe multipath' is commented out
and only 'modprobe dm_multipath' is present:
    # Load the multipath module before trying to flush
    # modprobe multipath
    modprobe dm_multipath

* "Add comments that will be parsed by vdsm" will be addressed as part of
another bug and will be verified there:
https://bugzilla.redhat.com/show_bug.cgi?id=1433564

Since the problem described in this bug report should be resolved in a
recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files,
follow the link below. If the solution does not work for you, open a new
bug report.

https://access.redhat.com/errata/RHBA-2017:3274
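For reference, the idempotent blacklist step verified above could be sketched as a small shell function. The config path is taken as a parameter here purely so the sketch can be exercised against a temporary file; the real script writes to /etc/multipath.conf.

```shell
#!/bin/bash
# Sketch of the blacklist step verified above. Taking the path as an
# argument is an assumption for illustration; the real script edits
# /etc/multipath.conf directly.
add_blacklist_stanza() {
    conf="$1"
    # Skip if the marker comment is already present, so re-runs do not
    # duplicate the stanza.
    if ! grep -q '# inserted by disable-multipath.sh' "$conf" 2>/dev/null; then
        cat >> "$conf" <<'EOF'
# inserted by disable-multipath.sh
blacklist {
        devnode "*"
}
EOF
    fi
}
```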