Bug 837869 - VDSM copies a multipath.conf that is accepting all devices by default and throwing errors on local disk.
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: vdsm
Version: 2.0
Hardware: x86_64 Linux
Priority: medium  Severity: low
Assigned To: Bala.FA
QA Contact: M S Vishwanath Bhat
Depends On:
Blocks: 840817
Reported: 2012-07-05 12:13 EDT by Ben Turner
Modified: 2016-05-31 21:56 EDT (History)
CC: 14 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 840817 (view as bug list)
Environment:
Last Closed: 2012-09-11 10:23:09 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 158703 None None None 2012-07-09 02:41:11 EDT

Description Ben Turner 2012-07-05 12:13:42 EDT
Description of problem:

It looks like during first boot the following multipath.conf file gets put in place by VDSM:

[root@dell-per510-1 automated-tests]# cat /etc/multipath.conf 
# RHEV REVISION 0.7

defaults {
    polling_interval        5
    getuid_callout          "/sbin/scsi_id -g -u -d /dev/%n"
    no_path_retry           fail
    user_friendly_names     no
    flush_on_last_del       yes
    fast_io_fail_tmo        5
    dev_loss_tmo            30
    max_fds                 4096
}

I would like to see the RHEL 6.2 default multipath config, or at least a multipath.conf that blacklists my local disks/storage bricks.  Here is the snip from vdsm.log:

MainThread::DEBUG::2012-06-27 13:14:50,540::multipath::109::Storage.Multipath::(isEnabled) multipath Defaulting to False
MainThread::DEBUG::2012-06-27 13:14:50,541::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /bin/cp /tmp/tmpFP0_SH /etc/multipath.conf' (cwd None)
MainThread::DEBUG::2012-06-27 13:14:50,602::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2012-06-27 13:14:50,603::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /sbin/multipath -F' (cwd None)
MainThread::DEBUG::2012-06-27 13:14:50,615::__init__::1164::Storage.Misc.excCmd::(_log) FAILED: <err> = ''; <rc> = 1
MainThread::DEBUG::2012-06-27 13:14:50,616::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /sbin/service multipathd restart' (cwd None)
MainThread::DEBUG::2012-06-27 13:14:50,764::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2012-06-27 13:14:50,764::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None)
MainThread::DEBUG::2012-06-27 13:14:50,779::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = ''; <rc> = 0

Version-Release number of selected component (if applicable):

I installed the ISO:
RHS-2.0-20120621.2-RHS-x86_64-DVD1.iso

The package is:
[root@dell-per510-2 ~]# rpm -q device-mapper-multipath
device-mapper-multipath-0.4.9-56.el6.x86_64

# rpm -q vdsm
vdsm-4.9.6-14.el6rhs.x86_64

How reproducible:

Every install.

Steps to Reproduce:
1.  Install RHS 2.0
2.  cat /etc/multipath.conf

Actual results:

[root@dell-per510-1 automated-tests]# cat /etc/multipath.conf 
# RHEV REVISION 0.7

defaults {
    polling_interval        5
    getuid_callout          "/sbin/scsi_id -g -u -d /dev/%n"
    no_path_retry           fail
    user_friendly_names     no
    flush_on_last_del       yes
    fast_io_fail_tmo        5
    dev_loss_tmo            30
    max_fds                 4096
}


Expected results:

Default multipath config for RHEL 6 or a multipath.conf that blacklists local disks.
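For illustration only, such a blacklist stanza could look like the sketch below. The device patterns are hypothetical placeholders, not taken from this report; a real deployment should blacklist by the WWIDs of its own local disks (see multipath.conf(5)):

```
# Sketch only -- device patterns below are illustrative
defaults {
    find_multipaths    yes
}

blacklist {
    # Blacklist local (non-multipath) disks, e.g. internal sd devices
    devnode "^sd[ab]$"
    # Or blacklist a specific device by WWID:
    # wwid "3600508b400105e210000900000490000"
}
```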

Additional info:

I didn't know what component to open this under as I didn't see vdsm or dm-multipath under RHS 2.0.  Please assign the proper component if applicable.
Comment 2 Bala.FA 2012-07-12 03:08:42 EDT
Solution is available in Red Hat Knowledge Base https://access.redhat.com/knowledge/articles/43459
Comment 3 Ben England 2012-07-19 11:23:07 EDT
In a sense this isn't my problem because it's not performance-related, so you don't have to respond; this is just my opinion.

Is this full procedure including the KB article documented by RHS?  If not, how would customers learn about this?

The procedure in comment 2 works but is labor-intensive (it requires a series of commands and edits FOR EACH host), is a command-line process, and is therefore not suitable for an APPLIANCE.  We do not have a GUI yet, so we can't just wave our hands and say RHS-C is coming.  I still think the default should be for DAS devices to come up blacklisted for multipathing.  There has been discussion about the possibility of using it, but AFAIK no one is actually taking advantage of multipathing with RHS.  Can we fix the RHS installer to blacklist DAS devices automatically?  Otherwise I do not consider this bug fixed.

This is the same issue I had with SELinux.  Our goal should be that you never have to touch the individual servers in a Gluster config unless you are doing something non-standard with them.  If we ever want to sell a lot of RHS, we really need to think about the cost of managing the systems (also called total cost of ownership, or TCO).
Comment 4 Bala.FA 2012-07-20 03:59:45 EDT
vdsmd creates/overwrites the /etc/multipath.conf file on every start unless the first two lines mark the file as private.

The fix is to put the '# RHEV PRIVATE' text on the second line of the /etc/multipath.conf file.

An example /etc/multipath.conf is:

# Do not delete this and below line
# RHEV PRIVATE

defaults {
    find_multipaths    yes
}
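The guard described above could be sketched roughly as follows. This is an assumption about the behavior, not the actual vdsm source, and `is_vdsm_private` is an assumed name:

```shell
# Sketch of the check comment 4 describes (function name is assumed,
# not taken from vdsm): the file is left alone only when line 2
# carries the '# RHEV PRIVATE' marker.
is_vdsm_private() {
    sed -n '2p' "$1" | grep -q '^# RHEV PRIVATE'
}
```

With such a guard, `is_vdsm_private /etc/multipath.conf && echo private` would report a file that vdsm will not touch.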
Comment 5 M S Vishwanath Bhat 2012-07-20 07:19:31 EDT
If I'm not wrong, there are two issues here.

One is what bturner describes: a multipath.conf that throws errors while trying to create multipath devices on local storage.  I think this can be fixed by having 'find_multipaths yes' in the defaults section of the conf file.  I found out what it does from the man 5 page (http://linux.die.net/man/5/multipath.conf).  I have tried making this change to the multipath.conf file and I don't get errors upon reboot.  I think the 'find_multipaths' option should be added to /etc/multipath.conf by default in RHS.

The other issue is what Bala mentioned: once vdsm is upgraded, it overwrites the multipath.conf file.  I wasn't able to verify this with RHS as I couldn't upgrade the vdsm rpm.  I will verify the fix mentioned by Bala once the update is available.
Comment 7 Ben Turner 2012-07-20 16:09:10 EDT
Just curious, but why do we even have VDSM installed on RHS 2.0 at all?  As far as I know VDSM is required by Virt-Manager and RHEV, neither of which should be run on top of a software storage appliance.
Comment 8 Ben Turner 2012-07-20 16:20:22 EDT
Answered my own question: after reading the docs, it looks like RHSC requires VDSM.
Comment 9 Vidya Sakar 2012-07-31 06:46:13 EDT
Had a chat with Bala, and for UPDATE_2 of 2.0.z it looks like the best approach is to modify the /etc/multipath.conf file to have the '# RHEV PRIVATE' text on the second line.  This will be done in the vdsm post-install script.  For RHS 2.1, we can fix it in a cleaner way when the ISO is rebuilt.
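A rough sketch of what such a post-install edit could do; `mark_private` is an assumed name, and the real change lives in the vdsm spec's %post scriptlet, which is not shown in this bug:

```shell
# Sketch only: prepend the protective header unless line 2 already
# carries the '# RHEV PRIVATE' marker (function name is assumed).
mark_private() {
    conf="$1"
    if sed -n '2p' "$conf" | grep -q '^# RHEV PRIVATE'; then
        return 0    # already protected; keeps the edit idempotent
    fi
    tmp=$(mktemp)
    printf '# Do not delete this and below line\n# RHEV PRIVATE\n' > "$tmp"
    cat "$conf" >> "$tmp"
    mv "$tmp" "$conf"
}
```

Running it twice leaves only one marker in place, which matters because the scriptlet runs on every package update.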
Comment 11 M S Vishwanath Bhat 2012-08-27 08:55:11 EDT
Moving it to ON_DEV since the build present in brew doesn't have the fix.  Please move it to ON_QA once the fix is done.
Comment 13 Vidya Sakar 2012-08-28 08:22:41 EDT
As per Bala, having the fix as a patch file will not help, since the spec file is read before the patch is applied.  The patch should be applied before making the source tarball, so that the fix goes into the spec file.  This change has been made and a new build is available:
https://brewweb.devel.redhat.com/buildinfo?buildID=230955
Comment 14 M S Vishwanath Bhat 2012-08-29 02:41:20 EDT
So after I downloaded the rpms and did "yum localupdate", I made the following observations.

If the /etc/multipath.conf file is UN-EDITED, the file is overwritten and all the options are gone except for 'find_multipaths yes':


[root@cutlass rhs-26]# cat /etc/multipath.conf 
# Do not delete this and below line
# RHEV PRIVATE

defaults {
    find_multipaths    yes
}


But if the file has already been edited, then only the first two lines of the /etc/multipath.conf file are changed.  All other options are retained as-is:


[root@cutlass rhs-26]# cat /etc/multipath.conf 
# Do not delete this and below line
# RHEV PRIVATE
# RHEV REVISION 0.7

defaults {
    polling_interval        5
    getuid_callout          "/sbin/scsi_id -g -u -d /dev/%n"
    no_path_retry           fail
    user_friendly_names     no
    flush_on_last_del       yes
    fast_io_fail_tmo        5
    dev_loss_tmo            30
    max_fds                 4096
    find_multipaths          yes
}
Comment 15 M S Vishwanath Bhat 2012-08-29 06:12:37 EDT
So if the multipath.conf has been overwritten, as in the first scenario in the above comment, I don't see the errors after reboot.

If the file had already been modified to include "find_multipaths yes", the file is not overwritten, and I don't see the errors after reboot either.

So moving the bug to verified state.
Comment 16 Bala.FA 2012-08-29 07:41:48 EDT
(In reply to comment #14)
> So after I downloaded the rpms and did "yum localupdate" I made following
> observations. 
> 
> If the /etc/multipath.conf file is UN-EDITED, then the file would be
> overwritten and the all the options would be gone except for find_multipaths
> yes
> 
> 
> [root@cutlass rhs-26]# cat /etc/multipath.conf 
> # Do not delete this and below line
> # RHEV PRIVATE
> 
> defaults {
>     find_multipaths    yes
> }
> 
> 
> But if the file is edited already. Then only the first two lines of the
> /etc/multipath.conf file is changed. All other options are not retained as
> is.
> 
> 
> [root@cutlass rhs-26]# cat /etc/multipath.conf 
> # Do not delete this and below line
> # RHEV PRIVATE
> # RHEV REVISION 0.7
> 
> defaults {
>     polling_interval        5
>     getuid_callout          "/sbin/scsi_id -g -u -d /dev/%n"
>     no_path_retry           fail
>     user_friendly_names     no
>     flush_on_last_del       yes
>     fast_io_fail_tmo        5
>     dev_loss_tmo            30
>     max_fds                 4096
>     find_multipaths          yes
> }


FYI, on every restart of the vdsm service, the /etc/multipath.conf file is checked for vdsm compatibility.  If the file isn't compatible, vdsm recreates it (i.e. the user loses his custom configuration) unless the '# RHEV PRIVATE' line is present.
Comment 18 errata-xmlrpc 2012-09-11 10:23:09 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2012-1253.html
