Description of problem:

It looks like during first boot the following multipath.conf file gets put in place by VDSM:

[root@dell-per510-1 automated-tests]# cat /etc/multipath.conf
# RHEV REVISION 0.7

defaults {
    polling_interval        5
    getuid_callout          "/sbin/scsi_id -g -u -d /dev/%n"
    no_path_retry           fail
    user_friendly_names     no
    flush_on_last_del       yes
    fast_io_fail_tmo        5
    dev_loss_tmo            30
    max_fds                 4096
}

I would like to see the RHEL 6.2 default multipath config, or at least a multipath.conf that blacklists my local disks/storage bricks.

Here is the snip from vdsm.log:

MainThread::DEBUG::2012-06-27 13:14:50,540::multipath::109::Storage.Multipath::(isEnabled) multipath Defaulting to False
MainThread::DEBUG::2012-06-27 13:14:50,541::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /bin/cp /tmp/tmpFP0_SH /etc/multipath.conf' (cwd None)
MainThread::DEBUG::2012-06-27 13:14:50,602::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2012-06-27 13:14:50,603::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /sbin/multipath -F' (cwd None)
MainThread::DEBUG::2012-06-27 13:14:50,615::__init__::1164::Storage.Misc.excCmd::(_log) FAILED: <err> = ''; <rc> = 1
MainThread::DEBUG::2012-06-27 13:14:50,616::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /sbin/service multipathd restart' (cwd None)
MainThread::DEBUG::2012-06-27 13:14:50,764::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2012-06-27 13:14:50,764::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None)
MainThread::DEBUG::2012-06-27 13:14:50,779::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = ''; <rc> = 0

Version-Release number of selected component (if applicable):

I installed the ISO: RHS-2.0-20120621.2-RHS-x86_64-DVD1.iso

The packages are:

[root@dell-per510-2 ~]# rpm -q device-mapper-multipath
device-mapper-multipath-0.4.9-56.el6.x86_64
# rpm -q vdsm
vdsm-4.9.6-14.el6rhs.x86_64

How reproducible:
Every install.

Steps to Reproduce:
1. Install RHS 2.0
2. cat /etc/multipath.conf

Actual results:

[root@dell-per510-1 automated-tests]# cat /etc/multipath.conf
# RHEV REVISION 0.7

defaults {
    polling_interval        5
    getuid_callout          "/sbin/scsi_id -g -u -d /dev/%n"
    no_path_retry           fail
    user_friendly_names     no
    flush_on_last_del       yes
    fast_io_fail_tmo        5
    dev_loss_tmo            30
    max_fds                 4096
}

Expected results:
The default multipath config for RHEL 6, or a multipath.conf that blacklists local disks.

Additional info:
I didn't know what component to open this under, as I didn't see vdsm or dm-multipath under RHS 2.0. Please assign the proper component if applicable.
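To illustrate what "a multipath.conf that blacklists local disks" could look like (this is a sketch only; the devnode pattern and WWID below are placeholders I made up, not values from the hosts in this report):

```
# Hypothetical example -- devnode pattern and WWID are placeholders.
blacklist {
    # block local SATA/SAS system disks by node name (placeholder pattern)
    devnode "^sd[ab]$"
    # or blacklist a specific local brick by its WWID (placeholder value)
    wwid "<local-disk-wwid>"
}
```

The WWID for a given local disk can be read with the same scsi_id call vdsm configures, e.g. /sbin/scsi_id -g -u -d /dev/sda.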
A solution is available in the Red Hat Knowledge Base: https://access.redhat.com/knowledge/articles/43459
In a sense it's not my problem, because it's not performance, so you don't have to respond; this is just my opinion.

Is this full procedure, including the KB article, documented by RHS? If not, how would customers learn about it? The procedure in comment 2 works, but it is labor-intensive (it requires a series of commands and edits FOR EACH host), it is a command-line process, and it is therefore not suitable for an APPLIANCE. We do not have a GUI yet, so we can't just wave our hands and say RHS-C is coming.

I still think the default should be for DAS devices to come up blacklisted for multipathing. There has been discussion about the possibility of using it, but no one is actually taking advantage of multipathing with RHS AFAIK. Can we fix the RHS installer to blacklist DAS devices automatically? Otherwise I do not consider this bug fixed.

This is the same issue I had with SELinux. Our goal should be that you never have to touch the individual servers in a Gluster config unless you are doing something non-standard with them. If we ever want to sell a lot of RHS, then we really need to think about the cost of managing the systems (also called total cost of ownership, or TCO).
vdsmd creates/overwrites the /etc/multipath.conf file on every start unless the first two lines mark the file as private. The fix is to have the '# RHEV PRIVATE' text in the second line of the /etc/multipath.conf file. An example /etc/multipath.conf is:

# Do not delete this and below line
# RHEV PRIVATE

defaults {
    find_multipaths yes
}
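The check described above can be sketched as a small shell function (the function name is mine, and this only approximates the behavior described in this comment, i.e. vdsm leaves the file alone when the marker is on line 2):

```shell
#!/bin/sh
# Sketch: returns 0 (true) when the second line of the given file
# carries the '# RHEV PRIVATE' marker, mimicking the vdsm check
# described in this comment. Not the actual vdsm code.
is_vdsm_private() {
    sed -n '2p' "$1" | grep -q '# RHEV PRIVATE'
}
```

After adding the marker, `is_vdsm_private /etc/multipath.conf` should exit 0, and vdsm should then leave the file untouched across restarts.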
If I'm not wrong, there are two issues here.

One is what bturner describes: a multipath.conf which throws errors while trying to create multipath devices on local storage. I think this can be fixed by having 'find_multipaths yes' in the defaults section of the conf file. I found out what it does by looking at the multipath.conf(5) man page (http://linux.die.net/man/5/multipath.conf). I have tried making this change to the multipath.conf file and I don't get errors upon reboot. I think the 'find_multipaths' option should be added to /etc/multipath.conf by default in RHS.

The other issue is what Bala mentioned: once vdsm is upgraded, it overwrites the multipath.conf file. I wasn't able to verify this with RHS as I couldn't upgrade the vdsm rpm. I will verify the fix mentioned by Bala once the update is available.
Just curious, but why do we even have VDSM installed on RHS 2.0 at all? As far as I know, VDSM is required by Virt-Manager and RHEV, neither of which should be run on top of a software storage appliance.
Answered my own question: after reading the docs on it, it looks like RHSC requires VDSM.
Had a chat with Bala, and for UPDATE_2 of 2.0.z it looks like the best approach is to modify the /etc/multipath.conf file to have the '# RHEV PRIVATE' text in the second line. This will be done in the vdsm post-install script. For RHS 2.1, we can fix it in a cleaner way when the ISO is rebuilt.
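The post-install edit described above could be sketched roughly like this (a hypothetical illustration only, not the actual vdsm scriptlet; the function name is mine):

```shell
#!/bin/sh
# Sketch: insert the '# RHEV PRIVATE' marker as the second line of a
# multipath.conf, if it is not already there, so vdsm stops rewriting
# the file. Idempotent: a second run makes no further change.
mark_private() {
    conf="$1"
    if ! sed -n '2p' "$conf" | grep -q 'RHEV PRIVATE'; then
        # GNU sed: append a line after line 1 (in place)
        sed -i '1a # RHEV PRIVATE' "$conf"
    fi
}
```

A real scriptlet would also need to handle an empty or missing file, which this sketch ignores.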
Moving it to ON_DEV since the build present in brew doesn't have the fix. Please move it to ON_DEV once the fix is done.
As per Bala, having the fix as a patch file will not help, since the spec file is read before the patch is applied. The patch should be applied before making the source tarball, so that the fix goes into the spec file. This change has been made and a new build is available: https://brewweb.devel.redhat.com/buildinfo?buildID=230955
So after I downloaded the rpms and did "yum localupdate", I made the following observations.

If the /etc/multipath.conf file is UN-EDITED, then the file is overwritten and all the options are gone except for 'find_multipaths yes':

[root@cutlass rhs-26]# cat /etc/multipath.conf
# Do not delete this and below line
# RHEV PRIVATE

defaults {
    find_multipaths yes
}

But if the file has already been edited, then only the first two lines of the /etc/multipath.conf file are changed. All other options are retained as is:

[root@cutlass rhs-26]# cat /etc/multipath.conf
# Do not delete this and below line
# RHEV PRIVATE
# RHEV REVISION 0.7

defaults {
    polling_interval 5
    getuid_callout "/sbin/scsi_id -g -u -d /dev/%n"
    no_path_retry fail
    user_friendly_names no
    flush_on_last_del yes
    fast_io_fail_tmo 5
    dev_loss_tmo 30
    max_fds 4096
    find_multipaths yes
}
So, if the multipath.conf has been overwritten as in the first scenario in the above comment, I don't see the errors after reboot. If the file had already been modified to include 'find_multipaths yes', the file is not overwritten, and I don't see the errors after reboot either. So I am moving the bug to the verified state.
(In reply to comment #14)
> So after I downloaded the rpms and did "yum localupdate" I made following
> observations.
>
> If the /etc/multipath.conf file is UN-EDITED, then the file would be
> overwritten and the all the options would be gone except for find_multipaths
> yes
>
> [root@cutlass rhs-26]# cat /etc/multipath.conf
> # Do not delete this and below line
> # RHEV PRIVATE
>
> defaults {
>     find_multipaths yes
> }
>
> But if the file is edited already. Then only the first two lines of the
> /etc/multipath.conf file is changed. All other options are not retained as
> is.
>
> [root@cutlass rhs-26]# cat /etc/multipath.conf
> # Do not delete this and below line
> # RHEV PRIVATE
> # RHEV REVISION 0.7
>
> defaults {
>     polling_interval 5
>     getuid_callout "/sbin/scsi_id -g -u -d /dev/%n"
>     no_path_retry fail
>     user_friendly_names no
>     flush_on_last_del yes
>     fast_io_fail_tmo 5
>     dev_loss_tmo 30
>     max_fds 4096
>     find_multipaths yes
> }

FYI, on every restart of the vdsm service, the /etc/multipath.conf file is checked for vdsm compatibility. If the file is not compatible, vdsm recreates it (i.e., the user loses any custom configuration) unless the '# RHEV PRIVATE' line is present.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2012-1253.html