Bug 615040 - Multipathd not enabled on boot, although installed with multipath
Summary: Multipathd not enabled on boot, although installed with multipath
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: anaconda
Version: 6.0
Hardware: x86_64
OS: Linux
Priority: low
Severity: high
Target Milestone: rc
Assignee: David Cantrell
QA Contact: Release Test Team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2010-07-15 20:10 UTC by Jim Lester
Modified: 2018-10-27 12:40 UTC
CC List: 18 users

Fixed In Version: anaconda-13.21.63-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-11-10 19:51:00 UTC
Target Upstream Version:
Embargoed:



Description Jim Lester 2010-07-15 20:10:22 UTC
Description of problem:
On a multipath boot-from-SAN system, the install drive becomes mpatha because it is the first device set up by multipath. If, after the install, you try to add a second volume and set it up as multipath, it also tries to use mpatha and errors out. On reboot, depending on the order in which the devices are created, the root drive can end up being set up second (which will fail), and the system cannot boot.

Version-Release number of selected component (if applicable):
RHEL 6.0 Beta (Santiago) 
2.6.32-37.el6.x86_64
device-mapper-multipath-0.4.9

How reproducible: Very


Steps to Reproduce:
1. Install RHEL 6 as boot-from-SAN in a multipath environment
2. Add a second drive (multipath) to the system after the install
3. Configure that drive to be multipath and run # multipath -v2 to create it (see the sketch after these steps)
4. See error.
5. Reboot while both devices are present and not blacklisted; depending on the order in which the devices are set up, the boot can fail.
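
A minimal shell sketch of steps 2-4 (illustrative only; device names and ordering are assumptions, not taken from the reporter's system):

# After presenting the second LUN to the host, confirm the kernel sees it.
cat /proc/partitions

# Build the multipath maps verbosely; with this bug the new volume tries to
# reuse the "mpatha" alias already bound to the boot device.
multipath -v2

# List the resulting maps with their aliases and WWIDs.
multipath -ll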
  
Actual results:
The second drive isn't correctly set up because it attempts to use mpatha as well. 

Expected results:
The second drive should be configured as mpathb. This would be saved and there would be no possibility of collision. 

Additional info:
Tested using a Compellent SAN, Fibre Channel, LPe12002-M8, Emulex LightPulse Fibre Channel SCSI driver 8.3.5.13

Comment 2 RHEL Program Management 2010-07-15 20:37:31 UTC
This issue has been proposed when we are only considering blocker
issues in the current Red Hat Enterprise Linux release. It has
been denied for the current Red Hat Enterprise Linux release.

** If you would still like this issue considered for the current
release, ask your support representative to file as a blocker on
your behalf. Otherwise ask that it be considered for the next
Red Hat Enterprise Linux release. **

Comment 3 Jim Lester 2010-07-15 21:41:55 UTC
I would consider not being able to have more than one volume multipathed from a SAN a blocking issue for Enterprise customers. That is basic operation for deployment of boot from SAN servers.

Comment 4 Bill Nottingham 2010-07-16 15:46:02 UTC
Can you attach your config files?

Comment 5 Ben Marzinski 2010-07-16 18:17:44 UTC
This looks like it's a dracut problem, or possibly an anaconda one.

When you install with multipathed root/boot, you need to have the multipath devices set up in the initramfs.  If user_friendly_names is set, multipath keeps track of which devices should have which names by keeping a mapping in /etc/multipath/bindings.  If the devices in the initramfs version of this file don't have the same mappings as in the actual version of this file, you can have problems.

To solve this problem, you need to make sure that whatever devices you have in the initramfs version of this file, you also have in the actual file, with the same user_friendly names.
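
A rough sketch of checking whether the two copies of the bindings file agree (the WWID shown is a placeholder, and the initramfs is assumed to be a gzip-compressed cpio image, as on RHEL 6):

# Bindings on the installed system (user_friendly_names mapping).
cat /etc/multipath/bindings
# e.g.: mpatha 36000d3100003d50000000000000003c4

# The copy carried inside the initramfs.
zcat /boot/initramfs-$(uname -r).img | cpio -i --quiet --to-stdout '*etc/multipath/bindings'

# If the two copies disagree, rebuild the initramfs so it picks up the current file.
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)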

This file gets updated when a device is created, so the initramfs can run into problems if it creates any multipath device that it is able to, instead of just the ones it needs to. I'm not sure if it does this, but if it does, then this could cause other problems.

I'm going to reassign this to dracut. If that isn't the proper place to fix this, please reassign it.

Comment 6 Jim Lester 2010-07-16 19:16:18 UTC
I believe I have identified part of the problem. The boxes that are giving us trouble do not have multipathd enabled at boot. But if we enable multipathd as part of the runlevel, then the behavior is as expected and the new volume is assigned mpathb or higher.

[root@smithwicks ~]# chkconfig --list multipathd
multipathd      0:off   1:off   2:off   3:off   4:off   5:off   6:off

[root@smithwicks ~]# cat /etc/multipath.conf
# multipath.conf written by anaconda

defaults {
        user_friendly_names yes
}
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^dcssblk[0-9]*"
        device {
                vendor "DGC"
                product "LUNZ"
        }
        device {
                vendor "IBM"
                product "S/390.*"
        }
        # don't count normal SATA devices as multipaths
        device {
                vendor  "ATA"
        }
        # don't count 3ware devices as multipaths
        device {
                vendor  "3ware"
        }
        device {
                vendor  "AMCC"
        }
        # nor highpoint devices
        device {
                vendor  "HPT"
        }
        device {
                vendor TEAC
                product DVD-ROM_DV28EV
        }
}
multipaths {
        multipath {
                uid 0
                alias mpatha
                gid 0
                wwid 36000d3100003d50000000000000003c4
                mode 0600
        }
}
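
For completeness, a sketch of the workaround described above, using standard SysV tooling on RHEL 6:

# Enable multipathd in its default runlevels and start it now.
chkconfig multipathd on
service multipathd start

# Confirm it is enabled.
chkconfig --list multipathd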

Comment 7 Harald Hoyer 2010-07-17 06:42:18 UTC
(In reply to comment #6)
> I believe I have identified part of the problem. The boxes that are giving us
> trouble do not have multipathd enabled at boot. But if we enable multipathd as
> part of the runlevel, then the behavior is as expected and the new volume is
> assigned mpathb or higher.

so.. NOTABUG?

Comment 8 Jim Lester 2010-07-19 14:02:30 UTC
I guess I would consider it to be a bug that the multipathd daemon isn't automatically configured to start on boot when a multipath install is done. Especially since it can lead to such problems.

Comment 9 Harald Hoyer 2010-07-19 14:21:52 UTC
(In reply to comment #8)
> I guess I would consider it to be a bug that the multipathd daemon isn't
> automatically configured to start on boot when a multipath install is done.
> Especially since it can lead to such problems.    

That would be an anaconda bug, then.

Comment 10 David Cantrell 2010-07-21 19:48:44 UTC
anaconda makes sure the device-mapper-multipath package (which contains multipathd) is installed if you are using mpath devices, but we do not enable or disable services from anaconda.  The packages themselves have to do that, otherwise anaconda would have to be in the business of enabling services for each runlevel and for different types of installs.  It's far easier to have the packages do that themselves.

Reassigning to device-mapper-multipath.

Comment 11 Ben Marzinski 2010-07-22 22:35:54 UTC
I'm not sure what you want multipath to do here.  Normally, multipathing gets enabled after install and multipathd is chkconfig'd on then.  However for multipath root/boot systems, anaconda is already setting up multipathing.
Is it hard for anaconda to also chkconfig multipathd on when it does this?

I could have the device-mapper-multipath package always chkconfig multipathd on.  If there is no /etc/multipath.conf, it will blacklist all devices, so it shouldn't do anything.  I would rather have put that change in earlier, since it affects all users, if that's the route we're going. Besides, all I'd be doing is running chkconfig as a post-install script. If the device-mapper-multipath package can do that, surely anaconda can do it as well. However, anaconda is in a position to know when it's the right thing to do (only when you are already setting up multipathing in anaconda).
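
For illustration only, the %post approach mentioned above might look like this (a sketch, not the actual device-mapper-multipath spec file):

%post
# Register the SysV init script and force it on at boot, as discussed above.
/sbin/chkconfig --add multipathd
/sbin/chkconfig multipathd on || :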

Comment 12 Chris Lumens 2010-07-23 19:26:01 UTC
It's my understanding that packaging best practices advise against turning on the service automatically from a %post script.  anaconda can currently enable services as listed by kickstart, but it has no further mechanism for specifying additional services on a storage-specific basis.  However, we do have something for specifying which packages should be installed.  It seems trivial to add the same support for services too.
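
For reference, the kickstart mechanism mentioned above looks like this (a minimal illustrative fragment, not from the reporter's install):

# Kickstart: enable the multipathd service at boot on the installed system.
services --enabled=multipathd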

Comment 17 Jan Stodola 2010-08-20 10:20:12 UTC
Thanks for testing.
I also tested this issue on build RHEL6.0-20100811.2 with anaconda-13.21.74-1.el6, multipathd was enabled and running after restart.
Moving to VERIFIED.

Comment 18 releng-rhel@redhat.com 2010-11-10 19:51:00 UTC
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.

