Bug 235839 - Booting RHEL5 with multipathing and lvm
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: device-mapper-multipath
Version: 5.0
Hardware: All
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Assignee: Ben Marzinski
QA Contact: Corey Marthaler
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2007-04-10 14:02 UTC by Thomas von Steiger
Modified: 2010-01-12 02:37 UTC
CC List: 11 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2007-04-25 10:36:05 UTC
Target Upstream Version:
Embargoed:


Attachments
output from multipath -v3 and multipath -l (6.23 KB, text/plain)
2007-04-10 14:02 UTC, Thomas von Steiger
/etc/multipath.conf (875 bytes, application/octet-stream)
2007-04-10 14:05 UTC, Thomas von Steiger
lvcreate -vvvv (to see more output) (10.56 KB, application/octet-stream)
2007-04-17 11:23 UTC, Thomas von Steiger

Description Thomas von Steiger 2007-04-10 14:02:10 UTC
Description of problem:
The multipath device cannot be started.

Version-Release number of selected component (if applicable):
kernel-2.6.18-8.el5
device-mapper-multipath-0.4.7-8.el5

How reproducible:
- Create /etc/multipath.conf (see attachment)
- Create an "8e" (Linux LVM) partition on the SAN storage
- multipath -F
- multipath -v0
- dmsetup ls --target multipath --exec "/sbin/kpartx -a -p p"
- multipath -l

Steps to Reproduce:
1. multipath -F
2. multipath -v0
3. multipath -l
  
Actual results:
[root@si1167z sbin]# multipath -l
lun001 (360060e8004eb2d000000eb2d00003a4e) dm-1 HITACHI,OPEN-V
[size=20G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:0:1 sdb 8:16  [active][undef]
 \_ 1:0:0:1 sdd 8:48  [active][undef]

Expected results:
[root@si1161z ~]# multipath -l lun001
lun001 (360060e8004eb2d000000eb2d00000504)
[size=14 GB][features="1 queue_if_no_path"][hwhandler="0"]
\_ round-robin 0 [active]
 \_ 0:0:0:1 sdb 8:16  [active][ready]
\_ round-robin 0 [enabled]
 \_ 1:0:0:1 sdh 8:112 [active][ready]

Additional info:
Output from "multipath -v3" is attached.

Comment 1 Thomas von Steiger 2007-04-10 14:02:10 UTC
Created attachment 152133 [details]
output from multipath -v3 and multipath -l

Comment 2 Thomas von Steiger 2007-04-10 14:05:32 UTC
Created attachment 152134 [details]
/etc/multipath.conf

Comment 3 Ben Marzinski 2007-04-11 18:02:35 UTC
OK, I'm sort of confused by the title of this.  Seeing an [undef] is exactly
what I would expect if you create a multipath device or run
multipath -l.  Neither of these commands actually figures out what device-mapper's
view of the state is.  If you run

# multipath -ll

You will actually get the correct state information.

However, I did notice that you have two multipathable devices, and only one
gets a multipath map created. Is that what the bug is really about?

Comment 4 Ben Marzinski 2007-04-11 18:13:39 UTC
Just a little more information on this.

At the end of the path lines in the multipath map printouts, there are two
status values.  The first is the result of running the path_checker callout on
the path. In your case that is the SCSI Test Unit Ready (tur) checker. This
checker says that your devices are working fine. The second status value is the
value currently in the dm status line for this device. This will almost always
be the same as the path checker value.  There are brief intervals when these two
values may not agree.  For instance, if the path checker in multipathd sees that
the device has failed, but hasn't updated the dm value yet, they will not match.
 Also, if IO fails on a path, but the multipathd path checker hasn't run since
the path failed, they will not match. [undef] simply means that you haven't
checked the dm status line.
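
(For reference, the dm status line described here can also be read directly with
dmsetup.  A sketch, using the map name lun001 from the output above:

    # dmsetup status lun001    <- dumps the multipath target's raw dm status line
    # multipath -ll lun001     <- also queries that dm state, so [undef] becomes
                                  a real value such as [ready]
)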

If this is your only problem, let me know, so I can close out this bug.  If you
don't understand why your second device wasn't multipathed, let me know, and we
can look into that further.

Comment 5 Thomas von Steiger 2007-04-12 08:40:34 UTC
Many thanks for your information.
OK, you are right, the title of this bug is not the right one.

"multipath -ll" shows:

RHEL5(multibus):
[root@si1168z ~]# multipath -ll
sys001 (360060e8004eb2d000000eb2d000016eb) dm-0 HITACHI,OPEN-V
[size=9.8G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:0:0 sda 8:0   [active][ready]
 \_ 1:0:0:0 sdc 8:32  [active][ready]
lun001 (360060e8004eb2d000000eb2d00003a4f) dm-1 HITACHI,OPEN-V
[size=20G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:0:1 sdb 8:16  [active][ready]
 \_ 1:0:0:1 sdd 8:48  [active][ready]

RHEL4(failover):
[root@si1161z ~]# multipath -l lun001
lun001 (360060e8004eb2d000000eb2d00000504)
[size=14 GB][features="1 queue_if_no_path"][hwhandler="0"]
\_ round-robin 0 [active]
 \_ 0:0:0:1 sdb 8:16  [active][ready]
\_ round-robin 0 [enabled]
 \_ 1:0:0:1 sdh 8:112 [active][ready]

So with RHEL5 we get a different output layout for "multipath -ll" than with RHEL4?
On RHEL4, "round-robin 0 [active]" and "round-robin 0 [enabled]" show that
failover is in effect.
On RHEL5 I cannot see this information?

[root@si1167z ~]# grep default_path_grouping_policy /etc/multipath.conf 
   default_path_grouping_policy   failover
[root@si1167z ~]# multipath -ll
lun001 (360060e8004eb2d000000eb2d00003a4e) dm-1 HITACHI,OPEN-V
[size=20G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:0:1 sdb 8:16  [active][ready]
 \_ 1:0:0:1 sdd 8:48  [active][ready]

[root@si1168z ~]# grep default_path_grouping_policy /etc/multipath.conf 
   default_path_grouping_policy   multibus
[root@si1168z ~]# multipath -ll
sys001 (360060e8004eb2d000000eb2d000016eb) dm-0 HITACHI,OPEN-V
[size=9.8G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:0:0 sda 8:0   [active][ready]
 \_ 1:0:0:0 sdc 8:32  [active][ready]
lun001 (360060e8004eb2d000000eb2d00003a4f) dm-1 HITACHI,OPEN-V
[size=20G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:0:1 sdb 8:16  [active][ready]
 \_ 1:0:0:1 sdd 8:48  [active][ready]

si1167z was booted with the standard initrd.
si1168z was booted from a multipath device.
For this we built a new initrd with all the multipath stuff, in the same way
as we have on RHEL4.
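
(A sketch of what rebuilding the initrd by hand can look like on RHEL 5; the image
name and module list below are assumptions, not the reporter's actual procedure:

    # mkinitrd --with=dm-multipath --with=dm-round-robin \
          /boot/initrd-2.6.18-8.el5.mpath.img 2.6.18-8.el5

This only pulls the kernel modules into the image; the multipath tools and
/etc/multipath.conf still have to be added to the initrd by hand, which is
presumably the "multipath stuff" referred to above.)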

Thomas

Comment 6 Thomas von Steiger 2007-04-12 10:17:36 UTC
My real problem is setting up LVM on top of a multipath device.
We need to boot fully multipathed with RHEL5, as we do with RHEL3 and RHEL4,
because we have no internal storage, only SAN storage and IBM blades.

[root@si1261z ~]# multipath -F
sys001: map in use
[root@si1261z ~]# multipath -v0
[root@si1261z ~]# /sbin/dmsetup ls --target multipath --exec "/sbin/kpartx -a -p p"
[root@si1261z ~]# ls -la /dev/mapper/
total 0
drwxr-xr-x  2 root root     160 Apr 12 11:55 .
drwxr-xr-x 12 root root    3840 Apr 12 11:55 ..
crw-------  1 root root  10, 63 Apr 12 11:41 control
brw-rw----  1 root disk 253,  1 Apr 12 11:55 lun001
brw-rw----  1 root disk 253,  0 Apr 12 11:41 sys001
brw-rw----  1 root disk 253,  2 Apr 12 11:42 sys001p1
brw-rw----  1 root disk 253,  3 Apr 12 11:42 sys001p2
brw-rw----  1 root disk 253,  4 Apr 12 11:55 sys001p3
[root@si1261z ~]# ls -la /dev/mpath/
total 0
drwxr-xr-x  2 root root  140 Apr 12 11:55 .
drwxr-xr-x 12 root root 3840 Apr 12 11:55 ..
lrwxrwxrwx  1 root root    7 Apr 12 11:55 lun001 -> ../dm-1
lrwxrwxrwx  1 root root    7 Apr 12 11:41 sys001 -> ../dm-0
lrwxrwxrwx  1 root root    7 Apr 12 11:41 sys001p1 -> ../dm-2
lrwxrwxrwx  1 root root    7 Apr 12 11:41 sys001p2 -> ../dm-3
lrwxrwxrwx  1 root root    7 Apr 12 11:55 sys001p3 -> ../dm-4
[root@si1261z ~]# multipath -ll
sys001 (360060e8004eb2d000000eb2d00001675) dm-0 HITACHI,OPEN-V
[size=9.8G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:0:0 sda 8:0   [active][ready]
 \_ 1:0:0:0 sdc 8:32  [active][ready]
lun001 (360060e8004eb2d000000eb2d00003a50) dm-1 HITACHI,OPEN-V
[size=20G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:0:1 sdb 8:16  [active][ready]
 \_ 1:0:0:1 sdd 8:48  [active][ready]
[root@si1261z ~]# pvscan
  No matching physical volumes found
[root@si1261z ~]# pvcreate /dev/mapper/sys001p3
  Physical volume "/dev/mapper/sys001p3" successfully created
[root@si1261z ~]# pvscan
  PV /dev/dm-4         lvm2 [8.34 GB]
  Total: 1 [8.34 GB] / in use: 0 [0   ] / in no VG: 1 [8.34 GB]

--> up to this point everything is OK.

[root@si1261z ~]# vgcreate rhvg /dev/mapper/sys001p3
  Found duplicate PV QZUgJaZETMkvRhfkAOImgc6K9GR3Inrt: using /dev/sdc3 not /dev/sda3
  Found duplicate PV QZUgJaZETMkvRhfkAOImgc6K9GR3Inrt: using /dev/sda3 not /dev/sdc3
  Volume group "rhvg" successfully created
[root@si1261z ~]# pvscan
  PV /dev/dm-4   VG rhvg   lvm2 [8.34 GB / 8.34 GB free]
  Total: 1 [8.34 GB] / in use: 1 [8.34 GB] / in no VG: 0 [0   ]
[root@si1261z ~]# pvs
  Found duplicate PV QZUgJaZETMkvRhfkAOImgc6K9GR3Inrt: using /dev/sdc3 not /dev/sda3
  PV         VG   Fmt  Attr PSize PFree
  /dev/sdc3  rhvg lvm2 a-   8.34G 8.34G
[root@si1261z ~]# lvcreate -L 1000 -n slashlv rhvg
  Found duplicate PV QZUgJaZETMkvRhfkAOImgc6K9GR3Inrt: using /dev/sdc3 not /dev/sda3
  device-mapper: reload ioctl failed: Invalid argument
  Failed to activate new LV.
[root@si1261z ~]# tail -4 /var/log/messages
Apr 12 11:50:55 si1261z kernel: device-mapper: table: 253:5: linear: dm-linear:
Device lookup failed
Apr 12 11:50:55 si1261z kernel: device-mapper: ioctl: error adding target to table
Apr 12 11:57:50 si1261z kernel: device-mapper: table: 253:5: linear: dm-linear:
Device lookup failed
Apr 12 11:57:50 si1261z kernel: device-mapper: ioctl: error adding target to table
[root@si1261z ~]# ls -la /dev/rhvg
ls: /dev/rhvg: No such file or directory
[root@si1261z ~]# ls -la /dev/mapper/
total 0
drwxr-xr-x  2 root root     180 Apr 12 11:57 .
drwxr-xr-x 12 root root    3840 Apr 12 11:55 ..
crw-------  1 root root  10, 63 Apr 12 11:41 control
brw-rw----  1 root disk 253,  1 Apr 12 11:55 lun001
brw-rw----  1 root disk 253,  5 Apr 12 11:57 rhvg-slashlv
brw-rw----  1 root disk 253,  0 Apr 12 11:41 sys001
brw-rw----  1 root disk 253,  2 Apr 12 11:42 sys001p1
brw-rw----  1 root disk 253,  3 Apr 12 11:42 sys001p2
brw-rw----  1 root disk 253,  4 Apr 12 11:56 sys001p3
[root@si1261z ~]# lvscan
  Found duplicate PV QZUgJaZETMkvRhfkAOImgc6K9GR3Inrt: using /dev/sdc3 not /dev/sda3
  ACTIVE            '/dev/rhvg/slashlv' [1000.00 MB] inherit

Now the LV is there, but I cannot access it; there is no /dev/rhvg/slashlv.
The only thing that works is "lvremove /dev/rhvg/slashlv".
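
(A diagnostic sketch, not part of the original report: one way to check whether
the new LV is being built on a path device that the multipath map already holds:

    # pvs -o pv_name,vg_name      <- shows which device node LVM chose for the PV
    # dmsetup deps sys001         <- shows the sd devices claimed by the multipath map
    # dmsetup ls --tree           <- shows the whole device-mapper stacking

Here pvs already picked /dev/sdc3, a partition on a path underneath the sys001
map, and trying to stack a dm-linear target on an already-claimed path would be
consistent with the "Device lookup failed" / reload errors above.)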

What do you think: do we need another Bugzilla, or only a new headline for this
problem?

Thomas

Comment 7 Ben Marzinski 2007-04-16 22:07:22 UTC
In answer to comment #5, the default_path_grouping_policy only takes effect if
there isn't already a path_grouping_policy set for your device.  I believe that
the path grouping policy for HITACHI,OPEN-V devices has always been multibus.
Certainly, in RHEL5 and RHEL4 it was.  Do you possibly have a "multipaths"
section in your /etc/multipath.conf file on your RHEL4 machines that overrides
the path_grouping_policy for these devices?

Also, I assume that HITACHI was correct when they set the path_grouping_policy
for these devices to multibus.  If these are active-active devices that can be
used in a multibus setup, then is there a reason why you would prefer them to be
used only in a failover setup?

On my setup, I can do all the lvm commands you list in comment #6, so I'm not
sure yet why this is happening to you. Do you need LVM to access any of your
SCSI devices directly? If you are always working on top of multipathed devices,
you can try editing /etc/lvm/lvm.conf to exclude the SCSI devices.

Try adding something like "r/sd.*/" to your filter line.
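
(A sketch of such a filter, assuming the PVs are only ever accessed through their
/dev/mapper multipath names.  LVM tries the patterns in order and the first match
wins, so the accept rule must come before the catch-all reject.  In the
devices { } section of /etc/lvm/lvm.conf:

    filter = [ "a|^/dev/mapper/|", "r|^/dev/sd|" ]
)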

Comment 8 Thomas von Steiger 2007-04-17 07:58:48 UTC
I have uploaded /etc/multipath.conf in
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=235839#c2

There is a "path_grouping_policy   failover" in there, which means that for the
HITACHI devices it uses failover.
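
(For readers without the attachment: a minimal /etc/multipath.conf of the kind
being discussed might look roughly like the sketch below.  The WWIDs and aliases
are taken from the multipath -ll output in comment 5; the actual attached file
may differ in detail:

    defaults {
            default_path_grouping_policy    failover
    }
    multipaths {
            multipath {
                    wwid    360060e8004eb2d000000eb2d000016eb
                    alias   sys001
            }
            multipath {
                    wwid    360060e8004eb2d000000eb2d00003a4f
                    alias   lun001
            }
    }
)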

Now I have tested your idea with a filter for /dev/sd*.
Finally, I get the same error.

[root@si1261z lvm]#  pvcreate /dev/mapper/sys001p3
  Physical volume "/dev/mapper/sys001p3" successfully created
[root@si1261z lvm]# vgcreate rhvg /dev/mapper/sys001p3
  Found duplicate PV ymq63JW25qIxf1JbIFRDqBILfZoOMNew: using /dev/sdc3 
not /dev/sda3
  Found duplicate PV ymq63JW25qIxf1JbIFRDqBILfZoOMNew: using /dev/sda3 
not /dev/sdc3
  Volume group "rhvg" successfully created
[root@si1261z lvm]# lvcreate -L 1000 -n slash rhvg
  Found duplicate PV ymq63JW25qIxf1JbIFRDqBILfZoOMNew: using /dev/sdc3 
not /dev/sda3
  device-mapper: reload ioctl failed: Invalid argument
  Failed to activate new LV.
[root@si1261z lvm]# tail -4 /var/log/messages
Apr 17 09:46:15 si1261z kernel: device-mapper: table: 253:5: linear: dm-
linear: Device lookup failed
Apr 17 09:46:15 si1261z kernel: device-mapper: ioctl: error adding target to 
table
Apr 17 09:49:48 si1261z kernel: device-mapper: table: 253:5: linear: dm-
linear: Device lookup failed
Apr 17 09:49:48 si1261z kernel: device-mapper: ioctl: error adding target to 
table
[root@si1261z lvm]# grep "/dev/sd" /etc/lvm/lvm.conf
    filter = [ "a/.*/", "r|/dev/sd*/|" ]
[root@si1261z lvm]# ls -la /dev/rhvg
ls: /dev/rhvg: No such file or directory
[root@si1261z lvm]# ls -la /dev/mapper/
total 0
drwxr-xr-x  2 root root     180 Apr 17 09:49 .
drwxr-xr-x 12 root root    3660 Apr 17 09:42 ..
crw-------  1 root root  10, 63 Apr 12 13:19 control
brw-rw----  1 root disk 253,  1 Apr 16 16:34 lun001
brw-rw----  1 root disk 253,  5 Apr 17 09:49 rhvg-slash
brw-rw----  1 root disk 253,  0 Apr 12 13:19 sys001
brw-rw----  1 root disk 253,  2 Apr 12 13:19 sys001p1
brw-rw----  1 root disk 253,  3 Apr 12 13:19 sys001p2
brw-rw----  1 root disk 253,  4 Apr 17 09:49 sys001p3

Were you able to boot RHEL5 with full multipath support?

Have you tested mounting a filesystem on top of multipath devices with a LABEL
definition in /etc/fstab from e2label? I need to open another Bugzilla for
this problem.


regards,
Thomas

Comment 9 Thomas von Steiger 2007-04-17 11:23:35 UTC
Created attachment 152779 [details]
lvcreate -vvvv (to see more output)

In this logfile I can see "lvcreate -l 200 -n test1 rhvg" with the error output
and "lvcreate -vvvv -l 200 -n test2 rhvg" for more output. I hope you find
something in this output.

Comment 10 Thomas von Steiger 2007-04-19 11:29:10 UTC
Hi Ben,

Now we can install RHEL5 with multipath using the boot option "mpath" :-)
I have an open call 1392916 about this, because there is some discussion about
how to configure multipath and about using aliases for the LUNs.
For now the complete multipath config is inside the initrd.
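
(For reference, the "mpath" option is given at the installer boot prompt, e.g.:

    boot: linux mpath

which tells anaconda to set up the installation, including the initrd, on the
multipathed devices.)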

Regards Thomas

