Bug 818571 - path check failed by RHEL 6.2
Summary: path check failed by RHEL 6.2
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: device-mapper-multipath
Version: 6.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Ben Marzinski
QA Contact: Red Hat Kernel QE team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-05-03 12:13 UTC by 25641463
Modified: 2022-12-20 10:50 UTC
CC List: 10 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-08-11 19:21:22 UTC
Target Upstream Version:
Embargoed:
yangfanlinux: needinfo-
yangfanlinux: needinfo-



Description 25641463 2012-05-03 12:13:11 UTC
Hello, development team. I am testing multipath on RHEL 6.2 (x86_64). There are 8 disks on one target path; all disks are formatted with ext3, mounted, and written to with dd. I found an issue when testing takeover (multipath + srpdaemon.sh): sometimes 7 disks are active but 1 is faulty (mpathf). I checked the target and it is fine, and I do not know how to avoid this issue. Comparing the active and faulty devices in /var/log/messages, mpathf prints "map in use, unable to flush devmap".
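
For reference, "map in use, unable to flush devmap" means multipathd tried to remove the mpathf map while the device-mapper device still had an open reference (for example the ext3 mount or a running dd). A quick way to confirm that (a sketch, using the map name mpathf from the output below) is:

# dmsetup info mpathf
# mount | grep mpathf
# fuser -vm /dev/mapper/mpathf

A non-zero "Open count" from dmsetup, a mount entry, or any process listed by fuser means the map is still in use and cannot be flushed until that reference is released.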


[root@localhost ~]# multipath -ll
mpathe (23431373633313138) dm-8 _INSPUR_,vg5_lv52
size=300G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 19:0:0:6 sdq 65:0  active ready  running
mpathd (23735393139393831) dm-10 _INSPUR_,vg5_lv51
size=350G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 19:0:0:2 sdm 8:192 active ready  running
mpathc (26238643332373337) dm-9 _INSPUR_,vg4_lv41
size=300G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 19:0:0:1 sdl 8:176 active ready  running
mpathj (23238623936303261) dm-4 _INSPUR_,vg5_lv54
size=350G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 19:0:0:8 sds 65:32 active ready  running
mpathi (23966393631376439) dm-7 _INSPUR_,vg4_lv43
size=300G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 19:0:0:4 sdo 8:224 active ready  running
mpathh (26535666264653963) dm-3 _INSPUR_,vg4_lv44
size=300G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 19:0:0:5 sdp 8:240 active ready  running
mpathg (23532643461393666) dm-6 _INSPUR_,vg5_lv53
size=350G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 19:0:0:7 sdr 65:16 active ready  running
mpathf (23863333438666165) dm-5 ,
size=300G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=enabled
  |- #:#:#:#  -   #:#   failed faulty running
  `- 19:0:0:3 sdn 8:208 failed faulty running
 
[root@localhost ~]# dmsetup -v status
Name:              mpathe
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      7
Major, minor:      253, 8
Number of targets: 1
UUID: mpath-23431373633313138
 
0 629145600 multipath 2 0 0 0 1 1 A 0 1 0 65:0 A 0 
 
Name:              VolGroup-lv_swap
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 1
Number of targets: 1
UUID: LVM-g146OXH4QW6yuzxVPQg6MTowS4n70ZQE2yXeAnBJ0rzl35hQd5ENo9JrFuavLaZ4
 
0 8192000 linear 
 
Name:              mpathd
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      7
Major, minor:      253, 10
Number of targets: 1
UUID: mpath-23735393139393831
 
0 734003200 multipath 2 0 0 0 1 1 A 0 1 0 8:192 A 0 
 
Name:              VolGroup-lv_root
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 0
Number of targets: 1
UUID: LVM-g146OXH4QW6yuzxVPQg6MTowS4n70ZQEJuESdV8wAAi6xtZJsvue0MMdKwv8JrD1
 
0 104857600 linear 
 
Name:              mpathc
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      8
Major, minor:      253, 9
Number of targets: 1
UUID: mpath-26238643332373337
 
0 629145600 multipath 2 0 0 0 1 1 A 0 1 0 8:176 A 0 
 
Name:              mpathj
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      7
Major, minor:      253, 4
Number of targets: 1
UUID: mpath-23238623936303261
 
0 734003200 multipath 2 0 0 0 1 1 A 0 1 0 65:32 A 0 
 
Name:              mpathi
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      8
Major, minor:      253, 7
Number of targets: 1
UUID: mpath-23966393631376439
 
0 629145600 multipath 2 0 0 0 1 1 A 0 1 0 8:224 A 0 
 
Name:              mpathh
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      7
Major, minor:      253, 3
Number of targets: 1
UUID: mpath-26535666264653963
 
0 629145600 multipath 2 0 0 0 1 1 A 0 1 0 8:240 A 0 
 
Name:              VolGroup-lv_home
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 2
Number of targets: 1
UUID: LVM-g146OXH4QW6yuzxVPQg6MTowS4n70ZQEXnAad4uTeBensk60oEaKeheiQTGKySgB
 
0 1839439872 linear 
 
Name:              mpathg
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        0
Event number:      8
Major, minor:      253, 6
Number of targets: 1
UUID: mpath-23532643461393666
 
0 734003200 multipath 2 0 0 0 1 1 A 0 1 0 65:16 A 0 
 
Name:              mpathf
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      9
Major, minor:      253, 5
Number of targets: 1
UUID: mpath-23863333438666165
 
0 629145600 multipath 2 39 0 0 1 1 E 0 2 0 8:64 F 1 8:208 F 1 
 
 
 

May  2 21:54:02 localhost kernel: end_request: I/O error, dev sdn, sector 169980416
May  2 21:54:03 localhost multipathd: sdb: remove path (uevent)
May  2 21:54:03 localhost multipathd: sdc: remove path (uevent)
May  2 21:54:03 localhost kernel: device-mapper: table: 253:5: multipath: error getting device
May  2 21:54:03 localhost kernel: device-mapper: ioctl: error adding target to table
May  2 21:54:03 localhost kernel: device-mapper: table: 253:5: multipath: error getting device
May  2 21:54:03 localhost kernel: device-mapper: ioctl: error adding target to table
May  2 21:54:03 localhost multipathd: mpathc: load table [0 629145600 multipath 1 queue_if_no_path 0 1 1 round-robin 0 1 1 8:176 1]
May  2 21:54:03 localhost multipathd: sdc: path removed from map mpathc
May  2 21:54:03 localhost multipathd: sdd: remove path (uevent)
May  2 21:54:03 localhost multipathd: sdg: remove path (uevent)
May  2 21:54:03 localhost multipathd: mpathh: load table [0 629145600 multipath 1 queue_if_no_path 0 1 1 round-robin 0 1 1 8:240 1]
May  2 21:54:03 localhost multipathd: sdg: path removed from map mpathh
May  2 21:54:03 localhost multipathd: sdf: remove path (uevent)
May  2 21:54:03 localhost multipathd: mpathi: load table [0 629145600 multipath 1 queue_if_no_path 0 1 1 round-robin 0 1 1 8:224 1]
May  2 21:54:03 localhost multipathd: sdf: path removed from map mpathi
May  2 21:54:03 localhost multipathd: sdi: remove path (uevent)
May  2 21:54:03 localhost multipathd: mpathg: load table [0 734003200 multipath 1 queue_if_no_path 0 1 1 round-robin 0 1 1 65:16 1]
May  2 21:54:03 localhost multipathd: sdi: path removed from map mpathg
May  2 21:54:03 localhost multipathd: sdj: remove path (uevent)
May  2 21:54:03 localhost multipathd: mpathj: load table [0 734003200 multipath 1 queue_if_no_path 0 1 1 round-robin 0 1 1 65:32 1]
May  2 21:54:03 localhost multipathd: sdj: path removed from map mpathj
May  2 21:54:03 localhost multipathd: sde: remove path (uevent)
May  2 21:54:03 localhost multipathd: mpathf: failed in domap for removal of path sde
May  2 21:54:03 localhost multipathd: uevent trigger error
May  2 21:54:03 localhost multipathd: sdh: remove path (uevent)
May  2 21:54:03 localhost multipathd: mpathe: load table [0 629145600 multipath 1 queue_if_no_path 0 1 1 round-robin 0 1 1 65:0 1]
May  2 21:54:03 localhost multipathd: sdh: path removed from map mpathe
May  2 21:54:03 localhost multipathd: mpathf: Entering recovery mode: max_retries=300
May  2 21:54:03 localhost multipathd: 8:64: mark as failed
May  2 21:54:03 localhost multipathd: mpathf: Entering recovery mode: max_retries=300
May  2 21:54:03 localhost multipathd: 8:64: mark as failed
May  2 21:55:19 localhost multipathd: reconfigure (SIGHUP)
May  2 21:55:19 localhost multipathd: mpathc: stop event checker thread (140270187837184)
May  2 21:55:19 localhost multipathd: mpathh: stop event checker thread (140270180927232)
May  2 21:55:19 localhost multipathd: mpathf: stop event checker thread (140270180894464)
May  2 21:55:19 localhost multipathd: mpathi: stop event checker thread (140270180861696)
May  2 21:55:19 localhost multipathd: mpathe: stop event checker thread (140270180828928)
May  2 21:55:19 localhost multipathd: mpathg: stop event checker thread (140270180796160)
May  2 21:55:19 localhost multipathd: mpathj: stop event checker thread (140270180763392)
May  2 21:55:19 localhost multipathd: mpathd: stop event checker thread (140270180730624)
May  2 21:55:20 localhost kernel: device-mapper: table: 253:11: multipath: error getting device
May  2 21:55:20 localhost kernel: device-mapper: ioctl: error adding target to table
May  2 21:55:20 localhost kernel: device-mapper: table: 253:11: multipath: error getting device
May  2 21:55:20 localhost kernel: device-mapper: ioctl: error adding target to table
May  2 21:55:20 localhost multipathd: mpatha: ignoring map
May  2 21:55:20 localhost multipathd: mpathc: load table [0 629145600 multipath 1 queue_if_no_path 0 1 1 round-robin 0 1 1 8:176 1]
May  2 21:55:20 localhost multipathd: mpathd: load table [0 734003200 multipath 1 queue_if_no_path 0 1 1 round-robin 0 1 1 8:192 1]
May  2 21:55:20 localhost multipathd: mpathg: load table [0 734003200 multipath 1 queue_if_no_path 0 1 1 round-robin 0 1 1 65:16 1]
May  2 21:55:20 localhost multipathd: mpathe: load table [0 629145600 multipath 1 queue_if_no_path 0 1 1 round-robin 0 1 1 65:0 1]
May  2 21:55:20 localhost multipathd: mpathh: load table [0 629145600 multipath 1 queue_if_no_path 0 1 1 round-robin 0 1 1 8:240 1]
May  2 21:55:20 localhost multipathd: mpathi: load table [0 629145600 multipath 1 queue_if_no_path 0 1 1 round-robin 0 1 1 8:224 1]
May  2 21:55:20 localhost multipathd: mpathj: load table [0 734003200 multipath 1 queue_if_no_path 0 1 1 round-robin 0 1 1 65:32 1]
May  2 21:55:20 localhost multipathd: mpathf: map in use
May  2 21:55:20 localhost multipathd: mpathf: unable to flush devmap
May  2 21:55:20 localhost multipathd: mpathc: event checker started
May  2 21:55:20 localhost multipathd: mpathd: event checker started
May  2 21:55:20 localhost multipathd: mpathg: event checker started
May  2 21:55:20 localhost multipathd: mpathe: event checker started
May  2 21:55:20 localhost multipathd: mpathh: event checker started
May  2 21:55:20 localhost multipathd: mpathi: event checker started
May  2 21:55:20 localhost multipathd: mpathj: event checker started
May  2 21:55:20 localhost multipathd: mpathf: event checker started

Comment 2 RHEL Program Management 2012-05-07 04:05:35 UTC
Since the RHEL 6.3 External Beta has begun and this bug remains
unresolved, it has been rejected, as it is not proposed as an
exception or blocker.

Red Hat invites you to ask your support representative to
propose this request, if appropriate and relevant, in the
next release of Red Hat Enterprise Linux.

Comment 3 Ben Marzinski 2012-05-07 17:32:31 UTC
I'm not exactly sure what your question is.  Are you wondering why 7 of your devices have working paths, but one of them doesn't?

Looking at mpathf

mpathf (23863333438666165) dm-5 ,
size=300G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=enabled
  |- #:#:#:#  -   #:#   failed faulty running
  `- 19:0:0:3 sdn 8:208 failed faulty running


I can see one path that is gone, but couldn't get cleaned up. That's this line

  |- #:#:#:#  -   #:#   failed faulty running

That will get removed as soon as you reload the device with a working path. The
other path simply looks like it is not working. Do you know if /dev/sdn is
actually working? If it isn't, there's nothing multipath can do.
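
For example, that reload can be triggered by hand (a sketch, assuming the stock multipath tools; multipathd normally does this on its own when a path event comes in):

# multipath -r

This forces the device maps to be rebuilt, which drops the orphaned "#:#:#:#" entry once a usable path is present.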

You can try running

# sg_turs /dev/sdn
# echo $?

If the result is 0, then sg_turs says that the path is ready.  This mimics what the tur path checker does. sg_turs is from the sg3_utils package.

You can also try

# dd if=/dev/sdn of=/dev/null bs=1K count=1 iflag=direct

If dd successfully reads from the device, then the directio path checker should
say that it is active.

If one of these tests doesn't say that the device is active, then there really isn't anything that multipathd can do to make it work.  On the other hand, if the device is working according to these tests, then please let me know, since the path checkers should find the same results.
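
A related check (a sketch, assuming multipathd is still running) is to ask the daemon what it currently thinks of the paths and which checker it is configured to use:

# multipathd -k"show paths"
# multipathd -k"show config" | grep path_checker

The first command lists every path with its device-mapper and checker state; the second shows the configured path_checker (tur, directio, ...), so you know which of the two manual tests above matches what multipathd itself does.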

Comment 4 25641463 2012-05-08 01:05:02 UTC
Thanks for your reply. I am testing an SRP target for HA. There are two paths connected to two different targets, and every session has 8 disks, so I do not understand why all of mpathf's paths are faulty while one of the sessions is still active. After rebooting the initiator, all disks become active again. I will try your recommendation. Thanks again.

Comment 6 25641463 2012-05-08 02:49:30 UTC
By the way, it printed the following while I was testing, and multipathd is dead:


device-mapper: table: 253:10: multipath: error getting device
device-mapper: ioctl: error adding target to table
device-mapper: table: 253:10: multipath: error getting device
device-mapper: ioctl: error adding target to table
multipathd[2545]: segfault at 0 ip 000000381760ef2c sp 00007f353ceeac90 error 4 in libmultipath.so[3817600000+48000]

[root@localhost ~]# service multipathd status
multipathd dead but pid file exists

Comment 7 Ben Marzinski 2012-05-09 19:11:51 UTC
Well, a crash is always a bug.  Can you recreate this? If so, could you please capture a core dump from the crash and upload it?

If it's too big to attach to the bugzilla directly, you can follow the
instructions here

https://access.redhat.com/kb/docs/DOC-2113

to upload it to our dropbox.  Let me know if you have any problems getting a core
file or uploading it.
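
For reference, a minimal way to make sure a core file actually gets written when multipathd crashes (a sketch; where the core lands depends on kernel.core_pattern and on abrt, if it is installed):

# ulimit -c unlimited
# service multipathd restart
# cat /proc/sys/kernel/core_pattern

Run the restart from the same shell where you raised the ulimit, reproduce the takeover test, and look for the core file at the location named by core_pattern once multipathd dies.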

Comment 8 Ben Marzinski 2015-10-06 02:25:12 UTC
Is this bug still present?

Comment 9 Ben Marzinski 2016-08-11 19:21:22 UTC
If you can recreate this and capture a core dump, please reopen this bug.

