Bug 442583 - Incorrect priority values displayed with "multipath -ll"
Status: CLOSED NOTABUG
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: device-mapper-multipath
Version: 4.7
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: Dave Wysochanski
QA Contact: Corey Marthaler
 
Reported: 2008-04-15 13:22 EDT by Dave Wysochanski
Modified: 2010-01-11 21:31 EST
CC List: 13 users

Doc Type: Bug Fix
Last Closed: 2008-04-16 13:43:33 EDT

Attachments
Bug triage for 'prio=2' display (8.32 KB, text/plain)
2008-04-15 13:23 EDT, Dave Wysochanski
Description Dave Wysochanski 2008-04-15 13:22:59 EDT
Description of problem:

The [prio=2] line below is incorrect.  It appears to be the result of the
priority callout not being called.  I am now having a harder time reproducing
it, though - it might have to do with some sequence of running/not running
multipathd while recreating the multipath maps with "multipath -v3", or with
unloading/reloading the qla2300 module.
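
For context: in this version of the tools, each path's priority comes from an
external callout binary configured in /etc/multipath.conf - multipath runs the
callout once per path and reads an integer priority back on stdout.  A minimal
sketch of the relevant configuration (the callout binary name and device
section here are assumptions for illustration, not taken from the affected
machine):

device {
        vendor       "HP"
        prio_callout "/sbin/mpath_prio_hp_sw /dev/%n"
}

If the callout never runs, each path is left with its default priority, which
is consistent with the [prio=2] lines in the incorrect output below.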

Here's incorrect output:
mpath3 (3600805f30005e240c57054bda29000d2)
[size=1 GB][features="1 queue_if_no_path"][hwhandler="1 hp-sw"]
\_ round-robin 0 [prio=2][active]
 \_ 5:0:0:23 sdad 65:208 [active][ready]
 \_ 4:0:0:23 sdd  8:48   [active][ready]
\_ round-robin 0 [prio=2][enabled]
 \_ 5:0:1:23 sdaq 66:160 [active][ghost]
 \_ 4:0:1:23 sdq  65:0   [active][ghost]


Correct output:
mpath3 (3600805f30005e240c57054bda29000d2)
[size=1 GB][features="1 queue_if_no_path"][hwhandler="1 hp-sw"]
\_ round-robin 0 [prio=8][active]
 \_ 5:0:0:23 sdad 65:208 [active][ready]
 \_ 4:0:0:23 sdd  8:48   [active][ready]
\_ round-robin 0 [prio=4][enabled]
 \_ 5:0:1:23 sdaq 66:160 [active][ghost]
 \_ 4:0:1:23 sdq  65:0   [active][ghost]
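
The arithmetic behind those numbers: the [prio=N] shown for a path group is
the sum of the per-path priorities returned by the callout, so two paths at
priority 4 display as prio=8, two at priority 2 as prio=4, and two left at a
default of 1 (callout never run) as prio=2.  A minimal sketch of that
summation, assuming the sum-of-paths behavior just described (illustrative
only, not the actual libmultipath source):

#include <stdio.h>

struct path {
        int priority;  /* set from the prio callout's stdout; defaults to 1 here */
};

/* Hypothetical helper: a group's displayed [prio=N] is the sum of its
 * paths' priorities. */
static int pathgroup_prio(const struct path *paths, int npaths)
{
        int i, sum = 0;
        for (i = 0; i < npaths; i++)
                sum += paths[i].priority;
        return sum;
}

int main(void)
{
        struct path ok[2]    = { { 4 }, { 4 } };  /* callout ran: prio=8 */
        struct path stale[2] = { { 1 }, { 1 } };  /* callout skipped: prio=2 */

        printf("[prio=%d]\n", pathgroup_prio(ok, 2));
        printf("[prio=%d]\n", pathgroup_prio(stale, 2));
        return 0;
}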


Version-Release number of selected component (if applicable):
device-mapper-multipath-0.4.5-31.el4

How reproducible:
I just had it happen, and it was reproducible.  Then I restarted multipathd
(or something similar) and now I cannot get it to reproduce.

Steps to Reproduce:
1.
2.
3.
  
Actual results:
"[prio=2]" output of multipath -ll

Expected results:
Proper priority value (in my case '4' or '8')

Additional info:
At this point I am not ruling out user error, though I did see this before and
started debugging it, so I have some notes.  This was found while working on
bug 214809 (dm-hp userspace update for RHEL 4.7).
Comment 1 Dave Wysochanski 2008-04-15 13:23:00 EDT
Created attachment 302495 [details]
Bug triage for 'prio=2' display
Comment 2 Dave Wysochanski 2008-04-16 13:43:33 EDT
OK, summarizing so I remember in case we see this again.

In talking with Ben, we suspect this has already been fixed.  The problem is
thought to have been introduced with the 'max_fds' fix for another bz.  That
fix was done in January 2008, just after U6:
commit 4ced5657f5361cca01bf8a84abb226b28dc19f47
Author: Benjamin Marzinski <bmarzin@redhat.com>
Date:   Tue Jan 15 23:59:04 2008 +0100

    [libmultipath] fix the "too many files" error
    
    Added a max_fds parameter to /etc/multipath.conf. This allows
    you to set the maximum number of open fds that multipathd can use, like with
    ulimit -n.  Also added some code so that multipath closes the file descriptor
    after it's used by the checker function, since multipath doesn't need to keep
    them always open like multipathd does.
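
For reference, the parameter that commit adds is set in the defaults section
of /etc/multipath.conf; the value below is only an illustration, not a
recommendation:

defaults {
        max_fds 8192
}

The other half of the change is behavioral: the multipath command closes each
device fd as soon as the checker function has used it, while multipathd keeps
its fds open across checker runs.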


The only thing I was somewhat in doubt about: in my earlier analysis, I found
that U6 was where I started to see the problem.  U6 is:
RHEL-4/U6/AS/i386/tree/RedHat/RPMS/device-mapper-multipath-0.4.5-27.RHEL4.i386.rpm

I just did a quick test on U6, though, and could not reproduce the problem.  I
looked at the fix, and it does seem to make sense based on the rest of my
earlier analysis and the other data we have.  So I will close this on that
basis, and on the assumption that my earlier U6 result was user error.
Comment 3 Dave Wysochanski 2008-04-16 13:48:38 EDT
Just to be clear, this is not believed to be in any release.
