Bug 661984

Summary: vendor/product blacklist entry doesn't work.
Product: Red Hat Enterprise Linux 5
Reporter: Tore Anderson <tore>
Component: device-mapper-multipath
Assignee: Ben Marzinski <bmarzins>
Status: CLOSED NOTABUG
QA Contact: Storage QE <storage-qe>
Severity: medium
Priority: low
Version: 5.5
CC: agk, bdonahue, bmarzins, bmr, christophe.varoqui, dwysocha, heinzm, junichi.nomura, kueda, lmb, prajnoha, prockai, starlight, troels
Target Milestone: rc
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Last Closed: 2014-02-06 16:57:45 UTC
Attachments:
- /etc/multipath.conf from a production server that exhibits the problem
- Output from "multipath -v 3 -ll" run on the production server in question

Description Tore Anderson 2010-12-10 09:01:36 UTC
Description of problem:

EMC CLARiiON storage arrays will present a fake volume as SCSI LUN 0 if no real volume is assigned to that LUN.  This fake volume always has its product string set to "LUNZ".  Since this isn't a real LUN, it should be ignored by multipath/multipathd.

The following code in /etc/multipath.conf should have done the trick:

blacklist {
	device {
		vendor			DGC
		product			LUNZ
	}
}

But it doesn't.  When I run "multipath -ll", the following error message appears, once for each /dev/sdx device assigned to the fake LUNs.

sdb: checker msg is "tur checker reports path is down"

This, in turn, confuses Nagios into believing something is wrong with the fibre channel paths.
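For illustration, the vendor/product strings in a blacklist device section are regular expressions matched against each path's SCSI inquiry data. A minimal sketch of that matching (a simplification for clarity, not multipath's actual code):

```python
import re

def device_blacklisted(vendor, product, blacklist):
    """Sketch: a path is blacklisted when an entry's vendor and product
    regular expressions both match the path's SCSI inquiry strings."""
    return any(re.match(e["vendor"], vendor) and re.match(e["product"], product)
               for e in blacklist)

# The blacklist entry from the multipath.conf fragment above:
blacklist = [{"vendor": "DGC", "product": "LUNZ"}]

print(device_blacklisted("DGC", "LUNZ", blacklist))    # True
print(device_blacklisted("DGC", "RAID 5", blacklist))  # False
```

So the entry itself should match the LUNZ paths; the question is where in multipath's flow the check is applied.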

Version-Release number of selected component (if applicable):

device-mapper-multipath-0.4.7-34.el5_5.6

How reproducible:

100%


Steps to Reproduce:
1. Connect a RHEL 5.5 host to an EMC CLARiiON storage array that exports the LUNZ fake volume
2. Run multipath -ll
  
Actual results:

An error message appears for each of the paths to the LUNZ fake volume


Expected results:

No error message should appear and the paths should have been ignored completely

Additional info:

Will attach a multipath.conf file with the blacklist entry and the output from "multipath -v 3 -ll" when run on the machine with that multipath.conf file in use.

Comment 1 Tore Anderson 2010-12-10 09:03:16 UTC
Created attachment 467926 [details]
/etc/multipath.conf from a production server that exhibits the problem

Comment 2 Tore Anderson 2010-12-10 09:04:30 UTC
Created attachment 467927 [details]
Output from "multipath -v 3 -ll" run on the production server in question

Comment 3 Ben Marzinski 2010-12-14 00:48:51 UTC
Multipath did blacklist your LUNZ devices. Multipath only does the devnode blacklisting before getting the path information; it does the device and WWID blacklisting afterwards, and that error message is printed while multipath is getting the path information.  It should be possible to move the device blacklisting up to immediately after the device type has been determined.
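The ordering described above can be sketched roughly as follows (a hypothetical simplification for illustration; the function names are not multipath's internal API):

```python
import re

def devnode_blacklisted(devnode, bl):
    # Checked before any SCSI I/O is done on the path.
    return any(re.match(p, devnode) for p in bl.get("devnode", []))

def device_blacklisted(vendor, product, bl):
    # Can only be checked once the SCSI inquiry data is known.
    return any(re.match(e["vendor"], vendor) and re.match(e["product"], product)
               for e in bl.get("device", []))

def scsi_inquiry(devnode):
    # Stand-in for the SCSI inquiry step; pretend every path is a LUNZ.
    return "DGC", "LUNZ"

def run_checker(devnode):
    # Stand-in for the TUR path checker; LUNZ paths always report down.
    return f'{devnode}: checker msg is "tur checker reports path is down"'

def discover(devnode, bl, log):
    if devnode_blacklisted(devnode, bl):
        return None                      # filtered before any SCSI I/O
    vendor, product = scsi_inquiry(devnode)
    log.append(run_checker(devnode))     # checker runs during path info...
    if device_blacklisted(vendor, product, bl):
        return None                      # ...so this filter comes too late
    return devnode

bl = {"devnode": [r"^(ram|loop|fd)\d*"],
      "device": [{"vendor": "DGC", "product": "LUNZ"}]}
log = []
print(discover("sdb", bl, log))  # None: the path is blacklisted...
print(log)                       # ...but the checker message was still logged
```

Moving the device-blacklist test to just after the inquiry step (before the checker runs) would suppress the message, which is what Comment 3 proposes.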

Comment 4 Troels Arvin 2011-06-08 21:26:40 UTC
I believe that I'm seeing the same problem on a RHEL 6.1 installation:

The server has these lines (among others) in multipath.conf:

blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
    devnode "^sda[0-9]*$"

    device {
        vendor  "DELL.*"
        product "PERC.*"
    }
    device {
        vendor  "TEAC.*"
        product "DVD.*"
    }
}

Still, a map like this shows up after boot:
36782bcb01fbda70015810024681f40e3 dm-0 DELL,PERC H700
size=136G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 0:2:1:0  sdb  8:16   active ready running

A strange thing: if I flush the 36782bcb01fbda70015810024681f40e3 map after boot and then run "multipath" again, the filtering works. If I then remove the DELL filter and run "multipath" again, the 36782bcb01fbda70015810024681f40e3 map returns.

So: Something about multipath's filtering does not work at boot-time (but will subsequently work fine).

Comment 5 Troels Arvin 2011-06-08 23:09:26 UTC
Ah - in my case, it seems that a new initramfs needs to be created after changes to multipath.conf. Gotcha.
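For reference, since multipath can run from the early-boot copy of the config inside the initramfs, the image has to be regenerated after editing /etc/multipath.conf. On RHEL 6 that is done with dracut (commands shown as an illustration; paths assume the default image locations):

```shell
# RHEL 6: rebuild the initramfs so it picks up the updated /etc/multipath.conf
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)

# RHEL 5 uses mkinitrd instead:
# mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)
```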

Comment 6 RHEL Program Management 2014-01-29 10:38:22 UTC
This request was evaluated by Red Hat Product Management for inclusion
in a Red Hat Enterprise Linux release.  Product Management has
requested further review of this request by Red Hat Engineering, for
potential inclusion in a Red Hat Enterprise Linux release for currently
deployed products.  This request is not yet committed for inclusion in
a release.