Description of problem:
EMC CLARiiON storage arrays present a fake volume as SCSI LUN 0 if no real volume is assigned that LUN. This fake volume always has its model string set to "LUNZ". Since it is not a real LUN, it should be ignored by multipath/multipathd. The following entry in /etc/multipath.conf should have done the trick:

blacklist {
        device {
                vendor DGC
                product LUNZ
        }
}

But it doesn't. When I run "multipath -ll", the following error message appears once for each /dev/sdX device assigned to the fake LUNs:

sdb: checker msg is "tur checker reports path is down"

This, in turn, leads Nagios to believe something is wrong with the fibre channel paths.

Version-Release number of selected component (if applicable):
device-mapper-multipath-0.4.7-34.el5_5.6

How reproducible:
100%

Steps to Reproduce:
1. Connect a RHEL 5.5 host to an EMC CLARiiON storage array that exports the LUNZ fake volume
2. Run "multipath -ll"

Actual results:
An error message appears for each path to the LUNZ fake volume

Expected results:
No error message appears; the paths to the fake volume are ignored completely

Additional info:
Will attach the multipath.conf file with the blacklist entry and the output from "multipath -v 3 -ll" run on the machine with that multipath.conf file in use.
Created attachment 467926 [details] /etc/multipath.conf from a production server that exhibits the problem
Created attachment 467927 [details] Output from "multipath -v 3 -ll" run on the production server in question
Multipath did blacklist your LUNZ devices. However, multipath only applies devnode blacklisting before gathering path information; device and wwid blacklisting are applied afterwards, and that error message is emitted while multipath is gathering the path information. It should be possible to move the device blacklisting up to immediately after the device type has been determined.
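To illustrate the ordering described in the previous comment, here is a toy Python sketch. The names, rules, and structure are mine, not multipath's internals; it only demonstrates why a device blacklist entry cannot suppress the checker message, while a devnode entry can.

```python
# Hypothetical simulation of multipath's blacklist ordering:
# devnode rules are checked BEFORE path discovery, but device
# (vendor/product) rules only AFTER, so the checker has already
# printed its message for a LUNZ path by the time it is filtered.
import re

DEVNODE_RULES = [r"^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"]
DEVICE_RULES = [("DGC", "LUNZ")]  # the blacklist entry from the report

def discover(devnode):
    # Stand-in for path discovery; for a LUNZ path the TUR checker
    # would print "tur checker reports path is down" at this point.
    return {"vendor": "DGC", "product": "LUNZ"}

def process(devnode):
    for rule in DEVNODE_RULES:
        if re.match(rule, devnode):
            return "blacklisted (devnode, no checker run)"
    info = discover(devnode)  # checker message appears here
    for vendor, product in DEVICE_RULES:
        if info["vendor"] == vendor and info["product"] == product:
            return "blacklisted (device, but only after the checker ran)"
    return "kept"

print(process("sdb"))
# → blacklisted (device, but only after the checker ran)
print(process("sr0"))
# → blacklisted (devnode, no checker run)
```

The proposed fix amounts to moving the device-rule check to just after the vendor/product strings become known, before the checker runs.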
I believe that I'm seeing the same problem on a RHEL 6.1 installation. The server has these lines (among others) in multipath.conf:

blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^sda[0-9]*$"
        device {
                vendor "DELL.*"
                product "PERC.*"
        }
        device {
                vendor "TEAC.*"
                product "DVD.*"
        }
}

Still, a map like this shows up after boot:

36782bcb01fbda70015810024681f40e3 dm-0 DELL,PERC H700
size=136G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 0:2:1:0 sdb 8:16 active ready running

A strange thing: if I flush the 36782bcb01fbda70015810024681f40e3 map after boot and then run "multipath" again, the filtering works. If I then remove the DELL filter and run "multipath" again, the 36782bcb01fbda70015810024681f40e3 map returns. So: something about multipath's filtering does not work at boot time (but works fine afterwards).
Ah - in my case, it seems that a new initramfs needs to be created after changes to multipath.conf. Gotcha.
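For anyone hitting the same symptom: a plausible fix sequence, assuming the stock RHEL initramfs tools are in place (verify the image paths against your installed kernel before running; these commands require root).

```shell
# After editing /etc/multipath.conf, rebuild the initramfs so the
# early-boot copy of the configuration matches the one on the root fs.

# RHEL 6 (dracut-based):
dracut --force /boot/initramfs-$(uname -r).img $(uname -r)

# RHEL 5 (mkinitrd-based):
# mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)

# Then flush the stale maps and let multipath re-evaluate the blacklist:
multipath -F
multipath -v2
```

A reboot afterwards confirms whether the map created at boot time now honours the blacklist.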
This request was evaluated by Red Hat Product Management for inclusion in a Red Hat Enterprise Linux release. Product Management has requested further review of this request by Red Hat Engineering, for potential inclusion in a Red Hat Enterprise Linux release for currently deployed products. This request is not yet committed for inclusion in a release.