Red Hat Bugzilla – Bug 142263
Only 16 EMC powerpath LUNs usable with LVM1
Last modified: 2007-11-30 17:07:05 EST
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.3)
Description of problem:
vgextend accepts /dev/emcpowera through /dev/emcpowerp, but refuses any
devices beyond that (/dev/emcpowerq, ...)
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. set up LVM, create one or more volume groups, etc.
2. set up EMC PowerPath, create more than 16 LUNs
3. add /dev/emcpowera through /dev/emcpowerp to these VGs
4. vgextend vgsgdb2data01 /dev/emcpowerq /dev/emcpowerr
Actual Results: # vgextend vgsgdb2data01 /dev/emcpowerq /dev/emcpowerr
vgextend -- INFO: maximum logical volume size is 2 Terabyte
vgextend -- ERROR: no physical volumes usable to extend volume group
Expected Results: /dev/emcpowerq and /dev/emcpowerr added to VG
The error message seems to be the generic error vgextend prints whenever it fails.
There also seem to be several other problems involving LVM1 + EMC
PowerPath, e.g. LVM cannot distinguish between the "raw" SCSI devices
(sda, sdb, ...) and the bound meta devices (emcpowera, ...); it happily
takes whichever it finds first, which defeats the redundancy of multipathing.
I need to add a filter to the device discovery code in LVM1
in order to avoid access to the 'raw' devices when the PowerPath
devices provide access to the same storage.
The other issue, the limited number of devices (16), needs tweaking
in the same discovery code.
Hopefully I will get around to this before Christmas ;)
I still have no copy of PowerPath here to test with myself (EMC claims
they are pushing delivery).
I'll push any fixes to you to test on the Bladecenter configuration first.
I just received some more information from the customer, based on which
I would put the "max 16 devices" issue on hold: the devices
do *not* show up in /proc/partitions. Sorry for the confusion...
The emcpower*/sd* device issue is still open (and much nastier);
shall I create a new bug for that, so we can track it separately?
In light of the PowerPath /proc/partitions bug, yes, please create
a new bug in order to track the /dev/sd* filter issue separately
from this one.
Background on the 16 max limit issue:
----- Forwarded message from "goggin, edward" <email@example.com> -----
Assuming this is a 2.4 kernel, the 16-device limit likely applies to the
emcpower whole-device names which show up in /proc/partitions. Only 16
emcpower whole-device names show up in this file due to the code in
disk_name() in fs/partitions/check.c, which both (1) enforces a very
simplistic naming policy on all devices other than SCSI, IDE, and a few
privileged others, and (2) assumes a driver can only manage a single
major number.
This single-major-number requirement is not met by the PowerPath driver
since, like the SCSI disk driver, it manages 16 major numbers. Given
enough LUNs in your SAN, you will see the device name sequence
(emcpowera, ..., emcpowerp) repeated multiple times in /proc/partitions,
each time with a different major number, from 247 down to 232 inclusive.
Sixteen show up for each major number since, like SCSI, PowerPath's dev_t
uses 4 minor bits to indicate the partition, leaving only 4 bits in
each 8-bit minor to specify the whole-device instance.
This is all done better in the 2.6 kernel, since each driver is allowed
to determine the name of the devices it manages and record the
name in the gendisk entry (which is per minor, not per major) for each
device.
SuSE has addressed this problem in its 2.4-based SLES 8 distribution
by introducing a "driver name" callout in the per-major gendisk structure
and calling this callout in disk_name().
We have been aware of this issue for over 2 1/2 years and have made Red Hat
aware of it for the same length of time. My suspicion has been that,
due to Red Hat's goal of providing backward compatibility for kernel modules
in its 2.4-based enterprise distributions, it was more difficult to fix.
Created attachment 108192
OK, that makes sense now. It isn't exactly nice, though, that the device
naming in the EMC admin tools (where you can get emcpowerq and above)
differs from what /proc/partitions shows.
Is there any chance of ever getting this fixed within RHEL3?
Followup Bug for device filtering:
PM ACK for U6
*** This bug has been marked as a duplicate of 79086 ***
A fix for this problem has just been committed to the RHEL3 U6
patch pool this evening (in kernel version 2.4.21-34.EL).
Propagating acks from bug 79086.
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.