Bug 830883 - lvm2 not ignoring multipath managed iscsi volumes
Status: CLOSED NOTABUG
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.2
Assigned To: LVM and device-mapper development team
QA Contact: Cluster QE
Reported: 2012-06-11 11:01 EDT by Christian Becker
Modified: 2012-06-12 05:14 EDT

Doc Type: Bug Fix
Last Closed: 2012-06-12 05:14:07 EDT
Type: Bug

Attachments
/dev/disk/by-id listing (4.19 KB, text/plain)
2012-06-11 13:02 EDT, Christian Becker

Description Christian Becker 2012-06-11 11:01:18 EDT
Description of problem:

We have an iSCSI multipath setup, and every time we run LVM-related commands such as lvs, vgs, or pvs we get several I/O errors on the multipath ghost devices.

Version-Release number of selected component (if applicable):

lvm2-2.02.87-6.el6.x86_64

How reproducible:

Connect to an iSCSI device, set up multipath, and execute lvs.


Actual results:

[root@srv10 ~]# lvs
  /dev/sdk: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdk: read failed after 0 of 4096 at 32985348767744: Input/output error
  /dev/sdk: read failed after 0 of 4096 at 32985348825088: Input/output error
  /dev/sdk: read failed after 0 of 4096 at 4096: Input/output error
  /dev/sdl: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdl: read failed after 0 of 4096 at 32985348767744: Input/output error
  /dev/sdl: read failed after 0 of 4096 at 32985348825088: Input/output error
  /dev/sdl: read failed after 0 of 4096 at 4096: Input/output error
  /dev/sdm: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdm: read failed after 0 of 4096 at 450882371584: Input/output error
  /dev/sdm: read failed after 0 of 4096 at 450882437120: Input/output error
  /dev/sdm: read failed after 0 of 4096 at 4096: Input/output error
  /dev/sdp: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdp: read failed after 0 of 4096 at 450882371584: Input/output error
  /dev/sdp: read failed after 0 of 4096 at 450882437120: Input/output error
  /dev/sdp: read failed after 0 of 4096 at 4096: Input/output error
  LV                      VG              Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  root                    sysvol          -wi-ao  10.00g                                      
  swap                    sysvol          -wi-ao   4.00g                                      
  varlog                  sysvol          -wi-ao  10.00g                                      

Expected results:

[root@srv10 ~]# lvs
  LV                      VG              Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  root                    sysvol          -wi-ao  10.00g                                      
  swap                    sysvol          -wi-ao   4.00g                                      
  varlog                  sysvol          -wi-ao  10.00g                                      


Additional info:

[root@srv10 ~]# multipath -ll
mpathc (360080e50002d4f0c00000cea4fd64609) dm-9 IBM,1746      FAStT
size=420G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| |- 14:0:0:1 sdn 8:208 active ready running
| `- 11:0:0:1 sdq 65:0  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 13:0:0:1 sdm 8:192 active ghost running
  `- 12:0:0:1 sdp 8:240 active ghost running
mpathb (360080e50002d4ea0000001954f3da062) dm-8 IBM,1746      FAStT
size=30T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| |- 14:0:0:0 sdj 8:144 active ready running
| `- 11:0:0:0 sdo 8:224 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 13:0:0:0 sdk 8:160 active ghost running
  `- 12:0:0:0 sdl 8:176 active ghost running

In the multipath documentation, there is a section regarding lvm: http://docs.redhat.com/docs/de-DE/Red_Hat_Enterprise_Linux/6/html/DM_Multipath/multipath_logical_volumes.html

The filter statement described there doesn't have any effect at all.

RHEL 5 has an LVM config setting: multipath_component_detection = 1
This works on RHEL 5, but is ignored on RHEL 6.
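(For reference, in lvm2 builds that support this option it lives in the devices section of /etc/lvm/lvm.conf; this is a sketch, and availability depends on the lvm2 version installed:)

```
# /etc/lvm/lvm.conf (fragment) - only honored by lvm2 builds that
# implement multipath component detection
devices {
    multipath_component_detection = 1
}
```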
Comment 2 Bryn M. Reeves 2012-06-11 11:39:59 EDT
Could you please post the filter configuration you are using?

Depending on the style used it's possible that you are not filtering out some of the additional alias names that modern udev maintains for devices (e.g. the /dev/disk/by-* trees).

Personally I do not recommend the style found in the documentation you reference (reject first, then accept) since it is vulnerable to these types of problems - it is normally more robust to accept only the patterns you do wish to use and to reject all others ("r|.*|").  
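(A sketch of that accept-first style; the device names here are illustrative assumptions and must be adjusted to the local disks and multipath aliases:)

```
# /etc/lvm/lvm.conf (fragment) - illustrative only; sda and the
# mpath* names must match the local system
filter = [ "a|^/dev/sda[0-9]*$|", "a|^/dev/mapper/mpath.*|", "r|.*|" ]
```

LVM evaluates the patterns in order and the first match wins, so the trailing "r|.*|" only rejects devices that no accept pattern claimed.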

The multipath_component_detection filter only works with active multipath devices (it will inspect the holders directories in sysfs for candidate devices to see if they are owned by a device-mapper device and then checks for the mpath UUID prefix).

This means that if LVM is invoked before multipath devices have been set up it will be unable to detect that a path is part of a multipath device and will proceed to use it if the filter configuration would permit the device.
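(A rough sketch of the detection logic described above: a block device counts as a multipath component if one of its sysfs "holders" is a device-mapper device whose UUID carries the mpath- prefix. The sysfs layout used here is an assumption based on this comment, not taken from the lvm2 source; SYSFS is overridable so the logic can be exercised against a mock tree:)

```shell
#!/bin/sh
# Sketch: is this SCSI path owned by a multipath map?
is_mpath_component() {
    dev="$1"                      # kernel name, e.g. sdk
    sysfs="${SYSFS:-/sys}"
    # Each entry under holders/ is a device stacked on top of $dev.
    for holder in "$sysfs/block/$dev/holders"/*; do
        [ -r "$holder/dm/uuid" ] || continue
        case "$(cat "$holder/dm/uuid")" in
            mpath-*) return 0 ;;  # held by a multipath map
        esac
    done
    return 1                      # no multipath holder found
}
```

Usage would be along the lines of `is_mpath_component sdk && echo "skip: multipath component"` - which also illustrates the caveat above: before the multipath maps exist, the holders directory is empty and the check cannot succeed.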
Comment 3 Christian Becker 2012-06-11 12:00:50 EDT
My current filter is as follows:

filter = [ "r/disk/", "r/sd.*/", "a/.*/" ]

I also found a filter with the order you've described:

filter = [ "a|/dev/sda.*|", "a|/dev/disk/by-id/.*|", "r|.*|" ]

This seems to work for lvs, vgs, and pvs, but you can't use the /dev/mapper paths for physical volume creation. I think this could easily be fixed by adding /dev/mapper to the filter, but I can't test it.
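(The untested fix suggested here might look something like the following; the extra /dev/mapper accept pattern is an assumption, not a verified configuration:)

```
filter = [ "a|/dev/sda.*|", "a|/dev/mapper/.*|", "a|/dev/disk/by-id/.*|", "r|.*|" ]
```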

From my tests, multipath_component_detection had no effect at all; it wasn't even included in the default config. I only found it on an older RHEL 5 machine that is not affected by this issue.
Comment 4 Bryn M. Reeves 2012-06-11 12:52:35 EDT
Actually I'd expect that filter to have the same problem since it is accepting all disk symlinks in the /dev/disk/by-id tree (which normally includes all disks on the system).

Do you have any custom udev rules in place on the system?

I am not able to reproduce the behaviour you see with pvcreate - if I include "/dev/disk/by-id" in any accept clause I am able to create new labels on the device (which is the expected behaviour since the device node can be reached via that path).

It sounds like there may be some further configuration problems on the system that need to be tracked down and addressed which could be better handled by raising a support case than via bugzilla.

If you have active Red Hat Enterprise Linux entitlements for the affected systems please open a case via your support representative or using the online tools at the Red Hat customer portal:

  https://access.redhat.com/support/cases/new
Comment 5 Christian Becker 2012-06-11 13:02:02 EDT
Created attachment 590987
/dev/disk/by-id listing
Comment 6 Christian Becker 2012-06-11 13:16:29 EDT
Thanks for your quick reply!

This is no longer a bug for me, because the second filter I posted fixes the issue - at least for me. But it does seem to be an error in the documentation.

The second filter does not check the inactive disks, because they're not in /dev/disk/by-id. I have just attached a listing of my /dev/disk/by-id tree; you can compare it with the multipath output from my initial post.

> Do you have any custom udev rules in place on the system?

No.

Regarding pvcreate, I think this is my fault. I *think* it won't work with /dev/mapper paths, but I haven't tested it. It should definitely work with 'pvcreate /dev/disk/by-id/$foo', so this is expected and not an issue at all.

As I previously noted, this is not a real issue, but I think someone should review the documentation regarding LVM behavior in multipath environments, as the documented filter does not work as expected.

Regards,
Christian
Comment 7 Bryn M. Reeves 2012-06-12 05:14:07 EDT
pvcreate works fine on /dev/mapper entries:

# pvcreate /dev/mapper/datalunp4 
  Physical volume "/dev/mapper/datalunp4" successfully created

As I said, I think you still have some outstanding problems in your environment if you are seeing strange behaviour like this, but I'll close this NOTABUG now, as I do not think there is any defect here to be addressed.

I have a draft documentation bug to explain the interaction of LVM filters with udev aliases in the multipath guide and will file that today.
