
Bug 855448

Summary: DM RAID: Bad table argument could cause kernel panic
Product: Red Hat Enterprise Linux 6
Component: kernel
Version: 6.3
Status: CLOSED ERRATA
Severity: high
Priority: high
Reporter: Jonathan Earl Brassow <jbrassow>
Assignee: Jonathan Earl Brassow <jbrassow>
QA Contact: Petr Beňas <pbenas>
Docs Contact:
CC: cmarthal, czhang, pbenas, pstehlik
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: kernel-2.6.32-328.el6
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-02-21 06:35:01 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Jonathan Earl Brassow 2012-09-07 19:22:09 UTC
The 'rebuild' parameter takes an index as an argument - an index that starts at '0'.  However, the code that checks this value is checking for 'value > raid_disks'.  This means that if 'value == raid_disks' and 'value' is used to access the device array, the bounds of the array will be blown - potentially causing a kernel panic.  It certainly won't work as expected.

This check needs to be changed to 'value >= raid_disks'.
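
For illustration only, here is a minimal, self-contained sketch of the off-by-one in the bounds check. This is not the actual dm-raid patch; the function and constant names below are made up for the example. With 2 raid devices, the valid 'rebuild' indices are 0 and 1: the old 'value > raid_disks' comparison wrongly accepts 2, while 'value >= raid_disks' rejects it.

#include <stdio.h>

/* Example only: a raid1 LV with one mirror has 2 devices, indexed 0 and 1. */
#define NR_RAID_DISKS 2U

/*
 * Hypothetical stand-in for the table-argument check described above.
 * Returns 0 if 'value' is a usable device index, -1 if it must be rejected.
 */
static int check_rebuild_index(unsigned long value, unsigned int raid_disks)
{
	/*
	 * Broken form: 'value > raid_disks' lets value == raid_disks through,
	 * which then indexes one element past the end of the device array.
	 */
	if (value >= raid_disks)	/* fixed comparison */
		return -1;		/* reject: index out of range */
	return 0;			/* accept: 0 <= value < raid_disks */
}

int main(void)
{
	/* 'rebuild 1' is the last valid index; 'rebuild 2' must be rejected. */
	printf("rebuild 1 -> %s\n",
	       check_rebuild_index(1, NR_RAID_DISKS) ? "rejected" : "accepted");
	printf("rebuild 2 -> %s\n",
	       check_rebuild_index(2, NR_RAID_DISKS) ? "rejected" : "accepted");
	return 0;
}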


Currently, things work as follows:
[root@hayes-01 ~]# pvs
  PV                 VG         Fmt  Attr PSize   PFree  
  /dev/etherd/e1.1p1 vg         lvm2 a--  648.73g 648.73g
  /dev/etherd/e1.1p2 vg         lvm2 a--  648.73g 648.73g
  /dev/etherd/e1.1p3 vg         lvm2 a--  648.73g 648.73g
  /dev/etherd/e1.1p4 vg         lvm2 a--  648.73g 648.73g
  /dev/etherd/e1.1p5 vg         lvm2 a--  648.73g 648.73g
  /dev/etherd/e1.1p6 vg         lvm2 a--  648.73g 648.73g
  /dev/etherd/e1.1p7 vg         lvm2 a--  648.73g 648.73g
  /dev/etherd/e1.1p8 vg         lvm2 a--  648.73g 648.73g
  /dev/sda2          vg_hayes01 lvm2 a--   74.01g      0 
[root@hayes-01 ~]# vgs vg
  VG   #PV #LV #SN Attr   VSize VFree
  vg     8   0   0 wz--n- 5.07t 5.07t
[root@hayes-01 ~]# lvcreate --type raid1 -m 1 -L 200M -n lv vg
  Logical volume "lv" created
[root@hayes-01 ~]# dmsetup table vg-lv
0 409600 raid raid1 3 0 region_size 1024 2 253:3 253:4 253:5 253:6
[root@hayes-01 ~]# echo "0 409600 raid raid1 3 0 region_size 1024 2 253:3 253:4 253:5 253:6"
0 409600 raid raid1 3 0 region_size 1024 2 253:3 253:4 253:5 253:6
[root@hayes-01 ~]# echo "0 409600 raid raid1 5 0 region_size 1024 rebuild 2 2 253:3 253:4 253:5 253:6" | dmsetup load vg-lv

^^^^^^^^ No detection of the bad input!!  (The table declares only 2 raid devices, so the only valid 'rebuild' indices are 0 and 1; 'rebuild 2' should have been rejected.)

This is how things should work:
[root@hayes-01 ~]# pvs
  PV                 VG         Fmt  Attr PSize   PFree  
  /dev/etherd/e1.1p1 vg         lvm2 a--  648.73g 648.73g
  /dev/etherd/e1.1p2 vg         lvm2 a--  648.73g 648.73g
  /dev/etherd/e1.1p3 vg         lvm2 a--  648.73g 648.73g
  /dev/etherd/e1.1p4 vg         lvm2 a--  648.73g 648.73g
  /dev/etherd/e1.1p5 vg         lvm2 a--  648.73g 648.73g
  /dev/etherd/e1.1p6 vg         lvm2 a--  648.73g 648.73g
  /dev/etherd/e1.1p7 vg         lvm2 a--  648.73g 648.73g
  /dev/etherd/e1.1p8 vg         lvm2 a--  648.73g 648.73g
  /dev/sda2          vg_hayes01 lvm2 a--   74.01g      0 
[root@hayes-01 ~]# vgs vg
  VG   #PV #LV #SN Attr   VSize VFree
  vg     8   0   0 wz--n- 5.07t 5.07t
[root@hayes-01 ~]# lvcreate --type raid1 -m 1 -L 200M -n lv vg
  Logical volume "lv" created
[root@hayes-01 ~]# dmsetup table vg-lv
0 409600 raid raid1 3 0 region_size 1024 2 254:3 254:4 254:5 254:6
[root@hayes-01 ~]# echo "0 409600 raid raid1 5 0 region_size 1024 rebuild 2 2 254:3 254:4 254:5 254:6" | dmsetup load vg-lv
device-mapper: reload ioctl on vg-lv failed: Invalid argument
Command failed
^^^^^^^^^^ Rejection of invalid input.

Comment 2 RHEL Program Management 2012-09-07 19:39:03 UTC
This request was evaluated by Red Hat Product Management for
inclusion in a Red Hat Enterprise Linux release.  Product
Management has requested further review of this request by
Red Hat Engineering, for potential inclusion in a Red Hat
Enterprise Linux release for currently deployed products.
This request is not yet committed for inclusion in a release.

Comment 4 Jarod Wilson 2012-10-10 20:04:19 UTC
Patch(es) available on kernel-2.6.32-328.el6

Comment 8 Petr Beňas 2012-11-09 10:48:09 UTC
Reproduced in 2.6.32-325.el6.x86_64 and verified in 2.6.32-326.el6.x86_64.

Comment 10 errata-xmlrpc 2013-02-21 06:35:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-0496.html