
Bug 1393206

Summary: Segmentation fault for RAID5 three volume array when third disk is missing
Product: Red Hat Enterprise Linux 6
Component: dmraid
Version: 6.8
Hardware: x86_64
OS: Linux
Status: CLOSED WONTFIX
Severity: high
Priority: high
Target Milestone: rc
Target Release: ---
Reporter: Zhang Xiaotian <xiaotzha>
Assignee: Heinz Mauelshagen <heinzm>
QA Contact: Zhang Yi <yizhan>
Docs Contact:
CC: agk, heinzm, jbrassow, msnitzer, prajnoha, yizhan, zkabelac
Flags: heinzm: needinfo+
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-12-06 10:43:03 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Zhang Xiaotian 2016-11-09 05:54:20 UTC
Description of problem:
Segmentation fault when starting a rebuild on a three-volume RAID5 array.
After issuing the "dmraid -R" command, a segmentation fault occurs.

Version-Release number of selected component (if applicable):
RHEL6.8 with kernel-2.6.32-642.el6.x86_64

# rpm -qa dmraid*
dmraid-events-1.0.0.rc16-11.el6.x86_64
dmraid-1.0.0.rc16-11.el6.x86_64

How reproducible:
Always

Steps to Reproduce:
1. echo y | dmraid -f isw -C Raid5 --type 5 --disk "/dev/sdb /dev/sdc /dev/sdd"
2. Reboot the PC and remove the third disk from the RAID set.
3. dmraid -R isw_defiaifai_Raid5 /dev/sdd
ERROR: isw: wrong number of devices in RAID set "isw_defiaifai_Raid5" [2/3] on /dev/sdb
ERROR: isw: wrong number of devices in RAID set "isw_defiaifai_Raid5" [2/3] on /dev/sdc
Segmentation fault (core dumped)

Actual results:
Segmentation fault

Expected results:
The rebuild starts and completes successfully.

Additional info:
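A core dump and backtrace would help narrow this down. A minimal sketch of how to capture one on RHEL 6 (assumes the dmraid debuginfo package is installable on this host; the core file name depends on kernel.core_pattern, so "core.<pid>" below is a placeholder):

# ulimit -c unlimited
# debuginfo-install dmraid
# dmraid -R isw_defiaifai_Raid5 /dev/sdd
Segmentation fault (core dumped)
# gdb $(which dmraid) core.<pid>
(gdb) bt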

Comment 1 Zhang Xiaotian 2016-11-09 06:38:03 UTC
I also encountered this problem on RAID1, with the steps below:

# dmraid  -f  isw -C Raid1 --type 1 --disk "/dev/sdb /dev/sdc"

# dmraid -tay
isw_bijaajbejg_Raid1: 0 1953515520 mirror core 2 131072 nosync 2 /dev/sdb 0 /dev/sdc 0 1 handle_errors

# dmraid -r
/dev/sdb: isw, "isw_bijaajbejg", GROUP, ok, 1953525166 sectors, data@ 0
/dev/sdc: isw, "isw_bijaajbejg", GROUP, ok, 1953525166 sectors, data@ 0

# dmraid -s
*** Group superset isw_bijaajbejg
--> Subset
name   : isw_bijaajbejg_Raid1
size   : 1953515520
stride : 128
type   : mirror
status : ok
subsets: 0
devs   : 2
spares : 0
# dmraid -ay
RAID set "isw_bijaajbejg_Raid1" was activated
device "isw_bijaajbejg_Raid1" is now registered with dmeventd for monitoring
# dmraid -an
ERROR: device "isw_bijaajbejg_Raid1" is not currently being monitored
# dmraid -E -r /dev/sdc
Do you really want to erase "isw" ondisk metadata on /dev/sdc ? [y/n] :y

# dmraid -s
ERROR: isw: wrong number of devices in RAID set "isw_bijaajbejg_Raid1" [1/2] on /dev/sdb
*** Group superset isw_bijaajbejg
--> *Inconsistent* Subset
name   : isw_bijaajbejg_Raid1
size   : 1953515520
stride : 128
type   : mirror
status : inconsistent
subsets: 0
devs   : 1
spares : 0

# dmraid -R isw_bijaajbejg_Raid1 /dev/sdc
ERROR: isw: wrong number of devices in RAID set "isw_bijaajbejg_Raid1" [1/2] on /dev/sdb
Segmentation fault (core dumped)
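
It may also be worth checking whether the failed rebuild leaves partial device-mapper state behind; a quick check with dmsetup (from the device-mapper package), assuming the set name from the transcript above and that the set is still active:

# dmsetup table
# dmsetup status isw_bijaajbejg_Raid1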

Comment 2 Heinz Mauelshagen 2017-10-04 13:32:46 UTC
From RHEL 6 onward, we recommend running isw (Intel Matrix RAID) arrays with mdadm instead of dmraid. Does mdadm work for you?
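
For comparison, the equivalent create/rebuild flow with mdadm's IMSM support would look roughly like this (a sketch, untested on the reporter's hardware; "/dev/md/imsm0" and "/dev/md/Raid5" are placeholder array names, device names are taken from the report):

# mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
# mdadm --create /dev/md/Raid5 --level=5 --raid-devices=3 /dev/md/imsm0
# mdadm --add /dev/md/imsm0 /dev/sdd

Adding the replaced disk back to the IMSM container should trigger the rebuild, with mdmon managing the metadata updates.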

Comment 3 Zhang Yi 2017-10-09 02:51:42 UTC
I will try this with mdadm once I find hardware that supports ISW.

Comment 4 Jan Kurik 2017-12-06 10:43:03 UTC
Red Hat Enterprise Linux 6 is in the Production 3 Phase. During the Production 3 Phase, Critical impact Security Advisories (RHSAs) and selected Urgent Priority Bug Fix Advisories (RHBAs) may be released as they become available.

The official life cycle policy can be reviewed here:

http://redhat.com/rhel/lifecycle

This issue does not meet the inclusion criteria for the Production 3 Phase and will be marked as CLOSED/WONTFIX. If this remains a critical requirement, please contact Red Hat Customer Support to request a re-evaluation of the issue, citing a clear business justification. Note that a strong business justification will be required for re-evaluation. Red Hat Customer Support can be contacted via the Red Hat Customer Portal at the following URL:

https://access.redhat.com/