Bug 427550
| Summary: | dmraid segfaults on boot resulting in broken mirror | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 5 | Reporter: | Michael Young <m.a.young> |
| Component: | dmraid | Assignee: | Ian Kent <ikent> |
| Status: | CLOSED ERRATA | QA Contact: | Corey Marthaler <cmarthal> |
| Severity: | low | Priority: | low |
| Version: | 5.1 | Target Milestone: | rc |
| Hardware: | x86_64 | OS: | Linux |
| CC: | agk, dwysocha, heinzm, mbroz, prockai | Doc Type: | Bug Fix |
| Fixed In Version: | RHBA-2008-0475 | Last Closed: | 2008-05-21 17:21:01 UTC |
| Attachments: | Patch to prevent SEGV when activating raid set (attachment 294068) | | |
Description (Michael Young, 2008-01-04 16:53:56 UTC):
I forgot to mention this is on an x86_64 machine. It looks like my problem might be related to the one mentioned here:
http://www.redhat.com/archives/ataraid-list/2007-November/msg00011.html
though it is still a bug because the code shouldn't segfault.

Ian Kent:
I'll see if I can track this down.

Ian Kent:
Created attachment 294068 [details]
Patch to prevent SEGV when activating raid set

Turns out that, for this device, if the raid set name given doesn't exist, doesn't match an existing raid set name, or is a sub-set of a raid set, then dmraid would SEGV.
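To make the failure mode concrete, here is a minimal sketch of the behaviour before and after the patch. The set name is invented for illustration, and the pre-patch crash is inferred from this report; the "No RAID sets" output is the message Michael quotes later in the thread:

```sh
# Hypothetical session; "ddf1_no_such_set" is an invented name.

# Before the patch: a name that doesn't exist, doesn't match an
# existing raid set, or is a sub-set of a raid set crashed dmraid.
dmraid -ay "ddf1_no_such_set"
# => Segmentation fault

# After the patch: the failed lookup is reported instead of crashing.
dmraid -ay "ddf1_no_such_set"
# => No RAID sets and with names: "ddf1_no_such_set"
```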
I'm not sure that this correction will actually resolve the issue reported here, but it does resolve the SEGV that occurs. Could someone give the posted patch a try please.

Ian

Michael Young (comment #11):
Yes, it seems to fix the segfault. I am currently getting around the changing name of the raid device by hacking "dmraid -ay -i -P p" into the initrd instead of the existing dmraid and kpartx lines.

Red Hat Product Management:
This request was evaluated by Red Hat Product Management for inclusion in a Red Hat Enterprise Linux maintenance release. Product Management has requested further review of this request by Red Hat Engineering, for potential inclusion in a Red Hat Enterprise Linux Update release for currently deployed products. This request is not yet committed for inclusion in an Update release.

Ian Kent (comment #13):
(In reply to comment #11)
> Yes, it seems to fix the segfault. I am currently getting around the changing
> name of the raid device by hacking dmraid -ay -i -P p into the initrd instead
> of the existing dmraid and kpartx lines.

Do you mean the way dmraid won't activate a specified individual RAID subset?

I believe we're waiting on patches for dmraid before this support can be added.

Ian

Michael Young (comment #14):
(In reply to comment #13)
> Do you mean the way dmraid won't activate a specified individual RAID subset?
>
> I believe we're waiting on patches for dmraid before this support can be added.

Well, my main reason for moving to "dmraid -ay -i -P p" is that in the initrd I couldn't use a fixed name, because for these disks the full name changes between boots (for example, from
ddf1_4c53492020202020808626820000000034db222e00000a28 to
ddf1_4c53492020202020808626820000000034dd936800000a28
after a reboot). I don't know whether

  dmraid -ay -i -p "ddf1_4c53492020202020808626820000000034dd936800000a28"

or similar actually starts the raid, because the raid is running by the time I can test it, so that might change the result. Currently, if I try it, I get:

  dmraid -ay -i -p "ddf1_4c53492020202020808626820000000034dd936800000a28"
  No RAID sets and with names: "ddf1_4c53492020202020808626820000000034dd936800000a28"

Ian Kent:
(In reply to comment #14)
> Well, my main reason for moving to dmraid -ay -i -P p is that in the initrd I
> couldn't use a fixed name, because for these disks the full name changes
> between boots.

I believe that's correct: assuming these are RAID subsets, that is the message you will get.

In the metadata supplied, the superset name is .ddf1_disks. But using that will activate all the subsets, which may not be what you want. It's not possible to activate individual subsets at the moment.
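For reference, the two activation approaches discussed above, collected into one sketch. The first command is the workaround quoted verbatim from comment #11; the second assumes the .ddf1_disks superset name Ian mentions can be passed as a set-name argument, which this thread implies but does not show:

```sh
# Workaround from comment #11, run from the initrd: activate every
# discovered raid set and have dmraid create the partition mappings
# itself (using "p" as the separator), so no fixed per-boot set name
# and no separate kpartx call is needed.
dmraid -ay -i -P p

# Superset activation (assumption): brings up all subsets of the
# .ddf1_disks superset, which may be more than wanted.
dmraid -ay -i ".ddf1_disks"
```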
Ian Kent:
Sorry. But then the superset name shouldn't change.

Ian

Errata notice:
An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2008-0475.html