Bug 605895
| Field | Value | Field | Value |
|---|---|---|---|
| Summary: | MDRaidError: mdcreate failed for /dev/md0: 04:21:26,144 ERROR : mdadm: create aborted | | |
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Alexander Todorov <atodorov> |
| Component: | anaconda | Assignee: | Anaconda Maintenance Team <anaconda-maint-list> |
| Status: | CLOSED DUPLICATE | QA Contact: | Release Test Team <release-test-team-automation> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 6.0 | CC: | atodorov, hdegoede |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | anaconda_trace_hash:39191f65ce6a08de1fec830fa81d3390b13d833f408798e0d99d80e9d71dff14 | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2010-06-30 13:52:50 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 582286 | | |
| Attachments: | | | |
Description
Alexander Todorov, 2010-06-19 08:22:13 UTC
Created attachment 425309 [details]
Attached traceback automatically from anaconda.
This is on a KVM domU with the following layout (1st install):

* vda1 - /boot
* vda2 - software RAID, encrypted
* vda3 - swap
* vda4 - extended
* vda5 - software RAID, encrypted
* vdb1 - software RAID, encrypted
* md0 - /, RAID1, all 3 members

The install completed and I was able to boot (I didn't wait for the array to finish syncing). Then I started a second install:

1) Entered the pass-phrase and clicked "Global pass-phrase".
2) Anaconda unlocked all partitions. Selected a custom layout.
3) Deleted md0 first.
4) Then selected to re-format all existing partitions, either with the same file system as before or as software RAID. Here I noticed that vdb1 was identified as ext4, while the other 2 RAID partitions were identified as software RAID.
5) After selecting to format all partitions (with no encryption selected), I created the md0 device again. The anaconda UI showed vdb1 and 2 LUKS devices in the list of available RAID members; I selected all 3.
6) When I proceeded further, the crash happened.

Created attachment 425310 [details]
logs from the system
X.log
anaconda.log
ifcfg.log
program.log
storage.log
syslog
storage.log
storage.state
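For anyone following the reproduction in the description, here is a minimal command-line sketch of the md0 re-creation described in steps 1)-6) above. This is not what anaconda runs internally, and the LUKS mapping names (luks-vda2, luks-vda5) are hypothetical stand-ins for the UUID-based names anaconda actually uses; only the device layout comes from the report.

```sh
# Rough manual equivalent of step 5: re-create the 3-member RAID1 array
# from vdb1 plus the two unlocked LUKS members on vda.
# NOTE: the mapping names luks-vda2/luks-vda5 are assumed for illustration.

# Unlock the two encrypted RAID members (prompts for the pass-phrase).
cryptsetup luksOpen /dev/vda2 luks-vda2
cryptsetup luksOpen /dev/vda5 luks-vda5

# Create the RAID1 array over all three members.
mdadm --create /dev/md0 --level=1 --raid-devices=3 \
    /dev/mapper/luks-vda2 /dev/mapper/luks-vda5 /dev/vdb1
```

In the scenario reported here, it is this create step that anaconda logs as failing ("mdadm: create aborted").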
This request was evaluated by Red Hat Product Management for inclusion in a Red Hat Enterprise Linux major release. Product Management has requested further review of this request by Red Hat Engineering, for potential inclusion in a Red Hat Enterprise Linux major release. This request is not yet committed for inclusion.

Created attachment 425316 [details]
Attached traceback automatically from anaconda.
The second crash (from comment #6) is with the following layout (1st install):

* vda1 - /boot
* vda2 - software RAID
* vda3 - swap
* vda4 - extended
* vda5 - software RAID
* vdb1 - software RAID
* md0 - /, RAID1, all 3 members

The install completed and the system was able to boot. I did wait for the array to finish syncing this time. Then I started a second install. Before anaconda discovered the disk devices, I switched to tty2 and erased vdb:

    dd if=/dev/zero of=/dev/vdb

Then I selected custom partitioning. In the UI, vdb was shown as free space, the partitions on vda were shown accordingly, and the md/0 device was shown with an Unknown file system. I deleted the md0 device, created vdb1 as an ext4 /home, and re-created md0 with the remaining RAID members.

From the 1st log, this seems to be the culprit:

    08:19:59,948 INFO : teardown of md/0 failed: luks_close failed for luks-f5f2a890-a181-4b7f-b417-579e26b1d5d5

OK, it took me a while to understand what is going on here, but bug 604633, in combination with an incomplete mdraid set, causes us to not properly stop the mdraid set; the stale array then still holds its LUKS member open, so the md/0 teardown fails and the subsequent md0 creation aborts. Marking this as a duplicate of 604633 (the stop-then-close order this implies is sketched below).

*** This bug has been marked as a duplicate of bug 604633 ***
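To make the teardown-order point above concrete, here is a small hand-run sketch. The device and mapping names are taken from the log line quoted above; anaconda performs the equivalent through its storage code, not with these literal commands.

```sh
# While the stale/incomplete array still holds the LUKS member open,
# closing the mapping fails (the "luks_close failed" seen in storage.log):
cryptsetup luksClose luks-f5f2a890-a181-4b7f-b417-579e26b1d5d5   # fails while md/0 uses it

# Stopping the stale array first releases its member devices...
mdadm --stop /dev/md0

# ...after which the LUKS mapping can be closed and the members reused
# for a new mdadm --create.
cryptsetup luksClose luks-f5f2a890-a181-4b7f-b417-579e26b1d5d5
```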