Bug 514124
| Field | Value |
|---|---|
| Summary | F11 mdadm caused array element to be kicked out of existing arrays on system shutdown |
| Product | Fedora |
| Component | mdadm |
| Version | 11 |
| Hardware | x86_64 |
| OS | Linux |
| Status | CLOSED NOTABUG |
| Severity | medium |
| Priority | low |
| Reporter | ed leaver <eleaver> |
| Assignee | Doug Ledford <dledford> |
| QA Contact | Fedora Extras Quality Assurance <extras-qa> |
| CC | dledford |
| Doc Type | Bug Fix |
| Last Closed | 2009-09-15 19:42:47 UTC |
| Attachments | attachment 355449 [details], attachment 355450 [details] |
Description: ed leaver, 2009-07-28 05:30:38 UTC
The mdadm problem occurs under both fc11 kernels 2.6.29.4-167 and 2.6.29.6-213, and it occurred under fc10 as well. I installed a similar RAID setup on my mother's PC running Fedora 9; that machine has never had any problems.

Created attachment 355449 [details]
/var/log/boot.log and /var/log/messages from the F11 session that apparently broke the RAID.
I forgot to include /Fedora/var/log/boot.log and /Fedora/var/log/messages; the latter does report many disk errors on shutdown. Comparing it with the /CentOS/var/log/messages from the immediately following CentOS session, we see the disk partitions were actually kicked out of the RAIDs by CentOS, though I suspect Fedora would have kicked them out on reboot as well. Is it possible there is a problem with the disks that CentOS fsck does not report? What other diagnostics can I run?
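For reference, a minimal sketch of the diagnostics that apply here; the device and array names below are placeholders, not the actual members of this system:

```
# Current array status and member states
cat /proc/mdstat
mdadm --detail /dev/md0        # placeholder array name

# Inspect the md superblock on a partition that was kicked out
mdadm --examine /dev/sdb1      # placeholder member partition

# SMART health summary and full attribute/error log for the disk
smartctl -H /dev/sdb           # placeholder disk
smartctl -a /dev/sdb
```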
Created attachment 355450 [details]
/var/log/messages from the subsequent CentOS session that recovered the broken RAID.

Made this a separate attachment for clarity.
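Assuming the disks themselves check out, the usual recovery is to hot-add each kicked partition back into its degraded array and let md resync it; a sketch with placeholder names:

```
# Re-add the kicked member; md rebuilds it from the surviving members
mdadm /dev/md0 --add /dev/sdb1   # placeholder array and partition

# Watch the resync progress
cat /proc/mdstat
```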
The snd_hda_intel driver appears to write outside its allocated memory.

I ran Palimpsest after one of the SATA disks had been kicked out of the RAID. Palimpsest couldn't tell me much, not even that the disk was SMART-enabled. It looked like a SATA driver problem.

Oops, the problem moved (natch) after I updated to the 2.6.29.6-217.2.16.fc11.x86_64 kernel. Now the system locks up on boot (at "Starting udev") if nomodeset is given, but finishes booting to runlevel 3 if it isn't; it then locks up upon startx. These troubles go away if snd_hda_intel is blacklisted in /etc/modprobe.d/blacklist.conf (see the sketch at the end of this report). The ULi 1575 southbridge has a ULi M5288 SATA controller and a Realtek 883D sound chip, for which snd_hda_intel is the correct driver, but snd_hda_intel appears to write outside its allocated memory. There is a similar, perhaps duplicate, bug against Rawhide: #521004. Thanks.

Since this is not an mdadm or md raid issue as originally thought, I'm closing this bug out.
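For reference, the blacklist workaround mentioned above amounts to a single line in /etc/modprobe.d/blacklist.conf:

```
# /etc/modprobe.d/blacklist.conf
# Keep the HDA audio driver from loading at boot
blacklist snd_hda_intel
```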