Bug 67949
Summary: | badblocks incorrectly detected on raid autodetect partitions | |
---|---|---|---|
Product: | [Retired] Red Hat Linux | Reporter: | Kevin R. Page <redhat-bugzilla>
Component: | e2fsprogs | Assignee: | Florian La Roche <laroche>
Status: | CLOSED WONTFIX | QA Contact: | Brock Organ <borgan>
Severity: | medium | Docs Contact: |
Priority: | medium | |
Version: | 7.3 | CC: | alvarezp, redhat-bugzilla, scott
Target Milestone: | --- | |
Target Release: | --- | |
Hardware: | i386 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2003-08-12 12:40:22 UTC | Type: | ---
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Kevin R. Page
2002-07-04 18:17:32 UTC

> So you selected to do a 'bad blocks' check on the partitions when you created them in disk druid?

Yes, I selected a badblocks check when I changed the partition type and formatted in disk druid, so I get the symptoms of bug #66181. HOWEVER, I'm pretty sure that bad blocks shouldn't be detected at all (and thus trip bug #66181).

I see the same detection "pattern" on a Red Hat 7.2 box (up and running, not during installation - see above). I've just run a badblocks check on one of the constituent partitions of a 5.1G RAID 1 partition on the now-running machine (described in the install above), and again a bad block turns up on the last-but-one block of the partition (as expected from my first comment).

Your skepticism about bypassing the badblocks check is well-founded. We did that. We subsequently found that one of the two disks wasn't in the mirrored RAID array:

    [root@1post root]# cat /proc/mdstat
    Personalities : [raid1]
    read_ahead 1024 sectors
    md3 : active raid1 hda7[0] 513984 blocks [2/1] [U_]
    md5 : active raid1 hda6[0] 513984 blocks [2/1] [U_]
    md4 : active raid1 hda5[0] 513984 blocks [2/1] [U_]
    md1 : active raid1 hda3[0] 15649088 blocks [2/1] [U_]
    md0 : active raid1 hda2[0] 2048192 blocks [2/1] [U_]
    md2 : active raid1 hda1[0] 48064 blocks [2/1] [U_]
    unused devices: <none>

Note that not all of the partitions have the badblocks problem, but all the partitions on /dev/hda are kicked out of the array.

scott: Unfortunately, I'm more than familiar with the symptoms of hard drive failure in a software RAID array (I seem to have had more than my fair share die over the years). It does look as if your drive is broken (though it's always worth a check with the manufacturer's verification tool). If you also check through /var/log/messages you should be able to find where the kernel hits the errors, shuts down the drive and kicks the partitions out of the RAID array.

However, in my case, I'm pretty sure the drive is not at fault. As detailed in my original comment, the same partition passes a badblocks check _unless_ the partition type is raid autodetect. Indeed, if the partition isn't set to raid auto at the start of the installation, I can successfully install _including_ the anaconda badblocks check. /proc/mdstat on all the machines where I see this problem shows all partitions as up and active, and the manufacturer's diagnostic checks all pass.

This does not look like something caused by the userspace e2fsprogs to me. Closing this here.

Florian La Roche
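For reference, a minimal shell sketch of the checks described above, assuming the reporter's device names (hda7 and the md arrays shown in the quoted /proc/mdstat output); substitute the partition and drive names for your own system:

```sh
# Assumed device name taken from the report; use your own RAID 1
# constituent partition here.
DEV=/dev/hda7

# Confirm the array members and their state ([2/1] [U_] means one
# mirror is missing, as in the output quoted above).
cat /proc/mdstat

# Read-only badblocks scan of the raw constituent partition.
# Per the report, a block near the end of the partition is flagged
# only while the partition type is set to raid autodetect; with a
# plain Linux partition type the same scan reportedly completes cleanly.
badblocks -sv "$DEV"

# If the drive really were failing, the kernel would log the I/O errors
# and the removal of the member from the array; check for them:
grep -i 'hda' /var/log/messages
```

The badblocks flags shown (-s progress, -v verbose) and the grep pattern are illustrative only; the point of comparison in the report is rerunning the same scan with the partition type toggled between raid autodetect and plain Linux.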
So you selected to do a 'bad blocks' check on the partitions when you created them in disk druid? Yes, I selected a badblocks check when I changed the partition type and formatted in disk druid - so I get the symptoms of bug #66181. HOWEVER, I'm pretty sure that badblocks shouldn't get detected at all (and thus trip the bug #66181). I see the same detection "pattern" on a Redhat 7.2 box (up and running, not during installation - see above). I've just run a badblock check on one of the constituent partitions of a 5.1G RAID 1 partition on the now running machine (described in the install above), and I get a badblock turning up again, on the last but one block of the partition (as expected from my first comment). Your skepticism about bypassing the badblocks check is well-founded. We did that. We subsequently found that one of the two disks wasn't in the mirrored RAID array: [root@1post root]# cat /proc/mdstat Personalities : [raid1] read_ahead 1024 sectors md3 : active raid1 hda7[0] 513984 blocks [2/1] [U_] md5 : active raid1 hda6[0] 513984 blocks [2/1] [U_] md4 : active raid1 hda5[0] 513984 blocks [2/1] [U_] md1 : active raid1 hda3[0] 15649088 blocks [2/1] [U_] md0 : active raid1 hda2[0] 2048192 blocks [2/1] [U_] md2 : active raid1 hda1[0] 48064 blocks [2/1] [U_] unused devices: <none> Note that not all of the partitions have the badblock problem. But all the partitions on /dev/hda are kicked out of the array. scott: Unfortunately, I'm more than familiar with the symptoms of hard drive failure in a software RAID array (I seem to have had more than my fair share die over the years). It does look as if your drive is broken (though it's always worth a check with the manufacturers verification tool). If you also check through /var/log/messages you should be able to find where the kernel hits the errors, shuts down the drive and kick the partitions out of the RAID array. However, in my case, I'm pretty sure the drive is not at fault. As detailed in my original comment, the same partition badblock checks ok _unless_ the partition type is raid autodetect. Indeed, if the partition isn't set to raid auto at the start of the installation, I can successfully install _including_ the anaconda badblock check. /proc/mdstat on all the machines I see this problem on show all partitions as up and active. Manufacturers diagnostic checks are all good. Does not look like this is something that is caused by userspace e2fsprogs to me. Closing this here. Florian La Roche |