Bug 20861 - e2fsck fails on RAID-5 array if disk 0 disconnected
Status: CLOSED WONTFIX
Product: Red Hat Linux
Classification: Retired
Component: e2fsprogs
Version: 6.2
Platform: i386 Linux
Priority: high  Severity: high
Assigned To: Florian La Roche
QA Contact: Dale Lovelace
Reported: 2000-11-14 14:23 EST by Need Real Name
Modified: 2005-10-31 17:00 EST (History)

Doc Type: Bug Fix
Last Closed: 2003-08-12 06:33:25 EDT


Attachments: None
Description Need Real Name 2000-11-14 14:23:23 EST
Not sure which is the defective component: raidtools, e2fsprogs, perhaps even the kernel.

Configured partitions on 3 SCSI disks into a RAID-5 array, using the RedHat installation tool.
Filesystem mounted, array allowed to sync.
Powered down system.
If I disconnect /dev/sdb or /dev/sdc and then power up:
   system correctly detects missing disk and runs in degraded mode.
If I disconnect /dev/sda and then power up:
   e2fsck fails with the following error:
  "The superblock could not be read or does not describe a correct
   ext2 filesystem.  If the device is valid and it really contains
   an ext2 filesystem (and not swap or ufs or something else), then
   the superblock is corrupt, and you might try running e2fsck with
   an alternate superblock:
     e2fsck -b 8193 <device>

   fsck.ext2: Bad magic number in super-block while trying to open
   /dev/md2"
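
[Editorial note: the "Bad magic number" message means e2fsck did not find the ext2 magic value (0xEF53) where the primary superblock should be. As a quick sanity check, a sketch of reading the magic by hand, using the device name from the report above: the magic is a little-endian 16-bit field at offset 56 within the superblock, which itself starts 1024 bytes into the device, i.e. at absolute byte 1080:]

```shell
# Read the two magic bytes at absolute offset 1080 (1024 + 56).
# On a healthy ext2 filesystem this prints "53 ef" (0xEF53, little-endian).
dd if=/dev/md2 bs=1 skip=1080 count=2 2>/dev/null | od -An -tx1
```

[Anything other than "53 ef" there confirms that the start of the array is not presenting valid ext2 data, which points at the md layer rather than at e2fsprogs itself.]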

I have tried running "e2fsck -b 32768" (the correct location of the
first superblock) and also the second & third superblocks.  Same
error.
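
[Editorial note: for reference, the 32768 figure above implies 4k filesystem blocks, giving 8 * 4096 = 32768 blocks per block group; backup superblocks sit at the start of later groups. A sketch of computing the first few candidates to pass to `e2fsck -b`, assuming the sparse_super feature (backups only in groups 1, 3, 5, 7, 9, ...); without it every group holds a copy, and with 1k blocks the first backup would be 8193 instead:]

```shell
# First five backup-superblock locations for a 4k-block ext2 filesystem
# with sparse_super: block groups 1, 3, 5, 7, 9, each 32768 blocks apart.
for g in 1 3 5 7 9; do echo $((g * 32768)); done
```

[In practice `mke2fs -n /dev/md2` (the -n flag makes it a dry run that writes nothing) reports the exact backup locations, provided it is given the same parameters as the original mkfs.]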

My /etc/raidtab specifies persistent-superblock.
===================================================================
/proc/mdstat before disconnecting drive 0 (sda):

   Personalities : [raid1] [raid5]
   read_ahead 1024 sectors
   md0 : active raid1 sdc1[2] sdb1[1] sda1[0] 2048192 blocks [3/3] [UUU]
   md2 : active raid5 sdc5[2] sdb5[1] sda5[0] 4096384 blocks level 5, 64k chunk, algorithm 0 [3/3] [UUU]
   md1 : active raid1 sdc7[2] sdb7[1] sda7[0] 24000 blocks [3/3] [UUU]
===================================================================
/proc/mdstat AFTER disconnecting drive 0 and booting
(note that sdc became sdb, sdb became sda):

   Personalities : [raid1] [raid5]
   read_ahead 1024 sectors
   md0 : active raid1 sdb1[2] sda1[1] 2048192 blocks [3/2] [_UU]
   md2 : active raid5 sdb5[2] sda5[1] 4096384 blocks level 5, 64k chunk, algorithm 0 [3/2] [_UU]
   md1 : active raid1 sdb7[2] sda7[1] 24000 blocks [3/2] [_UU]
   unused devices: <none>
===================================================================
/etc/raidtab (md2 -- the RAID-5 array -- is at the bottom):

   raiddev      /dev/md0
   raid-level      1
   nr-raid-disks      3
   chunk-size      64k
   persistent-superblock     1
   #nr-spare-disks     0
       device     /dev/sda1
       raid-disk     0
       device     /dev/sdb1
       raid-disk     1
       device     /dev/sdc1
       raid-disk     2
   raiddev      /dev/md1
   raid-level      1
   nr-raid-disks      3
   chunk-size      64k
   persistent-superblock     1
   #nr-spare-disks     0
       device     /dev/sda7
       raid-disk     0
       device     /dev/sdb7
       raid-disk     1
       device     /dev/sdc7
       raid-disk     2
   raiddev      /dev/md2
   raid-level      5
   nr-raid-disks      3
   chunk-size      64k
   persistent-superblock     1
   #nr-spare-disks     0
       device     /dev/sda5
       raid-disk     0
       device     /dev/sdb5
       raid-disk     1
       device     /dev/sdc5
       raid-disk     2
===================================================================
fdisk /dev/sda (sdb and sdc are identical):

   Disk /dev/sda: 255 heads, 63 sectors, 1106 cylinders
   Units = cylinders of 16065 * 512 bytes

      Device Boot    Start       End    Blocks   Id  System
   /dev/sda1   *         1       255   2048256   fd  Linux raid autodetect
   /dev/sda2           256      1106   6835657+   5  Extended
   /dev/sda5           256       510   2048256   fd  Linux raid autodetect
   /dev/sda6           511       527    136521   82  Linux swap
   /dev/sda7           528       530     24066   fd  Linux raid autodetect
