Bug 81258 - Can't start raid 5 when device names have changed
Status: CLOSED WONTFIX
Product: Red Hat Linux
Classification: Retired
Component: raidtools
Version: 8.0
Hardware: i586 Linux
Priority: medium
Severity: medium
Assigned To: Doug Ledford
QA Contact: David Lawrence
Reported: 2003-01-07 02:13 EST by Mr Watkins
Modified: 2007-04-18 12:49 EDT
CC: 1 user

Doc Type: Bug Fix
Last Closed: 2004-11-27 18:18:48 EST


Attachments: None
Description Mr Watkins 2003-01-07 02:13:41 EST
Description of problem:
I have a raid 5 array with 4 disks: sda1, sdb1, sdc1, sdd1.
I have added 2 disks to my system.
The new disks are sdb and sdd.  At first they did not have a valid partition 
table.  I got this error on both disks:
md: could not lock sdb1, zero-size? Marking faulty.
md: could not import sdb1, trying to run array nevertheless.
 [events: 00000004]
md: could not lock sdd1, zero-size? Marking faulty.
md: could not import sdd1, trying to run array nevertheless.

That should not be a problem because the raid 5 disks are now sda1, sdc1, sde1 
and sdf1.

I did change raidtab before I shut down to add the new disks.  The system seems 
to ignore raidtab for existing arrays; it must only use the file when you run 
mkraid.

I have since partitioned the 2 disks sdb and sdd with type fd.
raidstart still fails.
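
For the record, what fails is just the standard raidstart call; roughly:

# raidstart looks /dev/md3 up in /etc/raidtab and asks the kernel to start
# the array from the RAID superblocks on the member devices.
raidstart /dev/md3

# See which arrays the kernel actually assembled, and in what state.
cat /proc/mdstat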

I have created a raid 1 array on the 2 disks sdb and sdd.  I did not need to 
use the force option with mkraid.
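
In case it helps anyone reproduce this, the sequence for the new pair was 
essentially the following.  The sfdisk one-liner is only one way to create a 
single whole-disk partition of type fd; any partitioning tool will do.

# One whole-disk partition of type fd (Linux raid autodetect) on each new disk.
# ",,fd" means: default start, use the rest of the disk, partition type fd.
echo ',,fd' | sfdisk /dev/sdb
echo ',,fd' | sfdisk /dev/sdd

# Build the new raid 1 array from the /dev/md4 stanza in /etc/raidtab.
mkraid /dev/md4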

The OS is on 2 IDE disks (hda and hdb).

/dev/md0 is swap
/dev/md1 is /
/dev/md2 is /boot
/dev/md3 is the problem array.
/dev/md4 is the array on the 2 new disks.
Here is a copy of my raidtab file:
raiddev             /dev/md1
raid-level                  1
nr-raid-disks               2
chunk-size                  64k
persistent-superblock       1
nr-spare-disks              0
    device          /dev/hda2
    raid-disk     0
    device          /dev/hdb3
    raid-disk     1

raiddev             /dev/md2
raid-level                  1
nr-raid-disks               2
chunk-size                  64k
persistent-superblock       1
nr-spare-disks              0
    device          /dev/hda1
    raid-disk     0
    device          /dev/hdb1
    raid-disk     1

raiddev             /dev/md0
raid-level                  1
nr-raid-disks               2
chunk-size                  64k
persistent-superblock       1
nr-spare-disks              0
    device          /dev/hda3
    raid-disk     0
    device          /dev/hdb2
    raid-disk     1

raiddev             /dev/md3
raid-level                  5
parity-algorithm            left-symmetric
nr-raid-disks               4
chunk-size                  64k
persistent-superblock       1
nr-spare-disks              0
    device          /dev/sda1
    raid-disk     0
    device          /dev/sdc1
    raid-disk     1
    device          /dev/sde1
    raid-disk     2
    device          /dev/sdf1
    raid-disk     3

raiddev             /dev/md4
raid-level                  1
#parity-algorithm            left-symmetric
nr-raid-disks               2
chunk-size                  64k
persistent-superblock       1
nr-spare-disks              0
    device          /dev/sdb1
    raid-disk     0
    device          /dev/sdd1
    raid-disk     1
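
One thing that may be worth checking against the raidtab above is what the 
on-disk superblocks themselves claim, for example with lsraid if the installed 
raidtools ships it (invocation from memory, so treat it as a sketch):

# Ask a member partition which array its RAID superblock says it belongs to.
lsraid -d /dev/sda1

# Ask for the whole array as reconstructed from the superblocks.
lsraid -a /dev/md3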

I did remove the 2 new disks, and my device names were back to normal.  The 
array /dev/md3 did come up after a reboot.  /dev/md4 did not (as expected, since 
those drives had no power)!

Please help!  I can work around the problem by addressing the new disks so they 
come up as sde and sdf, but this seems like it could be a major problem for 
others in the future.  You should be able to add disks to a system without 
such problems.

I will keep the disks configured like this for a while.  If someone wants me 
to try something, I will.  If I lose data I don't care; I have 2 or more 
backups.

Thanks.
Comment 1 Mr Watkins 2003-01-09 22:20:22 EST
I will be trashing the array soon and re-creating it with 7 disks.  If anyone 
wants to debug this issue, they need to start within the next few days.  I may 
play with it first: add a hot spare, fail a drive, raidhotadd, that kind of 
thing.  I want to determine whether this stuff is top notch or not.
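
With raidtools, that experiment would look something like the sketch below.  The 
spare partition name /dev/sdg1 is made up for illustration; substitute whatever 
the added disk ends up being called.

# Add a spare partition to the running array; it sits idle until a member
# fails, at which point md rebuilds onto it.
raidhotadd /dev/md3 /dev/sdg1

# Watch the array state and rebuild progress.
cat /proc/mdstat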
Comment 2 Mr Watkins 2003-01-10 01:01:13 EST
kernel: md: bug in file md.c, line 2341

I was going to fail a disk.  I don't care about the data yet, so no risk.

I did "raidhotremove /dev/md3 /dev/sdc1" and got this: 
/dev/md3: can not hot-remove disk: disk busy!

and in /var/log/messages: 

Jan  9 23:56:14 watkins-home kernel: md: trying to remove sdc1 from md3 ...
Jan  9 23:56:14 watkins-home kernel: md: bug in file md.c, line 2341
Jan  9 23:56:14 watkins-home kernel:
Jan  9 23:56:14 watkins-home kernel: md:^I**********************************
Jan  9 23:56:14 watkins-home kernel: md:^I* <COMPLETE RAID STATE PRINTOUT> *
Jan  9 23:56:14 watkins-home kernel: md:^I**********************************
Jan  9 23:56:14 watkins-home kernel: md3: <sdd1><sdc1><sdb1><sda1> array superblock:
Jan  9 23:56:14 watkins-home kernel: md:  SB: (V:0.90.0) ID:<df11afb0.817f8a3f.01906d9f.0673fe7c> CT:3e19eb15
Jan  9 23:56:14 watkins-home kernel: md:     L5 S17775808 ND:4 RD:4 md3 LO:2 CS:65536
Jan  9 23:56:14 watkins-home kernel: md:     UT:3e1b8bfd ST:0 AD:4 WD:4 FD:0 SD:0 CSUM:8f0593ee E:00000009
Jan  9 23:56:14 watkins-home kernel:      D  0:  DISK<N:0,sda1(8,1),R:0,S:6>
Jan  9 23:56:14 watkins-home kernel:      D  1:  DISK<N:1,sdb1(8,17),R:1,S:6>
Jan  9 23:56:14 watkins-home kernel:      D  2:  DISK<N:2,sdc1(8,33),R:2,S:6>
.
.
.
The list goes on for 100+ lines.
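
If it is of any use: raidhotremove normally refuses to pull a member that is 
still active in the array, so the usual sequence (a sketch, not verified against 
this exact kernel) is to mark the disk faulty first:

# Mark the member faulty so md stops using it ...
raidsetfaulty /dev/md3 /dev/sdc1
# ... and only then hot-remove it from the array.
raidhotremove /dev/md3 /dev/sdc1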

