Bug 532991 - mdadm: not large enough to join array (v1.2 superblock)
Summary: mdadm: not large enough to join array (v1.2 superblock)
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Fedora
Classification: Fedora
Component: mdadm
Version: 11
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Assignee: Doug Ledford
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2009-11-04 16:08 UTC by Douglas E. Warner
Modified: 2009-11-05 19:52 UTC (History)
CC List: 1 user

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2009-11-05 19:52:11 UTC
Type: ---
Embargoed:



Description Douglas E. Warner 2009-11-04 16:08:06 UTC
Description of problem:
The problem seems identical to the one described in this Debian bug (though it shouldn't be, since that report is against mdadm 2.6.7); a patch is included there:
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=500309

I have a RAID5 array with v1.2 superblock and internal bitmap and am trying to add a new member, but it's failing with this error:

mdadm: /dev/sdi1 not large enough to join array


Version-Release number of selected component (if applicable):
mdadm-3.0-1.fc11.x86_64


# mdadm -D /dev/md5 
/dev/md5:
        Version : 1.02
  Creation Time : Tue Oct 27 13:42:06 2009
     Raid Level : raid5
     Array Size : 2930276864 (2794.53 GiB 3000.60 GB)
  Used Dev Size : 1465138432 (1397.26 GiB 1500.30 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Wed Nov  4 10:00:25 2009
          State : active
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : thor.home.silfreed.net:5  (local to host thor.home.silfreed.net)
           UUID : 33ff9470:99a30922:75860b97:c34f4d1d
         Events : 31264

    Number   Major   Minor   RaidDevice State
       0       8       64        0      active sync   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       3       8       96        2      active sync   /dev/sdg


# sfdisk -d /dev/sdc
# partition table of /dev/sdc
unit: sectors

/dev/sdc1 : start=       63, size=2930272002, Id=da
/dev/sdc2 : start=        0, size=        0, Id= 0
/dev/sdc3 : start=        0, size=        0, Id= 0
/dev/sdc4 : start=        0, size=        0, Id= 0


# sfdisk -d /dev/sdi
# partition table of /dev/sdi
unit: sectors

/dev/sdi1 : start=       63, size=2930272002, Id=da
/dev/sdi2 : start=        0, size=        0, Id= 0
/dev/sdi3 : start=        0, size=        0, Id= 0
/dev/sdi4 : start=        0, size=        0, Id= 0
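A quick arithmetic check (a sketch in Python, using only the sizes reported in the mdadm -D and sfdisk -d output above; all figures in 512-byte sectors) shows that the partition really is smaller than the array's per-device used size:

```python
# All figures come from the mdadm -D and sfdisk -d output above.
used_dev_size_kib = 1465138432           # "Used Dev Size" per member, in KiB
needed_sectors = used_dev_size_kib * 2   # 1 KiB = two 512-byte sectors
sdi1_sectors = 2930272002                # size of /dev/sdi1 from sfdisk -d

print(needed_sectors)                 # 2930276864
print(needed_sectors - sdi1_sectors)  # 4862 sectors short, before any
                                      # superblock/bitmap overhead
```

So /dev/sdi1 falls about 4862 sectors (~2.4 MiB) short of the data the array needs from each member, independent of where the superblock goes.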

Comment 1 Douglas E. Warner 2009-11-05 00:37:42 UTC
The patch specified in the debian bug has already been applied to mdadm 3.0.  I also tested mdadm 3.0.2 from F12 and still experienced the error.

However, I was able to get past the error by specifying the entire device (sdi) instead of just the partition (sdi1).

I.e., this worked:

# mdadm /dev/md5 --add /dev/sdi
mdadm: added /dev/sdi

But this failed:

# mdadm /dev/md5 --add /dev/sdi1 
mdadm: /dev/sdi1 not large enough to join array

Comment 2 Doug Ledford 2009-11-05 19:52:11 UTC
This would be correct.  The original raid array was created using whole devices, not partitions.  So even though you have a partition table on those disks, it's unused (and skipped over by the version 1.2 superblock, which is offset 4k past the start of the disk, leaving the partition table intact but totally meaningless).  So, indeed, /dev/sdc1 *was* too small, while sdc was just the right size.  You can see what I'm talking about at the end of the output of mdadm -D /dev/md5, where the current active devices in the array are listed: none of them are partition devices, they are all whole-disk devices.  When using whole-disk devices, I would strongly recommend version 1.1 superblocks, as they won't allow for this confusion: the superblock and the partition table would both try to occupy the first sector, so a 1.1 superblock would leave all the disks with invalid partition tables.
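To illustrate the layout point above (a minimal sketch, not mdadm's actual code, though the magic constant and 4 KiB offset do match the v1.x superblock format): a v1.2 superblock lives 4 KiB from the start of the whole device, so a partition that begins at sector 63 (byte 32256) never even contains it.

```python
import struct

MD_SB_MAGIC = 0xa92b4efc   # md superblock magic, stored little-endian on disk
V12_OFFSET = 4096          # v1.2 superblock sits 4 KiB from the device start
PART_START = 63 * 512      # /dev/sdi1 begins at sector 63 = byte 32256

# Stand-in "whole disk": magic written where mdadm would put a v1.2 superblock.
disk = bytearray(64 * 1024)
struct.pack_into('<I', disk, V12_OFFSET, MD_SB_MAGIC)

def has_v12_superblock(dev):
    """Check for the md magic at the v1.2 offset of a device image."""
    (magic,) = struct.unpack_from('<I', dev, V12_OFFSET)
    return magic == MD_SB_MAGIC

print(has_v12_superblock(disk))               # True: whole device sees it
print(has_v12_superblock(disk[PART_START:]))  # False: partition starts past it
```

This is why `mdadm --add /dev/sdi` works while `/dev/sdi1` does not: the array members are the whole disks, and the partition view is both offset past the superblock and too small.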

