Bug 607090 - mdadm allows creating a RAID array using duplicated disks
Summary: mdadm allows creating a RAID array using duplicated disks
Keywords:
Status: CLOSED DUPLICATE of bug 617280
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: mdadm
Version: 6.0
Hardware: x86_64
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Doug Ledford
QA Contact: qe-baseos-daemons
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2010-06-23 07:47 UTC by jakub
Modified: 2010-08-20 11:19 UTC
CC List: 11 users

Fixed In Version: mdadm-3.1.3-0.git20100722.1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-07-27 16:39:04 UTC
Target Upstream Version:
Embargoed:


Attachments
cleanup after failed create in duplicated array member case (4.66 KB, patch), 2010-06-23 11:27 UTC, Krzysztof Wojcik
x86_64 mdadm + patch rpm (670.60 KB, application/octet-stream), 2010-07-01 18:37 UTC, Prarit Bhargava
RHEL6 src rpm + patch (324.98 KB, application/octet-stream), 2010-07-06 13:03 UTC, Prarit Bhargava

Description jakub 2010-06-23 07:47:42 UTC
Description: 
mdadm allows creating a RAID array using duplicated disks:
mdadm -C /dev/md/myvolume -amd -l0 --chunk 128 -n 2 /dev/sdb /dev/sdb -R
This should be rejected.

Commands trace:

# mdadm -C /dev/md/imsm0 -amd -e imsm -n 6 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

# cat /proc/mdstat 
Personalities : [raid1] 
md127 : inactive sdg[5](S) sdf[4](S) sde[3](S) sdd[2](S) sdc[1](S) sdb[0](S)
      1254 blocks super external:imsm
       
unused devices: <none>

# mdadm -C /dev/md/myvolume -amd -l0 --chunk 128 -n 2 /dev/sdb /dev/sdb -R
mdadm: /dev/sdb appears to be part of a raid array:
    level=container devices=0 ctime=Wed Dec 31 19:00:00 1969
mdadm: /dev/sdb appears to be part of a raid array:
    level=container devices=0 ctime=Wed Dec 31 19:00:00 1969
mdadm: Creating array inside imsm container /dev/md/imsm0
mdadm: ADD_NEW_DISK for /dev/sdb failed: File exists

# cat /proc/mdstat 
Personalities : [raid1] 
md126 : inactive sdb[0]
      244196224 blocks super external:/md127/0
       
md127 : inactive sdg[5](S) sdf[4](S) sde[3](S) sdd[2](S) sdc[1](S) sdb[0](S)
      1254 blocks super external:imsm
       
unused devices: <none>

Steps to reproduce:
1. Boot the OS.
2. Create an IMSM container.
3. Create a RAID0 volume, passing the same disk twice on the command line:
    mdadm -C /dev/md/myvolume -amd -l0 --chunk 128 -n 2 /dev/sdb /dev/sdb -R

Expected results:
The last operation should be rejected and an error message should appear.

Actual results:
A partially created RAID array is left behind (md126 in the trace above remains as an inactive array).
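
Until this is fixed, the leftover inactive array from the failed create has to be cleaned up by hand. A rough sketch, assuming the device names from the trace above (the md126 name may differ on other systems):

# cat /proc/mdstat
(the half-created volume shows up as an inactive array, md126 in the trace above)
# mdadm --stop /dev/md126
(stops and removes the leftover array; the imsm container md127 is left intact)
# cat /proc/mdstat
(only the container should remain)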

Environment Details:
- OS: RHEL 6.0 Snapshot 6 (64-bit)
- Chipset: IBX
- CRB: ASUS
- HDDs: WD 250 GB

Comment 2 RHEL Program Management 2010-06-23 08:03:08 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release.  Product Management has requested further
review of this request by Red Hat Engineering for potential inclusion in a Red
Hat Enterprise Linux major release.  This request is not yet committed for
inclusion.

Comment 3 Krzysztof Wojcik 2010-06-23 11:27:13 UTC
Created attachment 426237 [details]
cleanup after failed create in duplicated array member case

mdadm prevents creation when device names are duplicated on the command
line, but leaves the partially created array intact.  Detect this case
in the error code from add_to_super() and clean up the partially created
array.  The imsm handler is updated to report this conflict in
add_to_super_imsm_volume().

Note that since neither mdmon, nor userspace for that matter, ever saw an
active array, we only need to perform a subset of the cleanup actions.
So call ioctl(STOP_ARRAY) directly and arrange for Create() to clean up
the map file rather than calling Manage_runstop().
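
A quick way to sanity-check a patched build is to rerun the reproduction steps from the description (same device names as above; adjust to your setup):

# mdadm -C /dev/md/myvolume -amd -l0 --chunk 128 -n 2 /dev/sdb /dev/sdb -R
(the create should still be rejected with an error)
# cat /proc/mdstat
(with the patch applied, only the imsm container md127 should be listed; no leftover inactive md126)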

Red Hat,

Please include this mdadm patch in RHEL 6.0.

Comment 4 Krzysztof Wojcik 2010-06-30 14:15:40 UTC
Doug,

Could you add this patch to the mdadm package?

Comment 5 Ed Ciechanowski 2010-06-30 18:03:49 UTC
Peter, this fix needs to be added to the mdadm package for the RHEL 6.0 release. Please set this bug to the correct owner or status so we can get the fix included in RHEL 6.0. Thanks, EDC

Comment 7 Prarit Bhargava 2010-07-01 18:37:30 UTC
Created attachment 428525 [details]
x86_64 mdadm + patch rpm

Intel colleagues,

Attached is an mdadm RPM, based on the latest internal version of mdadm plus the patch you have suggested.

Could you please download and test this RPM?

Thanks,

P.

Comment 8 Krzysztof Wojcik 2010-07-02 08:46:41 UTC
Prarit, are you sure you included the patch in this rpm? mdadm's behavior is the same as without the patch...
Could you send me the src rpm?

Comment 9 Prarit Bhargava 2010-07-06 13:03:21 UTC
Created attachment 429768 [details]
RHEL6 src rpm + patch

Hi Krystof,

AFAICT, the patch is in the binary rpm I previously provided.

When I do an rpmbuild -bp mdadm.spec I see

Patch #21 (mdadm-test.patch):
+ /bin/cat /root/rpmbuild/SOURCES/mdadm-test.patch
+ /usr/bin/patch -s -p1 -b --suffix .test --fuzz=0
+ exit 0

and visual inspection of the resulting source tree shows that the patch has been applied.

Additionally, the build logs also show that mdadm-test.patch was applied to the tree.
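
For completeness, a couple of generic ways to double-check which patches went into a build (the rpm file names below are only illustrative):

# rpm -qpl mdadm-*.src.rpm
(lists the sources and patches bundled in the src rpm, including mdadm-test.patch)
# rpm -qp --changelog mdadm-*.x86_64.rpm | head
(shows the most recent changelog entries of the binary rpm)
# rpmbuild -bp ~/rpmbuild/SPECS/mdadm.spec
(runs only the %prep stage, as above, so the patched source tree can be inspected)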

P.

Comment 10 Prarit Bhargava 2010-07-06 13:04:06 UTC
> Hi Krystof,

Oops ... sorry about that Krzysztof.  My apologies for spelling your name wrong.

P.

Comment 11 Krzysztof Wojcik 2010-07-06 14:36:43 UTC
(In reply to comment #10)
> > Hi Krystof,
> 
> Oops ... sorry about that Krzysztof.  My apologies for spelling your name
> wrong.
> 
> P.    
OK :)

I built a binary rpm from your src rpm and it works as expected, so you may include the patch in the next snapshot of RHEL.
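
For reference, the rebuild boils down to something like the following (exact file names will differ):

# rpmbuild --rebuild mdadm-*.src.rpm
(produces binary rpms under ~/rpmbuild/RPMS/x86_64/ on RHEL 6)
# rpm -Uvh ~/rpmbuild/RPMS/x86_64/mdadm-*.x86_64.rpm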

Note:
Please investigate why your binary rpm is different from mine and does not include the patch.

Comment 12 Krzysztof Wojcik 2010-07-08 13:54:30 UTC
Prarit,

How are things going?
Will you post the patch to the next snapshot of RHEL 6.0?

Comment 13 Doug Ledford 2010-07-08 14:07:37 UTC
Hi Krzysztof, I'm actually going to pull in all the various upstream bug fixes via a git update, so this will get pulled in for sure that way.

Comment 15 Doug Ledford 2010-07-22 18:07:53 UTC
This has been fixed by the refresh to the latest upstream mdadm git sources.

Comment 16 Doug Ledford 2010-07-27 16:39:04 UTC

*** This bug has been marked as a duplicate of bug 617280 ***

Comment 17 jbielans 2010-08-20 11:19:00 UTC
Not reproducible on RHEL 6.0 Snapshot 10 (x86_64).

