Bug 73365 - mkraid fails sometimes
Status: CLOSED WONTFIX
Product: Red Hat Linux
Classification: Retired
Component: raidtools
Version: 7.3
Platform: All Linux
Priority: medium
Severity: high
Assigned To: Doug Ledford
QA Contact: David Lawrence
Reported: 2002-09-03 11:33 EDT by Jan "Yenya" Kasprzak
Modified: 2007-04-18 12:46 EDT
CC: 3 users

Doc Type: Bug Fix
Last Closed: 2004-11-27 18:13:32 EST


Attachments: None
Description Jan "Yenya" Kasprzak 2002-09-03 11:33:54 EDT
Description of Problem:

mkraid complains that the device is busy

Version-Release number of selected component (if applicable):
1.00.2 (tested both on RH 7.3 and the "null" public beta)

How Reproducible:

Create a new entry for (say) /dev/md4 in /etc/raidtab, but do not put it last
(say the last entry is for /dev/md3, and add the one for /dev/md4 at the
beginning of raidtab). Activate /dev/md3 using raidstart if it is not active
yet.

Now try to run mkraid /dev/md4. You get an error message saying that "/dev/md3
is active" (note that it complains about md3, not md4).
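For reference, a raidtab of the shape described above could look roughly like
this (a hypothetical sketch; the RAID levels and disk paths are made up):

```
# /etc/raidtab -- note the new /dev/md4 entry comes BEFORE /dev/md3
raiddev /dev/md4
    raid-level              1
    nr-raid-disks           2
    persistent-superblock   1
    device                  /dev/sdc1
    raid-disk               0
    device                  /dev/sdd1
    raid-disk               1

raiddev /dev/md3
    raid-level              1
    nr-raid-disks           2
    persistent-superblock   1
    device                  /dev/sda1
    raid-disk               0
    device                  /dev/sdb1
    raid-disk               1
```

With /dev/md3 running (raidstart /dev/md3), mkraid /dev/md4 then aborts with
the bogus "/dev/md3 is active" error.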

The bug is in mkraid.c around line 247: the code walks the list of all config
entries (the current one being in the variable "p"), but then checks the state
using the unrelated variable "cfg", which happens to point to the last entry
in /etc/raidtab. So if the last entry is active, mkraid fails. What is worse,
if the last entry is not active, mkraid can allow an existing array to be
overwritten even though it is in use!

Patch is as follows:

--- raidtools-1.00.2/mkraid.c.orig	2002-04-15 10:09:11.000000000 +0200
+++ raidtools-1.00.2/mkraid.c	2002-09-03 17:22:04.000000000 +0200
@@ -244,7 +244,7 @@
     while (*args) {
 	for (p = cfg_head; p; p = p->next) {
 	    if (strcmp(p->md_name, *args)) continue;
-	    if (check_active(cfg))
+	    if (check_active(p))
 		goto abort;
 	    if (force_flag) {
 		fprintf(stderr, "DESTROYING the contents of %s in 5 seconds, Ctrl-C if unsure!\n", *args);
Comment 1 Jason Tibbitts 2003-05-15 22:13:12 EDT
Is this still an open problem?  I'm seeing something that looks a lot like this:

I have two RAID0 arrays.  The disk is already partitioned; md0 spans sda3 and
sdb3 (each just under 1TB) while md1 spans sda2 and sdb2 (anaconda created them
out of order for whatever reason).  md0 won't activate ('invalid raid superblock
magic on sda3') and mkraid /dev/md0 gives:

/dev/md1: array is active -- run raidstop first.
mkraid: aborted.
(In addition to the above messages, see the syslog and /proc/mdstat as well
 for potential clues.)

I can't stop /dev/md1 because the system is on it.  For grins I tried mkraid
/dev/md1 and it gives the same message.

BTW, I just noticed that this and bug 85313 seem to be duplicates of 71637.
Comment 2 Jason Tibbitts 2003-05-15 22:33:49 EDT
I applied the suggested patch and built a new RPM.  My problem is solved.  The
SRPM is at http://www.math.uh.edu/~tibbs/raidtools-1.00.3-2_uh_2.i386.rpm
