Bug 155319 - Diskdruid does not recognise existing RAID volumes
Status: CLOSED WORKSFORME
Product: Fedora
Classification: Fedora
Component: anaconda
Version: 4
Hardware: i386 Linux
Priority: medium  Severity: medium
Assigned To: Anaconda Maintenance Team
QA Contact: Mike McLean
Reported: 2005-04-19 01:56 EDT by Stig Nielsen
Modified: 2007-11-30 17:11 EST

Doc Type: Bug Fix
Last Closed: 2005-07-18 17:44:00 EDT

Attachments
dd /dev/md1 output (8.00 MB, application/octet-stream), 2005-04-27 15:24 EDT, Stig Nielsen
Output of 'lsmod' (1.65 KB, text/plain), 2005-06-12 15:00 EDT, Stig Nielsen
Output of 'dmesg' (7.47 KB, text/plain), 2005-06-12 15:01 EDT, Stig Nielsen
Output of 'lspci' (1.44 KB, text/plain), 2005-06-12 15:01 EDT, Stig Nielsen

Description Stig Nielsen 2005-04-19 01:56:25 EDT
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.5) Gecko/20041110 Firefox/1.0

Description of problem:
During installation, when re-using old SCSI RAID volumes, e.g. md0, md1, md2 (created with RH9 and detected fine during the FC3 install), DiskDruid displays their "Type" as "foreign".

The RAID level is 0, and each volume is combined from two partitions, all using the ext3 file system:
 
/dev/sda2   /dev/sdb2   -> md0 
/dev/sda3   /dev/sdb3   -> md1
/dev/sda4   /dev/sdb4   -> md2
(sda1 and sdb1 are both swap, no raid)

The funny thing is that during installation the devices are identified as:

/dev/sdf2   /dev/sdg2   -> md0 
/dev/sdf3   /dev/sdg3   -> md1
/dev/sdf4   /dev/sdg4   -> md2

Please let me know if I should provide more details.
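
(For context: arrays created by installers of that era used version-0.90 md metadata, whose superblock sits in the last 64 KiB-aligned block of each member partition. Below is a minimal Python sketch of checking a member for that magic; has_md090_superblock is an illustrative name, not anaconda code, and the 0.90 layout is an assumption about these arrays.)

import os
import struct

MD_SB_MAGIC = 0xa92b4efc   # magic word of the version-0.90 md superblock
RESERVED = 64 * 1024       # the superblock occupies the last 64 KiB-aligned block

def has_md090_superblock(dev):
    # dev is a member partition, e.g. "/dev/sda2" (or "/dev/sdf2" after renaming)
    fd = os.open(dev, os.O_RDONLY)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)
        offset = (size // RESERVED) * RESERVED - RESERVED
        os.lseek(fd, offset, os.SEEK_SET)
        magic, = struct.unpack("<I", os.read(fd, 4))
        return magic == MD_SB_MAGIC
    finally:
        os.close(fd)

print(has_md090_superblock("/dev/sda2"))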



Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Choose "Custom Install"
2.
3.
  

Actual Results:  Can't go any further, as the md2 partition is /home and cannot be formatted.

Expected Results:  Allow me to choose the existing mdX volumes.

Additional info:
Comment 1 Jeremy Katz 2005-04-27 01:31:08 EDT
Can you provide a dd of the first meg of one of the raid volumes?
Comment 2 Stig Nielsen 2005-04-27 15:24:47 EDT
Created attachment 113733 [details]
dd /dev/md1 output
Comment 3 Stig Nielsen 2005-04-27 15:37:19 EDT
Thanks, Jeremy

Attachment 113733 [details] is the output of 'dd if=/dev/md1 of=md1.dd bs=1M count=8'

Below is part of /etc/fstab as of FC3:

/dev/md0                /                       ext3    defaults        1 1
LABEL=/boot             /boot                   ext3    defaults        1 2
/dev/md2                /home                   ext3    defaults        1 2
LABEL=/opt              /opt                    ext3    defaults        1 2
/dev/md1                /usr                    ext3    defaults        1 2
/dev/sdb1               swap                    swap    defaults        0 0
/dev/sda1               swap                    swap    defaults        0 0
Comment 4 Jeremy Katz 2005-04-27 16:17:23 EDT
Hrmm, it's definitely showing up as ext3 when I run the same code that's used
for the sniffing.  Is it definitely starting the raid?  If so, can you switch to
tty2 and run

raidstart md2
python -c 'import partedUtils; print partedUtils.sniffFilesystemType("/dev/md2")'

for all of the md*?
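
(For context: sniffFilesystemType essentially reads the start of the device and matches known filesystem signatures; the "pagesize" warning later in this bug comes from that read. A minimal sketch of the idea for the ext2/ext3 case only; sniff_ext_magic is a hypothetical name, not the anaconda function.)

import struct

def sniff_ext_magic(dev):
    # The ext2/ext3 superblock starts 1024 bytes into the device; its 16-bit
    # magic field sits 56 bytes further in, i.e. at byte offset 1080.
    with open(dev, "rb") as f:
        f.seek(1080)
        data = f.read(2)
    if len(data) < 2:   # a stopped md device can return no data at all
        return None
    magic, = struct.unpack("<H", data)
    return "ext2/ext3" if magic == 0xEF53 else None

print(sniff_ext_magic("/dev/md2"))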
Comment 5 Stig Nielsen 2005-04-29 00:28:45 EDT
I'm just gonna post what I see (maybe there are some PATH problems?):
raidstart md0 returns 'usage: raidstart /dev/md[minornum]'

raidstart /dev/md0 returns nothing (seems to accept it) but does not seem to
correct the problem (going back to X, pressing Back one step and then going
forward again, selecting 'configure with DiskDruid').

python -c 'import partedUtils; print partedUtils.sniffFilesystemType("/dev/md0")'
returns something like 'partedUtils not found' for all of md0, md1 and md2 (I
tried to tee the output but the file is empty).

find -name partedUtils returns:
/mnt/runtime/usr/lib/anaconda/partedUtils.py
/mnt/runtime/usr/lib/anaconda/partedUtils.pyc
/mnt/runtime/usr/lib/python2.4/site-packages/partedmodule.so
/mnt/runtime/usr/sbin/parted
/mnt/source/Fedora/RPMS/parted-1.6.22-1.i386.rpm
/mnt/source/Fedora/RPMS/parted-devel-1.6.22-1.i386.rpm

Any suggestions? 

Thanks
Comment 6 Chris Lumens 2005-05-09 12:01:25 EDT
Try

PYTHONPATH=/mnt/runtime/usr/lib/anaconda python -c ....

instead.  The problem there is that it just can't find the location of the anaconda
python modules.
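
(For context: the PYTHONPATH= prefix simply prepends that directory to Python's module search path; the same can be done from inside the interpreter. A short sketch using the path and function from this bug:)

import sys
sys.path.insert(0, "/mnt/runtime/usr/lib/anaconda")   # same effect as the PYTHONPATH= prefix
import partedUtils
print(partedUtils.sniffFilesystemType("/dev/md2"))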
Comment 7 Stig Nielsen 2005-06-10 11:44:10 EDT
Thanks for the info. Sorry for the delay....
raidstart md1
PYTHONPATH=/mnt/runtime/usr/lib/anaconda python -c python -c 'import
partedUtils; print partedUtils.sniffFilesystemType("/dev/md1")'

returns:
* Tried to read pagesize for /dev/md1 in sniffFilesystemType and only read 0
None
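
(For context: the warning means the page-sized read from /dev/md1 returned zero bytes, which is typically what you get from an md node with no running array behind it. A minimal reproduction of that check, assuming a 4 KiB page size:)

import os

fd = os.open("/dev/md1", os.O_RDONLY)
buf = os.read(fd, 4096)   # a running array returns a full page of data
os.close(fd)
print(len(buf))           # 0 here matches the warning above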
Comment 8 Stig Nielsen 2005-06-10 13:10:33 EDT
Actually, this was tested on FC4 Test3 (3.92).

PS: there is a typo in the previous message; the command used was:
PYTHONPATH=/mnt/runtime/usr/lib/anaconda python -c 'import
partedUtils; print partedUtils.sniffFilesystemType("/dev/md1")'
Comment 9 Stig Nielsen 2005-06-12 15:00:23 EDT
Created attachment 115340 [details]
Output of 'lsmod'
Comment 10 Stig Nielsen 2005-06-12 15:01:14 EDT
Created attachment 115341 [details]
Output of 'dmesg'
Comment 11 Stig Nielsen 2005-06-12 15:01:59 EDT
Created attachment 115342 [details]
Output of 'lspci'
Comment 12 Stig Nielsen 2005-07-18 17:44:00 EDT
I tried the FC4 release and that seems to detect the volumes as ext3. 

To make sure that nothing had changed on the HW, I tried FC4 test 3 again, with
the same result as before: it did not detect the RAID volumes.

However, what you guys did in the released FC4 corrected the problem, whatever 
it was. 
Please let me know if I need to do further tests. For now, anyway, I'll change
the status to resolved.
