Bug 484120 - F10 does not properly initialize/activate existing intel (isw) fakeraid RAID0 volume post-install
Summary: F10 does not properly initialize/activate existing intel (isw) fakeraid RAID0 volume post-install
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Fedora
Classification: Fedora
Component: dmraid
Version: 10
Hardware: x86_64
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Assignee: LVM and device-mapper development team
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2009-02-04 21:31 UTC by Vince Worthington
Modified: 2009-12-18 07:48 UTC
CC List: 8 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2009-12-18 07:48:39 UTC
Type: ---
Embargoed:



Description Vince Worthington 2009-02-04 21:31:16 UTC
Description of problem:
Fedora 10 gold does not properly initialize/activate existing fakeraid RAID0 volumes post-install.  The immediate symptom (if any existing partitions were "edited" during installation to assign them mount points) is that you are dropped to a maintenance prompt, because anaconda adds these partitions to /etc/fstab but they are not found at boot.  (They are added by UUID, but it doesn't matter how you specify them in /etc/fstab.)

The boot scripts attempt to create and activate the device-mapper mappings; however, the partitions cannot be mounted, with mount claiming there is no ext3 superblock.

This was a "new install" installation of F10 gold (x86_64) onto an existing F8 (x86_64) system.  Unknown if same will occur on x86, but the partitions are accessible from an x86 Fedora 9 Live CD after activating via 'dmraid -ay'.

Version-Release number of selected component (if applicable):
Still occurs with dmraid-1.0.0.rc15-2.fc10.x86_64

How reproducible:
always

Steps to Reproduce:
1. Install F10 from gold media, using updates= on the command line per comment
   #25 of bug 468649 to retrieve the updated .img file.  isw volumes will not
   be found by anaconda without doing this.
2. Create system filesystems (/ and /boot) on non-isw hard disks
3. "Edit" existing ext3 partitions under isw fakeraid device to add
   mountpoints but do not format them.
4. Complete installation
5. Reboot
  
Actual results:
Existing RAID0 ext3 partitions on the isw fakeraid are not mounted, leading to a maintenance-mode prompt for the root password.  After supplying the password, it is not possible to mount these existing partitions; mount cites an inability to find a valid filesystem superblock.


Expected results:
Successful mount from /etc/fstab.

Additional info:
Not 100% sure, but it appears that some of the partition mappings are built against the wrong block major/minor, comparing results under the Fedora 9 Live CD (where these partitions work) against Fedora 10.

Under Fedora 10:

[root@pharaoh ~]# dmsetup table
isw_fibdbbjhh_RAID0_1p8: 0 61448562 linear 253:1 1320976818
isw_fibdbbjhh_RAID0_1p7: 0 460792332 linear 253:1 860184423
isw_fibdbbjhh_RAID0_1: 0 1953535488 striped 2 256 8:0 0 8:32 0
isw_fibdbbjhh_RAID0_1p6: 0 819202482 linear 253:1 40981878
isw_fibdbbjhh_RAID0_1p5: 0 40965687 linear 253:1 16128
isw_fibdbbjhh_RAID0_1p1: 0 1953504000 linear 253:0 16065

and output of 'dmraid -ay -t':

[root@pharaoh ~]# dmraid -ay -t
isw_fibdbbjhh_RAID0_1: 0 1953535488 striped 2 256 /dev/sda 0 /dev/sdc 0
isw_fibdbbjhh_RAID0_1p5: 0 40965687 linear /dev/mapper/isw_fibdbbjhh_RAID0_1 16128
isw_fibdbbjhh_RAID0_1p6: 0 819202482 linear /dev/mapper/isw_fibdbbjhh_RAID0_1 40981878
isw_fibdbbjhh_RAID0_1p7: 0 460792332 linear /dev/mapper/isw_fibdbbjhh_RAID0_1 860184423
isw_fibdbbjhh_RAID0_1p8: 0 61448562 linear /dev/mapper/isw_fibdbbjhh_RAID0_1 1320976818

Also, here's an ls -l of /dev/mapper:

[root@pharaoh ~]# ls -l /dev/mapper
crw-rw---- 1 root root  10, 63 2009-02-04 08:31 control
brw-rw---- 1 root root 253,  0 2009-02-04 08:31 isw_fibdbbjhh_RAID0_1
brw-rw---- 1 root root 253,  1 2009-02-04 08:31 isw_fibdbbjhh_RAID0_1p1
brw-rw---- 1 root root 253,  2 2009-02-04 08:31 isw_fibdbbjhh_RAID0_1p5
brw-rw---- 1 root root 253,  3 2009-02-04 08:31 isw_fibdbbjhh_RAID0_1p6
brw-rw---- 1 root root 253,  4 2009-02-04 08:31 isw_fibdbbjhh_RAID0_1p7
brw-rw---- 1 root root 253,  5 2009-02-04 08:31 isw_fibdbbjhh_RAID0_1p8


Interestingly, when I tried to deactivate them with 'dmraid -an', I still ended
up with two entries under /dev/mapper:

crw-rw---- 1 root root  10, 63 2009-02-04 08:31 control
brw-rw---- 1 root root 253,  0 2009-02-04 08:31 isw_fibdbbjhh_RAID0_1
brw-rw---- 1 root root 253,  1 2009-02-04 08:31 isw_fibdbbjhh_RAID0_1p1

and a dmesg line indicating one is currently open:

device-mapper: ioctl: unable to remove open device isw_fibdbbjhh_RAID0_1
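
(The open counts behind that message can be inspected with standard dmsetup
columns; nothing below is specific to this bug:)

dmsetup info -c -o name,open,major,minor   # Open > 0 means the mapping is
                                           # still held and cannot be removed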


Comparing the output of 'dmsetup table' to the output of 'ls -l /dev/mapper',
it looks like p5-p8 are linked to the wrong device major/minor: 'dmsetup
table' shows them built on 253:1 (isw_fibdbbjhh_RAID0_1p1), not 253:0
(isw_fibdbbjhh_RAID0_1, the whole array).  This may also explain the failed
deactivation with 'dmraid -an': the leftover p1 mapping is itself built on
253:0, which keeps the whole-array device open and produces the "unable to
remove open device" message above.

Under F9 live CD, all of them are "linked" to the block major/minor
representing the whole device, not partition 1, and I am able to use the
devices.
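
The stacking can be confirmed directly with 'dmsetup deps', which prints the
(major, minor) pairs a mapping is built on.  The expected output below is
inferred from the tables above, not captured from the machine:

dmsetup deps isw_fibdbbjhh_RAID0_1p5   # -> 1 dependencies : (253, 1)  (p1, wrong)
dmsetup deps isw_fibdbbjhh_RAID0_1p1   # -> 1 dependencies : (253, 0)  (whole array)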


Here's what the corresponding output looks like under the F9 Live CD (where the partitions are mountable and usable after activating with 'dmraid -ay'):

[root@localhost ~]# dmraid -ay -t
isw_fibdbbjhh_RAID0_1: 0 1953536512 striped 2 256 /dev/sda 0 /dev/sdc 0

[root@localhost ~]# dmsetup table
isw_fibdbbjhh_RAID0_15: 0 40965687 linear 253:2 16128
isw_fibdbbjhh_RAID0_1: 0 1953536512 striped 2 256 8:0 0 8:32 0
live-osimg-min: 0 8388608 snapshot 7:3 7:1 P 8
live-rw: 0 8388608 snapshot 7:3 7:4 P 8
isw_fibdbbjhh_RAID0_18: 0 61448562 linear 253:2 1320976818
isw_fibdbbjhh_RAID0_17: 0 460792332 linear 253:2 860184423
isw_fibdbbjhh_RAID0_16: 0 819202482 linear 253:2 40981878

[root@localhost ~]# ls -l /dev/mapper/*
crw-rw---- 1 root root  10, 60 2009-02-04 03:45 /dev/mapper/control
brw-rw---- 1 root disk 253,  2 2009-02-04 08:48 /dev/mapper/isw_fibdbbjhh_RAID0_1
brw-rw---- 1 root disk 253,  3 2009-02-04 08:48 /dev/mapper/isw_fibdbbjhh_RAID0_15
brw-rw---- 1 root disk 253,  4 2009-02-04 08:48 /dev/mapper/isw_fibdbbjhh_RAID0_16
brw-rw---- 1 root disk 253,  5 2009-02-04 08:48 /dev/mapper/isw_fibdbbjhh_RAID0_17
brw-rw---- 1 root disk 253,  6 2009-02-04 08:48 /dev/mapper/isw_fibdbbjhh_RAID0_18
brw-rw---- 1 root disk 253,  1 2009-02-04 03:45 /dev/mapper/live-osimg-min
brw-rw---- 1 root disk 253,  0 2009-02-04 03:45 /dev/mapper/live-rw


Notice that *all* of the partitions are linked to the parent block major/minor (253:2) under F9.

Please let me know if there is any other info needed.

Thanks,
Vince

Comment 2 Hans de Goede 2009-02-12 18:23:14 UTC
Vince, can you try editing /etc/rc.d/rc.sysinit and replacing
                        dmname=$(resolve_dm_name $x)
                        [ -z "$dmname" ] && continue
                        /sbin/dmraid -ay -i -p "$dmname" >/dev/null 2>&1

with just:
                        /sbin/dmraid -ay -i -p "$x" >/dev/null 2>&1

I think that will fix this.
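
For context, a sketch of how that stanza of rc.sysinit might read after the
edit.  Only the three quoted lines and their one-line replacement come from
this comment; the surrounding loop (and the $dmraidsets variable) is an
assumption about the script's structure:

for x in $dmraidsets ; do
        # old: translate the set name through resolve_dm_name first
        #dmname=$(resolve_dm_name $x)
        #[ -z "$dmname" ] && continue
        #/sbin/dmraid -ay -i -p "$dmname" >/dev/null 2>&1
        # new: pass the raw raid set name straight to dmraid
        /sbin/dmraid -ay -i -p "$x" >/dev/null 2>&1
done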

Comment 3 Vince Worthington 2009-02-16 15:19:27 UTC
Hi Hans,

This does indeed correct the problem: the isw set is now initialized and the partitions are mountable.
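
(For the record, the post-fix state can be sanity-checked with standard
tools; minor numbers may differ from the listings above:)

dmsetup table | grep isw_   # p5-p8 should reference the whole-array device
mount -a                    # the /etc/fstab entries should now mount cleanly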

Thanks for your help.
Vince

Comment 4 Bug Zapper 2009-11-18 10:59:08 UTC
This message is a reminder that Fedora 10 is nearing its end of life.
Approximately 30 (thirty) days from now Fedora will stop maintaining
and issuing updates for Fedora 10.  It is Fedora's policy to close all
bug reports from releases that are no longer maintained.  At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '10'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 10's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that 
we may not be able to fix it before Fedora 10 is end of life.  If you 
would still like to see this bug fixed and are able to reproduce it 
against a later version of Fedora please change the 'version' of this 
bug to the applicable version.  If you are unable to change the version, 
please add a comment here and someone will do it for you.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events.  Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

The process we are following is described here: 
http://fedoraproject.org/wiki/BugZappers/HouseKeeping

Comment 5 Bug Zapper 2009-12-18 07:48:39 UTC
Fedora 10 changed to end-of-life (EOL) status on 2009-12-17. Fedora 10 is 
no longer maintained, which means that it will not receive any further 
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of 
Fedora please feel free to reopen this bug against that version.

Thank you for reporting this bug and we are sorry it could not be fixed.

