Bug 542022
| Field | Value |
| --- | --- |
| Summary | fedora 12 dmraid, mounting type isw doesn't work (works in ubuntu 9.10) |
| Product | Fedora |
| Component | dmraid |
| Version | 12 |
| Hardware | x86_64 |
| OS | Linux |
| Status | CLOSED DUPLICATE |
| Severity | medium |
| Priority | low |
| Reporter | fpee <fp> |
| Assignee | Heinz Mauelshagen <heinzm> |
| QA Contact | Fedora Extras Quality Assurance <extras-qa> |
| CC | agk, bmr, dwysocha, hdegoede, heinzm, lvm-team, mbroz, prockai, saxonm, sjensen |
| Target Milestone | --- |
| Target Release | --- |
| Doc Type | Bug Fix |
| Last Closed | 2010-01-13 08:21:31 UTC |
I activated your attached metadata in my testbed and got:

```
# dmraid -r
/dev/dm-16: isw, "isw_djddabgcfi", GROUP, ok, 976773166 sectors, data@ 0
/dev/dm-12: isw, "isw_djddabgcfi", GROUP, ok, 976773166 sectors, data@ 0

# dmraid -s
*** Group superset isw_djddabgcfi
--> Subset
name   : isw_djddabgcfi_Volume0
size   : 1953536000
stride : 256
type   : stripe
status : ok
subsets: 0
devs   : 2
spares : 0

# dmraid -tay
isw_djddabgcfi_Volume0: 0 1953536000 striped 2 256 /dev/dm-12 0 /dev/dm-16 0
```

Activation works fine and the size of the striped mapping makes sense, so there is no reproducer here. Is temporary remote access to your machine possible to allow for analysis?

Heinz

---

fpee,

Fedora 12 uses mdraid for activation of ISW RAID sets, so most likely your set is already activated as /dev/md127, which in turn causes dmraid to fail to activate it because it is already active.

Can you paste the output of "cat /proc/mdstat" here, please?

---

(In reply to comment #2)
> Can you paste the output of "cat /proc/mdstat" here please ?

I am seeing this issue. The filesystem on the RAID mirror is not getting mounted at boot time.
```
[root@mame ~]# dmraid -ay
RAID set "isw_egcfjehcj_Mirror" was not activated
ERROR: device "isw_egcfjehcj_Mirror" could not be found

[root@mame ~]# dmraid -tay
isw_egcfjehcj_Mirror: 0 2930272647 mirror core 2 131072 nosync 2 /dev/sdb 0 /dev/sdc 0 1 handle_errors

[root@mame ~]# dmraid -dtay
DEBUG: _find_set: searching isw_egcfjehcj
DEBUG: _find_set: not found isw_egcfjehcj
DEBUG: _find_set: searching isw_egcfjehcj_Mirror
DEBUG: _find_set: searching isw_egcfjehcj_Mirror
DEBUG: _find_set: not found isw_egcfjehcj_Mirror
DEBUG: _find_set: not found isw_egcfjehcj_Mirror
DEBUG: _find_set: searching isw_egcfjehcj
DEBUG: _find_set: found isw_egcfjehcj
DEBUG: _find_set: searching isw_egcfjehcj_Mirror
DEBUG: _find_set: searching isw_egcfjehcj_Mirror
DEBUG: _find_set: found isw_egcfjehcj_Mirror
DEBUG: _find_set: found isw_egcfjehcj_Mirror
DEBUG: set status of set "isw_egcfjehcj_Mirror" to 16
isw_egcfjehcj_Mirror: 0 2930272647 mirror core 2 131072 nosync 2 /dev/sdb 0 /dev/sdc 0 1 handle_errors
DEBUG: freeing devices of RAID set "isw_egcfjehcj_Mirror"
DEBUG: freeing device "isw_egcfjehcj_Mirror", path "/dev/sdb"
DEBUG: freeing device "isw_egcfjehcj_Mirror", path "/dev/sdc"
DEBUG: freeing devices of RAID set "isw_egcfjehcj"
DEBUG: freeing device "isw_egcfjehcj", path "/dev/sdb"
DEBUG: freeing device "isw_egcfjehcj", path "/dev/sdc"

[root@mame ~]# dmraid -r
/dev/sdc: isw, "isw_egcfjehcj", GROUP, ok, 2930277166 sectors, data@ 0
/dev/sdb: isw, "isw_egcfjehcj", GROUP, ok, 2930277166 sectors, data@ 0

[root@mame ~]# ls /dev/md*
/dev/md126  /dev/md127

/dev/md:
imsm_0

[root@mame ~]# cat /proc/mdstat
Personalities : [raid1]
md127 : inactive sdb[1](S) sdc[0](S)
      4256 blocks super external:imsm

unused devices: <none>

[root@mame ~]# mdadm -Ss
mdadm: stopped /dev/md127

[root@mame ~]# dmraid -ay
RAID set "isw_egcfjehcj_Mirror" was activated
device "isw_egcfjehcj_Mirror" is now registered with dmeventd for monitoring
RAID set "isw_egcfjehcj_Mirrorp1" was activated
The dynamic shared library "libdmraid-events-dos.so" could not be loaded:
libdmraid-events-dos.so: cannot open shared object file: No such file or directory

[root@mame ~]# ls /dev/dm-1
/dev/dm-1
```

---

Ok,

So the info you've provided shows that your RAID set is available on your system, but as /dev/md126 instead of /dev/mapper/isw_....., because, as said already, Intel RAID now uses mdraid instead of dmraid.

If you specify the correct (new) path to the device in /etc/fstab, any filesystem on there should get mounted without problems.

Regards,

Hans

---

(In reply to comment #4)
> If you specify the correct (new) path to the device in /etc/fstab, any
> filesystem on there should get mounted without problems.

If I try to mount /dev/md126 or /dev/md127, the mount fails (the md RAID is not active). So if md is running the RAID, why is it being deactivated just prior to dracut switching root?

```
# dmesg | grep md
md: bind<sdc>
md: bind<sdb>
md: bind<sdc>
md: bind<sdb>
md: raid1 personality registered for level 1
raid1: raid set md126 active with 2 out of 2 mirrors
md126: detected capacity change from 0 to 1500299594752
md: md126 switched to read-write mode.
dracut: mdadm: Started /dev/md/Mirror_0 with 2 devices
md126: p1
md: md126 stopped.
md: unbind<sdb>
md: export_rdev(sdb)
md: unbind<sdc>
md: export_rdev(sdc)
md: md126 stopped.
md: md126 stopped.
```
This leaves me with a deactivated md RAID that will not mount:

```
[root@mame ~]# cat /proc/mdstat
Personalities : [raid1]
md127 : inactive sdb[1](S) sdc[0](S)
      4256 blocks super external:imsm

unused devices: <none>
```

I can stop the md RAID:

```
[root@mame ~]# mdadm -Ss
mdadm: stopped /dev/md127

[root@mame ~]# cat /proc/mdstat
Personalities : [raid1]
unused devices: <none>
```

And then use dmraid to configure and mount the devices:

```
[root@mame ~]# dmraid -ay
RAID set "isw_egcfjehcj_Mirror" was activated
device "isw_egcfjehcj_Mirror" is now registered with dmeventd for monitoring
RAID set "isw_egcfjehcj_Mirrorp1" was activated
```

Then I have a usable filesystem.

---

Ah, I understand your problem now :)

Your Intel BIOS RAID sets are not listed in /etc/mdadm.conf, so they don't get re-activated by rc.sysinit; this is bug 537329, which is being fixed for Fedora 13. The still-active mdraid container keeps the devices busy, causing dmraid to fail.

Given that you seem to want to use dmraid anyway, you can work around this issue by specifying "noiswmd" on the grub / kernel command line, which will make dracut not use mdraid for the RAID set and will make rc.sysinit properly activate it with dmraid. Add this to grub.conf to make it permanent.

Regards,

Hans

*** This bug has been marked as a duplicate of bug 537329 ***
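Making the "noiswmd" workaround permanent means appending it to the kernel line in GRUB Legacy's configuration, which Fedora 12 uses. A minimal sketch of such a stanza follows; the kernel version, root device, and paths are illustrative placeholders, not values from the reporter's machine — only the appended `noiswmd` argument is the actual workaround:

```
# Hypothetical /boot/grub/grub.conf stanza (GRUB Legacy, Fedora 12).
# Kernel version and root= value below are placeholders; the workaround
# is solely the trailing "noiswmd" argument on the kernel line.
title Fedora (2.6.31.5-127.fc12.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.31.5-127.fc12.x86_64 ro root=/dev/mapper/... noiswmd
        initrd /initramfs-2.6.31.5-127.fc12.x86_64.img
```

With this in place, dracut skips mdraid activation for the ISW set at boot, so dmraid can claim the member disks instead.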
Created attachment 374323 [details]
Output of dmraid -rD

Description of problem:
Can't mount an isw fakeraid setup in Fedora 12; the same setup works in Ubuntu 9.10.

Version-Release number of selected component (if applicable):
1.0.0.rc16-4

How reproducible:
Every time.

Steps to Reproduce:
1. dmraid -a y
2. ...
3. profit?

Actual results:

```
dmraid -a y
RAID set "isw_djddabgcfi_Volume0" was not activated
ERROR: device "isw_djddabgcfi_Volume0" could not be found
```

Expected results:
Activated!

Additional info:

Ubuntu info:

```
# dmraid -V
dmraid version:                 1.0.0.rc15 (2008-09-17) shared
dmraid library version:         1.0.0.rc15 (2008.09.17)
device-mapper version:          4.15.0

# dmraid -a y -vvv
WARN: locking /var/lock/dmraid/.lock
NOTICE: /dev/sdc: asr     discovering
NOTICE: /dev/sdc: ddf1    discovering
NOTICE: /dev/sdc: hpt37x  discovering
NOTICE: /dev/sdc: hpt45x  discovering
NOTICE: /dev/sdc: isw     discovering
NOTICE: /dev/sdc: isw metadata discovered
NOTICE: /dev/sdc: jmicron discovering
NOTICE: /dev/sdc: lsi     discovering
NOTICE: /dev/sdc: nvidia  discovering
NOTICE: /dev/sdc: pdc     discovering
NOTICE: /dev/sdc: sil     discovering
NOTICE: /dev/sdc: via     discovering
NOTICE: /dev/sdb: asr     discovering
NOTICE: /dev/sdb: ddf1    discovering
NOTICE: /dev/sdb: hpt37x  discovering
NOTICE: /dev/sdb: hpt45x  discovering
NOTICE: /dev/sdb: isw     discovering
NOTICE: /dev/sdb: isw metadata discovered
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi     discovering
NOTICE: /dev/sdb: nvidia  discovering
NOTICE: /dev/sdb: pdc     discovering
NOTICE: /dev/sdb: sil     discovering
NOTICE: /dev/sdb: via     discovering
NOTICE: /dev/sda: asr     discovering
NOTICE: /dev/sda: ddf1    discovering
NOTICE: /dev/sda: hpt37x  discovering
NOTICE: /dev/sda: hpt45x  discovering
NOTICE: /dev/sda: isw     discovering
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi     discovering
NOTICE: /dev/sda: nvidia  discovering
NOTICE: /dev/sda: pdc     discovering
NOTICE: /dev/sda: sil     discovering
NOTICE: /dev/sda: via     discovering
NOTICE: added /dev/sdc to RAID set "isw_djddabgcfi"
NOTICE: added /dev/sdb to RAID set "isw_djddabgcfi"
RAID set "isw_djddabgcfi_Volume0" was activated
INFO: Activating GROUP raid set "isw_djddabgcfi"
NOTICE: discovering partitions on "isw_djddabgcfi_Volume0"
NOTICE: /dev/mapper/isw_djddabgcfi_Volume0: dos discovering
NOTICE: /dev/mapper/isw_djddabgcfi_Volume0: dos metadata discovered
NOTICE: created partitioned RAID set(s) for /dev/mapper/isw_djddabgcfi_Volume0
RAID set "isw_djddabgcfi_Volume01" was activated
INFO: Activating partition raid set "isw_djddabgcfi_Volume01"
WARN: unlocking /var/lock/dmraid/.lock

# dmsetup table
isw_djddabgcfi_Volume01: 0 1953531904 linear 252:0 2048
isw_djddabgcfi_Volume0: 0 1953536512 striped 2 256 8:16 0 8:32 0

# dmraid -b
/dev/sdc: 976773168 total, "5QM0TKGC"
/dev/sdb: 976773168 total, "5QM0S24A"
/dev/sda: 586072368 total, "WD-WX20C7905779"

# dmraid -r
/dev/sdc: isw, "isw_djddabgcfi", GROUP, ok, 976773166 sectors, data@ 0
/dev/sdb: isw, "isw_djddabgcfi", GROUP, ok, 976773166 sectors, data@ 0

# dmraid -s
*** Group superset isw_djddabgcfi
--> Active Subset
name   : isw_djddabgcfi_Volume0
size   : 1953536512
stride : 256
type   : stripe
status : ok
subsets: 0
devs   : 2
spares : 0
WARN: unlocking /var/lock/dmraid/.lock

# dmraid -a y
RAID set "isw_djddabgcfi_Volume0" already active
RAID set "isw_djddabgcfi_Volume01" already active
```

Fedora 12 info:

```
# dmraid -V
dmraid version:                 1.0.0.rc16 (2009.09.16) debug
dmraid library version:         1.0.0.rc16 (2009.09.16)
device-mapper version:          4.15.0

# dmraid -a y -vvv
WARN: locking /var/lock/dmraid/.lock
WARN: missing dm serial file for /dev/dm-0
WARN: missing dm serial file for /dev/dm-1
NOTICE: /dev/dm-1: asr     discovering
NOTICE: /dev/dm-1: ddf1    discovering
NOTICE: /dev/dm-1: hpt37x  discovering
NOTICE: /dev/dm-1: hpt45x  discovering
NOTICE: /dev/dm-1: isw     discovering
NOTICE: /dev/dm-1: jmicron discovering
NOTICE: /dev/dm-1: lsi     discovering
NOTICE: /dev/dm-1: nvidia  discovering
NOTICE: /dev/dm-1: pdc     discovering
NOTICE: /dev/dm-1: sil     discovering
NOTICE: /dev/dm-1: via     discovering
NOTICE: /dev/dm-0: asr     discovering
NOTICE: /dev/dm-0: ddf1    discovering
NOTICE: /dev/dm-0: hpt37x  discovering
NOTICE: /dev/dm-0: hpt45x  discovering
NOTICE: /dev/dm-0: isw     discovering
NOTICE: /dev/dm-0: jmicron discovering
NOTICE: /dev/dm-0: lsi     discovering
NOTICE: /dev/dm-0: nvidia  discovering
NOTICE: /dev/dm-0: pdc     discovering
NOTICE: /dev/dm-0: sil     discovering
NOTICE: /dev/dm-0: via     discovering
NOTICE: /dev/sdc: asr     discovering
NOTICE: /dev/sdc: ddf1    discovering
NOTICE: /dev/sdc: hpt37x  discovering
NOTICE: /dev/sdc: hpt45x  discovering
NOTICE: /dev/sdc: isw     discovering
NOTICE: /dev/sdc: isw metadata discovered
NOTICE: /dev/sdc: jmicron discovering
NOTICE: /dev/sdc: lsi     discovering
NOTICE: /dev/sdc: nvidia  discovering
NOTICE: /dev/sdc: pdc     discovering
NOTICE: /dev/sdc: sil     discovering
NOTICE: /dev/sdc: via     discovering
NOTICE: /dev/sdb: asr     discovering
NOTICE: /dev/sdb: ddf1    discovering
NOTICE: /dev/sdb: hpt37x  discovering
NOTICE: /dev/sdb: hpt45x  discovering
NOTICE: /dev/sdb: isw     discovering
NOTICE: /dev/sdb: isw metadata discovered
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi     discovering
NOTICE: /dev/sdb: nvidia  discovering
NOTICE: /dev/sdb: pdc     discovering
NOTICE: /dev/sdb: sil     discovering
NOTICE: /dev/sdb: via     discovering
NOTICE: /dev/sda: asr     discovering
NOTICE: /dev/sda: ddf1    discovering
NOTICE: /dev/sda: hpt37x  discovering
NOTICE: /dev/sda: hpt45x  discovering
NOTICE: /dev/sda: isw     discovering
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi     discovering
NOTICE: /dev/sda: nvidia  discovering
NOTICE: /dev/sda: pdc     discovering
NOTICE: /dev/sda: sil     discovering
NOTICE: /dev/sda: via     discovering
NOTICE: added /dev/sdc to RAID set "isw_djddabgcfi"
NOTICE: added /dev/sdb to RAID set "isw_djddabgcfi"
RAID set "isw_djddabgcfi_Volume0" was not activated
ERROR: device "isw_djddabgcfi_Volume0" could not be found
INFO: Activating GROUP raid set "isw_djddabgcfi"
WARN: unlocking /var/lock/dmraid/.lock

# dmraid -b
/dev/dm-1: 12320768 total, "N/A"
/dev/dm-0: 163725312 total, "N/A"
/dev/sdc: 976773168 total, "5QM0TKGC"
/dev/sdb: 976773168 total, "5QM0S24A"
/dev/sda: 586072368 total, "WD-WX20C7905779"

# dmraid -r
/dev/sdc: isw, "isw_djddabgcfi", GROUP, ok, 976773166 sectors, data@ 0
/dev/sdb: isw, "isw_djddabgcfi", GROUP, ok, 976773166 sectors, data@ 0

# dmraid -s
*** Group superset isw_djddabgcfi
--> Subset
name   : isw_djddabgcfi_Volume0
size   : 1953536000
stride : 256
type   : stripe
status : ok
subsets: 0
devs   : 2
spares : 0

# dmraid -a y
RAID set "isw_djddabgcfi_Volume0" was not activated
ERROR: device "isw_djddabgcfi_Volume0" could not be found
```

The output of dmraid -rD on Fedora 12 is attached.
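The root cause identified in the comments (bug 537329) is that the Intel BIOS RAID container has no entries in /etc/mdadm.conf, so rc.sysinit never re-activates it. For an imsm container, the entries that `mdadm --examine --scan` generates look roughly like the sketch below; the UUIDs are illustrative placeholders, not values from the reporter's system (the volume name `Mirror_0` is taken from the dmesg output above):

```
# Hypothetical /etc/mdadm.conf entries for an Intel (imsm) BIOS RAID
# container and its member volume; the UUIDs are placeholders.
ARRAY metadata=imsm UUID=11111111:22222222:33333333:44444444
ARRAY /dev/md/Mirror_0 container=11111111:22222222:33333333:44444444 member=0 UUID=55555555:66666666:77777777:88888888
```

With such entries present, the container and its member volume are assembled at boot and the filesystem can be mounted from the /dev/md device, which is the fix that landed for Fedora 13.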