Bug 186842

Summary: dm-striped device sizes must be multiple of chunk-size for 2.6.16 kernels
Product: Fedora
Reporter: Dwaine Garden <dwainegarden>
Component: dmraid
Assignee: Heinz Mauelshagen <heinzm>
Status: CLOSED WONTFIX
Severity: high
Priority: medium
Version: 5
CC: agk, boaojatista, chabotc, dwysocha, erikj, fedoradocs, grinnz, growltiger, hdegoede, horsley1953, jcohen, jeholden, j-engel, jerry.carter, jochen, mbroz, netllama, orion, pjones, shc, triage, wdeleersnyder, wenzel
Hardware: All
OS: Linux
Whiteboard: bzcl34nup
Doc Type: Bug Fix
Last Closed: 2008-05-06 15:39:15 UTC
Attachments (flags: none on all):
- nvidia nforce4 sata stripe data from dmraid -rD
- Metadata for Stripe Raid0
- Metadata for VIA striping array
- nvidia-David_wenzel.raid0
- `dmraid -rD` output
- `dmraid -rD` output
- dmraid -rD output files
- `dmraid -rD` output after patch dmraid-1.0.0.rc11-pre1-1.x86_64.rpm applied
- `dmraid -rD` output after patch dmraid-1.0.0.rc11-pre1-1.x86_64.rpm applied
- The output of the new dmraid
- Metadata on x86_64 nforce4
- Raid0 after installing new dmraid
- dmraid debug information (nvraid-data.tar.bz2)
- dmraid debug information (nvraid-data.tar.bz2)
- nvidia raid0 metadata (dmraid -rD)
- Anaconda crashing on dmraid too

Description Dwaine Garden 2006-03-27 00:56:13 UTC
Description of problem:
I am using the dmraid utility to detect and configure my VIA
Software Raid RAID0 array.  When upgrading to 2.6.16 from 2.6.15, the
system failed to boot because the device-mapper array was not built. 
I have traced the problem to this patch that limits dm-stripe to
targets that are multiples of the chunk size.  I don't know whether to
consider this a problem with device-mapper or with the VIA Software
Raid BIOS that built the array or Anaconda.  

How do you fix this problem?

A patch was submitted for kernel 2.6.16. Here it is:

[PATCH] dm stripe: Fix bounds

The dm-stripe target currently does not enforce that the size of a stripe
device be a multiple of the chunk-size.  Under certain conditions, this can
lead to I/O requests going off the end of an underlying device.  This
test-case shows one example.

echo "0 100 linear /dev/hdb1 0" | dmsetup create linear0
echo "0 100 linear /dev/hdb1 100" | dmsetup create linear1
echo "0 200 striped 2 32 /dev/mapper/linear0 0 /dev/mapper/linear1 0" | \
   dmsetup create stripe0
dd if=/dev/zero of=/dev/mapper/stripe0 bs=1k

This will produce the output:
dd: writing '/dev/mapper/stripe0': Input/output error
97+0 records in
96+0 records out

And in the kernel log will be:
attempt to access beyond end of device
dm-0: rw=0, want=104, limit=100

The patch will check that the table size is a multiple of the stripe
chunk-size when the table is created, which will prevent the above striped
device from being created.

This should not affect tools like LVM or EVMS, since in all the cases I can
think of, striped devices are always created with the sizes being a
multiple of the chunk-size.

The size of a stripe device must be a multiple of its chunk-size.

(akpm: that typecast is quite gratuitous)

Signed-off-by: Kevin Corry <kevcorry.com>
Signed-off-by: Alasdair G Kergon <agk>
Signed-off-by: Andrew Morton <akpm>
Signed-off-by: Linus Torvalds <torvalds>




Version-Release number of selected component (if applicable):
dmraid-1.0.0.rc9-FC5_5.2.i386.rpm
device-mapper-1.02.02-3.2.i386.rpm
device-mapper-multipath-0.4.5-12.2.i386.rpm
kernel-smp-2.6.15-1.2054_FC5.i686.rpm
kernel-smp-2.6.16-1.2088_FC6.i686.rpm
kernel-smp-devel-2.6.15-1.2054_FC5.i686.rpm
kernel-smp-devel-2.6.16-1.2088_FC6.i686.rpm

How reproducible:
Have Anaconda format the disks and create a volume.  I tried re-installing 8 times; same
results every time when trying to boot 2.6.16 from a striped raid created with FC5.

Steps to Reproduce:
1. Create a raid0 set through the BIOS of the VIA chipset.  Set the chunk size to 64
or any value.
2. Boot the FC5 DVD and install FC5 onto the computer.
3. When the installation finishes, update the running kernel with yum to kernel 2.6.16.x.
4. Reboot; device-mapper will fail with the following message:
device-mapper: dm-stripe: Target length not divisible by chunk size.
device-mapper: error adding target to table.

  
Actual results:


Expected results:


Additional info:

My lvm volume currently on my computer.

dmsetup status
via_ecfdfiehfa: 0 312499998 striped
VolGroup00-LogVol01: 0 4063232 linear
VolGroup00-LogVol00: 0 308150272 linear
via_ecfdfiehfap2: 0 312287535 linear
via_ecfdfiehfap1: 0 208782 linear


*** Active Set
name   : via_ecfdfiehfa
size   : 312499998
stride : 128
type   : stripe
status : ok
subsets: 0
devs   : 2
spares : 0

Comment 1 Dwaine Garden 2006-03-27 01:05:21 UTC
system-config-lvm also crashes trying to run.  Here is the backtrace.

Traceback (most recent call last):
  File "/usr/share/system-config-lvm/system-config-lvm.py", line 138, in ?
    runFullGUI()
  File "/usr/share/system-config-lvm/system-config-lvm.py", line 123, in runFullGUI
    blvm = baselvm(glade_xml, app)
  File "/usr/share/system-config-lvm/system-config-lvm.py", line 68, in __init__
    self.lvmm = lvm_model()
  File "/usr/share/system-config-lvm/lvm_model.py", line 142, in __init__
    self.__block_device_model = BlockDeviceModel()
  File "/usr/share/system-config-lvm/BlockDeviceModel.py", line 19, in __init__
    bd = BlockDevice(devname)
  File "/usr/share/system-config-lvm/BlockDevice.py", line 41, in __init__
    self.reload()
  File "/usr/share/system-config-lvm/BlockDevice.py", line 62, in reload
    self.addNoAlign(part.beg, part.end, part.id, part.bootable, part.num)
  File "/usr/share/system-config-lvm/BlockDevice.py", line 213, in addNoAlign
    self.__insert(part)
  File "/usr/share/system-config-lvm/BlockDevice.py", line 218, in __insert
    self.__insert2(part, self.__segs, False)
  File "/usr/share/system-config-lvm/BlockDevice.py", line 248, in __insert2
    raise BlockDeviceErr_cannotFit()
BlockDevice.BlockDeviceErr_cannotFit: <BlockDevice.BlockDeviceErr_cannotFit
instance at 0xb7aee36c>

Dwaine

Comment 2 Alasdair Kergon 2006-03-27 11:44:11 UTC
A separate bugzilla should be filed against system-config-lvm for a clean error message rather than a crash.

The dm stripe mapping requires that all the sectors supplied to it are usable and that the chunks are completely contained within those sectors.

dmraid needs to determine what mapping was intended to be used and correct the table it supplies to device-mapper, using linear mappings directly if necessary: was the last chunk meant to be smaller than the rest, for example?
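
A minimal sketch of the kind of correction described here, using numbers that appear later in this bug for the reporter's set and assumed device names (do not run this blindly against a live set; work read-only or from a backup):

CHUNK=128                              # 64K chunk = 128 sectors (assumed)
LEN=312499998                          # length dmraid currently passes for this set
ROUNDED=$(( LEN / CHUNK * CHUNK ))     # 312499968, a whole number of chunks
echo "0 $ROUNDED striped 2 $CHUNK /dev/sda 0 /dev/sdb 0" | dmsetup create via_ecfdfiehfa
# If the trailing LEN-ROUNDED sectors (30 here) were ever meant to be addressable,
# they could instead be appended to the table as a small linear target.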

Comment 3 Dwaine Garden 2006-03-27 19:08:08 UTC
How would I fix this situation manually?  I'm trying to upgrade to 2.6.16
to test some drivers.

I just used Anaconda to format the hard drives.  I did not intend to have the last
chunk smaller; that is what Anaconda decided to use automatically.

Is there anything else I can provide to help?

Dwaine

Comment 4 Heinz Mauelshagen 2006-03-28 14:01:43 UTC
Implemented a fix for dmraid.
Waiting for reporter's metadata sample in order to confirm it.

Comment 5 Dwaine Garden 2006-03-28 21:28:38 UTC
When I get home from work, I will forward the information into
the bug report.

Thanks for working on this bug.

Dwaine

Comment 6 Dwaine Garden 2006-03-29 00:12:54 UTC
Here is the information which you requested.  Let me know if you require anything
else.

lvs
LV       VG         Attr   LSize   Origin Snap%  Move Log Copy%
LogVol00 VolGroup00 -wi-ao 146.94G
LogVol01 VolGroup00 -wi-ao   1.94G

pvs
PV         VG         Fmt  Attr PSize   PFree
/dev/dm-2  VolGroup00 lvm2 a-   148.91G 32.00M

vgs
VG         #PV #LV #SN Attr   VSize   VFree
VolGroup00   1   2   0 wz--n- 148.91G 32.00M

lvdisplay
  --- Logical volume ---
  LV Name                /dev/VolGroup00/LogVol00
  VG Name                VolGroup00
  LV UUID                2UAm3y-F3AN-Xqxt-pCne-95Fj-tVYy-Khc3LP
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                146.94 GB
  Current LE             4702
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:3

  --- Logical volume ---
  LV Name                /dev/VolGroup00/LogVol01
  VG Name                VolGroup00
  LV UUID                NMLvUS-FQzH-uejT-QlIP-nxtg-p05T-J9h6hE
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.94 GB
  Current LE             62
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:4

lvscan
  ACTIVE            '/dev/VolGroup00/LogVol00' [146.94 GB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol01' [1.94 GB] inherit

pvdisplay
  --- Physical volume ---
  PV Name               /dev/dm-2
  VG Name               VolGroup00
  PV Size               148.91 GB / not usable 0
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              4765
  Free PE               1
  Allocated PE          4764
  PV UUID               1F5mi5-vFiP-2mLu-b6rX-wDND-3gvE-HtFk3p

Dwaine



Comment 7 Heinz Mauelshagen 2006-03-29 11:00:17 UTC
Dwaine,

I need your "dmraid -rD" output (*.{dat,size,offset} files).
tar/bzip2 them in via-Dwaine_Garden-raid0.tar.bz2 and send them to me, please.

Heinz

Comment 8 Dwaine Garden 2006-03-29 22:19:10 UTC
I'll complete that right now and send it to you.

Dwaine


Comment 9 erikj 2006-03-29 23:07:54 UTC
I just hit this with my nvidia bios assisted RAID when I upgraded from 
2.6.15-1.2054_FC5smp to kernel-smp-2.6.16-1.2080_FC5.  So I'm back to using
2054.

Regarding comment #1, that is bug 187201.

Comment 10 erikj 2006-03-29 23:16:20 UTC
If it helps, here is the output from the various lv commands and dmraid -rD
for my case.

I didn't do anything special here either - anaconda during initial install
picked out everything and I didn't resize anything.  I was just so happy
it detected my nforce4 sata stripe :)

[root@corazon erikj]# dmraid -rD
/dev/sda: nvidia, "nvidia_eccjcebd", stripe, ok, 488397166 sectors, data@ 0
/dev/sdb: nvidia, "nvidia_eccjcebd", stripe, ok, 488397166 sectors, data@ 0
[root@corazon erikj]# lvs
  LV       VG         Attr   LSize  Origin Snap%  Move Log Copy%
  LogVol00 VolGroup00 -wi-ao 73.06G
  LogVol01 VolGroup00 -wi-ao  1.94G
[root@corazon erikj]# pvs
  PV                            VG         Fmt  Attr PSize  PFree
  /dev/mapper/nvidia_eccjcebdp3 VolGroup00 lvm2 a-   75.03G 32.00M
[root@corazon erikj]# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  VolGroup00   1   2   0 wz--n- 75.03G 32.00M
[root@corazon erikj]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/VolGroup00/LogVol00
  VG Name                VolGroup00
  LV UUID                uQqz53-WhJ8-JPil-wTFa-PSMQ-2VDH-XZTyH2
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                73.06 GB
  Current LE             2338
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:4

  --- Logical volume ---
  LV Name                /dev/VolGroup00/LogVol01
  VG Name                VolGroup00
  LV UUID                JTlAqz-Jbsd-X8eB-DI55-0b2V-tgoA-UyHX14
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.94 GB
  Current LE             62
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:5

[root@corazon erikj]# lvscan
  ACTIVE            '/dev/VolGroup00/LogVol00' [73.06 GB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol01' [1.94 GB] inherit
[root@corazon erikj]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/mapper/nvidia_eccjcebdp3
  VG Name               VolGroup00
  PV Size               75.03 GB / not usable 0
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              2401
  Free PE               1
  Allocated PE          2400
  PV UUID               Ry2f2z-bNSx-J5pE-y30H-BC4H-j51K-ar5nsR


Comment 11 Dwaine Garden 2006-03-30 00:12:01 UTC
Here is the output.

dmraid -rD
/dev/sda: via, "via_ecfdfiehfa", stripe, ok, 156301487 sectors, data@ 0
/dev/sdb: via, "via_ecfdfiehfa", stripe, ok, 156249999 sectors, data@ 0


Comment 12 Dwaine Garden 2006-03-30 00:16:43 UTC
Erik, I'm glad that you submitted the information from your system.  Both of your drives
report the same number of sectors, unlike what I got after the various
installations.

Here is my version of dmraid too.
dmraid version:         1.0.0.rc9 (2005.09.23) debug
dmraid library version: 1.0.0.rc9 (2005.09.23)
device-mapper version:  4.5.0

Dwaine

Comment 13 Dwaine Garden 2006-03-31 22:25:39 UTC
Heinz, Are there any results from the testing?

Dwaine

Comment 14 Heinz Mauelshagen 2006-04-01 08:32:31 UTC
Dwaine,

I'm still missing your metadata (dmraid -rD;tar jcvf
via-Dwaine_Garden-raid0.tar.bz2 *.{dat,offset,size}).
Send the tarball to me.

Thanks,
Heinz

Comment 15 erikj 2006-04-01 15:33:22 UTC
Created attachment 127176 [details]
nvidia nforce4 sata stripe data from dmraid -rD

FWIW, here is the output for my case if it helps.  -Erik

Comment 16 Dwaine Garden 2006-04-01 16:33:36 UTC
Created attachment 127178 [details]
Metadata for Stripe Raid0

Metadata for Stripe Raid0

Comment 17 Heinz Mauelshagen 2006-04-03 14:08:14 UTC
Dwaine,

this is the mapping for your RAID0 set with the fixes I came up with applied:
via_ecfdfiehfa: 0 312499968 striped 2 128 /dev/sda 0 /dev/sdb 0

Please check whether the RAID set size of 312499968 sectors (which is divisible by the
stride size of 128 sectors with no remainder) looks right to you.

Comment 18 Dwaine Garden 2006-04-04 04:36:39 UTC
Heinz, do not laugh.  How do I check to see if the set size of 312499968 is good?


Comment 19 Heinz Mauelshagen 2006-04-04 14:04:26 UTC
Look at the size the BIOS shows.
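
If the BIOS is terse, a rough sanity check can also be done from the running system using the per-disk sector counts that dmraid -rD already prints; a sketch with the numbers from comment #11 (illustration only):

SDA=156301487; SDB=156249999; STRIDE=128      # data sectors per member, stride in sectors
SUM=$(( (SDA < SDB ? SDA : SDB) * 2 ))        # RAID0 uses the smaller member twice -> 312499998
echo $(( SUM / STRIDE * STRIDE ))             # largest stride multiple: 312499968
# If this matches the size in comment #17 and is close to what the BIOS reports, the mapping is plausible.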

Comment 20 Thorsten Leemhuis (ignored mailbox) 2006-04-05 11:15:07 UTC
(In reply to comment #9)
> I just hit this with my nvidia bios assisted RAID when I upgraded from 
> 2.6.15-1.2054_FC5smp to kernel-smp-2.6.16-1.2080_FC5.  So I'm back to using
> 2054.

The same problem happened to me during a test with an Intel ICH7R. 2054 works, 2080
fails with

device-mapper: dm-stripe: Target length not divisible by chunk size.

Comment 21 Johannes Engel 2006-04-05 21:21:04 UTC
Created attachment 127374 [details]
Metadata for VIA striping array

Comment 22 Andy Burns 2006-04-06 14:57:43 UTC
(In reply to comment #20)

> Same problem happened to me during test with and Intel ICH7R. 2054 works, 2080
> fails with
> 
> device-mapper: dm-stripe: Target length not divisible by chunk size.

Ditto for an ICH7R with a 50GiB mirror + 400GiB stripe on 2x250GiB SATA.

I was only testing that dmraid had improved to the point of being installable
with FC5 final (which it had), but then 2080_FC5 broke it; I have now re-installed with
mdraid (I never intended to stay on dmraid).

Comment 23 Dwaine Garden 2006-04-06 22:00:09 UTC
The only information the VIA BIOS displays is the size of each drive in the
raid.   There is no other technical information about the raid0.

The only other information the BIOS displays is the chunk size used, which is 64K.

I will post all the information the VIA BIOS displays when I get home from work.

Dwaine


Comment 24 Martin Bürge 2006-04-07 04:45:06 UTC
I've got the same problem with an nvidia raid controller: 2.6.15 works but 2.6.16
won't. How can I help?

Comment 25 Dwaine Garden 2006-04-07 07:14:18 UTC
This is what I get with dmraid -tay

dmraid -tay
via_ecfdfiehfa: 0 312499998 striped 2 128 /dev/sda 0 /dev/sdb 0
via_ecfdfiehfa1: 0 208782 linear /dev/mapper/via_ecfdfiehfa 63
via_ecfdfiehfa2: 0 312287535 linear /dev/mapper/via_ecfdfiehfa 208845

Dwaine

Comment 26 Heinz Mauelshagen 2006-04-07 12:08:26 UTC
Dwaine,

the fix seems to work:
312499998 is not divisible by 128 without a remainder, but
312499968 is (as pasted in comment #17).


Comment 27 Heinz Mauelshagen 2006-04-07 12:11:30 UTC
Martin,

please send me your metadata along the lines of comment #7,
i.e. attach a file named nvidia-Martin_Buerge-raid0.tar.bz2 to the bugzilla
so I can verify my fix with your metadata.

Comment 28 Dwaine Garden 2006-04-07 22:03:41 UTC
Two questions Heinz.  How do I get a hold of this fix, and also how do I fix 
the stripe raid on my box?

Dwaine

Comment 29 Martin Bürge 2006-04-08 06:15:15 UTC
In the graphical terminal the dmraid command is not found; on tty1 the dmraid
command is found, but how can I save its output to a file from the tty?
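
Redirecting the output to a file is enough; a small sketch (paths are examples only) that also produces the tarball requested in comment #27, assuming dmraid -rD drops its *.dat/*.offset/*.size files in the current directory as comment #14 implies:

cd /root
dmraid -rD > dmraid-output.txt 2>&1
tar jcvf nvidia-Martin_Buerge-raid0.tar.bz2 *.dat *.offset *.size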

Comment 30 Martin Bürge 2006-04-08 06:32:48 UTC
I write the output here in the box, because the output was small.

/dev/sda: "pdc" and "nvidia" formats discovered (using nvidia)!
/dev/sdb: "pdc" and "nvidia" formats discovered (using nvidia)!
/dev/sda: nvidia, "nvidia_ibcffcgh", stripe ok, 312581806 sector, data@ 0
/dev/sdb: nvidia, "nvidia_ibcffcgh", stripe ok, 312581806 sector, data@ 0

Comment 31 David Wenzel 2006-04-08 07:43:20 UTC
Created attachment 127499 [details]
nvidia-David_wenzel.raid0

Comment 32 David Wenzel 2006-04-08 07:48:25 UTC
[root@newserver-uptime ~]# dmraid -rD
/dev/sda: "sil" and "nvidia" formats discovered (using nvidia)!
/dev/sdb: "sil" and "nvidia" formats discovered (using nvidia)!
/dev/sda: nvidia, "nvidia_begbjged", stripe, ok, 321672958 sectors, data@ 0
/dev/sdb: nvidia, "nvidia_begbjged", stripe, ok, 321672958 sectors, data@ 0


[root@newserver-uptime ~]# dmraid -tay
/dev/sda: "sil" and "nvidia" formats discovered (using nvidia)!
/dev/sdb: "sil" and "nvidia" formats discovered (using nvidia)!
nvidia_begbjged: 0 643345916 striped 2 128 /dev/sdb 0 /dev/sda 0
nvidia_begbjged1: 0 641234412 linear /dev/mapper/nvidia_begbjged 63
nvidia_begbjged2: 0 2104515 linear /dev/mapper/nvidia_begbjged 641234475




Comment 33 Adam Linehan 2006-04-09 00:24:35 UTC
Created attachment 127516 [details]
`dmraid -rD` output

Comment 34 Adam Linehan 2006-04-09 00:25:30 UTC
Created attachment 127517 [details]
`dmraid -rD` output

I've run into this same issue with the 2.6.16-1.2080_FC5 kernel upgrade,
running an Asus K8N-DL (dual opteron 244 with nforce 4). It looks like Heinz
already has the problem worked out, but here's another data set for testing
anyways.

[root@home ~]# dmraid -rD
/dev/sda: "sil" and "nvidia" formats discovered (using nvidia)!
/dev/sdb: "sil" and "nvidia" formats discovered (using nvidia)!
/dev/sdc: "sil" and "nvidia" formats discovered (using nvidia)!
/dev/sda: nvidia, "nvidia_fdacjjdc", stripe, ok, 321672958 sectors, data@ 0
/dev/sdb: nvidia, "nvidia_fdacjjdc", stripe, ok, 321672958 sectors, data@ 0
/dev/sdc: nvidia, "nvidia_fdacjjdc", stripe, ok, 321672958 sectors, data@ 0

[root@home ~]# dmraid -tay
/dev/sda: "sil" and "nvidia" formats discovered (using nvidia)!
/dev/sdb: "sil" and "nvidia" formats discovered (using nvidia)!
/dev/sdc: "sil" and "nvidia" formats discovered (using nvidia)!
nvidia_fdacjjdc: 0 965018874 striped 3 8 /dev/sda 0 /dev/sdb 0 /dev/sdc 0
nvidia_fdacjjdc1: 0 341959527 linear /dev/mapper/nvidia_fdacjjdc 257103
nvidia_fdacjjdc2: 0 256977 linear /dev/mapper/nvidia_fdacjjdc 63
nvidia_fdacjjdc3: 0 8385930 linear /dev/mapper/nvidia_fdacjjdc 342216630
nvidia_fdacjjdc4: 0 614405925 linear /dev/mapper/nvidia_fdacjjdc 350602560

Comment 35 Debbie Deutsch 2006-04-09 01:55:01 UTC
I've got the same problem as described in comment #9.  Are you (Heinz, anyone
else working on this) still collecting data to check against your fix?

In my case it's an nvidia nforce4 chipset and a pair of SATA drives in RAID0
format.    The motherboard is from Biostar (the custom board they use in their
iDeq 330N bare-bones system).  2.6.16-1.2054_FC5 works; upgrading to
2.6.16-1.2080_FC4 does not.  The first error message is device-mapper: dm-stripe:
Target length not divisible by chunk size.

Comment 36 greg kai 2006-04-09 07:01:53 UTC
Same here: the motherboard is an MSI K8N (nforce 3) with an Athlon64 3500+ CPU, using 2 200GB
SATA drives in RAID0. Same error from the device mapper when upgrading the
kernel to 2.6.16-1.2080_FC5.
dmraid -rD
/dev/sda: nvidia, "nvidia_bdfchefe", stripe, ok, 398297086 sectors, data@ 0
/dev/sdb: nvidia, "nvidia_bdfchefe", stripe, ok, 398297086 sectors, data@ 0
I can tar the sd[a,b]_nvidia files if needed.
Is the fix already available? Using 2.6.15 is not really an option for me, as I
need to install the nvidia driver from livna :(
Also, does the fix preserve the raid data? If not, I will reinstall everything
using software RAID...
Thanks!

Comment 37 Heinz Mauelshagen 2006-04-09 10:30:09 UTC
The patch is about creating the right size for the RAID0 mapping, and it will
preserve the raid data. I'm about to send it to selected parties (i.e. those who
filed in this bugzilla) to confirm this before I release it publicly.

Comment 38 Jack Holden 2006-04-09 15:46:48 UTC
I have the same problem as described above by others.  (nvraid0 array on FC5
works with 2.6.15-2054 but not with 2.6.16-2080).  I would be willing to test
your patch if you would like additional testers.

Comment 39 Jerry Carter 2006-04-10 01:32:15 UTC
(In reply to comment #38)
> I have the same problem as described above by others.  (nvraid0 array on FC5
> works with 2.6.15-2054 but not with 2.6.16-2080).  I would be willing to test
> your patch if you would like additional testers.



Comment 40 Dwaine Garden 2006-04-10 02:37:49 UTC
Heinz, I would be more than happy to help test out the patch before it goes public.

Dwaine


Comment 41 David Chalmers 2006-04-10 05:19:00 UTC
I have the same problem as everybody else, using an Asus A8N-SLI (nForce 4).  I
had wanted to use the 2.6.16-2080 kernel for the ntfs and fglrx modules, and
also want to access ntfs volumes on the raid.  If you don't have enough
volunteers, I would also be very happy to test the patch.

When I run dmraid -r it returns:
/dev/sda: "sil" and "nvidia" formats discovered (using nvidia)!
/dev/sdb: "sil" and "nvidia" formats discovered (using nvidia)!
/dev/sda: nvidia, "nvidia_bibehfde", stripe, ok, 390721966 sectors, data@ 0
/dev/sdb: nvidia, "nvidia_bibehfde", stripe, ok, 390721966 sectors, data@ 0

Comment 42 Martin Bürge 2006-04-10 12:01:23 UTC
I would like to test the patch too, if you have one.

Comment 43 Dan Book 2006-04-10 15:43:09 UTC
*** Bug 187371 has been marked as a duplicate of this bug. ***

Comment 44 Dan Book 2006-04-10 15:44:31 UTC
I am having the same problem as detailed in bug 187371 above, with no LVs
involved anywhere, on an NVRAID0 array. I'll send you the requested data later
today if you like.

Comment 45 Dave Taylor 2006-04-10 19:59:45 UTC
I have been watching this bug with interest.  I have been hit by this bug on 2
separate machines, both are Marvell Raid controllers with 2 x 300gb harddrives
(RAID1).  Upgraded the kernel to the latest and it panics after reboot.

I would be very interested in testing out the patch as these machines have no
mission critical data.

Comment 46 Richard Westwell 2006-04-11 12:19:46 UTC
I'm not using FC, just a Gentoo system with dmraid, but I thought I'd post
this here for info.

I'm getting the same "device-mapper: dm-stripe: Target length not divisible by chunk
size" error during the activation of dmraid on 2.6.16.
If I try to use dmsetup manually within a shell inside the initramfs image
echo "0 290452220 striped 2 128 8:0 0 8:16 0" | dmsetup create
(values written down from the use of 2.6.15 with dmraid)
I end up with the same error.

If I reduce the total volume size to the nearest multiple of the chunk size,
290452220 -> 290452096 for a chunk size of 128 sectors (64K) (probably better
to round down rather than up, to avoid crossing into the raid metadata),
I can manually map the overall stripe via dmsetup:
echo "0 290452096 striped 2 128 8:0 0 8:16 0" | dmsetup create nvidia_cadcdfaj
works okay.

Looking at the last partition on the disk under 2.6.15 / dmsetup table:
"nvidia_cadcdfaj6: 0 142978437 linear 253:0 147460698"
142978437 + 147460698 = 290439135, which I believe should be the last sector of
the last partition, so it easily falls within the new stripe mapping above.

I think dmraid just needs to round the raid volume size down to the nearest
multiple of the chunk size when creating the stripe map.
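
A tiny helper along these lines, only automating the round-down done by hand above (the values and the 8:0/8:16 major:minor numbers are the ones from this example, so adjust them for your own set):

TOTAL=290452220                     # striped length reported under 2.6.15
CHUNK=128                           # 64K chunk in sectors
SIZE=$(( TOTAL / CHUNK * CHUNK ))   # 290452096
echo "0 $SIZE striped 2 $CHUNK 8:0 0 8:16 0" | dmsetup create nvidia_cadcdfaj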

Comment 47 Heinz Mauelshagen 2006-04-11 14:45:25 UTC
In reference to comment #37, I've put an i386 pre-release version
of dmraid up at

http://people.redhat.com/heinzm/sw/dmraid/tst/dmraid-1.0.0.rc11-pre1-1.i386.rpm

To test safely you:

o should have a backup of your striped sets data
o start with read-only access to your data

Expecting your test results.

Thanks,
Heinz

Comment 48 Jerry Carter 2006-04-11 15:10:56 UTC
If you want an x86_64 version tested I would be extremely happy to do so.

Jerry

Comment 49 Heinz Mauelshagen 2006-04-11 15:38:13 UTC
Please find it here:

http://people.redhat.com/heinzm/sw/dmraid/tst/dmraid-1.0.0.rc11-pre1-1.x86_64.rpm

Comment 50 Richard Westwell 2006-04-11 18:18:32 UTC
just tried the 64bit version of rc11 under gentoo (had to extract the dmraid
static binary)
works great with 2.6.16 thanks

Comment 51 Dan Book 2006-04-11 18:21:26 UTC
I will be glad to back up the one partition on my NVRAID0 that isn't backed up,
and give that 64bit version a test later today :)

Comment 52 Martin Bürge 2006-04-11 20:32:58 UTC
The same problem after new dmraid version:

device-mapper: dm-stripe: Target length not divisible by chunk size
device-mapper: reload ioctl failed: invalid argument

Comment 53 Martin Bürge 2006-04-11 20:41:10 UTC
I forgot to mention that I tested the x86_64 version.

Comment 54 Dwaine Garden 2006-04-11 23:35:59 UTC
Tested the i386 package; same problem with the 2.6.16 kernel.
device-mapper: dm-stripe: Target length not divisible by chunk size
device-mapper: reload ioctl failed: invalid argument

Dwaine


Comment 55 Dan Book 2006-04-11 23:59:33 UTC
Created attachment 127642 [details]
dmraid -rD output files

Comment 56 Dan Book 2006-04-12 00:07:25 UTC
I should note I am booting from NVRAID0 on x86_64.

With the test dmraid, 2.6.15 still boots fine, though with the message:
RAID set "nvidia_bfaeeaaf" already active
just after udev loads. I don't recall if it did this before in FC5, I don't
think it did though.

However with 2.6.16, at first it was beginning to boot but then I realized it
was using my duplicate backup partitions that had the same label :). I changed
fstab and grub.conf to not use labels, and now 2.6.16 just can't find any
partitions. It first says unable to access resume device (LABEL=SWAP-nvidia_bfa)
which is odd because in fstab, the swap partition is referred to as /dev/dm-7,
not with the label. Is this set somewhere else too? Following from that, it just
kernel panics as it would if it can't find any root partition.

I attached my dmraid -rD output in case it is of any use.

Comment 57 Dwaine Garden 2006-04-12 03:09:48 UTC
Here is my output for the new raid volume size after the new dmraid was installed.

dmraid -tay
via_ecfdfiehfa: 0 312499968 striped 2 128 /dev/sda 0 /dev/sdb 0
via_ecfdfiehfa1: 0 208782 linear /dev/mapper/via_ecfdfiehfa 63
via_ecfdfiehfa2: 0 312287535 linear /dev/mapper/via_ecfdfiehfa 208845

Here is the old volume size...

dmraid -tay
via_ecfdfiehfa: 0 312499998 striped 2 128 /dev/sda 0 /dev/sdb 0
via_ecfdfiehfa1: 0 208782 linear /dev/mapper/via_ecfdfiehfa 63
via_ecfdfiehfa2: 0 312287535 linear /dev/mapper/via_ecfdfiehfa 208845


It looks like it should be OK, but I still get the error with 2.6.16 kernels?

Dwaine

Comment 58 Adam Linehan 2006-04-12 04:36:23 UTC
Hi Heinz,

Sad to report, but same issue as before with the x86_64 patch installed. It does
look like the volume size has changed as well. Here is the output from `dmraid
-tay` with the patch:

/dev/sda: "sil" and "nvidia" formats discovered (using nvidia)!
/dev/sdb: "sil" and "nvidia" formats discovered (using nvidia)!
/dev/sdc: "sil" and "nvidia" formats discovered (using nvidia)!
nvidia_fdacjjdc: 0 965018856 striped 3 8 /dev/sda 0 /dev/sdb 0 /dev/sdc 0
nvidia_fdacjjdc1: 0 341959527 linear /dev/mapper/nvidia_fdacjjdc 257103
nvidia_fdacjjdc2: 0 256977 linear /dev/mapper/nvidia_fdacjjdc 63
nvidia_fdacjjdc3: 0 8385930 linear /dev/mapper/nvidia_fdacjjdc 342216630
nvidia_fdacjjdc4: 0 614405925 linear /dev/mapper/nvidia_fdacjjdc 350602560

The size reported before the patch was 965018874. I don't know if it will help,
but I'll attach another archive with the `dmraid -rD` output.



Adam

Comment 59 Adam Linehan 2006-04-12 04:39:41 UTC
Created attachment 127648 [details]
`dmraid -rD` output after patch dmraid-1.0.0.rc11-pre1-1.x86_64.rpm applied

Comment 60 Adam Linehan 2006-04-12 04:44:08 UTC
Created attachment 127650 [details]
`dmraid -rD` output after patch dmraid-1.0.0.rc11-pre1-1.x86_64.rpm applied

Comment 61 David Chalmers 2006-04-12 08:41:36 UTC
Hi Heinz,

Better news from my side <g>.  My raid0 is now available using the i386 patch
with kernel 2.6.16-2080, and the partitions on it can be mounted.

I do still have the same devmapper error messages that I had before applying the
patch (identical to those listed by Dwaine in #54); they appear shortly after the
kernel loads at boot time.

I boot into FC5 from a non-raid IDE drive - might this be why it is working for
me and not for others?

David

Comment 62 Jerry Carter 2006-04-12 10:28:31 UTC
Hi Heinz,

I am afraid the patch did not work on my x86_64 machine with the nForce 4
chipset; the error messages are unchanged.

[root@scar ~]# dmraid -rD
/dev/sda: nvidia, "nvidia_beifdaie", stripe, ok, 625142446 sectors, data@ 0
/dev/sdb: nvidia, "nvidia_beifdaie", stripe, ok, 625142446 sectors, data@ 0
/dev/sdc: nvidia, "nvidia_beifdaie", stripe, ok, 625142446 sectors, data@ 0
/dev/sdd: nvidia, "nvidia_beifdaie", stripe, ok, 625142446 sectors, data@ 0

[root@scar ~]# dmraid -tay
nvidia_beifdaie: 0 2500569600 striped 4 128 /dev/sdb 0 /dev/sdc 0 /dev/sdd 0
/dev/sda 0
ERROR: dos: reading /dev/mapper/nvidia_beifdaie[Invalid argument]

I will try any patch - fortunately this drive has no data on it yet, so I can
mess with it.

Jerry

Comment 63 Heinz Mauelshagen 2006-04-12 12:47:02 UTC
Strange, all the output provided shows lengths divisible by the chunk (or stride) size.

To all reporters who have tested my pre-release but haven't sent me their metadata
yet: can you please tar it up as explained in comment #7 and send it to me for
investigation?

Comment 64 Martin Bürge 2006-04-12 13:16:23 UTC
Created attachment 127657 [details]
The Output of the new dmraid.

Comment 65 Martin Bürge 2006-04-12 13:16:58 UTC
The Output in Text: /dev/sda: "pdc" and "nvidia" formats discovered (using nvidia)!
/dev/sdb: "pdc" and "nvidia" formats discovered (using nvidia)!
/dev/sda: nvidia, "nvidia_ibcffcgh", stripe, ok, 312581806 sectors, data@ 0
/dev/sdb: nvidia, "nvidia_ibcffcgh", stripe, ok, 312581806 sectors, data@ 0


Comment 67 Jerry Carter 2006-04-12 14:56:08 UTC
Created attachment 127660 [details]
Metadata on x86_64 nforce4

Comment 68 Dwaine Garden 2006-04-12 21:09:00 UTC
Here is the patch which was applied to 2.6.16.   With this new test version of
dmraid, the stripe is divisible by the chunk size of 64K (128 sectors).

Should be ok.....
Dwaine

--- drivers/md/dm-stripe.c
+++ drivers/md/dm-stripe.c
@@ -103,9 +103,15 @@ static int stripe_ctr(struct dm_target *
 		return -EINVAL;
 	}
 
+	if (((uint32_t)ti->len) & (chunk_size - 1)) {
+		ti->error = "dm-stripe: Target length not divisible by "
+		    "chunk size";
+		return -EINVAL;
+	}
+
 	width = ti->len;
 	if (sector_div(width, stripes)) {
-		ti->error = "dm-stripe: Target length not divisable by "
+		ti->error = "dm-stripe: Target length not divisible by "
 		    "number of stripes";
 		return -EINVAL;
 	}
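
A quick way to see the new check in action for the numbers in this bug (chunk_size is 128 sectors, a power of two, so "& (chunk_size - 1)" is the same as "mod 128"); shell arithmetic is used here only as an illustration:

echo $(( 312499998 & 127 ))   # prints 30 -> the constructor rejects the table with -EINVAL
echo $(( 312499968 & 127 ))   # prints 0  -> table accepted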


Comment 69 Dwaine Garden 2006-04-12 21:16:39 UTC
Maybe someone could dump the chunk size that gets reported to the kernel
during boot.

ti->error = "dm-stripe: Target length not divisible by chunk size:%d", chunk_size;


Recompile the kernel and see what the output is.

Dwaine
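
Rather than rebuilding the kernel, the length and chunk size actually handed to dm-stripe can be read back from userspace on a kernel that still loads the table (2.6.15 here); a sketch using the set name from this bug:

dmsetup table via_ecfdfiehfa
# expected roughly: 0 312499998 striped 2 128 8:0 0 8:16 0
#   second field = length, fifth field = chunk size in sectors;
#   312499998 % 128 = 30, hence the rejection on 2.6.16.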


Comment 71 David Chalmers 2006-04-13 09:26:18 UTC
Heinz,

dmraid -rD returns:

/dev/sda: "sil" and "nvidia" formats discovered (using nvidia)!
/dev/sdb: "sil" and "nvidia" formats discovered (using nvidia)!
/dev/sda: nvidia, "nvidia_bibehfde", stripe, ok, 390721966 sectors, data@ 0
/dev/sdb: nvidia, "nvidia_bibehfde", stripe, ok, 390721966 sectors, data@ 0

I have tried running lvm with the patched dmraid, but it still refuses to go.

David

Comment 73 Dwaine Garden 2006-04-13 13:35:42 UTC
Created attachment 127703 [details]
Raid0 after installing new dmraid

Metadata for Stripe Raid0 after patched dmraid

Dwaine

Comment 74 Johannes Engel 2006-04-13 15:19:20 UTC
I installed the dmraid update, and after that removed and installed again the
2.6.16 kernel to ensure the ramdisk gets rebuilt.
Here is the output of Fedora Core 5, kernel 2.6.16 (2080):

Uncompressing Linux... Ok, booting the kernel.
Red Hat nash version 5.0.32 starting
  Reading all physical volumes. This may take a while...
  /dev/sda2: read failed after 0 of 2048 at 73928736768: Input/output error
  No volume groups found
  /dev/sda2: read failed after 0 of 2048 at 73928736768: Input/output error
  Unable to find volume group "VolGroup00"
Buffer I/O error on device sda2, logical block 36898016

and many more Buffer I/O errors before kernel panic

Comment 75 Ziga Mlinar 2006-04-13 23:07:46 UTC
Having the same problem on amd64 using Gentoo.
I only see an .rpm version of rc11-pre1-1 at the link you posted above.

Could you release a .tar.bz2 file too, please?

I would like to test.

zxy

Comment 76 Dwaine Garden 2006-04-14 05:29:07 UTC
Ziga, just do an update to the existing rpm?

yum update dmraid-1.0.0.rc11-pre1-1.x86_64.rpm



Comment 78 Sascha Schmidt 2006-04-18 17:27:37 UTC
I've upgraded to dmraid-1.0.0.rc11-1 ...
But now if I try to update the kernel to 2.6.16 I get the following error:

Running Transaction
  Installing: kernel                       ######################### [1/1]
grubby fatal error: unable to find a suitable template

Comment 79 David Chalmers 2006-04-19 06:16:13 UTC
Created attachment 127974 [details]
dmraid debug information (nvraid-data.tar.bz2)

Comment 80 David Chalmers 2006-04-19 06:16:58 UTC
Created attachment 127975 [details]
dmraid debug information (nvraid-data.tar.bz2)

Comment 81 David Chalmers 2006-04-19 06:24:02 UTC
Apologies for the double comments and double attachments above (I am hoping that
it will not occur when I post this comment!).  I forgot to include the debug
information in #79 along with comment #71 last week.

David

Comment 82 Dwaine Garden 2006-04-24 13:05:56 UTC
Any news?

Dwaine

Comment 83 Jochen Reinhardt 2006-04-25 17:35:30 UTC
Hello,

I got the same problem with an update to 2.6.16 on FC5.
The same thing happens with Gentoo: the root device cannot be found, and a BusyBox
shell is opened.
The error message says:

ERROR: dos: reading /dev/mapper/sil_afagadcabbbj[No such file or directory]

dmraid cannot read the partition table from the drive.
I don't know if these problems are connected in some way - but both only appear
with kernel 2.6.16; 2.6.15 just works fine.

What is the current status of this issue? Is someone working on it and will
there be an update in the near future? Thanks a lot!

Jochen

Comment 84 Martin Bürge 2006-04-25 17:44:06 UTC
Heinz, is this a problem with dmraid, the kernel, or perhaps lvm?

Comment 85 Karl Wagner 2006-04-25 19:27:35 UTC
I have read this thread with interest.

It appears to me (it may not be stated because it's too obvious, or it may be
here and I missed it) that just updating dmraid won't help. The new version would
need to be in the initrd as well, because the system can't load the new dmraid when it
can't mount the root partition.

I am not sure how you distribute initrds, but it seems to me you would need to
put this new dmraid into them before we will see it working.

If I have this wrong in some way, a quick comment to that effect and an email to
me at karl [at] mouse-hole.com would be appreciated.

Comment 86 Dwaine Garden 2006-04-25 23:52:25 UTC
How do we update initrd image?

Dwaine


Comment 87 Dan Book 2006-04-26 04:31:36 UTC
/sbin/mkinitrd
see man mkinitrd.
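
A minimal invocation, assuming the 2.6.16 kernel version discussed in this bug (substitute the version you are repairing; -f overwrites the existing image, so keep a copy):

mkinitrd -f /boot/initrd-2.6.16-1.2080_FC5.img 2.6.16-1.2080_FC5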

Comment 88 Jochen Reinhardt 2006-04-26 06:19:17 UTC
Updating the initrd with mkinitrd did not change anything. Anyway, I extracted a
copy of the original initrd (the one that boots with kernel 2.6.15) to get an
idea of what is inside. I expected to find dmraid in the /bin directory, but it is
definitely not there...
The init script does, however, do some kind of raid initialization. The commands are:

dm create nvidia_acbfeihf 0 980469500 striped 2 128 8:16 0 8:0 0
dm partadd nvidia_acbfeihf
mkrootdev -t ext3 -o defaults,ro dm-7

The numbers are the same in both images, but with kernel 2.6.16 it fails.
Anyway, what exactly does the dm command do? It does not exist in the initrd as an
executable. Is it some kind of undocumented feature of nash? (nash is the
interpreter that runs the script.)

In my opinion updating dmraid and recreating the initrd does not make sense. I
gave it a try anyway, and it behaved as expected. And if some kind of dmraid
is somehow included in the ramdisk, it would have to be statically linked,
because there are no libraries available at boot time. But compiling the
latest dmraid sources statically failed because of some missing references...

The question is: what changed in nash and the raid modules from 2.6.15 to 2.6.16
to make the raid devices fail? I do not think that this is a dmraid problem. To
me, it seems to be a kernel issue, as I had the same experience with Gentoo.
Maybe this bug should also be filed in the kernel bug list?
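
For inspecting (or, as a last resort, hand-editing) that init script, a sketch assuming the FC5 initrd really is a gzipped cpio archive as comment 90 suggests; file names and the edited value are illustrative only, and a backup of the image is strongly advised:

mkdir /tmp/initrd && cd /tmp/initrd
zcat /boot/initrd-2.6.16-1.2080_FC5.img | cpio -idmv
grep "dm create" init
# e.g.: dm create nvidia_acbfeihf 0 980469500 striped 2 128 8:16 0 8:0 0
# 980469500 is not a multiple of 128; the largest multiple below it is 980469376.
# After editing, repack (the kernel expects the newc cpio format):
find . | cpio -o -H newc | gzip -9 > /boot/initrd-2.6.16-1.2080_FC5.new.img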

Comment 89 Dwaine Garden 2006-04-26 06:36:16 UTC
This was the patch which caused the grief.  The patch is correct though.

[PATCH] dm stripe: Fix bounds

The dm-stripe target currently does not enforce that
the size of a stripe
device be a multiple of the chunk-size.  Under certain
conditions, this can
lead to I/O requests going off the end of an
underlying device.  This
test-case shows one example.

Dwaine

Comment 90 Jochen Reinhardt 2006-04-29 18:55:04 UTC
OK, version 1.0.0.rc11-pre1 of dmraid works with kernel 2.6.16, but I could not get
a booting ramdisk for Fedora. How can I create one? And is dmraid used in the
initrd? I did not find it in the ramdisk, but I am not sure whether gunzipping and
extracting files with cpio reveals all files of the ramdisk...

I then decided to give Gentoo another chance, and it works!

I had to insert the dmraid sources into the genkernel tool and fix the config
file to use the newest version. I then created the ramdisk and the computer
boots fine!

Thanks a lot.

Jochen Reinhardt

Comment 91 Dwaine Garden 2006-04-30 06:20:47 UTC
Help!

Could someone help me? I tried to update the initrd image, but now the computer
does not boot at all anymore.

mkinitrd updated my 2.6.15 img file, and now it crashes when booting.   Could
someone help me get mkinitrd to refresh my img file so I can at least boot with
2.6.15-2054 again?

What is the command line to get mkinitrd to properly refresh the image file for
kernel 2.6.15-2054?

Comment 92 erikj 2006-04-30 14:51:31 UTC
Something you could try is booting the FC5 installer in rescue mode.  Let it
mount your filesystems (hopefully it will :)

If I recall, the rescue env will mount your root at /mnt/sysimage and if you
had a separate /boot, there would be a /mnt/sysimage/boot.  It will give
you a shell.

When you are at the shell prompt, this is a good time to decide if you might
want to copy some older versions of RPMs in to /mnt/sysimage/tmp.  
For example, if the old 2.6.15-2054 kernel isn't installed any more, you 
might copy the RPM in to /mnt/sysimage/tmp.  If you need to back anything
else to original versions, say like the dmraid stuff, copy the respective
RPMs in to /mnt/sysimage/tmp.  (I can't remember where the rescue env
mounts the install CD at, but you should also be able to manually configure
networking if necessary).  You can figure out what rpm versions you have
from RPM something like this:

chroot /mnt/sysimage rpm -qa

Now you can chroot in to /mnt/sysimage:

chroot /mnt/sysimage

Now you're able force-install older versions of any RPMs you might need.
You could use 'rpm -Uvh <rpm> --force' or similar.  For the kernel,
if you want to keep old versions around, you might use 
'rpm -ivh <rpm> --force' instead.

Now, installing the kernel rpm should trigger a new mkinitrd run.  However,
to do it by hand (still within the chroot), you could do something like this:

mkinitrd /boot/initrd-2.6.15-1.2054_FC5smp.img 2.6.15-1.2054_FC5smp

(Take off the smp if you're using the single processor kernel).

Now you can exit the chroot, and exit the rescue environment.  Hopefully 
that will help you.

There are probably a zillion ways to look at this, and my untested 
instructions above are just one.  Maybe someone else has better ideas.
I"m not sure if this will help.  Good luck.

Comment 93 Dwaine Garden 2006-04-30 17:01:31 UTC
Thanks for responding, Erik.  I did exactly as your instructions listed.   So now
I'm trying to back out of the dmraid test rpm.  The problem is mounting the
rescue cd when booted into rescue mode.    Why is there no device reference in /dev?

I'm using the DVD in rescue mode.

I googled for mounting the cdrom in rescue mode, but did not find any reference.

So it looks like we need to update the dm stuff in the kernel too?  If not, then
the initrd image is being built with the original version from Fedora 5.

8*)  At least I do not get any error messages when running mkinitrd.

Dwaine
 

Comment 94 erikj 2006-04-30 20:29:25 UTC
I just booted in to rescue to look around.  It appears the mknod command in
the rescue environment is special and doesn't require that you give it major
and minor numbers for well known devices.

Assuming you have an IDE cdrom, you can use mknod to create the device file
and then mount it at /mnt/source.  In my case, the DVD rom is the primary
device on the 2nd IDE controller -- /dev/hdc.  So I did this:

mknod /dev/hdc

The special rescue mknod knows the right major/minors, and creates the device.
Then you can:

mount -o ro /dev/hdc /mnt/source

PS: You can use dmesg to find what your IDE controller device should be.
In my case:

hdc: LITE-ON DVDRW SHM-165H6S, ATAPI CD/DVD-ROM drive
hdc: ATAPI 48X DVD-ROM DVD-R-RAM CD-R/RW drive, 2048kB Cache, UDMA(66)


Comment 95 Dwaine Garden 2006-04-30 20:41:50 UTC
I set up an ftp server on my WinXP box.   I'm missing the device node for my cdrom when
using the rescue cd.  It's not in /dev.

OK... I fixed the problem and reverted back to rc9 of dmraid, then updated the
initrd image for 2054.  It now tries to boot and I get the Fedora status screen,
but it crashes on the e2fsck check.

I'm getting duplicate PVs:
"Found duplicate PV (blah, blah) using /dev/dm-6 not /dev/dm-2."

I also noticed that with the original dmraid, updating the initrd image for 2054
created the mapping of the volume (the very last line of the update output).

With dmraid rc11, it does not report that it is mapping the volume, so it's
missing the last step.   I think this is the reason why dmraid rc11 is not working.

I just need to fix the duplicate PV problem and I should be able to boot again.
Erik, any suggestions on fixing the duplicate PV problem?

I owe you a beer.

Dwaine


Comment 96 Adam Linehan 2006-05-01 00:32:43 UTC
Hello,

I've made the changes that Dwaine suggested above to the dm-stripe.c file, and
have finally got some output from it. For my system, it reported the length
(965018874) was not divisible by the chunk size (8). This length value is the
old value, before the dmraid patch.

I also noticed an inconsistency in the length value reported by `dmraid -s` and
`dmsetup table` after installing the patch. The size reported by both utilities
before the patch was 965018874. After the patch, dmraid reported the new size,
965018856, while dmsetup still reported 965018874.

One problem I had was creating the initrd image. The mkinitrd script uses both
dmraid and dmsetup to figure out the RAID environment. Unfortunately, with the
patch, the sizes reported by dmraid and dmsetup are different, with the result
that the mkinitrd script does not include any dm information. Moving back to the
original dmraid allowed the mkinitrd script to work, but with the old wrong values.

So, it appears that the dmsetup (device-mapper) needs to be fixed along with the
dmraid. Or at least that's my guess for now... :-)




Adam




Comment 97 Dwaine Garden 2006-05-01 03:32:36 UTC
Adam, you are 100% right.  Here is what I get.
Heinz, what changes did you make to dmraid?  We'll have to do the same for dmsetup.

dmraid -s (Size Looks Good)
*** Active Set
name   : via_ecfdfiehfa
size   : 312499968
stride : 128
type   : stripe
status : ok
subsets: 0
devs   : 2
spares : 0

dmsetup table (size is bad)
via_ecfdfiehfa2: 0 312287535 linear 253:0 208845
via_ecfdfiehfa1: 0 208782 linear 253:0 63
via_ecfdfiehfa: 0 312499998 striped 2 128 8:0 0 8:16 0
VolGroup00-LogVol01: 0 4063232 linear 253:2 308150656
VolGroup00-LogVol00: 0 308150272 linear 253:2 384

Dwaine

Comment 98 Johannes Engel 2006-05-01 11:14:49 UTC
Seems to me that Dwaine is on the right track. My infos:
dmraid reports
*** Active Set
name   : via_ebdfgdfgeg
size   : 144607488
stride : 128
type   : stripe
status : ok
subsets: 0
devs   : 2
spares : 0

dmsetup table gives
via_ebdfgdfgeg: 0 144607678 striped 2 128 8:0 0 8:16 0
via_ebdfgdfgeg2: 0 144392220 linear 253:0 208845
via_ebdfgdfgeg1: 0 208782 linear 253:0 63
VolGroup00-LogVol02: 0 16384000 linear 253:2 125829504
VolGroup00-LogVol01: 0 2097152 linear 253:2 142213504
VolGroup00-LogVol00: 0 125829120 linear 253:2 384
via_ebdfgdfgegp2: 0 144392220 linear 253:0 208845
via_ebdfgdfgegp1: 0 208782 linear 253:0 63

Comment 99 Dwaine Garden 2006-05-01 14:56:07 UTC
I'll check device-mapper and see what code needs to be patched.

Dwaine


Comment 100 Dwaine Garden 2006-05-01 21:00:54 UTC
I sent an e-mail to dm-devel.   We'll see if they can patch dmsetup like dm-
stripe.c so we can all enjoy our Fedora 5 installations.

Dwaine


Comment 101 Adam Linehan 2006-05-02 01:18:20 UTC
Cool! Thanks Dwaine. I was just looking through the bugs, and I think bug 189794
may be the dm-devel equivalent of this dmraid issue.


Adam

Comment 102 Dwaine Garden 2006-05-04 16:27:45 UTC
I got this response from dm-devel group.   

On Mon May 1 2006 4:05 pm, Dwaine_Garden.ca wrote:
> It looks like dmsetup need to be patched as well.   There is a bug opened
> up for dmraid because after this
> patch, Fedora 5 installs would not boot with kernel 2.6.16.  There is a
> patch to round down the stripe size to be multiple of the chunk-size, but
> the computers still fail trying to boot and mount volumes.
>
> After Heinz patched dmraid, it now fixed.  It looks like dmsetup needs to
> be patched as well(Or something within device-mapper).  Via_ecfdfiehfa is
> not a multiple of chunk-size,   Both dmraid and dmsetup should match.   So
> all software raid0 setup with dmraid still fail to boot under 2.6.16+.
> Nvidia, Intel and Via chipsets are confirmed impacted, but all the chipsets
> which has this software raid0 will have this problem.

There's nothing in dmsetup to patch with regards to this problem. dmsetup and 
libdevmapper have no knowledge at all of any of the device-mapper kernel 
target modules (e.g. linear, striped, mirror, snapshot). They effectively 
just act as a convenient pass-thru to the device-mapper ioctl interface. If 
you're using dmsetup, you need to know about any restrictions required by the 
target you're using, and write your mappings appropriately.

Dwaine

Comment 103 Chris Chabot 2006-05-07 11:30:42 UTC
Same problem here. 2050 boots just fine (though with some I/O errors) and from
2080 on (2.6.16) it fails to boot.

Even if I nuke the linux partition tables and boot with a recent (May 7th)
rawhide rescue disk and try to do an install, the installer crashes on the dm setup.

The raid setup is a raid0 stripe pre-configured by Dell (an XPS600 system with an
nvidia chipset for Intel CPUs), so I would hope there are no big errors in that? :-)

I've tried to install dmraid rc11 pre1 with a recent kernel, but the problems
persisted.

Is there any debugging info I could attach to this bug that would be of any use?

Comment 104 Chris Chabot 2006-05-07 13:24:21 UTC
Some info:
Chipset is: NFORCE-CK804
Drives are 2x identical: WDC WD2500JS-75N, 
488281250 512-byte hdwr sectors (250000 MB)

# dmraid -s
*** Active Set
name   : nvidia_dabhfihh
size   : 976562432
stride : 128
type   : stripe
status : ok
subsets: 0
devs   : 2
spares : 0

# dmraid -tay
nvidia_dabhfihh: 0 976562432 striped 2 128 /dev/sda 0 /dev/sdb 0
nvidia_dabhfihh1: 0 128457 linear /dev/mapper/nvidia_dabhfihh 63
nvidia_dabhfihh2: 0 894515265 linear /dev/mapper/nvidia_dabhfihh 128520
nvidia_dabhfihh3: 0 81915435 linear /dev/mapper/nvidia_dabhfihh 894643785

# dmsetup table 
nvidia_dabhfihh3: 0 81915435 linear 253:0 894643785
nvidia_dabhfihh2: 0 894515265 linear 253:0 128520
nvidia_dabhfihh1: 0 128457 linear 253:0 63
nvidia_dabhfihhp3: 0 81915435 linear 253:0 894643785
nvidia_dabhfihhp2: 0 894515265 linear 253:0 128520
nvidia_dabhfihhp1: 0 128457 linear 253:0 63
nvidia_dabhfihh: 0 976562496 striped 2 128 8:0 0 8:16 0

# dmraid -rD
/dev/sda: nvidia, "nvidia_dabhfihh", stripe, ok, 488281248 sectors, data@ 0
/dev/sdb: nvidia, "nvidia_dabhfihh", stripe, ok, 488281248 sectors, data@ 0

Attaching metadata next.

Any other data I could produce that would be helpful to know?


Comment 105 Chris Chabot 2006-05-07 13:29:00 UTC
Created attachment 128707 [details]
nvidia raid0 metadata (dmraid -rD)

dmraid -rD metadata (kernel 2054, dmraid pre11)

Comment 106 Chris Chabot 2006-05-08 07:29:13 UTC
PS: The thing that has me worried is that even when I've removed all linux
partitions and only the Windows & Dell configuration/recovery utility partitions
remain, the anaconda installer (rawhide, May 7th) crashes on the dm init step
before it wants to show the partitioning screen.

Also, when I create a patch that undoes the "[PATCH] dm stripe: Fix bounds" patch
(which is supposedly what creates these problems) and build new rpms (2111 +
that revert patch), the kernel still errors out on setting up the dm raid devices
and is still not functioning.

Oddly enough, 2054 functions without any errors (except some I/O out-of-bounds
errors), so I know the system _is_ able to function..

Comment 107 Martin Bürge 2006-05-10 16:58:01 UTC
I'm back with news: I updated the initrd image, and there are no more messages
about "not divisible by chunk size"; those messages are gone.

But when it boots, a message comes up that it is mounting the root filesystem,
and it then tries to mount /dev/mapper/nvidia_ibcffcghp9 - but that partition is my
swap partition, so it cannot mount it, and the boot fails.

I've changed the line in menu.lst from:

kernel /boot/vmlinuz-2.6.16-1.2080_FC5 ro root=LABEL=/ rhgb quiet

to:

kernel /boot/vmlinuz-2.6.16-1.2080_FC5 ro root=/dev/mapper/nvidia_ibcffcghp8
(this is the root partition)

and, as a second attempt, to:

kernel /boot/vmlinuz-2.6.16-1.2080_FC5 ro
root=LABEL=/dev/mapper/nvidia_ibcffcghp8

But with both of these lines it still will not boot from nvidia_ibcffcghp8; it
always boots nvidia_ibcffcghp9.

I hope my English is understandable.
I can telephone you if you would like, Heinz; I am in Switzerland.

Comment 108 Martin Bürge 2006-05-11 16:32:29 UTC
Where can I set which partition it must boot from? It boots
from the wrong partition no matter what I do, but kernel 2.6.15 boots from the
right partition.

Comment 109 Martin Bürge 2006-05-13 13:53:45 UTC
Good news:

dmraid-1.0.0.rc11-pre1 works for me - not in Fedora, but in Gentoo. I
installed it and made a new initramfs with dmradinitrd, and it boots. Why it
won't boot in Fedora I don't know; perhaps it's a problem with my
distribution.

But in Gentoo it boots properly.
:D Thank you very much, Heinz

Comment 111 Heinz Mauelshagen 2006-05-15 11:08:19 UTC
All,

Heads up:
sorry for the daly.

I'm catching up wading through all your valuable work right now after my
father's funeral.

Comment 112 Heinz Mauelshagen 2006-05-15 11:12:27 UTC
s/daly/delay/ ;-)

Comment 113 Wayne D. 2006-05-15 11:42:05 UTC
Interesting Post here:

http://forums.fedoraforum.org/forum/showthread.php?t=106312&highlight=sata

Comment 114 Heinz Mauelshagen 2006-05-16 14:38:56 UTC
All,

I've (hopefully) checked all the metadata samples you kindly provided here
(i.e. samples from Chris Chabot, David Chalmers, Erik Jacobson, Jerry Carter,
Martin Buerge, Adam Linehan, Dwaine Garden and Johannes Engel).

All samples conform to the 'target length needs to be divisible by chunk
(stride) size' requirement.

Making up an appropriate initrd containing the fixed 1.0.0.rc11 dmraid looks
like the issue which still bites some people.

If I missed your metadata sample, please tell me.

Comment 115 Peter Jones 2006-05-16 15:22:27 UTC
"nash", which provides the core functionality in the initrd, is statically
linked.  So to update the initrd, you'll need to follow these steps:

1) install the new dmraid package
2) rebuild the mkinitrd package
3) install the rebuilt mkinitrd package
4) run "mkinitrd -f /boot/initrd-`uname -r`.img `uname -r`"

That should fix it.
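
One possible concrete rendering of those four steps on i386 (package file names, versions and the RPM build path are examples only; use whatever your tree provides):

rpm -Uvh dmraid-1.0.0.rc11-pre1-1.i386.rpm
rpmbuild --rebuild mkinitrd-*.src.rpm
rpm -Uvh /usr/src/redhat/RPMS/i386/mkinitrd-*.i386.rpm
mkinitrd -f /boot/initrd-2.6.16-1.2080_FC5.img 2.6.16-1.2080_FC5   # or $(uname -r) if already running 2.6.16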

Comment 116 Dwaine Garden 2006-05-16 15:55:34 UTC
Heinz, I'm sorry to hear about your father.

Don't apologize, family is more important.

Dwaine


Comment 117 Heinz Mauelshagen 2006-05-16 16:13:06 UTC
Dwaine,

thank you very much.

Comment 118 Dwaine Garden 2006-05-17 03:49:20 UTC
I'm getting a problem with booting after all the changes.   The machine boots
and I get the graphical boot screen.  As Fedora is booting, I drop to the
command line with an e2fsck problem.

Setting up Logical Volume Management: 2 logical volume(s) in volume
group "VolumeGroup00" now active

Checking filesystems
e2fsck: Cannot continue, aborting.       [Failed]


*** An error occurred during the file system check.
*** Dropping you to a shell; the system will reboot
*** when you leave the shell.
Give root password for maintenance
(or type Control-D to continue):

How do I fix this problem?  What is really weird is that when I boot off the
Fedora Rescue DVD, the file system mounts perfectly fine.  I can
chroot /mnt/sysimage and do everything I need.

Could someone help me?  I'm not sure how
to proceed with the striped raid0 setup.

Help..

Dwaine


Comment 119 Chris Chabot 2006-05-17 08:35:35 UTC
(In reply to comment #115)
> "nash", which provides the core functionality in the initrd, is statically
> linked.  So to update the initrd, you'll need to follow these steps:
<snip> 
> That should fix it.

Followed these steps, and on booting (tried 2111 and 2118) I still get the same
errors (I/O errors and then the "could not mount root fs" error). This is after
rebuilding dmraid (the FC6 pre11 version just released) and mkinitrd (also the
rawhide version: 5.0.39-1).

Also, I took a peek at the 'init' file from the unpacked initrd image, and
couldn't find any references in it to edit to change the raid volume size
(trying to follow the instructions in the forums.fedoraforum.org posts).

I'm more than willing to try any other workarounds :-)




Comment 120 Martin Bürge 2006-05-17 18:22:18 UTC
title Fedora Core (2.6.16-1.2080_FC5)
        root (hd0,7)
        kernel /boot/vmlinuz-2.6.16-1.2080_FC5 ro root=/dev/mapper/yourdevice
        initrd /boot/initrd-2.6.16-1.2080_FC5.img

And make the root entry in your fstab correct, like:

/dev/mapper/nvidia_ibcffcgh8      /    ext3    noatime        0 1

Boot with these settings and you will see more details; perhaps you've got the
same problem I have in Fedora. These settings are only for testing. I hope I'm
not telling you anything incorrect.

Comment 121 Dwaine Garden 2006-05-17 22:32:02 UTC
> Also I took a peek at the 'init' file from the unpacked initrd image, and
> couldn't find any references in it to edit to change the raid volume size
> (trying to follow the instructions in the forums.fedoraforum.org posts)

You have to run the mkinitrd command to rebuild the initrd image.  Once you do
that, the dm create line with the sector count will be present in the init file.

Dwaine

Comment 122 Dan Book 2006-05-21 23:59:39 UTC
Tried it out (the 4 step process); didn't work. Nash couldn't find any
partitions, as it did before with the 2.6.16 kernel; unfortunately, the 2.6.15
kernel couldn't find any either. Reinstalled old dmraid and mkinitrd, rebuilt
initrd and 2.6.15 worked again.
Do I need to use the mkinitrd (or dmraid) from rawhide?
Booting FC5 x86_64 on an NVRAID0 array.
-Dan

Comment 123 Dwaine Garden 2006-05-22 18:31:16 UTC
Stay at the old version of dmraid and mkinitrd.

Grab the new rc11 of dmraid from rawhide.  This newer rpm worked perfectly,
better than the test rpm that is linked in this bug report.

http://www.muug.mb.ca/pub/fedora/linux/core/development/i386/Fedora/RPMS/dmraid-1.0.0.rc11-FC6.i386.rpm
 

After you install this version of the rpm, you'll be able to yum update newer
kernels.

Dwaine

Comment 124 Dan Book 2006-05-22 23:07:19 UTC
Tried this; 2.6.16 still can't find any partitions.
2.6.15 still works with the new dmraid (but haven't updated its initrd).

Comment 125 Alexander Shiyan 2006-05-24 06:37:07 UTC
Hello,
I use Intel RAID and have some problem (see #189794).
As I wrote, I resolved this kernel problem by commenting out the dm-stripe
patch
(http://www.kernel.org/git/?p=linux/kernel/git/stable/linux-2.6.16.y.git;a=commit;h=8ba32fde2c5be52865b2fd7e5e3752a46971fabe)
in the kernel source package.
Currently kernel-2.6.16-1.2111_FC5 works fine for me. If anybody wants to
try a kernel with my patch, download
http://milas.spb.ru/files/kernel-2.6.16-1.2111_FC5.src.rpm (this is not the
original Fedora kernel SRPM!). This SRPM contains the original Fedora kernel
content, but kernel*.tar.bz2 is PATCHED, and you just need to recompile it with
rpmbuild for your platform.
Maybe this hack is not correct, but it works...


Comment 126 Chris Chabot 2006-05-25 23:02:21 UTC
Created attachment 130009 [details]
Anaconda crashing on dmraid too

As mentioned, it makes my anaconda crash too (tried again with a fresh rescue
disc iso from May 26 which has kernel 2.6.16-1.2215_FC6 in it, and the latest
dmraid, I believe?).

Attached is the anaconda dump / trace from the crash.

My metadata has been attached before and is (hopefully :-)) unchanged.

Comment 127 Joao Batista Gomes de Freitas 2006-05-27 13:08:10 UTC
*** Bug 192165 has been marked as a duplicate of this bug. ***

Comment 128 Dwaine Garden 2006-05-27 16:55:00 UTC
You guys should be able to download the rpm that I posted and it should be good
to boot.   All kernels which are updated from yum should work afterwards too.

http://www.muug.mb.ca/pub/fedora/linux/core/development/i386/Fedora/RPMS/dmraid-1.0.0.rc11-FC6.i386.rpm


Dwaine

Comment 129 Joao Batista Gomes de Freitas 2006-05-27 22:22:52 UTC
I have it working fine now. I just followed this simple procedure (a command
sketch follows the list):

1) Download the dmraid rpm
2) fix the raid size by typing "dmraid -a n && dmraid -a y"
3) remove kernel 2.6.16 (if installed)
4) install the kernel again (this generates a proper initrd)
5) reboot and be happy
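
A minimal sketch of those five steps as shell commands (the kernel version
string in step 3 is a placeholder; check `rpm -q kernel` for what is actually
installed):

rpm -Uvh dmraid-1.0.0.rc11-FC6.i386.rpm   # 1) the dmraid rpm from comment #128
dmraid -an && dmraid -ay                  # 2) re-activate the set so the size gets rounded
rpm -e kernel-2.6.16-1.2122_FC5           # 3) remove the 2.6.16 kernel (placeholder version)
yum install kernel                        # 4) reinstall it; this should regenerate the initrd
reboot                                    # 5) reboot and be happy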

OK. Thank you guys; in less than 24 hours from my first contact here I have
everything working fine. It makes me feel good. So, if I can be of any help,
just ask. I'll be glad to help.

Comment 130 Dan Book 2006-05-29 20:23:40 UTC
Hmm... After doing dmraid -an, dmraid -ay, during the yum reinstall of 2.6.16,
I got this error several (like 20) times:
device-mapper: deps ioctl failed: No such device or address
_deps: task run failed for (253:0)

Then:
grubby fatal error: unable to find a suitable template

Is it a factor that my root partition (and /usr and /home and /boot) is on the
RAID array in question?
I will try booting into that kernel now...
-Dan

Comment 131 Dan Book 2006-05-29 20:25:07 UTC
Actually, I will copy the dmraid binary off the array, and dmraid -an and dmraid
-ay and install the kernel from rescue mode.
-Dan

Comment 132 Joao Batista Gomes de Freitas 2006-05-29 20:50:49 UTC
I have everything in the partition I was fixing and booting (2.6.15). I did it
from a console (no X), and you can test whether everything went right by typing
"dmsetup table" before and after dmraid and checking that the total size is
right (must be a multiple of 512). You should do this from a system running
2.6.15 to be able to "see" the raid partition properly.

Comment 133 Dan Book 2006-05-29 20:54:28 UTC
Disregard that last post, it would have been pointlessly complicated to try that.
Anyway, I figured out the grubby fatal error just makes the entry not be added
to grub.conf (why, I do not know) and is nothing to worry about.
However, after inserting the entry myself, IT BOOTED! Thank you all that have
gotten this to work and who have written your experiences. I don't know if the
plethora of output from the kernel install is anything to worry about, but it
doesn't seem to be as of yet. I noticed that dmraid -an essentially did nothing,
and dmraid -ay added another copy of each of my partitions in /dev/dm-* and
/dev/mapper/. In the new kernel I still only have one copy in each though, so no
matter.
I'm going to install some non-GPL kernel modules now. Thanks again,
Dan

Comment 134 Dan Book 2006-05-29 20:58:07 UTC
Well... I ran dmsetup table, and:
nvidia_bfaeeaaf: 0 796593920 striped 2 128 8:0 0 8:16 0
796593920 (I'm guessing the total size) is a multiple of 256 but not 512...is
the number 128 in that line significant?
-Dan

Comment 135 Joao Batista Gomes de Freitas 2006-05-29 22:20:33 UTC
I don't know. You will have to wait for the experts to answer. I was just
following the forum mentioned in comment #113. :)

Comment 136 Heinz Mauelshagen 2006-05-30 09:21:42 UTC
The 128 is the stride (or chunk) size, i.e. 128 sectors or 64k.
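
For reference, a minimal sketch that checks this for every striped target in
the current dmsetup table (assuming the "name: start length striped #stripes
chunk ..." layout shown above):

# print each striped target and whether its length is a multiple of
# (#stripes * chunk), which is what dm-stripe now enforces
dmsetup table | awk '$4 == "striped" {
    n = $5 * $6;
    printf "%s length=%s stripes*chunk=%d -> %s\n",
           $1, $3, n, ($3 % n == 0) ? "OK" : "needs rounding down" }'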

Comment 137 Joao Batista Gomes de Freitas 2006-05-30 13:27:08 UTC
My understanding is that when Heinz tells us that the chunk size is 128 sectors
(64K), this is valid for each physical volume in the raid (is that correct?).
So, to fix it, considering that the initial allocation did not leave any chunks
free, I should expect the size to go down to the nearest multiple of 128 * 2
(2 disks). After I ran "dmraid -a n && dmraid -a y" I got:

# dmsetup table
VolGroup00-lvol0: 0 157286400 linear 253:2 40960384
VolGroup00-homefake: 0 262144000 linear 253:2 303104384
nvidia_acaafeee: 0 976794112 striped 2 128 8:0 0 8:16 0
VolGroup01-LogVol01: 0 4063232 linear 3:2 45613440
VolGroup01-LogVol00: 0 45613056 linear 3:2 384
VolGroup00-LogVol01: 0 8388608 linear 253:2 909312384
VolGroup00-LogVol00: 0 40960000 linear 253:2 384
nvidia_acaafeeep2: 0 976575285 linear 253:0 208845
VolGroup00-tmpfake: 0 81920000 linear 253:2 827392384
VolGroup00-aux1: 0 157286400 linear 253:2 565248384
nvidia_acaafeeep1: 0 208782 linear 253:0 63

And the size of the dm array went down to the value of 976794112 as expected.
I removed and reinstalled the kernel 2.6.16 to create a new initrd and booted
it all right. Only after reading the last comments this morning did I check the
size of each disk. I was expecting to find that each of them had gone down to
the nearest multiple of 128. But what I got was:

# dmraid -rD
/dev/sda: nvidia, "nvidia_acaafeee", stripe, ok, 488397166 sectors, data@ 0
/dev/sdb: nvidia, "nvidia_acaafeee", stripe, ok, 488397166 sectors, data@ 0

So, no changes. They still add up to 976794332, the old size of the whole
array. Is it important, and will I run into problems when I try to allocate the
488397056 + 1 sector? If so, is there a way to fix it? Do I have to do
something, or can it be part of a new kernel update?

Comment 138 Heinz Mauelshagen 2006-05-30 13:50:05 UTC
Joao,

the issue with sizing is that dmraid enforces the "size must be a multiple of
the stride size" rule for RAID0 mappings in a central location in the code
(i.e. in the activation module), whereas each metadata format handler provides
the size information derived from the vendor metadata. That (sometimes) leads
to confusing numbers like the ones you reported. More thinking is needed on how
to adjust that better. This is not a kernel issue and hence doesn't need any
kernel fix.
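
For illustration, a minimal sketch of that rounding, using the numbers reported
above (purely arithmetic; this is not what dmraid literally executes):

SECTORS=976794332   # sum of the per-disk sizes from the dmraid -rD output
STRIPES=2
CHUNK=128           # stride in sectors
echo $(( SECTORS / (STRIPES * CHUNK) * (STRIPES * CHUNK) ))   # prints 976794112, the activated size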

Comment 139 Wayne D. 2006-06-03 18:22:29 UTC
Well, after trying and giving up a few times, and eventually installing Suse
10.1 (just because I've never tried Suse before) on a second system that doesn't
use RAID, I can't resist coming back to try again, lol.  Here's some of my
output though.
[root@localhost ~]# dmraid -s
*** Active Set
name   : via_bfjgeedjcf
size   : 781443840
stride : 128
type   : stripe
status : ok
subsets: 0
devs   : 2
spares : 0
[root@localhost ~]# dmsetup table
via_bfjgeedjcf3: 0 166818960 linear 253:0 614614770
via_bfjgeedjcfp1: 0 614405862 linear 253:0 63
via_bfjgeedjcf2: 0 208845 linear 253:0 614405925
via_bfjgeedjcf1: 0 614405862 linear 253:0 63
via_bfjgeedjcf: 0 781443934 striped 2 128 8:0 0 8:16 0
VolGroup00-LogVol01: 0 4063232 linear 253:3 162595200
VolGroup00-LogVol00: 0 162594816 linear 253:3 384
via_bfjgeedjcfp3: 0 166818960 linear 253:0 614614770
via_bfjgeedjcfp2: 0 208845 linear 253:0 614405925
[root@localhost ~]# ls -l /boot
total 4111
-rw-r--r-- 1 root root   63896 Mar 14 16:01 config-2.6.15-1.2054_FC5
drwxr-xr-x 2 root root    1024 May 18 04:09 grub
-rw-r--r-- 1 root root 1781938 May 18 04:01 initrd-2.6.15-1.2054_FC5.img
drwx------ 2 root root   12288 May 17 23:43 lost+found
-rw-r--r-- 1 root root  811765 Mar 14 16:01 System.map-2.6.15-1.2054_FC5
-rw-r--r-- 1 root root 1510257 Mar 14 16:01 vmlinuz-2.6.15-1.2054_FC5

However, I hope I'm not misunderstanding this, but is it
"initrd-2.6.15-1.2054_FC5.img" that needs resizing?  The linux partition is
approx. 79Gig, but that file is less than 2 Meg.  If so, can I update my Suse
distro with a new dmraid, yank the drive from the second computer, put it in
this computer and then fix my Fedora installation?  The Suse installation is
only on a 54Gig partition though.

Thanks.

Comment 141 Wayne D. 2006-06-03 18:27:14 UTC
sorry about the double post for 139 and 140


Comment 142 Wayne D. 2006-06-06 16:06:21 UTC
After some banging around and searching, I think I've got my Fedora Core 5
installation working now.  I'll just post what I did, just in case anyone else
runs across this post.

1.  Started with a fresh install of FC5.  No updates to the kernel, etc... so I
only had 2.6.15-1.2054_FC5 as a kernel.

2.  Installed the new dmraid that was posted earlier in this bug list in post
#47. 
http://people.redhat.com/heinzm/sw/dmraid/tst/dmraid-1.0.0.rc11-pre1-1.i386.rpm .  

3.  did "dmraid -s", and then "dmsetup table" to look at my settings.  In my
case it turned out to be:

[root@localhost ~]# dmraid -s
*** Active Set
name   : via_bfjgeedjcf
size   : 781443840
stride : 128
type   : stripe
status : ok
subsets: 0
devs   : 2
spares : 0
[root@localhost ~]# dmsetup table
via_bfjgeedjcf3: 0 166818960 linear 253:0 614614770
via_bfjgeedjcfp1: 0 614405862 linear 253:0 63
via_bfjgeedjcf2: 0 208845 linear 253:0 614405925
via_bfjgeedjcf1: 0 614405862 linear 253:0 63
via_bfjgeedjcf: 0 781443934 striped 2 128 8:0 0 8:16 0
VolGroup00-LogVol01: 0 4063232 linear 253:3 162595200
VolGroup00-LogVol00: 0 162594816 linear 253:3 384
via_bfjgeedjcfp3: 0 166818960 linear 253:0 614614770
via_bfjgeedjcfp2: 0 208845 linear 253:0 614405925

4.  From the "dmsetup table" I could see that it said "via_bfjgeedjcf: 0
781443934 striped 2 128 8:0 0 8:16 0"  which would be wrong, because
781443934/(2*128) = 3052515.367.

5.  Pulled out a piece of paper, did the calculation:
781443934/(2*128)= 3052515.367

Then did this next calculation using the first result with no decimals as follows:
3052515*(2*128)=781443840

Wrote down the result.

6.  Then I proceeded to unpack my initrdXXX.img file into a different directory
as follows:

cd /
mkdir initrd
cd initrd
cp /boot/initrd-2.6.15-1.2054_FC5.img ./
gunzip <initrd-2.6.15-1.2054_FC5.img >initrd.img
cpio -i <initrd.img
rm *.img
rm: remove regular file `initrd-2.6.15-1.2054_FC5.img'? y
rm: remove regular file `initrd.img'? y

This left me with the contents of the "initrd-2.6.15-1.2054_FC5.img" file.

7.  Edited the "init" file in the "/initrd" directory I created.  I did a search
for "781443934" which was the result from "dmsetup table" in step 3.  I found a
line saying:

dm create via_bfjgeedjcf 0 781443934 striped 2 128 8:0 0 8:16 0

and then edited the "781443934" and changed it to the result from the final
calculation in step 5, "781443840", so the line said:

dm create via_bfjgeedjcf 0 781443840 striped 2 128 8:0 0 8:16 0

Then saved the "init" file.

8.  Packed up the information from the "/initrd" directory into a new initrd.img
file:

find . | cpio -o -c | gzip -9 > ./initrd-new.img

9.  Copied the "initrd-new.img" file to my "/boot" directory:

cp initrd-new.img /boot/

10.  Edited "/boot/grub/grub.conf" so that I could boot using the
"initrd-new.img" file that I created.  So my new "grub.conf" looked like:

# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,1)
#          kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
#          initrd /initrd-version.img
#boot=/dev/mapper/via_bfjgeedjcf
default=2
timeout=5
splashimage=(hd0,1)/grub/splash.xpm.gz
hiddenmenu
title Fedora Core (2.6.15-1.2054_FC5)
	root (hd0,1)
	kernel /vmlinuz-2.6.15-1.2054_FC5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
	initrd /initrd-2.6.15-1.2054_FC5.img
title WindowsXP
	rootnoverify (hd0,0)
	chainloader +1
title Fedora Core (2.6.15-1.2054_FC5-testversion)
	root (hd0,1)
	kernel /vmlinuz-2.6.15-1.2054_FC5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
	initrd /initrd-new.img


When I booted, I chose "Fedora Core (2.6.15-1.2054_FC5-testversion)" to see if
there were any block errors.  In my case there weren't any.  Everything worked
fine.  After this I did all the updates (which installed the new kernel
2.6.16-1.2122_FC5).  I did a reboot after that with the new kernel and there
were still no errors.

As I understand, there are some hazards to the way I did it, because there might
have been data in the missing blocks.  In my case though, there doesn't seem to
have been any, and my system is running fine with the new kernel.

I got my info. for this workaround by searching and reading:
this thread https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=186842, 
and http://forums.fedoraforum.org/forum/showthread.php?t=106312&highlight=sata


 



Comment 143 Dwaine Garden 2006-06-21 02:50:00 UTC
Has anyone been able to rpm update to a newer kernel than the first 2.6.16?

Anything after 2.6.16-1.2111_FC5smp just dies trying to find the
volume.

All kernels after this kernel (the PAE-named ones) will not boot.

Dwaine

Comment 144 Ron Courtright 2006-06-25 04:24:24 UTC
I found that the kernel update to 2.6.17 overwrote the dmraid fix with an
earlier version (1.0.0.9).  I spent an afternoon trying to recover from that
fiasco.  I was able to get back to 2.6.16-1.2133_FC5.x86_64, so I guess it
proved the maxim that those things that do not kill us make us stronger.

Comment 145 Dwaine Garden 2006-06-25 06:20:54 UTC
dmraid -V
dmraid version:         1.0.0.rc11 (2006.05.15) debug
dmraid library version: 1.0.0.rc11 (2006.05.15)
device-mapper version:  4.6.0

I still have the right fix.  I did notice that newer kernels do not generate
the proper initrd image.  It leaves out the following.

rmparts sdb
rmparts sda
dm create via_ecfdfiehfa 0 312499968 striped 2 128 8:0 0 8:16 0
dm partadd via_ecfdfiehfa

Updating the kernel with yum and/or updating the initrd image manually does
not work.  I don't know why either, especially when earlier kernels created a
good initrd image file.

Dwaine


Comment 146 Wayne D. 2006-06-26 06:47:44 UTC
I'm running into the exact same problem, Dwaine.  The system starts to boot,
and then there's all this gibberish about SCSI, and then it stops at:

device-mapper: 4.6.0-ioctl (2006-02-17) initialized: dm-devel

I can boot back into kernel 2.6.16 no problem.  And after examining the init
file, it's identical to the 2.6.16 kernel one.  There are no error messages
either, it just hangs.


Comment 147 Dwaine Garden 2006-06-26 07:17:22 UTC
Another problem with the initrd image being generated.

See https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=192157

Dwaine

Comment 148 Dwaine Garden 2006-07-01 15:50:30 UTC
Opened a new bug report....

https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=196828

Right now I'm just copying the missing lines into init.

rmparts sdb
rmparts sda
dm create via_ecfdfiehfa 0 312499968 striped 2 128 8:0 0 8:16 0
dm partadd via_ecfdfiehfa



Comment 149 Joao Batista Gomes de Freitas 2006-07-01 19:47:49 UTC
I have all four lines generated in initrd-2.6.17-1.2139_FC5.img but it hangs
anyway.
    .
    .
    .
echo "Loading dm-mod.ko module"
insmod /lib/dm-mod.ko
echo "Loading dm-mirror.ko module"
insmod /lib/dm-mirror.ko
echo "Loading dm-zero.ko module"
insmod /lib/dm-zero.ko
echo "Loading dm-snapshot.ko module"
insmod /lib/dm-snapshot.ko
echo Making device-mapper control node
mkdmnod
mkblkdevs
rmparts sdb
rmparts sda
dm create nvidia_acaafeee 0 976794112 striped 2 128 8:0 0 8:16 0
dm partadd nvidia_acaafeee
    .
    .
Is it in the right place ?

Comment 150 Joao Batista Gomes de Freitas 2006-07-15 03:55:57 UTC
I still cannot boot with 2.6.17-1.2145. Is this happening to anyone else?

Comment 151 Jack Holden 2006-07-15 18:52:22 UTC
I have not successfully booted with any kernel except the originally installed
one regardless of what changes I've made.

Comment 152 Joao Batista Gomes de Freitas 2006-07-16 01:27:10 UTC
It has happened again with kernel 2.6.17-1.2145. The first upgrade that failed
was to version 2.6.17-1.2139, the first one in the 17 series(??). I could
upgrade within the 16 series after applying the fixes suggested in this list.
Jack, have you tried to upgrade to any version in the 16 series?
Just to make it clear, I am stuck at 2.6.16-1.2133.

Comment 153 Jack Holden 2006-07-16 17:53:25 UTC
I don't think I've tried the particular version you're using (2.6.16-2.2133). 
However, I have tried 16 series kernels.  They were the first to exhibit the
problem.  The first kernel I tried to upgrade to after installing FC5 in March
was a 16 series.

The one thing I haven't tried is decompressing my boot image and manually
changing the parameters.  From what I've read, this is somewhat risky.

I cannot install nVidia drivers with the default kernel, so I've pretty much
been stuck since March.  It's very frustrating.  I've considered switching
distros, but I can't find one that easily installs with dmraid yet.  Everyone
says to use kernel raid instead of nvraid, but this isn't an option if you dual
boot.  The net result is that linux is losing to windoz on my machine.

Comment 154 Dwaine Garden 2006-07-17 19:32:53 UTC
I opened up the bug report for the missing dm entries in the init script.
But no one has posted to that bug, which would give the Fedora team an
indication of how many people are impacted.

https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=196828

Joao indicated that the dm entries did get populated when installing the
kernel with yum.  Joao, what version of Fedora are you using?  It should boot
fine if that is the case.

Dwaine


Comment 155 Joao Batista Gomes de Freitas 2006-07-18 01:27:08 UTC
I am using FC5. And, yes, I did not edit the initrd to upgrade from 15-1.2054.
I have been updating from yum normally since then, until the first 17-series
kernel (2.6.17-1.2139). And I have checked that initrd and I had the lines:

rmparts sdb
rmparts sda
dm create nvidia_acaafeee 0 976794112 striped 2 128 8:0 0 8:16 0
dm partadd nvidia_acaafeee

in all kernels, including the latest (2157). I still cannot boot. I would like
to test Xen, so now I need to upgrade again.

Comment 156 Tom Horsley 2006-07-22 21:03:14 UTC
My bug 199793 may be sort of the same as this, but I have a different problem.
There is a Windows XP raid setup on some disks I don't want to access at all
when booting Linux, but apparently Linux is trying to be way too helpful
and it hangs during the boot attempting to recognize the raid. Is there any
easy way to make it stop fiddling with those disks completely? I don't even
have a raid on Linux.


Comment 157 Jack Holden 2006-07-22 23:54:26 UTC
I've discovered that I have both problems...  The 2.6.17 kernel upgrade did not
include the dmraid setup in the init script.  (I'll post this there as well.) 
When I copied the entries from the initrd that works and rebuilt the initrd, I
started getting the error because the raid array size wasn't a multiple of the chunk
size.  I performed the calculation shown above and changed the init entry.  Now
I get this error repeatedly upon boot:

init[1] trap divide error rip:4296d7 rsp:7fff23c9d070 error:0

Does anyone know what this means or how I should proceed?

Comment 158 Tom Horsley 2006-07-27 12:24:41 UTC
Just a general observation on this issue: while it may be true that RAIDs
created via Linux tools will always have sizes that are proper multiples of
the chunk size, you can (as I was trying to do) easily boot Linux on a system
which has RAIDs created by other tools for other operating systems which don't
have the same restriction. Linux should not hang or crash when this happens.
It should at least be smart enough to simply ignore the RAIDs that it
can't deal with, rather than trying to set them up in device-mapper at all.


Comment 159 SpuyMore 2006-08-04 22:10:30 UTC
(In reply to comment #142)

Wayne, thanks for your thorough explanation! Thanks to that I finally managed
to upgrade the kernel that shipped with FC5 to 2.6.16-1.2133_FC5 on my x86_64
box with a VIA VT8237 RAID controller. All newer 2.6.17 kernels fail, however,
with the same init[1] trap divide errors Jack mentions in comment #157. The
init scripts generated by those kernels are now identical to the working one in
the 2.6.16 kernel, so these kernels seem to be introducing another problem!
Hope I will read some solutions to that here some day...

For what it's worth, in the end it came down to installing the dmraid update
and fixing the dm create value in the initrd image for the 2.6.15-2054 kernel
first! In other words, upgrading while that initrd image was not yet fixed,
using the dmraid -an or other fixes mentioned by others earlier, didn't work
for me!

Comment 160 Chris Chabot 2006-08-05 13:21:53 UTC
And for a while now, all the way up to the 2.6.17 24xx kernels, I've been able
to update my kernels by gunzipping / cpio'ing / editing the init script /
packaging it back up into a new initrd.img file.

However, since a week or so ago (maybe two? I'm not sure), this trick doesn't
work anymore. The good news was that the dmraid setup commands were now
automatically included in the init script again; however, despite their
presence, at boot I am now surprised by no error reports at all, until the
'could not mount the root device' message comes up and tells me that this
attempt was an utter failure.

I've diffed the init scripts from my working booting kernel (which is
2.6.17-1.2364 for some obscure historical but unknown to me reason) and the one
in 2517, and there were no differences, so everything should work, right! :-)

Has anyone been having the same problems and been able to figure out what is
now making booting impossible? I'd be positively delighted to learn how to fix
it again :-)

Comment 161 Chris Chabot 2006-08-05 14:21:20 UTC
PS: I did manage to boot now.

I've unzipped/cpio'd my 2367 initrd, copied over the .ko files from the 2527
initrd, and cpio'd & gzipped it all back up into a new initrd for 2527 (so
using the old nash/modprobe/insmod but the new kernel modules), and it boots
fine again (a command sketch follows).
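
A minimal sketch of that module swap (OLD and NEW are placeholder kernel
versions; the unpack/repack commands follow the ones in comment #142):

mkdir /tmp/old /tmp/new
cd /tmp/old && gunzip < /boot/initrd-OLD.img | cpio -id   # the initrd that boots
cd /tmp/new && gunzip < /boot/initrd-NEW.img | cpio -id   # the one that doesn't
cp /tmp/new/lib/*.ko /tmp/old/lib/                        # keep old nash, take new modules
cd /tmp/old && find . | cpio -o -c | gzip -9 > /boot/initrd-NEW-fixed.img
# then point the new kernel's grub entry at initrd-NEW-fixed.img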

However, now it does shout at me during boot:
device-mapper: ioctl: 4.7.0-ioctl (2006-06-24) initialised: dm-devel

=============================================
[ INFO: possible recursive locking detected ]
---------------------------------------------
init/1 is trying to acquire lock:
 (&md->io_lock){----}, at: [<ffffffff880d9654>] dm_request+0x25/0x130 [dm_mod]

but task is already holding lock:
 (&md->io_lock){----}, at: [<ffffffff880d9654>] dm_request+0x25/0x130 [dm_mod]

other info that might help us debug this:
1 lock held by init/1:
 #0:  (&md->io_lock){----}, at: [<ffffffff880d9654>] dm_request+0x25/0x130 [dm_mod]

stack backtrace:

Call Trace:
 [<ffffffff8026e73d>] show_trace+0xae/0x319
 [<ffffffff8026e9bd>] dump_stack+0x15/0x17
 [<ffffffff802a7f00>] __lock_acquire+0x135/0xa5f
 [<ffffffff802a8dcd>] lock_acquire+0x4b/0x69
 [<ffffffff802a58f9>] down_read+0x3e/0x4a
 [<ffffffff880d9654>] :dm_mod:dm_request+0x25/0x130
 [<ffffffff8021cf45>] generic_make_request+0x21a/0x235
 [<ffffffff880d8402>] :dm_mod:__map_bio+0xca/0x104
 [<ffffffff880d8e48>] :dm_mod:__split_bio+0x16a/0x36b
 [<ffffffff880d974c>] :dm_mod:dm_request+0x11d/0x130
 [<ffffffff8021cf45>] generic_make_request+0x21a/0x235
 [<ffffffff80235eb7>] submit_bio+0xcc/0xd5
 [<ffffffff8021b381>] submit_bh+0x100/0x124
 [<ffffffff802e1a3c>] block_read_full_page+0x283/0x2a1
 [<ffffffff802e40df>] blkdev_readpage+0x13/0x15
 [<ffffffff8021358d>] __do_page_cache_readahead+0x17b/0x1fc
 [<ffffffff80234e37>] blockable_page_cache_readahead+0x5f/0xc1
 [<ffffffff80214784>] page_cache_readahead+0x146/0x1bb
 [<ffffffff8020c2d6>] do_generic_mapping_read+0x157/0x4b4
 [<ffffffff8020c78e>] __generic_file_aio_read+0x15b/0x1b1
 [<ffffffff802c852e>] generic_file_read+0xc6/0xe0
 [<ffffffff8020b5fb>] vfs_read+0xcc/0x172
 [<ffffffff802121ae>] sys_read+0x47/0x6f
 [<ffffffff802603ce>] system_call+0x7e/0x83
DWARF2 unwinder stuck at system_call+0x7e/0x83
Leftover inexact backtrace:


Comment 162 Dwaine Garden 2006-08-09 04:21:58 UTC
Chris, I do get the same oops during booting.  I have not seen any problems
relating to it though.

Dwaine

Comment 163 Hans de Goede 2006-08-19 18:50:44 UTC
A friend of mine (chabotc) has been having problems booting his
Dell XPS with the factory default raid setup (which he does not want to change
because he doesn't want to remove all his data) ever since the dmraid alignment
checks (bug 186842) were added to the kernel.

Since many people are suffering from the same problem and this gives Linux /
Fedora a bad name, I decided last week to go and try to fix this.

So I've borrowed his PC, and now, after 8 full hours of debugging, I've found
the problem:
1) The problem is no longer the problem reported in this bug (the
   multiple-of-chunksize problem), so this bug can and should be closed now.
2) Still, dmraid setups don't work because there was a bug in nash where it
   didn't add the necessary dm "setup" lines to the initrd script; this has
   been fixed in recent mkinitrd versions, see bug 196828.
3) Still, dmraid setups don't work because there is a bug in nash where it
   doesn't create the necessary /dev/dm-x nodes. I've filed a bug with a patch
   for this, bug 203241.
4) Even when you've got a patched/updated mkinitrd with bug 203241 fixed,
   chances are that your system still won't boot, because mkinitrd includes
   the usb-storage driver in the initrd, where it shouldn't. It's debatable
   whether this is an mkinitrd bug though. To fix this, remove any lines
   containing usb-storage from /etc/modprobe.conf (see the sketch below). A
   request to filter these lines in mkinitrd has been filed, bug 203244.
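
A minimal sketch of that last step (sed keeps a .bak copy of modprobe.conf; the
mkinitrd line assumes you are rebuilding for the currently running kernel):

sed -i.bak '/usb-storage/d' /etc/modprobe.conf        # drop the usb-storage lines
mkinitrd -f /boot/initrd-`uname -r`.img `uname -r`    # rebuild the initrd without them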


Comment 164 Ron Courtright 2006-08-24 23:13:55 UTC
Here is a testimonial:  I followed the suggestions offered in Bug 203241 and
can now upgrade and boot.

Comment 165 Ian M Thompson 2006-09-03 00:50:33 UTC
I have been having similar problems with a RAID1 setup, both in terms of the
mkinitrd and the dm create line either being missing or not correct.

See http://www.fedoraforum.org/forum/showthread.php?t=121446 for details.

Can I clarify what is needed to get RAID1 to boot with dmraid?
1) upgrade dmraid to the latest version? Where do I get this from? The links
above to the dmraid rc11 all appear to be broken.
2) get a new version of mkinitrd (which version? and where from?) that fixes
the issue in the vanilla FC5 mkinitrd
3) patch nash with https://bugzilla.redhat.com/bugzilla/attachment.cgi?id=134511
(or is this already included in the new mkinitrd???)
4) be sure that there is no usb-storage in /etc/modprobe.conf ...
(looks like it should be OK as it's missing for me)


Comment 166 Hans de Goede 2006-09-03 04:55:02 UTC
(In reply to comment #165)
> I have been having similar problems with a RAID1 setup, both in terms of the
> mkinitrd and the dm create line either being missing or not correct.
> 
> See http://www.fedoraforum.org/forum/showthread.php?t=121446 for details.
> 
> Can I clarify what is needed to get RAID1 to boot with dmraid?
> 1) upgrade dmraid to the latest version? Where do I get this from? The links
> above to the dmraid rc11 all appear to be broken.
> 2) get a new version of mkinitrd (which version? and where from?) that fixes
> the issue in the vanilla FC5 mkinitrd
> 3) patch nash with https://bugzilla.redhat.com/bugzilla/attachment.cgi?id=134511
> (or is this already included in the new mkinitrd???)
> 4) be sure that there is no usb-storage in /etc/modprobe.conf ...
> (looks like it should be OK as it's missing for me)
> 

Actually only 2 should be needed; the new mkinitrd just released in rawhide
fixes 3 and 4, and AFAIK 1 never was an issue. See bug 203241 comment 12 for
instructions on how to upgrade your mkinitrd to the latest:
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=203241#c12


Comment 167 Jon Cohen 2006-10-24 16:33:43 UTC
(In reply to comment #166)
> (In reply to comment #165)
> > Can I clarify what is needed to get RAID1 to boot with dmraid?
> > 1) upgrade dmraid to the latest version? Where do I get this from? The links
> > above to the dmraid rc11 all appear to be broken.
> > 2) get a new version of mkinitrd (which version? and where from?) that fixes
> > the issue in the vanilla FC5 mkinitrd
> > 3) patch nash with https://bugzilla.redhat.com/bugzilla/attachment.cgi?id=134511
> > (or is this already included in the new mkinitrd???)
> > 4) be sure that there is no usb-storage in /etc/modprobe.conf ...
> > (looks like it should be OK as it's missing for me)
> > 
> 
> Actually only 2 should be needed; the new mkinitrd just released in rawhide
> fixes 3 and 4, and AFAIK 1 never was an issue. See bug 203241 comment 12 for
> instructions on how to upgrade your mkinitrd to the latest:
> https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=203241#c12
> 

I am experiencing this boot failure (target length must be multiple of chunk
size) on an NVRAID system (Dell XPS 700). I followed the instructions in bug
203241 comment 12 to update my mkinitrd to 5.1.19-1. I then installed
kernel-smp-2.6.18-1.2200.fc5, but still experience the problem. Am I missing a
step here?


Comment 168 Hans de Goede 2006-10-24 17:31:51 UTC
Hmm, mkinitrd-5.1.19 should be fine with regard to the dmraid problems
mentioned in bug 203241 / bug 204768, so apparently you are being bitten by
something else. It seems to me that you are experiencing the problem of
mkinitrd creating the wrong "create xxx" line for your dmraid setup; that is
bug 196828.

I guess your best course of action now is to add a comment with your problem
and as many details as possible to bug 196828, or just open a new bug.
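
A minimal sketch of one way to gather such details before commenting there
(exactly which details are useful is an assumption):

rpm -q kernel mkinitrd dmraid            # exact package versions
dmraid -s ; dmsetup table                # the active set and the striped mapping
mkdir /tmp/initrd-check && cd /tmp/initrd-check
zcat /boot/initrd-`uname -r`.img | cpio -id
grep '^dm ' init                         # the "dm create ..." lines nash will run, if any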


Comment 169 Jon Cohen 2006-10-27 17:58:59 UTC
(In reply to comment #168)
> Hmm, mkinitrd-5.1.19 should be fine with regards to the dmraid problems
> mentioned in bug 203241 / bug 204768, so appearantly you are being bitten by
> something else, it seems to me that you are experiencing the problem of mkinitrd
> creating the wrong "create xxx" line for your dmraid setup, that is bug 196828.
> 
> I guess you're best course of action now is to add a comment with you're problem
> and as much details as possible to bug 196828, or just open a new bug.
> 

Happily, the problem went away with Fedora Core 6, so I can get on with my life.


Comment 170 Alasdair Kergon 2007-01-10 16:24:02 UTC
*** Bug 189794 has been marked as a duplicate of this bug. ***

Comment 171 Alexander Shiyan 2007-01-31 16:34:35 UTC
The upgrade from FC5 to FC6 went fine, but the kernel does not see the
partitions again, as I wrote in #189794.
So now my FC6 system works with the old kernel (2.6.18-1.2254.fc5smp). It is
the only way that works for me...
Please fix this problem again for FC6.

Comment 172 petrosyan 2008-03-13 23:25:51 UTC
Fedora Core 6 is no longer maintained. Is this bug still present in Fedora 7 or
Fedora 8?

Comment 173 Bug Zapper 2008-04-04 02:16:28 UTC
Fedora apologizes that these issues have not been resolved yet. We're
sorry it's taken so long for your bug to be properly triaged and acted
on. We appreciate the time you took to report this issue and want to
make sure no important bugs slip through the cracks.

If you're currently running a version of Fedora Core between 1 and 6,
please note that Fedora no longer maintains these releases. We strongly
encourage you to upgrade to a current Fedora release. In order to
refocus our efforts as a project we are flagging all of the open bugs
for releases which are no longer maintained and closing them.
http://fedoraproject.org/wiki/LifeCycle/EOL

If this bug is still open against Fedora Core 1 through 6, thirty days
from now, it will be closed 'WONTFIX'. If you can reproduce this bug in
the latest Fedora version, please change to the respective version. If
you are unable to do this, please add a comment to this bug requesting
the change.

Thanks for your help, and we apologize again that we haven't handled
these issues to this point.

The process we are following is outlined here:
http://fedoraproject.org/wiki/BugZappers/F9CleanUp

We will be following the process here:
http://fedoraproject.org/wiki/BugZappers/HouseKeeping to ensure this
doesn't happen again.

And if you'd like to join the bug triage team to help make things
better, check out http://fedoraproject.org/wiki/BugZappers

Comment 174 Bug Zapper 2008-05-06 15:39:13 UTC
This bug is open for a Fedora version that is no longer maintained and
will not be fixed by Fedora. Therefore we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora, please feel free to reopen this bug against that version.

Thank you for reporting this bug and we are sorry it could not be fixed.