Bug 140644 - LVM on top of RAID1 does not work correctly
Summary: LVM on top of RAID1 does not work correctly
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Enterprise Linux 3
Classification: Red Hat
Component: lvm
Version: 3.0
Hardware: i686
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Jeremy Katz
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2004-11-23 23:33 UTC by Demosthenes T. Mateo Jr.
Modified: 2007-11-30 22:07 UTC (History)
CC List: 0 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2005-02-01 22:14:05 UTC
Target Upstream Version:
Embargoed:


Attachments
mkinitrd output (877 bytes, text/plain), 2004-12-03 06:17 UTC, Demosthenes T. Mateo Jr.
dmesg output (14.94 KB, text/plain), 2004-12-03 06:19 UTC, Demosthenes T. Mateo Jr.

Description Demosthenes T. Mateo Jr. 2004-11-23 23:33:08 UTC
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.5)
Gecko/20041107 Firefox/1.0

Description of problem:
I created 2 partitions (type 0xfd) and made a RAID1 out of them (/dev/md0).
Then I did pvcreate on /dev/md0, created a volume group, created a logical
volume and then created an ext3 filesystem on the lv. I added an entry to
/etc/fstab so that the lv mounts automatically at boot.
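
The fstab entry would presumably have looked something like the following line
(the device path and mount point are taken from the mount output below; the
remaining fields are an assumption):

   /dev/vol01/logvol1   /logvol   ext3   defaults   1 2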

After rebooting the system, "mount" shows that my lv has been mounted
successfully. I tried writing to and reading from that filesystem and was
successful. However, pvdisplay on /dev/md0 showed nothing, so I did a
cat on /proc/mdstat. RAID wasn't active at all! I did raidstart
/dev/md0 and that's when /dev/md0 showed up in pvdisplay.

Could somebody please explain this? How was I able to mount the lv when
md0 wasn't even active? lsmod shows lvm-mod but not raid1.
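
One possible explanation, offered here only as a guess: each half of a RAID1
mirror carries a complete copy of the LVM metadata, so vgscan may have activated
the volume group directly on /dev/hda11 or /dev/hda12 while md0 was down. A
quick way to check which device the VG is actually sitting on:

   pvscan
   vgdisplay -v vol01 | grep "PV Name"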

Here is some info that might be helpful:

[root@wspanic root]# mount
/dev/hda2 on / type ext3 (rw)
none on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbdevfs on /proc/bus/usb type usbdevfs (rw)
/dev/hda1 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
/dev/hda7 on /usbstick type ext3 (rw)
/dev/vol01/logvol1 on /logvol type ext3 (rw)
//rhel3smb/public on /mnt/samba type smbfs (0)

[root@wspanic root]# dd if=/dev/zero of=/logvol/testfile bs=1M count=1
1+0 records in
1+0 records out

[root@wspanic root]# ls -l /logvol/testfile
-rw-r--r--    1 root     root      1048576 Nov 21 10:28 /logvol/testfile

[root@wspanic root]# cat /proc/mdstat
Personalities :
read_ahead not set
Event: 0
unused devices: <none>

[root@wspanic root]# raidstart /dev/md0

[root@wspanic root]# cat /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
Event: 1
md0 : active raid1 hda12[1] hda11[0]
      104320 blocks [2/2] [UU]

unused devices: <none>

[root@wspanic root]# ls -l /logvol/testfile
-rw-r--r--    1 root     root      1048576 Nov 21 10:28 /logvol/testfile
[root@wspanic root]# dd if=/dev/zero of=/logvol/testfile2 bs=1M count=1
1+0 records in
1+0 records out

[root@wspanic root]# ls -l /logvol/
total 2070
drwx------    2 root     root        12288 Nov 20 16:49 lost+found
-rw-r--r--    1 root     root      1048576 Nov 21 10:28 testfile
-rw-r--r--    1 root     root      1048576 Nov 21 10:29 testfile2

[root@wspanic root]# cat /etc/raidtab
# Sample raid-1 configuration
raiddev                 /dev/md0
raid-level              1
nr-raid-disks           2
nr-spare-disks          0
chunk-size              4

device                  /dev/hda11
raid-disk               0

device                  /dev/hda12
raid-disk               1
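
With raidtools, an array described by this raidtab would typically be built and
started with something like the following (a sketch for reference, not the exact
commands used in this report):

   mkraid /dev/md0          # create the RAID1 array from /etc/raidtab
   cat /proc/mdstat         # confirm md0 is active before layering LVM on it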

Version-Release number of selected component (if applicable):
lvm-1.0.8-5 / raidtools-1.00.3-7

How reproducible:
Always

Steps to Reproduce:
1. create /dev/md0 as RAID1 
2. pvcreate /dev/md0
3. create volume group with /dev/md0
4. create logical volume and make an ext3 fs on it
5. add entries in /etc/fstab for the lv
6. reboot machine
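
A minimal sketch of steps 2-5, using the volume group and logical volume names
from the mount output above (the LV size is only a placeholder):

   pvcreate /dev/md0
   vgcreate vol01 /dev/md0
   lvcreate -L 64M -n logvol1 vol01
   mkfs.ext3 /dev/vol01/logvol1
   mkdir /logvol
   echo "/dev/vol01/logvol1 /logvol ext3 defaults 1 2" >> /etc/fstab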

Actual Results:  The logical volume is mounted and you can read from and write
to it, but /dev/md0 is not active.

Expected Results:  Both the logical volume and /dev/md0 should have been
activated. If not, then the logical volume shouldn't have been mounted.

Additional info:

I created a new initrd file with the following options in
/etc/modules.conf:

alias md-personality-3 raid1
alias block-major-58 lvm-mod
alias char-major-109 lvm-mod

I tried building new initial ramdisks with just the raid1 module, then with
only lvm-mod, and then with both. I got a kernel panic in all cases.
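
For reference, the rebuild itself would have been something along these lines
(kernel version taken from the mkinitrd output later in this report; -f
overwrites the existing image, -v prints what gets included):

   mkinitrd -f -v /boot/initrd-2.4.21-20.ELsmp.img 2.4.21-20.ELsmp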

I was able to work around the kernel panic by specifying the exact root
partition instead of using a label, like this:

   kernel /vmlinuz-xxxxx root=/dev/hda2 

It used to have something like this:

  kernel /vmlinuz-xxxxx root=LABEL=/

Somehow, the inclusion of the lvm-mod module must have led to something similar
to bug #109887, which is why LABEL=/ was no longer recognized. If I use the
backup initrd file, I never have to change root=LABEL=/.
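
If the filesystem label itself is in doubt, it can be checked directly (assuming
the usual e2fsprogs tools are installed):

   e2label /dev/hda2        # should print "/"
   findfs LABEL=/           # should print /dev/hda2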

Comment 2 Jeremy Katz 2004-11-25 05:00:51 UTC
Did you recreate your initrd?

Comment 3 Demosthenes T. Mateo Jr. 2004-11-25 05:35:32 UTC
Yes ... please check the Additional Info block above.

Comment 4 Jeremy Katz 2004-11-25 21:14:17 UTC
Please provide the output of mkinitrd -v.

Comment 5 Demosthenes T. Mateo Jr. 2004-11-29 22:39:00 UTC
Looking for deps of module ide-disk
Looking for deps of module lvm-mod
Looking for deps of module ext3 jbd
Looking for deps of module jbd
Using modules:  ./kernel/drivers/md/lvm-mod.o ./kernel/fs/jbd/jbd.o
./kernel/fs/ext3/ext3.o
Using loopback device /dev/loop0
/sbin/nash -> /tmp/initrd.edyIGp/bin/nash
/sbin/insmod.static -> /tmp/initrd.edyIGp/bin/insmod
`/lib/modules/2.4.21-20.ELsmp/./kernel/drivers/md/lvm-mod.o' ->
`/tmp/initrd.edyIGp/lib/lvm-mod.o'
`/lib/modules/2.4.21-20.ELsmp/./kernel/fs/jbd/jbd.o' ->
`/tmp/initrd.edyIGp/lib/jbd.o'
`/lib/modules/2.4.21-20.ELsmp/./kernel/fs/ext3/ext3.o' ->
`/tmp/initrd.edyIGp/lib/ext3.o'
Loading module lvm-mod
Loading module jbd
Loading module ext3

Comment 6 Demosthenes T. Mateo Jr. 2004-11-29 23:25:03 UTC
Looks like the raid1 module wasn't included. Could you please provide the
options for mkinitrd that would make sure raid1 is loaded before lvm-mod?
On the other hand, is that necessary? Even without creating a new initial
ramdisk, I can see RAID being initialized before the system mounts anything
in fstab.
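
For reference: the mkinitrd shipped with RHEL 3 generally accepts --with=<module>
and --preload=<module> for pulling extra modules into the image; treat the exact
invocation below as a sketch. Forcing raid1 in ahead of lvm-mod would look
roughly like this:

   mkinitrd -f -v --preload=raid1 /boot/initrd-2.4.21-20.ELsmp.img 2.4.21-20.ELsmp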

Comment 7 Jeremy Katz 2004-12-01 03:38:55 UTC
This looks like mkinitrd isn't seeing your raid device as up when it's
creating the initrd, and thus it's not loading the modules.

Was the raid actually started after you created it?  I see the output
of /proc/mdstat, but it's not clear that that's from exactly when you
have it running.

Also, if you run mkinitrd with sh -x, you should be able to see
exactly what all it's seeing.
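
A sketch of such a trace run, writing to a scratch image so nothing under /boot
is touched:

   sh -x /sbin/mkinitrd -v -f /tmp/initrd-test.img 2.4.21-20.ELsmp 2> /tmp/mkinitrd.trace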

Comment 8 Demosthenes T. Mateo Jr. 2004-12-03 06:17:33 UTC
Created attachment 107821 [details]
mkinitrd output

Comment 9 Demosthenes T. Mateo Jr. 2004-12-03 06:19:02 UTC
Created attachment 107822 [details]
dmesg output

Comment 10 Demosthenes T. Mateo Jr. 2004-12-03 06:21:37 UTC
Attached are the output of mkinitrd after enabling /dev/md0 and the dmesg
output after rebooting. Still no go. I also had to replace "root=LABEL=/"
with "root=/dev/hda2" in grub.conf or else I'd get a kernel panic.

Comment 11 Jeremy Katz 2004-12-03 16:04:12 UTC
Can you attach the initrd?
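
For reference, a RHEL 3 initrd is a gzip-compressed ext2 image, so it can also
be unpacked and inspected locally before attaching (paths below are just an
example):

   cp /boot/initrd-2.4.21-20.ELsmp.img /tmp/initrd.img.gz
   gunzip /tmp/initrd.img.gz
   mkdir -p /mnt/initrd
   mount -o loop /tmp/initrd.img /mnt/initrd
   cat /mnt/initrd/linuxrc     # shows which modules nash loads at boot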

Comment 12 Jeremy Katz 2005-02-01 22:14:05 UTC
Closing due to inactivity.  Please reopen if you have further information to add
to this report.

