Bug 230190 - Unable to rename a volume group containing root LV (boot failed)
Summary: Unable to rename a volume group containing root LV (boot failed)
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Fedora
Classification: Fedora
Component: lvm2
Version: 6
Hardware: i386
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Peter Jones
QA Contact: Corey Marthaler
URL: http://www.fedoraforum.org/forum/show...
Whiteboard:
Depends On:
Blocks:
 
Reported: 2007-02-27 11:07 UTC by Rolf Linder
Modified: 2012-10-04 23:20 UTC (History)
CC List: 8 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2007-07-24 11:44:11 UTC
Type: ---
Embargoed:


Attachments (Terms of Use)
init script (1.73 KB, text/plain), 2007-03-01 17:35 UTC, Rolf Linder
output of mkinitrd command (2.54 KB, text/plain), 2007-03-01 17:47 UTC, Rolf Linder

Description Rolf Linder 2007-02-27 11:07:31 UTC
Description of problem:


Version-Release number of selected component (if applicable):
The problem occurs on FC6 (latest updates or fresh install) and, as described in
the thread linked above, also on FC5.

FC6 uses the default LVM version (2).

How reproducible:
always


Steps to Reproduce:
1. Boot into rescue mode (no searching of installation -> skip)
2. Scan for LVM-VG's -> lvm vgscan
3. Check that no LVs are active in the kernel -> lvm vgchange -a n
4. Rename VG -> lvm vgrename VolGroup00 VG00
5. Again, scan for VG's -> lvm vgscan
6. Activate lvm -> lvm vgchange -a y
7. Create directories for mounting -> mkdir /mnt/sysimage/
8. Mount root-Logical Volume -> mount /dev/VG00/LogVol00 /mnt/sysimage
9. Mount boot-partition -> mount /dev/hdX /mnt/sysimage/boot
10. Change VolGroup00 to VG00 in /boot/grub.conf and /etc/fstab
11. -> Sync 
12. umount
13. reboot without rescue
14. -> Boot failed....

The system is now unable to boot. The problem seems to reside in the early boot
process: the step "Scanning logical volumes..." completes successfully, but the
next step, "Activating logical volumes", still tries to activate the old, now
unknown volume group (VolGroup00 instead of VG00). The following steps then also
fail, because /dev, /proc, ... cannot be mounted, and finally the kernel cannot
switch to the new root filesystem; with the message below, the system stops:

"switchroot: mount failed: No such file or directory"

This problem has been discussed in two Fedora forums (one thread is linked
above, the other (German) one is here):
http://www.fedoraforum.de/viewtopic.php?t=9757&postdays=0&postorder=asc&start=15


  
Actual results:
system is unable to boot anymore

Expected results:
a clean booting

Additional info:

Comment 1 Alasdair Kergon 2007-02-27 15:23:46 UTC
If you want to rename the volume group containing the root filesystem, you also
need to update your initrd, as the name is also stored there.  See mkinitrd.
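
For illustration, a rough sketch of that rebuild from rescue mode (assuming the
root LV is already mounted at /mnt/sysimage and "xyz" stands for the installed
kernel version):

    chroot /mnt/sysimage
    mount -t proc proc /proc
    mount -t sysfs sysfs /sys
    mkinitrd -f -v /boot/initrd-xyz.img xyz   # reads root/swap devices from /etc/fstab
    exit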


Comment 2 Rolf Linder 2007-02-28 18:39:12 UTC
Hi, I'm sorry, but the system still does not boot...

So here is what I did (over the last two evenings):

(In each case I boot from rescue, mount the system root, chroot into it, and
umount/mount the proc and sysfs filesystems at the right place inside the chroot
environment.)

1st try: mkinitrd -f -v /boot/initrd-xyz xyz (no errors while creating the initrd) ->
reboot; unable to boot.
2nd try: configured yum; removed the kernel; reinstalled the kernel (yum install kernel) ->
reboot; no success.
3rd try: fresh install; vgrename; copied the initrd to /tmp; cpio -i ....; changed every
occurrence of the old VG name; cpio -o < filelist > initrd -> reboot -> no success.

So this leads me to the question: what is the recommended way to rename a volume
group that contains a root filesystem?

Unfortunately I could not find an answer in the forums / on the internet...

Sincerely,

Rolf Linder

Comment 3 Dave Wysochanski 2007-02-28 18:50:26 UTC
With the new initrd, are you still getting this same problem or something different:
the step "Scanning logical volumes..." completes successfully...but next step
"Activating logical volumes" still try to mount an unknown Volume Group (here
VolGroup00 isntead of VG00)

Comment 4 Rolf Linder 2007-02-28 19:52:21 UTC
Yes and no...the effect is the same...but at a different step.

none of the steps appear. 

"Red Hat nash version 5.1.19 starting" comes up (which, as far as I know, is
where the initrd's init starts)...

then the errors begin:
"Unable to access resume device (/dev/VolGroup00/LogVol00)" -> mount of Logical
Volume Swap isn't successful
"mount: could not find filesystem '/dev/root'" -> ??
"setuproot: moving /dev failed: No such file or directory"
"setuproot: error mounting /proc: No such file or directory"
"setuproot: error mounting /sys: No such file or directory"
"switchroot: mount failed: No such file or directory"
"Kernel panic - not syncing: Attemted to kill init!"

So it seems it can't find any filesystem related to the volume group...

Comment 5 Dave Wysochanski 2007-02-28 21:22:29 UTC
Hmmm....  Just guessing here, but this:
"Unable to access resume device (/dev/VolGroup00/LogVol00)" -> mount of Logical
Volume Swap isn't successful

would probably correlate back to the line in initrd's init file where it's
trying the resume operation.  Here's a snip from one of my FC6 machines:
echo Scanning logical volumes
lvm vgscan --ignorelockingfailure
echo Activating logical volumes
lvm vgchange -ay --ignorelockingfailure  VolGroup00
>>>>> resume /tmp/swapfile-256MB
echo Creating root device.
mkrootdev -t ext3 -o defaults,ro /dev/VolGroup00/LogVol00
echo Mounting root filesystem.

Can you attach the text of your init file?  Another suggestion: add "-vvvv" to
the vgscan line in your initrd's init file and watch the messages - this
will probably tell us more about any LVs/VGs that are being found at boot time.
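
If it helps, one way to make that edit (a rough sketch, assuming the usual
FC6-style gzip-compressed cpio initrd; "initrd-xyz.img" is a placeholder for the
real image name):

    mkdir /tmp/initrd-edit && cd /tmp/initrd-edit
    zcat /boot/initrd-xyz.img | cpio -idmv     # unpack the initrd contents
    vi init                                    # add -vvvv to the "lvm vgscan" line
    find . | cpio -o -H newc | gzip -9 > /boot/initrd-xyz.img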

Comment 6 Rolf Linder 2007-03-01 17:34:23 UTC
OK... now I'm wondering once more...

There is no code in there for scanning for or activating LVM at all...

Since it wasn't possible to upload the file, I'll copy it in here:

*** init start ***
#!/bin/nash

mount -t proc /proc /proc
setquiet
echo Mounting proc filesystem
echo Mounting sysfs filesystem
mount -t sysfs /sys /sys
echo Creating /dev
mount -o mode=0755 -t tmpfs /dev /dev
mkdir /dev/pts
mount -t devpts -o gid=5,mode=620 /dev/pts /dev/pts
mkdir /dev/shm
mkdir /dev/mapper
echo Creating initial device nodes
mknod /dev/null c 1 3
mknod /dev/zero c 1 5
mknod /dev/systty c 4 0
mknod /dev/tty c 5 0
mknod /dev/console c 5 1
mknod /dev/ptmx c 5 2
mknod /dev/rtc c 10 135
mknod /dev/tty0 c 4 0
mknod /dev/tty1 c 4 1
mknod /dev/tty2 c 4 2
mknod /dev/tty3 c 4 3
mknod /dev/tty4 c 4 4
mknod /dev/tty5 c 4 5
mknod /dev/tty6 c 4 6
mknod /dev/tty7 c 4 7
mknod /dev/tty8 c 4 8
mknod /dev/tty9 c 4 9
mknod /dev/tty10 c 4 10
mknod /dev/tty11 c 4 11
mknod /dev/tty12 c 4 12
mknod /dev/ttyS0 c 4 64
mknod /dev/ttyS1 c 4 65
mknod /dev/ttyS2 c 4 66
mknod /dev/ttyS3 c 4 67
echo Setting up hotplug.
hotplug
echo Creating block device nodes.
mkblkdevs
echo "Loading uhci-hcd.ko module"
insmod /lib/uhci-hcd.ko 
echo "Loading ohci-hcd.ko module"
insmod /lib/ohci-hcd.ko 
echo "Loading ehci-hcd.ko module"
insmod /lib/ehci-hcd.ko 
mount -t usbfs /proc/bus/usb /proc/bus/usb
echo "Loading jbd.ko module"
insmod /lib/jbd.ko 
echo "Loading ext3.ko module"
insmod /lib/ext3.ko 
echo "Loading dm-mod.ko module"
insmod /lib/dm-mod.ko 
echo "Loading dm-mirror.ko module"
insmod /lib/dm-mirror.ko 
echo "Loading dm-zero.ko module"
insmod /lib/dm-zero.ko 
echo "Loading dm-snapshot.ko module"
insmod /lib/dm-snapshot.ko 
mkblkdevs
resume /dev/VG01/LogVol00
echo Creating root device.
mkrootdev -t ext3 -o defaults,ro /dev/VG01/LogVol01
echo Mounting root filesystem.
mount /sysroot
echo Setting up other filesystems.
setuproot
echo Switching to new root and running init.
switchroot
*** init end ***

So this is really strange... with this init script it's absolutely normal that
no LVM support is there...

But why am I unable to recreate a new initrd when booted from the rescue image
(with the steps described above, comment #2)?

Comment 7 Rolf Linder 2007-03-01 17:35:41 UTC
Created attachment 149027 [details]
init script

Comment 8 Rolf Linder 2007-03-01 17:46:23 UTC
In short, here are the steps I executed to create the new initrd:

1. boot in rescue mode (skip installation detection)
2. mkdir /mnt/sysimage
3. lvm vgchange -ay
4. mount /dev/VG01/LogVol01 /mnt/sysimage
5. mount /dev/hda2 /mnt/sysimage/boot
6. chroot /mnt/sysimage
7. umount {proc,sys}
8. mount -t proc proc /proc
9. mount -t sysfs sysfs /sys
10. mkinitrd -f /boot/initrd-[version] [version]
11. sync; exit
12. umount {/dev/hda2,/mnt/sysimage}
13. exit -> reboot

So I created another attachment containing the output of the command 'mkinitrd
-v -f /boot/init...'


Comment 9 Rolf Linder 2007-03-01 17:47:18 UTC
Created attachment 149029 [details]
output of mkinitrd command

Comment 10 Bryn M. Reeves 2007-07-20 10:47:38 UTC
Dumb question, but did you update /etc/fstab with the new VG name?

That's what mkinitrd parses to determine the resume device:

    # find the first swap dev which would get used for swsusp
    swsuspdev=$(awk '/^[ \t]*[^#]/ { if ($3 == "swap") { print $1; exit }}' $fstab)
    if [ "$swsuspdev" = "${swsuspdev##LABEL=}" ]; then
        handlelvordev $swsuspdev
    fi
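
For reference, after the rename the swap line that this awk expression matches
would need to look something like the following in /etc/fstab (the LV name is
illustrative):

    /dev/VG00/LogVol01   swap   swap   defaults   0 0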


Comment 11 Rolf Linder 2007-07-20 17:16:27 UTC
Yes. I have...

Here again are the steps I took. The problem still exists, even on Fedora 7...

[prerequisite: installation of fedora with lvm]

1. Boot into rescue mode (no searching of installation -> skip)
2. Scan for LVM-VG's -> lvm vgscan
3. Check that no LVs are active in the kernel -> lvm vgchange -a n
4. Rename VG -> lvm vgrename VolGroup00 VG00
5. Again, scan for VG's -> lvm vgscan
6. Activate lvm -> lvm vgchange -a y
7. Create directories for mounting -> mkdir /mnt/sysimage/
8. Mount root-Logical Volume -> mount /dev/VG00/LogVol00 /mnt/sysimage
9. Mount boot-partition -> mount /dev/sdX /mnt/sysimage/boot
10. Change to new root -> chroot /mnt/sysimage
11. Remount /proc and /sys
12. Change VolGroup00 to VG00 in /boot/grub.conf and /etc/fstab
13. Creating new initrd -> mkinitrd -f -v /boot/initrd-xyz xyz (no errors while
creating initrd)
14. -> Sync 
15. exit 
16. umount partitions
17. reboot
18. -> Boot failed....

It ends up in a kernel panic....

So in my opinion it is not possible to rename a volume group that contains a
root filesystem.

Comment 12 Peter Jones 2007-07-20 18:04:29 UTC
So those steps should probably be:

1. Boot into rescue mode (no searching of installation -> skip)
2. Scan for LVM-VG's -> lvm vgscan
3. Check that no LVs are active in the kernel -> lvm vgchange -a n
4. Rename VG -> lvm vgrename VolGroup00 VG00
5. Again, scan for VG's -> lvm vgscan
6. Activate lvm -> lvm vgchange -a y
7. Create directories for mounting -> mkdir /mnt/sysimage/
8. Mount root-Logical Volume -> mount /dev/VG00/LogVol00 /mnt/sysimage
9. Mount boot-partition -> mount /dev/sdX /mnt/sysimage/boot

10. make /dev/root: mknod /dev/root b $(lvm lvdisplay -C --noheadings
--separator " " --options lvm_kernel_major,lvm_kernel_minor)
11. Mount /dev -> mount -o bind /dev /mnt/sysimage/dev

12. Change to new root -> chroot /mnt/sysimage
13. Remount /proc and /sys
14. Change VolGroup00 to VG00 in /boot/grub.conf and /etc/fstab
15. Creating new initrd -> mkinitrd -f -v /boot/initrd-xyz.img xyz
16. exit
17. umount partitions
18. reboot



Comment 13 Peter Jones 2007-07-20 18:08:24 UTC
Er, make step 10:

mknod /dev/root b $(lvm lvdisplay -C --noheadings --separator " " --options
lvm_kernel_major,lvm_kernel_minor VG00/LogVol00 )
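
For example, if that LV's device-mapper node happened to be major 253, minor 0
(illustrative values), the command substitution above would effectively run:

    mknod /dev/root b 253 0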

Comment 14 Rolf Linder 2007-07-21 13:59:04 UTC
Hi Peter

Thank you so much!! This was it...

This is the solution for renaming an existing root VG. By the way, the options
used to create the root device were not lvm_kernel...; they are lv_kernel...

Again, thank you so much!

Best regards, Rolf Linder

Comment 15 Daryl Hochhalter 2007-12-30 17:52:51 UTC
LVM vgrename of a Fedora 8 system root VG with the following steps (worked for me!):

1. boot from Fedora livecd logon as root
2. /sbin/swapoff -a
3. Deactivate with: /sbin/lvm vgchange -a n
4. Rename VG's and LV's with /sbin/lvm [vg | lv]rename oldname newname
5. Reactivate with: /sbin/lvm vgchange -a y
6. umount /media/_boot
7. mkdir /mnt/sysimage
8. mount /dev/newname/newname /mnt/sysimage
9. mount /dev/sdaX /mnt/sysimage/boot
10. Make /dev/root with: mknod /dev/root b lv_kernel_major lv_kernel_minor
    (You might try $(/sbin/lvm lvs --noheadings --separator " " --options
    lv_kernel_major,lv_kernel_minor) as a variable for mknod, or just use
    /sbin/lvm lvs -v first to get the major,minor values; see the sketch
    after this list.)
11. Bind /dev with: mount -o bind /dev /mnt/sysimage/dev
12. chroot /mnt/sysimage
13. umount {proc,sys} (probably not even mounted)
14. mount -t proc proc /proc
15. mount -t sysfs sysfs /sys
16. Open /mnt/sysimage/boot/grub/grub.conf & /mnt/sysimage/etc/fstab with gedit
    and change the old names to the new names (i.e. VolGroup00 to NewVolGroup
    and LogVol00 to NewLogVol). You could also cd to the directories under the
    new root and use vi.
17. Make New initrd for each kernel: mkinitrd -f -v /boot/initrd-XVERX.img XVERX
18. umount {proc,sys,dev}
19. exit
20. umount /mnt/sysimage/boot
21. umount /mnt/sysimage
22. reboot (remove CD from Drive)
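
A minimal sketch of step 10 with the lvs output captured in a shell variable
(the name NewVolGroup/NewLogVol is illustrative):

    nums=$(/sbin/lvm lvs --noheadings --separator " " \
           --options lv_kernel_major,lv_kernel_minor NewVolGroup/NewLogVol)
    rm -f /dev/root            # remove any existing node or symlink first
    mknod /dev/root b $nums    # $nums expands to "<major> <minor>"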


Comment 16 MarcH 2010-07-08 09:39:29 UTC
These instructions saved my life, thanks Peter.

A few minor updates with respect to Fedora 13:

- mount --bind  instead of "mount -o bind" (why?)

- before mknod, need to remove the existing symlink:  rm /dev/root

- in order to get the correct $major and $minor numbers for "mknod /dev/root b $major $minor" there is no need to bother with long and complicated "lvs" options. Simply use good old "ls -l" instead:  ls -l /dev/mapper/; ls -l /dev/dm-*
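
For example, that output looks roughly like this (values are illustrative); the
two numbers printed before the date, here "253, 0", are the major and minor to
pass to mknod:

    brw-rw---- 1 root disk 253, 0 Jul  8 09:30 /dev/dm-0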

Comment 17 Howie Marshall 2012-10-04 23:20:41 UTC
In reply to comments #15 and 16:

Not quite:  The two "mount -t ..." commands don't work after the chroot command.  But I did get this to work by removing steps 13-15 and adding them after step 11.

Likewise, do not unmount dev, sys and proc until after exiting from chroot.

With those minor adjustments, it works great!  Here is my revised list of steps:

1. boot from Fedora livecd logon as root
2. Release any swap devices with:  /sbin/swapoff -a
3. Deactivate VGs with: /sbin/lvm vgchange -a n
4. Rename VG's and LV's with /sbin/lvm [vg | lv]rename oldname newname
5. Reactivate with: /sbin/lvm vgchange -a y
6. umount /media/_boot
7. mkdir /mnt/sysimage
8. mount /dev/newname/newname /mnt/sysimage
9. mount /dev/sdaX /mnt/sysimage/boot
10. Replace /dev/root with your newly renamed root LV:
        rm /dev/root
        mknod /dev/root b lv_kernel_major lv_kernel_minor
     You might try $(/sbin/lvm lvs --noheadings --separator " " --options lv_kernel_major,lv_kernel_minor)
     as a variable for mknod or just use
        /sbin/lvm lvs -v to first get the major,minor values.
     Or, you can use ls to find the major,minor values:
        ls -l /dev/mapper /dev/dm-*
11. Bind /dev, /proc and /sys with:
        mount -o bind /dev  /mnt/sysimage/dev
        mount -o bind /proc /mnt/sysimage/proc
        mount -o bind /sys  /mnt/sysimage/sys
      Note: you may need "--bind" instead of "-o bind" on some versions.
12. chroot /mnt/sysimage
      Note: when I did this, the root VG/LV was shown mounted under its old name,
      but that did not seem to cause any problem.
13. Open /mnt/sysimage/boot/grub/grub.conf & /mnt/sysimage/etc/fstab with gedit
    and change the old names to the new names (i.e. VolGroup00 to NewVolGroup
    and LogVol00 to NewLogVol). You could also cd to the directories under the
    new root and use vi.
14. Make New initrd for each kernel: mkinitrd -f -v /boot/initrd-XVERX.img XVERX
15. exit [from chroot]
16. umount /mnt/sysimage/{boot,dev,proc,sys}
17. umount /mnt/sysimage
18. reboot (remove CD from Drive)

