Bug 911939 - fedup boot fails, cannot find LVM volumes on cciss device
Keywords:
Status: CLOSED EOL
Alias: None
Product: Fedora
Classification: Fedora
Component: fedup
Version: 19
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Will Woods
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-02-16 20:38 UTC by Chris Caudle
Modified: 2015-02-17 14:46 UTC (History)
8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-02-17 14:46:34 UTC
Type: Bug
Embargoed:


Attachments
Output of journalctl from fedup-dracut debug shell. (82.75 KB, text/plain)
2013-02-23 04:01 UTC, Chris Caudle
log reason for exit to dracut shell (72.68 KB, text/plain)
2013-07-07 20:23 UTC, Knut J BJuland
output of lsinitrd of current F19 system (77.49 KB, text/plain)
2013-12-26 05:14 UTC, Chris Caudle
output of lsinitrd of fedup initramfs image (316.39 KB, text/plain)
2013-12-26 05:15 UTC, Chris Caudle
Comment (75.09 KB, text/plain)
2013-07-12 01:09 UTC, Rob Thomas

Description Chris Caudle 2013-02-16 20:38:57 UTC
Description of problem: Attempted to upgrade an up to date F17 install using fedup.  After rebooting to the upgrader, Plymouth started to boot, then stopped at this point:
[OK] reached target Basic System
dracut-initqueue[228]: warning could not boot /dev/VolGroup00/LogVol00 does not exist


Version-Release number of selected component (if applicable): fedup from F17, dracut from fedup-dracut F18


How reproducible:


Steps to Reproduce:
1. Execute fedup from the F17 installation
2. Reboot to the upgrader
 
Actual results: boot of upgrader fails

Expected results: boot to upgrader and finish installation.

Additional info: drives using cciss driver.  /boot on non-LVM partition.

pvdisplay
 --- Physical volume ---
  PV Name               /dev/cciss/c0d1p2
  VG Name               VolGroup00
  PV Size               124.89 GiB / not usable 20.46 MiB
  Allocatable           yes 
  PE Size               32.00 MiB
  Total PE              3996
  Free PE               1
  Allocated PE          3995
  PV UUID               uYDdQ6-61cZ-DqXY-oBKa-3BsC-QrBW-yWot1O
 
lvdisplay
 --- Logical volume ---
  LV Path                /dev/VolGroup00/LogVol00
  LV Name                LogVol00
  VG Name                VolGroup00
  LV UUID                3AvBo5-hg4L-GajM-sH22-NB5Y-Fc5f-vmMR7X
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 1
  LV Size                70.31 GiB
  Current LE             2250
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/VolGroup00/LogVol01
  LV Name                LogVol01
  VG Name                VolGroup00
  LV UUID                RtiCkf-5Lo7-NKSL-dfgf-sAN3-9u9A-h7CrNi
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 2
  LV Size                1.94 GiB
  Current LE             62
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
   
  --- Logical volume ---
  LV Path                /dev/VolGroup00/LogVol02
  LV Name                LogVol02
  VG Name                VolGroup00
  LV UUID                T5sUh0-W7wK-dGxb-uJw6-0VaS-MTjl-bRFIM8
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 1
  LV Size                52.59 GiB
  Current LE             1683
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2

Comment 1 Chris Caudle 2013-02-16 21:13:27 UTC
I booted into the upgrader again, and running lvm at the dracut debug shell shows that the physical volume is detected, or at least pvdisplay shows the correct disk partition information, and lvdisplay shows all the volume names, but with a status of not available.
What can I do from lvm to get more info on why the volumes are not available?  The volumes are working fine on the F17 installation, which is currently running kernel 3.6.10, and the upgrader seems to be running 3.6.10 as well.

Is there a way to transfer info back and forth between the upgrader process and my F17 install?  So far I have just been writing down error messages, but if I start dumping debug logs or something like that writing by hand won't be practical.  Would a usb key be the best plan for that?

Comment 2 Will Woods 2013-02-16 23:12:42 UTC
I've used a USB key for that in the past, that's fine.

Alternately, you can mount /boot somewhere and copy logs etc. there.

Is there anything fancy about your LVM setup? Do you have an /etc/lvm.conf or anything like that?

Similarly: does the CCISS array get set up automatically in firmware or is it configured inside Fedora somewhere?
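
For example, a minimal sketch of copying the journal out via /boot from the fedup-dracut debug shell (this assumes /boot is the non-LVM partition /dev/cciss/c0d1p1 described in this report, and the log file name is arbitrary):

   mkdir /tmp/bootmnt
   mount /dev/cciss/c0d1p1 /tmp/bootmnt            # the non-LVM /boot partition
   journalctl -b > /tmp/bootmnt/fedup-debug.log    # dump this boot's journal onto /boot
   umount /tmp/bootmnt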

Comment 3 Chris Caudle 2013-02-17 01:50:12 UTC
The array controller is set up offline, to the OS it just looks like multiple disks available.

I never edited /etc/lvm/lvm.conf.  I looked at the file and it appears to be the default installed by the lvm2 package.  RPM says the file is owned by lvm2-2.02.95-6.fc17.x86_64 and there is no lvm.conf.rpm file such as would be created if the file had been modified at some point.
The only metadata backup file in /etc/lvm/backup was created a few years ago, so probably when the LVM volumes were created.  As far as I can tell from lvm.conf the volumes are all detected by scanning at startup; there does not appear to be anything modifying the default behavior. I suppose there could not be, because the lvm.conf file is located on one of the lvm volumes, so it cannot even be read until the volumes are mounted.

Comment 4 Chris Caudle 2013-02-23 04:01:23 UTC
Created attachment 701481 [details]
Output of journalctl from fedup-dracut debug shell.

This is the output of the journalctl command from the debug shell of fedup-dracut on my system where the LVM volumes are not detected correctly.

Comment 5 Chris Caudle 2013-02-23 04:03:33 UTC
I booted a KDE Spin LiveDisc and all the volumes were found.  Likewise when booting the Fedora 18 full install DVD into rescue mode, all volumes were found and mounted correctly under /mnt/sysimage.  I chroot'ed to /mnt/sysimage and verified that I could read files and move around to the various volumes mounted, and they all appeared to be intact and mounted correctly. I forgot to check the version of the kernel on the full DVD rescue mode; is it the same as the version of the kernel used for fedup-dracut?

Comment 6 Chris Caudle 2013-04-14 20:12:23 UTC
No progress on this?  No one else has hit something similar?  
I tried again Saturday (2013-04-14) to see if anything in the network repositories had been updated that might have improved the situation, but still the same problem of fedup not finding the LVM volumes, even though I am using them under the same kernel version on F17, and even though the live disc images can find the volumes with no problem.

When booted into the fedup installer, I ran lvm again to make sure the volumes could be discovered, then tried running vgmknodes to create the device nodes in /dev.  The command did not complain (vgmknodes -v VolGroup00), but no new nodes showed up in /dev or /dev/mapper.  The only entry in /dev/mapper is control, where on my running machine I have /dev/mapper/control, /dev/mapper/VolGroup00-LogVol00, /dev/mapper/VolGroup00-LogVol01 and /dev/mapper/VolGroup00-LogVol02.

I was looking for /dev/VolGroup00, but I see now that the entries are symbolic links, and the "real" device nodes show up in /dev/dm-0 through /dev/dm-2.  I'll have to reboot into fedup dracut again to see if the dm nodes show up when I run vgmknodes, but that still does not explain why the LVM devices do not get mapped automatically so the installer can run, or why the typical symbolic links do not get created.

What are the full steps in booting from LVM?  Does lvm create the mapper nodes, or only the /dev/dm-? nodes, and some other tool is supposed to run and create the /dev/mapper entries based on the /dev/dm-? entries?  I have been trying various tools by hand to see if I could get the LVM volgroups mounted, then exit the debug shell to see if the upgrade would continue, but haven't found the right combination yet.

I have tried comparing my normal boot messages with the dracut journalctl output, but dracut does not seem to have the messages that appear to be from udev and systemctl in my running system, so I'm not sure if an apples-to-apples comparison can be made.
On my F17 logs I see this:
OK Started udev Wait for Complete Device Initialization.
         Starting Wait for storage scan...
OK Started Wait for storage scan.
         Starting Initialize storage subsystems (RAID, LVM, etc.)...
OK Found device /dev/mapper/VolGroup00-LogVol01.
         Activating swap /dev/mapper/VolGroup00-LogVol01...

And the closest equivalent in the journalctl output seems to be:
localhost systemd[1]: Starting Basic System.
localhost systemd[1]: Reached target Basic System.
localhost dracut-initqueue[240]: Scanning devices  for LVM logical volumes VolGroup00/LogVol01 VolGroup00/LogVol00
localhost dracut-initqueue[240]: No volume groups found

Comment 7 Will Woods 2013-04-15 14:31:32 UTC
Yeah, for some reason lvm isn't finding its VGs. Mysterious.

The main difference between the F18 Live image and fedup is the boot args (rd.lvm.lv=XXX root=XXX) that are used - kernel and initrd should be otherwise the same.

What does the output of 'blkid' and 'lvm lvs' (as root) look like on your running system?

What happens if you change "root=/dev/mapper/VolGroup00-LogVol01" to "root=UUID=<UUID for root device>"?
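
(One way to test that without touching grub.cfg, assuming the standard GRUB2 menu: highlight the fedup boot entry and press 'e', change root= on the line beginning with "linux", then press Ctrl-x to boot the edited entry; the edit is one-time and is not saved.)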

Comment 8 Chris Caudle 2013-04-15 19:17:35 UTC
Do you mean changing the root argument to the dracut entry from the grub menu?

On my running F17 system, the output of blkid and lvm lvs is:

$ su -c blkid
Password:
/dev/cciss/c0d0p1: UUID=90A0E44BA0E438FC TYPE=ntfs
/dev/cciss/c0d1p1: LABEL=/boot UUID=55d9c3bb-9166-4172-90ab-707a93a51c51 TYPE=ext3
/dev/cciss/c0d1p2: UUID=uYDdQ6-61cZ-DqXY-oBKa-3BsC-QrBW-yWot1O TYPE=LVM2_member
/dev/cciss/c0d2p1: LABEL=Videos UUID=96B0429CB042832B TYPE=ntfs
/dev/cciss/c0d3p1: UUID=ab0f827c-ea02-4d3b-bc3a-5c4a97f53c6c TYPE=xfs
/dev/mapper/VolGroup00-LogVol00: UUID=588525e6-b088-4dda-b026-be33ec5baa98 TYPE=ext4
/dev/mapper/VolGroup00-LogVol01: UUID=33e9a1c9-6ed1-4b9e-ae4b-af49fba46e70 TYPE=swap
/dev/mapper/VolGroup00-LogVol02: UUID=f52ec887-88bd-4b65-a996-403dddec0269 TYPE=ext4

$ su -c lvm lvs
Password:
  LV       VG         Attr     LSize  Pool Origin Data%  Move Log Copy%  Convert
  LogVol00 VolGroup00 -wi-ao-- 70.31g
  LogVol01 VolGroup00 -wi-ao--  1.94g
  LogVol02 VolGroup00 -wi-ao-- 52.59g

Comment 9 Chris Caudle 2013-04-15 19:21:03 UTC
Also, would the UUID to use for the root= argument be the UUID of the block device c0d1p2 or the UUID of /dev/mapper/VolGroup00-LogVol00?  The UUID printed for the block device does not look like it conforms to the format of the other devices (it doesn't look like it is printed in hex), so I am assuming that is not a valid UUID.

Comment 10 Chris Caudle 2013-04-16 14:47:49 UTC
After changing root=/dev/mapper/VolGroup00-LogVol00 to root=UUID=588525e6-b088-4dda-b026-be33ec5baa98 I got basically the same results, except that the end of the journalctl output now has an additional line after "Warning: /dev/mapper/VolGroup00-LogVol00 does not exist" that says something to the effect of "warning: /dev/disk/by-UUID/588525e6-b088-4dda-b026-be33ec5baa98 does not exist".

After thinking about it, that may be expected: since the root volume is on an LVM volume, the UUID I used pointed to the LVM. If the kernel could load the LVM far enough to find the UUID, it should have been able to just use the LVM volume, since VolGroup00-LogVol00 isn't really ambiguous on this system; there is only the one VolGroup.

Comment 11 Chris Caudle 2013-04-16 14:57:18 UTC
By the way, which is the correct UUID to use?  Notice that the output of lvdisplay above shows the UUID for LogVol00 as 3AvBo5-hg4L-GajM-sH22-NB5Y-Fc5f-vmMR7X, but that doesn't look like all the other UUIDs used by the system (e.g. in fstab).  All the other UUIDs appear to be hex values, so I assume that 588525e6-b088-4dda-b026-be33ec5baa98 was the correct UUID to use and not 3AvBo5-hg4L-GajM-sH22-NB5Y-Fc5f-vmMR7X.
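
For reference, those are two different identifiers: lvdisplay prints the LVM-internal LV UUID, while root=UUID= expects the filesystem UUID that blkid reports for the mapped device. A rough way to see both, using this report's volume names:

   blkid /dev/mapper/VolGroup00-LogVol00     # filesystem UUID (588525e6-...), the form root=UUID= expects
   lvs -o lv_name,lv_uuid VolGroup00         # LVM-internal LV UUIDs (3AvBo5-...), not usable with root=UUID=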

Comment 12 Chris Caudle 2013-06-29 15:00:04 UTC
I just saw an update to fedup come down for my F17 system.  Are any of the changes expected to make a difference for this problem?  I was waiting for the U.S. July 4 holiday weekend for some down time to try to upgrade using yum before I gave up and did a re-install.  Is it worth trying the new version of fedup first?  Or is dracut the same in the new version?

Comment 13 Will Woods 2013-07-01 18:23:41 UTC
The version of fedup-dracut in F18 will not change; we don't update the boot images/installer/etc. for a given Fedora version once released.

As for the boot arguments, after some research it seems like the usual form is something like:

  root=/dev/mapper/VolGroup00-LogVol00 rd.lvm.lv=VolGroup00-LogVol00

And that's what you have, so that's fine. But on further review of the logs, the interesting bit is:

  dracut-initqueue[240]: No volume groups found
  dracut-initqueue[240]: PARTIAL MODE. Incomplete logical volumes will be processed.

Why is it in partial mode? Why does it think there are missing PVs?

The output of: `vgs -o +devices` and/or ` lvs -P -a -o +devices` might be helpful here. Could you run those commands and paste their output?

Comment 14 Jeffrey C. Ollie 2013-07-01 19:47:46 UTC
(In reply to Will Woods from comment #13)
>
> The output of: `vgs -o +devices` and/or ` lvs -P -a -o +devices` might be
> helpful here. Could you run those commands and paste their output?

Here's the output from one of my systems that is having this problem:

[root@fw06 ~]# vgs -o +devices
  VG           #PV #LV #SN Attr   VSize  VFree  Devices               
  VG_FW06_BOOT   1   2   0 wz--n- 67.25g 43.25g /dev/cciss/c0d0p2(0)  
  VG_FW06_BOOT   1   2   0 wz--n- 67.25g 43.25g /dev/cciss/c0d0p2(256)
[root@fw06 ~]# lvs -P -a -o +devices
  Partial mode. Incomplete logical volumes will be processed.
  LV           VG           Attr     LSize  Pool Origin Data%  Move Log Copy%  Convert Devices               
  LV_FW06_ROOT VG_FW06_BOOT -wi-ao-- 16.00g                                            /dev/cciss/c0d0p2(256)
  LV_FW06_SWAP VG_FW06_BOOT -wi-ao--  8.00g                                            /dev/cciss/c0d0p2(0)

Comment 15 Chris Caudle 2013-07-04 16:32:02 UTC
From my running F17 system:

[root@chubb /]# vgs -o +devices
  VG         #PV #LV #SN Attr   VSize   VFree  Devices                
  VolGroup00   1   3   0 wz--n- 124.88g 32.00m /dev/cciss/c0d1p2(0)   
  VolGroup00   1   3   0 wz--n- 124.88g 32.00m /dev/cciss/c0d1p2(3933)
  VolGroup00   1   3   0 wz--n- 124.88g 32.00m /dev/cciss/c0d1p2(2250)


The lvs command has odd formatting; I'm not sure how best to present it.
Interesting that it shows up as partial mode on the running system.  I'm not familiar enough with lvm to debug what is going on.  The lvdisplay command shows the three logical volumes with a status of "available," with no indication of partial.  The lvs command does not have any lines about missing PVs, so how do I determine what is causing the partial mode?  I'm encouraged that there at least seems to be a possible explanation now for why the upgrade is failing; I at least have something I can try to fix.


[root@chubb /]# lvs -P -a -o +devices
  Partial mode. Incomplete logical volumes will be processed.
  LV       VG         Attr     LSize  Pool Origin Data%  Move Log Copy%  Convert Devices                
  LogVol00 VolGroup00 -wi-ao-- 70.31g                                            /dev/cciss/c0d1p2(0)   
  LogVol01 VolGroup00 -wi-ao--  1.94g                                            /dev/cciss/c0d1p2(3933)
  LogVol02 VolGroup00 -wi-ao-- 52.59g                                            /dev/cciss/c0d1p2(2250)

Comment 16 Knut J BJuland 2013-07-07 20:19:50 UTC
Still present when upgrading from Fedora 18 to Fedora 19

Comment 17 Knut J BJuland 2013-07-07 20:23:48 UTC
Created attachment 770173 [details]
log reason for exit to dracut shell

Comment 18 Michael 2013-07-08 08:10:10 UTC
I have the same problem trying to upgrade F18 to F19!

[root@FedoraSRV ~]# vgs -o +devices
  VG         #PV #LV #SN Attr   VSize   VFree Devices        
  VolGroup10   1   3   0 wz--n- 924.56g    0  /dev/md1(0)    
  VolGroup10   1   3   0 wz--n- 924.56g    0  /dev/md1(1250) 
  VolGroup10   1   3   0 wz--n- 924.56g    0  /dev/md1(29585)
  VolGroup10   1   3   0 wz--n- 924.56g    0  /dev/md1(28960)


[root@FedoraSRV ~]# lvs -P -a -o +devices
  PARTIAL MODE. Incomplete logical volumes will be processed.
  LV       VG         Attr      LSize   Pool Origin Data%  Move Log Copy%  Convert Devices        
  LogVol00 VolGroup10 -wi-ao---  39.06g                                            /dev/md1(0)    
  LogVol01 VolGroup10 -wi-ao---  19.53g                                            /dev/md1(28960)
  LogVol02 VolGroup10 -wi-ao--- 865.97g                                            /dev/md1(1250) 
  LogVol02 VolGroup10 -wi-ao--- 865.97g                                            /dev/md1(29585)

Comment 19 Rob Thomas 2013-07-12 01:09:03 UTC
Created attachment 915732 [details]
Comment

(This comment was longer than 65,535 characters and has been moved to an attachment by Red Hat Bugzilla).

Comment 20 Rob Thomas 2013-07-13 04:31:41 UTC
Figured it out.  Not sure what I did differently.  From the dracut shell, type in lvm.  Then vgscan --mknodes, then vgchange -ay
exit
exit
mount -a
It fires up from there.

Comment 21 Chris Caudle 2013-07-29 18:35:08 UTC
Just a correction to what Rob Thomas wrote:
lvm
<from within the lvm program>
vgscan --mknodes
vgchange -ay
exit
<exited lvm, back to dracut shell>
<mount -a did not work for me, because that relies on fstab, which in my case 
is located on  one of the lvm volumes, so not mounted>
mount <physical volume or log vol> <mountpoint under sysroot>
exit

The sequence as originally given (exit, exit, mount -a) exits the dracut shell before you get a chance to give the mount command, so it should be exit, mount, exit.

I successfully used this workaround to upgrade F17 to F18, then F18 to F19.  After the first upgrade the changes were not permanent, i.e. I had to use the workaround again for the F18 to F19 upgrade.
It would be nice to understand what is wrong with the current setup that is causing dracut to not automatically mount the logical volumes.  If anyone figures out what to do to the running system so that this is not a problem again, please append to this bug report.
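
For anyone landing here, the corrected sequence in one place; this is only a sketch based on the steps above, so substitute your own volume group, logical volume and mount point names:

  lvm                                # enter the lvm shell
  vgscan --mknodes                   # recreate the missing device nodes
  vgchange -ay                       # activate all volume groups
  exit                               # leave the lvm shell
  mount /dev/mapper/VolGroup00-LogVol00 /sysroot    # mount the root LV under /sysroot
  exit                               # leave the dracut shell; the upgrade should continue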

Comment 22 Francois Cartegnie 2013-09-21 14:03:18 UTC
Doesn't work on my side

mdadm --assemble /dev/md0
mdadm --assemble /dev/md1
lvm stuff
mount /dev/VolGroup/root /sysroot
exit
exit

-> Starts regular init with fedup kernel....

Forcing systemd target to install just.... uninstalled the upgrade waiting in /var and fedup /boot entries.

Comment 23 Francois Cartegnie 2013-09-21 18:36:18 UTC
Updated using chroot from livecd.

Similar behaviour when you rebuild the initrd without /run bind-mounted.

Comment 24 Will Woods 2013-10-09 22:01:45 UTC
So it seems there's some file/module missing from the initramfs that's needed to get cciss devices working? Is there a cciss.conf or somesuch? Or does it use mdadm.conf?

Does your normal initramfs have files in /etc/cmdline.d?

Comment 25 Chris Caudle 2013-10-31 02:03:19 UTC
Will Woods: not sure where you got the idea that cciss devices were not working.  If you read through the details of the problem you will note that the physical volumes are discovered, which necessarily implies that the cciss driver is working just fine.  The problem is that lvm determines that there is an inconsistency in the lvm volumes and will not mount them automatically.  The volumes are mounted without complaint on the running system, but lvm in dracut will not mount them to upgrade.  Manually forcing them to mount works OK, so there is nothing fundamentally wrong with the volumes.  I cannot rule out that there really is technically something not correct or not consistent with the volumes all the time, but if that were the case I would expect some error messages in /var/log/messages, and I find none.

Comment 26 Will Woods 2013-10-31 21:15:58 UTC
No, I understand that the devices are working, but something's causing the system to fail to mount the devices. That's the problem. It has a few possible causes, and I'm trying to eliminate some of them.


So let me rephrase: On a normal boot, the dracut initrd on your system brings up the cciss devices and LVM volumes without a problem, right? The fedup upgrade.img (aka /boot/initramfs-fedup.img) is just a normal dracut initrd. The only important differences are:

1) upgrade.img is built using the *new* system's kernel/dracut/etc.
   (for example: if you're upgrading to F20, upgrade.img is a F20 dracut image.)

2) upgrade.img is not built on your system.


Therefore the likely reasons that your initramfs.img mounts everything happily and upgrade.img doesn't are:

1) There's some new requirement in F20 that didn't exist in F19.

Example: F18 auto-assembled all LVM devices, so "root=/dev/mapper/fedora-root" works by itself; F19 requires 'rd.lvm.lv=fedora-root' to tell dracut to activate that LV, or 'rd.auto' to activate all disk devices it finds.

2) There's some file on your system that dracut puts into the initramfs which isn't present in upgrade.img, because it wasn't built on your system.

Example: /etc/mdadm.conf is required for some mdraid arrays to get the intended device number, so we need to pull that file into upgrade.img before rebooting.


So to try to rule out the second one, I'm asking you to check out the contents of:

   lsinitrd /boot/initramfs-$(uname -r).img
   lsinitrd /boot/initramfs-fedup.img

and look for files that the former has but the latter lacks, especially anything in /etc/cmdline.d, or /etc/*.conf.

Are there any such differences in the initrds?
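
A quick way to narrow that down, assuming the default ls-style listing from lsinitrd, is to grep both images for the configuration paths mentioned above, e.g.:

   lsinitrd /boot/initramfs-$(uname -r).img | grep -E 'etc/cmdline\.d|etc/.*\.conf'
   lsinitrd /boot/initramfs-fedup.img | grep -E 'etc/cmdline\.d|etc/.*\.conf'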

Comment 27 Fedora End Of Life 2013-12-21 11:27:39 UTC
This message is a reminder that Fedora 18 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 18. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '18'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 18's end of life.

Thank you for reporting this issue and we are sorry that we may not be 
able to fix it before Fedora 18 is end of life. If you would still like 
to see this bug fixed and are able to reproduce it against a later version 
of Fedora, you are encouraged to change the 'version' to a later Fedora 
version prior to Fedora 18's end of life.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 28 Chris Caudle 2013-12-26 05:09:52 UTC
The initramfs of my current system does have /etc/cmdline.d/90lvm.conf but my system does not actually have a /etc/cmdline.d directory.  Where would that come from in the initramfs?  From the kernel build specs?

I will add the output of lsinitrd.  I will try the F19 to F20 upgrade tomorrow to see if it has the same issues as the F17 to F18 and F18 to F19 upgrades in the past.

Comment 29 Chris Caudle 2013-12-26 05:14:25 UTC
Created attachment 841779 [details]
output of lsinitrd of current F19 system

This is the output of lsinitrd for my currently running kernel, which happens to be a -RT kernel from CCRMA.  Previously I was running a stock Fedora kernel.

Comment 30 Chris Caudle 2013-12-26 05:15:48 UTC
Created attachment 841780 [details]
output of lsinitrd of fedup initramfs image

This is the output of lsinitrd from the initramfs image that fedup just placed in /boot (upgrading from F19 to F20)

Comment 31 Chris Caudle 2013-12-26 16:05:28 UTC
I tried the upgrade this morning, and after the usual boot messages scrolled by (I definitely saw the "reached target system upgrade" message) there were several messages that started with umount or umounting that scrolled by too fast to read; then the system rebooted without running the upgrade and went back to the installed system (F19).

I can't find any log files on the F19 filesystem.  Would fedup have dropped any log files onto the original system, or would all of the fedup logs have been lost when the system rebooted?

The fedup entry in grub.cfg was removed when the fedup image booted, so I am running fedup again to get the entry back into the grub menu.

I can't tell if this is related to the original problem or not.  My first instinct is no because of how different the behavior is.  Is there any way to tell fedup to halt on error instead of rebooting so I can check the console messages?

Comment 32 Rob Thomas 2013-12-27 02:30:45 UTC
Chris, I had that same problem on my desktop.  The solution turned out to be that I had two packages causing the problem.  The first one it couldn't upgrade or deal with, so I removed it.  The second was a software package that didn't have a valid RPM key; it wasn't a RedHat/Fedora package, it was a package that works with the Ouya.  Since I didn't even use it, I blew it away.  Yum update using the published update option gave no problem once the other issues were taken care of.  A pain when you really don't have time to deal with it, and I didn't at the time.  Below is my shell history that finally solved it.  First time I've tried to use fedup, and first time using many of the commands with the options shown below - from web pages on how to upgrade to new versions.

HTH.

  950  tail -500 fedup.log
  951  ls /var/lib/fedora-upgrade
  952  ls -l /sys*
  953  yum install rpmconf; rpmconf -a
  954  rpmconf -a
  955   find /etc /var -name '*?.rpm?*'
  956  rm /etc/pki/tls/certs/ca-bundle.trust.crt.rpmsave /etc/pki/tls/certs/ca-bundle.crt.rpmsave /etc/pki/java/cacerts.rpmsave /etc/rpm/macros.kde4.rpmsave
  957  package-cleanup
  958  package-cleanup --leaves
  959  yum remove libxml2-devel-2.9.1-1.fc19.x86_64 libuuid-devel-2.23.2-4.fc19.x86_64 libusbx-devel-doc-1.0.16-3.fc19.noarch libusb-devel-0.1.5-2.fc19.x86_64
  960  package-cleanup --leaves
  961  yum erase `package-cleanup --leaves`
  962  package-cleanup --orphans
  963  yum install fedora-upgrade 
  964  fedora-upgrade
  965  yum erase kde-settings-kdm-19-23.fc19.noarch
  966  yum check
  967  yum check all
  968  rpm -e kde-settings-kdm-19-23.fc19.noarch
  969  yum erase  kde-settings-kdm-19-23.fc19.noarch
  970  rpm -e -f kde-settings-kdm-19-23.fc19.noarch
  971  rpm -e --forcef kde-settings-kdm-19-23.fc19.noarch
  972  rpm -e --force  kde-settings-kdm-19-23.fc19.noarch
  973  man rpm
  974  rpm -e --noscripts  kde-settings-kdm-19-23.fc19.noarch
  975  yum check
  976  fedora-upgrade
  977  yum install autocorr-en
  978  rpm -Va --nofiles --nodigest
  979  yum update yum
  980  yum --releasever=20 distro-sync
  981  package-cleanup  --dupes
  982   yum-complete-transaction
  983  package-cleanup --problems
  984  yum clean all
  985  yum --releasever=20 distro-sync
  986  yum erase libreoffice-core
  987  yum --releasever=20 distro-sync
  988  rpm --import https://fedoraproject.org/static/246110C1.txt
  989  yum update yum
  990  yum --releasever=20 distro-sync
  991  rm /etc/pki/ca-trust/source/anchors/Cert-trust-test-ca.pem
  992  yum clean all
  993  yum --releasever=20 distro-sync
  994  rpm -qa | grep plex
  995  yum erase plexmediaserver-0.9.7.28.33-f80a4a2.x86_64
  996  yum --releasever=20 distro-sync
  997  sync
  998  cat /etc/redhat-*
  999  reboot

Comment 33 Chris Caudle 2014-01-06 18:37:30 UTC
Removed some packages, updated with YUM and got past the stop during upgrade.
The original problem with LVM volumes which occurred during the F17 to F18 and F18 to F19 upgrades did not occur during this F19 to F20 upgrade, so I think the problem can be marked as no longer existing.  Should I change status to closed in bugzilla, or does Will Woods do that?

Comment 34 Michael 2014-01-07 08:08:55 UTC
In my recent upgrade from F19 -> F20, I had the same problem:

warning could not boot /dev/VolGroup00/LogVol00 does not exist

The problem was that fedup does not assemble RAID drives!

So I ran:

mdadm --assemble /dev/md0
mdadm --assemble /dev/md1

(I don't remember if I ran any lvm command, as comment #22 suggests.)

After exiting from the shell, the process continued without problem.

Comment 35 Fedora End Of Life 2015-01-09 17:41:16 UTC
This message is a notice that Fedora 19 is now at end of life. Fedora 
has stopped maintaining and issuing updates for Fedora 19. It is 
Fedora's policy to close all bug reports from releases that are no 
longer maintained. Approximately 4 (four) weeks from now this bug will
be closed as EOL if it remains open with a Fedora 'version' of '19'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not 
able to fix it before Fedora 19 is end of life. If you would still like 
to see this bug fixed and are able to reproduce it against a later version 
of Fedora, you are encouraged to change the 'version' to a later Fedora 
version prior to this bug being closed as described in the policy above.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 36 Fedora End Of Life 2015-02-17 14:46:34 UTC
Fedora 19 changed to end-of-life (EOL) status on 2015-01-06. Fedora 19 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.

