Bug 449299 - after system restore, could not find filesystem
Summary: after system restore, could not find filesystem
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Fedora
Classification: Fedora
Component: kernel
Version: 9
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Assignee: Kernel Maintainer List
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2008-06-01 23:11 UTC by Gene Czarcinski
Modified: 2008-06-06 18:19 UTC (History)
0 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2008-06-06 18:19:30 UTC
Type: ---
Embargoed:


Attachments (Terms of Use)
vmware console showing error messages on test8 (71.51 KB, image/png)
2008-06-01 23:11 UTC, Gene Czarcinski

Description Gene Czarcinski 2008-06-01 23:11:23 UTC
Description of problem:
This can be a long story or a short story.  Rather than write the "great
American novel" I will try to keep this short and just try an provide what I
believe are the significant facts.  If you need more info as to what/how I did
things, just ask.

All testing was done in vmware virtuals (guests) and all installs were i386
Fedora 9.  I do not believe that vmware is the issue but I can replicate what I
did on real hardware ... an old opteron 140.

1.  In a vmware virtual (call it f9-t2), do multiple installs of Fedora 9, with
grub installed in each system's own partition except for one, and use chainloader
to boot the alternate systems.  I am focusing on a Fedora 9 minimal install on
sda6 (everything in one partition), using sda2 as swap ... sda6 is booted via
chainloader.

2. To digress for a moment, I am testing a re-write/re-development of partimage
called partimage-ng.  I used partimage-ng to do a "full disk" backup of everything
on f9-t2 and then did "full disk" restores of everything onto a second vmware
virtual (call it test8).  After restoring, I tested each system to make sure it
worked (they all did).  All backups and restores were performed from rescue mode
of the Fedora 9 network install cdrom.

3. Back to f9-t2 and boot up rescue mode.  Mount /dev/sda6 read-only and cd to
the mount point.  Run "dd bs=512 count=1 if=/dev/sda6 of=/root/sda6.mbr" and
then scp sda6.mbr over to server storage.  Run "dump -0 -f -" and then "tar
--xattrs -pzcf -" ... in each case, output is piped to server storage via an ssh
tunnel.  [I really like the current capabilities of rescue mode.]  After the
backups, power off f9-t2.
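
For reference, here is a rough sketch of what those backup pipelines looked like;
the server name, mount point, and remote paths below are just placeholders, not
the actual ones I used:

  # run from rescue mode on f9-t2 (names/paths are illustrative only)
  mount -o ro /dev/sda6 /mnt/source
  cd /mnt/source
  dd bs=512 count=1 if=/dev/sda6 of=/root/sda6.mbr
  scp /root/sda6.mbr backupserver:/backups/f9-t2/
  # level-0 dump of the filesystem, piped to server storage over ssh
  dump -0 -f - /dev/sda6 | ssh backupserver 'cat > /backups/f9-t2/sda6.dump'
  # tar equivalent, run from the mount point
  tar --xattrs -pzcf - . | ssh backupserver 'cat > /backups/f9-t2/sda6.tar.gz'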

4. Move to test8 and boot up in rescue mode.  Do "mke2fs -v -j /dev/sda6", mount
/dev/sda6 and then cd to the mount point.  Input is piped in from server storage
via an ssh tunnel.  Run "restore rf -" and then "dd bs=512 count=1 of=/dev/sda6
if=<xx>" and reboot ... select sda6 for bootup ... booting starts ... and then
dies.  A screenshot is attached as the best way to document the failure.
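
A similarly rough sketch of the step 4 restore, with the same placeholder server
name and paths as above:

  # run from rescue mode on test8
  mke2fs -v -j /dev/sda6
  mount /dev/sda6 /mnt/target
  cd /mnt/target
  # pull the dump back over ssh and restore it into the new filesystem
  ssh backupserver 'cat /backups/f9-t2/sda6.dump' | restore rf -
  # write the saved boot-sector copy back to the partition
  ssh backupserver 'cat /backups/f9-t2/sda6.mbr' | dd bs=512 count=1 of=/dev/sda6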

5. Repeat 4 except use "tar --xattrs -pzxf -" to do the restore ... same failure
and I will skip attaching this screenshot.
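
The tar variant differs only in the restore command, roughly:

  # run from the mount point on test8, same placeholder names as above
  ssh backupserver 'cat /backups/f9-t2/sda6.tar.gz' | tar --xattrs -pzxf -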

6. Go back to f9-t2 and update the sda6 system with kernel-2.6.25.3-18.fc9.i686
... reboot into rescue mode and do dump and tar backups again.  No error message
from dump or tar.

7.  Try the step 4 process (on test8) with the updated kernel ... same failure.
Note that during the restore, I got a lot of what looked like SELinux error
messages ... these did not occur the first time.

Version-Release number of selected component (if applicable):
1. fresh install of Fedora 9
2. also tried kernel 2.6.25.3-18.fc9 i686

How reproducible:
yes

Comment 1 Gene Czarcinski 2008-06-01 23:11:23 UTC
Created attachment 307324 [details]
vmware console showing error messages on test8

Comment 2 Gene Czarcinski 2008-06-02 03:45:26 UTC
problem duplicated on real hardware

Comment 3 Chuck Ebbert 2008-06-06 02:04:27 UTC
The UUID changed when you created a new partition and used mke2fs to create the
filesystem...


Comment 4 Gene Czarcinski 2008-06-06 05:46:15 UTC
OK, ... WOW .... and where is this documented?

I tried changing /etc/fstab to use the device (/dev/sda6) rather than the UUID
.... same problem.

And what utility can I use to display/change the UUID on a partition??  I know I
can fiddle with UUIDs in LVM but how about a regular partition?

Comment 5 Chuck Ebbert 2008-06-06 09:43:03 UTC
(In reply to comment #4)
> OK, ... WOW .... and where is this documented?
> 
> I tried changing /etc/fstab to use the device (/dev/sda6) rather than the UUID 
> .... same problem.
> 
> And what utility can I use to display/change the UUID on a partition??  I know I
> can fiddle with UUIDs in LVM but how about a regular partition?

tune2fs -U
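
For example, roughly (the UUID value here is a placeholder; use the one the
restored /etc/fstab and grub.conf still reference):

  # set the new filesystem's UUID back to the one the restored system expects
  tune2fs -U <uuid-from-original-fstab> /dev/sda6
  # or generate a fresh one and update /etc/fstab and grub.conf to match
  tune2fs -U random /dev/sda6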


Comment 6 Gene Czarcinski 2008-06-06 18:02:36 UTC
OK, it is kind of hard to argue with success ... I got it to work using tune2fs
... BUT, it would be more than nice if some of this was documented somewhere [if
it is, I could not find it] ... and it still leaves the question of some command
to DISPLAY the current UUID on a device ... I did find the "findfs" command
which will display a device given a UUID but not the reverse of displaying the
UUID given a device.

I still want to try editing the grub kernel line to say root=/dev/sda6 to see if
that would have given me a running system.  But, that will not work with "/" on
an LVM Logical Volume ... that will need the UUID.
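
For the non-LVM sda6 case, the edited grub kernel line would look roughly like
this (the vmlinuz version is just the one from step 6, and the rest of the line
is from memory):

  kernel /boot/vmlinuz-2.6.25.3-18.fc9.i686 ro root=/dev/sda6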

With the emphasis that Fedora (and I assume RHEL) is putting on using LVM and
UUID, someone knowledgeable of UUID ins-and-outs needs to create some "tips and
tricks" on UUID related matters.  While I might be able to write something, as I
demonstrated, I am not that knowledgeable!


Arrgh ... I just did a quick test where I have a minimal install (like on sda6)
but this time on an LVM Logical Volume.  What I found is that root=UUID= on the
grub kernel line is different from the LVM Logical Volume's UUID AND you need both
... you still need to do a tune2fs -U on the Logical Volume's ext3 filesystem.
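
To illustrate what I mean (the VolGroup00/LogVol00 names are just the Fedora
defaults, and the paths are placeholders):

  # the LV's own UUID is not the one the grub kernel line wants
  lvdisplay /dev/VolGroup00/LogVol00 | grep 'LV UUID'
  # the kernel line's root=UUID= refers to the ext3 filesystem UUID instead
  grep 'root=UUID=' /mnt/target/boot/grub/grub.conf
  # so the freshly created filesystem on the LV still needs its UUID set back
  tune2fs -U <uuid-from-grub-kernel-line> /dev/VolGroup00/LogVol00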

I know that you folks at Red Hat are busy and that it is not your (Chuck's)
responsibility to get the right documentation written and posted in the
appropriate place and/or manual such as for RHEL.  So, whose chain do I pull to
get some doc out there (besides this BZ report)?

Something tells me that this information needs to be available so that someone
is not in a panic when they are trying to restore a system from bare-metal. 
Yes, they can always reinstall but, if someone has taken the time and effort to
create real backups, a reinstall does not look appealing.

Comment 7 Gene Czarcinski 2008-06-06 18:19:30 UTC
fedora-documentation docs-request: 
https://bugzilla.redhat.com/show_bug.cgi?id=450331

closing this as notabug since I consider it to be a lack of documentation rather
than a software problem.

