The following was filed automatically by anaconda:

anaconda 11.5.0.52 exception report
Traceback (most recent call first):
  File "/usr/lib/anaconda/storage/formats/fs.py", line 439, in doCheck
    raise FSError("filesystem check failed: %s" % rc)
  File "/usr/lib/anaconda/storage/formats/fs.py", line 373, in doResize
    self.doCheck(intf=intf)
  File "/usr/lib/anaconda/storage/deviceaction.py", line 358, in execute
    self.device.format.doResize(intf=intf)
  File "/usr/lib/anaconda/storage/devicetree.py", line 671, in processActions
    action.execute(intf=self.intf)
  File "/usr/lib/anaconda/storage/__init__.py", line 238, in doIt
    self.devicetree.processActions()
  File "/usr/lib/anaconda/packages.py", line 117, in turnOnFilesystems
    anaconda.id.storage.doIt()
FSError: filesystem check failed: 8
Created attachment 344144 [details] Attached traceback automatically from anaconda.
Error code 8 from e2fsck is "Operational error", whatever that means.
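For reference, e2fsck's exit status is a bitmask (per the e2fsck(8) man page: 1 = errors corrected, 2 = reboot needed, 4 = errors left uncorrected, 8 = operational error). A quick sketch of decoding it; the variable names here are just for illustration:

```shell
# Decode an e2fsck exit status bitmask (status value 8 is what anaconda saw).
status=8
desc=""
[ $((status & 1)) -ne 0 ] && desc="$desc errors-corrected"
[ $((status & 2)) -ne 0 ] && desc="$desc reboot-needed"
[ $((status & 4)) -ne 0 ] && desc="$desc errors-left-uncorrected"
[ $((status & 8)) -ne 0 ] && desc="$desc operational-error"
echo "e2fsck exit $status:$desc"
```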
Do we have system logs from this attempt that might contain the fsck output? Does fsck output go to the logs?
esandeen: storage.log, program.log and other log files from the installer are attached in the anacdump.txt file. Does this provide the information you're looking for?
oh, oops, so they are. :)  But unfortunately, nope:

Running... ['e2fsck', '-f', '-p', '-C', '0', '/dev/mapper/vg_test150-lv_root']
<nada>

doesn't seem to have logged the output. I take it you were resizing a filesystem? Looks like anaconda was doing its iterative search for minimum size; I thought only livecd creation did that... damn. It's potentially buggy, but I was willing to live with it in the livecd creation stuff. If anaconda is doing this it's a bigger problem.
This bug was encountered while attempting to retest a fix in bug#499662.

= Steps to reproduce =
* Install with autopart
* Initiate a new install, and select 'Custom partition'
* Resize the previous '/' logical volume to 5000
* Add a new '/' using the remaining free space in the volume group
* Resize the previous '/boot' partition from 196 to 195
The minimum size calc in resize2fs was broken, and if you tried to resize below the real minimum, some error paths were such that corruption ensued. I'm hoping that the fix for bug #499452 should fix this as well, though it's a bit hard to tell since we don't actually see your fsck output anywhere :( -Eric
Created attachment 345491 [details] Attached traceback automatically from anaconda.
Reproduced again while retesting the fix for bug#499662.

Test procedure outlined at https://bugzilla.redhat.com/show_bug.cgi?id=499662#c7
A couple more resize fixes were put into e2fsprogs-1.41.4-10 ... James, have you reproduced with that as well?
Retested in comment#8 on rawhide-20090526 which included e2fsprogs-1.41.4-10.fc11
Ok, we need to get fsck output via an update disk or something which doesn't throw away the result ....
I've hit this bug again while testing and, at the suggestion of notting, reran the fsck manually.

sh-4.0# e2fsck -f -p -C 0 /dev/mapper/vg_brutus-lv_root
/dev/mapper/vg_brutus-lv_root: Invalid argument while reading block 19431424
/dev/mapper/vg_brutus-lv_root: Invalid argument reading journal superblock
e2fsck: Invalid argument while checking ext3 journal for /dev/mapper/vg_brutus-lv_root
I should note, this system is available for remote access if needed.
The partition is ~19G:

# grep dm /proc/partitions
 253     0   20480000 dm-0
# bc
20480000*1024/1024/1024/1024
19

The filesystem thinks it's 149G!

# dumpe2fs -h /dev/mapper/vg_brutus-lv_root | grep -i block
dumpe2fs 1.41.4 (27-Jan-2009)
Block count:              39133184
Reserved block count:     1956659
Free blocks:              37661316
First block:              0
Block size:               4096
# bc
39133184*4096/1024/1024/1024
149

It looks like something shrunk the lv without first resizing the filesystem?
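The mismatch above can be double-checked with plain shell arithmetic (same numbers as the bc session; nothing here is new data):

```shell
# The LV is 20480000 1K blocks; the superblock claims 39133184 4K blocks.
dev_bytes=$((20480000 * 1024))   # size of the logical volume, ~19 GiB
fs_bytes=$((39133184 * 4096))    # size the filesystem thinks it has, ~149 GiB
echo "device: $((dev_bytes / 1024 / 1024 / 1024)) GiB"
echo "fs:     $((fs_bytes / 1024 / 1024 / 1024)) GiB"
```

So the filesystem believes it is roughly 7.5x larger than the block device underneath it, which is exactly the "shrunk the lv without resizing the fs first" symptom.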
From the logs of the system I see:

Running... ['dumpe2fs', '-h', '/dev/mapper/vg_brutus-lv_root']
...
Block count:              39133184
Reserved block count:     1956659
Free blocks:              37661316
Free inodes:              9675499
First block:              0
Block size:               4096
...

and then:

Running... ['resize2fs', '-P', '/dev/mapper/vg_brutus-lv_root']
Estimated minimum size of the filesystem: 940881
resize2fs 1.41.4 (27-Jan-2009)

This gets us the estimated minimum size of the fs (940881 blocks). I don't know if that second "resize2fs 1.41.4 (27-Jan-2009)" is resize2fs actually running? It seems out of order.

But later I see:

Running... ['lvm', 'lvresize', '-L', '20000m', 'vg_brutus/lv_root']
Reducing logical volume lv_root to 19.53 GB
Logical volume lv_root successfully resized

I don't actually see the result of a resize2fs which did the actual resize, anywhere.
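For comparison, a sketch of the ordering a shrink would need to be safe. This is not anaconda's actual code path, just the manual equivalent; the device name and target size are taken from the log above, and the commands are only echoed here since running them on a live system would be destructive:

```shell
# Illustration only: when shrinking, the filesystem must be reduced BEFORE
# the LV, otherwise the LV ends up smaller than the fs (as the logs suggest
# happened here, where lvresize ran with no matching resize2fs).
DEV=/dev/mapper/vg_brutus-lv_root
TARGET=20000M
steps="e2fsck resize2fs lvresize"
echo "1. e2fsck -f $DEV              # check before resizing"
echo "2. resize2fs $DEV $TARGET      # shrink the filesystem first"
echo "3. lvresize -L $TARGET vg_brutus/lv_root   # then shrink the LV"
```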
Created attachment 346039 [details] Attached traceback automatically from anaconda.
I got this problem when I was just formatting existing partitions without resizing anything. If I choose not to format the partitions, then I don't get this problem.
Andrew got:

Running... ['e2fsck', '-f', '-p', '-C', '0', '/dev/mapper/vgmp01-root4']
VGMP01_ROOT4: The filesystem size (according to the superblock) is 10907648 blocks
The physical size of the device is 10485760 blocks
Either the superblock or the partition table is likely to be corrupt!

so here again, fs larger than block device... by about 1.6G.

Toward the end of his attached logs I do see:

[2009-05-31 18:31:50,299] INFO: executing action: Resize Format (Shrink) ext3 on vgmp01-root4 (lvmlv)

Dunno what's going on here. I don't see any explicit calls to resize2fs (other than -P to get the minimum, though I wonder why that's done?) or to lvresize (or to mkfs, for that matter).

Andrew, what is the size of your existing device that had the problem, I guess /dev/mapper/vgmp01-root4? I wonder, if you manually mkfs.ext4 that and fsck it, does it come up with a problem?

-Eric
I'm not sure why the install is even looking at vgmp01-root4; I wasn't using that logical volume at all. I was using vgmp01-root2 and /dev/sdb6 only for the install.

From /proc/partitions, the physical device that vgmp01-root4 is on is 976762584 blocks, whereas vgmp01-root4 is 41943040 (these are 1K blocks, no?) = 10485760 of e2fsck's 4K blocks?

# tune2fs /dev/mapper/vgmp01-root4 -l
tune2fs 1.41.4 (27-Jan-2009)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          422e0390-6dff-4c1f-b27b-15e537c39619
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype sparse_super large_file
Filesystem flags:         signed_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              2621440
Block count:              10485760
Reserved block count:     524288
Free blocks:              10276173
Free inodes:              2621429
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      1021
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Filesystem created:       Sun May 31 22:47:45 2009
Last mount time:          Mon Jun  1 11:41:40 2009
Last write time:          Mon Jun  1 12:27:27 2009
Mount count:              0
Maximum mount count:      26
Last checked:             Mon Jun  1 12:27:27 2009
Check interval:           15552000 (6 months)
Next check after:         Sat Nov 28 11:27:27 2009
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      1cead13e-7b4b-47b8-8527-ba0f856249df
Journal backup:           inode blocks

I'd chosen ext3 for both those partitions, and before I re-tried the install (without formatting during the installation) I had done a mkfs.ext on vgmp01-root4.
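The unit conversion in that comment checks out: 41943040 1K blocks and 10485760 4K blocks describe the same device. A quick sanity check with shell arithmetic (numbers copied from the comment above):

```shell
# 41943040 1K blocks vs 10485760 4K blocks: same number of bytes, 40 GiB.
kib_blocks=41943040   # from /proc/partitions (1 KiB units)
fs_blocks=10485760    # from tune2fs "Block count" (4 KiB units)
[ $((kib_blocks * 1024)) -eq $((fs_blocks * 4096)) ] && echo "sizes match"
echo "$((kib_blocks * 1024 / 1073741824)) GiB"
```

So the tune2fs output agrees with the device size, consistent with the later report that a fresh fsck found no problem.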
fsck now reports no problem:

# fsck -f -p -C /dev/mapper/vgmp01-root4
fsck 1.41.4 (27-Jan-2009)
/dev/mapper/vgmp01-root4: 11/2621440 files (0.0% non-contiguous), 209587/10485760 blocks

It borks at -C 0 (with a help page), but not at -C.
I wonder if it's possible that there was preexisting corruption on the device? But in that case I'm still not sure why anaconda was fscking it if you weren't "using" it or otherwise modifying anything on it .... (and, um, weird, -C should require an argument, looks like you've found a bug in fsck... in any case, anaconda was running e2fsck, not just fsck)
Retested using anaconda-11.5.0.59 (F-11-RC4) and no longer seeing this failure. Closing this bug.
This bug appears to have been reported against 'rawhide' during the Fedora 11 development cycle. Changing version to '11'. More information and reason for this action is here: http://fedoraproject.org/wiki/BugZappers/HouseKeeping
Created attachment 347933 [details] Attached traceback automatically from anaconda.