Bug 160612
| Summary: | resize2fs wants too many credits, can't resize logical volume | | |
|---|---|---|---|
| Product: | [Fedora] Fedora | Reporter: | Justin Conover <justin.conover> |
| Component: | kernel | Assignee: | Eric Sandeen <esandeen> |
| Status: | CLOSED DUPLICATE | QA Contact: | |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | rawhide | CC: | bmr, hoffmann, k.georgiou, mike.fleetwood, oskari, ralston, sct, urkle, zing |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | NeedsRetesting | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2007-10-02 22:53:54 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description (Justin Conover, 2005-06-16 03:59:16 UTC)
I blew out the box and installed FC4, same results:

```
ext2online: ext2_ioctl: No space left on device
ext2online: unable to resize /dev/mapper/VolGroup00-LogVol01
JBD: ext2online wants too many credits (2050 > 2048)
```

Odd, I just extended /usr by another 2 GB! Could there be a size-limit problem happening? I know that before, I was doing this on /home and it was pretty large, 40-80 GB; my home on this install today started at 40 GB and I tried to increase it by another 5 GB. /usr was 8 GB and I just extended it to 10 GB.

```
# tune2fs -l /dev/VolGroup00/LogVol01
tune2fs 1.37 (21-Mar-2005)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          3335579e-265f-4356-8eb4-8be804474449
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              11239424
Block count:              11239424
Reserved block count:     561962
Free blocks:              7952044
Free inodes:              11232257
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      1021
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         32768
Inode blocks per group:   1024
Filesystem created:       Thu Jun 16 15:26:40 2005
Last mount time:          Fri Jun 17 17:33:38 2005
Last write time:          Fri Jun 17 17:33:38 2005
Mount count:              9
Maximum mount count:      -1
Last checked:             Thu Jun 16 15:26:40 2005
Check interval:           0 (<none>)
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               128
Journal inode:            8
Default directory hash:   tea
Directory Hash Seed:      5319876d-5539-4021-8dd0-e4f50401f85d
Journal backup:           inode blocks
```

Reassigning for now: "JBD: ext2online wants too many credits (2050 > 2048)" is an ext3 kernel message, and we need to track that down before going any further.

From: Justin Conover <justin.conover> ...
This is the broken FC4 x86_64 box (forgot to include it):

```
# lvextend -L+2G /dev/VolGroup00/LogVol01
  /dev/cdrom: open failed: Read-only file system
  Incorrect metadata area header checksum
  Extending logical volume LogVol01 to 57.00 GB
  Logical volume LogVol01 successfully resized
[root@morpheus ~]# ext2online -d /dev/VolGroup00/LogVol01
ext2online v1.1.18 - 2001/03/18 for EXT2FS 0.5b
ext2_open
ext2_bcache_init
ext2_determine_itoffset
setting itoffset to +1027
ext2_get_reserved
Found 1021 blocks in s_reserved_gdt_blocks
<cut>
using itoffset of 1027
new block bitmap is at 0xab8401
new inode bitmap is at 0xab8402
new inode table is at 0xab8403-0xab8802
new group has 30717 free blocks
new group has 32768 free inodes (1024 blocks)
ext2_ioctl: ADD group 343
ext2online: ext2_ioctl: No space left on device
ext2online: unable to resize /dev/mapper/VolGroup00-LogVol01
JBD: ext2online wants too many credits (2050 > 2048)
# df -h /home
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol01   42G   27G   14G  67% /home
```

I rebuilt the box for another test, this time making home 80 GB; no problem resizing. I don't know if it is possible, but it appears, on 3 different boxes, 2 different arches, and 2 versions (FC4/rawhide), that if you create a 40 GB LV, you're stuck with a 40 GB LV. Is there any other test or debug info I can provide? At the moment, my two boxes that run rawhide only have 40 GB and 28 GB free in the VG, so I would have to do a re-install to test any more 40 GB theories. ;)

I just ran into this on an x86-64 box as well. My /home was a 40G LVM volume, and after I extended it by 24G, ext2online won't grow the filesystem beyond 42G. I get the same errors as Justin. I have used LVM and ext2online lots of times on other systems, and a few times on this system as well, to grow other filesystems without any problems (/srv went from 10G to 20G, for example). The filesystem I was trying to extend was created by the installer.
Manually creating a new 40G LV and ext3 filesystem and then enlarging it by 6G worked without problems. I am running FC4 with kernel 2.6.11-1.1369_FC4.

I'm having the same issue on an x86 system. Fresh install of FC4; I set my /home to 20GB and resized to 40GB (lvextend -L +20GB /dev/mapper/CoreOS-Home). ext2online added 10GB, but refused to go any higher and spits out the "wants too many credits (2052 > 2048)" line. Could this have to do with inode block sizes?

```
# dumpe2fs -h /dev/mapper/CoreOS-Home
Filesystem UUID:          d9a36266-bff3-4582-bb62-3743cd8f5c72
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              7962624
Block count:              7962624
Reserved block count:     398098
Free blocks:              898068
Free inodes:              5210931
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      1024
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         32768
Inode blocks per group:   1024
Filesystem created:       Sat Sep 17 16:14:37 2005
Last mount time:          Fri Sep 23 00:06:15 2005
Last write time:          Mon Sep 26 00:20:22 2005
Mount count:              6
Maximum mount count:      -1
Last checked:             Sat Sep 17 16:14:37 2005
Check interval:           0 (<none>)
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               128
Journal inode:            8
First orphan inode:       4980898
Default directory hash:   tea
Directory Hash Seed:      eb827e1a-d26d-4d52-91e7-400f218fd760
Journal backup:           inode blocks
```

Bug 166038 appears to be a dupe of this. It suggests that offline resizing solves this issue.

I see the same problem with very small filesystems. Anything smaller than about 50 MiB seems to suffer from this, as do larger filesystems when the block size is set explicitly with -b <size> during mke2fs.
In these cases the fs only has one block group; increasing the size until at least two block groups are present makes the problem go away. I don't know if this is related to the problem, though, as tiny filesystems created without -b containing as many as 3 block groups still exhibit the 'ext2online: unable to resize /path/to/lv' and 'JBD: ext2online wants too many credits (M > N)' error messages.

This happened to me as well; it was because the installer doesn't use a very large value (if any value at all) of "max-online-resize" as an extended option in what it passes to mke2fs. Offline resizing using resize2fs does work fine, though. Perhaps the only thing to be done would be to have the installer use a heuristic multiple of the current filesystem size as max-online-resize, or provide an option somewhere during installation to set it to a very large number for admins who know the filesystem will be part of an LVM set or similar.

The max-online-resize value is a red herring. By default, mke2fs allocates space so that the block group descriptor table can grow by a factor of 1024. For example, if you create a 512MB filesystem, it will be able to grow to 512GB. You can verify this by looking for the "Maximum filesystem blocks" line in the output from mke2fs.
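That reserved headroom can also be estimated after the fact from tune2fs/dumpe2fs output. Here is a rough shell sketch using the numbers from Justin's tune2fs dump above; the 32-bytes-per-descriptor figure and the formula are my assumptions about how the resize_inode reservation works, not something stated in this thread:

```shell
# Estimate how far the fs can still grow online, from superblock fields
# (sketch; assumes 32-byte group descriptors, the ext2/ext3 default).
block_size=4096            # "Block size:" from tune2fs -l
blocks_per_group=32768     # "Blocks per group:"
reserved_gdt_blocks=1021   # "Reserved GDT blocks:"
descs_per_block=$(( block_size / 32 ))                      # 128
extra_groups=$(( reserved_gdt_blocks * descs_per_block ))
extra_blocks=$(( extra_groups * blocks_per_group ))
# 262144 is the number of 4K blocks in one GiB:
echo "room for $extra_groups more block groups, about $(( extra_blocks / 262144 )) GiB"
```

For this 43 GB filesystem the reservation allows roughly 16 TiB of total growth, which is why max-online-resize is not the limiting factor here.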
I can reproduce this problem reliably by trying to extend a 512MB filesystem that was created with a 4K blocksize:

```
$ lvcreate -L 512M -n test os
  Logical volume "test" created
$ mke2fs -q -j -b 4096 -O dir_index /dev/os/test
$ mount /dev/os/test /mnt
$ lvresize -L 1G /dev/os/test
  Extending logical volume test to 1.00 GB
  Logical volume test successfully resized
$ ext2online -C 0 /dev/os/test
ext2online v1.1.18 - 2001/03/18 for EXT2FS 0.5b
ext2online: ext2_ioctl: No space left on device
ext2online: unable to resize /dev/mapper/os-test
```

If, however, I use a 1K blocksize (the default value for a 512MB filesystem), I don't encounter the problem:

```
$ lvcreate -L 512M -n test os
  Logical volume "test" created
$ mke2fs -q -j -b 1024 -O dir_index /dev/os/test
$ mount /dev/os/test /mnt
$ lvresize -L 1G /dev/os/test
  Extending logical volume test to 1.00 GB
  Logical volume test successfully resized
$ ext2online -C 0 /dev/os/test
ext2online v1.1.18 - 2001/03/18 for EXT2FS 0.5b
Added 524287/524287...
```

One workaround is to resize offline using resize2fs. However, in my tests, no matter how many times I successively extended the filesystem using resize2fs, ext2online still wouldn't work. In other words, it appears that if a filesystem is affected by this issue, you must *always* use resize2fs; ext2online will never work. Unfortunately, this issue is biting me, because I routinely create 512MB filesystems using a 4K blocksize (in anticipation of potentially extending them). I'm seeing this problem on RHEL4 (current with errata updates) and FC5 (also current with errata updates).

Thanks for that update, but it does not seem to be the same thing: the problem in this bug report is a resize aborting with "JBD: ext2online wants too many credits (2050 > 2048)", not with ENOSPC. But I'll have a hunt anyway; thanks. It would be helpful to have a separate bugzilla opened for the 512MB 4k blocksize problem.

No, it's the same problem.
The "ext2_ioctl: No space left on device" message is the error message that ext2online prints. Simultaneously, the kernel syslogs the "ext2online wants too many credits" error message:

```
May  1 14:48:44 example kernel: JBD: ext2online wants too many credits (1026 > 1024)
```

There's no problem with 512MB partitions using a 4K blocksize per se; I just used that as an example because trying to extend a 512MB partition that was created with a 4K blocksize reliably reproduces the problem.

Has there been any progress on this issue? (This is still broken under 2.6.17-1.2145_FC5.)

A new kernel update has been released (Version: 2.6.18-1.2200.fc5) based upon a new upstream kernel release. Please retest against this new kernel, as a large number of patches go into each upstream release, possibly including changes that may address this problem. This bug has been placed in NEEDINFO state. Due to the large volume of inactive bugs in bugzilla, if this bug is still in this state in two weeks' time, it will be closed. Should this bug still be relevant after this period, the reporter can reopen the bug at any time. Any other users on the Cc: list of this bug can request that the bug be reopened by adding a comment to the bug. In the last few updates, some users upgrading from FC4 to FC5 have reported that installing a kernel update has left their systems unbootable. If you have been affected by this problem, please check that you only have one version of device-mapper & lvm2 installed. See bug 207474 for further details. If this bug is a problem preventing you from installing the release this version is filed against, please see bug 169613. If this bug has been fixed, but you are now experiencing a different problem, please file a separate bug for the new problem. Thank you.

Hello, I also get exactly the same problem when trying to resize my root (/) filesystem from 30 GB to fill the LV just extended to 36 GB.
OS: FC5 w/ all fixes
Hardware: x86
RPMs: kernel-2.6.18-1.2200.fc5, e2fsprogs-1.38-12

```
[root@mfleetwo3 ~]# lvextend -l +220 /dev/VolGroup00/LogVol00
  Extending logical volume LogVol00 to 36.16 GB
  Logical volume LogVol00 successfully resized
[root@mfleetwo3 ~]# ext2online -d /
ext2online v1.1.18 - 2001/03/18 for EXT2FS 0.5b
ext2_open
ext2_bcache_init
ext2_determine_itoffset
setting itoffset to +1029
ext2_get_reserved
Found 1024 blocks in s_reserved_gdt_blocks
243 old groups, 2 blocks
290 new groups, 3 blocks
checking for group block 32771 in Bond
checking for group block 98307 in Bond
checking for group block 163843 in Bond
checking for group block 229379 in Bond
checking for group block 294915 in Bond
checking for group block 819203 in Bond
checking for group block 884739 in Bond
checking for group block 1605635 in Bond
checking for group block 2654211 in Bond
checking for group block 4096003 in Bond
ext2_ioctl: EXTEND group to 7962624 blocks
using itoffset of 1029
new block bitmap is at 0x798403
new inode bitmap is at 0x798404
new inode table is at 0x798405-0x798801
new group has 30718 free blocks
new group has 32672 free inodes (1021 blocks)
ext2_ioctl: ADD group 243
ext2online: ext2_ioctl: No space left on device
ext2online: unable to resize /dev/mapper/VolGroup00-LogVol00
[root@mfleetwo3 ~]# tune2fs -l /dev/mapper/VolGroup00-LogVol00
tune2fs 1.38 (30-Jun-2005)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          ede79e0e-571b-4464-abf3-569802325f16
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              7939296
Block count:              7962624
Reserved block count:     396899
Free blocks:              1014244
Free inodes:              7508316
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      1024
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         32672
Inode blocks per group:   1021
Filesystem created:       Mon Aug  1 11:27:18 2005
Last mount time:          Wed Oct 18 09:11:05 2006
Last write time:          Sat Nov  4 15:55:40 2006
Mount count:              16
Maximum mount count:      -1
Last checked:             Thu Sep  8 13:43:47 2005
Check interval:           0 (<none>)
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               128
Journal inode:            8
First orphan inode:       426687
Default directory hash:   tea
Directory Hash Seed:      fef3f985-9ff9-407e-b96f-59fdc644e6eb
Journal backup:           inode blocks
```

At the same time, this JBD error appears in the system log:

```
kernel: JBD: ext2online wants too many credits (2049 > 2048)
```

Thanks, Mike

Echoing what Mike said in comment 17: 2.6.18-1.2200.fc5 is still broken; it fails in exactly the same way. In FC6, ext2online has been deprecated in favor of resize2fs, which has been enhanced to permit increasing the size of online volumes. Using resize2fs under FC6, I do *not* experience any problems. So, in terms of Fedora Core <5, this problem moves to CLOSED CURRENTRELEASE. But in terms of RHEL4, this problem still exists, as RHEL5 (which should resolve this issue, since it is largely based on FC6) hasn't been released yet. Justin, please consider changing the Product/Version fields to RHEL and 4 (respectively)...

Clearing needinfo; latest FC5 still has this problem, per comment #18.

I run FC6 and rawhide so I can't confirm this anymore ;)

I can no longer re-create this on FC6 either (and have been resizing to my heart's content lately).

I also haven't had this issue on my newly set up RHEL4 U4 system, and have resized small 15G partitions upwards to 30-75G without running into this problem, whereas when I set up my FC4 system with 20G partitions I ran into this issue trying to resize to 30G.

Edward, were you *ever* able to reproduce this problem on FC6?
Because FC6 doesn't have ext2online, and I don't think anyone has been able to reproduce this problem with resize2fs. Did you mean FC5 instead? Also, can you reproduce the problem using the recipe I posted in comment 12?

Your recipe causes the error on both my FC5 and my RHEL4 system still :( However, my existing volumes on my FC5 systems, which HAD the issue when I tried to expand them from 20G to >30G, no longer have that issue, and I have been able to resize online with no issues (currently at 76G and 63G).

After reading comment 18 from James, that FC6 has a newer version of e2fsprogs in which resize2fs has online capabilities, I decided to recompile e2fsprogs-1.39 from FC6 on my FC5 box and try it. Unfortunately, online resizing of root (/) still fails in the same way using resize2fs instead of ext2online.

```
[root@mfleetwo3 ~]# resize2fs /dev/mapper/VolGroup00-LogVol00
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/mapper/VolGroup00-LogVol00 is mounted on /; on-line resizing required
Performing an on-line resize of /dev/mapper/VolGroup00-LogVol00 to 9478144 (4k) blocks.
resize2fs: No space left on device While trying to add group #243
```

And the same error appears in the system log:

```
kernel: JBD: ext2online wants too many credits (2049 > 2048)
```

Mike

Mike's comment 24 prompted me to go test this again, and it turns out I *can* reproduce this problem on FC6:

```
$ rpm -q --qf '%{name}-%{version}-%{release}.%{arch}\n' e2fsprogs e2fsprogs-libs kernel
e2fsprogs-1.39-7.fc6.x86_64
e2fsprogs-libs-1.39-7.fc6.i386
e2fsprogs-libs-1.39-7.fc6.x86_64
kernel-2.6.18-1.2869.fc6.x86_64
$ lvcreate -L 512M -n test os
  Logical volume "test" created
$ mke2fs -q -j -b 4096 -O dir_index /dev/os/test
$ mount /dev/os/test /mnt
$ lvresize -L 1G /dev/os/test
  Extending logical volume test to 1.00 GB
  Logical volume test successfully resized
$ resize2fs -p /dev/os/test
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/os/test is mounted on /mnt; on-line resizing required
Performing an on-line resize of /dev/os/test to 262144 (4k) blocks.
resize2fs: No space left on device While trying to add group #4
```

Simultaneously, this message is syslogged:

```
Jan  4 12:54:41 example kernel: JBD: resize2fs wants too many credits (1026 > 1024)
```

I'm not sure if I was running a faulty test when I claimed in comment 18 that I could no longer reproduce this problem, or if the problem truly did go away for a while, or if there's an additional dependency that we haven't figured out yet. But regardless, this problem still exists.

This isn't the same issue, but it might be a clue as to what's happening with the resizing problems: http://osdir.com/ml/linux-ext4@vger.kernel.org/msg00592.html

I finally figured this out: the problem is that the journal isn't large enough to allow online resizing. Apparently, this can happen if the original filesystem was created with a very small size: http://lists.openwall.net/linux-ext4/2007/05/07/26

The good news is that this problem is easy to fix; the bad news is that you have to unmount the filesystem to do it.
But that's not too awfully inconvenient, because if the filesystem's journal is too small to permit online resizing, then you're going to have to unmount it to resize it anyway. So, after you run resize2fs, just use tune2fs to delete and then recreate the journal. E.g.:

```
$ e2fsck -C 0 /dev/os/test
e2fsck 1.40.2 (12-Jul-2007)
/dev/os/test: clean, 11/524288 files, 24822/524288 blocks
$ tune2fs -O ^has_journal /dev/os/test
tune2fs 1.40.2 (12-Jul-2007)
$ tune2fs -j /dev/os/test
tune2fs 1.40.2 (12-Jul-2007)
Creating journal inode: done
This filesystem will be automatically checked every 33 mounts or 180 days, whichever comes first.  Use tune2fs -c or -i to override.
$ e2fsck -C 0 /dev/os/test
e2fsck 1.40.2 (12-Jul-2007)
/dev/os/test: clean, 11/524288 files, 24822/524288 blocks
```

As long as the new size of the filesystem is greater than 512MB, the default journal size that tune2fs uses will be large enough to permit online resizing from that point forward. (If only the error message hadn't been so damn cryptic...)

As an aside, here's something interesting: resize2fs never changes the size of the journal, even if you run it in offline mode. I'm not sure that it matters (once a journal is "big enough", does its size matter beyond that?), but it might be something to keep in mind if you've resized a small filesystem many times.

One additional caveat: make *SURE* the filesystem was unmounted cleanly before you delete and then recreate the journal. If the filesystem is dirty and you delete the journal, Interesting Things will probably happen (where interesting = painful).

Ah, thanks. I've read both that post and this bug but didn't put them all together. :) (Trying to get the bug count down so it fits in my brain...) So, there's no real bug here, though a better message might be helpful? -Eric

Exactly. I'm not really sure I can throw stones at resize2fs about its error message, since the ioctl() it calls fails with ENOSPC, which is the error it reports.
But the error message the kernel logs could definitely be more helpful. No one but an ext3/ext4 developer is going to know what "resize2fs wants too many credits" means; something like "journal too small for online resize" would be a lot more helpful.

Although there are probably conditions other than resize which may trip that message in the kernel. -Eric

We can probably get a better error message out of resize2fs, though. -Eric

Update version & utility for this year's model. :)

Hi, my filesystem is still in the same situation described above in comment #17 and comment #24: initially created at 30G, but it won't expand to 36G. The machine is now running Fedora 7 with kernel-2.6.21-1.3194.fc7 and e2fsprogs-1.40.2-2.fc7. Some of the discussions I have seen mention that this fault occurs when starting with a very small initial filesystem, and thus with a very small journal, which prevents the filesystem from growing by orders of magnitude. I am only trying to grow my filesystem by a modest 20%. Trying again...

```
[root@mfleetwo3 ~]# resize2fs /dev/mapper/VolGroup00-LogVol00
resize2fs 1.40.2 (12-Jul-2007)
Filesystem at /dev/mapper/VolGroup00-LogVol00 is mounted on /; on-line resizing required
old desc_blocks = 2, new_desc_blocks = 3
Performing an on-line resize of /dev/mapper/VolGroup00-LogVol00 to 9478144 (4k) blocks.
resize2fs: No space left on device While trying to add group #243
```

This appears in the system log:

```
Sep 25 08:13:13 mfleetwo3 kernel: JBD: resize2fs wants too many credits (2049 > 2048)
```

Journal size:

```
[root@mfleetwo3 ~]# dumpe2fs -h /dev/mapper/VolGroup00-LogVol00 | grep 'Journal size'
dumpe2fs 1.40.2 (12-Jul-2007)
Journal size:             32M
```

Is my journal too small to allow the filesystem expansion? Thanks, Mike

Mike, yes, I think this does reflect a too-small journal. The calculations that go into the reservation are pretty convoluted, and I haven't yet tracked through them all for your geometry.
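Mike's numbers fit a simple back-of-the-envelope check. This sketch assumes jbd caps a single transaction at one quarter of the journal's block count (j_max_transaction_buffers); that ratio is my assumption, consistent with the figures quoted in this thread:

```shell
# Rough check of the jbd credit cap for a 32 MiB journal on a 4K-block fs.
block_size=4096
journal_mib=32                                        # "Journal size: 32M"
journal_blocks=$(( journal_mib * 1024 * 1024 / block_size ))
max_credits=$(( journal_blocks / 4 ))                 # assumed jbd cap
echo "journal: $journal_blocks blocks, credit cap: $max_credits"
# The failed resize asked for 2049 credits, one more than the cap, so the
# journal would need at least 2049 * 4 blocks:
echo "minimum journal: $(( 2049 * 4 )) blocks"
```

This reproduces the "(2049 > 2048)" message exactly: an 8192-block journal caps transactions at 2048 credits, one short of what this resize needs.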
But there are situations where it doesn't matter if the growth is modest; it has more to do simply with the existing geometry, not the newly grown geometry. I think that if you make a larger journal inode, you should be able to grow this filesystem. (You'll need at least 2049*4 blocks in the journal to get past this error; I'd probably just make it 64M or 128M. My never-grown 32G root filesystem has a 128M log.) I think this needs to be made more robust / intuitive in a few ways: if this fails, a helpful hint about journal size would be good. Also, I think there are cases where mkfs creates geometries which simply cannot be grown at all. -Eric

Also, as far as I can tell, whether the journal is large enough to support online resizing is not a function of the target (new) size; it depends solely on the size of the journal and the current size of the filesystem. Eric, I agree with your "more robust / intuitive" comment. The error messages could be more helpful, but I also think it's an error for mke2fs/tune2fs to create a journal that subsequently prohibits online resizing.

Thank you, Eric, for your response. I decided to do some archaeology and testing. Back in FC4 when I installed the box, circa August 2005, it came with e2fsprogs-1.37. At the time, mke2fs defaulted to creating a 32MB journal for any filesystem >= 1GB. Now F7 comes with e2fsprogs-1.39, and mke2fs defaults to creating a 128MB journal for any filesystem >= 4GB. This explains the difference between the sizes of the journals reported by Eric and me. (See e2fsprogs-1.39/misc/util.c:figure_journal_size() for the code. The figure is actually calculated in blocks, using a stepped scale based on the size of the filesystem; the above figures assume a 4K block size.)

So I got out the Fedora rescue CD and re-created the journal on the / (root) filesystem, following the instructions in James' comment #27 above. This re-created a 128MB journal. I then successfully expanded the / (root) filesystem online.
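The stepped scale Mike describes can be sketched as follows. This is my paraphrase of the e2fsprogs-1.39 default-journal-size logic, not code from this thread; thresholds are in filesystem blocks, and the MiB figures assume a 4K block size:

```shell
# Default journal size chosen by mke2fs/tune2fs (e2fsprogs-1.39 era),
# as a stepped function of the filesystem's block count (sketch).
fs_blocks=9478144          # e.g. Mike's ~36 GB root filesystem
if   [ "$fs_blocks" -lt 32768 ];   then j_blocks=1024    # 4 MiB
elif [ "$fs_blocks" -lt 262144 ];  then j_blocks=4096    # 16 MiB
elif [ "$fs_blocks" -lt 524288 ];  then j_blocks=8192    # 32 MiB
elif [ "$fs_blocks" -lt 1048576 ]; then j_blocks=16384   # 64 MiB
else                                    j_blocks=32768   # 128 MiB
fi
echo "default journal: $j_blocks blocks ($(( j_blocks * 4096 / 1048576 )) MiB)"
```

A 128 MiB journal (32768 blocks) would allow transactions of up to 8192 credits, comfortably above the 2049 this resize needed, which matches Mike's successful result after recreating the journal.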
```
[root@mfleetwo3 ~]# dumpe2fs -h /dev/mapper/VolGroup00-LogVol00 | grep 'Journal size'
dumpe2fs 1.40.2 (12-Jul-2007)
Journal size:             128M
[root@mfleetwo3 ~]# resize2fs /dev/mapper/VolGroup00-LogVol00
resize2fs 1.40.2 (12-Jul-2007)
Filesystem at /dev/mapper/VolGroup00-LogVol00 is mounted on /; on-line resizing required
old desc_blocks = 2, new_desc_blocks = 3
Performing an on-line resize of /dev/mapper/VolGroup00-LogVol00 to 9478144 (4k) blocks.
The filesystem on /dev/mapper/VolGroup00-LogVol00 is now 9478144 blocks long.
```

Thanks a lot, Mike

Great, glad the workaround... worked. James, I agree: it looks like there are some filesystems out there which simply can't be resized (re: comment #39, with the stock journal size, that is). Actually, I've got a different approach: a change to the kernel code to let even small journals resize. I've sent it to the ext list for comment. Going to dup this one over to a similar RHEL4 bug even though this one was filed first; I'll probably push this into RHEL4 & 5 if it passes muster upstream.

*** This bug has been marked as a duplicate of 166038 ***