Bug 492932
| Summary: | GFS2: umount hung after grow | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 5 | Reporter: | Nate Straz <nstraz> |
| Component: | kernel | Assignee: | Robert Peterson <rpeterso> |
| Status: | CLOSED DUPLICATE | QA Contact: | Cluster QE <mspqa-list> |
| Severity: | medium | Docs Contact: | |
| Priority: | low | | |
| Version: | 5.3 | CC: | cluster-maint, edamato, swhiteho |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2009-03-31 14:12:51 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Nate Straz 2009-03-30 20:01:34 UTC
Robert Peterson:
When you ran this, did you have the fix for bug #469773 on your system? Following the exact same commands, but using the latest and greatest gfs2 code (almost: the 2.6.18-137 kernel) compiled from the RHEL5 branch of the cluster git tree, I was not able to recreate the problem. Unmount said "/sbin/umount.gfs2: /mnt/gfs2: device is busy." until the dd ran out of space on the device; then the unmount worked properly. The gfs2_grow, however, did not work: it complained that it had grown too little, and maybe that is why I was able to unmount.

Nate Straz:
No, I did not have the fix for 469773, but I wasn't able to recreate that bug either. When I ran the test case, step #4 would fill the file system before I got to step #6. I was able to reproduce this yesterday while omitting step #4.

Robert Peterson:
This works for me, as long as I have the fix for bug #469773 on my system, with or without the dd. A stupid typing blunder caused the earlier problem I reported.

```
[root@roth-01 ../bob/cluster/gfs2/mkfs]# lvcreate -n grow -L 2G roth_vg
  Logical volume "grow" created
[root@roth-01 ../bob/cluster/gfs2/mkfs]# mkfs -t gfs2 -j 1 -p lock_nolock -O /dev/roth_vg/grow
Device:            /dev/roth_vg/grow
Blocksize:         4096
Device Size:       2.00 GB (524288 blocks)
Filesystem Size:   2.00 GB (524288 blocks)
Journals:          1
Resource Groups:   8
Locking Protocol:  "lock_nolock"
Lock Table:        ""
UUID:              8B632434-093F-879D-E081-B2DE8BCCBF68
[root@roth-01 ../bob/cluster/gfs2/mkfs]# mount -t gfs2 /dev/roth_vg/grow /mnt/gfs2
[root@roth-01 ../bob/cluster/gfs2/mkfs]# lvextend -L +2G /dev/roth_vg/grow
  Extending logical volume grow to 4.00 GB
  Logical volume grow successfully resized
[root@roth-01 ../bob/cluster/gfs2/mkfs]# gfs2_grow /mnt/gfs2
FS: Mount Point:  /mnt/gfs2
FS: Device:       /dev/mapper/roth_vg-grow
FS: Size:         524288 (0x80000)
FS: RG size:      65533 (0xfffd)
DEV: Size:        1048576 (0x100000)
The file system grew by 2048MB.
gfs2_grow complete.
[root@roth-01 ../bob/cluster/gfs2/mkfs]# umount /mnt/gfs2
[root@roth-01 ../bob/cluster/gfs2/mkfs]# lvremove -f /dev/roth_vg/grow
  Logical volume "grow" successfully removed
```

Robert Peterson:
I'm going to close this as a duplicate of bug #469773, although some of your comments indicate you may have also run into bug #490649. The summary of #469773 mentions block sizes other than 4K, but if the file system is small enough, the same miscalculations can cause the problem on small file systems too; I'll append a comment to that effect to that bz. I know the fix for bug #469773 has not made it into a build yet, but perhaps we can get one built for testing so you can verify it fixes the problem. If it doesn't, feel free to re-open the bz.

*** This bug has been marked as a duplicate of bug 469773 ***
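Editor's note: for retesting once a build containing the #469773 fix is available, the transcript above reduces to the sequence below. This is a minimal sketch, not part of the original report: it reuses the volume group (roth_vg) and mount point (/mnt/gfs2) from the transcript, and the dd fill (step #4 of the test case) is left optional since the hang reportedly reproduced both with and without it.

```sh
#!/bin/sh
# Reproduction sketch based on the transcript above; the volume group,
# sizes, and mount point come from that run and may need adjusting.
set -e

lvcreate -n grow -L 2G roth_vg
mkfs -t gfs2 -j 1 -p lock_nolock -O /dev/roth_vg/grow  # 524288 x 4096-byte blocks = 2 GB
mount -t gfs2 /dev/roth_vg/grow /mnt/gfs2

# Optional step #4 of the test case: fill the file system in the background.
# dd if=/dev/zero of=/mnt/gfs2/fill bs=1M &

lvextend -L +2G /dev/roth_vg/grow  # device grows to 1048576 blocks = 4 GB
gfs2_grow /mnt/gfs2                # adds 524288 blocks * 4096 B = 2048 MB

umount /mnt/gfs2                   # hung here before the fix for bug #469773
lvremove -f /dev/roth_vg/grow
```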