+++ This bug was initially created as a clone of Bug #481041 +++

Description of problem:
When statfs_fast is set on a GFS mount point, the system does not notice
when the file system size has changed with gfs_grow.

Version-Release number of selected component (if applicable):
kernel-2.6.18-128.el5
kmod-gfs-0.1.31-3.el5
gfs-utils-0.1.18-1.el5

How reproducible:
Every time

Steps to Reproduce:
1. mkfs, mount a GFS file system
2. gfs_tool settune /mnt/foo statfs_fast 1
3. df /mnt/foo
4. increase the size of the file system (lvresize)
5. gfs_grow /mnt/foo
6. df /mnt/foo, compare with step 3

Actual results:
[root@dash-02 ~]# lvcreate -n one -L 5G growfs
  Logical volume "one" created
[root@dash-02 ~]# mkfs -t gfs -p lock_dlm -t dash:one -j 3 /dev/growfs/one
This will destroy any data on /dev/growfs/one.
  It appears to contain a gfs filesystem.

Are you sure you want to proceed? [y/n] y

Device:                    /dev/growfs/one
Blocksize:                 4096
Filesystem Size:           1212292
Journals:                  3
Resource Groups:           20
Locking Protocol:          lock_dlm
Lock Table:                dash:one

Syncing...
All Done
[root@dash-02 ~]# mount /dev/growfs/one /mnt/one
Trying to join cluster "lock_dlm", "dash:one"
Joined cluster. Now mounting FS...
GFS: fsid=dash:one.1: jid=1: Trying to acquire journal lock...
GFS: fsid=dash:one.1: jid=1: Looking at journal...
GFS: fsid=dash:one.1: jid=1: Done
[root@dash-02 ~]# df -h /mnt/one
Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/growfs-one  4.7G   20K  4.7G   1% /mnt/one
[root@dash-02 ~]# gfs_tool settune /mnt/one statfs_fast 1
GFS: fsid=dash:one.1: fast statfs start time = 1232572768

## filled the file system with dd

[root@dash-01 ~]# df /mnt/one
Filesystem             1K-blocks    Used Available Use% Mounted on
/dev/mapper/growfs-one   4849168 4842988      6180 100% /mnt/one
[root@dash-01 ~]# lvresize -L +5G /dev/growfs/one
  Extending logical volume one to 10.00 GB
  Logical volume one successfully resized
[root@dash-01 ~]# gfs_grow /mnt/one
FS: Mount Point: /mnt/one
FS: Device: /dev/mapper/growfs-one
FS: Options: rw,hostdata=jid=0:id=1507329:first=1
FS: Size: 1310720
DEV: Size: 2621440
Preparing to write new FS information...
Done.
[root@dash-01 ~]# df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/growfs-one  4.7G  4.7G  6.1M 100% /mnt/one
[root@dash-01 ~]# lvs
  LV       VG         Attr   LSize  Origin Snap% Move Log Copy% Convert
  LogVol00 VolGroup00 -wi-ao 67.56G
  LogVol01 VolGroup00 -wi-ao  6.81G
  one      growfs     -wi-ao 10.00G

## Add more to the file system

[root@dash-03 ~]# dd if=/dev/zero of=/mnt/one/`uname -n`-2 bs=1M
dd: writing `/mnt/one/dash-03-2': No space left on device
1727+0 records in
1726+0 records out
1809842176 bytes (1.8 GB) copied, 14.1452 seconds, 128 MB/s
[root@dash-03 ~]# df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/growfs-one  4.7G  4.7G     0 100% /mnt/one
[root@dash-03 ~]# ls -lh /mnt/one
total 9.7G
-rw-r--r-- 1 root root 1.6G Jan 21 15:20 dash-01
-rw-r--r-- 1 root root 1.6G Jan 21 15:26 dash-01-2
-rw-r--r-- 1 root root 1.7G Jan 21 15:20 dash-02
-rw-r--r-- 1 root root 1.8G Jan 21 15:26 dash-02-2
-rw-r--r-- 1 root root 1.4G Jan 21 15:20 dash-03
-rw-r--r-- 1 root root 1.7G Jan 21 15:26 dash-03-2
[root@dash-03 ~]# df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/growfs-one  4.7G  4.7G     0 100% /mnt/one
[root@dash-01 ~]# du -sh /mnt/one
9.7G    /mnt/one

Expected results:
The df output should show the new file system size on all nodes immediately
after gfs_grow completes.

Additional info:
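For what it's worth, a way to confirm that gfs_grow really did extend the
on-disk file system (and that only the fast-statfs counters are stale) may be
to compare the regular df output against gfs_tool's own space report, which
reads the resource group data rather than the cached statfs counters. I have
not run this on the setup above, so treat the exact behavior as an assumption:

    # df reports the (stale) fast-statfs counters:
    df /mnt/one
    # gfs_tool's df subcommand walks the resource groups and may already
    # show the grown size even while plain df does not:
    gfs_tool df /mnt/one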
I hit this during RHEL 4.8 testing with the following packages:
kernel-2.6.9-80.EL
GFS-kernel-2.6.9-81.2.el4
GFS-6.1.18-1
There is a simple workaround that I believe works, but I have not tested it myself: turn statfs_fast off on all nodes, then turn it back on again. Once that is done, the df information should be properly synced with the new file system size from gfs_grow. A sketch of the commands is below.
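Something like the following on each node that has the file system mounted
(untested; the mount point is taken from the reproducer above):

    # turn fast statfs off, then back on, on every node
    gfs_tool settune /mnt/one statfs_fast 0
    gfs_tool settune /mnt/one statfs_fast 1
    # re-enabling should force GFS to re-read the master statfs data,
    # which now includes the blocks added by gfs_grow
    df /mnt/one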
Bumping to 4.9.
This will hopefully be fixed by my final patch for bug #488318, which I'm working on now.
As predicted, my patch for bug #488318 fixes this problem, so I'm closing this one as a duplicate of that, even though this one was opened first.

*** This bug has been marked as a duplicate of bug 488318 ***