Description of problem:
Running 'gfs2_grow -T' on a CLVM LV says that it will test, but actually grows the FS.

Version-Release number of selected component (if applicable):
cman-2.0.115-68.el5_6.3
gfs2-utils-0.1.62-28.el5

How reproducible:
Only tested once, uncertain.

Steps to Reproduce:
1. Use 'lvextend' to extend a clustered LV.
2. Use 'gfs2_grow -T <args>' to test.
3. Use 'df -h' to see that the FS is already grown, despite the '-T'est.

Actual results:
Grew the file system.

Expected results:
Not to grow the file system.

Additional info:
[root@xenmaster003 ~]# lvextend -L +50G /dev/drbd_sh1_vg0/cluster_files /dev/drbd3
  Extending logical volume cluster_files to 250.00 GB
  Logical volume cluster_files successfully resized
[root@xenmaster003 ~]# gfs2_grow -T /dev/drbd_sh1_vg0/cluster_files /cluster_files/
(Test mode--File system will not be changed)
FS: Mount Point: /cluster_files
FS: Device:      /dev/mapper/drbd_sh1_vg0-cluster_files
FS: Size:        52428798 (0x31ffffe)
FS: RG size:     65535 (0xffff)
DEV: Size:       65536000 (0x3e80000)
The file system grew by 51200MB.
FS: Mount Point: /cluster_files
FS: Device:      /dev/mapper/drbd_sh1_vg0-cluster_files
FS: Size:        52428798 (0x31ffffe)
FS: RG size:     65535 (0xffff)
DEV: Size:       65536000 (0x3e80000)
The file system grew by 51200MB.
gfs2_grow complete.
[root@xenmaster003 ~]# gfs2_grow /dev/drbd_sh1_vg0/cluster_files /cluster_files/
FS: Mount Point: /cluster_files
FS: Device:      /dev/mapper/drbd_sh1_vg0-cluster_files
FS: Size:        52428798 (0x31ffffe)
FS: RG size:     65535 (0xffff)
DEV: Size:       65536000 (0x3e80000)
The file system grew by 51200MB.
Error: The device has grown by less than one Resource Group (RG).
The device grew by 0MB. One RG is 255MB for this file system.
gfs2_grow complete.
[root@xenmaster003 ~]# df -h
Filesystem                              Size  Used Avail Use% Mounted on
/dev/md2                                 57G  2.7G   51G   6% /
/dev/md0                                251M   52M  187M  22% /boot
tmpfs                                   7.7G     0  7.7G   0% /dev/shm
/dev/mapper/drbd_sh0_vg0-xen_shared      56G  259M   56G   1% /xen_shared
/dev/mapper/drbd_sh1_vg0-cluster_files  250G  145G  106G  58% /cluster_files
Did you run the 'df -h' after the initial 'gfs2_grow -T', or only after the second (real) grow command? It looks like the second command (the real one) did the right thing. It is odd that the first (test) command printed the size information twice; I'm not sure why. I would be quite surprised if gfs2_grow managed to grow the file system while in test mode, since in that case it opens all of its files read-only for safety. If you are able to reproduce the problem, can you get an strace of the 'gfs2_grow -T' command? A 'df -h' both before and after the 'gfs2_grow -T' would also be helpful.
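For reference, one way to capture the requested trace and check it for read-write opens of the device (the strace invocation in the comment mirrors the paths from the report; the grep filter below is demonstrated against a synthetic trace line so it can be run anywhere):

```shell
# Hypothetical capture of the test-mode run, as requested above:
#   strace -f -o /tmp/gfs2_grow-T.trace gfs2_grow -T /dev/drbd_sh1_vg0/cluster_files /cluster_files/
# In -T mode the trace should show the device opened O_RDONLY only.
# Demonstrated here against a synthetic trace line:
printf 'open("/dev/mapper/drbd_sh1_vg0-cluster_files", O_RDONLY) = 4\n' > /tmp/sample.trace
if ! grep -q 'O_RDWR\|O_WRONLY' /tmp/sample.trace; then
    echo "no read-write opens of the device"
fi
```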
I tried and was unable to recreate the problem:

[root@roth-01 ../gfs2/mkfs]# rpm -q gfs2-utils
gfs2-utils-0.1.62-28.el5
[root@roth-01 ../gfs2/mkfs]# lvresize -L50G /dev/roth_vg/roth_lv
  WARNING: Reducing active logical volume to 50.00 GB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce roth_lv? [y/n]: y
  Reducing logical volume roth_lv to 50.00 GB
  Logical volume roth_lv successfully resized
[root@roth-01 ../gfs2/mkfs]# mkfs.gfs2 -O -t bobs_roth:roth_lv -p lock_dlm -j 4 /dev/roth_vg/roth_lv
Device:                    /dev/roth_vg/roth_lv
Blocksize:                 4096
Device Size                50.00 GB (13107200 blocks)
Filesystem Size:           50.00 GB (13107198 blocks)
Journals:                  4
Resource Groups:           200
Locking Protocol:          "lock_dlm"
Lock Table:                "bobs_roth:roth_lv"
UUID:                      4B910756-5C6D-F16A-706D-1C4A9B9F101D
[root@roth-01 ../gfs2/mkfs]# mount -tgfs2 /dev/roth_vg/roth_lv /mnt/gfs2
[root@roth-01 ../gfs2/mkfs]# lvextend -L +50G /dev/roth_vg/roth_lv
  Extending logical volume roth_lv to 100.00 GB
  Logical volume roth_lv successfully resized
[root@roth-01 ../gfs2/mkfs]# gfs2_grow -T /dev/roth_vg/roth_lv
(Test mode--File system will not be changed)
FS: Mount Point: /mnt/gfs2
FS: Device:      /dev/mapper/roth_vg-roth_lv
FS: Size:        13107198 (0xc7fffe)
FS: RG size:     65535 (0xffff)
DEV: Size:       26214400 (0x1900000)
The file system grew by 51200MB.
gfs2_grow complete.
[root@roth-01 ../gfs2/mkfs]# gfs2_grow /dev/roth_vg/roth_lv
FS: Mount Point: /mnt/gfs2
FS: Device:      /dev/mapper/roth_vg-roth_lv
FS: Size:        13107198 (0xc7fffe)
FS: RG size:     65535 (0xffff)
DEV: Size:       26214400 (0x1900000)
The file system grew by 51200MB.
gfs2_grow complete.
[root@roth-01 ../gfs2/mkfs]#

The first gfs2_grow gave two reports of the file system size, and the only way that should be possible is if two file systems were specified on one gfs2_grow command line. However, once -T mode is specified, all the specified file systems are treated the same way: as read-only. So even with the file system specified twice, the problem did not recreate.
Here's what I get when I specify the same file system twice:

[root@roth-01 ../gfs2/mkfs]# umount /mnt/gfs2
[root@roth-01 ../gfs2/mkfs]# lvresize -L50G /dev/roth_vg/roth_lv
  WARNING: Reducing active logical volume to 50.00 GB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce roth_lv? [y/n]: y
  Reducing logical volume roth_lv to 50.00 GB
  Logical volume roth_lv successfully resized
[root@roth-01 ../gfs2/mkfs]# mkfs.gfs2 -O -t bobs_roth:roth_lv -p lock_dlm -j 4 /dev/roth_vg/roth_lv
Device:                    /dev/roth_vg/roth_lv
Blocksize:                 4096
Device Size                50.00 GB (13107200 blocks)
Filesystem Size:           50.00 GB (13107198 blocks)
Journals:                  4
Resource Groups:           200
Locking Protocol:          "lock_dlm"
Lock Table:                "bobs_roth:roth_lv"
UUID:                      47739572-5BD8-E05F-8B1F-F913346D0FF1
[root@roth-01 ../gfs2/mkfs]# mount -tgfs2 /dev/roth_vg/roth_lv /mnt/gfs2
[root@roth-01 ../gfs2/mkfs]# lvextend -L +50G /dev/roth_vg/roth_lv
  Extending logical volume roth_lv to 100.00 GB
  Logical volume roth_lv successfully resized
[root@roth-01 ../gfs2/mkfs]# gfs2_grow -T /dev/roth_vg/roth_lv /dev/roth_vg/roth_lv
(Test mode--File system will not be changed)
FS: Mount Point: /mnt/gfs2
FS: Device:      /dev/mapper/roth_vg-roth_lv
FS: Size:        13107198 (0xc7fffe)
FS: RG size:     65535 (0xffff)
DEV: Size:       26214400 (0x1900000)
The file system grew by 51200MB.
FS: Mount Point: /mnt/gfs2
FS: Device:      /dev/mapper/roth_vg-roth_lv
FS: Size:        13107198 (0xc7fffe)
FS: RG size:     65535 (0xffff)
DEV: Size:       26214400 (0x1900000)
The file system grew by 51200MB.
gfs2_grow complete.
[root@roth-01 ../gfs2/mkfs]# gfs2_grow /dev/roth_vg/roth_lv
FS: Mount Point: /mnt/gfs2
FS: Device:      /dev/mapper/roth_vg-roth_lv
FS: Size:        13107198 (0xc7fffe)
FS: RG size:     65535 (0xffff)
DEV: Size:       26214400 (0x1900000)
The file system grew by 51200MB.
gfs2_grow complete.
[root@roth-01 ../gfs2/mkfs]#

When -T is specified, the device is opened O_RDONLY, so writes to it should be impossible. The code looks correct, so at this point I'm baffled. How did you do that?
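The read-only reasoning above can be sketched in the shell, using an ordinary temp file in place of the LV device (the file is just a stand-in; only the file-descriptor semantics matter):

```shell
# Minimal sketch: a write through a file descriptor opened read-only fails,
# which is why a -T (test mode) run should be unable to modify the device.
tmp=$(mktemp)
exec 3< "$tmp"                  # open read-only, as gfs2_grow -T opens the device
if ! echo "data" >&3 2>/dev/null; then
    echo "write refused"       # the write fails with EBADF
fi
exec 3<&-
rm -f "$tmp"
```

The kernel rejects the write() at the fd level, regardless of what the program logic does afterward.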
I'll retest it shortly and run 'df -h' after each step. If I'm not able to reproduce it, I'll close this as NOTABUG. This output still confuses me, though:

The file system grew by 51200MB.
Error: The device has grown by less than one Resource Group (RG).
The device grew by 0MB. One RG is 255MB for this file system.
gfs2_grow complete.

I'll try to get a new test done tonight.
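For what it's worth, the individual figures in that message are internally consistent with the rest of the output: the 50 GB lvextend is 51200 MB, and one resource group of 65535 blocks at the 4096-byte blocksize rounds down to 255 MB. A quick check:

```shell
# One resource group: 65535 blocks of 4096 bytes each, in whole MB (rounds down)
echo $(( 65535 * 4096 / 1024 / 1024 ))   # 255
# The 50 GB extension expressed in MB
echo $(( 50 * 1024 ))                    # 51200
```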
I can't reproduce it, either. Sorry about the line noise; please close as NOTABUG.
No problem. Closed as per comment #4.