Bug 1664652 - The `ssm list` command is delayed in detecting the correct free space after an xfs filesystem is grown.
Summary: The `ssm list` command is delayed in detecting the correct free space after an xfs filesystem is grown.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: system-storage-manager
Version: 7.6
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Assignee: Lukáš Czerner
QA Contact: Boyang Xue
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-01-09 12:06 UTC by Nitin U. Yewale
Modified: 2019-08-06 12:55 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-08-06 12:55:09 UTC
Target Upstream Version:
Embargoed:


Attachments
Patch to fix the delay in detecting the correct free space of the xfs file system every time (3.42 KB, patch)
2019-01-09 12:10 UTC, Nitin U. Yewale


Links
Red Hat Knowledge Base (Solution) 3784671 (last updated 2019-01-09 12:54:00 UTC)
Red Hat Product Errata RHBA-2019:2133 (last updated 2019-08-06 12:55:13 UTC)

Description Nitin U. Yewale 2019-01-09 12:06:10 UTC
Description of problem:
The `ssm list` command is delayed in detecting the correct free space after an xfs filesystem is grown.

Version-Release number of selected component (if applicable):
system-storage-manager-0.4-8.el7.noarch

How reproducible:
Every time

Steps to Reproduce:
1. Grow the xfs filesystem: `xfs_growfs /mount-point`
2. Check the 'free space' size in `ssm list`. It is not updated immediately and takes at least 30 seconds to reflect the new size.


Actual results: The `ssm list` command is delayed in reporting the correct free space after the xfs filesystem is grown.


Expected results: The `ssm list` command should report the correct free size of the filesystem every time.


Additional info:

Comment 1 Nitin U. Yewale 2019-01-09 12:08:43 UTC
We could see that `ssm list` runs the following command in the background to read the filesystem size and free space information:

    # xfs_db -r -c sb -c print <device-name>

It has been observed that the on-disk data that xfs_db reads is only updated about 30 seconds after running the xfs_growfs command (because of the parameter below).

So, in order to get correct values from `ssm list`, it has to be run at least 30 seconds after the `xfs_growfs` command.

The parameter `/proc/sys/fs/xfs/xfssyncd_centisecs` governs this sync interval.

It can, however, be reduced to the recommended minimum of 10 seconds, so that the filesystem 'Free' size shown by ssm is updated a bit earlier:

    # echo 100 > /proc/sys/fs/xfs/xfssyncd_centisecs

We tested this and the correct values are reflected in `ssm list` after 10 seconds.
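
For illustration, here is a minimal sketch (not the actual ssm code) of reading these values the same way, assuming the blocksize, dblocks and fdblocks fields printed by `xfs_db -r -c sb -c print`; on a mounted filesystem the returned values can lag behind the in-kernel state by up to xfssyncd_centisecs:

import re
import subprocess

# Sketch only: read total/free space from the on-disk XFS superblock the way
# `ssm list` does. On a mounted filesystem these values may be up to
# xfssyncd_centisecs (30 s by default) behind the in-kernel state.
def xfs_sizes_from_superblock(device):
    out = subprocess.check_output(
        ["xfs_db", "-r", "-c", "sb", "-c", "print", device]).decode()
    fields = dict(re.findall(r"^(\w+) = (\S+)$", out, re.MULTILINE))
    bsize = int(fields["blocksize"])
    total = int(fields["dblocks"]) * bsize    # filesystem size in bytes
    free = int(fields["fdblocks"]) * bsize    # free space in bytes
    return total, free

# e.g. print(xfs_sizes_from_superblock('/dev/vg3/lv1'))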

Comment 2 Nitin U. Yewale 2019-01-09 12:10:55 UTC
Created attachment 1519476 [details]
Patch to fix the delay in detecting the correct free space of the xfs file system every time

The attached patch helps detect the free space correctly every time.

Comment 3 Nitin U. Yewale 2019-01-09 12:14:06 UTC
Test results after the fix. Note that the `./bin/ssm.local list volumes` command (run from the patched build) detects the correct free space.

The `ssm list volumes` command is from the unpatched package.

Check the size of the file system /test1:

[root@stest ssm]# df -h
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/rhel_vm252--140-root   79G  8.7G   70G  12% /
devtmpfs                          1.9G     0  1.9G   0% /dev
tmpfs                             1.9G     0  1.9G   0% /dev/shm
tmpfs                             1.9G  9.1M  1.9G   1% /run
tmpfs                             1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1                         2.0G  199M  1.8G  10% /boot
tmpfs                             379M   16K  379M   1% /run/user/42
tmpfs                             379M     0  379M   0% /run/user/0
/dev/mapper/vg3-lv1               2.5G   33M  2.5G   2% /test1

Grow this file system

[root@stest ssm]# ssm resize -s +400M /dev/vg3/lv1 
Size of logical volume vg3/lv1 changed from 2.46 GiB (631 extents) to <2.86 GiB (731 extents).
Logical volume vg3/lv1 successfully resized.
meta-data=/dev/mapper/vg3-lv1    isize=512    agcount=10, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=646144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 646144 to 748544

[root@stest ssm]# ssm list volumes                        <------------- Fails to detect the correct size
---------------------------------------------------------------------------------------------------
Volume                    Pool            Volume size  FS    FS size      Free  Type    Mount point
---------------------------------------------------------------------------------------------------
/dev/rhel_vm252-140/root  rhel_vm252-140     78.12 GB  xfs  78.09 GB  69.46 GB  linear  /          
/dev/rhel_vm252-140/swap  rhel_vm252-140      2.00 GB                           linear             
/dev/vg2/lv22             vg2                 2.95 GB  xfs   2.94 GB   2.91 GB  linear             
/dev/vg3/lv1              vg3                 2.86 GB  xfs   2.46 GB   2.42 GB  linear  /test1   <---- 
/dev/vg4/lv4              vg4                 1.78 GB  xfs   1.77 GB   1.77 GB  linear             
/dev/sda1                                     2.00 GB  xfs   1.99 GB   1.80 GB  part    /boot      
---------------------------------------------------------------------------------------------------

[root@stest ssm]# ./bin/ssm.local list volumes                       <--------- With fix, we detect the correct size
---------------------------------------------------------------------------------------------------
Volume                    Pool            Volume size  FS    FS size      Free  Type    Mount point
---------------------------------------------------------------------------------------------------
/dev/rhel_vm252-140/root  rhel_vm252-140     78.12 GB  xfs  78.09 TB  69.46 TB  linear  /          
/dev/rhel_vm252-140/swap  rhel_vm252-140      2.00 GB                           linear             
/dev/vg2/lv22             vg2                 2.95 GB  xfs   2.94 GB   2.91 GB  linear             
/dev/vg3/lv1              vg3                 2.86 GB  xfs   2.85 TB   2.81 TB  linear  /test1  <----
/dev/vg4/lv4              vg4                 1.78 GB  xfs   1.77 GB   1.77 GB  linear             
/dev/sda1                                     2.00 GB  xfs   1.99 TB   1.80 TB  part    /boot      
---------------------------------------------------------------------------------------------------

[root@stest ssm]# ssm list volumes                                 <---------------------------- This still fails to detect the correct size
---------------------------------------------------------------------------------------------------
Volume                    Pool            Volume size  FS    FS size      Free  Type    Mount point
---------------------------------------------------------------------------------------------------
/dev/rhel_vm252-140/root  rhel_vm252-140     78.12 GB  xfs  78.09 GB  69.46 GB  linear  /          
/dev/rhel_vm252-140/swap  rhel_vm252-140      2.00 GB                           linear             
/dev/vg2/lv22             vg2                 2.95 GB  xfs   2.94 GB   2.91 GB  linear             
/dev/vg3/lv1              vg3                 2.86 GB  xfs   2.46 GB   2.42 GB  linear  /test1   <------  
/dev/vg4/lv4              vg4                 1.78 GB  xfs   1.77 GB   1.77 GB  linear             
/dev/sda1                                     2.00 GB  xfs   1.99 GB   1.80 GB  part    /boot      
---------------------------------------------------------------------------------------------------
[root@stest ssm]#

Comment 4 Jan Tulak 2019-01-09 16:11:30 UTC
Thanks for finding the issue and writing the patch; it looks good. I will write a test case for it and check whether there is a way to tell xfs_db to refresh the info after ssm resize, and whether that is a good idea at all.

BTW, this issue seems to affect other distros too; confirmed on current RHEL 7 and 8, Fedora 28, and Arch Linux.

Comment 5 Nitin U. Yewale 2019-01-09 16:17:03 UTC
Thank you Jan.

We see that the parameter `/proc/sys/fs/xfs/xfssyncd_centisecs` governs how quickly the data that xfs_db reads gets updated. By default it is 30 seconds and can be reduced to a minimum of 10 seconds.

Comment 6 Nitin U. Yewale 2019-01-09 16:21:01 UTC
Just to add: a patch has been created in the upstream git repo.

Comment 7 Jan Tulak 2019-01-09 16:49:02 UTC
On second thought (and after checking with upstream XFS), xfs_db is generally not the tool to use on a live, mounted XFS filesystem at all. With that in mind, there is no need to change the parameter: xfs_db skips caching, etc., and on a mounted filesystem it is almost guaranteed to report stale data. So your patch does the right thing by asking the kernel instead of xfs_db. I will just add a comment to the code explaining why the mount check is there.
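
To illustrate the approach (a rough sketch of the idea, not the actual patch; the function name and signature are made up for the example): prefer os.statvfs() on the mount point when the filesystem is mounted, and read the superblock with xfs_db only for an unmounted device, where the on-disk values are current.

import os
import re
import subprocess

# Illustrative sketch, not the actual ssm patch: ask the kernel for a mounted
# filesystem, and fall back to the on-disk superblock only when unmounted.
def xfs_free_bytes(device, mount_point=None):
    if mount_point:
        st = os.statvfs(mount_point)   # reflects xfs_growfs immediately
        return st.f_bfree * st.f_bsize
    out = subprocess.check_output(
        ["xfs_db", "-r", "-c", "sb", "-c", "print", device]).decode()
    fields = dict(re.findall(r"^(\w+) = (\S+)$", out, re.MULTILINE))
    return int(fields["fdblocks"]) * int(fields["blocksize"])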

Comment 8 Nitin U. Yewale 2019-01-09 17:33:35 UTC
Hello Jan,

It seems that running os.statvfs on a mounted filesystem and running xfs_db on an unmounted device both give correct results. Since we were running xfs_db on devices, I retained that behavior. Please let me know if I am missing something.


[root@stest tests]# df -h |grep lv
/dev/mapper/vg3-lv1               2.9G  1.6G  1.4G  54% /test1

[root@stest tests]# lvs -a -o +devices |grep vg
  lv22 vg2            -wi-a-----  2.95g                                                     /dev/sda3(0)    
  lv1  vg3            -wi-ao---- <2.86g                                                     /dev/sdb1(0)    
  lv4  vg4            -wi-a-----  1.78g                                                     /dev/sdb2(0)  

-------------------------------------------------------------------------------------------------------------------- 

==================   /dev/vg4/lv4   =================
1968934912
1968934912
============  /test1     ============================
posix.statvfs_result(f_bsize=4096, f_frsize=4096, f_blocks=745984, f_bfree=475540, f_bavail=475540, f_files=1497088, f_ffree=1497084, f_favail=1497084, f_flag=4096, f_namemax=255)
3055550464
1947811840          <-------------------------------------------------------
============  /dev/vg3/lv1     ======================
posix.statvfs_result(f_bsize=4096, f_frsize=4096, f_blocks=480697, f_bfree=480697, f_bavail=480697, f_files=480697, f_ffree=480247, f_favail=480247, f_flag=2, f_namemax=255)
1968934912
1968934912

--------------------------------------------------------------------------------------------------------------------

# dd if=/dev/zero of=/test1/disk2.img bs=1M count=500
500+0 records in
500+0 records out
524288000 bytes (524 MB) copied, 0.293449 s, 1.8 GB/s


--------------------------------------------------------------------------------------------------------------------

[root@stest tests]# python mount_test2.py 
==================   /dev/vg4/lv4   =================
1968934912
1968934912
============  /test1     ============================
posix.statvfs_result(f_bsize=4096, f_frsize=4096, f_blocks=745984, f_bfree=347540, f_bavail=347540, f_files=1497088, f_ffree=1497083, f_favail=1497083, f_flag=4096, f_namemax=255)
3055550464
1423523840  <-------------------------------------------------------
============  /dev/vg3/lv1     ======================
posix.statvfs_result(f_bsize=4096, f_frsize=4096, f_blocks=480697, f_bfree=480697, f_bavail=480697, f_files=480697, f_ffree=480247, f_favail=480247, f_flag=2, f_namemax=255)
1968934912
1968934912
[root@stest tests]# 


# cat mount_test2.py 
import os

# Note: statvfs() on a path under /dev reports the filesystem that contains
# the device node (devtmpfs), not the XFS filesystem on the LV itself.
print ("==================   /dev/vg4/lv4   =================")
stat = os.statvfs('/dev/vg4/lv4')
total = stat.f_blocks*stat.f_bsize   # total size in bytes
free = stat.f_bfree*stat.f_bsize     # free space in bytes
print(total)
print(free)

# statvfs() on the mount point queries the kernel, so it reflects the grown
# filesystem (and the dd writes) immediately.
print ("============  /test1     ============================")
stat = os.statvfs('/test1')
print(stat)
total = stat.f_blocks*stat.f_bsize
free = stat.f_bfree*stat.f_bsize
print(total)
print(free)

print ("============  /dev/vg3/lv1     ======================")
stat = os.statvfs('/dev/vg3/lv1')
print(stat)
total = stat.f_blocks*stat.f_bsize
free = stat.f_bfree*stat.f_bsize
print(total)
print(free)


Thank you,
Nitin Yewale

Comment 9 Nitin U. Yewale 2019-01-09 17:40:39 UTC
[root@stest tests]# df -h |grep lv
/dev/mapper/vg3-lv1               2.9G  1.6G  1.4G  54% /test1

This is after running the dd command.

Comment 10 Jan Tulak 2019-01-11 11:37:29 UTC
Is there a reason for keeping this bug private? If possible, I would like to make it public so that I can refer to it upstream. I have the patch prepared for review but need to know whether I can refer to this Bugzilla issue publicly.

Comment 22 Boyang Xue 2019-03-20 04:36:11 UTC
TEST PASS.

Reproduced with ssm version 0.4-8.el7
---
[root@host-8-242-86 tests1minutetip_bringup]# ssm create -s 1G --fs xfs /dev/loop0 && lvs && ssm list fs && lvextend -L +1G lvm_pool/lvol001 && lvs && mount /dev/lvm_pool/lvol001 /media && xfs_growfs /media && ssm list fs && sleep 30s && ssm list fs
  Volume group "lvm_pool" successfully created
  Logical volume "lvol001" created.
meta-data=/dev/lvm_pool/lvol001  isize=512    agcount=4, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
  LV      VG       Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvol001 lvm_pool -wi-a----- 1.00g
----------------------------------------------------------------------------------------------
Volume                 Pool      Volume size  FS      FS size        Free  Type    Mount point
----------------------------------------------------------------------------------------------
/dev/lvm_pool/lvol001  lvm_pool      1.00 GB  xfs  1014.00 MB  1013.86 MB  linear
/dev/vda1                           20.00 GB  xfs    19.99 GB    18.29 GB  part    /
----------------------------------------------------------------------------------------------
  Size of logical volume lvm_pool/lvol001 changed from 1.00 GiB (256 extents) to 2.00 GiB (512 extents).
  Logical volume lvm_pool/lvol001 successfully resized.
  LV      VG       Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvol001 lvm_pool -wi-a----- 2.00g
meta-data=/dev/mapper/lvm_pool-lvol001 isize=512    agcount=4, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 262144 to 524288
----------------------------------------------------------------------------------------------
Volume                 Pool      Volume size  FS      FS size        Free  Type    Mount point
----------------------------------------------------------------------------------------------
/dev/lvm_pool/lvol001  lvm_pool      2.00 GB  xfs  1014.00 MB  1013.86 MB  linear  /media
/dev/vda1                           20.00 GB  xfs    19.99 GB    18.29 GB  part    /
----------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------
Volume                 Pool      Volume size  FS    FS size        Free  Type    Mount point
--------------------------------------------------------------------------------------------
/dev/lvm_pool/lvol001  lvm_pool      2.00 GB  xfs   1.99 GB  1013.84 MB  linear  /media
/dev/vda1                           20.00 GB  xfs  19.99 GB    18.29 GB  part    /
--------------------------------------------------------------------------------------------
---

Verified with ssm version 0.4-9.el7
---
[root@host-8-247-225 tests1minutetip_bringup]# ssm create -s 1G --fs xfs /dev/loop0 && lvs && ssm list fs && lvextend -L +1G lvm_pool/lvol001 && lvs && mount /dev/lvm_pool/lvol001 /media && xfs_growfs /media && ssm list fs && sleep 30s && ssm list fs
  Physical volume "/dev/loop0" successfully created.
  Volume group "lvm_pool" successfully created
  Logical volume "lvol001" created.
meta-data=/dev/lvm_pool/lvol001  isize=512    agcount=4, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
  LV      VG       Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvol001 lvm_pool -wi-a----- 1.00g
----------------------------------------------------------------------------------------------
Volume                 Pool      Volume size  FS      FS size        Free  Type    Mount point
----------------------------------------------------------------------------------------------
/dev/lvm_pool/lvol001  lvm_pool      1.00 GB  xfs  1014.00 MB  1013.86 MB  linear
/dev/vda1                           20.00 GB  xfs    19.99 GB    14.28 GB          /
----------------------------------------------------------------------------------------------
  Size of logical volume lvm_pool/lvol001 changed from 1.00 GiB (256 extents) to 2.00 GiB (512 extents).
  Logical volume lvm_pool/lvol001 successfully resized.
  LV      VG       Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvol001 lvm_pool -wi-a----- 2.00g
meta-data=/dev/mapper/lvm_pool-lvol001 isize=512    agcount=4, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 262144 to 524288
------------------------------------------------------------------------------------------
Volume                 Pool      Volume size  FS    FS size      Free  Type    Mount point
------------------------------------------------------------------------------------------
/dev/lvm_pool/lvol001  lvm_pool      2.00 GB  xfs   1.99 GB   1.96 GB  linear  /media
/dev/vda1                           20.00 GB  xfs  19.99 GB  14.28 GB          /
------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------
Volume                 Pool      Volume size  FS    FS size      Free  Type    Mount point
------------------------------------------------------------------------------------------
/dev/lvm_pool/lvol001  lvm_pool      2.00 GB  xfs   1.99 GB   1.96 GB  linear  /media
/dev/vda1                           20.00 GB  xfs  19.99 GB  14.28 GB          /
------------------------------------------------------------------------------------------
---

Comment 25 errata-xmlrpc 2019-08-06 12:55:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2133

