Description of problem:
The method used to calculate the power-of-two value for a type is off by one, causing twice the required amount of memory to be allocated. For example, comparing the information for inode_t in statedumps from 3.4.4 and 3.5.0:

3.4.4:
------
pool-name=inode_t
active-count=15408
sizeof-type=168
padded-sizeof=256
size=3944448
shared-pool=0x7fac27a7b468

3.5.0:
------
pool-name=inode_t
active-count=2
sizeof-type=255    <--- actual sizeof of inode_t is 168
padded-sizeof=512  <--- padded size is twice the required amount
size=1024
shared-pool=0x7f1103b5b6d0

Version-Release number of selected component (if applicable):
3.5.0

How reproducible:

Steps to Reproduce:
1. Create a volume, fuse mount it, and create some files and dirs on it.
2. Take a statedump of the gluster mount process (kill -SIGUSR1 <pid>).
3. Compare the sizeof-type and padded-sizeof values in the statedumps.

Actual results:
The padded-sizeof is twice the smallest power-of-two value for sizeof-type + sizeof(obj header).

Expected results:
The padded-sizeof should be the smallest power-of-two value for sizeof-type + sizeof(obj header).

Additional info:
*** Bug 1723889 has been marked as a duplicate of this bug. ***
> Upstream patch: https://review.gluster.org/c/glusterfs/+/22921 Please backport this fix to downstream.
Steps performed to move the bug to verified:
1. Created a pure replicate volume on 3.4.4 and 3.5.0 setups, fuse mounted it, and created some files and dirs on it.
2. Took a statedump of the gluster mount process.
3. Following is the output for inode_t from 3.4.4 and 3.5.0:

3.4.4
#########################
pool-name=inode_t
active-count=66635
sizeof-type=168
padded-sizeof=256
size=17058560
shared-pool=0x7f626e494aa8

pool-name=inode_t
active-count=1
sizeof-type=168
padded-sizeof=256
size=256
shared-pool=0x7f626e494aa8

3.5.0
#########################
pool-name=inode_t
active-count=2929
sizeof-type=168
padded-sizeof=256
size=749824
shared-pool=0x7f6dcae44168

pool-name=inode_t
active-count=1
sizeof-type=168
padded-sizeof=256
size=256
shared-pool=0x7f6dcae44168

Padded-sizeof is 256 for this pool. Do I need to check all pools in the statedumps? Please let me know so that I can move the bug to verified.

Thanks,
Mugdha
Values for pool-name=dentry_t for 3.4.4 and 3.5.0 are mentioned below:

3.4.4
##############################
pool-name=dentry_t
active-count=0
sizeof-type=56
padded-sizeof=128
size=0
shared-pool=0x7f6ea44609e0

pool-name=dentry_t
active-count=0
sizeof-type=56
padded-sizeof=128
size=0
shared-pool=0x7f6ea44609e0

3.5.0
###############################
pool-name=dentry_t
active-count=2923
sizeof-type=56
padded-sizeof=128
size=374144
shared-pool=0x7f6dcae44140

pool-name=dentry_t
active-count=0
sizeof-type=56
padded-sizeof=128
size=0
shared-pool=0x7f6dcae44140

The values match for pool-name=dentry_t, and the other outputs are mentioned in comment #11. Based on these outputs, the bug is being moved to the verified state.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2019:3249