Bug 1722801

Summary: Incorrect power of two calculation in mem_pool_get_fn
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Nithya Balachandran <nbalacha>
Component: core
Assignee: Xavi Hernandez <jahernan>
Status: CLOSED ERRATA
QA Contact: Mugdha Soni <musoni>
Severity: high
Docs Contact:
Priority: unspecified
Version: rhgs-3.5
CC: amukherj, jahernan, nchilaka, rhs-bugs, sheggodu, storage-qa-internal, vdas
Target Milestone: ---
Keywords: Regression
Target Release: RHGS 3.5.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-6.0-7
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1722802 (view as bug list)
Environment:
Last Closed: 2019-10-30 12:22:00 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1722802, 1748774    
Bug Blocks: 1696809    

Description Nithya Balachandran 2019-06-21 11:10:11 UTC
Description of problem:

The method used to calculate the power of two value for a type is off by 1, causing twice the required amount of memory to be allocated.
For example, comparing the information for inode_t in statedumps from 3.4.4 and 3.5.0:

3.4.4:
------

pool-name=inode_t                                                               
active-count=15408                                                              
sizeof-type=168                                                                 
padded-sizeof=256                                                               
size=3944448                                                                    
shared-pool=0x7fac27a7b468                                                      
-----=-----                



3.5.0:
------
pool-name=inode_t                                                               
active-count=2                                                                  
sizeof-type=255     <--- actual sizeof inode_t is 168                                                               
padded-sizeof=512   <--- padded size is twice the required amount                                                            
size=1024                                                                       
shared-pool=0x7f1103b5b6d0





Version-Release number of selected component (if applicable):
3.5.0

How reproducible:


Steps to Reproduce:
1. Create volume, fuse mount it and create some files and dirs on it
2. Take a statedump of the gluster mount process (kill -SIGUSR1 <pid>)
3. Compare the sizeof-type and padded-sizeof values in the statedumps from the two releases.

Actual results:
The padded-sizeof is twice the smallest power of two greater than or equal to sizeof-type + sizeof(obj header).

Expected results:

The padded-sizeof should be the smallest power of two greater than or equal to sizeof-type + sizeof(obj header).


Additional info:

Comment 6 Xavi Hernandez 2019-06-26 06:32:15 UTC
*** Bug 1723889 has been marked as a duplicate of this bug. ***

Comment 8 Sunil Kumar Acharya 2019-06-26 09:53:24 UTC
> Upstream patch: https://review.gluster.org/c/glusterfs/+/22921

Please backport this fix to downstream.

Comment 11 Mugdha Soni 2019-08-20 15:21:11 UTC
Steps performed to move the bug to verified:

1. Created a pure replicate volume on the 3.4.4 and 3.5.0 setups, fuse mounted it, and created some files and dirs on it.
2. Took a statedump of the gluster mount process.
3. Following is the output for inode_t from 3.4.4 and 3.5.0:
  
3.4.4
#########################
pool-name=inode_t
active-count=66635
sizeof-type=168
padded-sizeof=256
size=17058560
shared-pool=0x7f626e494aa8


pool-name=inode_t
active-count=1
sizeof-type=168
padded-sizeof=256
size=256
shared-pool=0x7f626e494aa8

3.5.0
#########################

pool-name=inode_t
active-count=2929
sizeof-type=168
padded-sizeof=256
size=749824
shared-pool=0x7f6dcae44168

pool-name=inode_t
active-count=1
sizeof-type=168
padded-sizeof=256
size=256
shared-pool=0x7f6dcae44168

Padded-sizeof is 256 for this pool.
Do I need to check all pools in the statedumps?
Please let me know so that I can move the bug to verified.

Thanks
Mugdha

Comment 13 Mugdha Soni 2019-08-21 07:48:43 UTC
Values for pool-name=dentry_t from 3.4.4 and 3.5.0 are mentioned below:

3.4.4
##############################
pool-name=dentry_t
active-count=0
sizeof-type=56
padded-sizeof=128
size=0
shared-pool=0x7f6ea44609e0




pool-name=dentry_t
active-count=0
sizeof-type=56
padded-sizeof=128
size=0
shared-pool=0x7f6ea44609e0



3.5.0
###############################
pool-name=dentry_t
active-count=2923
sizeof-type=56
padded-sizeof=128
size=374144
shared-pool=0x7f6dcae44140


pool-name=dentry_t
active-count=0
sizeof-type=56
padded-sizeof=128
size=0
shared-pool=0x7f6dcae44140


The values match for pool-name=dentry_t, and the other outputs are mentioned in comment #11.
Based on the outputs, the bug is being moved to the verified state.

Comment 15 errata-xmlrpc 2019-10-30 12:22:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:3249