Bug 1722801 - Incorrect power of two calculation in mem_pool_get_fn
Summary: Incorrect power of two calculation in mem_pool_get_fn
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: core
Version: rhgs-3.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.0
Assignee: Xavi Hernandez
QA Contact: Mugdha Soni
URL:
Whiteboard:
Duplicates: 1723889
Depends On: 1722802 1748774
Blocks: 1696809
 
Reported: 2019-06-21 11:10 UTC by Nithya Balachandran
Modified: 2019-10-30 12:22 UTC (History)
CC: 7 users

Fixed In Version: glusterfs-6.0-7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1722802
Environment:
Last Closed: 2019-10-30 12:22:00 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2019:3249 0 None None None 2019-10-30 12:22:22 UTC

Description Nithya Balachandran 2019-06-21 11:10:11 UTC
Description of problem:

The method used to calculate the power-of-two slot size for a type is off by one, causing twice the required amount of memory to be allocated.
For example, comparing the information for inode_t in statedumps from 3.4.4 and 3.5.0:

3.4.4:
------

pool-name=inode_t                                                               
active-count=15408                                                              
sizeof-type=168                                                                 
padded-sizeof=256                                                               
size=3944448                                                                    
shared-pool=0x7fac27a7b468                                                      
-----=-----                



3.5.0:
------
pool-name=inode_t                                                               
active-count=2                                                                  
sizeof-type=255     <--- actual sizeof inode_t is 168                                                               
padded-sizeof=512   <--- padded size is twice the required amount                                                            
size=1024                                                                       
shared-pool=0x7f1103b5b6d0





Version-Release number of selected component (if applicable):
3.5.0

How reproducible:


Steps to Reproduce:
1. Create volume, fuse mount it and create some files and dirs on it
2. Take a statedump of the gluster mount process (kill -SIGUSR1 <pid>)
3. Compare the sizeof-type and padded-sizeof values in the statedumps.
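The comparison in step 3 can be scripted. A minimal POSIX-shell sketch; the statedump path and the 32-byte object-header size are assumptions (the real header size is internal to mem-pool.c):

```shell
# Trigger a statedump of the fuse mount process (step 2):
#   kill -SIGUSR1 <pid>    # dump lands under /var/run/gluster (assumed path)
#   grep -A4 'pool-name=inode_t' /var/run/gluster/glusterdump.<pid>.dump.*

# Step 3 as a check: is padded-sizeof the smallest power of two that
# fits sizeof-type plus the object header?
check_pool() {
    sizeof=$1; padded=$2; hdr=$3
    need=$((sizeof + hdr))
    # smallest power of two >= need
    p=1
    while [ "$p" -lt "$need" ]; do p=$((p * 2)); done
    if [ "$padded" -gt "$p" ]; then
        echo "oversized"
    else
        echo "ok"
    fi
}

check_pool 168 256 32   # inode_t on 3.4.4 -> prints "ok"
check_pool 168 512 32   # inode_t on 3.5.0 -> prints "oversized"
```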

Actual results:
The padded-sizeof is twice the smallest power of two that fits sizeof-type + sizeof(obj header).

Expected results:

The padded-sizeof should be the smallest power of two that fits sizeof-type + sizeof(obj header).


Additional info:

Comment 6 Xavi Hernandez 2019-06-26 06:32:15 UTC
*** Bug 1723889 has been marked as a duplicate of this bug. ***

Comment 8 Sunil Kumar Acharya 2019-06-26 09:53:24 UTC
> Upstream patch: https://review.gluster.org/c/glusterfs/+/22921

Please backport this fix to downstream.

Comment 11 Mugdha Soni 2019-08-20 15:21:11 UTC
Steps performed to move the bug to verified:

1. Created a pure replicate volume on 3.4.4 and 3.5.0 setups, fuse mounted it, and created some files and dirs on it.
2. Took a statedump of the gluster mount process.
3. Following is the output for inode_t from 3.4.4 and 3.5.0:
  
3.4.4
#########################
pool-name=inode_t
active-count=66635
sizeof-type=168
padded-sizeof=256
size=17058560
shared-pool=0x7f626e494aa8


pool-name=inode_t
active-count=1
sizeof-type=168
padded-sizeof=256
size=256
shared-pool=0x7f626e494aa8

3.5.0
#########################

pool-name=inode_t
active-count=2929
sizeof-type=168
padded-sizeof=256
size=749824
shared-pool=0x7f6dcae44168

pool-name=inode_t
active-count=1
sizeof-type=168
padded-sizeof=256
size=256
shared-pool=0x7f6dcae44168

Padded-sizeof is 256 for this pool.
Do I need to check all pools in the statedumps?
Please let me know so that I can move the bug to verified.

Thanks
Mugdha

Comment 13 Mugdha Soni 2019-08-21 07:48:43 UTC
The values for pool-name=dentry_t on 3.4.4 and 3.5.0 are shown below:

3.4.4
##############################
pool-name=dentry_t
active-count=0
sizeof-type=56
padded-sizeof=128
size=0
shared-pool=0x7f6ea44609e0




pool-name=dentry_t
active-count=0
sizeof-type=56
padded-sizeof=128
size=0
shared-pool=0x7f6ea44609e0



3.5.0
###############################
pool-name=dentry_t
active-count=2923
sizeof-type=56
padded-sizeof=128
size=374144
shared-pool=0x7f6dcae44140


pool-name=dentry_t
active-count=0
sizeof-type=56
padded-sizeof=128
size=0
shared-pool=0x7f6dcae44140


The values match for pool-name=dentry_t, and the other outputs are mentioned in comment#11.
Based on these outputs, the bug is being moved to the verified state.

Comment 15 errata-xmlrpc 2019-10-30 12:22:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:3249

