Bug 1158142

Summary: mkfs.gfs2 should choose the number of resource groups more intelligently
Product: Red Hat Enterprise Linux 7
Reporter: Andrew Price <anprice>
Component: gfs2-utils
Assignee: Andrew Price <anprice>
Status: CLOSED DUPLICATE
QA Contact: cluster-qe <cluster-qe>
Severity: unspecified
Docs Contact:
Priority: medium
Version: 7.1
CC: anprice, cluster-maint, gfs2-maint, swhiteho
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-04-12 10:34:58 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions: ---
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host: ---
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Andrew Price 2014-10-28 17:14:24 UTC
Following on from bug 1154782, there was some discussion about the default size and number of resource groups chosen, and the factors that should be taken into account when choosing them. The key point was that it is more important to optimise for a large number of rgrps per node than to take only the device size into account. We should be able to make better decisions for smaller file systems if we treat the number of journals as an indication of the number of nodes.

Where possible, the number of rgrps should be at least 10x the number of journals. This should be increased slightly to pre-empt journals being added subsequently.

We should make sure that these factors are mentioned in the man pages, so that users know that choosing the number of journals well at mkfs time may give better results than relying on gfs2_jadd later.
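As a rough illustration of the heuristic described above (not the actual mkfs.gfs2 algorithm), one could derive a resource group size from the device size and journal count so that there are at least 10 rgrps per journal, with some slack for journals added later via gfs2_jadd. The size bounds and the `slack` factor here are assumptions for the sketch:

```python
# Hypothetical sketch of the "at least 10 rgrps per journal" heuristic.
# MIN/MAX rgrp sizes and the slack factor are illustrative assumptions,
# not values taken from gfs2-utils.

MIN_RGRP_MB = 32     # assumed lower bound on rgrp size
MAX_RGRP_MB = 2048   # assumed upper bound on rgrp size

def pick_rgrp_size_mb(device_mb, journals, slack=1.2):
    """Pick an rgrp size (MB) aiming for >= 10 * journals rgrps,
    padded by `slack` to pre-empt journals added later."""
    target_rgrps = int(10 * journals * slack)
    size = device_mb // target_rgrps
    # Clamp to the assumed valid rgrp size range.
    return max(MIN_RGRP_MB, min(MAX_RGRP_MB, size))
```

For a large device the size clamps to the maximum (e.g. `pick_rgrp_size_mb(102400, 4)` gives 2048), while for a small device it shrinks so the rgrp count per journal stays high (e.g. `pick_rgrp_size_mb(2048, 2)` gives 85).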

Comment 4 Andrew Price 2018-04-12 10:34:58 UTC
I'm closing this as a duplicate of bug 1498068, as the work that set the default journal size conditionally essentially enforced a minimum proportion of journal space for small filesystems (maximising the space available to resource groups). See commit 04598c779cb1309f342ca14b555df5eb676e38c9.

*** This bug has been marked as a duplicate of bug 1498068 ***