| Summary: | mkfs.gfs2: Increase default resource group size | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | Andrew Price <anprice> |
| Component: | gfs2-utils | Assignee: | Andrew Price <anprice> |
| Status: | CLOSED WONTFIX | QA Contact: | cluster-qe <cluster-qe> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | medium | | |
| Version: | 8.0 | CC: | cluster-maint, gfs2-maint, rhandlin, sbradley, swhiteho |
| Target Milestone: | rc | Keywords: | Triaged |
| Target Release: | 8.2 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-12-01 07:27:35 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Bug Depends On: | | | |
| Bug Blocks: | 1111393, 1497636 | | |
|
Description
Andrew Price
2016-11-10 21:03:52 UTC

There may be an issue with performance at larger RG sizes with multiple processes. I ran a matrix of RG size vs processes and got these results:

| RG size | procs | bandwidth |
|---|---|---|
| 256 | 1 | 539466KB/s |
| 512 | 1 | 537059KB/s |
| 1024 | 1 | 560422KB/s |
| 2048 | 1 | 552481KB/s |
| 256 | 3 | 472423KB/s |
| 512 | 3 | 368493KB/s |
| 1024 | 3 | 354238KB/s |
| 2048 | 3 | 343896KB/s |
| 256 | 6 | 508019KB/s |
| 512 | 6 | 422882KB/s |
| 1024 | 6 | 313730KB/s |
| 2048 | 6 | 275803KB/s |
| 256 | 12 | 546520KB/s |
| 512 | 12 | 390912KB/s |
| 1024 | 12 | 284684KB/s |
| 2048 | 12 | 198894KB/s |

> Is that because they are all using the same rgrp? If each thread runs in a different top level dir (or subdir of a different top level dir) then does that issue go away?

You have a point. I figured out how to accomplish that with fio, and here are the results:

| RG size | procs | bandwidth |
|---|---|---|
| 256 | 1 | 537019KB/s |
| 512 | 1 | 547736KB/s |
| 1024 | 1 | 565922KB/s |
| 2048 | 1 | 530186KB/s |
| 256 | 3 | 606669KB/s |
| 512 | 3 | 563094KB/s |
| 1024 | 3 | 718381KB/s |
| 2048 | 3 | 559343KB/s |
| 256 | 6 | 665150KB/s |
| 512 | 6 | 577753KB/s |
| 1024 | 6 | 676871KB/s |
| 2048 | 6 | 573286KB/s |
| 256 | 12 | 666406KB/s |
| 512 | 12 | 670210KB/s |
| 1024 | 12 | 654321KB/s |
| 2048 | 12 | 696050KB/s |

After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.
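The bug does not include the actual fio jobs used, but a setup along the following lines would exercise the per-directory case described above, where each writer process works under its own top-level directory on the GFS2 mount (so the processes tend to allocate from different resource groups). The RG size axis in the tables corresponds to the `-r` option of mkfs.gfs2, which sets the resource group size in megabytes. Job names, mount path, I/O engine, and sizes below are illustrative assumptions, not taken from the report:

```ini
; Illustrative fio job file (not the reporter's original):
; three sequential writers, each confined to its own top-level
; directory on a GFS2 filesystem mounted at /mnt/gfs2.
[global]
rw=write
bs=1M
size=1G
ioengine=libaio
iodepth=4
direct=1

[writer1]
directory=/mnt/gfs2/dir1

[writer2]
directory=/mnt/gfs2/dir2

[writer3]
directory=/mnt/gfs2/dir3
```

The contrasting case (all processes sharing one directory, as in the first table) would point every job's `directory=` at the same path; the directories must exist before fio runs.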
There may be an issue with performance at larger RG sizes with multiple processes. I ran a matrix of RG size vs processes and got these results: RG size procs bandwidth 256 1 539466KB/s 512 1 537059KB/s 1024 1 560422KB/s 2048 1 552481KB/s 256 3 472423KB/s 512 3 368493KB/s 1024 3 354238KB/s 2048 3 343896KB/s 256 6 508019KB/s 512 6 422882KB/s 1024 6 313730KB/s 2048 6 275803KB/s 256 12 546520KB/s 512 12 390912KB/s 1024 12 284684KB/s 2048 12 198894KB/s Is that because they are all using the same rgrp? If each thread runs in a different top level dir (or subdir of a different top level dir) then does that issue go away? You have a point. I figured out how to accomplish that with fio and here are the results: RG size procs bandwidth 256 1 537019KB/s 512 1 547736KB/s 1024 1 565922KB/s 2048 1 530186KB/s 256 3 606669KB/s 512 3 563094KB/s 1024 3 718381KB/s 2048 3 559343KB/s 256 6 665150KB/s 512 6 577753KB/s 1024 6 676871KB/s 2048 6 573286KB/s 256 12 666406KB/s 512 12 670210KB/s 1024 12 654321KB/s 2048 12 696050KB/s After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened. |