Bug 1162817
| Summary: | mkfs.gfs2 should use a smaller default rgrp size for RHEL7.1 | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Robert Peterson <rpeterso> |
| Component: | gfs2-utils | Assignee: | Andrew Price <anprice> |
| Status: | CLOSED ERRATA | QA Contact: | cluster-qe <cluster-qe> |
| Severity: | urgent | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 7.1 | CC: | cluster-maint, gfs2-maint, jharriga |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | gfs2-utils-3.1.7-4.el7 | Doc Type: | Bug Fix |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-03-05 09:27:05 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |

Doc Text:
Cause: The default resource group size was set to the maximum resource group size in mkfs.gfs2.
Consequence: File system performance could be affected in some cases.
Fix: The default resource group size was reverted to the previous default size.
Result: Users should no longer encounter the file system performance issues that this change caused.
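Independent of the built-in default described in the Doc Text, mkfs.gfs2 lets the resource group size be chosen explicitly at mkfs time with its -r option (size in megabytes). A minimal sketch; the device path and journal/protocol options are only illustrative, borrowed from the verification below:

```bash
# Sketch: explicitly choosing the resource group size instead of relying on the default.
# Device path and options are illustrative, not mandated by this bug.
mkfs.gfs2 -O -p lock_nolock -j 1 -r 256  /dev/growfs/grow1   # 256MB rgrps (the restored default)
mkfs.gfs2 -O -p lock_nolock -j 1 -r 2048 /dev/growfs/grow1   # 2GB rgrps (the maximum, the interim default this bug reverts)
```

The "Resource groups:" line in the mkfs.gfs2 output (see the verification below) shows how many groups the chosen size produces.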
Description
Robert Peterson
2014-11-11 18:40:01 UTC
Created attachment 957187
Patch to revert the default rgrp size to 256M
This patch has been submitted to cluster-devel.
Verified default resource group size is 256MB instead of 2GB.

```
[root@host-075 ~]# rpm -q gfs2-utils
gfs2-utils-3.1.7-5.el7.x86_64
[root@host-075 ~]# vgs
  VG            #PV #LV #SN Attr   VSize VFree
  growfs          1   0   0 wz--nc 7.14g 7.14g
  rhel_host-075   1   2   0 wz--n- 7.51g 40.00m
[root@host-075 ~]# lvcreate -n grow1 -L 5G growfs
WARNING: gfs2 signature detected on /dev/growfs/grow1 at offset 65536. Wipe it? [y/n]: y
  Wiping gfs2 signature on /dev/growfs/grow1.
  Logical volume "grow1" created.
[root@host-075 ~]# mkfs.gfs2 -O -p lock_nolock -j 1 /dev/growfs/grow1
/dev/growfs/grow1 is a symbolic link to /dev/dm-2
This will destroy any data on /dev/dm-2
Device:            /dev/growfs/grow1
Block size:        4096
Device size:       5.00 GB (1310720 blocks)
Filesystem size:   5.00 GB (1310718 blocks)
Journals:          1
Resource groups:   21
Locking protocol:  "lock_nolock"
Lock table:        ""
UUID:              5c6d4250-4345-259f-6f2f-928e38a6e218
[root@host-075 ~]# gfs2_edit -p rindex /dev/growfs/grow1
Block #33122 (0x8162) of 1310720 (0x140000) (disk inode)
---------------- rindex file -------------------
Dinode:
  mh_magic           0x01161970(hex)
  mh_type            4            0x4
  mh_format          400          0x190
  no_formal_ino      10           0xa
  no_addr            33122        0x8162
  di_mode            0100600(decimal)
  di_uid             0            0x0
  di_gid             0            0x0
  di_nlink           1            0x1
  di_size            2016         0x7e0
  di_blocks          1            0x1
  di_atime           1416930788   0x5474a5e4
  di_mtime           1416930788   0x5474a5e4
  di_ctime           1416930788   0x5474a5e4
  di_major           0            0x0
  di_minor           0            0x0
  di_goal_meta       33122        0x8162
  di_goal_data       33122        0x8162
  di_flags           0x00000201(hex)
  di_payload_format  1100         0x44c
  di_height          0            0x0
  di_depth           0            0x0
  di_entries         0            0x0
  di_eattr           0            0x0
RG index entries found: 21.
RG #0   ri_addr 17       0x11      ri_length 3 0x3   ri_data0 20       0x14      ri_data 32836 0x8044   ri_bitbytes 8209  0x2011
RG #1   ri_addr 32856    0x8058    ri_length 4 0x4   ri_data0 32860    0x805c    ri_data 63888 0xf990   ri_bitbytes 15972 0x3e64
RG #2   ri_addr 96750    0x179ee   ri_length 4 0x4   ri_data0 96754    0x179f2   ri_data 63888 0xf990   ri_bitbytes 15972 0x3e64
RG #3   ri_addr 160644   0x27384   ri_length 4 0x4   ri_data0 160648   0x27388   ri_data 63888 0xf990   ri_bitbytes 15972 0x3e64
RG #4   ri_addr 224538   0x36d1a   ri_length 4 0x4   ri_data0 224542   0x36d1e   ri_data 63888 0xf990   ri_bitbytes 15972 0x3e64
RG #5   ri_addr 288431   0x466af   ri_length 4 0x4   ri_data0 288435   0x466b3   ri_data 63888 0xf990   ri_bitbytes 15972 0x3e64
RG #6   ri_addr 352324   0x56044   ri_length 4 0x4   ri_data0 352328   0x56048   ri_data 63888 0xf990   ri_bitbytes 15972 0x3e64
RG #7   ri_addr 416217   0x659d9   ri_length 4 0x4   ri_data0 416221   0x659dd   ri_data 63888 0xf990   ri_bitbytes 15972 0x3e64
RG #8   ri_addr 480110   0x7536e   ri_length 4 0x4   ri_data0 480114   0x75372   ri_data 63888 0xf990   ri_bitbytes 15972 0x3e64
RG #9   ri_addr 544003   0x84d03   ri_length 4 0x4   ri_data0 544007   0x84d07   ri_data 63888 0xf990   ri_bitbytes 15972 0x3e64
RG #10  ri_addr 607896   0x94698   ri_length 4 0x4   ri_data0 607900   0x9469c   ri_data 63888 0xf990   ri_bitbytes 15972 0x3e64
RG #11  ri_addr 671789   0xa402d   ri_length 4 0x4   ri_data0 671793   0xa4031   ri_data 63888 0xf990   ri_bitbytes 15972 0x3e64
RG #12  ri_addr 735682   0xb39c2   ri_length 4 0x4   ri_data0 735686   0xb39c6   ri_data 63888 0xf990   ri_bitbytes 15972 0x3e64
RG #13  ri_addr 799575   0xc3357   ri_length 4 0x4   ri_data0 799579   0xc335b   ri_data 63888 0xf990   ri_bitbytes 15972 0x3e64
RG #14  ri_addr 863468   0xd2cec   ri_length 4 0x4   ri_data0 863472   0xd2cf0   ri_data 63888 0xf990   ri_bitbytes 15972 0x3e64
RG #15  ri_addr 927361   0xe2681   ri_length 4 0x4   ri_data0 927365   0xe2685   ri_data 63888 0xf990   ri_bitbytes 15972 0x3e64
RG #16  ri_addr 991254   0xf2016   ri_length 4 0x4   ri_data0 991258   0xf201a   ri_data 63888 0xf990   ri_bitbytes 15972 0x3e64
RG #17  ri_addr 1055147  0x1019ab  ri_length 4 0x4   ri_data0 1055151  0x1019af  ri_data 63888 0xf990   ri_bitbytes 15972 0x3e64
RG #18  ri_addr 1119040  0x111340  ri_length 4 0x4   ri_data0 1119044  0x111344  ri_data 63888 0xf990   ri_bitbytes 15972 0x3e64
RG #19  ri_addr 1182933  0x120cd5  ri_length 4 0x4   ri_data0 1182937  0x120cd9  ri_data 63888 0xf990   ri_bitbytes 15972 0x3e64
RG #20  ri_addr 1246826  0x13066a  ri_length 4 0x4   ri_data0 1246830  0x13066e  ri_data 63888 0xf990   ri_bitbytes 15972 0x3e64
```

------------------------------------------------------

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0428.html
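A rough sanity check on the numbers above: with 4096-byte blocks the 5.00 GB device is 1310720 blocks and a 256MB resource group is 65536 blocks, so about 20 full-size groups are expected; gfs2_edit reports 21 because the first group (RG #0) is smaller than the rest (ri_data 32836 blocks versus 63888). Had the 2GB maximum remained the default, the same device would have been carved into only about 3 much larger groups. A minimal shell sketch of that arithmetic, ignoring journal and bitmap overhead:

```bash
# Rough estimate of the resource group count, assuming 4096-byte blocks and
# ignoring journal/bitmap overhead (the real count can differ by a group or so).
device_blocks=1310720                 # from the mkfs.gfs2 output above
block_size=4096
for rgrp_mb in 256 2048; do           # restored default vs. the 2GB maximum
    rgrp_blocks=$(( rgrp_mb * 1024 * 1024 / block_size ))
    echo "${rgrp_mb}MB rgrps -> approx $(( (device_blocks + rgrp_blocks - 1) / rgrp_blocks )) resource groups"
done
```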