Bug 1244865
| Field | Value |
| --- | --- |
| Summary | [rhs] xfs filesystem is created with wrong inode size when setting up a brick |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Component | rhsc |
| Version | rhgs-3.1 |
| Status | CLOSED ERRATA |
| Severity | high |
| Priority | unspecified |
| Reporter | Martin Bukatovic <mbukatov> |
| Assignee | Timothy Asir <tjeyasin> |
| QA Contact | RamaKasturi <knarra> |
| CC | asrivast, bengland, bmohanra, dahorak, knarra, mpillai, nlevinki, rhs-bugs, sabose, sgraf, tjeyasin, vagarwal |
| Keywords | ZStream |
| Target Release | RHGS 3.1.1 |
| Hardware | Unspecified |
| OS | Unspecified |
| Fixed In Version | vdsm-4.16.20-1.3 |
| Doc Type | Bug Fix |
| Doc Text | Previously, when bricks were created using the UI, the xfs file system was created with the inode size set to 256 bytes rather than the recommended 512 bytes for disk types other than RAID6 and RAID10. This has now been fixed to use the recommended 512 bytes size. |
| Clones | 1293537 (view as bug list) |
| Bug Blocks | 1216951, 1251815, 1293537 |
| Type | Bug |
| Last Closed | 2015-10-05 09:22:49 UTC |
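
As context for the Doc Text above: with the fix, bricks created from the UI get the XFS inode and directory block sizes recommended for Gluster bricks on every disk type, not just RAID6 and RAID10. The commands below are only a sketch of the corresponding mkfs.xfs invocations; the device paths and the RAID stripe geometry (su/sw values) are placeholders, not taken from this bug, and must match the actual layout.

    # Plain (JBOD) disk brick: inode size 512 and directory block size 8192.
    mkfs.xfs -f -i size=512 -n size=8192 /dev/vg_bricks/lv_brick1

    # RAID6/RAID10 brick: same sizes plus stripe geometry; su/sw here are placeholders.
    mkfs.xfs -f -i size=512 -n size=8192 -d su=256k,sw=10 /dev/vg_bricks/lv_brick2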
Description (Martin Bukatovic, 2015-07-20 16:00:15 UTC)
Hi Manoj,

Do you think we should take this as a blocker for 3.1?

Thanks,
kasturi

The mkfs.xfs options we recommend (in addition to RAID striping parameters) are: "-i size=512 -n size=8192" (inode size and directory block size). From the xfs_info output in comment 0 it appears that neither is applied. The runs to come up with these recommendations were done in bygone days; I haven't seen the actual measured impact of these options. But given our heavy reliance on extended attributes, my inclination is to mark the "-i size=512" omission as a blocker. Checking with Ben to see if he has anything to add.

Currently vdsm sets the inode size to 256 only for (normal) disks other than RAID devices; it sets the inode size to 512 for RAID6 and RAID10. As a workaround, the admin can fix this issue by recreating the XFS filesystem with an inode size of 512 and a directory block size of 8192 and remounting the device, for example:

    mkfs.xfs -i size=512 -n size=8192 <device path> -f

The doc text is edited. Please sign off so it can be included in Known Issues.

Edited and signed off.

Verified and works fine with build vdsm-4.16.20-1.3.el7rhgs.x86_64.

Created a brick on RHEL7 using a RAID6 device and the inode size is set to 512:

    xfs_info /rhgs/brick1/
    meta-data=/dev/mapper/vg--brick1-brick1 isize=512    agcount=32, agsize=146410912 blks
             =                              sectsz=512   attr=2, projid32bit=1
             =                              crc=0        finobt=0
    data     =                              bsize=4096   blocks=4685148416, imaxpct=5
             =                              sunit=32     swidth=320 blks
    naming   =version 2                     bsize=8192   ascii-ci=0 ftype=0
    log      =internal                      bsize=4096   blocks=521728, version=2
             =                              sectsz=512   sunit=32 blks, lazy-count=1
    realtime =none                          extsz=4096   blocks=0, rtextents=0

On a RHEL7 machine, created a partition on a disk and created a brick using that partition; the inode size is set to 512:

    [root@rhs-client24 ~]# xfs_info /rhgs/brick3
    meta-data=/dev/mapper/vg--brick3-brick3 isize=512    agcount=16, agsize=810928 blks
             =                              sectsz=512   attr=2, projid32bit=1
             =                              crc=0        finobt=0
    data     =                              bsize=4096   blocks=12974848, imaxpct=25
             =                              sunit=16     swidth=16 blks
    naming   =version 2                     bsize=8192   ascii-ci=0 ftype=0
    log      =internal                      bsize=4096   blocks=6336, version=2
             =                              sectsz=512   sunit=16 blks, lazy-count=1
    realtime =none                          extsz=4096   blocks=0, rtextents=0

On a RHEL7 machine, created a brick using a RAID10 device; the inode size is as follows:

    [root@birdman ~]# xfs_info /rhgs/brick1
    meta-data=/dev/mapper/vg--brick1-brick1 isize=512    agcount=32, agsize=76176768 blks
             =                              sectsz=512   attr=2, projid32bit=1
             =                              crc=0        finobt=0
    data     =                              bsize=4096   blocks=2437656576, imaxpct=5
             =                              sunit=64     swidth=64 blks
    naming   =version 2                     bsize=8192   ascii-ci=0 ftype=0
    log      =internal                      bsize=4096   blocks=521728, version=2
             =                              sectsz=512   sunit=64 blks, lazy-count=1
    realtime =none                          extsz=4096   blocks=0, rtextents=0

On a RHEL7 machine, created a brick using a RAID0 device; the inode size is as follows:

    [root@birdman ~]# xfs_info /rhgs/brick2
    meta-data=/dev/mapper/vg--brick2-brick2 isize=512    agcount=32, agsize=15108992 blks
             =                              sectsz=512   attr=2, projid32bit=1
             =                              crc=0        finobt=0
    data     =                              bsize=4096   blocks=483485696, imaxpct=5
             =                              sunit=128    swidth=256 blks
    naming   =version 2                     bsize=8192   ascii-ci=0 ftype=0
    log      =internal                      bsize=4096   blocks=236080, version=2
             =                              sectsz=512   sunit=8 blks, lazy-count=1
    realtime =none                          extsz=4096   blocks=0, rtextents=0

In all the above cases the inode size is 512. Marking this bug verified.

Hi Tim,

The doc text is updated. Please review it and share your technical review comments. If it looks OK, then sign off on the same.
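
For anyone checking an existing deployment against this bug, the isize value reported by xfs_info (as in the verification output above) shows whether a brick was created with the old 256-byte inodes. The loop below is only a sketch; it assumes the bricks are mounted under /rhgs/, a path taken from the verification examples above rather than from any fixed convention.

    # Report the XFS inode size of every brick mounted under /rhgs/ (mount point is an assumption).
    for brick in /rhgs/*; do
        printf '%s: ' "$brick"
        xfs_info "$brick" | grep -o 'isize=[0-9]*' | head -n 1
    done
    # A brick reporting isize=256 was created with the old defaults. Recreating it with the
    # workaround above (mkfs.xfs -f -i size=512 -n size=8192 <device path>) destroys its data,
    # so migrate or back up the brick contents first.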
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1848.html

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.