Description of problem
======================

When Gluster Storage Console creates an XFS filesystem to set up a gluster
brick, it doesn't set the inode size to 512 as suggested in the Administration
Guide, so the default value of 256 is used instead.

See what the "Red Hat Gluster Storage 3.1 Administration Guide" states:

> XFS Inode Size
>
> As Red Hat Gluster Storage makes extensive use of extended attributes, an XFS
> inode size of 512 bytes works better with Red Hat Gluster Storage than the
> default XFS inode size of 256 bytes. So, inode size for XFS must be set to
> 512 bytes while formatting the Red Hat Gluster Storage bricks. To set the
> inode size, you have to use -i size option with the mkfs.xfs command as shown
> in the following Logical Block Size for the Directory section.

Version-Release number of selected component (if applicable)
============================================================

rhsc-3.1.0-0.62.el6.noarch

Steps to Reproduce
==================

1. Create gluster bricks via Console (the "Storage Devices" tab of a particular host)
2. Create a gluster volume using the previously created bricks
3. Examine the inode size of the xfs filesystem backing the bricks of the just created volume

Actual results
==============

On the storage nodes, I see that the inode size is set to 256:

~~~
# xfs_info /rhgs/brick1/
meta-data=/dev/mapper/vg--brick1-brick1 isize=256    agcount=16, agsize=1621936 blks
         =                              sectsz=512   attr=2, projid32bit=0
data     =                              bsize=4096   blocks=25950976, imaxpct=25
         =                              sunit=16     swidth=16 blks
naming   =version 2                     bsize=4096   ascii-ci=0
log      =internal                      bsize=4096   blocks=12672, version=2
         =                              sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                          extsz=4096   blocks=0, rtextents=0
~~~

Expected results
================

The inode size of the xfs filesystem should be set to 512.
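For reference, a minimal sketch of the brick format command the Administration Guide's quote points at, plus a quick check of the resulting inode size. The device path is a placeholder; the mount point is the one from the output above:

~~~
# sketch of the Admin Guide recommendation; <device path> is a placeholder
mkfs.xfs -f -i size=512 <device path>

# quick check of the inode size on an existing brick mount
xfs_info /rhgs/brick1/ | grep isize
~~~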
Hi Manoj,

Do you think we should take this as a blocker for 3.1?

Thanks
kasturi
The mkfs.xfs options we recommend (in addition to RAID striping parameters) are:

"-i size=512 -n size=8192" (inode size and directory block size)

From the xfs_info output in comment 0 it appears that neither is applied.

The benchmark runs behind these recommendations were done quite a while ago, and I haven't seen the actual measured impact of these options recently. But given our heavy reliance on extended attributes, my inclination is to mark the "-i size=512" omission as a blocker.

Checking with Ben to see if he has anything to add.
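For illustration only, a full invocation combining these options with RAID stripe geometry might look like the sketch below. The su/sw values are made-up placeholders that must be derived from the actual RAID layout; they are not recommendations:

~~~
# sketch: -i/-n per the recommendation above; su/sw are placeholder stripe
# parameters (e.g. 256 KiB stripe unit across 10 data disks)
mkfs.xfs -f -i size=512 -n size=8192 -d su=256k,sw=10 <device path>
~~~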
Currently vdsm sets the inode size to 512 only for RAID6 and RAID10 devices; for normal (non-RAID) disks it sets the inode size to 256.
As a workaround, the admin can fix this issue by recreating the XFS filesystem with an inode size of 512 and a directory block size of 8192, and remounting the device. For example:

mkfs.xfs -f -i size=512 -n size=8192 <device path>
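A slightly fuller sketch of that workaround, assuming the example device and mount point from comment 0 and that the brick holds no data that needs to be preserved (reformatting destroys its contents):

~~~
# WARNING: this wipes the brick's existing contents
umount /rhgs/brick1
mkfs.xfs -f -i size=512 -n size=8192 /dev/mapper/vg--brick1-brick1
mount /dev/mapper/vg--brick1-brick1 /rhgs/brick1
~~~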
Doc text is edited. Please sign off so it can be included in Known Issues.
Edited and signed off
Verified and works fine with build vdsm-4.16.20-1.3.el7rhgs.x86_64.

Created a brick on RHEL7 using a RAID6 device; the inode size is set to 512:

~~~
xfs_info /rhgs/brick1/
meta-data=/dev/mapper/vg--brick1-brick1 isize=512    agcount=32, agsize=146410912 blks
         =                              sectsz=512   attr=2, projid32bit=1
         =                              crc=0        finobt=0
data     =                              bsize=4096   blocks=4685148416, imaxpct=5
         =                              sunit=32     swidth=320 blks
naming   =version 2                     bsize=8192   ascii-ci=0 ftype=0
log      =internal                      bsize=4096   blocks=521728, version=2
         =                              sectsz=512   sunit=32 blks, lazy-count=1
realtime =none                          extsz=4096   blocks=0, rtextents=0
~~~

On a RHEL7 machine, created a partition on a disk and created a brick using that partition; the inode size is set to 512:

~~~
[root@rhs-client24 ~]# xfs_info /rhgs/brick3
meta-data=/dev/mapper/vg--brick3-brick3 isize=512    agcount=16, agsize=810928 blks
         =                              sectsz=512   attr=2, projid32bit=1
         =                              crc=0        finobt=0
data     =                              bsize=4096   blocks=12974848, imaxpct=25
         =                              sunit=16     swidth=16 blks
naming   =version 2                     bsize=8192   ascii-ci=0 ftype=0
log      =internal                      bsize=4096   blocks=6336, version=2
         =                              sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                          extsz=4096   blocks=0, rtextents=0
~~~

On a RHEL7 machine, created a brick using a RAID10 device; the following is the inode size:

~~~
[root@birdman ~]# xfs_info /rhgs/brick1
meta-data=/dev/mapper/vg--brick1-brick1 isize=512    agcount=32, agsize=76176768 blks
         =                              sectsz=512   attr=2, projid32bit=1
         =                              crc=0        finobt=0
data     =                              bsize=4096   blocks=2437656576, imaxpct=5
         =                              sunit=64     swidth=64 blks
naming   =version 2                     bsize=8192   ascii-ci=0 ftype=0
log      =internal                      bsize=4096   blocks=521728, version=2
         =                              sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                          extsz=4096   blocks=0, rtextents=0
~~~

On a RHEL7 machine, created a brick using a RAID0 device; the following is the inode size:

~~~
[root@birdman ~]# xfs_info /rhgs/brick2
meta-data=/dev/mapper/vg--brick2-brick2 isize=512    agcount=32, agsize=15108992 blks
         =                              sectsz=512   attr=2, projid32bit=1
         =                              crc=0        finobt=0
data     =                              bsize=4096   blocks=483485696, imaxpct=5
         =                              sunit=128    swidth=256 blks
naming   =version 2                     bsize=8192   ascii-ci=0 ftype=0
log      =internal                      bsize=4096   blocks=236080, version=2
         =                              sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                          extsz=4096   blocks=0, rtextents=0
~~~

In all the above cases the inode size is 512. Marking this bug verified.
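As a convenience, the same check can be run across all brick mount points in one loop (paths as used in the outputs above):

~~~
for b in /rhgs/brick1 /rhgs/brick2 /rhgs/brick3; do
    echo -n "$b: "; xfs_info "$b" | grep -o 'isize=[0-9]*'
done
~~~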
Hi Tim,

The doc text is updated. Please review it and share your technical review comments. If it looks OK, please sign off on it.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1848.html
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days