Bug tickets must have version flags set prior to targeting them to a release. Please ask the maintainer to set the correct version flags, and only then set the target milestone.
Fixed bug tickets must have version flags set prior to being fixed. Please set the correct version flags and move the bugs back to their previous status after this is corrected.
cc'ed Manoj and John... This really matters for small-file workloads, because an inode size of 256 combined with lots of xattrs guarantees that XFS will have to allocate and write an additional 4-KB block to update xattrs. It's not clear to me that 512 is sufficient in all cases, though. It used to be sufficient 3 years ago, but Gluster has added many xattrs since then. It's easy to check using "xfs_bmap -a"; see the sketch below. Keep in mind that any xattr bigger than 254 bytes will automatically trigger a separate 4-KB block allocation for the xattr data (I learned this from Ceph developers).

https://home.corp.redhat.com/wiki/xfs-filesystem-parameters-red-hat-storage-gluster-bricks

With the next higher size, 1024, we saw problems with inodes consuming lots of slab space, and XFS also began to have problems allocating inodes (this may have been fixed since then: http://oss.sgi.com/archives/xfs/2014-11/msg00456.html).
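As a hedged sketch of that check (the brick file path here is hypothetical; substitute any real file on a brick):

~~~
# Print the extent map of the file's xattr (attribute) fork:
xfs_bmap -a /rhgs/brick1/some-file
# "no extents" means all xattrs fit inside the inode;
# any extent listed means XFS allocated a separate 4-KB block for xattr data.
~~~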
Description of problem
======================
When Gluster Storage Console creates an XFS filesystem to set up a gluster brick, it doesn't set the inode size to 512 as suggested in the Admin. Guide, so the default value of 256 is used instead.

See what the "Red Hat Gluster Storage 3.1 Administration Guide" states:

> XFS Inode Size
>
> As Red Hat Gluster Storage makes extensive use of extended attributes, an XFS
> inode size of 512 bytes works better with Red Hat Gluster Storage than the
> default XFS inode size of 256 bytes. So, inode size for XFS must be set to
> 512 bytes while formatting the Red Hat Gluster Storage bricks. To set the
> inode size, you have to use -i size option with the mkfs.xfs command as shown
> in the following Logical Block Size for the Directory section.

Version-Release number of selected component (if applicable)
============================================================
rhsc-3.1.0-0.62.el6.noarch

Steps to Reproduce
==================
1. Create gluster bricks via Console (tab "Storage Devices" of a particular host)
2. Create a gluster volume using the previously created bricks
3. Examine the inode size of the XFS filesystem backing the bricks of the just created volume

Actual results
==============
On storage nodes, I see that the inode size is set to 256:

~~~
# xfs_info /rhgs/brick1/
meta-data=/dev/mapper/vg--brick1-brick1 isize=256    agcount=16, agsize=1621936 blks
         =                              sectsz=512   attr=2, projid32bit=0
data     =                              bsize=4096   blocks=25950976, imaxpct=25
         =                              sunit=16     swidth=16 blks
naming   =version 2                     bsize=4096   ascii-ci=0
log      =internal                      bsize=4096   blocks=12672, version=2
         =                              sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                          extsz=4096   blocks=0, rtextents=0
~~~

Expected results
================
The inode size of the XFS filesystem should be set to 512.
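For reference, a minimal sketch of the mkfs.xfs invocation the Administration Guide calls for (the device path is reused from the xfs_info output above; any other mkfs options a deployment needs are omitted):

~~~
# Format the brick with a 512-byte inode size, per the Admin Guide:
mkfs.xfs -i size=512 /dev/mapper/vg--brick1-brick1
~~~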
Ramesh, can this be moved to ON_QA? Has the patch been merged to 3.6?
Tested with RHEV 3.6.3.4 and RHGS 3.1.2 (adding the RHGS 3.1.2 node to a cluster with 3.5 compatibility level). When the brick is created from the UI, it is formatted with an XFS filesystem with the inode size set to 512:

~~~
[root@ ~]# xfs_info /gluster-bricks/brick1 | grep isize
meta-data=/dev/mapper/vg--brick1-brick1 isize=512    agcount=8, agsize=163776 blks
~~~