Bug 1293537 - [gluster] xfs filesystem is created with wrong inode size when setting up a brick
Summary: [gluster] xfs filesystem is created with wrong inode size when setting up a brick
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: vdsm
Classification: oVirt
Component: Gluster
Version: 4.17.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ovirt-3.6.5
Target Release: 4.17.15
Assignee: Ramesh N
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1244865
Blocks: 1216951 1251815
 
Reported: 2015-12-22 05:06 UTC by Ramesh N
Modified: 2016-04-21 14:40 UTC (History)
CC List: 16 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 1244865
Environment:
Last Closed: 2016-04-21 14:40:08 UTC
oVirt Team: Gluster
Embargoed:
rule-engine: ovirt-3.6.z+
ylavi: planning_ack+
rnachimu: devel_ack+
rule-engine: testing_ack+




Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 43847 0 None None None 2015-12-22 05:06:17 UTC
oVirt gerrit 45299 0 None None None 2015-12-22 05:06:17 UTC
oVirt gerrit 50849 0 ovirt-3.6 MERGED gluster: fix xfs filesystem is created with wrong inode size 2015-12-24 08:46:18 UTC

Comment 1 Red Hat Bugzilla Rules Engine 2015-12-22 05:39:32 UTC
Bug tickets must have version flags set prior to targeting them to a release. Please ask maintainer to set the correct version flags and only then set the target milestone.

Comment 2 Red Hat Bugzilla Rules Engine 2015-12-24 08:46:24 UTC
Fixed bug tickets must have version flags set prior to fixing them. Please set the correct version flags and move the bugs back to the previous status after this is corrected.

Comment 3 Ben England 2016-01-04 14:53:22 UTC
cc'ed Manoj and John...

So this really matters for small-file workloads, because using an inode size of 256 with lots of xattrs guarantees that XFS will have to allocate and write an additional 4-KB block to update xattrs.  It's not clear to me that 512 is sufficient in all cases though.  It was 3 years ago, but Gluster has added many xattrs since then.  It's easy to check using "xfs_bmap -a".  Keep in mind that any xattr bigger than 254 bytes will automatically trigger a separate 4-KB block allocation for the xattr data (I learned this from Ceph developers).
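For reference, that check can be run against any file on a brick; the path below is only a placeholder:

~~~
# Dump the attribute fork extent map for a file on the brick (path is hypothetical).
# If the xattrs still fit inside the inode, no extents are listed; any listed
# extents mean XFS had to spill the xattrs into separate 4-KB blocks.
xfs_bmap -a /rhgs/brick1/some-file
~~~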

https://home.corp.redhat.com/wiki/xfs-filesystem-parameters-red-hat-storage-gluster-bricks

With the next higher size, 1024, we saw problems with inodes consuming lots of slab space, and XFS also began to have problems allocating inodes (this may have been fixed in the 3 years since).

http://oss.sgi.com/archives/xfs/2014-11/msg00456.html

Comment 4 Sahina Bose 2016-01-13 09:50:42 UTC
Description of problem
======================

When the Gluster Storage Console creates an XFS filesystem to set up a gluster brick,
it does not set the inode size to 512 as recommended in the Administration Guide,
so the default value of 256 is used instead.

See what "Red Hat Gluster Storage 3.1 Administration Guide" states:

> XFS Inode Size
>
> As Red Hat Gluster Storage makes extensive use of extended attributes, an XFS
> inode size of 512 bytes works better with Red Hat Gluster Storage than the
> default XFS inode size of 256 bytes. So, inode size for XFS must be set to
> 512 bytes while formatting the Red Hat Gluster Storage bricks. To set the
> inode size, you have to use -i size option with the mkfs.xfs command as shown
> in the following Logical Block Size for the Directory section. 
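
For illustration, a minimal mkfs.xfs invocation following that recommendation might look like the following; the device path is taken from the xfs_info output below and is only an example, not necessarily what the Console uses:

~~~
# Format a brick device with a 512-byte inode size, as the Admin Guide recommends.
# /dev/mapper/vg--brick1-brick1 is just an example device path.
mkfs.xfs -i size=512 /dev/mapper/vg--brick1-brick1
~~~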

Version-Release number of selected component (if applicable)
============================================================

rhsc-3.1.0-0.62.el6.noarch

Steps to Reproduce
==================

1. Create gluster bricks via Console (tab "Storage Devices" of particular host)
2. Create gluster volume using previously created bricks
3. Examine inode size of xfs filesystem backing bricks of just created volume

Actual results
==============

On storage nodes, I see that inode size is set to 256:

~~~
# xfs_info /rhgs/brick1/
meta-data=/dev/mapper/vg--brick1-brick1 isize=256    agcount=16, agsize=1621936 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=25950976, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=12672, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
~~~

Expected results
================

Inode size of xfs filesystem should be set to 512.
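
The result can be verified on each storage node, for example:

~~~
# After the fix, the meta-data line should report isize=512 (mount point as in this report).
xfs_info /rhgs/brick1/ | grep isize
~~~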

Comment 5 Sahina Bose 2016-01-13 09:51:01 UTC

(Duplicate of comment 4.)

Comment 6 Sahina Bose 2016-01-13 10:25:59 UTC
Ramesh, can this be moved to ON_QA? Has the patch been merged to 3.6?

Comment 7 SATHEESARAN 2016-03-16 02:45:20 UTC
Tested with RHEV 3.6.3.4 and RHGS 3.1.2 (adding the RHGS 3.1.2 node at 3.5 cluster compatibility level).

When the brick is created from the UI, it is formatted with an XFS filesystem with the inode size set to 512:

[root@ ~]# xfs_info /gluster-bricks/brick1 | grep isize
meta-data=/dev/mapper/vg--brick1-brick1 isize=512    agcount=8, agsize=163776 blks

