Bug 1244865 - [rhs] xfs filesystem is created with wrong inode size when setting up a brick
Summary: [rhs] xfs filesystem is created with wrong inode size when setting up a brick
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhsc
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.1
Assignee: Timothy Asir
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On:
Blocks: 1216951 1251815 1293537
 
Reported: 2015-07-20 16:00 UTC by Martin Bukatovic
Modified: 2023-09-14 03:02 UTC
CC List: 12 users

Fixed In Version: vdsm-4.16.20-1.3
Doc Type: Bug Fix
Doc Text:
Previously, when bricks were created using the UI, the xfs file system was created with an inode size of 256 bytes rather than the recommended 512 bytes for disk types other than RAID6 and RAID10. This has now been fixed to use the recommended size of 512 bytes.
Clone Of:
: 1293537 (view as bug list)
Environment:
Last Closed: 2015-10-05 09:22:49 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2015:1848 0 normal SHIPPED_LIVE Red Hat Gluster Storage Console 3.1 update 1 bug fixes 2015-10-05 13:19:50 UTC
oVirt gerrit 43847 0 master MERGED gluster: fix xfs filesystem is created with wrong inode size 2020-03-11 02:36:11 UTC
oVirt gerrit 45299 0 ovirt-3.5-gluster MERGED gluster: fix xfs filesystem is created with wrong inode size 2020-03-11 02:36:11 UTC
oVirt gerrit 50849 0 ovirt-3.6 MERGED gluster: fix xfs filesystem is created with wrong inode size 2020-03-11 02:36:11 UTC

Description Martin Bukatovic 2015-07-20 16:00:15 UTC
Description of problem
======================

When Gluster Storage Console creates an XFS filesystem to set up a gluster brick,
it does not set the inode size to 512 as suggested in the Administration Guide,
so the default value of 256 is used instead.

See what the "Red Hat Gluster Storage 3.1 Administration Guide" states:

> XFS Inode Size
>
> As Red Hat Gluster Storage makes extensive use of extended attributes, an XFS
> inode size of 512 bytes works better with Red Hat Gluster Storage than the
> default XFS inode size of 256 bytes. So, inode size for XFS must be set to
> 512 bytes while formatting the Red Hat Gluster Storage bricks. To set the
> inode size, you have to use -i size option with the mkfs.xfs command as shown
> in the following Logical Block Size for the Directory section. 
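
For illustration, a minimal sketch of the command the guide refers to, using the brick device and mount point from this report as example paths:

~~~
# Format a brick device with the recommended 512-byte inode size
# (device and mount paths are examples only)
mkfs.xfs -f -i size=512 /dev/mapper/vg--brick1-brick1

# After mounting the brick, confirm the inode size that was used
xfs_info /rhgs/brick1 | grep isize
~~~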

Version-Release number of selected component (if applicable)
============================================================

rhsc-3.1.0-0.62.el6.noarch

Steps to Reproduce
==================

1. Create gluster bricks via the Console (the "Storage Devices" tab of a particular host)
2. Create a gluster volume using the previously created bricks
3. Examine the inode size of the xfs filesystem backing the bricks of the newly created volume

Actual results
==============

On storage nodes, I see that inode size is set to 256:

~~~
# xfs_info /rhgs/brick1/
meta-data=/dev/mapper/vg--brick1-brick1 isize=256    agcount=16, agsize=1621936 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=25950976, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=12672, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
~~~

Expected results
================

The inode size of the xfs filesystem should be set to 512.

Comment 1 RamaKasturi 2015-07-21 05:42:18 UTC
Hi Manoj,
 
   Do you think we should take this as blocker for 3.1?

Thanks
kasturi

Comment 2 Manoj Pillai 2015-07-21 06:11:48 UTC
The mkfs.xfs options we recommend (in addition to RAID striping parameters) are:
"-i size=512 -n size=8192"
(inode size and directory block size)

From the xfs_info output in comment 0 it appears that neither is applied. The runs to come up with these recommendations were done in bygone days; I haven't seen the actual measured impact of these options.

But given our heavy reliance on extended attributes, my inclination is to mark the "-i size=512" omission as a blocker.

Checking with Ben to see if he has anything to add.
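
For clarity, the full recommended invocation would look roughly like the sketch below; the su/sw stripe values shown are placeholders that depend on the underlying RAID layout and are not taken from this bug:

~~~
# Recommended mkfs.xfs options: 512-byte inodes, 8192-byte directory blocks,
# plus RAID striping parameters (su/sw values below are placeholders)
mkfs.xfs -f -i size=512 -n size=8192 -d su=256k,sw=10 /dev/<brick-device>
~~~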

Comment 3 Timothy Asir 2015-07-21 09:39:06 UTC
Currently, vdsm sets the inode size to 256 for (normal) disks other than RAID devices. It sets the inode size to 512 only for RAID6 and RAID10.
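
In other words (a simplified sketch of the behaviour described above, not vdsm's actual code), the generated commands differed roughly as follows; <device> is a placeholder:

~~~
# RAID6 / RAID10 devices: inode size explicitly set to 512
mkfs.xfs -i size=512 <device>

# Other (plain) disks: no -i option passed, so the XFS default of 256 applies
mkfs.xfs <device>
~~~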

Comment 4 Timothy Asir 2015-07-22 08:11:07 UTC
As a workaround, the admin can fix this issue by recreating the XFS filesystem with an inode size of 512 and a directory block size of 8192, and then remounting the device.

e.g. mkfs.xfs -f -i size=512 -n size=8192 <device path>
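
A fuller sketch of the workaround sequence, assuming the brick device and mount point seen in this report (note that mkfs.xfs destroys existing data, so the brick contents must be restored or healed afterwards):

~~~
# Unmount the affected brick (paths are examples only)
umount /rhgs/brick1

# Recreate the filesystem with the recommended inode and directory block sizes
mkfs.xfs -f -i size=512 -n size=8192 /dev/mapper/vg--brick1-brick1

# Remount and verify the inode size
mount /dev/mapper/vg--brick1-brick1 /rhgs/brick1
xfs_info /rhgs/brick1 | grep isize
~~~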

Comment 7 monti lawrence 2015-07-24 14:55:10 UTC
Doc text is edited. Please sign off so it can be included in Known Issues.

Comment 8 Sahina Bose 2015-07-24 15:27:29 UTC
Edited and signed off

Comment 12 RamaKasturi 2015-08-28 07:12:19 UTC
Verified and works fine with build vdsm-4.16.20-1.3.el7rhgs.x86_64

Created a brick on RHEL7 using a RAID6 device and the inode size is set to 512:

 xfs_info /rhgs/brick1/
meta-data=/dev/mapper/vg--brick1-brick1 isize=512    agcount=32, agsize=146410912 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=4685148416, imaxpct=5
         =                       sunit=32     swidth=320 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=32 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

On a RHEL7 machine, created a partition on a disk and created a brick using that partition; the inode size is set to 512:

[root@rhs-client24 ~]# xfs_info /rhgs/brick3
meta-data=/dev/mapper/vg--brick3-brick3 isize=512    agcount=16, agsize=810928 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=12974848, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=6336, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

On a RHEL7 machine, created a brick using a RAID10 device; the inode size is shown below:

[root@birdman ~]# xfs_info /rhgs/brick1
meta-data=/dev/mapper/vg--brick1-brick1 isize=512    agcount=32, agsize=76176768 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=2437656576, imaxpct=5
         =                       sunit=64     swidth=64 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

On a RHEL7 machine, created a brick using a RAID0 device; the inode size is shown below:

[root@birdman ~]# xfs_info /rhgs/brick2
meta-data=/dev/mapper/vg--brick2-brick2 isize=512    agcount=32, agsize=15108992 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=483485696, imaxpct=5
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=236080, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

In all the above cases the inode size is 512. Marking this bug verified.

Comment 13 Bhavana 2015-09-22 06:35:24 UTC
Hi Tim,

The doc text is updated. Please review it and share your technical review comments. If it looks ok, then sign off on it.

Comment 15 errata-xmlrpc 2015-10-05 09:22:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1848.html

Comment 16 Red Hat Bugzilla 2023-09-14 03:02:18 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

