Bug 1288238 - failed to get inode size
Status: CLOSED EOL
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.5.5
Hardware: x86_64 Linux
Priority: unspecified
Severity: low
Whiteboard: Triaged
Reported: 2015-12-03 17:40 EST by Neil Van Lysel
Modified: 2016-06-17 11:57 EDT
Doc Type: Bug Fix
Last Closed: 2016-06-17 11:57:15 EDT
Type: Bug

Description Neil Van Lysel 2015-12-03 17:40:25 EST
Description of problem:
The following errors appear repeatedly in etc-glusterfs-glusterd.vol.log:
[2015-12-03 22:19:14.147379] E [glusterd-utils.c:5166:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2015-12-03 22:22:14.100826] E [glusterd-utils.c:5140:glusterd_add_inode_size_to_dict] 0-management: xfs_info exited with non-zero exit status
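A quick way to see what glusterd is running into (an assumption based on the log messages: glusterd_add_inode_size_to_dict shells out to xfs_info, and the device path /dev/sdb1 is a guess at the argument it passes; with xfsprogs 3.1.x, xfs_info only accepts a mounted filesystem path, so a raw device argument would exit non-zero):

# Mimic the suspected glusterd invocation against the device, then
# against the mount point, and compare exit statuses:
[root@storage-1 ~]# xfs_info /dev/sdb1; echo "exit status: $?"
[root@storage-1 ~]# xfs_info /brick1;   echo "exit status: $?"

If the first command fails while the second succeeds, the non-zero exit status in the log would be explained by glusterd passing the device rather than the mount point to xfs_info.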

Version-Release number of selected component (if applicable):
glusterfs-3.5.5-2.el6.x86_64 (full package list under Additional info)
How reproducible:
always

Steps to Reproduce:
1. Format bricks with xfs
2. Create 8x2 distributed-replicate volume
3. Start the volume (a command sketch of these steps follows)
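A minimal command sketch of the steps above (hostnames, device, brick paths, and mount options are taken from this report; the mkfs.xfs inode-size option is an assumption matching the isize=512 shown below, and the format/mount steps run on each of the 16 servers):

[root@storage-1 ~]# mkfs.xfs -i size=512 /dev/sdb1
[root@storage-1 ~]# mount -o noatime,nodiratime,barrier,largeio,inode64 /dev/sdb1 /brick1
[root@storage-1 ~]# mkdir -p /brick1/home
[root@storage-1 ~]# gluster volume create home replica 2 \
    storage-7:/brick1/home  storage-8:/brick1/home \
    storage-9:/brick1/home  storage-10:/brick1/home \
    storage-1:/brick1/home  storage-2:/brick1/home \
    storage-3:/brick1/home  storage-4:/brick1/home \
    storage-5:/brick1/home  storage-6:/brick1/home \
    storage-11:/brick1/home storage-12:/brick1/home \
    storage-13:/brick1/home storage-14:/brick1/home \
    storage-15:/brick1/home storage-16:/brick1/home
[root@storage-1 ~]# gluster volume start home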

Actual results:
glusterd logs the errors above, and "gluster volume status home detail" reports Inode Size as N/A.

Expected results:
No xfs_info errors in the log, and Inode Size reports the actual value (512 here).
Additional info:
[root@storage-1 ~]# df -hT /brick1
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/sdb1      xfs    28T  373G   27T   2% /brick1


The inode size is clearly defined (isize=512 in the xfs_info output):
[root@storage-1 ~]# xfs_info /brick1
meta-data=/dev/sdb1              isize=512    agcount=28, agsize=268435455 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=7324302848, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@storage-1 ~]# xfs_info /brick1/home
meta-data=/dev/sdb1              isize=512    agcount=28, agsize=268435455 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=7324302848, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0


Note that Inode Size is reported as N/A below.
[root@storage-1 ~]# gluster volume status home detail
Status of volume: home
------------------------------------------------------------------------------
Brick                : Brick storage-1:/brick1/home
Port                 : 49152               
Online               : Y                   
Pid                  : 8281                
File System          : xfs                 
Device               : /dev/sdb1           
Mount Options        : rw,noatime,nodiratime,barrier,largeio,inode64
Inode Size           : N/A                 
Disk Space Free      : 33.1TB              
Total Disk Space     : 36.4TB              
Inode Count          : 3906469632          
Free Inodes          : 3902138232  
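Even though glusterd reports N/A, the inode size can be confirmed straight from the on-disk superblock as a workaround (a sketch; xfs_db ships with xfsprogs, and -r opens the device read-only, so it should be safe on a mounted brick):

[root@storage-1 ~]# xfs_db -r -c 'sb 0' -c 'print inodesize' /dev/sdb1
inodesize = 512

The printed value matches the isize=512 shown by xfs_info above.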


[root@storage-1 ~]# gluster volume info
Volume Name: home
Type: Distributed-Replicate
Volume ID: 2694f438-08f6-48fc-a072-324d4701f112
Status: Started
Number of Bricks: 8 x 2 = 16
Transport-type: tcp
Bricks:
Brick1: storage-7:/brick1/home
Brick2: storage-8:/brick1/home
Brick3: storage-9:/brick1/home
Brick4: storage-10:/brick1/home
Brick5: storage-1:/brick1/home
Brick6: storage-2:/brick1/home
Brick7: storage-3:/brick1/home
Brick8: storage-4:/brick1/home
Brick9: storage-5:/brick1/home
Brick10: storage-6:/brick1/home
Brick11: storage-11:/brick1/home
Brick12: storage-12:/brick1/home
Brick13: storage-13:/brick1/home
Brick14: storage-14:/brick1/home
Brick15: storage-15:/brick1/home
Brick16: storage-16:/brick1/home
Options Reconfigured:
performance.cache-size: 100MB
performance.write-behind-window-size: 100MB
nfs.disable: on
features.quota: on
features.default-soft-limit: 90%


GLUSTER SERVER PACKAGES:
[root@storage-1 ~]# rpm -qa |grep gluster
glusterfs-cli-3.5.5-2.el6.x86_64
glusterfs-server-3.5.5-2.el6.x86_64
glusterfs-libs-3.5.5-2.el6.x86_64
glusterfs-fuse-3.5.5-2.el6.x86_64
glusterfs-3.5.5-2.el6.x86_64
glusterfs-api-3.5.5-2.el6.x86_64


XFSPROGS PACKAGE:
[root@storage-1 ~]# rpm -qa |grep xfsprogs
xfsprogs-3.1.1-16.el6.x86_64
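In this xfsprogs release, xfs_info is reportedly a small shell wrapper around "xfs_growfs -n", which only works on a mounted filesystem path (an assumption worth verifying on the affected host):

[root@storage-1 ~]# file $(which xfs_info)
[root@storage-1 ~]# cat $(which xfs_info)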
Comment 1 Atin Mukherjee 2015-12-08 07:15:03 EST
We'd need to backport http://review.gluster.org/#/c/8492/ to the 3.5 branch to fix this. Any volunteers?
Comment 2 Niels de Vos 2016-06-17 11:57:15 EDT
This bug is being closed because GlusterFS 3.5 is marked End-Of-Life. There will be no further updates to this version. If you still face this issue in a more current release, please open a new bug against a version that still receives bugfixes.
