Bug 1288238 - failed to get inode size
Summary: failed to get inode size
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.5.5
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-12-03 22:40 UTC by Neil Van Lysel
Modified: 2016-06-17 15:57 UTC
CC: 3 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2016-06-17 15:57:15 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Neil Van Lysel 2015-12-03 22:40:25 UTC
Description of problem:
The following errors appear over and over in the etc-glusterfs-glusterd.vol.log log:
[2015-12-03 22:19:14.147379] E [glusterd-utils.c:5166:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2015-12-03 22:22:14.100826] E [glusterd-utils.c:5140:glusterd_add_inode_size_to_dict] 0-management: xfs_info exited with non-zero exit status
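The second log line indicates glusterd ran xfs_info and failed while trying to extract the inode size from its output. A rough, standalone sketch of that extraction, using the xfs_info output quoted later in this report (the sample line is inlined so the snippet runs without an XFS filesystem; the sed pattern is illustrative, not glusterd's actual parsing code in glusterd-utils.c):

```shell
# First line of xfs_info output for /brick1, as captured below in this report.
sample='meta-data=/dev/sdb1              isize=512    agcount=28, agsize=268435455 blks'

# Pull the numeric value of the isize= field; glusterd needs this value
# to populate "Inode Size" in "gluster volume status ... detail".
isize=$(printf '%s\n' "$sample" | sed -n 's/.*isize=\([0-9]*\).*/\1/p')
echo "inode size: $isize"
```

Since xfs_info itself exits non-zero here, glusterd never gets output to parse, and the status command falls back to reporting "Inode Size: N/A".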

Version-Release number of selected component (if applicable):
glusterfs-server-3.5.5-2.el6.x86_64
How reproducible:
always

Steps to Reproduce:
1. Format bricks with xfs
2. Create 8x2 distributed-replicate volume
3. Start volume

Actual results:
glusterd repeatedly logs "failed to get inode size" errors, and `gluster volume status home detail` reports "Inode Size: N/A".

Expected results:
glusterd determines the inode size of the xfs bricks (isize=512 per xfs_info) and reports it in the volume status output.
Additional info:
[root@storage-1 ~]# df -hT /brick1
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/sdb1      xfs    28T  373G   27T   2% /brick1


Inode size is clearly defined.
[root@storage-1 ~]# xfs_info /brick1
meta-data=/dev/sdb1              isize=512    agcount=28, agsize=268435455 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=7324302848, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@storage-1 ~]# xfs_info /brick1/home
meta-data=/dev/sdb1              isize=512    agcount=28, agsize=268435455 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=7324302848, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0


Note the value of inode size below.
[root@storage-1 ~]# gluster volume status home detail
Status of volume: home
------------------------------------------------------------------------------
Brick                : Brick storage-1:/brick1/home
Port                 : 49152               
Online               : Y                   
Pid                  : 8281                
File System          : xfs                 
Device               : /dev/sdb1           
Mount Options        : rw,noatime,nodiratime,barrier,largeio,inode64
Inode Size           : N/A                 
Disk Space Free      : 33.1TB              
Total Disk Space     : 36.4TB              
Inode Count          : 3906469632          
Free Inodes          : 3902138232  


[root@storage-1 ~]# gluster volume info
Volume Name: home
Type: Distributed-Replicate
Volume ID: 2694f438-08f6-48fc-a072-324d4701f112
Status: Started
Number of Bricks: 8 x 2 = 16
Transport-type: tcp
Bricks:
Brick1: storage-7:/brick1/home
Brick2: storage-8:/brick1/home
Brick3: storage-9:/brick1/home
Brick4: storage-10:/brick1/home
Brick5: storage-1:/brick1/home
Brick6: storage-2:/brick1/home
Brick7: storage-3:/brick1/home
Brick8: storage-4:/brick1/home
Brick9: storage-5:/brick1/home
Brick10: storage-6:/brick1/home
Brick11: storage-11:/brick1/home
Brick12: storage-12:/brick1/home
Brick13: storage-13:/brick1/home
Brick14: storage-14:/brick1/home
Brick15: storage-15:/brick1/home
Brick16: storage-16:/brick1/home
Options Reconfigured:
performance.cache-size: 100MB
performance.write-behind-window-size: 100MB
nfs.disable: on
features.quota: on
features.default-soft-limit: 90%


GLUSTER SERVER PACKAGES:
[root@storage-1 ~]# rpm -qa |grep gluster
glusterfs-cli-3.5.5-2.el6.x86_64
glusterfs-server-3.5.5-2.el6.x86_64
glusterfs-libs-3.5.5-2.el6.x86_64
glusterfs-fuse-3.5.5-2.el6.x86_64
glusterfs-3.5.5-2.el6.x86_64
glusterfs-api-3.5.5-2.el6.x86_64


XFSPROGS PACKAGE:
[root@storage-1 ~]# rpm -qa |grep xfsprogs
xfsprogs-3.1.1-16.el6.x86_64

Comment 1 Atin Mukherjee 2015-12-08 12:15:03 UTC
We'd need to backport http://review.gluster.org/#/c/8492/ to the 3.5 branch to fix this. Any volunteers?

Comment 2 Niels de Vos 2016-06-17 15:57:15 UTC
This bug is being closed because GlusterFS 3.5 is marked End-Of-Life. There will be no further updates to this version. If you still face this issue in a more current release, please open a new bug against a version that still receives bugfixes.

