Bug 1743215 - glusterd-utils: 0-management: xfs_info exited with non-zero exit status [Permission denied]
Status: CLOSED UPSTREAM
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 6
Hardware: Unspecified
OS: Unspecified
Assignee: bugs@gluster.org
 
Reported: 2019-08-19 11:09 UTC by Jóhann B. Guðmundsson
Modified: 2020-03-12 12:13 UTC
CC: 11 users

Last Closed: 2020-03-12 12:13:47 UTC



Description Jóhann B. Guðmundsson 2019-08-19 11:09:24 UTC
Description of problem:

I'm periodically seeing this in the glusterd log:

[2019-08-19 10:53:31.512591] E [MSGID: 106334] [glusterd-utils.c:6990:glusterd_add_inode_size_to_dict] 0-management: xfs_info exited with non-zero exit status [Permission denied]
[2019-08-19 10:53:31.512638] E [MSGID: 106419] [glusterd-utils.c:7015:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size

Not sure if this is a misleading error message, since if I run xfs_info on the mounted xfs filesystem ( /var/lib/libvirt/images ) I get an "XFS_IOC_FSGEOMETRY: Function not implemented" error. That indicates either a misleading error message in glusterd-utils, a bug in xfsprogs, or simply a configuration error on my behalf. ( The fstab entry "rw,inode64,noatime,nouuid      1 2" is taken from the upstream documentation. )
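
For reference, the failing command and the message it prints (mount point taken from the fstab below):

# xfs_info /var/lib/libvirt/images
XFS_IOC_FSGEOMETRY: Function not implemented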

Here are the related fstab entries:

UUID=fad0263f-9858-498d-b146-45bed1daacbb	/srv/glusterfs		 xfs rw,inode64,noatime,nouuid   1 2
localhost:/virt01                               /var/lib/libvirt/images glusterfs defaults,_netdev,backupvolfile-server=localhost 0 0



Version-Release number of selected component (if applicable):

glusterfs-6.5-1.fc30.x86_64

How reproducible:

Steps to Reproduce:
1. 
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Atin Mukherjee 2019-08-27 08:28:42 UTC
I do not believe that this is a glusterfs bug. 

glusterd_add_inode_size_to_dict () passes the device (the underlying device on which the brick is mounted) to the xfs_prog. If the runner framework is unable to execute the xfs_prog call successfully, it throws up the errno, and in this case it's permission denied.

Could you point out your brick paths from the gluster volume info output, then run a df command to find the device backing the brick paths, and then execute xfs_prog <device> to see what it throws up?
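
In other words, roughly this sequence (paths are placeholders):

# gluster volume info       <- note the brick paths
# df <brick-path>           <- find the device backing each brick
# xfs_prog <device>         <- run that against the device glusterd uses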

Comment 2 Jóhann B. Guðmundsson 2019-08-28 13:48:05 UTC
There is no such thing as xfs_prog in Fedora, so presumably you just meant xfs_info, which triggers this error; if not, you probably need to ping Eric.

# gluster volume info
 
Volume Name: virt01
Type: Replicate
Volume ID: <sanitized>
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: <sanitized>:/srv/glusterfs/images
Brick2: <sanitized>:/srv/glusterfs/images
Brick3: <sanitized>:/srv/glusterfs/images
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
server.allow-insecure: on

The glusterfs mount point is the same on all hosts:

localhost:/virt01                      7.3T  149G  7.2T   2% /var/lib/libvirt/images

If I run it against the mapped device:
# xfs_info /dev/mapper/ht_gluster01-lv_gluster01
xfs_info: /dev/mapper/ht_gluster01-lv_gluster01 contains a mounted filesystem

fatal error -- couldn't initialize XFS library

If I run it against the mapped mount point for the device ( /srv/glusterfs/ ):
# xfs_info /srv/glusterfs/
meta-data=/dev/mapper/ht_gluster01-lv_gluster01 isize=512    agcount=32, agsize=61047296 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=1953513472, imaxpct=5
         =                       sunit=32     swidth=320 blks
naming   =version 2              bsize=8192   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

If I run it against the brick path, which is a directory called images residing under the /srv/glusterfs/ xfs mount point ( /srv/glusterfs/images ):
# xfs_info /srv/glusterfs/images
/srv/glusterfs/images: Not a XFS mount point.

Note that none of the error messages xfs_info produced are permission errors.

Comment 3 bueche 2019-10-08 14:05:12 UTC
Hello,

I'm a victim of this bug as well; I run 6.5 on SLES 12. I went a little further and patched /usr/sbin/xfs_info (a bash script) to show the command executed.
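
The wrapper is essentially a thin shell script around xfs_growfs, so a patch as small as adding set -x near the top is enough to print the forked command (a sketch; the exact script varies by xfsprogs version):

set -x
xfs_growfs -p xfs_info -n "$@"

First, the filesystems involved: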

------------------------------------------------------
root@srv1:~ # df
Filesystem                     1K-blocks     Used Available Use% Mounted on
...
not relevant
...
/dev/mapper/vgroot-lv_data     104806400 39871108  64935292  39% /data
srv1:repl-vol          104806400 40919172  63887228  40% /data/glusterfs/mnt/repl-vol
------------------------------------------------------

The command:

/usr/sbin/gluster volume status repl-vol srv1:/data/glusterfs/vol0/brick0 detail

Status of volume: repl-vol
------------------------------------------------------------------------------
Brick                : Brick srv1:/data/glusterfs/vol0/brick0
...
Inode Size           : N/A                 
...

------------------------------------------------------

Log file glusterd.log:

[2019-10-08 14:00:47.819517] I [MSGID: 106499] [glusterd-handler.c:4429:__glusterd_handle_status_volume] 0-management: Received status volume req for volume repl-vol
[2019-10-08 14:00:47.838388] E [MSGID: 106419] [glusterd-utils.c:6998:glusterd_add_inode_size_to_dict] 0-management: Unable to retrieve inode size using xfs_info
[2019-10-08 14:00:47.838416] E [MSGID: 106419] [glusterd-utils.c:7016:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size

------------------------------------------------------

And here it is: the command forked by glusterd-utils is using the wrong data, the device instead of the mount point:

root@srv1:~ # xfs_growfs -p xfs_info -n  "/dev/mapper/vgroot-lv_data"
xfs_info: /dev/mapper/vgroot-lv_data is not a mounted XFS filesystem

It should be:

root@srv1:~ # xfs_growfs -p xfs_info -n  "/data"                     
meta-data=/dev/mapper/vgroot-lv_data isize=512    agcount=4, agsize=6553600 blks
         =                       sectsz=512   attr=2, projid32bit=1
...

So I think glusterd passes the wrong argument to xfs_info: the device where the mount point is expected.
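
A sketch of the obvious fix direction, shown in shell for illustration only (the actual change would live in glusterd-utils.c; findmnt is from util-linux):

dev=/dev/mapper/vgroot-lv_data                # example: the brick's backing device
mnt=$(findmnt -n -o TARGET --source "$dev")   # resolve the device to its mount point
xfs_info "$mnt"                               # succeeds where xfs_info "$dev" fails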

Comment 4 Worker Ant 2020-03-12 12:13:47 UTC
This bug has been moved to https://github.com/gluster/glusterfs/issues/845 and will be tracked there from now on. Visit the GitHub issue URL for further details.

