Bug 1037511 - Operation not permitted occurred during setattr of <nul>
Summary: Operation not permitted occurred during setattr of <nul>
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: GlusterFS
Classification: Community
Component: quota
Version: mainline
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-12-03 10:10 UTC by cyber
Modified: 2019-05-07 13:49 UTC
CC: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-05-07 13:49:43 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description cyber 2013-12-03 10:10:12 UTC
Description of problem:

When a user creates a directory, errors appear in the brick logs and `gluster volume heal volume1 info` never returns an empty list.

==> /var/log/glusterfs/bricks/mnt-brick1.log <==
[2013-12-03 09:26:45.548638] E [marker.c:2080:marker_setattr_cbk] 0-volume1-marker: Operation not permitted occurred during setattr of <nul>
[2013-12-03 09:26:45.548662] I [server-rpc-fops.c:1778:server_setattr_cbk] 0-volume1-server: 112: SETATTR /testdir/dir1 (9787c62c-534c-4a91-93b6-d87514e7a2b5) ==> (Operation not permitted)

==> /var/log/glusterfs/bricks/mnt-brick2.log <==
[2013-12-03 09:26:45.548727] E [marker.c:2080:marker_setattr_cbk] 0-volume1-marker: Operation not permitted occurred during setattr of <nul>
[2013-12-03 09:26:45.548743] I [server-rpc-fops.c:1778:server_setattr_cbk] 0-volume1-server: 209: SETATTR /testdir/dir1 (9787c62c-534c-4a91-93b6-d87514e7a2b5) ==> (Operation not permitted)

# gluster volume heal volume1 info
Gathering Heal info on volume volume1 has been successful

Brick gluster:/mnt/brick1
Number of entries: 2
/testdir/dir2
/testdir/dir1

Brick gluster:/mnt/brick2
Number of entries: 2
/testdir/dir2
/testdir/dir1


root@default:/mnt# getfattr -d -m . -e hex brick1/testdir/dir1
# file: brick1/testdir/dir1
trusted.afr.volume1-client-0=0x000000000000000100000000
trusted.afr.volume1-client-1=0x000000000000000100000000
trusted.gfid=0x9787c62c534c4a9193b6d87514e7a2b5
trusted.glusterfs.dht=0x000000010000000000000000ffffffff

root@default:/mnt# getfattr -d -m . -e hex brick2/testdir/dir1
# file: brick2/testdir/dir1
trusted.afr.volume1-client-0=0x000000000000000100000000
trusted.afr.volume1-client-1=0x000000000000000100000000
trusted.gfid=0x9787c62c534c4a9193b6d87514e7a2b5
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
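
For reference, my understanding of the trusted.afr changelog format (an assumption, not something confirmed in this report) is that the 12-byte value holds three big-endian 32-bit counters for pending data, metadata and entry operations. The value 0x000000000000000100000000 above would then mean one pending metadata operation per brick, which matches the failed SETATTR in the logs. A quick decode under that assumption:

# val=000000000000000100000000
# echo "data=$((16#${val:0:8})) metadata=$((16#${val:8:8})) entry=$((16#${val:16:8}))"
data=0 metadata=1 entry=0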

Version-Release number of selected component (if applicable):
3.4.1-2

How reproducible:

Prepare servers:
server_with_glusterfs_server:
# uname -a
 Linux default 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 GNU/Linux
# dpkg -l | grep glusterfs
 ii  glusterfs-client                   3.4.1-2                       amd64        clustered file-system (client package)
 ii  glusterfs-common                   3.4.1-2                       amd64        GlusterFS common libraries and translator modules
 ii  glusterfs-server                   3.4.1-2                       amd64        clustered file-system (server package)

# echo '192.168.212.2 gluster' >> /etc/hosts
# gluster volume create volume1 replica 2 gluster:/mnt/brick1 gluster:/mnt/brick2
# gluster volume start volume1

server_with_nfs_mount:
# adduser phpuser
 Adding new group `phpuser' (1004) ...
 Adding new user `phpuser' (1004) with group `phpuser' ...
# id phpuser
uid=1004(phpuser) gid=1004(phpuser) groups=1004(phpuser)
# groupadd othergroup
# mount 192.168.212.2:/volume1 /mnt/volume1
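
After mounting, it can be worth confirming the export is actually served by Gluster's built-in NFS server rather than the kernel NFS server (a standard check, not part of the original report):

server_with_nfs_mount:
# mount | grep volume1

server_with_glusterfs_server:
# gluster volume status volume1 nfs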

Steps to Reproduce:
At this point we have user 'phpuser' (uid=1004) and group 'othergroup' (gid=1005) on server_with_nfs_mount.
Neither the uid nor the gid exists on server_with_glusterfs_server (uid 1004 and gid 1005 are unused there).
server_with_nfs_mount:
# cd /mnt/volume1
# mkdir testdir
# chown phpuser testdir
# chgrp othergroup testdir
# chmod g+s testdir
# sudo -u phpuser mkdir testdir/dir1

After the last command, the errors appear in the brick logs and the entries show up in heal info.
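
A heal can also be triggered manually to check whether these entries clear (standard gluster CLI commands; whether they actually clear on this version is not verified here):

server_with_glusterfs_server:
# gluster volume heal volume1
# gluster volume heal volume1 full
# gluster volume heal volume1 info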

Actual results:
gluster volume heal volume1 info
Number of entries: 2

Expected results:
gluster volume heal volume1 info
Number of entries: 0

Comment 1 cyber 2014-01-13 06:08:06 UTC
3.4.2 has the same bug.
umask is 0002.

How can I repair volumes with this subdirectory inconsistency?

Comment 2 Peter Auyeung 2014-08-25 22:15:16 UTC
3.5.2 on Ubuntu has the same bug.

Heal looks OK, but the log complaints keep appearing.

Comment 3 Peter Auyeung 2014-08-27 17:11:56 UTC
I am seeing a similar issue on 3.5.2, Ubuntu 12.04, XFS bricks.

When files get renamed, deleted, moved or gzipped, we randomly get these:

e.g. the bcp files got gzipped:
[2014-08-27 13:15:20.615627] E [marker.c:2482:marker_setattr_cbk] 0-sas03-marker: Operation not permitted occurred during setattr of /TrafficPrDataFc01//TrafficCost/mfo/costdtl_mfo_20140827060518_018.bcp
[2014-08-27 13:15:21.509260] E [marker.c:2482:marker_setattr_cbk] 0-sas03-marker: Operation not permitted occurred during setattr of /TrafficPrDataFc01//TrafficCost/mbq/costdtl_mbq_20140827060615_015.bcp

e.g. Directories got moved or changed
[2014-08-27 12:08:49.410509] E [posix.c:4294:_posix_handle_xattr_keyvalue_pair] 0-sas03-posix: setxattr failed on /brick03/gfs/DevMordorDataSata01//copd//dev//fetch//Combined_Servers//.nfsbd981838f8bb4252000008a3 while doing xattrop: key=trusted.glusterfs.quota.2675c23f-bc0e-4b30-ba3d-9b8112909318.contri (No such file or directory)
[2014-08-27 00:00:01.970630] E [afr-self-heal-common.c:2868:afr_log_self_heal_completion_status] 0-sas03-replicate-0:  metadata self heal  failed,   on /DevMordorDataSata01/copd
[2014-08-27 00:00:01.977268] E [afr-self-heal-common.c:2868:afr_log_self_heal_completion_status] 0-sas03-replicate-0:  metadata self heal  failed,   on /DevMordorDataSata01/copd/dev/fetch


e.g. sometimes even .glusterfs has issues:
[2014-08-27 04:00:02.710194] E [posix.c:4294:_posix_handle_xattr_keyvalue_pair] 0-sas03-posix: setxattr failed on /brick03/gfs/.glusterfs/cd/0d/cd0dd36c-c471-4777-8bd8-6415466434e3 while doing xattrop: key=trusted.glusterfs.quota.04bfaa62-f38c-4e4f-a962-42e7a3012b26.contri (No such file or directory)

Comment 4 Niels de Vos 2015-05-17 21:57:09 UTC
GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained, at the moment these are 3.6 and 3.5.

This bug has been filed against the 3.4 release, and will not get fixed in a 3.4 version any more. Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. In case updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" field below the comment box to "bugs".

If there is no response by the end of the month, this bug will get automatically closed.

Comment 5 cyber 2015-06-01 06:44:02 UTC
Debian Wheezy:
Linux default 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 GNU/Linux

dpkg -l | grep gluster
ii  glusterfs-client                   3.6.3-1                       amd64        clustered file-system (client package)
ii  glusterfs-common                   3.6.3-1                       amd64        GlusterFS common libraries and translator modules
ii  glusterfs-server                   3.6.3-1                       amd64        clustered file-system (server package)

Heal seems ok:
gluster volume heal volume1 info
Brick default:/mnt/brick1/gluster/
Number of entries: 0

Brick default:/mnt/brick2/gluster/
Number of entries: 0

But /var/log/glusterfs/nfs.log has entries:
[2015-06-01 06:39:39.250324] W [client-rpc-fops.c:2145:client3_3_setattr_cbk] 0-volume1-client-0: remote operation failed: Operation not permitted
[2015-06-01 06:39:39.250339] W [client-rpc-fops.c:2145:client3_3_setattr_cbk] 0-volume1-client-1: remote operation failed: Operation not permitted
[2015-06-01 06:39:39.250433] I [MSGID: 109036] [dht-common.c:6222:dht_log_new_layout_for_dir_selfheal] 0-volume1-dht: Setting layout of /testdir/dir3 with [Subvol_name: volume1-replicate-0, Err: -1 , Start: 0 , Stop: 4294967295 ]

Comment 6 Nithya Balachandran 2016-03-09 13:36:51 UTC
Apologies for the delay in responding. Are these messages still showing up? If yes, can you send us the client and brick logs?

Comment 7 cyber 2016-03-14 05:15:21 UTC
ii  glusterfs-client               3.5.2-2+deb8u1              amd64        clustered file-system (client package)
ii  glusterfs-common               3.5.2-2+deb8u1              amd64        GlusterFS common libraries and translator modules
ii  glusterfs-server               3.5.2-2+deb8u1              amd64        clustered file-system (server package)

nfs.log
[2016-03-14 04:43:12.086631] W [client-rpc-fops.c:2150:client3_3_setattr_cbk] 0-volume1-client-1: remote operation failed: Operation not permitted
[2016-03-14 04:43:12.086642] W [client-rpc-fops.c:2150:client3_3_setattr_cbk] 0-volume1-client-0: remote operation failed: Operation not permitted

brick1.log
[2016-03-14 04:43:12.086602] E [marker.c:2482:marker_setattr_cbk] 0-volume1-marker: Operation not permitted occurred during setattr of <nul>
[2016-03-14 04:43:12.086612] I [server-rpc-fops.c:1748:server_setattr_cbk] 0-volume1-server: 123: SETATTR /testdir/dir3 (26b1eba0-84bd-4430-bdf6-c626e705a511) ==> (Operation not permitted)

brick2.log
[2016-03-14 04:43:12.086573] E [marker.c:2482:marker_setattr_cbk] 0-volume1-marker: Operation not permitted occurred during setattr of <nul>
[2016-03-14 04:43:12.086585] I [server-rpc-fops.c:1748:server_setattr_cbk] 0-volume1-server: 188: SETATTR /testdir/dir3 (26b1eba0-84bd-4430-bdf6-c626e705a511) ==> (Operation not permitted)

You can easily reproduce these errors on a clean Debian Jessie with the steps from the first message.

BTW, I cannot install the latest glusterfs-server 3.6.9-1 from 'download.gluster.org' on Debian Jessie because of unmet dependencies that only exist in Debian Stretch.

Comment 8 Niels de Vos 2016-08-30 12:48:38 UTC
GlusterFS-3.6 is nearing its End-Of-Life; only important security bugs still have a chance of getting fixed. Moving this to the mainline 'version'. If this needs to get fixed in 3.7 or 3.8, this bug should get cloned.

Comment 9 Nithya Balachandran 2017-05-22 06:46:29 UTC
Assigning this to quota as these appear to originate in the marker xlator.

Comment 11 Amar Tumballi 2019-05-07 13:49:43 UTC
We have not heard of this issue in some time. We regret that it stayed open for so long and apologize for that. However, we are not concentrating on the Quota feature at the moment, and hence we are marking this bug as DEFERRED. We will reopen it if we get cycles to look into it after a couple of releases. Please raise the concern on the mailing list if this is still affecting you.

In the meantime, we request that you upgrade to the glusterfs-6.x releases to take advantage of the stability improvements made over the years.

