Description of problem:
While running rm -rf * on the master mount point, a large number of warning messages appeared in the auxiliary mount logs on the slave:

[2013-07-31 11:11:59.688460] W [client-rpc-fops.c:1529:client3_3_inodelk_cbk] 0-hosa_slave-client-2: remote operation failed: No such file or directory
[2013-07-31 11:11:59.688548] W [client-rpc-fops.c:1529:client3_3_inodelk_cbk] 0-hosa_slave-client-3: remote operation failed: No such file or directory
[2013-07-31 11:11:59.690008] W [client-rpc-fops.c:1529:client3_3_inodelk_cbk] 0-hosa_slave-client-2: remote operation failed: No such file or directory
[2013-07-31 11:11:59.691900] W [client-rpc-fops.c:1529:client3_3_inodelk_cbk] 0-hosa_slave-client-3: remote operation failed: No such file or directory
[2013-07-31 11:11:59.691959] I [afr-lk-common.c:1085:afr_lock_blocking] 0-hosa_slave-replicate-1: unable to lock on even one child
[2013-07-31 11:11:59.691993] I [afr-transaction.c:1063:afr_post_blocking_inodelk_cbk] 0-hosa_slave-replicate-1: Blocking inodelks failed.
[2013-07-31 11:11:59.693990] W [client-rpc-fops.c:2523:client3_3_opendir_cbk] 0-hosa_slave-client-2: remote operation failed: No such file or directory. Path: <gfid:ec6152e9-e1e9-498c-9e05-46edb9d0bc07>/tools (dce9238f-e5c7-49c1-9741-73388f92f050)
[2013-07-31 11:11:59.694328] W [client-rpc-fops.c:2523:client3_3_opendir_cbk] 0-hosa_slave-client-3: remote operation failed: No such file or directory. Path: <gfid:ec6152e9-e1e9-498c-9e05-46edb9d0bc07>/tools (dce9238f-e5c7-49c1-9741-73388f92f050)

Version-Release number of selected component (if applicable): mainline

How reproducible:
Hit once. Not sure about reproducibility.

Steps to Reproduce:
1. Create and start a geo-rep session between master and slave.
2. Untar the Linux kernel twice in two different locations on the master mount point.
3.
Run rm -rf * from the master mount point.

Actual results:
hosa_slave-client-2
[2013-07-31 11:11:58.042628] W [client-rpc-fops.c:695:client3_3_rmdir_cbk] 0-hosa_slave-client-2: remote operation failed: No such file or directory
[2013-07-31 11:11:58.042946] W [client-rpc-fops.c:695:client3_3_rmdir_cbk] 0-hosa_slave-client-3: remote operation failed: No such file or directory
[2013-07-31 11:11:58.087397] W [client-rpc-fops.c:695:client3_3_rmdir_cbk] 0-hosa_slave-client-0: remote operation failed: No such file or directory
[2013-07-31 11:11:58.088347] W [client-rpc-fops.c:695:client3_3_rmdir_cbk] 0-hosa_slave-client-1: remote operation failed: No such file or directory
[2013-07-31 11:11:58.088422] W [fuse-bridge.c:1624:fuse_unlink_cbk] 0-glusterfs-fuse: 1528415: RMDIR() <gfid:69f85589-6525-4cdb-a308-9dae4d6c6935>/vhost_test => -1 (No such file or directory)
[2013-07-31 11:11:58.291267] E [afr-self-heal-entry.c:2325:afr_sh_post_nonblocking_entry_cbk] 0-hosa_slave-replicate-1: Non Blocking entrylks failed for <gfid:69f85589-6525-4cdb-a308-9dae4d6c6935>/virtio-trace.
[2013-07-31 11:11:58.291409] E [afr-self-heal-common.c:2722:afr_log_self_heal_completion_status] 0-hosa_slave-replicate-1: foreground gfid or missing entry self heal is not attempted, medatadata self heal is not attempted, data self heal is not attempted, entry self heal failed on <gfid:69f85589-6525-4cdb-a308-9dae4d6c6935>/virtio-trace
[2013-07-31 11:11:58.296013] W [client-rpc-fops.c:695:client3_3_rmdir_cbk] 0-hosa_slave-client-3: remote operation failed: No such file or directory
[2013-07-31 11:11:58.296128] W [client-rpc-fops.c:695:client3_3_rmdir_cbk] 0-hosa_slave-client-2: remote operation failed: No such file or directory
[2013-07-31 11:11:58.324498] W [client-rpc-fops.c:695:client3_3_rmdir_cbk] 0-hosa_slave-client-0: remote operation failed: No such file or directory
[2013-07-31 11:11:58.325157] W [client-rpc-fops.c:695:client3_3_rmdir_cbk] 0-hosa_slave-client-1: remote operation failed: No such file or directory
[2013-07-31 11:11:58.325237] W [fuse-bridge.c:1624:fuse_unlink_cbk] 0-glusterfs-fuse: 1581366: RMDIR() <gfid:69f85589-6525-4cdb-a308-9dae4d6c6935>/virtio-trace => -1 (No such file or directory)
[2013-07-31 11:11:59.613599] I [dht-layout.c:636:dht_layout_normalize] 0-hosa_slave-dht: found anomalies in <gfid:ec6152e9-e1e9-498c-9e05-46edb9d0bc07>/tools. holes=1 overlaps=1 missing=1 down=0 misc=0
[2013-07-31 11:11:59.674199] W [client-rpc-fops.c:1983:client3_3_setattr_cbk] 0-hosa_slave-client-3: remote operation failed: No such file or directory
[2013-07-31 11:11:59.674292] W [client-rpc-fops.c:1983:client3_3_setattr_cbk] 0-hosa_slave-client-2: remote operation failed: No such file or directory
[2013-07-31 11:11:59.675293] W [client-rpc-fops.c:1744:client3_3_xattrop_cbk] 0-hosa_slave-client-2: remote operation failed: Success. Path: <gfid:ec6152e9-e1e9-498c-9e05-46edb9d0bc07>/tools (dce9238f-e5c7-49c1-9741-73388f92f050)
[2013-07-31 11:11:59.675997] W [client-rpc-fops.c:1744:client3_3_xattrop_cbk] 0-hosa_slave-client-3: remote operation failed: Success. Path: <gfid:ec6152e9-e1e9-498c-9e05-46edb9d0bc07>/tools (dce9238f-e5c7-49c1-9741-73388f92f050)
[2013-07-31 11:11:59.688460] W [client-rpc-fops.c:1529:client3_3_inodelk_cbk] 0-hosa_slave-client-2: remote operation failed: No such file or directory
[2013-07-31 11:11:59.688548] W [client-rpc-fops.c:1529:client3_3_inodelk_cbk] 0-hosa_slave-client-3: remote operation failed: No such file or directory
[2013-07-31 11:11:59.690008] W [client-rpc-fops.c:1529:client3_3_inodelk_cbk] 0-hosa_slave-client-2: remote operation failed: No such file or directory
[2013-07-31 11:11:59.691900] W [client-rpc-fops.c:1529:client3_3_inodelk_cbk] 0-hosa_slave-client-3: remote operation failed: No such file or directory
[2013-07-31 11:11:59.691959] I [afr-lk-common.c:1085:afr_lock_blocking] 0-hosa_slave-replicate-1: unable to lock on even one child
[2013-07-31 11:11:59.691993] I [afr-transaction.c:1063:afr_post_blocking_inodelk_cbk] 0-hosa_slave-replicate-1: Blocking inodelks failed.
[2013-07-31 11:11:59.693990] W [client-rpc-fops.c:2523:client3_3_opendir_cbk] 0-hosa_slave-client-2: remote operation failed: No such file or directory. Path: <gfid:ec6152e9-e1e9-498c-9e05-46edb9d0bc07>/tools (dce9238f-e5c7-49c1-9741-73388f92f050)
[2013-07-31 11:11:59.694328] W [client-rpc-fops.c:2523:client3_3_opendir_cbk] 0-hosa_slave-client-3: remote operation failed: No such file or directory. Path: <gfid:ec6152e9-e1e9-498c-9e05-46edb9d0bc07>/tools (dce9238f-e5c7-49c1-9741-73388f92f050)

Expected results:
The slave mount logs should not be flooded with these warning and error messages.
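The reproduction steps above can be sketched as a command sequence. Volume names, the slave host, the mount point, and the kernel tarball name are illustrative assumptions, not taken from the report:

```shell
# Assumed setup: master volume "hosa" on this host, slave volume
# "hosa_slave" on host "slavehost", master mounted at /mnt/master.

# 1. Create and start the geo-replication session.
gluster volume geo-replication hosa slavehost::hosa_slave create push-pem
gluster volume geo-replication hosa slavehost::hosa_slave start

# 2. Untar the Linux kernel twice at two different locations on the
#    master mount point.
mkdir -p /mnt/master/one /mnt/master/two
tar -xf linux-3.10.tar.xz -C /mnt/master/one
tar -xf linux-3.10.tar.xz -C /mnt/master/two

# 3. Remove everything from the master mount point, then watch the
#    auxiliary mount logs on the slave for the warnings quoted above.
cd /mnt/master && rm -rf *
```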
REVIEW: http://review.gluster.org/10184 (geo-rep: Don't log geo-rep safe errors in mount logs) posted (#1) for review on master by Kotresh HR (khiremat)
REVIEW: http://review.gluster.org/10184 (geo-rep: Don't log geo-rep safe errors in mount logs) posted (#2) for review on master by Kotresh HR (khiremat)
REVIEW: http://review.gluster.org/10184 (geo-rep: Don't log geo-rep safe errors in mount logs) posted (#3) for review on master by Kotresh HR (khiremat)
REVIEW: http://review.gluster.org/10184 (geo-rep: Don't log geo-rep safe errors in mount logs) posted (#4) for review on master by Vijay Bellur (vbellur)
REVIEW: http://review.gluster.org/10184 (geo-rep: Don't log geo-rep safe errors in mount logs) posted (#5) for review on master by Kotresh HR (khiremat)
REVIEW: http://review.gluster.org/10184 (geo-rep: Don't log geo-rep safe errors in mount logs) posted (#6) for review on master by Kotresh HR (khiremat)
REVIEW: http://review.gluster.org/10184 (geo-rep: Don't log geo-rep safe errors in mount logs) posted (#7) for review on master by Kotresh HR (khiremat)
REVIEW: http://review.gluster.org/10184 (geo-rep: Don't log geo-rep safe errors in mount logs) posted (#8) for review on master by Kotresh HR (khiremat)
REVIEW: http://review.gluster.org/10184 (geo-rep: Don't log geo-rep safe errors in mount logs) posted (#9) for review on master by Kotresh HR (khiremat)
COMMIT: http://review.gluster.org/10184 committed in master by Vijay Bellur (vbellur)
------
commit 009363dbc162dca71efce6689497b3948a14bd82
Author: Kotresh HR <khiremat>
Date: Thu Apr 9 19:00:58 2015 +0530

    geo-rep: Don't log geo-rep safe errors in mount logs

    EEXIST and ENOENT are safe errors for geo-replication. Since mkdir is
    captured in the changelog of every brick, the mkdir is retried as many
    times as there are bricks: the first one processed by gsyncd succeeds
    and all the others get EEXIST. Hence EEXIST is a safe error and can be
    ignored. Similarly, ENOENT is safe in the rm -rf case. gsyncd also
    validates these errors and logs them on the master if they are genuine
    errors; that validation is coming with the patch
    http://review.gluster.org/#/c/10048/. Hence the safe errors above are
    ignored.

    Change-Id: I10ae86b11d49c7c3ba2be3110dace6b33daa509e
    BUG: 1210562
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: http://review.gluster.org/10184
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
REVIEW: http://review.gluster.org/10422 (geo-rep: Fix ignoring geo-rep safe errors) posted (#1) for review on master by Kotresh HR (khiremat)
REVIEW: http://review.gluster.org/10422 (geo-rep: Fix ignoring geo-rep safe errors) posted (#2) for review on master by Kotresh HR (khiremat)
REVIEW: http://review.gluster.org/10422 (geo-rep: Fix ignoring geo-rep safe errors) posted (#3) for review on master by Kotresh HR (khiremat)
REVIEW: http://review.gluster.org/10422 (geo-rep: Fix ignoring geo-rep safe errors) posted (#4) for review on master by Kotresh HR (khiremat)
COMMIT: http://review.gluster.org/10422 committed in master by Vijay Bellur (vbellur)
------
commit 02223349a0a141d5bd09edcd571a703bb2e5a94d
Author: Kotresh HR <khiremat>
Date: Tue Apr 28 18:09:29 2015 +0530

    geo-rep: Fix ignoring geo-rep safe errors

    Fix ignoring geo-rep safe errors in the fuse layer, and also ignore
    logging in the client translator for mknod. Though it is rare for this
    to happen with mknod, it can occur when a history crawl replays
    overlapping changelogs.

    Change-Id: I7e145cd1dc53f04d444ad2e68e66e648be448e61
    BUG: 1210562
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: http://review.gluster.org/10422
    Tested-by: NetBSD Build System
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Aravinda VK <avishwan>
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/ [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user