Bug 1283473 - Dist-geo-rep: Too many "remote operation failed: No such file or directory" warning messages in auxiliary mount log on slave while executing "rm -rf"
Summary: Dist-geo-rep: Too many "remote operation failed: No such file or directory" w...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: 3.7.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Kotresh HR
QA Contact:
URL:
Whiteboard:
Depends On: 1250009
Blocks:
 
Reported: 2015-11-19 04:43 UTC by Aravinda VK
Modified: 2016-04-19 07:48 UTC (History)
CC: 3 users

Fixed In Version: glusterfs-3.7.7
Doc Type: Bug Fix
Doc Text:
Clone Of: 1250009
Environment:
Last Closed: 2016-03-22 08:14:58 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Aravinda VK 2015-11-19 04:43:15 UTC
+++ This bug was initially created as a clone of Bug #1250009 +++

Description of problem:

NOTE: BZ 1210562 is not completely fixed, hence opening a new bug

While running rm -rf * on the master mount point, I saw a large number of warning messages in the auxiliary mount log on the slave.

The messages were:

[2013-07-31 11:11:59.688460] W [client-rpc-fops.c:1529:client3_3_inodelk_cbk] 0-hosa_slave-client-2: remote operation failed: No such file or directory
[2013-07-31 11:11:59.688548] W [client-rpc-fops.c:1529:client3_3_inodelk_cbk] 0-hosa_slave-client-3: remote operation failed: No such file or directory
[2013-07-31 11:11:59.690008] W [client-rpc-fops.c:1529:client3_3_inodelk_cbk] 0-hosa_slave-client-2: remote operation failed: No such file or directory
[2013-07-31 11:11:59.691900] W [client-rpc-fops.c:1529:client3_3_inodelk_cbk] 0-hosa_slave-client-3: remote operation failed: No such file or directory
[2013-07-31 11:11:59.691959] I [afr-lk-common.c:1085:afr_lock_blocking] 0-hosa_slave-replicate-1: unable to lock on even one child
[2013-07-31 11:11:59.691993] I [afr-transaction.c:1063:afr_post_blocking_inodelk_cbk] 0-hosa_slave-replicate-1: Blocking inodelks failed.
[2013-07-31 11:11:59.693990] W [client-rpc-fops.c:2523:client3_3_opendir_cbk] 0-hosa_slave-client-2: remote operation failed: No such file or directory. Path: <gfid:ec6152e9-e1e9-498c-9e05-46edb9d0bc07>/tools (dce9238f-e5c7-49c1-9741-73388f92f050)
[2013-07-31 11:11:59.694328] W [client-rpc-fops.c:2523:client3_3_opendir_cbk] 0-hosa_slave-client-3: remote operation failed: No such file or directory. Path: <gfid:ec6152e9-e1e9-498c-9e05-46edb9d0bc07>/tools (dce9238f-e5c7-49c1-9741-73388f92f050)
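
To get a sense of how many of these messages accumulate, the slave-side geo-replication logs can be grepped. The directory below is only the typical location of the slave auxiliary mount logs; the exact file names depend on the session, so adjust the glob for your setup:

# on the slave node (log directory and file names are an assumption)
grep -c "remote operation failed: No such file or directory" \
    /var/log/glusterfs/geo-replication-slaves/*.log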


Version-Release number of selected component (if applicable):
mainline

How reproducible:
Hit once. Not sure about reproducibility.

Steps to Reproduce:
1. Create and start a geo-rep session between master and slave.
2. Untar the Linux kernel source twice, into two different directories on the master mount point.
3. From the master mount point, run rm -rf * (see the example commands below).
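
For reference, a minimal sketch of the reproduction. It assumes the master volume mastervol and slave volume slavevol already exist and that passwordless SSH to the slave is set up; host names, volume names and paths are placeholders:

# on a master node: create and start the geo-rep session
gluster volume geo-replication mastervol slavehost::slavevol create push-pem
gluster volume geo-replication mastervol slavehost::slavevol start

# on a client: mount the master volume and create the data set
mount -t glusterfs masterhost:/mastervol /mnt/master
mkdir /mnt/master/dir1 /mnt/master/dir2
tar -xf /path/to/linux.tar.xz -C /mnt/master/dir1
tar -xf /path/to/linux.tar.xz -C /mnt/master/dir2

# after both copies have synced to the slave, delete everything
cd /mnt/master && rm -rf *

# then watch the auxiliary mount log on the slave for the warnings quoted above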

Actual results:
The slave auxiliary mount log is flooded with warnings and errors such as the following:
[2013-07-31 11:11:58.042628] W [client-rpc-fops.c:695:client3_3_rmdir_cbk] 0-hosa_slave-client-2: remote operation failed: No such file or directory
[2013-07-31 11:11:58.042946] W [client-rpc-fops.c:695:client3_3_rmdir_cbk] 0-hosa_slave-client-3: remote operation failed: No such file or directory
[2013-07-31 11:11:58.087397] W [client-rpc-fops.c:695:client3_3_rmdir_cbk] 0-hosa_slave-client-0: remote operation failed: No such file or directory
[2013-07-31 11:11:58.088347] W [client-rpc-fops.c:695:client3_3_rmdir_cbk] 0-hosa_slave-client-1: remote operation failed: No such file or directory
[2013-07-31 11:11:58.088422] W [fuse-bridge.c:1624:fuse_unlink_cbk] 0-glusterfs-fuse: 1528415: RMDIR() <gfid:69f85589-6525-4cdb-a308-9dae4d6c6935>/vhost_test => -1 (No such file or directory)
[2013-07-31 11:11:58.291267] E [afr-self-heal-entry.c:2325:afr_sh_post_nonblocking_entry_cbk] 0-hosa_slave-replicate-1: Non Blocking entrylks failed for <gfid:69f85589-6525-4cdb-a308-9dae4d6c6935>/virtio-trace.
[2013-07-31 11:11:58.291409] E [afr-self-heal-common.c:2722:afr_log_self_heal_completion_status] 0-hosa_slave-replicate-1: foreground gfid or missing entry self heal  is not attempted, medatadata self heal  is not attempted, data self heal  is not attempted, entry self heal  failed on  <gfid:69f85589-6525-4cdb-a308-9dae4d6c6935>/virtio-trace
[2013-07-31 11:11:58.296013] W [client-rpc-fops.c:695:client3_3_rmdir_cbk] 0-hosa_slave-client-3: remote operation failed: No such file or directory
[2013-07-31 11:11:58.296128] W [client-rpc-fops.c:695:client3_3_rmdir_cbk] 0-hosa_slave-client-2: remote operation failed: No such file or directory
[2013-07-31 11:11:58.324498] W [client-rpc-fops.c:695:client3_3_rmdir_cbk] 0-hosa_slave-client-0: remote operation failed: No such file or directory
[2013-07-31 11:11:58.325157] W [client-rpc-fops.c:695:client3_3_rmdir_cbk] 0-hosa_slave-client-1: remote operation failed: No such file or directory
[2013-07-31 11:11:58.325237] W [fuse-bridge.c:1624:fuse_unlink_cbk] 0-glusterfs-fuse: 1581366: RMDIR() <gfid:69f85589-6525-4cdb-a308-9dae4d6c6935>/virtio-trace => -1 (No such file or directory)
[2013-07-31 11:11:59.613599] I [dht-layout.c:636:dht_layout_normalize] 0-hosa_slave-dht: found anomalies in <gfid:ec6152e9-e1e9-498c-9e05-46edb9d0bc07>/tools. holes=1 overlaps=1 missing=1 down=0 misc=0
[2013-07-31 11:11:59.674199] W [client-rpc-fops.c:1983:client3_3_setattr_cbk] 0-hosa_slave-client-3: remote operation failed: No such file or directory
[2013-07-31 11:11:59.674292] W [client-rpc-fops.c:1983:client3_3_setattr_cbk] 0-hosa_slave-client-2: remote operation failed: No such file or directory
[2013-07-31 11:11:59.675293] W [client-rpc-fops.c:1744:client3_3_xattrop_cbk] 0-hosa_slave-client-2: remote operation failed: Success. Path: <gfid:ec6152e9-e1e9-498c-9e05-46edb9d0bc07>/tools (dce9238f-e5c7-49c1-9741-73388f92f050)
[2013-07-31 11:11:59.675997] W [client-rpc-fops.c:1744:client3_3_xattrop_cbk] 0-hosa_slave-client-3: remote operation failed: Success. Path: <gfid:ec6152e9-e1e9-498c-9e05-46edb9d0bc07>/tools (dce9238f-e5c7-49c1-9741-73388f92f050)
[2013-07-31 11:11:59.688460] W [client-rpc-fops.c:1529:client3_3_inodelk_cbk] 0-hosa_slave-client-2: remote operation failed: No such file or directory
[2013-07-31 11:11:59.688548] W [client-rpc-fops.c:1529:client3_3_inodelk_cbk] 0-hosa_slave-client-3: remote operation failed: No such file or directory
[2013-07-31 11:11:59.690008] W [client-rpc-fops.c:1529:client3_3_inodelk_cbk] 0-hosa_slave-client-2: remote operation failed: No such file or directory
[2013-07-31 11:11:59.691900] W [client-rpc-fops.c:1529:client3_3_inodelk_cbk] 0-hosa_slave-client-3: remote operation failed: No such file or directory
[2013-07-31 11:11:59.691959] I [afr-lk-common.c:1085:afr_lock_blocking] 0-hosa_slave-replicate-1: unable to lock on even one child
[2013-07-31 11:11:59.691993] I [afr-transaction.c:1063:afr_post_blocking_inodelk_cbk] 0-hosa_slave-replicate-1: Blocking inodelks failed.
[2013-07-31 11:11:59.693990] W [client-rpc-fops.c:2523:client3_3_opendir_cbk] 0-hosa_slave-client-2: remote operation failed: No such file or directory. Path: <gfid:ec6152e9-e1e9-498c-9e05-46edb9d0bc07>/tools (dce9238f-e5c7-49c1-9741-73388f92f050)
[2013-07-31 11:11:59.694328] W [client-rpc-fops.c:2523:client3_3_opendir_cbk] 0-hosa_slave-client-3: remote operation failed: No such file or directory. Path: <gfid:ec6152e9-e1e9-498c-9e05-46edb9d0bc07>/tools (dce9238f-e5c7-49c1-9741-73388f92f050)


Expected results:
The slave mount log should not be flooded with so many warning and error messages.

--- Additional comment from Anand Avati on 2015-08-05 06:46:00 EDT ---

REVIEW: http://review.gluster.org/11833 (geo-rep: Don't log geo-rep safe errors in mount logs) posted (#1) for review on master by Kotresh HR (khiremat)

--- Additional comment from Vijay Bellur on 2015-11-18 23:01:41 EST ---

COMMIT: http://review.gluster.org/11833 committed in master by Venky Shankar (vshankar) 
------
commit a52fd2cb7fa3aaff74461f58f32f4ff0b8e0904d
Author: Kotresh HR <khiremat>
Date:   Tue Aug 4 16:20:13 2015 +0530

    geo-rep: Don't log geo-rep safe errors in mount logs
    
    ENOENT is a safe error for geo-replication in case of
    rm -rf. RMDIR is recorded in changelog of each brick,
    geo-rep processes all changelogs among which one will
    succeed and rest will get ENOENT which can be ignored.
    Similarly ENOENT can also be ignored in case of all
    unlink operation during changelog replay that can
    happen when worker goes down and comes back.
    
    Change-Id: I6756f8f4c3fce7a159751a2bfce891ff16ad31a4
    BUG: 1250009
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: http://review.gluster.org/11833
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Aravinda VK <avishwan>
    Reviewed-by: Milind Changire <mchangir>
    Reviewed-by: Saravanakumar Arumugam <sarumuga>
    Reviewed-by: Venky Shankar <vshankar>
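
The reasoning in the commit message can be shown with a trivial, illustrative sequence (not the patch itself): the same RMDIR is recorded in every brick's changelog, so only the first replay on the slave actually removes the directory and every later replay gets ENOENT, which geo-rep can safely ignore instead of logging as a warning:

mkdir -p /tmp/demo/dir
rmdir /tmp/demo/dir    # replay from the first brick's changelog: succeeds
rmdir /tmp/demo/dir    # replay from another brick's changelog: ENOENT ("No such file or directory"), safe to ignore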

Comment 1 Vijay Bellur 2015-11-19 04:44:11 UTC
REVIEW: http://review.gluster.org/12651 (geo-rep: Don't log geo-rep safe errors in mount logs) posted (#1) for review on release-3.7 by Aravinda VK (avishwan)

Comment 2 Vijay Bellur 2015-11-21 14:20:57 UTC
REVIEW: http://review.gluster.org/12651 (geo-rep: Don't log geo-rep safe errors in mount logs) posted (#2) for review on release-3.7 by Aravinda VK (avishwan)

Comment 3 Vijay Bellur 2015-11-23 19:10:45 UTC
COMMIT: http://review.gluster.org/12651 committed in release-3.7 by Vijay Bellur (vbellur) 
------
commit 2cd331dc3080f3a66edb12fa6cbab75c69b0147f
Author: Kotresh HR <khiremat>
Date:   Tue Aug 4 16:20:13 2015 +0530

    geo-rep: Don't log geo-rep safe errors in mount logs
    
    ENOENT is a safe error for geo-replication in case of
    rm -rf. RMDIR is recorded in changelog of each brick,
    geo-rep processes all changelogs among which one will
    succeed and rest will get ENOENT which can be ignored.
    Similarly ENOENT can also be ignored in case of all
    unlink operation during changelog replay that can
    happen when worker goes down and comes back.
    
    Change-Id: I6756f8f4c3fce7a159751a2bfce891ff16ad31a4
    BUG: 1283473
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: http://review.gluster.org/11833
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Aravinda VK <avishwan>
    Reviewed-by: Milind Changire <mchangir>
    Reviewed-by: Saravanakumar Arumugam <sarumuga>
    Reviewed-by: Venky Shankar <vshankar>
     (cherry picked from commit a52fd2cb7fa3aaff74461f58f32f4ff0b8e0904d)
    Reviewed-on: http://review.gluster.org/12651
    Reviewed-by: Vijay Bellur <vbellur>

Comment 4 Kaushal 2016-04-19 07:48:31 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still present with glusterfs-3.7.7, please open a new bug report.

glusterfs-3.7.7 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2016-February/025292.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

