Bug 1232912 - [geo-rep]: worker died with "ESTALE" when performed rm -rf on a directory from mount of master volume
Summary: [geo-rep]: worker died with "ESTALE" when performed rm -rf on a directory from mount of master volume
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: mainline
Hardware: x86_64
OS: Linux
Priority: high
Severity: urgent
Target Milestone: ---
Assignee: Aravinda VK
QA Contact:
URL:
Whiteboard:
Depends On: 1222856 1223286
Blocks: 1202842 1223636 1236093
 
Reported: 2015-06-17 19:44 UTC by Aravinda VK
Modified: 2016-06-16 13:13 UTC (History)
CC List: 10 users

Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Clone Of: 1222856
: 1236093 (view as bug list)
Environment:
Last Closed: 2016-06-16 13:13:39 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Aravinda VK 2015-06-17 19:44:43 UTC
+++ This bug was initially created as a clone of Bug #1222856 +++

Description of problem:
=======================

Whenever rm -rf was performed on a directory from the mount of the master volume, the worker died with the following backtrace:


[2015-05-19 15:33:13.868683] E [syncdutils(/rhs/brick2/b2):276:log_raise_exception] <top>: FAIL: 
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 165, in main
    main_i()
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 659, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1440, in service_loop
    g3.crawlwrap(oneshot=True)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 580, in crawlwrap
    self.crawl()
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1150, in crawl
    self.changelogs_batch_process(changes)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1059, in changelogs_batch_process
    self.process(batch)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 946, in process
    self.process_change(change, done, retry)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 902, in process_change
    failures = self.slave.server.entry_ops(entries)
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 226, in __call__
    return self.ins(self.meth, *a)
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 208, in __call__
    raise res
OSError: [Errno 116] Stale file handle
[2015-05-19 15:33:13.870326] I [syncdutils(/rhs/brick2/b2):220:finalize] <top>: exiting.
[2015-05-19 15:33:13.874784] I [repce(agent):92:service_loop] RepceServer: terminating on reaching EOF.

And every time the monitor tries to respawn the worker, it dies again during the startup phase.
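
The traceback shows the failure path: entry_ops on the slave raises OSError with errno 116 (ESTALE), repce re-raises it on the master side, and since ESTALE is not treated as a benign errno the exception escapes the crawl and the worker exits, only to be respawned into the same changelog batch. A minimal, self-contained sketch of that failure mode (illustrative only, not the actual gsyncd code; names are hypothetical):

# Sketch of the failure mode: an OSError whose errno is not in the "benign"
# set escapes the crawl and the worker exits; the monitor respawns it against
# the same changelog batch, so it hits the same ESTALE again.
import errno
import os
import sys

BENIGN_ERRNOS = {errno.ENOENT}               # hypothetical whitelist; ESTALE (116) is not in it

def apply_entry_op(path):
    """Stand-in for one entry operation replayed on the slave."""
    try:
        os.rmdir(path)
    except OSError as e:
        if e.errno in BENIGN_ERRNOS:
            return                           # entry already gone: safe to ignore
        raise                                # ESTALE propagates up unhandled

def crawl(batch):
    try:
        for path in batch:
            apply_entry_op(path)
    except OSError as e:
        print("FAIL: %s" % e, file=sys.stderr)   # corresponds to the traceback above
        sys.exit(1)                          # worker dies; monitor will respawn it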


Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.0-2.el6rhs.x86_64


How reproducible:
================

Tried a couple of times and reproduced it every time.


Steps Carried:
==============
1. Created master cluster 
2. Created and started master volume
3. Created shared volume (gluster_shared_storage)
4. Mounted the shared volume on /var/run/gluster/shared_storage
5. Created Slave cluster
6. Created and Started slave volume
7. Created geo-rep session between master and slave
8. Configured use_meta_volume true
9. Started geo-rep
10. Mounted the master volume on a client over FUSE and NFS
11. Copied files /etc{1..10} from the FUSE mount
12. Copied files /etc{11..20} from the NFS mount
13. Sync completed successfully
14. Removed the directories etc.2 from the FUSE mount and etc.12 from the NFS mount (rm -rf)
15. Checked the geo-rep session status; it was Faulty
16. Checked the logs; they showed the traceback continuously

Actual results:
===============

The worker crashes; when the monitor respawns it, it comes back with crawl type History and crashes again.


Expected results:
=================

The worker should not crash; it should handle ESTALE gracefully.

--- Additional comment from Rahul Hinduja on 2015-05-19 06:29:45 EDT ---

sosreport @ http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1222856/

rm -rf of a directory is a common use case, hence proposing this as a blocker for 3.1.

--- Additional comment from Aravinda VK on 2015-06-02 04:11:09 EDT ---

Patches:
master: http://review.gluster.org/#/c/10837/
release-3.7: http://review.gluster.org/10913
downstream: https://code.engineering.redhat.com/gerrit/#/c/49674/

--- Additional comment from errata-xmlrpc on 2015-06-05 02:23:26 EDT ---

Bug report changed to ON_QA status by Errata System.
A QE request has been submitted for advisory RHEA-2015:20560-02
https://errata.devel.redhat.com/advisory/20560

--- Additional comment from Rahul Hinduja on 2015-06-11 07:42:23 EDT ---

I still see the issue with build: glusterfs-3.7.1-1

Moving the bug back to ASSIGNED state.

[root@georep1 scripts]# rpm -qa | grep gluster
glusterfs-client-xlators-3.7.1-1.el6rhs.x86_64
glusterfs-server-3.7.1-1.el6rhs.x86_64
glusterfs-3.7.1-1.el6rhs.x86_64
glusterfs-api-3.7.1-1.el6rhs.x86_64
glusterfs-cli-3.7.1-1.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-1.el6rhs.x86_64
glusterfs-libs-3.7.1-1.el6rhs.x86_64
glusterfs-fuse-3.7.1-1.el6rhs.x86_64
glusterfs-debuginfo-3.7.1-1.el6rhs.x86_64
[root@georep1 scripts]# cat /var/log/glusterfs/geo-replication/master/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave.log | grep "OSError"
[2015-06-11 22:34:23.111248] E [repce(/rhs/brick2/b2):207:__call__] RepceClient: call 20852:140282122651392:1434042220.8 (entry_ops) failed on peer with OSError
[2015-06-11 22:34:46.175925] E [repce(/rhs/brick2/b2):207:__call__] RepceClient: call 21689:140594955093760:1434042280.85 (entry_ops) failed on peer with OSError
OSError: [Errno 116] Stale file handle
[2015-06-11 22:35:08.149015] E [repce(/rhs/brick2/b2):207:__call__] RepceClient: call 21766:140460004030208:1434042303.43 (entry_ops) failed on peer with OSError
OSError: [Errno 116] Stale file handle
[root@georep1 scripts]#

Comment 1 Anand Avati 2015-06-17 19:51:06 UTC
REVIEW: http://review.gluster.org/11296 (geo-rep: ignore ESTALE as ENOENT) posted (#1) for review on master by Aravinda VK (avishwan)
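
The patch title summarizes the approach: while replaying entry operations on the slave, treat ESTALE the same way an already-missing entry (ENOENT) is treated, instead of letting it kill the worker. A minimal sketch of that idea, with hypothetical names rather than the actual patch:

# Sketch of "ignore ESTALE as ENOENT": a stale file handle during a replayed
# delete is treated like an entry that is already gone, so the batch continues.
import errno
import os

IGNORABLE = (errno.ENOENT, errno.ESTALE)     # entry is effectively gone in both cases

def safe_remove(path, isdir=False):
    try:
        (os.rmdir if isdir else os.unlink)(path)
    except OSError as e:
        if e.errno in IGNORABLE:
            return                           # nothing left to do for this entry
        raise                                # any other errno is still a real failure

With ESTALE folded into the ignorable set, an rm -rf replayed on the slave continues through the batch instead of aborting the worker.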

Comment 2 Anand Avati 2015-06-26 03:26:04 UTC
REVIEW: http://review.gluster.org/11296 (geo-rep: ignore ESTALE as ENOENT) posted (#2) for review on master by Aravinda VK (avishwan)

Comment 3 Niels de Vos 2016-06-16 13:13:39 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

