Bug 1239075 - [geo-rep]: rename followed by deletes causes ESTALE
Summary: [geo-rep]: rename followed by deletes causes ESTALE
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: geo-replication
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.1
Assignee: Kotresh HR
QA Contact: Rahul Hinduja
URL:
Whiteboard:
Depends On:
Blocks: 1247529 1249547 1251815
 
Reported: 2015-07-03 12:32 UTC by Rahul Hinduja
Modified: 2015-10-05 07:18 UTC
CC List: 11 users

Fixed In Version: glusterfs-3.7.1-12
Doc Type: Bug Fix
Doc Text:
Previously, the geo-replication worker did not retry the operation when an lstat on an entry failed with an ESTALE error. As a consequence, the geo-replication worker crashed and was restarted. With this fix, the geo-replication worker no longer crashes on ESTALE during lstat on an entry.
Clone Of:
Cloned to: 1247529
Environment:
Last Closed: 2015-10-05 07:18:25 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:1845 0 normal SHIPPED_LIVE Moderate: Red Hat Gluster Storage 3.1 update 2015-10-05 11:06:22 UTC

Description Rahul Hinduja 2015-07-03 12:32:40 UTC
Description of problem:
=======================
Ran the tests, which perform the following FOPs in order:

Create, chmod, chown, chgrp, symlink, hardlink, truncate, rename, remove. 

The above FOPs succeed and are synced to the slave successfully, but the Master and Slave logs show the following:

Master:
=======

[2015-07-03 13:36:43.154763] E [syncdutils(/bricks/brick0/master_brick0):276:log_raise_exception] <top>: FAIL: 
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 165, in main
    main_i()
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 659, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1438, in service_loop
    g3.crawlwrap(oneshot=True)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 580, in crawlwrap
    self.crawl()
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1161, in crawl
    self.changelogs_batch_process(changes)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1070, in changelogs_batch_process
    self.process(batch)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 948, in process
    self.process_change(change, done, retry)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 903, in process_change
    failures = self.slave.server.entry_ops(entries)
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 226, in __call__
    return self.ins(self.meth, *a)
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 208, in __call__
    raise res
OSError: [Errno 116] Stale file handle: '.gfid/fece7967-616b-4d13-add7-96f6a4022e11/55958721%%BO54CXD7RN'
[2015-07-03 13:36:43.156742] I [syncdutils(/bricks/brick0/master_brick0):220:finalize] <top>: exiting.
[2015-07-03 13:36:43.159702] I [repce(agent):92:service_loop] RepceServer: terminating on reaching EOF.



Slave:
======

[2015-07-03 13:36:38.359909] I [resource(slave):844:service_loop] GLUSTER: slave listening
[2015-07-03 13:36:43.149735] E [repce(slave):117:worker] <top>: call failed: 
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 113, in worker
    res = getattr(self.obj, rmeth)(*in_data[2:])
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 717, in entry_ops
    st = lstat(entry)
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 493, in lstat
    return os.lstat(e)
OSError: [Errno 116] Stale file handle: '.gfid/fece7967-616b-4d13-add7-96f6a4022e11/55958721%%BO54CXD7RN'
[2015-07-03 13:36:43.158221] I [repce(slave):92:service_loop] RepceServer: terminating on reaching EOF.
[2015-07-03 13:36:43.158576] I [syncdutils(slave):220:finalize] <top>: exiting.



Version-Release number of selected component (if applicable):
=============================================================

glusterfs-3.7.1-6.el6rhs.x86_64


How reproducible:
=================
2/2

Steps to Reproduce:
===================
1. Create geo-rep session between Master (3x2) and Slave (3x2)
2. Run the following fops in sequential order and check the arequal after each fop:

2015-07-03 13:03:31,870 INFO run Executing crefi --multi -n 5 -b 5 -d 5 --max=10k --min=5k --random -T 5 -t text --fop=create /mnt/glusterfs 1>/dev/null 2>&1 on wingo.lab.eng.blr.redhat.com
2015-07-03 13:05:53,581 INFO run Executing crefi --multi -n 5 -b 5 -d 5 --max=10k --min=5k --random -T 5 -t text --fop=chmod /mnt/glusterfs 1>/dev/null 2>&1 on wingo.lab.eng.blr.redhat.com
2015-07-03 13:08:17,690 INFO run Executing crefi --multi -n 5 -b 5 -d 5 --max=10k --min=5k --random -T 5 -t text --fop=chown /mnt/glusterfs 1>/dev/null 2>&1 on wingo.lab.eng.blr.redhat.com
2015-07-03 13:10:41,876 INFO run Executing crefi --multi -n 5 -b 5 -d 5 --max=10k --min=5k --random -T 5 -t text --fop=chgrp /mnt/glusterfs 1>/dev/null 2>&1 on wingo.lab.eng.blr.redhat.com
2015-07-03 13:13:06,050 INFO run Executing crefi --multi -n 5 -b 5 -d 5 --max=10k --min=5k --random -T 5 -t text --fop=symlink /mnt/glusterfs 1>/dev/null 2>&1 on wingo.lab.eng.blr.redhat.com
2015-07-03 13:15:37,194 INFO run Executing crefi --multi -n 5 -b 5 -d 5 --max=10k --min=5k --random -T 5 -t text --fop=hardlink /mnt/glusterfs 1>/dev/null 2>&1 on wingo.lab.eng.blr.redhat.com
2015-07-03 13:18:16,751 INFO run Executing crefi --multi -n 5 -b 5 -d 5 --max=10k --min=5k --random -T 5 -t text --fop=truncate /mnt/glusterfs 1>/dev/null 2>&1 on wingo.lab.eng.blr.redhat.com
2015-07-03 13:21:06,530 INFO run Executing crefi --multi -n 5 -b 5 -d 5 --max=10k --min=5k --random -T 5 -t text --fop=rename /mnt/glusterfs 1>/dev/null 2>&1 on wingo.lab.eng.blr.redhat.com

Comment 5 Kotresh HR 2015-07-08 10:14:56 UTC
RCA:

This can happen if a geo-rep worker restart occurs after the RENAMEs have been synced and while the RMDIRs are being synced. The resulting overlap can cause the worker to pick up the older RENAME changelogs and process them again. Re-processing an already synced changelog is not an issue in itself, but because lstat handles only ENOENT and not ESTALE, the worker crashes and comes up again. There is no side effect from the worker going down; it should eventually process the RMDIRs. However, if it hits the existing DHT bug BZ #1235633 while processing the RMDIRs, a few directory entries might not be removed on the slave.

This bug, by itself, is not a blocker candidate.
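For illustration only (the actual patches are linked in the comments below), here is a minimal sketch of the kind of lstat handling the RCA calls for: treating ESTALE the same way as ENOENT, so that re-processing an entry whose parent is already gone does not crash the worker. safe_lstat is a hypothetical name used for this sketch, not a gsyncd symbol.

import os
import errno

# Illustrative sketch only -- not the actual gsyncd patch.
def safe_lstat(path):
    """lstat that reports 'entry is gone' errors instead of raising.

    Returns the stat result on success, or the errno for ENOENT/ESTALE
    (on a gfid aux mount, a removed parent directory surfaces as ESTALE
    rather than ENOENT). Any other error is re-raised.
    """
    try:
        return os.lstat(path)
    except OSError as ex:
        if ex.errno in (errno.ENOENT, errno.ESTALE):
            return ex.errno
        raise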

[root@georep1 .processed]# ls -lrt /var/lib/misc/glusterfsd/master/ssh%3A%2F%2Froot%4010.70.46.101%3Agluster%3A%2F%2F127.0.0.1%3Aslave/0c8bb5be2ffeeb7d4a349b715e9d63fe/.history/.processed
total 1980
....
....
-rw-r--r--. 1 root root    8853 Jul  8 13:29 CHANGELOG.1436342199
-rw-r--r--. 1 root root  146015 Jul  8 13:32 CHANGELOG.1436342545
-rw-r--r--. 1 root root   39498 Jul  8 13:32 CHANGELOG.1436342380
-rw-r--r--. 1 root root 1126400 Jul  8 13:37 archive_201507.tar

Looking at the above changelogs, which are processed in the listed order, CHANGELOG.1436342380 (which contains the RENAMEs) was processed after CHANGELOG.1436342545 (which contains the UNLINKs and RMDIRs). A rough reader for the entry records in these dumps is sketched after them below.

[root@georep1 .processed]# cat CHANGELOG.1436342380
E 8a4d3f6d-351f-47ea-ba2b-3678f3384486 RENAME 4a400d02-cedc-45c3-a748-c2b3da021fe3%2F559c2094%25%251DV5EMEOIG 4a400d02-cedc-45c3-a748-c2b3da021fe3%2F559c24a4%25%251GEXGPWEOU
E 2cddde7f-fb30-4d1d-8015-c4d582615740 RENAME efd58cd2-f88a-4d81-ba90-6010110bbb00%2F559c2094%25%25L9H7U5DTY5 efd58cd2-f88a-4d81-ba90-6010110bbb00%2F559c24a4%25%25YNRZ4DUL2H
E 9606f85a-a470-485d-a54f-a8792b1ef332 RENAME d698d43b-98d1-41b0-a25c-650425ca2c37%2F559c2094%25%253HGQQSN9WR d698d43b-98d1-41b0-a25c-650425ca2c37%2F559c24a4%25%25R61WCN1O9K
E 9f03f1b2-75ca-486d-a8c6-cbf5b5b3fe31 RENAME 4a400d02-cedc-45c3-a748-c2b3da021fe3%2F559c2094%25%25FWX83IWFLV 4a400d02-cedc-45c3-a748-c2b3da021fe3%2F559c24a4%25%25XA4B8C1TPH
E 7344e2b2-9543-4f56-8003-68833bd24052 RENAME d698d43b-98d1-41b0-a25c-650425ca2c37%2F559c2094%25%25KN5EPLIWM7 d698d43b-98d1-41b0-a25c-650425ca2c37%2F559c24a4%25%254MAM1JG9WW
E a1b6e735-883b-43a6-9d0f-95011be1bf25 RENAME efd58cd2-f88a-4d81-ba90-6010110bbb00%2F559c2094%25%25JOJ9DYXA3P efd58cd2-f88a-4d81-ba90-6010110bbb00%2F559c24a4%25%25102BGGFJCJ
....
...


[root@georep1 .processed]# cat CHANGELOG.1436342545
E 1c233d69-6d15-4481-b407-c33492cacf74 UNLINK 262f2254-2ddf-4044-b508-f97020438668%2F559c24a4%25%25D6VBND1RD0
E 730c2463-508c-48a9-a316-4524cafdfeba UNLINK 262f2254-2ddf-4044-b508-f97020438668%2F559c24a4%25%253HVVCK9XZF
E e99f1cb3-d047-4be7-b56d-80bfdbfcf913 UNLINK 6fd3e9bf-fd9c-42c9-94b4-60121b3eb9a5%2F559c22cc%25%25NY0VLNVQ1E
E 0ccaf086-4d4e-49c2-978c-8368cac9548e UNLINK 6fd3e9bf-fd9c-42c9-94b4-60121b3eb9a5%2F559c22cc%25%25S5XG2BTZ6S
E fe4fa8b2-0a51-4b44-a873-a4fef3adc761 UNLINK 6fd3e9bf-fd9c-42c9-94b4-60121b3eb9a5%2F559c22cc%25%25Q0YHVLSX20
E 6fd3e9bf-fd9c-42c9-94b4-60121b3eb9a5 RMDIR 262f2254-2ddf-4044-b508-f97020438668%2Fsymlink_to_files
E 41e255e7-3c12-47bd-9f63-67b70082b904 UNLINK 262f2254-2ddf-4044-b508-f97020438668%2F559c24a4%25%25MSZ5E1NCSP
E bd9f06aa-69ca-4394-817d-e34ad598a6b4 UNLINK 262f2254-2ddf-4044-b508-f97020438668%2F559c24a4%25%25KRYR4HDPT7
E 63ef9152-f5c3-47a8-9361-cc7491a6da87 UNLINK eb0af88a-8d69-4510-bd0d-3b7df2b9481f%2F559c2360%25%25YZWMS04BCG
E 7dc95be4-ada4-4e75-b087-c7967376d81f UNLINK eb0af88a-8d69-4510-bd0d-3b7df2b9481f%2F559c2360%25%259EZ2OSBDQN
E 285963e7-a416-43d4-a7a6-f21372fa435b UNLINK eb0af88a-8d69-4510-bd0d-3b7df2b9481f%2F559c2360%25%25UCOTVAR1ZB
E 7d1f8f7f-046c-419e-9662-8d501f132cfa UNLINK eb0af88a-8d69-4510-bd0d-3b7df2b9481f%2F559c2360%25%25IAMVGG24TO
E eb0af88a-8d69-4510-bd0d-3b7df2b9481f RMDIR 1772ec8d-551a-4174-baa9-5f4aa0f62df6%2Fhardlink_to_files
E 0da5bcc2-b80c-485a-a6fd-26c5e67f58d1 UNLINK 1772ec8d-551a-4174-baa9-5f4aa0f62df6%2F559c24a4%25%25GZXJVYBET3
E 7c61a3d7-1c46-4e0a-945c-7de0e2a6ed85 UNLINK 98b43253-647e-4987-82db-994c367fbead%2F559c22cc%25%25ZTHB0ECQW4
E b8f53c4f-c8bf-4453-b2b2-04adf4cf497a UNLINK 98b43253-647e-4987-82db-994c367fbead%2F559c22cc%25%25EUG1DBOQEJ
E 98b43253-647e-4987-82db-994c367fbead RMDIR 8020acd1-ac67-441a-8884-c5b356679ed3%2Fsymlink_to_files
E 2aebc587-a598-474c-b16d-e4f1573f07a5 UNLINK 7a9a822f-141f-4385-aec4-df587ea46c66%2F559c2360%25%25OGKCITWKQC
....
....
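For reference, a rough reader for the entry (E) records as they appear in the dumps above: fields are space separated, and the parent-gfid/basename arguments are URL-encoded ('%2F' is '/', '%25' is '%'). This is only a sketch for inspecting these processed changelog files as cat shows them above; it is not part of gsyncd.

try:
    from urllib.parse import unquote   # Python 3
except ImportError:
    from urllib import unquote         # Python 2

def parse_entry_record(line):
    # E <gfid> <FOP> <pargfid>/<bname> [<new-pargfid>/<new-bname>]
    parts = line.strip().split(' ')
    return {'gfid': parts[1],
            'fop': parts[2],
            'args': [unquote(p) for p in parts[3:]]}

with open('CHANGELOG.1436342545') as f:
    for line in f:
        if line.startswith('E '):
            rec = parse_entry_record(line)
            print('%s %s' % (rec['fop'], ' '.join(rec['args'])))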

Comment 6 Kotresh HR 2015-08-06 06:01:40 UTC
Upstream Patch (Master):
http://review.gluster.org/11772

Upstream Patch (3.7):
http://review.gluster.org/#/c/11820/

Comment 7 Kotresh HR 2015-08-11 09:08:58 UTC
Merged in upstream (master) and upstream (3.7). Hence moving it to POST.

Comment 9 Kotresh HR 2015-08-13 07:33:40 UTC
Downstream Patch:

https://code.engineering.redhat.com/gerrit/#/c/55052/

Comment 10 Rahul Hinduja 2015-09-03 12:09:11 UTC
Verified with build: glusterfs-3.7.1-14.el7rhgs.x86_64

Ran all the FOPs against both changelog and xsync crawls and did not observe ESTALE. Moving this bug to Verified.

bash-4.3$ python main.py -d "geo_rep" -t "$testcases"
test_1_changelog-test-create (__main__.gluster_tests) ... ok
test_2_changelog-test-chmod (__main__.gluster_tests) ... ok
test_3_changelog-test-chown (__main__.gluster_tests) ... ok
test_4_changelog-test-chgrp (__main__.gluster_tests) ... ok
test_5_changelog-test-symlink (__main__.gluster_tests) ... ok
test_6_changelog-test-hardlink (__main__.gluster_tests) ... ok
test_7_changelog-test-truncate (__main__.gluster_tests) ... ok
test_8_changelog-test-rename (__main__.gluster_tests) ... ok
test_9_changelog-test-remove (__main__.gluster_tests) ... ok
test_10_xsync-test-create (__main__.gluster_tests) ... ok
test_11_xsync-test-chmod (__main__.gluster_tests) ... ok
test_12_xsync-test-chown (__main__.gluster_tests) ... ok
test_13_xsync-test-chgrp (__main__.gluster_tests) ... ok
test_14_xsync-test-symlink (__main__.gluster_tests) ... ok
test_15_xsync-test-hardlink (__main__.gluster_tests) ... ok
test_16_xsync-test-truncate (__main__.gluster_tests) ... ok

Comment 13 errata-xmlrpc 2015-10-05 07:18:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1845.html

