Description of problem:
=======================
Ran the tests, which perform the following FOPs in order: create, chmod, chown, chgrp, symlink, hardlink, truncate, rename, remove. All of the above FOPs succeed and are successfully synced to the slave, but the logs on the Master and Slave are as follows:

Master:
=======
[2015-07-03 13:36:43.154763] E [syncdutils(/bricks/brick0/master_brick0):276:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 165, in main
    main_i()
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 659, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1438, in service_loop
    g3.crawlwrap(oneshot=True)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 580, in crawlwrap
    self.crawl()
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1161, in crawl
    self.changelogs_batch_process(changes)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1070, in changelogs_batch_process
    self.process(batch)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 948, in process
    self.process_change(change, done, retry)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 903, in process_change
    failures = self.slave.server.entry_ops(entries)
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 226, in __call__
    return self.ins(self.meth, *a)
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 208, in __call__
    raise res
OSError: [Errno 116] Stale file handle: '.gfid/fece7967-616b-4d13-add7-96f6a4022e11/55958721%%BO54CXD7RN'
[2015-07-03 13:36:43.156742] I [syncdutils(/bricks/brick0/master_brick0):220:finalize] <top>: exiting.
[2015-07-03 13:36:43.159702] I [repce(agent):92:service_loop] RepceServer: terminating on reaching EOF.

Slave:
======
[2015-07-03 13:36:38.359909] I [resource(slave):844:service_loop] GLUSTER: slave listening
[2015-07-03 13:36:43.149735] E [repce(slave):117:worker] <top>: call failed:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 113, in worker
    res = getattr(self.obj, rmeth)(*in_data[2:])
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 717, in entry_ops
    st = lstat(entry)
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 493, in lstat
    return os.lstat(e)
OSError: [Errno 116] Stale file handle: '.gfid/fece7967-616b-4d13-add7-96f6a4022e11/55958721%%BO54CXD7RN'
[2015-07-03 13:36:43.158221] I [repce(slave):92:service_loop] RepceServer: terminating on reaching EOF.
[2015-07-03 13:36:43.158576] I [syncdutils(slave):220:finalize] <top>: exiting.

(A sketch of the lstat() code path implicated in these tracebacks follows the Steps to Reproduce below.)

Version-Release number of selected component (if applicable):
=============================================================

How reproducible:
=================
2/2

Steps to Reproduce:
===================
1. Create geo-rep session between Master (3x2) and Slave (3x2)
2. Run the following FOPs in sequential order and check the arequal after each FOP:

2015-07-03 13:03:31,870 INFO run Executing crefi --multi -n 5 -b 5 -d 5 --max=10k --min=5k --random -T 5 -t text --fop=create /mnt/glusterfs 1>/dev/null 2>&1 on wingo.lab.eng.blr.redhat.com
2015-07-03 13:05:53,581 INFO run Executing crefi --multi -n 5 -b 5 -d 5 --max=10k --min=5k --random -T 5 -t text --fop=chmod /mnt/glusterfs 1>/dev/null 2>&1 on wingo.lab.eng.blr.redhat.com
2015-07-03 13:08:17,690 INFO run Executing crefi --multi -n 5 -b 5 -d 5 --max=10k --min=5k --random -T 5 -t text --fop=chown /mnt/glusterfs 1>/dev/null 2>&1 on wingo.lab.eng.blr.redhat.com
2015-07-03 13:10:41,876 INFO run Executing crefi --multi -n 5 -b 5 -d 5 --max=10k --min=5k --random -T 5 -t text --fop=chgrp /mnt/glusterfs 1>/dev/null 2>&1 on wingo.lab.eng.blr.redhat.com
2015-07-03 13:13:06,050 INFO run Executing crefi --multi -n 5 -b 5 -d 5 --max=10k --min=5k --random -T 5 -t text --fop=symlink /mnt/glusterfs 1>/dev/null 2>&1 on wingo.lab.eng.blr.redhat.com
2015-07-03 13:15:37,194 INFO run Executing crefi --multi -n 5 -b 5 -d 5 --max=10k --min=5k --random -T 5 -t text --fop=hardlink /mnt/glusterfs 1>/dev/null 2>&1 on wingo.lab.eng.blr.redhat.com
2015-07-03 13:18:16,751 INFO run Executing crefi --multi -n 5 -b 5 -d 5 --max=10k --min=5k --random -T 5 -t text --fop=truncate /mnt/glusterfs 1>/dev/null 2>&1 on wingo.lab.eng.blr.redhat.com
2015-07-03 13:21:06,530 INFO run Executing crefi --multi -n 5 -b 5 -d 5 --max=10k --min=5k --random -T 5 -t text --fop=rename /mnt/glusterfs 1>/dev/null 2>&1 on wingo.lab.eng.blr.redhat.com
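Both tracebacks above bottom out in the lstat() wrapper at syncdutils.py line 493. A minimal sketch of the pre-fix behavior inferred from the traceback (the exact function body is an assumption, not the shipped source): ENOENT is tolerated, everything else propagates.

    import os
    from errno import ENOENT


    def lstat(e):
        # Sketch of the pre-fix wrapper: a missing entry (ENOENT) is
        # tolerated by returning the errno so callers can treat it as
        # "not present", but any other error -- including ESTALE
        # (errno 116) from a stale gfid handle -- propagates up through
        # entry_ops() and kills the worker.
        try:
            return os.lstat(e)
        except OSError as ex:
            if ex.errno == ENOENT:
                return ex.errno
            raise  # ESTALE falls through here, producing the traceback above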
RCA: This can happen if a geo-rep worker switch occurs after the RENAMEs have been synced but while the RMDIRs are being synced. The resulting overlap can cause the worker to pick up the older RENAME changelogs and process them again. Re-processing an already synced changelog is not an issue in itself, but because lstat handles only ENOENT and not ESTALE, the worker crashes and comes back up. There is no side effect from the worker going down; it eventually processes the RMDIRs.

[root@georep1 .processed]# ls -lrt /var/lib/misc/glusterfsd/master/ssh%3A%2F%2Froot%4010.70.46.101%3Agluster%3A%2F%2F127.0.0.1%3Aslave/0c8bb5be2ffeeb7d4a349b715e9d63fe/.history/.processed
total 1980
....
....
-rw-r--r--. 1 root root    8853 Jul  8 13:29 CHANGELOG.1436342199
-rw-r--r--. 1 root root  146015 Jul  8 13:32 CHANGELOG.1436342545
-rw-r--r--. 1 root root   39498 Jul  8 13:32 CHANGELOG.1436342380
-rw-r--r--. 1 root root 1126400 Jul  8 13:37 archive_201507.tar

As the above changelogs were processed in listing order, CHANGELOG.1436342380 (contains RENAMEs) was processed after CHANGELOG.1436342545 (contains RMDIRs).

[root@georep1 .processed]# cat CHANGELOG.1436342380
E 8a4d3f6d-351f-47ea-ba2b-3678f3384486 RENAME 4a400d02-cedc-45c3-a748-c2b3da021fe3%2F559c2094%25%251DV5EMEOIG 4a400d02-cedc-45c3-a748-c2b3da021fe3%2F559c24a4%25%251GEXGPWEOU
E 2cddde7f-fb30-4d1d-8015-c4d582615740 RENAME efd58cd2-f88a-4d81-ba90-6010110bbb00%2F559c2094%25%25L9H7U5DTY5 efd58cd2-f88a-4d81-ba90-6010110bbb00%2F559c24a4%25%25YNRZ4DUL2H
E 9606f85a-a470-485d-a54f-a8792b1ef332 RENAME d698d43b-98d1-41b0-a25c-650425ca2c37%2F559c2094%25%253HGQQSN9WR d698d43b-98d1-41b0-a25c-650425ca2c37%2F559c24a4%25%25R61WCN1O9K
E 9f03f1b2-75ca-486d-a8c6-cbf5b5b3fe31 RENAME 4a400d02-cedc-45c3-a748-c2b3da021fe3%2F559c2094%25%25FWX83IWFLV 4a400d02-cedc-45c3-a748-c2b3da021fe3%2F559c24a4%25%25XA4B8C1TPH
E 7344e2b2-9543-4f56-8003-68833bd24052 RENAME d698d43b-98d1-41b0-a25c-650425ca2c37%2F559c2094%25%25KN5EPLIWM7 d698d43b-98d1-41b0-a25c-650425ca2c37%2F559c24a4%25%254MAM1JG9WW
E a1b6e735-883b-43a6-9d0f-95011be1bf25 RENAME efd58cd2-f88a-4d81-ba90-6010110bbb00%2F559c2094%25%25JOJ9DYXA3P efd58cd2-f88a-4d81-ba90-6010110bbb00%2F559c24a4%25%25102BGGFJCJ
....
...

[root@georep1 .processed]# cat CHANGELOG.1436342545
...
E 1c233d69-6d15-4481-b407-c33492cacf74 UNLINK 262f2254-2ddf-4044-b508-f97020438668%2F559c24a4%25%25D6VBND1RD0
E 730c2463-508c-48a9-a316-4524cafdfeba UNLINK 262f2254-2ddf-4044-b508-f97020438668%2F559c24a4%25%253HVVCK9XZF
E e99f1cb3-d047-4be7-b56d-80bfdbfcf913 UNLINK 6fd3e9bf-fd9c-42c9-94b4-60121b3eb9a5%2F559c22cc%25%25NY0VLNVQ1E
E 0ccaf086-4d4e-49c2-978c-8368cac9548e UNLINK 6fd3e9bf-fd9c-42c9-94b4-60121b3eb9a5%2F559c22cc%25%25S5XG2BTZ6S
E fe4fa8b2-0a51-4b44-a873-a4fef3adc761 UNLINK 6fd3e9bf-fd9c-42c9-94b4-60121b3eb9a5%2F559c22cc%25%25Q0YHVLSX20
E 6fd3e9bf-fd9c-42c9-94b4-60121b3eb9a5 RMDIR 262f2254-2ddf-4044-b508-f97020438668%2Fsymlink_to_files
E 41e255e7-3c12-47bd-9f63-67b70082b904 UNLINK 262f2254-2ddf-4044-b508-f97020438668%2F559c24a4%25%25MSZ5E1NCSP
E bd9f06aa-69ca-4394-817d-e34ad598a6b4 UNLINK 262f2254-2ddf-4044-b508-f97020438668%2F559c24a4%25%25KRYR4HDPT7
E 63ef9152-f5c3-47a8-9361-cc7491a6da87 UNLINK eb0af88a-8d69-4510-bd0d-3b7df2b9481f%2F559c2360%25%25YZWMS04BCG
E 7dc95be4-ada4-4e75-b087-c7967376d81f UNLINK eb0af88a-8d69-4510-bd0d-3b7df2b9481f%2F559c2360%25%259EZ2OSBDQN
E 285963e7-a416-43d4-a7a6-f21372fa435b UNLINK eb0af88a-8d69-4510-bd0d-3b7df2b9481f%2F559c2360%25%25UCOTVAR1ZB
E 7d1f8f7f-046c-419e-9662-8d501f132cfa UNLINK eb0af88a-8d69-4510-bd0d-3b7df2b9481f%2F559c2360%25%25IAMVGG24TO
E eb0af88a-8d69-4510-bd0d-3b7df2b9481f RMDIR 1772ec8d-551a-4174-baa9-5f4aa0f62df6%2Fhardlink_to_files
E 0da5bcc2-b80c-485a-a6fd-26c5e67f58d1 UNLINK 1772ec8d-551a-4174-baa9-5f4aa0f62df6%2F559c24a4%25%25GZXJVYBET3
E 7c61a3d7-1c46-4e0a-945c-7de0e2a6ed85 UNLINK 98b43253-647e-4987-82db-994c367fbead%2F559c22cc%25%25ZTHB0ECQW4
E b8f53c4f-c8bf-4453-b2b2-04adf4cf497a UNLINK 98b43253-647e-4987-82db-994c367fbead%2F559c22cc%25%25EUG1DBOQEJ
E 98b43253-647e-4987-82db-994c367fbead RMDIR 8020acd1-ac67-441a-8884-c5b356679ed3%2Fsymlink_to_files
E 2aebc587-a598-474c-b16d-e4f1573f07a5 UNLINK 7a9a822f-141f-4385-aec4-df587ea46c66%2F559c2360%25%25OGKCITWKQC
....
....
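For readability: each entry in these dumps has the shape "E <gfid> <op> <args>", where each argument is a URL-escaped <parent-gfid>/<basename> pair (%2F is "/", %25 is "%"). A hypothetical decoding snippet, with the field layout inferred purely from the dumps above:

    from urllib.parse import unquote

    # One RENAME entry copied from CHANGELOG.1436342380 above.
    entry = ("E 8a4d3f6d-351f-47ea-ba2b-3678f3384486 RENAME "
             "4a400d02-cedc-45c3-a748-c2b3da021fe3%2F559c2094%25%251DV5EMEOIG "
             "4a400d02-cedc-45c3-a748-c2b3da021fe3%2F559c24a4%25%251GEXGPWEOU")

    etype, gfid, op, old, new = entry.split()
    print(op, gfid)      # RENAME 8a4d3f6d-351f-47ea-ba2b-3678f3384486
    print(unquote(old))  # 4a400d02-.../559c2094%%1DV5EMEOIG
    print(unquote(new))  # 4a400d02-.../559c24a4%%1GEXGPWEOU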
REVIEW: http://review.gluster.org/11772 (geo-rep: Do not crash worker on ESTALE) posted (#1) for review on master by Kotresh HR (khiremat)
COMMIT: http://review.gluster.org/11772 committed in master by Venky Shankar (vshankar)
------
commit 4d2ec1d0d6bebcfb19e0c4ecf741ddd29bb28e58
Author: Kotresh HR <khiremat>
Date:   Tue Jul 28 14:37:47 2015 +0530

    geo-rep: Do not crash worker on ESTALE

    Handle ESTALE returned by lstat gracefully by retrying it.
    Do not crash the worker.

    Change-Id: I2527cd8bd1f7d2428cb4fa3f20782bebaf2df12a
    BUG: 1247529
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: http://review.gluster.org/11772
    Tested-by: Gluster Build System <jenkins.com>
    Tested-by: NetBSD Build System <jenkins.org>
    Reviewed-by: Aravinda VK <avishwan>
    Reviewed-by: Venky Shankar <vshankar>
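The commit message describes the approach: retry lstat on ESTALE instead of letting the exception escape. A minimal sketch of that idea, assuming a simple bounded retry with back-off (the retry count, delay, and structure are illustrative assumptions, not the actual patch):

    import os
    import time
    from errno import ENOENT, ESTALE


    def lstat(e, retries=10, delay=0.1):
        # Post-fix behavior (sketch): ENOENT still maps to a soft failure,
        # but ESTALE is retried instead of crashing the worker. The stale
        # handle typically resolves (or the entry is confirmed gone) within
        # a few attempts, so a bounded retry is sufficient.
        while True:
            try:
                return os.lstat(e)
            except OSError as ex:
                if ex.errno == ENOENT:
                    return ex.errno
                if ex.errno == ESTALE and retries > 0:
                    retries -= 1
                    time.sleep(delay)
                    continue
                raise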
The fix for this BZ is already present in a GlusterFS release. A clone of this BZ has been fixed in a GlusterFS release and closed; hence this mainline BZ is being closed as well.
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user