Description of problem:
Stale stat information is returned when an object is corrupted on a replicated volume.

Version-Release number of selected component (if applicable):
mainline

How reproducible:
always

Steps to Reproduce:
1. Create a replicated volume, enable bitrot, mount the volume, and perform I/O
2. Wait for the files to get signed
3. Corrupt an object (modify the file directly on the brick) on one of the replicas (preferably the first replica)
4. Wait until the scrubber marks the object as corrupted
5. Perform I/O on the file again
6. stat the file to check the object size

Actual results:
Stale (and incorrect) stat information (size, mtime, etc.) is reported

Expected results:
Correct stat information (updated size, mtime, etc.) should be returned

Additional info:
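The steps above can be sketched as a shell session. This is an unverified sketch, not a script from the report: the volume name, node names, brick paths, and mount point are placeholders, and `scrub ondemand` may not exist on older releases, in which case one waits for the scheduled scrub instead.

```shell
# 1. Create a 2-way replicated volume, enable bitrot, mount it, perform I/O
gluster volume create repvol replica 2 node1:/bricks/b1 node2:/bricks/b2
gluster volume start repvol
gluster volume bitrot repvol enable
mount -t glusterfs node1:/repvol /mnt/repvol
dd if=/dev/urandom of=/mnt/repvol/file bs=1M count=10

# 2. Wait for the file to get signed; the signature appears as an xattr
#    (trusted.bit-rot.signature) on the brick copy
getfattr -m . -d -e hex /bricks/b1/file

# 3. Corrupt the object directly on the first replica's brick
echo "garbage" >> /bricks/b1/file

# 4. Wait for (or trigger) scrubbing until the object is marked corrupted
gluster volume bitrot repvol scrub ondemand
gluster volume bitrot repvol scrub status

# 5-6. Perform I/O on the file again, then stat it from the mount point
dd if=/dev/urandom of=/mnt/repvol/file bs=1M count=20 conv=notrunc
stat /mnt/repvol/file
```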
REVIEW: http://review.gluster.org/13120 (features/bitrot: add check for corrupted object in f{stat}) posted (#3) for review on master by Venky Shankar (vshankar)
COMMIT: http://review.gluster.org/13120 committed in master by Venky Shankar (vshankar)
------
commit d5d6918ce7dc9f54496da435af546611dfbe7d5c
Author: Venky Shankar <vshankar>
Date:   Wed Dec 30 14:56:12 2015 +0530

    features/bitrot: add check for corrupted object in f{stat}

    The check for corrupted objects is done by the bitrot stub component for
    data operations, and such fops are denied processing by returning EIO.
    These checks were not done for operations such as get/set extended
    attribute, stat and the like - IOW, the stub only blocked pure data
    operations. However, it is necessary to have these checks for certain
    other fops as well, most importantly stat (and fstat). This is because
    clients could otherwise get stale stat information (such as size and
    {a,c,m}time), resulting in incorrect operation of applications that
    rely on these fields. Note that replication would take care of fetching
    good (and correct) data, but the staleness of stat information could
    lead to data inconsistencies (e.g., rebalance, tier).

    Change-Id: I5a22780373b182a13f8d2c4ca6b7d9aa0ffbfca3
    BUG: 1296399
    Signed-off-by: Venky Shankar <vshankar>
    Reviewed-on: http://review.gluster.org/13120
    Reviewed-by: Kotresh HR <khiremat>
    Reviewed-by: mohammed rafi kc <rkavunga>
    Reviewed-by: Raghavendra Bhat <raghavendra>
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/ [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user