Bug 1583565

Summary: [distribute]: Excessive 'dict is null' errors in geo-rep logs
Product: [Community] GlusterFS
Component: distribute
Version: mainline
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: unspecified
Reporter: Mohit Agrawal <moagrawa>
Assignee: Mohit Agrawal <moagrawa>
CC: bugs, moagrawa, rallan, rhs-bugs, sankarshan, storage-qa-internal, tdesala
Hardware: Unspecified
OS: Unspecified
Fixed In Version: glusterfs-5.0
Clone Of: 1581553
Bug Depends On: 1581553
Type: Bug
Last Closed: 2018-10-23 15:10:12 UTC

Comment 1 Worker Ant 2018-05-29 09:31:27 UTC
REVIEW: https://review.gluster.org/20096 (dht: Excessive 'dict is null' logs in dht_revalidate_cbk) posted (#1) for review on master by MOHIT AGRAWAL

Comment 2 Worker Ant 2018-05-29 17:14:13 UTC
COMMIT: https://review.gluster.org/20096 committed in master by "MOHIT AGRAWAL" <moagrawa> with a commit message- dht: Excessive 'dict is null' logs in dht_revalidate_cbk

Problem: In case of an error (ESTALE/ENOENT), dht_revalidate_cbk
         throws a "dict is null" error because the xattr dictionary
         is not available.

Solution: To avoid these logs, update the condition in dht_revalidate_cbk
          and dht_lookup_dir_cbk so the message is skipped on such errors.

BUG: 1583565
Change-Id: Ife6b3eeb6d91bf24403ed3100e237bb5d15b4357
fixes: bz#1583565
Signed-off-by: Mohit Agrawal <moagrawa>
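
For illustration only, below is a minimal sketch in C of the kind of guard the commit message describes: the "dict is null" warning is emitted only when the xattr dictionary is missing and the lookup did not fail with ESTALE/ENOENT. The helper name, its arguments, and the use of DHT_MSG_DICT_GET_FAILED are assumptions following common DHT conventions; this is not the exact upstream diff.

    #include <errno.h>
    /* Assumes this lives under xlators/cluster/dht, where xlator.h and
     * dht-messages.h (DHT_MSG_DICT_GET_FAILED) are already available. */

    /* Hypothetical helper illustrating the updated condition. */
    static void
    dht_warn_if_dict_missing (xlator_t *this, int op_errno, dict_t *xattr)
    {
            /* Before the fix, a NULL xattr always produced a
             * "dict is null" log, even when the lookup itself failed
             * with ESTALE/ENOENT, where no xattrs are expected.
             * With the updated condition, those error replies are
             * no longer logged. */
            if (!xattr && (op_errno != ESTALE) && (op_errno != ENOENT)) {
                    gf_msg (this->name, GF_LOG_WARNING, 0,
                            DHT_MSG_DICT_GET_FAILED, "dict is null");
            }
    }

The same guard would apply in both dht_revalidate_cbk and dht_lookup_dir_cbk, which is why the commit message names both callbacks.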

Comment 3 Shyamsundar 2018-10-23 15:10:12 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still present with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/