+++ This bug was initially created as a clone of Bug #1545570 +++

Description of problem:
DHT lookup does not handle lookups on 1xn volumes optimally. It calls dht_lookup_everywhere even though it has already checked the only subvol, leading to repeated, redundant lookups for entries.

Version-Release number of selected component (if applicable):

How reproducible:
Consistently

Steps to Reproduce:
1. Create a 1x3 volume and set client-log-level to DEBUG
2. Fuse mount the volume and create some files (touch file-{1..10})
3. Check the mnt log for dht messages

Actual results:
[2018-02-15 09:53:04.799711] D [MSGID: 0] [dht-common.c:2761:dht_lookup] 0-gitmo-dht: Calling fresh lookup for /file-10 on gitmo-replicate-0
[2018-02-15 09:53:04.800179] D [MSGID: 0] [dht-common.c:2344:dht_lookup_cbk] 0-gitmo-dht: fresh_lookup returned for /file-10 with op_ret -1 [No such file or directory]
[2018-02-15 09:53:04.800188] D [MSGID: 0] [dht-common.c:2357:dht_lookup_cbk] 0-gitmo-dht: Entry /file-10 missing on subvol gitmo-replicate-0
[2018-02-15 09:53:04.800196] D [MSGID: 0] [dht-common.c:2128:dht_lookup_everywhere] 0-gitmo-dht: winding lookup call to 1 subvols
[2018-02-15 09:53:04.800670] D [MSGID: 0] [dht-common.c:1930:dht_lookup_everywhere_cbk] 0-gitmo-dht: returned with op_ret -1 and op_errno 2 (/file-10) from subvol gitmo-replicate-0
[2018-02-15 09:53:04.800679] D [MSGID: 0] [dht-common.c:1594:dht_lookup_everywhere_done] 0-gitmo-dht: STATUS: hashed_subvol gitmo-replicate-0 cached_subvol null
[2018-02-15 09:53:04.800684] D [MSGID: 0] [dht-common.c:1655:dht_lookup_everywhere_done] 0-gitmo-dht: There was no cached file and unlink on hashed is not skipped /file-10
[2018-02-15 09:53:04.800691] D [MSGID: 0] [dht-common.c:1658:dht_lookup_everywhere_done] 0-stack-trace: stack-address: 0x7f03e4000d50, gitmo-dht returned -1 error: No such file or directory [No such file or directory]

Expected results:
DHT need not call dht_lookup_everywhere for 1xn volumes.

Additional info:

--- Additional comment from Red Hat Bugzilla Rules Engine on 2018-02-15 04:58:27 EST ---

This bug is automatically being proposed for the release of Red Hat Gluster Storage 3 under active development and open for bug fixes, by setting the release flag 'rhgs-3.4.0' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from Red Hat Bugzilla Rules Engine on 2018-02-15 07:26:53 EST ---

This bug is automatically being provided 'pm_ack+' for the release flag 'rhgs-3.4.0', having been appropriately marked for the release, and having been provided ACK from Development and QE.

--- Additional comment from Red Hat Bugzilla Rules Engine on 2018-02-16 07:26:30 EST ---

Since this bug has been approved for the RHGS 3.4.0 release of Red Hat Gluster Storage 3, through release flag 'rhgs-3.4.0+', and through the Internal Whiteboard entry of '3.4.0', the Target Release is being automatically set to 'RHGS 3.4.0'.
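For illustration, the DEBUG trace in the "Actual results" section above can be condensed into a small standalone C model of the pre-fix flow. The subvol_lookup() helper and the names in it are hypothetical stand-ins, not GlusterFS code: on a 1xn volume the failed fresh lookup on the only subvolume is followed by dht_lookup_everywhere winding a second lookup to that very same subvolume, so a missing entry costs two round trips where one would do.

    /* Standalone model of the redundant lookup flow shown in the DEBUG log
     * above.  subvol_lookup() and its counters are hypothetical stand-ins;
     * the point is only the doubled round trip on a 1xn volume. */
    #include <errno.h>
    #include <stdio.h>

    static int round_trips;

    /* Pretend lookup on a subvolume: the file does not exist anywhere. */
    static int subvol_lookup(const char *subvol, const char *path)
    {
        round_trips++;
        printf("lookup %s on %s -> ENOENT\n", path, subvol);
        return -ENOENT;
    }

    int main(void)
    {
        const char *only_subvol = "replicate-0";   /* the single DHT child */
        const char *path = "/file-10";

        /* 1. Fresh lookup on the hashed subvolume (the only one). */
        int ret = subvol_lookup(only_subvol, path);

        /* 2. Pre-fix behaviour: on ENOENT, dht_lookup_everywhere winds a
         *    lookup to every subvolume -- which is the same subvolume again. */
        if (ret == -ENOENT)
            ret = subvol_lookup(only_subvol, path);

        printf("final op_ret %d; round trips for one missing entry: %d "
               "(1 would suffice)\n", ret, round_trips);
        return 0;
    }

Running it prints the same two-miss sequence the log records for /file-10.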
REVIEW: https://review.gluster.org/19581 (cluster/dht: Handle single dht child in dht_lookup) posted (#1) for review on master by N Balachandran
COMMIT: https://review.gluster.org/19581 committed in master by "Raghavendra G" <rgowdapp> with a commit message- cluster/dht: Handle single dht child in dht_lookup

This patch limits itself to only handling the case where no file (data or linkto) exists on the subvol.

Additional cases to be handled:
1. A linkto file was found on the only child subvol. This currently calls dht_lookup_everywhere, which eventually deletes it. It can be deleted directly as it will not be pointing to a valid subvol.
2. Directory lookups - locking might be unnecessary in some cases.

Change-Id: I940ba34531f2aaee1d36fd9ca45ecfd46be662a4
BUG: 1546620
Signed-off-by: N Balachandran <nbalacha>
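The decision the patch describes can be sketched as a small standalone C model. The fake_dht_conf struct and should_lookup_everywhere() helper below are hypothetical and not the actual dht_lookup change; they only mirror the commit message: when the fresh lookup misses and the volume has a single DHT child, there is nowhere else to search, so the lookup can be unwound with ENOENT instead of winding dht_lookup_everywhere.

    /* Simplified standalone model of the short-circuit described above.
     * The struct and helper names are hypothetical. */
    #include <stdbool.h>
    #include <stdio.h>

    struct fake_dht_conf {
        int subvolume_cnt;  /* number of DHT child subvols (1 for a 1xn volume) */
    };

    /* Returns true when a failed fresh lookup should still fan out to all
     * subvolumes.  With a single child there is nowhere else to look, so
     * the lookup can fail fast with ENOENT. */
    static bool should_lookup_everywhere(const struct fake_dht_conf *conf,
                                         bool fresh_lookup_missed)
    {
        if (!fresh_lookup_missed)
            return false;           /* entry was found; nothing to do */
        if (conf->subvolume_cnt == 1)
            return false;           /* only subvol already checked: fail fast */
        return true;                /* multiple subvols: file may be elsewhere */
    }

    int main(void)
    {
        struct fake_dht_conf one_by_three = { .subvolume_cnt = 1 };
        struct fake_dht_conf two_by_three = { .subvolume_cnt = 2 };

        printf("1x3 volume, ENOENT on hashed subvol -> lookup everywhere? %s\n",
               should_lookup_everywhere(&one_by_three, true) ? "yes" : "no");
        printf("2x3 volume, ENOENT on hashed subvol -> lookup everywhere? %s\n",
               should_lookup_everywhere(&two_by_three, true) ? "yes" : "no");
        return 0;
    }

The linkto-file and directory cases listed in the commit message are not covered by this check and are left for follow-up patches.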
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/