Bug 1539516 - DHT log messages: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
Summary: DHT log messages: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: distribute
Version: 3.12
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On: 1537457
Blocks:
Reported: 2018-01-29 05:30 UTC by Nithya Balachandran
Modified: 2018-03-05 07:14 UTC
CC List: 1 user

Fixed In Version: glusterfs-3.12.6
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1537457
Environment:
Last Closed: 2018-03-05 07:14:08 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Nithya Balachandran 2018-01-29 05:30:03 UTC
+++ This bug was initially created as a clone of Bug #1537457 +++

Description of problem:

Messages like the following are repeated multiple times in the client log files (FUSE mount) for volumes with a single distribute subvolume (1xn volumes):

[2018-01-15 09:45:41.066914] I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-gv0-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
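
For reference, the Holes/overlaps counters come from dht_layout_normalize: a DHT layout carves the 32-bit hash space into one range per subvolume, and normalization counts uncovered gaps (holes) and double-covered spans (overlaps). The sketch below is a self-contained stand-in for that counting (the struct and function are illustrative, not the actual dht-layout.c code); it shows why an entry with no on-disk layout at all reads as Holes=1 overlaps=0:

#include <stdio.h>
#include <stdint.h>

/* Illustrative stand-in for a DHT layout: one hash range per subvolume.
 * Ranges are assumed sorted by start; this is not the real dht_layout_t. */
typedef struct {
    uint32_t start;
    uint32_t stop;
} range_t;

/* Count holes (uncovered hash space) and overlaps, in the spirit of
 * dht_layout_normalize. */
static void count_anomalies(const range_t *r, int n, int *holes, int *overlaps)
{
    *holes = 0;
    *overlaps = 0;
    if (n == 0) {
        /* No usable ranges: the entire hash space is one big hole. */
        *holes = 1;
        return;
    }
    if (r[0].start > 0)
        (*holes)++;
    for (int i = 1; i < n; i++) {
        if (r[i].start <= r[i - 1].stop)
            (*overlaps)++;
        else if (r[i].start > r[i - 1].stop + 1)
            (*holes)++;
    }
    if (r[n - 1].stop < UINT32_MAX)
        (*holes)++;
}

int main(void)
{
    int holes = 0, overlaps = 0;
    /* The '..' entry at the volume root has no layout xattr on disk,
     * so its layout ends up with zero usable ranges: */
    count_anomalies(NULL, 0, &holes, &overlaps);
    printf("Holes=%d overlaps=%d\n", holes, overlaps); /* Holes=1 overlaps=0 */
    return 0;
}

The (null) name and all-zero gfid in the message are the same symptom: nothing was resolved for the '..' entry, so there was nothing to print.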




RCA:

If the DHT subvolume count is 1, dht_readdirp_cbk calls dht_populate_inode_for_dentry for each directory entry returned. This function tries to save the layout in the inode. However, the '..' entry returned when performing an ls -l on the root of the volume has no gfid and no layout on disk, so the function logs this message.
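
The fix tracked in the comments below takes the simple route of skipping '..' when the listed directory is the volume root. A minimal, self-contained sketch of that check follows; the entry type and helper are hypothetical stand-ins (the real code walks a gf_dirent_t list in dht_readdirp_cbk), and only dht_populate_inode_for_dentry's role and the skip condition come from this report:

#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for a directory entry. */
typedef struct {
    const char *d_name;
} entry_t;

/* Stand-in for dht_populate_inode_for_dentry(), which caches an entry's
 * layout in its inode. */
static void populate_inode_for_dentry(const entry_t *e)
{
    printf("caching layout for %s\n", e->d_name);
}

int main(void)
{
    entry_t entries[] = { { "." }, { ".." }, { "file1" } };
    int listing_volume_root = 1; /* assumed: we are listing the root */

    for (size_t i = 0; i < sizeof(entries) / sizeof(entries[0]); i++) {
        /* The essence of the fix: '..' at the volume root has no gfid
         * and no on-disk layout, so skip it instead of trying to cache
         * a layout and logging spurious anomalies. */
        if (listing_volume_root && strcmp(entries[i].d_name, "..") == 0)
            continue;
        populate_inode_for_dentry(&entries[i]);
    }
    return 0;
}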




--- Additional comment from Worker Ant on 2018-01-23 04:37:33 EST ---

REVIEW: https://review.gluster.org/19292 (cluster/dht: Skip '..' for the volume root dir) posted (#1) for review on master by N Balachandran

--- Additional comment from Nithya Balachandran on 2018-01-23 04:38:43 EST ---

Steps to Reproduce:
1. Create a volume with a single brick
2. FUSE-mount the volume and perform an ls -l on the root of the volume (see the sketch below)
3. Check the mount logs
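
As a programmatic equivalent of step 2, a small C program that lists the mount root drives the same readdirp path; the mount point /mnt/gv0 below is an assumed example, not taken from this report:

#include <dirent.h>
#include <stdio.h>

int main(void)
{
    const char *mnt = "/mnt/gv0"; /* assumed FUSE mount point */
    DIR *dir = opendir(mnt);
    struct dirent *de;

    if (!dir) {
        perror("opendir");
        return 1;
    }
    /* Reading the root directory returns '.' and '..' along with the
     * real entries; before the fix, DHT tried to cache a layout for
     * '..' here and logged the anomaly message. */
    while ((de = readdir(dir)) != NULL)
        printf("%s\n", de->d_name);
    closedir(dir);
    return 0;
}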

--- Additional comment from Worker Ant on 2018-01-24 06:11:39 EST ---

COMMIT: https://review.gluster.org/19292 committed in master by "N Balachandran" <nbalacha@redhat.com> with a commit message- cluster/dht: Skip '..' for the volume root dir

dht_populate_inode_for_dentry tries to update the layout
for the '..' entry when listing the root of the volume.
This entry does not correspond to an entry in the volume
and therefore does not have a gfid or a layout on disk,
causing layout processing to fail.

Change-Id: I2b7470e1c5e20d87b5545160697f24d041045140
BUG: 1537457
Signed-off-by: N Balachandran <nbalacha@redhat.com>

Comment 1 Worker Ant 2018-02-08 15:17:29 UTC
REVIEW: https://review.gluster.org/19529 (cluster/dht: Skip '..' for the volume root dir) posted (#2) for review on release-3.12 by N Balachandran

Comment 2 Worker Ant 2018-02-12 10:12:31 UTC
COMMIT: https://review.gluster.org/19529 committed in release-3.12 by "jiffin tony Thottan" <jthottan@redhat.com> with a commit message- cluster/dht: Skip '..' for the volume root dir

dht_populate_inode_for_dentry tries to update the layout
for the '..' entry when listing the root of the volume.
This entry does not correspond to an entry in the volume
and therefore does not have a gfid or a layout on disk,
causing layout processing to fail.

> Change-Id: I2b7470e1c5e20d87b5545160697f24d041045140
> BUG: 1537457
> Signed-off-by: N Balachandran <nbalacha@redhat.com>

Change-Id: I2b7470e1c5e20d87b5545160697f24d041045140
BUG: 1539516
Signed-off-by: N Balachandran <nbalacha@redhat.com>

Comment 3 Jiffin 2018-03-05 07:14:08 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.12.6, please open a new bug report.

glusterfs-3.12.6 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2018-February/033552.html
[2] https://www.gluster.org/pipermail/gluster-users/

