Description of problem:
After performing an attach tier operation on an EC volume, running the disk usage command throws the following warning. This issue is not seen with fuse mounts.

[root@dhcp37-196 37.196]# du -sh
du: WARNING: Circular directory structure.
This almost certainly means that you have a corrupted file system.
NOTIFY YOUR SYSTEM MANAGER.
The following directory is part of the cycle:
  ‘./linux-4.7.2’
du: WARNING: Circular directory structure.
This almost certainly means that you have a corrupted file system.
NOTIFY YOUR SYSTEM MANAGER.
The following directory is part of the cycle:
  ‘./files’
87M     .

[root@dhcp37-86 37.63]# gluster v info krk-vol
Volume Name: krk-vol
Type: Tier
Volume ID: 1bf70c56-56e1-4b9e-aeeb-e94a1fc42a28
Status: Started
Number of Bricks: 16
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: 10.70.37.142:/bricks/brick4/ht1
Brick2: 10.70.37.153:/bricks/brick4/ht1
Brick3: 10.70.37.194:/bricks/brick4/ht1
Brick4: 10.70.37.182:/bricks/brick4/ht1
Cold Tier:
Cold Tier Type : Distributed-Disperse
Number of Bricks: 2 x (4 + 2) = 12
Brick5: 10.70.37.182:/bricks/brick0/v1
Brick6: 10.70.37.194:/bricks/brick0/v1
Brick7: 10.70.37.153:/bricks/brick0/v1
Brick8: 10.70.37.142:/bricks/brick0/v1
Brick9: 10.70.37.114:/bricks/brick0/v1
Brick10: 10.70.37.86:/bricks/brick0/v1
Brick11: 10.70.37.182:/bricks/brick1/v1
Brick12: 10.70.37.194:/bricks/brick1/v1
Brick13: 10.70.37.153:/bricks/brick1/v1
Brick14: 10.70.37.142:/bricks/brick1/v1
Brick15: 10.70.37.114:/bricks/brick1/v1
Brick16: 10.70.37.86:/bricks/brick1/v1
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
ganesha.enable: on
features.cache-invalidation: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
nfs-ganesha: enable
cluster.enable-shared-storage: enable

Version-Release number of selected component (if applicable):
# rpm -qa | grep 'gluster'
glusterfs-cli-3.8.3-0.1.git2ea32d9.el7.centos.x86_64
glusterfs-server-3.8.3-0.1.git2ea32d9.el7.centos.x86_64
python-gluster-3.8.3-0.1.git2ea32d9.el7.centos.noarch
glusterfs-client-xlators-3.8.3-0.1.git2ea32d9.el7.centos.x86_64
glusterfs-3.8.3-0.1.git2ea32d9.el7.centos.x86_64
glusterfs-fuse-3.8.3-0.1.git2ea32d9.el7.centos.x86_64
nfs-ganesha-gluster-next.20160813.2f47e8a-1.el7.centos.x86_64
glusterfs-libs-3.8.3-0.1.git2ea32d9.el7.centos.x86_64
glusterfs-api-3.8.3-0.1.git2ea32d9.el7.centos.x86_64
glusterfs-ganesha-3.8.3-0.1.git2ea32d9.el7.centos.x86_64

How reproducible:
2/3

Steps to Reproduce:
1. Create a distributed-disperse volume [2x(4+2)]
2. Create a few files and run a kernel untar
3. Attach a hot tier - [2x2]
4. Perform a lookup or 'du -sh'

Actual results:
du: WARNING: Circular directory structure.

Expected results:
No such warnings should be seen.

Additional info:
sosreports from servers and clients shall be attached.
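For context on why the warning appears: GNU du detects cycles by remembering the (st_dev, st_ino) pair of every directory on the current path, so if readdirp hands back a duplicate or stale inode number, a child directory looks identical to one of its ancestors and the warning fires even though the tree is fine. A minimal sketch of that check (simplified model of du's logic, hypothetical names, operating on fake stat data rather than a real mount):

```python
def find_cycles(children, inodes, root, seen=()):
    """Return directories that would trigger du's circular-structure
    warning: any dir whose (dev, ino) pair already appears on the
    path from the root (simplified model of du's cycle check)."""
    key = inodes[root]
    if key in seen:
        return [root]  # du prints the WARNING for this directory
    found = []
    for child in children.get(root, []):
        found.extend(find_cycles(children, inodes, child, seen + (key,)))
    return found

# Healthy tree: every directory has a unique inode -> no warnings.
children = {"/": ["/dir", "/.trashcan"]}
inodes = {"/": (1, 100), "/dir": (1, 101), "/.trashcan": (1, 102)}
print(find_cycles(children, inodes, "/"))   # []

# Buggy readdirp: a child reuses the root's inode -> warning fires.
inodes["/dir"] = (1, 100)
print(find_cycles(children, inodes, "/"))   # ['/dir']
```

This matches the symptom above: the filesystem is not actually corrupted; the inode numbers returned during the readdirp-driven walk are inconsistent.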
All 3.8.x bugs are now reported against version 3.8 (without .x). For more information, see http://www.gluster.org/pipermail/gluster-devel/2016-September/050859.html
The issue is not related to tiered volumes. Even with a normal volume it can be easily reproduced (it can be reproduced with FSAL_VFS as well):

# mount -t nfs jiffin17:/dis2 /mnt/nfs/2/
# du -h /mnt/nfs/2/
du: WARNING: Circular directory structure.
This almost certainly means that you have a corrupted file system.
NOTIFY YOUR SYSTEM MANAGER.
The following directory is part of the cycle:
  ‘/mnt/nfs/2/.trashcan’
du: WARNING: Circular directory structure.
This almost certainly means that you have a corrupted file system.
NOTIFY YOUR SYSTEM MANAGER.
The following directory is part of the cycle:
  ‘/mnt/nfs/2/dir’
4.0K    /mnt/nfs/2/
# ls /mnt/nfs/2/dir/
# ls /mnt/nfs/2/.trashcan/internal_op/
# du -h /mnt/nfs/2/
4.0K    /mnt/nfs/2/.trashcan/internal_op
8.0K    /mnt/nfs/2/.trashcan
4.0K    /mnt/nfs/2/dir
16K     /mnt/nfs/2/
# du -s /mnt/nfs/2/
16      /mnt/nfs/2/
# du -sh /mnt/nfs/2/
16K     /mnt/nfs/2/

As mentioned above, if you perform a lookup on all the child directories before the readdirp ('du -sh'), it works fine.
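The observation above suggests a scripted workaround: force an explicit lookup on every directory before running du. A sketch (assumption: any stat-driven LOOKUP per directory is enough, as the transcript above indicates; demonstrated here on a scratch tree, where on the affected setup MNT would point at the NFS mount, e.g. /mnt/nfs/2/):

```shell
# Scratch tree standing in for the NFS mount in this sketch.
MNT="$(mktemp -d)"
mkdir -p "$MNT/dir" "$MNT/.trashcan/internal_op"

# Pass 1: stat every directory so each one gets an explicit lookup
# before the readdirp-heavy du walk.
find "$MNT" -type d -exec stat -c '%i' {} + > /dev/null

# Pass 2: du now walks the tree with consistent inode numbers.
DU_OUT="$(du -s "$MNT" 2>&1)"
echo "$DU_OUT"

rm -rf "$MNT"
```

On a healthy local tree this of course never warns; the point is only the ordering, lookups first, then du, which is exactly the sequence that avoided the warning in the transcript above.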
I forgot to add in my previous comment: the issue is not present on v3.
The patch is posted upstream for review: https://review.gerrithub.io/#/c/294597/2