Description of problem:
This bug was reported by Nithya.

Create a pure replicate volume and enable the following options:

Volume Name: xvol
Type: Replicate
Volume ID: 095d6083-ea82-4ec9-a3a9-498fbd5f8dbe
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.122.7:/bricks/brick1/xvol-1
Brick2: 192.168.122.7:/bricks/brick1/xvol-2
Brick3: 192.168.122.7:/bricks/brick1/xvol-3
Options Reconfigured:
server.event-threads: 4
client.event-threads: 4
performance.parallel-readdir: on
performance.readdir-ahead: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

FUSE mount using:
mount -t glusterfs -o lru-limit=500 -s 192.168.122.7:/xvol /mnt/g1

mkdir /mnt/g1/dirdd

From terminal 1:
cd /mnt/g1/dirdd
while (true); do ls -lR dirdd; done

From terminal 2:
while true; do dd if=/dev/urandom of=/mnt/g1/dirdd/1G.file bs=1M count=1; rm -f /mnt/g1/dirdd/1G.file; done

With performance.parallel-readdir on, ls runs into ESTALE errors. With performance.parallel-readdir off, no errors are seen. Note that both the ls and rm loops run on the same mount point.

Version-Release number of selected component (if applicable):

How reproducible:
Consistently.

Steps to Reproduce:
1. Create a 1 x 3 replicate volume with performance.parallel-readdir enabled and FUSE-mount it.
2. In one terminal, run ls -lR in a loop on a directory of the mount.
3. In another terminal, repeatedly create and delete a file in the same directory.

Actual results:
ls runs into ESTALE errors.

Expected results:
ls should not run into ESTALE errors.

Additional info:
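The steps above can be collected into a single reproduction sketch. This is only a sketch, not the reporter's exact script: it assumes the ls loop is meant to recurse into dirdd from the mount root (the description changes into /mnt/g1/dirdd but then lists dirdd, which would fail with ENOENT rather than ESTALE); the paths, volume name and mount options are the ones quoted in this report.

# Reproduction sketch (assumptions noted above); run on the client host.
mount -t glusterfs -o lru-limit=500 -s 192.168.122.7:/xvol /mnt/g1
mkdir -p /mnt/g1/dirdd

# Terminal 1: constant readdir load on the directory.
cd /mnt/g1
while true; do ls -lR dirdd; done

# Terminal 2: repeatedly create and delete a file in the same directory,
# so entries keep appearing and vanishing underneath the readdir loop.
while true; do
    dd if=/dev/urandom of=/mnt/g1/dirdd/1G.file bs=1M count=1
    rm -f /mnt/g1/dirdd/1G.file
done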
This bug has been moved to https://github.com/gluster/glusterfs/issues/841 and will be tracked there from now on. Visit the GitHub issue for further details.