+++ This bug was initially created as a clone of Bug #1395133 +++
Description of problem:
Rename is failing with ENOENT while remove-brick start operation is in progress.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1) Create an EC volume and start it (a distributed-replicate volume can also be used here; the issue is seen with distributed-replicate volumes as well).
2) FUSE mount the volume on a client.
3) Create a large file of size 10 GB on the FUSE mount.
dd if=/dev/urandom of=BIG bs=1024k count=10000
4) Identify the bricks on which the file 'BIG' is located and remove those bricks so that the BIG file gets migrated.
5) While remove-brick start operation is in progress, rename the file from the mount point.
The following error is seen:
mv: cannot move ‘BIG’ to ‘BIG_rename’: No such file or directory
Actual results:
Rename is failing with ENOENT.
Expected results:
Rename should be successful.
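The steps above can be sketched as a shell script. This is a hedged reproduction sketch, not from the original report: it assumes a running Gluster trusted storage pool, the host/volume names are illustrative, and the brick list passed to remove-brick is a hypothetical placeholder. The pathinfo parser is a pure text helper.

```shell
# Sketch of the reproduction, assuming a live Gluster cluster (hypothetical hosts/paths).

# Pure helper: extract POSIX brick paths from a trusted.glusterfs.pathinfo value,
# e.g. "<POSIX(/bricks/b1):host1:/bricks/b1/BIG>" -> "/bricks/b1"
parse_pathinfo() {
    grep -o 'POSIX([^)]*)' | sed 's/POSIX(\(.*\))/\1/'
}

reproduce() {
    # Step 2: FUSE mount the volume on the client.
    mount -t glusterfs 10.70.37.190:/ecvol /mnt/fuse

    # Step 3: create a 10 GB file on the mount.
    dd if=/dev/urandom of=/mnt/fuse/BIG bs=1024k count=10000

    # Step 4: identify the bricks holding BIG via the pathinfo xattr,
    # then remove that full (4+2) subvolume so the file gets migrated.
    getfattr -n trusted.glusterfs.pathinfo /mnt/fuse/BIG
    BRICKS="..."   # placeholder: the bricks reported above
    gluster volume remove-brick ecvol $BRICKS start

    # Step 5: rename while migration is in progress -- fails with ENOENT.
    mv /mnt/fuse/BIG /mnt/fuse/BIG_rename
}
```

Only the helper runs without a cluster; reproduce() is defined but deliberately not invoked here.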
--- Additional comment from Red Hat Bugzilla Rules Engine on 2016-11-15 04:08:20 EST ---
This bug is automatically being proposed for the current release of Red Hat Gluster Storage 3 under active development, by setting the release flag 'rhgs-3.2.0' to '?'.
If this bug should be proposed for a different release, please manually change the proposed release flag.
--- Additional comment from Prasad Desala on 2016-11-15 04:19:48 EST ---
1) We are able to reproduce the issue even with a distributed-replicate volume, without any md-cache settings.
2) After the remove-brick start operation completed, renaming the file was attempted again and succeeded.
Client: 10.70.37.91 ---> mount -t glusterfs 10.70.37.190:/ecvol /mnt/fuse/
[root@dhcp37-190 glusterfs]# gluster v info
Volume Name: ecvol
Volume ID: f90202e8-a36e-4d3d-a0e2-8fa93152c028
Snapshot Count: 0
Number of Bricks: 2 x (4 + 2) = 12
--- Additional comment from Nithya Balachandran on 2016-11-15 05:06:44 EST ---
Looks like the same issue as reported by BZ 1286127. Marking it depends on 1286127 for now.
--- Additional comment from Mohit Agrawal on 2016-11-25 03:55:58 EST ---
The RCA for rename failing with ENOENT is the same as in BZ 1286127.
Rename fails with ENOENT because the file is no longer available on the cached subvolume, which changed during the migration process.
To resolve this, pass a new rename fop (dht_rename2) to dht_rebalance_complete_check in the failure path; it will call dht_rename2 once the migration process completes.
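The fix's logic amounts to "re-issue the rename once migration has completed". Until the patch lands, the same idea can be sketched as a client-side retry loop. This is a generic, hypothetical workaround sketch, not part of the posted patch; the helper name and the suggested usage are assumptions.

```shell
# Generic retry helper (hypothetical workaround, not from the patch):
# re-run a command until it succeeds or attempts run out. This mirrors
# the fix, where dht re-issues the rename after dht_rebalance_complete_check.
retry_until_ok() {
    attempts=$1; shift
    i=0
    while [ "$i" -lt "$attempts" ]; do
        "$@" && return 0
        sleep "${RETRY_DELAY:-1}"   # give the migration time to finish
        i=$((i + 1))
    done
    return 1
}

# Hypothetical usage on the client (paths from the reproduction steps):
# retry_until_ok 30 mv /mnt/fuse/BIG /mnt/fuse/BIG_rename
```

The real fix does this inside DHT itself, so applications never see the transient ENOENT; the loop above only approximates that behavior from outside.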
REVIEW: http://review.gluster.org/15928 (cluster/dht: Rename is failing with ENOENT while migration is in progress) posted (#1) for review on master by MOHIT AGRAWAL (firstname.lastname@example.org)
REVIEW: http://review.gluster.org/15928 (WIP cluster/dht: Rename is failing with ENOENT while migration is in progress) posted (#2) for review on master by MOHIT AGRAWAL (email@example.com)
REVIEW: http://review.gluster.org/15928 (cluster/dht: Rename is failing with ENOENT while migration is in progress) posted (#3) for review on master by MOHIT AGRAWAL (firstname.lastname@example.org)
REVIEW: http://review.gluster.org/15928 (cluster/dht: Rename is failing with ENOENT while migration is in progress) posted (#4) for review on master by MOHIT AGRAWAL (email@example.com)
This update is done in bulk based on the state of the patch and the time since last activity. If the issue is still seen, please reopen the bug.