The downstream patch is merged now.
*** Bug 1331164 has been marked as a duplicate of this bug. ***
Build Used: glusterfs-server-3.12.2-7.el7rhgs.x86_64
Verified the below scenarios:
Scenario 1:
1. Create a 1x2 volume and disable shd (the self-heal daemon).
2. Create a FILE and chown it to user-1.
3. Kill one brick.
4. As user-1 (non-root), chmod the FILE so that there is a pending metadata heal.
5. Bring the down brick back up.
6. As another user (user-2), access the same FILE.
7. Healing should be successful.
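The steps above can be sketched roughly as follows. This is a minimal, hedged example, not the exact commands used for verification: the volume name (testvol), host/brick paths (server1:/bricks/b{1,2}), mount point (/mnt/vol), and user names are assumptions.

```shell
# Create a 1x2 replicate volume and disable the self-heal daemon (shd).
gluster volume create testvol replica 2 server1:/bricks/b1 server1:/bricks/b2 force
gluster volume start testvol
gluster volume set testvol self-heal-daemon off

mount -t glusterfs server1:/testvol /mnt/vol

# Create FILE and hand it to user-1.
touch /mnt/vol/FILE
chown user-1 /mnt/vol/FILE

# Kill one brick (here: the glusterfsd process serving /bricks/b2).
kill -9 "$(ps -ef | grep '[g]lusterfsd.*bricks/b2' | awk '{print $2}')"

# As user-1 (non-root), chmod FILE so a metadata heal is pending
# on the surviving brick.
su - user-1 -c 'chmod 600 /mnt/vol/FILE'

# Bring the killed brick back up.
gluster volume start testvol force

# As user-2, access the file; this lookup should trigger the metadata heal.
su - user-2 -c 'stat /mnt/vol/FILE'

# Should report no entries pending once the heal completes.
gluster volume heal testvol info
```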
Scenario 2:
1. Create a 2x3 volume and disable shd.
2. Create 500 files.
3. chown the first 250 files to "qa_func".
4. chown the last 250 files to "qa_perf".
5. Kill one of the bricks.
6. As user "qa_func", change permissions on the first 250 files.
7. As user "qa_perf", change permissions on the last 250 files.
8. Bring the down brick back up.
9. Do a lookup as a third user ("qa_all") on all the files.
10. Healing should be successful.
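The bulk file operations in this scenario can be sketched as below. Again a hedged sketch: it assumes a 2x3 distribute-replicate volume named testvol is already created and mounted at /mnt/vol, and that qa_func, qa_perf, and qa_all exist as local users (the user names come from the report; everything else is assumed).

```shell
# Assumes testvol (2x3) is mounted at /mnt/vol with shd disabled.
cd /mnt/vol
touch f{1..500}

chown qa_func f{1..250}      # first 250 files
chown qa_perf f{251..500}    # last 250 files

# Kill one brick here, e.g. one glusterfsd PID taken
# from 'gluster volume status testvol'.

# Non-root permission changes while the brick is down,
# leaving pending metadata heals.
su - qa_func -c 'chmod 640 /mnt/vol/f{1..250}'
su - qa_perf -c 'chmod 640 /mnt/vol/f{251..500}'

# Bring the brick back up.
gluster volume start testvol force

# A lookup from a third user should trigger the client-side metadata heal.
su - qa_all -c 'stat /mnt/vol/f{1..500} > /dev/null'

# Expect zero entries pending once healing completes.
gluster volume heal testvol info
```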
In both scenarios, healing was successful after a lookup from another user.
Changing status to Verified.
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.