Description of problem:
By default the slave volume can be mounted read-write, and users can modify files on it. This leads to a gfid mismatch, and geo-replication then fails for the affected files.

Version-Release number of selected component (if applicable): Any

How reproducible: Always

Steps to Reproduce:
1. Create a geo-replication session.
2. Modify a file on the slave.
3. Modify the same file on the master; changes are no longer propagated.

Actual results:
Several failure modes:
- Modifications on the master are not propagated to the slave.
- Deletions on the master leave the file in place on the slave.
- Renames on the master rename the file on the slave, but the slave file keeps its own content.

Expected results:
It should not be possible to modify files on a slave volume while a geo-replication session is created or running for that volume.
Upstream patches:
https://review.gluster.org/#/c/16854/
https://review.gluster.org/#/c/16855/
REVIEW: https://review.gluster.org/16854 (performance/write-behind: Honor the client pid set) posted (#2) for review on master by Kotresh HR (khiremat)
REVIEW: https://review.gluster.org/16855 (features/read-only: Allow internal clients to r/w) posted (#2) for review on master by Kotresh HR (khiremat)
COMMIT: https://review.gluster.org/16854 committed in master by Raghavendra G (rgowdapp)
------
commit b9e1c911833ca1916055622e5265672d5935d925
Author: Kotresh HR <khiremat>
Date:   Mon Mar 6 10:34:05 2017 -0500

    performance/write-behind: Honor the client pid set

    The write-behind xlator does not honor the client pid being set: it
    does not pass down the client pid saved in 'frame->root->pid'. This
    patch fixes that.

    Change-Id: I838dcf43f56d6d0aa1d2c88811a2b271d9e88d05
    BUG: 1430608
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: https://review.gluster.org/16854
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Vijay Bellur <vbellur>
    Reviewed-by: Raghavendra G <rgowdapp>
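To make the one-line summary concrete: write-behind acknowledges a write and issues it later in the background, and the frame it builds for that background operation must inherit the pid recorded in 'frame->root->pid'. Below is a minimal, self-contained C sketch of the before/after behavior; every struct and function name in it is an illustrative model, not the actual GlusterFS source.

    #include <stdio.h>

    /* Illustrative stand-ins for GlusterFS's frame->root->pid chain. */
    typedef struct { int pid; } stack_model_t;              /* models frame->root  */
    typedef struct { stack_model_t *root; } frame_model_t;  /* models call_frame_t */

    /* Buggy behavior: the frame for the background write gets a default
     * pid, dropping the pid that identifies the originating client. */
    static void make_bg_frame_buggy(frame_model_t *bg, stack_model_t *bg_root,
                                    const frame_model_t *orig) {
        (void)orig;                       /* client pid ignored */
        bg_root->pid = 0;
        bg->root = bg_root;
    }

    /* Fixed behavior, mirroring the intent of the patch: honor the pid. */
    static void make_bg_frame_fixed(frame_model_t *bg, stack_model_t *bg_root,
                                    const frame_model_t *orig) {
        bg_root->pid = orig->root->pid;   /* pass the saved pid down */
        bg->root = bg_root;
    }

    int main(void) {
        stack_model_t root = { .pid = -1 };   /* negative pid: internal client */
        frame_model_t orig = { .root = &root };
        stack_model_t r1, r2;
        frame_model_t buggy, fixed;

        make_bg_frame_buggy(&buggy, &r1, &orig);
        make_bg_frame_fixed(&fixed, &r2, &orig);
        printf("buggy pid=%d, fixed pid=%d\n", buggy.root->pid, fixed.root->pid);
        return 0;
    }

In this model the buggy path reports pid 0, so any xlator below write-behind that keys off the client pid (such as read-only in the companion patch) would treat an internal client like an ordinary one.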
REVIEW: https://review.gluster.org/16855 (features/read-only: Allow internal clients to r/w) posted (#3) for review on master by Kotresh HR (khiremat)
REVIEW: https://review.gluster.org/16855 (features/read-only: Allow internal clients to r/w) posted (#4) for review on master by Kotresh HR (khiremat)
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/
REVIEW: https://review.gluster.org/16855 (features/read-only: Allow internal clients to r/w) posted (#5) for review on master by Kotresh HR (khiremat)
REVIEW: https://review.gluster.org/16855 (features/read-only: Allow internal clients to r/w) posted (#6) for review on master by Kotresh HR (khiremat)
COMMIT: https://review.gluster.org/16855 committed in master by Jeff Darcy (jeff.us)
------
commit 9ab249130a5dd442044e787f1e171e7a17839906
Author: Kotresh HR <khiremat>
Date:   Mon Mar 6 10:19:54 2017 -0500

    features/read-only: Allow internal clients to r/w

    Setting the "read-only" volume option makes the volume read-only, but
    it also makes it read-only to Gluster internal clients such as gsyncd,
    self-heal, bitd, and rebalance, in which case all internal operations
    fail. This patch allows internal clients to read and write even when
    the "read-only" option is set.

    Change-Id: I8110e8d9eac8def403bb29f235000ddc79eaa433
    BUG: 1430608
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: https://review.gluster.org/16855
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Karthik U S <ksubrahm>
    Reviewed-by: Amar Tumballi <amarts>
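A minimal model of what this change buys geo-replication, assuming (as the commit message implies) that internal clients such as gsyncd present a reserved, negative client pid; the constant, helper, and function below are illustrative, not the xlator's actual code:

    #include <errno.h>
    #include <stdio.h>

    /* Assumption for the example: internal clients use negative pids;
     * the exact value is illustrative. */
    #define MODEL_PID_GSYNCD (-1)

    static int is_internal_client(int client_pid) {
        return client_pid < 0;
    }

    /* Models a write fop reaching the read-only xlator: external clients
     * are rejected with EROFS, internal clients pass through so that
     * geo-replication keeps working against a read-only slave. */
    static int ro_writev_model(int client_pid) {
        if (!is_internal_client(client_pid))
            return -EROFS;
        return 0;
    }

    int main(void) {
        printf("gsyncd:      %d\n", ro_writev_model(MODEL_PID_GSYNCD)); /* 0 = allowed */
        printf("user client: %d\n", ro_writev_model(1234));             /* rejected    */
        return 0;
    }

Combined with the write-behind fix above, which keeps that pid intact on the way down the stack, this allows the "read-only" option to be set on the slave volume so users cannot tamper with it while gsyncd continues to replicate, which is the expected behavior described in this report.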
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.12.0, please open a new bug report.

glusterfs-3.12.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-September/000082.html
[2] https://www.gluster.org/pipermail/gluster-users/