Description of problem:
We deleted an existing dispersed volume named `volume`, then split it into 3 new volumes called `volume1`, `volume2` and `volume3` using the same bricks. A Kubernetes pod was running with the Gluster mount mounted into it. After creating the new volumes, writing to the mount point caused the data to appear on all three volumes. `df` still shows the original volume mounted as `IP:/volume`, yet we observe replication across all three new volumes.

Version-Release number of selected component (if applicable):
- Gluster Server 4.1
- Linux 18.04
- Azure Kubernetes Service 1.11.1

How reproducible:

Steps to Reproduce:
1. Create dispersed volume `volume`
2. Start and mount `volume`
3. Stop and delete `volume`
4. Reuse the bricks to create `volume1`, `volume2` and `volume3`

Actual results:
The original mount point is still active. Data written to it is still replicated.

Expected results:
The mount point should be interrupted, and an error message should indicate that the volume `volume` is not found.

Additional info:
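The steps above can be sketched as a CLI session. Hostnames (`server1` etc.), brick paths, and the mount point are placeholders for illustration, not the reporter's actual environment:

```shell
# Create and start a dispersed volume across 3 bricks.
gluster volume create volume disperse 3 \
    server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/b3
gluster volume start volume

# Mount it on a client (this is the mount handed to the Kubernetes pod).
mount -t glusterfs server1:/volume /mnt/gluster

# Stop and delete the volume while the client mount is still active.
gluster volume stop volume
gluster volume delete volume

# Recreate new volumes on the same bricks ('force' is needed because
# the brick directories were already part of a previous volume).
gluster volume create volume1 server1:/bricks/b1 force
gluster volume create volume2 server2:/bricks/b2 force
gluster volume create volume3 server3:/bricks/b3 force
gluster volume start volume1
gluster volume start volume2
gluster volume start volume3

# Writes through the stale mount still succeed and land on the
# reused bricks instead of failing with "volume not found".
echo test > /mnt/gluster/file
```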
Thanks for the bug report! Interesting observation, and use case. We generally don't recommend reusing bricks. The original mount remains active mainly because of how bricks are identified: the client protocol connects to a brick identified by its 'hostname:/path' combination, so as long as a brick path is never reused, everything works as expected. To catch this case we need ID-based authentication in the handshake. Reducing the severity, since the recommended usage is not to reuse the same bricks.
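The path-based identification described above can be observed on the servers: each brick directory records its owning volume's UUID in an extended attribute, but the pre-fix client/brick handshake matched only on the hostname:/path combination, not on that UUID. A sketch (the brick path is a placeholder):

```shell
# Show the volume-id recorded on a brick directory (hex-encoded UUID).
getfattr -n trusted.glusterfs.volume-id -e hex /bricks/b1

# Compare with what glusterd reports for the new volume that now owns
# this brick. The UUID differs from the deleted volume's, yet a client
# still holding the old graph reconnects purely by hostname:/path,
# which is why the stale mount keeps working.
gluster volume info volume1 | grep 'Volume ID'
```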
REVIEW: https://review.gluster.org/23166 (protocol/handshake: pass volume-id for extra check) posted (#1) for review on master by Amar Tumballi
REVIEW: https://review.gluster.org/23166 (protocol/handshake: pass volume-id for extra check) merged (#10) on master by Amar Tumballi
REVIEW: https://review.gluster.org/23505 (tests: add a pending test case) posted (#1) for review on master by Amar Tumballi
REVISION POSTED: https://review.gluster.org/23505 (tests: add a pending test case) posted (#2) for review on master by Amar Tumballi
The patch is merged; closing this bug now.