Bug 1620580

Summary: Deleted a volume and created new volumes with similar but not identical names; the Kubernetes pod keeps running without errors and it is still possible to write to the Gluster mount
Product: [Community] GlusterFS
Component: core
Version: mainline
Hardware: Unspecified
OS: Linux
Status: CLOSED CURRENTRELEASE
Severity: low
Priority: low
Target Milestone: ---
Target Release: ---
Reporter: jimmybob-leon
Assignee: bugs <bugs>
CC: bugs, pasik, sunkumar
Keywords: Reopened, StudentProject, Triaged
Doc Type: If docs needed, set a value
Type: Bug
Regression: ---
Mount Type: ---
Last Closed: 2020-02-10 17:48:38 UTC

Description jimmybob-leon 2018-08-23 08:46:47 UTC
Description of problem:
We deleted an existing dispersed volume named `volume` and then split it into three new volumes called `volume1`, `volume2` and `volume3`, reusing the same bricks. A Kubernetes pod was running with the Gluster mount mounted into it. After creating the new volumes, I wrote to the mount point and the data appeared on all three new volumes. `df` still shows the original volume mounted as `IP:/volume`, yet writes land on all three new volumes.
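For reference, this is roughly how we observed the behaviour (the mount point and brick paths below are illustrative, not the exact ones from our cluster):

# still reports the deleted volume as the mount source
df -h /mnt/gluster

# a write through the old mount...
echo probe > /mnt/gluster/probe.txt

# ...shows up as fragments under the reused bricks, which now belong
# to volume1, volume2 and volume3
ls /bricks/b1 /bricks/b2 /bricks/b3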


Version-Release number of selected component (if applicable):
- 4.1 Gluster Server
- Linux 18.04
- Azure Kubernetes Service 1.11.1


How reproducible:
Steps to Reproduce:
1. Create a dispersed volume named `volume`.
2. Start and mount `volume`.
3. Stop and delete `volume`.
4. Reuse the same bricks to create `volume1`, `volume2` and `volume3` (see the command sketch below).
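
A rough command-line sketch of the steps above. The host name, brick paths and disperse geometry are examples only; reusing the bricks may additionally require clearing the old volume-id xattrs on the brick directories, beyond just `force`:

gluster volume create volume disperse 3 redundancy 1 \
    server1:/bricks/b1 server1:/bricks/b2 server1:/bricks/b3 force
gluster volume start volume
mount -t glusterfs server1:/volume /mnt/gluster

gluster volume stop volume
gluster volume delete volume

# reuse the same bricks for the new volumes
gluster volume create volume1 server1:/bricks/b1 force
gluster volume create volume2 server1:/bricks/b2 force
gluster volume create volume3 server1:/bricks/b3 force
gluster volume start volume1
gluster volume start volume2
gluster volume start volume3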

Actual results:
The original mount point is still active, and data written to it still shows up on all three new volumes.

Expected results:
The mount should be interrupted, and an error message should indicate that volume `volume` no longer exists.

Additional info:

Comment 1 Amar Tumballi 2018-09-18 08:45:56 UTC
Thanks for the bug report! Interesting observation and use case. We generally don't recommend reusing bricks.

Interestingly, the original mount is still active, mainly because of how bricks are mapped: the client protocol connects to a brick identified by its 'hostname:/path' combination, and as long as a brick path is never reused, this always works as expected. Once the path is reused by a new volume, the old client keeps connecting to it successfully.

We need to add ID-based authentication (the volume-id) to the handshake.
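
For context, the identity mismatch such a check would catch can be seen by comparing the volume-id xattr stored on the brick root with the ID reported by `gluster volume info` (the brick path and volume name below are examples):

# ID of the volume that currently owns the brick
getfattr -n trusted.glusterfs.volume-id -e hex /bricks/b1

# ID glusterd reports for the new volume that now uses that brick
gluster volume info volume1 | grep -i 'volume id'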


Reducing the severity, since the recommended usage is not to reuse the same bricks.

Comment 3 Worker Ant 2019-08-06 10:58:44 UTC
REVIEW: https://review.gluster.org/23166 (protocol/handshake: pass volume-id for extra check) posted (#1) for review on master by Amar Tumballi

Comment 5 Worker Ant 2019-09-30 17:24:50 UTC
REVIEW: https://review.gluster.org/23166 (protocol/handshake: pass volume-id for extra check) merged (#10) on master by Amar Tumballi
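
For anyone who wants to inspect the merged change, it can be located in a glusterfs checkout roughly like this (commit subject taken from the review link above; the exact commit hash is not reproduced here):

git clone https://github.com/gluster/glusterfs.git
cd glusterfs
git log --oneline --grep='pass volume-id for extra check'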

Comment 6 Worker Ant 2019-10-01 11:06:03 UTC
REVIEW: https://review.gluster.org/23505 (tests: add a pending test case) posted (#1) for review on master by Amar Tumballi

Comment 7 Worker Ant 2019-10-02 08:23:27 UTC
REVISION POSTED: https://review.gluster.org/23505 (tests: add a pending test case) posted (#2) for review on master by Amar Tumballi

Comment 8 Sunny Kumar 2020-02-10 17:48:38 UTC
The patch is merged; closing this bug now.