Bug 999864 - Dist-geo-rep: 'geo-rep delete' is not deleting session details from glusterd store and also doesn't delete geo-rep working dir
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: geo-replication
x86_64 Linux
unspecified Severity medium
: ---
: ---
Assigned To: Venky Shankar
Sudhir D
Depends On:
Reported: 2013-08-22 05:38 EDT by M S Vishwanath Bhat
Modified: 2016-05-31 21:56 EDT (History)
5 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2013-08-22 05:55:11 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description M S Vishwanath Bhat 2013-08-22 05:38:08 EDT
Description of problem:
The gluster v geo master slave_node::slave delete command doesn't delete the geo-rep working dir. It also didn't delete the session details from the glusterd store, i.e. /var/lib/glusterd/geo-replication/master/. However, this dir was deleted from the other node's glusterd store.

Version-Release number of selected component (if applicable):

How reproducible:
I ran into it once. Seems like a regression, but I haven't tried reproducing it.

Steps to Reproduce:
1. Create and start two 2x2 dist-rep volumes (master and slave) and create a geo-rep session between them.
2. Start the session and do some io including some creates and deletes of the same file in a while loop.
3. Stop and then delete the session using the gluster CLI:
gluster v geo master euclid::slave stop
gluster v geo master euclid::slave delete

master_vol_name = master
slave_vol_name = slave
slave_node = euclid

Actual results:

On pythagoras

[root@pythagoras ]# ls -a /var/lib/glusterd/geo-replication/
.  ..  common_secret.pem.pub  gsyncd.conf  gsyncd_template.conf  master  secret.pem  secret.pem.pub

[root@pythagoras ]# ls -a /var/lib/glusterd/geo-replication/master/
.  ..  ssh%3A%2F%2Froot%4010.70.35.90%3Agluster%3A%2F%2F127.0.0.1%3Aslave.status

On ramanujan

[root@ramanujan ~]# ls -a /var/lib/glusterd/geo-replication/
.  ..  gsyncd.conf  gsyncd_template.conf  secret.pem  secret.pem.pub

As you can see, the session dir is deleted on the ramanujan node but not on the pythagoras node.

But the working dirs under /var/run/gluster/ are not deleted on either node.

[root@ramanujan ~]# ls -a /var/run/gluster/
.   changelog-32bb6e3a46ef511ac32bdc895ff0debf.sock  changelog-c443ba4305883538a0b2d92afcff6f89.sock  master
..  changelog-68fa5cc90f61530aea097cdc78c2b376.sock  changelog-d815084b6297ec06ce681a97cd988bdf.sock

[root@ramanujan ~]# ls -a /var/run/gluster/master/
.  ..  ssh%3A%2F%2Froot%4010.70.35.184%3Agluster%3A%2F%2F127.0.0.1%3Aslave3  ssh%3A%2F%2Froot%4010.70.35.90%3Agluster%3A%2F%2F127.0.0.1%3Aslave1

[root@pythagoras ~]# ls -a /var/run/gluster/
.   changelog-32bb6e3a46ef511ac32bdc895ff0debf.sock  changelog-68fa5cc90f61530aea097cdc78c2b376.sock  master
..  changelog-59ddf777397e52a13ba1333653d63854.sock  changelog-d815084b6297ec06ce681a97cd988bdf.sock

[root@pythagoras ~]# ls -a /var/run/gluster/master/
.  ..  ssh%3A%2F%2Froot%4010.70.35.184%3Agluster%3A%2F%2F127.0.0.1%3Aslave3  ssh%3A%2F%2Froot%4010.70.35.90%3Agluster%3A%2F%2F127.0.0.1%3Aslave1
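The long directory names in the listings above are just percent-encoded geo-rep session URLs of the form ssh://root@<slave-ip>:gluster://127.0.0.1:<slave-vol>. A minimal Python sketch to decode an on-disk name back into the readable session URL (the example string is taken from the listing above; the variable names are illustrative, not part of gluster):

```python
from urllib.parse import quote, unquote

# A session directory name as glusterd stores it (from the listing above).
encoded = "ssh%3A%2F%2Froot%4010.70.35.90%3Agluster%3A%2F%2F127.0.0.1%3Aslave"

# Decode to the human-readable session URL.
url = unquote(encoded)
print(url)  # ssh://root@10.70.35.90:gluster://127.0.0.1:slave

# Re-encoding with no characters treated as safe round-trips
# back to the on-disk directory name.
assert quote(url, safe="") == encoded
```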

Expected results:
The /var/lib/glusterd/geo-replication/master directory should be deleted after the geo-rep delete.
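A quick sketch for checking the expected cleanup on a node after the delete (assumes root access on the gluster node; the messages are illustrative):

```shell
# Check whether 'geo-rep delete' removed the session's glusterd store dir.
# When the directory is absent (i.e. cleanup worked), this prints "cleaned up";
# otherwise it lists the stale session files left behind.
dir=/var/lib/glusterd/geo-replication/master
if [ ! -d "$dir" ]; then
  echo "cleaned up"
else
  echo "stale session dir still present:"
  ls -a "$dir"
fi
```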

Additional info:

I have archived all the logs.
Comment 2 M S Vishwanath Bhat 2013-08-22 05:55:11 EDT
The directory /var/run/glusterd/geo-replication/master is left over from the Anshi U5 rpm. Somehow that dir was left behind on one node but deleted from the other. It is a stale dir from a previous upgrade-testing set-up. Closing NOTABUG.
Comment 3 Avra Sengupta 2013-08-22 06:10:18 EDT
The working directory (/var/run/gluster/*) is created by libgfchangelog and is also used to refer to changelog history. Hence geo-rep delete will not delete this information.
