+++ This bug was initially created as a clone of Bug #1126843 +++

Description of problem:
In a two-way geo-replication environment, we create one more session from the original slave to the original master, and while doing so we run gsec_create on the original slave for the original master, which overwrites the content of the common_secret.pem.pub file. This also applies in a disaster-recovery fail-over/fail-back situation, when we create the fail-back session from the original slave to the original master.

How reproducible:
Always

Steps to Reproduce:
Explained in the description.

Actual results:
The content of the common_secret.pem.pub file is overwritten.

Expected results:
We should avoid overwriting it; perhaps use one file per cluster.

Additional info:
Two-way geo-replication environment:

Master volume: master1, Slave volume: slave1

    master1                            slave1
    Node1:Brick1 ----- Active ---->    Node1:Brick1
    Node2:Brick2 ---- Passive ---->    Node2:Brick2

Master volume: master2, Slave volume: slave2

    slave2                             master2
    Node1:Brick3 <----- Active ----    Node1:Brick3
    Node2:Brick4 <---- Passive ----    Node2:Brick4
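For clarity, here is a minimal sketch of the command sequence that triggers the overwrite, using the standard geo-replication CLI; the host names (nodeA1, nodeB1) are illustrative placeholders, not from the original report:

    # Session 1: master1 (cluster A) -> slave1 (cluster B)
    # Run on a cluster-A node; gsec_create writes common_secret.pem.pub
    # under /var/lib/glusterd/ with the cluster's public keys.
    gluster system:: execute gsec_create
    gluster volume geo-replication master1 nodeB1::slave1 create push-pem

    # Session 2 (reverse direction): master2 (cluster B) -> slave2 (cluster A)
    # Run on a cluster-B node; this gsec_create regenerates cluster B's
    # common_secret.pem.pub, and whichever push-pem runs later clobbers
    # the file the other session left behind on the peer cluster.
    gluster system:: execute gsec_create
    gluster volume geo-replication master2 nodeA1::slave2 create push-pem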
REVIEW: http://review.gluster.org/9460 (geo-rep: Handle copying of common_secret.pem.pub to slave correctly.) posted (#1) for review on master by Kotresh HR (khiremat)
REVIEW: http://review.gluster.org/9460 (geo-rep: Handle copying of common_secret.pem.pub to slave correctly.) posted (#2) for review on master by Kotresh HR (khiremat)
REVIEW: http://review.gluster.org/9460 (geo-rep: Handle copying of common_secret.pem.pub to slave correctly.) posted (#3) for review on master by Kotresh HR (khiremat)
COMMIT: http://review.gluster.org/9460 committed in master by Venky Shankar (vshankar)
------
commit f3ad194918dbbf00dcc9aebb226728294161ed7a
Author: Kotresh HR <khiremat>
Date:   Fri Jan 16 14:32:09 2015 +0530

geo-rep: Handle copying of common_secret.pem.pub to slave correctly.

Current behaviour:
1. Geo-replication gsec_create creates a common_secret.pem.pub file containing the public keys of all the nodes of the master cluster, in the location /var/lib/glusterd/.
2. Geo-replication create push-pem copies common_secret.pem.pub to the same location on all the slave nodes, with the same name.

Problem:
Wrong public keys might get copied onto slave nodes when multiple geo-replication sessions are set up simultaneously. E.g., a geo-rep session is established from Node1 (vol1: master) to Node2 (vol2: slave), and one more geo-rep session where Node2 (vol3) becomes master to Node3 (vol4), as below:

Session1: Node1 (vol1) ---> Node2 (vol2)
Session2: Node2 (vol3) ---> Node3 (vol4)

If the steps followed to create both geo-replication sessions are as follows, wrong public keys are copied onto Node3 from Node2:
1. gsec_create is done on Node1 (vol1) - Session1.
2. gsec_create is done on Node2 (vol3) - Session2.
3. create push-pem is done on Node1 - Session1. This overwrites the common_secret.pem.pub on Node2 created by gsec_create in the second step.
4. create push-pem on Node2 (vol3) copies the overwritten common_secret.pem.pub keys to Node3 - Session2.

Consequence:
Session2 fails to start with "Permission denied" because of the wrong public keys.

Solution:
On geo-rep create push-pem, don't copy the common_secret.pem.pub file with the same name onto all slave nodes; prefix the master and slave volume names to the filename.

NOTE: This changes the manual steps to be followed to set up non-root geo-replication (mountbroker). To copy the SSH public keys, two extra arguments need to be passed:

set_geo_rep_pem_keys.sh <mountbroker_user> <master vol name> <slave vol name>

Path to set_geo_rep_pem_keys.sh:
Source installation: /usr/local/libexec/glusterfs/set_geo_rep_pem_keys.sh
RPM installation: /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh

Change-Id: If38cd4e6f58d674d5fe2d93da15803c73b660c33
BUG: 1183229
Signed-off-by: Kotresh HR <khiremat>
Reviewed-on: http://review.gluster.org/9460
Reviewed-by: Aravinda VK <avishwan>
Tested-by: Gluster Build System <jenkins.com>
Reviewed-by: Venky Shankar <vshankar>
Tested-by: Venky Shankar <vshankar>
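With this change in place, the non-root (mountbroker) manual step looks like the sketch below. The mountbroker user (geoaccount) and the volume names are hypothetical, and the exact prefixed filename is an assumption based on the commit's description of prefixing the master and slave volume names:

    # On the slave node, as root, after 'create push-pem' on the master
    # (hypothetical names: user geoaccount, volumes mastervol and slavevol):
    /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh geoaccount mastervol slavevol

    # The pushed key file is now per volume pair rather than one shared
    # name, e.g. (assumed naming scheme):
    #   mastervol_slavevol_common_secret.pem.pub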
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report. glusterfs-3.7.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user