Description of problem:
After add-brick, geo-rep should create a working_dir for the added bricks in /var/run/gluster/<volname>/ssh%3A%2F%2Froot%40<slave-host>%3Agluster%3A%2F%2F127.0.0.1%3A<slave-vol>/ .

Version-Release number of selected component (if applicable):
3.4.0.12rhs.beta4-1.el6rhs.x86_64

How reproducible:
Happens every time

Steps to Reproduce:
1. Create a geo-rep relationship between a master volume (DIST_REP) and a slave volume.
2. Add bricks to the master volume.
3. Check for a new working directory for each added brick under /var/run/gluster/<volname>/ssh%3A%2F%2Froot%40<slave-host>%3Agluster%3A%2F%2F127.0.0.1%3A<slave-vol>/ .

Actual results:
After add-brick, no working directory is created for the added bricks.

Expected results:
Geo-rep should create a working directory for the added bricks.

Additional info:
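For reference, a minimal command sketch of the reproduction steps above; mastervol, slavehost, slavevol and the brick paths are placeholder names, not taken from this report:

    # 1. Create and start a geo-rep session between master and slave
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start

    # 2. Add a brick pair to the distributed-replicate master volume
    gluster volume add-brick mastervol node3:/rhs/brick3/mastervol node4:/rhs/brick3/mastervol

    # 3. Look for a working directory for the new bricks (slave URL is percent-encoded)
    ls /var/run/gluster/mastervol/ssh%3A%2F%2Froot%40slavehost%3Agluster%3A%2F%2F127.0.0.1%3Aslavevol/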
After a new node is added (with the new brick residing on that node) and add-brick is done, the following commands need to be executed again:
1. gluster system:: execute gsec_create
2. geo-rep create push-pem force
3. geo-rep start force
This ensures that the working_dir is created on the new node and that the new brick becomes part of geo-replication. However, if the newly added brick resides on one of the existing nodes, the current geo-rep session only needs to be stopped and started to bring that brick into geo-replication. Restarting the existing session (stop and start) takes care of the working_dir; there is no need to delete the session and create a new one. The full CLI invocations are sketched below.
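A sketch of the full commands for both cases, again using the placeholder names mastervol, slavehost and slavevol:

    # Case 1: the added brick is on a newly added node
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem force
    gluster volume geo-replication mastervol slavehost::slavevol start force

    # Case 2: the added brick is on an existing node -- restart the current session
    gluster volume geo-replication mastervol slavehost::slavevol stop
    gluster volume geo-replication mastervol slavehost::slavevol start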
https://code.engineering.redhat.com/gerrit/#/c/10880/
Verified on glusterfs-3.4.0.15rhs.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2013-1262.html