Description of problem:
The geo-rep config log-level option accepts invalid values and makes geo-rep status defunct.

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
[root@redlemon ~]# gluster v geo master root.43.25::slave config log-level Afafaef
geo-replication config updated successfully
[root@redlemon ~]# gluster v geo master root.43.25::slave status
NODE                       MASTER    SLAVE                HEALTH     UPTIME
------------------------------------------------------------------------------------
redlemon.blr.redhat.com    master    root.43.25::slave    defunct    N/A
redmoon.blr.redhat.com     master    root.43.25::slave    defunct    N/A
redwood.blr.redhat.com     master    root.43.25::slave    defunct    N/A
redcloud.blr.redhat.com    master    root.43.25::slave    defunct    N/A
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

The problem is that the session cannot be recovered just by resetting the option; it has to be stopped and started again.

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
[root@redlemon ~]# gluster v geo master root.43.25::slave config \!log-level
geo-replication config updated successfully
[root@redlemon ~]# gluster v geo master root.43.25::slave status
NODE                       MASTER    SLAVE                HEALTH     UPTIME
------------------------------------------------------------------------------------
redlemon.blr.redhat.com    master    root.43.25::slave    defunct    N/A
redwood.blr.redhat.com     master    root.43.25::slave    defunct    N/A
redmoon.blr.redhat.com     master    root.43.25::slave    defunct    N/A
redcloud.blr.redhat.com    master    root.43.25::slave    defunct    N/A
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Version-Release number of selected component (if applicable):
glusterfs-3.4.0.22rhs-2.el6rhs.x86_64

How reproducible:
Happens every time.

Steps to Reproduce:
1. Create and start a geo-rep session between a master and a slave.
2. Set the config log-level to some random value.
3. Check the geo-rep status.

Actual results:
The geo-rep status becomes defunct.

Expected results:
The command should error out with a proper message when given an invalid value.

Additional info:
upstream fix @ http://review.gluster.org/5989
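The expected behavior is to reject an unknown level at config time rather than let the worker go defunct. A minimal sketch of such validation, assuming syslog-style level names (the function name and level set here are illustrative, not the actual code from the upstream patch):

```python
# Hypothetical validation of a geo-rep log-level value: accept only a
# known level name (case-insensitively) and raise on anything else,
# instead of silently storing an arbitrary string.
VALID_LOG_LEVELS = {"CRITICAL", "ERROR", "WARNING", "INFO", "DEBUG"}

def validate_log_level(value):
    """Return the normalized level name, or raise ValueError for bad input."""
    level = value.strip().upper()
    if level not in VALID_LOG_LEVELS:
        raise ValueError(
            "invalid log-level %r; expected one of: %s"
            % (value, ", ".join(sorted(VALID_LOG_LEVELS)))
        )
    return level
```

With a check like this in the config path, `config log-level Afafaef` would fail immediately instead of being accepted and breaking the session.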
https://code.engineering.redhat.com/gerrit/#/c/13677
Please provide the fixed-in version.
Verified on glusterfs-3.4.0.34rhs.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1769.html