Description of problem:
==========================
Had about 40 volumes as below on a 3 node setup:
  10 vols of 2x2 type
  10 vols of 2x(4+2) type
  10 1x3 volumes
  10 1x2 volumes
  1 1x2 and 1 1x3 volume ===> created before brick multiplexing was enabled

Did a replace-brick for one of the 2x2 volumes as below:

[root@dhcp35-192 glusterfs]# gluster v replace-brick distrep_1 10.70.35.192:/rhs/brick3/distrep_1 10.70.35.192:/rhs/brick2/distrep_1_replaced commit force
volume replace-brick: success: replace-brick commit force operation successful

Note: I didn't kill the brick process before replacing.

Version-Release number of selected component (if applicable):
=========
[root@dhcp35-192 ~]# rpm -qa|grep gluster
glusterfs-libs-3.10.0-1.el7.x86_64
glusterfs-api-3.10.0-1.el7.x86_64
glusterfs-debuginfo-3.10.0-1.el7.x86_64
glusterfs-3.10.0-1.el7.x86_64
glusterfs-fuse-3.10.0-1.el7.x86_64
glusterfs-cli-3.10.0-1.el7.x86_64
glusterfs-rdma-3.10.0-1.el7.x86_64
glusterfs-client-xlators-3.10.0-1.el7.x86_64
glusterfs-server-3.10.0-1.el7.x86_64
[root@dhcp35-192 ~]#
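For reference, a minimal sketch of the same scenario is below. The hostnames and brick paths are placeholders rather than the exact ones from the setup above; the only option assumed is cluster.brick-multiplex, the standard switch for enabling brick multiplexing cluster-wide.

# enable brick multiplexing for all volumes
gluster volume set all cluster.brick-multiplex on

# create and start a 2x2 distributed-replicate volume (placeholder hosts/paths)
gluster volume create distrep_1 replica 2 \
    host1:/rhs/brick1/distrep_1 host2:/rhs/brick1/distrep_1 \
    host1:/rhs/brick3/distrep_1 host2:/rhs/brick3/distrep_1
gluster volume start distrep_1

# replace one brick without killing the brick process first
gluster volume replace-brick distrep_1 host1:/rhs/brick3/distrep_1 \
    host1:/rhs/brick2/distrep_1_replaced commit force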
Where's the core? No external contributor can do anything about this bug, or even verify that it's multiplexing-related, if essential information is kept private within Red Hat.
Nag - the core file has to be made public along with the other log files.
(In reply to Atin Mukherjee from comment #3)
> Nag - the core file has to be made public along with the other log files.

With the backtrace output as well.
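For whoever attaches the requested information: a backtrace can typically be extracted from the core with gdb, as sketched below. The core file path is a placeholder, and this assumes the core was dumped by the brick process (/usr/sbin/glusterfsd) with the matching glusterfs-debuginfo package installed (it appears in the rpm list above).

# dump backtraces of all threads from the core to a file
gdb /usr/sbin/glusterfsd /path/to/core.<pid> -batch \
    -ex 'thread apply all bt full' > backtrace.txt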
This bug is reported against a version of Gluster that is no longer maintained (or has been EOL'd). See https://www.gluster.org/release-schedule/ for the versions currently maintained. As a result, this bug is being closed. If the bug persists on a maintained version of Gluster or against the mainline Gluster repository, please request that it be reopened and mark the Version field appropriately.