Description of problem:
On a three-node cluster with brick multiplexing enabled, power off one of the nodes and perform a volume set operation ("performance.client-io-threads on") on 5 volumes. After setting the option, power the node back on. Once the node is up, bricks are offline for most of the volumes on that node.

Version-Release number of selected component (if applicable):
glusterfs-6.0-18.el7rhgs.x86_64
glusterfs-6.0-17.el7rhgs.x86_64

How reproducible:
2/2

Steps to Reproduce:
1. On a three-node cluster with brick multiplexing enabled, create and start 25 volumes, and mount 5 of them.
2. Power off one node, say N1, in the cluster.
3. Set a volume option on 5 volumes.
4. Power on the node (N1) that was turned off.

Actual results:
After powering on, the volume option is updated on the volumes on node N1, but bricks are offline for most of the volumes on N1.

Expected results:
After powering on, bricks should be online on N1 and the volume option should be updated for the volumes.

Additional info:
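For reference, a minimal sketch of the reproduction steps above, assuming placeholder hostnames n1/n2/n3, placeholder volume names vol1..vol25, and brick path /bricks/vol<N> (none of these are taken from the original report):

# Enable brick multiplexing cluster-wide
[root@n1 ~]# gluster volume set all cluster.brick-multiplex on

# Create and start the volumes (repeat for vol2 .. vol25)
[root@n1 ~]# gluster volume create vol1 replica 3 n1:/bricks/vol1 n2:/bricks/vol1 n3:/bricks/vol1
[root@n1 ~]# gluster volume start vol1

# Mount 5 of the volumes on a client (repeat for vol2 .. vol5)
[root@client ~]# mount -t glusterfs n1:/vol1 /mnt/vol1

# Power off N1 from the hypervisor, then set the option from a surviving node (repeat for 5 volumes)
[root@n2 ~]# gluster volume set vol1 performance.client-io-threads on

# Power N1 back on and check whether its bricks come up
[root@n1 ~]# gluster volume status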
After these steps I don't see any bricks going offline.

Steps:
1. Have a three-node cluster, create 25 replicate (1x3) volumes, and start them.
2. Mount any 5 of the volumes.
3. Power off one of the VMs from the hypervisor.
4. Perform a volume set operation (performance.readdir-ahead on) on the 5 mounted volumes or on other volumes.
5. The volume option set will be successful.
6. Power on the node that was powered off and verify brick status (see the check sketch below).

--------------------
Additional info:

[root ~]# gluster v get all all
Option                                  Value
------                                  -----
cluster.server-quorum-ratio             51
cluster.enable-shared-storage           disable
cluster.op-version                      70000
cluster.max-op-version                  70000
cluster.brick-multiplex                 on
cluster.max-bricks-per-process          250
glusterd.vol_count_per_thread           100
cluster.daemon-log-level                INFO
-----------
glusterfs-6.0-45.el8rhgs.x86_64
glusterfs-fuse-6.0-45.el8rhgs.x86_64
glusterfs-api-6.0-45.el8rhgs.x86_64
glusterfs-selinux-1.0-1.el8rhgs.noarch
glusterfs-client-xlators-6.0-45.el8rhgs.x86_64
glusterfs-server-6.0-45.el8rhgs.x86_64
glusterfs-cli-6.0-45.el8rhgs.x86_64
glusterfs-libs-6.0-45.el8rhgs.x86_64

Hence marking this bug as verified.
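A sketch of checks that can confirm this verified state, assuming a placeholder volume name vol1; both are standard gluster CLI commands:

# Confirm the option propagated, including to the node that was powered off
[root ~]# gluster volume get vol1 performance.readdir-ahead

# Confirm no bricks are offline (the Online column should show Y for every brick)
[root ~]# gluster volume status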
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5603