Bug 1763030 - Set volume option when one of the nodes is powered off; after powering on the node, brick processes are offline (with brick-mux enabled)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: rhgs-3.5
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.z Batch Update 3
Assignee: Srijan Sivakumar
QA Contact: milind
URL:
Whiteboard:
Depends On: 1773856 1808964 1808966
Blocks:
 
Reported: 2019-10-18 05:40 UTC by Bala Konda Reddy M
Modified: 2020-12-17 04:50 UTC
CC: 8 users

Fixed In Version: glusterfs-6.0-38
Doc Type: No Doc Update
Doc Text:
Clone Of:
Cloned To: 1773856
Environment:
Last Closed: 2020-12-17 04:50:17 UTC
Embargoed:


Attachments


Links:
Red Hat Product Errata RHBA-2020:5603 (last updated 2020-12-17 04:50:33 UTC)

Description Bala Konda Reddy M 2019-10-18 05:40:29 UTC
Description of problem:
On a three-node cluster with brick-mux enabled, power off one of the nodes in the cluster and perform a volume set operation on 5 volumes ("performance.client-io-threads on"). After setting the option, power the node back on.
After powering on, bricks are offline on most of the volumes.


Version-Release number of selected component (if applicable):
glusterfs-6.0-18.el7rhgs.x86_64
glusterfs-6.0-17.el7rhgs.x86_64

How reproducible:
2/2

Steps to Reproduce:
1. On a three-node cluster with brick-mux enabled, create 25 volumes, start them, and mount 5 of them (a command sketch follows these steps)
2. Power off one node say N1 in the cluster
3. Set a volume option on 5 volumes.
4. Power on the node (N1) that was turned off
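
A minimal command sketch of the reproduction flow, assuming hypothetical hostnames n1/n2/n3, brick paths under /bricks, replica-3 layout, and volume names vol01..vol25; only the gluster options themselves are taken from this report:

# prerequisite from the report: brick multiplexing enabled cluster-wide
gluster volume set all cluster.brick-multiplex on

# create and start 25 volumes (hypothetical names and brick paths)
for i in $(seq -w 1 25); do
    gluster volume create vol$i replica 3 \
        n1:/bricks/brick$i n2:/bricks/brick$i n3:/bricks/brick$i force
    gluster volume start vol$i
done

# mount 5 of the volumes (hypothetical mount points)
for i in 01 02 03 04 05; do
    mkdir -p /mnt/vol$i && mount -t glusterfs n2:/vol$i /mnt/vol$i
done

# power off N1, then set an option on 5 volumes from a surviving node
for i in 01 02 03 04 05; do
    gluster volume set vol$i performance.client-io-threads on
done

# power N1 back on and check whether its bricks come up
gluster volume status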

Actual results:
After powering on, the volume option is updated on the volumes on node N1, but bricks are offline for most of the volumes on N1.

Expected results:
After powering on, bricks should be online on N1 and the volume option should be updated for the volumes.
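
For reference, a quick way to check both expectations on one volume after N1 comes back (vol01 is a hypothetical name; both commands are standard gluster CLI):

# every brick of the volume should report Online = Y
gluster volume status vol01

# the option set while N1 was down should have been synced to N1
gluster volume get vol01 performance.client-io-threads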


Additional info:

Comment 19 milind 2020-09-29 12:17:16 UTC
After these steps, I don't see any brick going offline.

Steps:
1. Have a three-node cluster, create 25 replicate (1x3) volumes, and start them
2. Mount any 5 of the volumes.
3. Power off one of the VMs from the hypervisor.
4. Perform a volume set operation (performance.readdir-ahead on) on the 5 mounted volumes or on other volumes (sketched after these steps).
5. The volume option set is successful.
6. Power on the node that was powered off.
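
A sketch of steps 4-6 as commands, assuming hypothetical volume names vol01..vol05 for the mounted volumes:

# steps 4-5: set the option on the 5 mounted volumes while one node is down
for i in 01 02 03 04 05; do
    gluster volume set vol$i performance.readdir-ahead on
done

# step 6: after powering the node back on, every brick should show Online = Y
gluster volume status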
--------------------
Additional info:
[root ~]# gluster v get all all
Option                                  Value                                   
------                                  -----                                   
cluster.server-quorum-ratio             51                                      
cluster.enable-shared-storage           disable                                 
cluster.op-version                      70000                                   
cluster.max-op-version                  70000                                   
cluster.brick-multiplex                 on                                      
cluster.max-bricks-per-process          250                                     
glusterd.vol_count_per_thread           100                                     
cluster.daemon-log-level                INFO                                    
-----------
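For context, the cluster-wide values above are read with "gluster volume get all all" (as shown) and set with "gluster volume set all <option> <value>"; for example:

# enable brick multiplexing for the whole cluster, as in the output above
gluster volume set all cluster.brick-multiplex on
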
glusterfs-6.0-45.el8rhgs.x86_64
glusterfs-fuse-6.0-45.el8rhgs.x86_64
glusterfs-api-6.0-45.el8rhgs.x86_64
glusterfs-selinux-1.0-1.el8rhgs.noarch
glusterfs-client-xlators-6.0-45.el8rhgs.x86_64
glusterfs-server-6.0-45.el8rhgs.x86_64
glusterfs-cli-6.0-45.el8rhgs.x86_64
glusterfs-libs-6.0-45.el8rhgs.x86_64

Hence, marking this bug as verified.

Comment 21 errata-xmlrpc 2020-12-17 04:50:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5603

