Bug 1663156 - Bricks offline upon rebooting gluster pods after turning brick mux off
Summary: Bricks offline upon rebooting gluster pods after turning brick mux off
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhgs-server-container
Version: ocs-3.11
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Raghavendra Talur
QA Contact: Prasanth
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-01-03 10:27 UTC by vinutha
Modified: 2023-09-14 04:44 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-11-11 20:20:03 UTC
Embargoed:



Description vinutha 2019-01-03 10:27:54 UTC
Description of problem:
+++++++++++ This bug is created as a clone of https://bugzilla.redhat.com/show_bug.cgi?id=1658984 ++++++++++++++++++

A brick fails to come back online after a gluster pod reboot when GLUSTER_BRICKMULTIPLEX is turned off.

Version-Release number of selected component (if applicable):
# rpm -qa| grep openshift
openshift-ansible-3.11.51-2.git.0.51c90a3.el7.noarch
atomic-openshift-excluder-3.11.51-1.git.0.1560686.el7.noarch
atomic-openshift-hyperkube-3.11.51-1.git.0.1560686.el7.x86_64
atomic-openshift-node-3.11.51-1.git.0.1560686.el7.x86_64
openshift-ansible-docs-3.11.51-2.git.0.51c90a3.el7.noarch
openshift-ansible-roles-3.11.51-2.git.0.51c90a3.el7.noarch
atomic-openshift-clients-3.11.51-1.git.0.1560686.el7.x86_64
atomic-openshift-3.11.51-1.git.0.1560686.el7.x86_64
openshift-ansible-playbooks-3.11.51-2.git.0.51c90a3.el7.noarch
atomic-openshift-docker-excluder-3.11.51-1.git.0.1560686.el7.noarch

# oc rsh glusterfs-storage-525dl rpm -qa| grep gluster 
glusterfs-server-3.12.2-32.el7rhgs.x86_64
gluster-block-0.2.1-30.el7rhgs.x86_64
glusterfs-api-3.12.2-32.el7rhgs.x86_64
glusterfs-cli-3.12.2-32.el7rhgs.x86_64
python2-gluster-3.12.2-32.el7rhgs.x86_64
glusterfs-fuse-3.12.2-32.el7rhgs.x86_64
glusterfs-geo-replication-3.12.2-32.el7rhgs.x86_64
glusterfs-libs-3.12.2-32.el7rhgs.x86_64
glusterfs-3.12.2-32.el7rhgs.x86_64
glusterfs-client-xlators-3.12.2-32.el7rhgs.x86_64

# oc rsh heketi-storage-1-jct7p rpm -qa| grep heketi
heketi-client-8.0.0-7.el7rhgs.x86_64
heketi-8.0.0-7.el7rhgs.x86_64

How reproducible:
2/2 (reproduced on both attempts)

Steps to Reproduce:

1. On a 4-node OCS setup, create 1 file PVC and 1 block PVC with the default storage class. gluster volume info displays 'cluster.brick-multiplex: on'
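
(For reference, one way to verify the setting from a gluster pod; the pod name here is the one from the version output above:)

# oc rsh glusterfs-storage-525dl gluster volume info | grep brick-multiplex
cluster.brick-multiplex: on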

2. Edit the gluster daemonset to add the parameter below:
- name: GLUSTER_BRICKMULTIPLEX
  value: "No"
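
(Equivalently, assuming the daemonset is named glusterfs-storage and lives in the glusterfs namespace, the variable can be added non-interactively:)

# oc set env daemonset/glusterfs-storage -n glusterfs GLUSTER_BRICKMULTIPLEX=No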

3. Reboot the gluster pod so that gluster volume info reflects 'cluster.brick-multiplex: off'
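
(The pods are managed by the daemonset, so "rebooting" a gluster pod amounts to deleting it and letting the daemonset schedule a replacement; a sketch, with the pod name from above:)

# oc delete pod glusterfs-storage-525dl
# oc get pods -w        <- wait until the replacement pod is Running and ready
# oc rsh <replacement-pod> gluster volume info | grep brick-multiplex
cluster.brick-multiplex: off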

4. Observed that one of the bricks fails to come back online after the reboot
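
(Brick state can be checked with gluster volume status from any gluster pod; an offline brick shows N in the Online column and N/A for its port:)

# oc rsh <glusterfs-pod> gluster volume status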


Actual results:
Bricks remain offline after the gluster pod reboot with brick multiplexing (bmux) set to off

Expected results:
All bricks should come back online after the gluster pod reboot with brick multiplexing off

Additional info:

Comment 4 Niels de Vos 2019-01-03 15:30:55 UTC
Could you share the logs of the containers? Mainly the /var/log/glusterfs (with the glusterd.log and the container/ subdir).
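
(For anyone collecting these: one way, assuming tar is available in the container, is to copy the log directory out of the pod:)

# oc cp glusterfs-storage-525dl:/var/log/glusterfs ./glusterfs-logs-525dl
# oc logs glusterfs-storage-525dl > glusterfs-pod-525dl.log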

Comment 12 Red Hat Bugzilla 2023-09-14 04:44:25 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

