Bug 1635586 - Document about GLUSTER_BLOCKD_STATUS_PROBE_ENABLE.
Summary: Document about GLUSTER_BLOCKD_STATUS_PROBE_ENABLE.
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: doc-Container_Native_Storage_with_OpenShift
Version: cns-3.10
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Bhavana
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Duplicates: 1635692 (view as bug list)
Depends On:
Blocks:
 
Reported: 2018-10-03 10:37 UTC by Humble Chirammal
Modified: 2023-05-08 07:34 UTC
CC: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-18 08:01:50 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1597726 0 unspecified CLOSED On reboot of the CNS node, ISCSI login failed to its own target IP 2021-02-22 00:41:40 UTC

Internal Links: 1597726

Description Humble Chirammal 2018-10-03 10:37:54 UTC
Description of problem:

As part of fixing https://bugzilla.redhat.com/show_bug.cgi?id=1597726, we have introduced a new variable called GLUSTER_BLOCKD_STATUS_PROBE_ENABLE in the gluster template. This variable controls the gluster-block service status/health check as part of the gluster daemonset readiness probe. By default, this environment variable is set to false, so there are no changes or regressions for existing customers. However, if a customer/user wants to enable this check, they can do so via the above-mentioned environment variable. We have to mention this in our upgrade step documentation.
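
For illustration only, a minimal sketch of how the variable might look inside the glusterfs daemonset container spec once the template is processed (the daemonset and container names below are assumptions and may differ per deployment):

# Hypothetical excerpt from the glusterfs daemonset; object names are assumptions.
spec:
  template:
    spec:
      containers:
      - name: glusterfs
        env:
        - name: GLUSTER_BLOCKD_STATUS_PROBE_ENABLE
          value: "1"   # "1" turns the gluster-blockd check on; leaving it unset keeps existing behaviour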

Version-Release number of selected component (if applicable):

OCS 3.11

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 Raghavendra Talur 2018-10-03 13:46:12 UTC
*** Bug 1635692 has been marked as a duplicate of this bug. ***

Comment 3 Raghavendra Talur 2018-10-05 05:48:18 UTC
Documentation content:

In the deployment guide, under 6.4. Upgrading the Red Hat Gluster Storage Pods, add a step between the current step 5 and step 6.

Step number. With this release, the glusterfs pod's readiness/liveness probes are capable of checking gluster-blockd status along with that of the glusterd process. If you use OCS for gluster-block volumes, it is recommended that the probe be turned on. To turn it on, use the "oc edit" command from the previous step and ensure that the following sections exist and match (a verification sketch follows after item c):

a. The command sections of the liveness and readiness probes, respectively, should be:

command:
- "/bin/bash"
- "-c"
- "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh liveness; else systemctl status glusterd.service; fi"

command:
- "/bin/bash"
- "-c"
- "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh readiness; else systemctl status glusterd.service; fi"

b. Under the list of "env", there should be:
- name: GLUSTER_BLOCKD_STATUS_PROBE_ENABLE
  value: "${GLUSTER_BLOCKD_STATUS_PROBE_ENABLE}"

c. Under the list of parameters, there should be:
- name: GLUSTER_BLOCKD_STATUS_PROBE_ENABLE
  displayName: Enable readiness/liveness probe for gluster-blockd
  description: Setting the value to "1" enables the readiness/liveness probe for gluster-blockd.
  value: "1"
  required: false
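
As a rough verification sketch (not part of the documented steps; the daemonset name glusterfs-storage below is an assumption and may differ per deployment), the setting can be confirmed after the "oc edit":

# Hypothetical verification commands; the daemonset name is an assumption.
# List the environment variables set on the glusterfs daemonset:
oc set env daemonset/glusterfs-storage --list | grep GLUSTER_BLOCKD_STATUS_PROBE_ENABLE

# Inspect the probe commands that were edited in:
oc get daemonset/glusterfs-storage -o yaml | grep -A 4 'status-probe.sh'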

