Bug 1706832 - [upgrade playbook] Document default timeout value set for OCS upgrade playbook volume heal check
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: doc-Container_Native_Storage_with_OpenShift
Version: ocs-3.11
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Amrita
QA Contact: Ashmitha Ambastha
URL:
Whiteboard:
Depends On: 1703695
Blocks:
 
Reported: 2019-05-06 11:21 UTC by Ashmitha Ambastha
Modified: 2020-03-18 15:15 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-03-18 15:15:05 UTC
Embargoed:



Description Ashmitha Ambastha 2019-05-06 11:21:33 UTC
Description of problem:
When the upgrade playbook upgrades the gluster pods one at a time, the volume heal must complete before the next gluster pod is upgraded. The default number of retries the playbook makes while waiting for the volume heal to complete is 1200:

openshift_storage_glusterfs_health_timeout: 1200

This value can be tuned according to the number of volumes on the cluster, so that the playbook completes in a single run instead of timing out while waiting for volume heals to finish.
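As a sketch, the variable can be set in the inventory file like this (the [OSEv3:vars] group name is the usual openshift-ansible convention; note that an INI-format inventory uses key=value, while the key: value form shown in the steps below applies to YAML-style variable files):

```ini
[OSEv3:vars]
# Retry count for the volume-heal check between gluster pod upgrades
# (default: 1200). Raise this on clusters with many volumes.
openshift_storage_glusterfs_health_timeout=1200
```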

Version-Release number of selected component (if applicable): OCS 3.11

How reproducible: Always

Steps to Reproduce:
1. Edit the inventory file and add openshift_storage_glusterfs_health_timeout: 1200 
2. Run the upgrade playbook:
# ansible-playbook -i <inv_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/upgrade.yml
3. The playbook should pass without timing out while waiting for volume heals to complete.
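The retry behavior described above can be sketched in bash. This is an illustrative stand-in, not the playbook's actual code: check_heal_complete is a hypothetical placeholder for the real query (inspecting pending heal entries, e.g. via gluster volume heal <vol> info inside the gluster pod), and here it simply simulates a heal that finishes after 3 polls.

```shell
#!/bin/bash
# Retries allowed, as set via openshift_storage_glusterfs_health_timeout.
health_timeout=1200

check_heal_complete() {
    # Placeholder: the real check counts pending heal entries per volume.
    # Simulate a heal that is done once 3 polls have elapsed.
    [ "$1" -ge 3 ]
}

attempt=0
while ! check_heal_complete "$attempt"; do
    attempt=$((attempt + 1))
    if [ "$attempt" -ge "$health_timeout" ]; then
        echo "timed out waiting for volume heal"
        exit 1
    fi
done
echo "heal complete after $attempt polls"
```

With a larger volume count the real heal takes more polls, which is why raising the retry count lets the playbook succeed in one run.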

