Bug 1679692 - [Doc] Document workflow for scenario when registry and glusterfs share the same nodes
Summary: [Doc] Document workflow for scenario when registry and glusterfs share the same nodes
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: doc-Container_Native_Storage_with_OpenShift
Version: ocs-3.11
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: OCS 3.11.z Batch Update 4
Assignee: Anjana KD
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On: 1707789
Blocks: 1709460
 
Reported: 2019-02-21 16:45 UTC by khartsoe@redhat.com
Modified: 2019-11-04 15:44 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-11-04 15:44:42 UTC
Embargoed:



Description khartsoe@redhat.com 2019-02-21 16:45:54 UTC
Description of problem:

The workflow for the scenario where the registry and glusterfs share the same nodes is not documented, which makes this a "special install":

[Per Rav Davroni - Consulting (rdavroni):
We need to add a working inventory file to support each mode of installation. Based on my research, there are two ways of doing this:

1. glusterfs and glusterfs_registry share the same nodes and the same volumes (the inventory file changes, and a separate storage class is created for each). This must be tested and verified.

2. glusterfs and glusterfs_registry share the same nodes but use two separate volumes attached to each node; they do not share the same volumes.

Basically, we need a working Ansible playbook for each case above.]
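
For illustration only, scenario 2 might be expressed as an openshift-ansible inventory along the following lines. This is a hypothetical, untested sketch of what is being requested, not a verified configuration (comment 3 below disputes that co-locating the two clusters is possible at all); hostnames and device paths are made-up placeholders.

```ini
# HYPOTHETICAL sketch of scenario 2: glusterfs and glusterfs_registry on the
# same nodes, each cluster backed by its own dedicated block device.
# Hostnames and device paths below are placeholders, not tested values.
[OSEv3:children]
masters
nodes
glusterfs
glusterfs_registry

[OSEv3:vars]
openshift_storage_glusterfs_namespace=app-storage
openshift_storage_glusterfs_storageclass=true
openshift_storage_glusterfs_registry_namespace=infra-storage
openshift_hosted_registry_storage_kind=glusterfs

# Application storage cluster uses /dev/sdb on each node
[glusterfs]
node1.example.com glusterfs_devices='[ "/dev/sdb" ]'
node2.example.com glusterfs_devices='[ "/dev/sdb" ]'
node3.example.com glusterfs_devices='[ "/dev/sdb" ]'

# Same nodes, but a second device (/dev/sdc) dedicated to the registry cluster
[glusterfs_registry]
node1.example.com glusterfs_devices='[ "/dev/sdc" ]'
node2.example.com glusterfs_devices='[ "/dev/sdc" ]'
node3.example.com glusterfs_devices='[ "/dev/sdc" ]'
```

Scenario 1 would differ in that both groups would list the same glusterfs_devices on each node; whether either layout can actually deploy is exactly the question this bug raises.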

Version-Release number of selected component (if applicable):
3.11.x

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 3 Jose A. Rivera 2019-02-21 19:28:25 UTC
If I'm understanding the request correctly, neither of these two cases is a supported scenario, and they cannot be supported. Only one Gluster pod may run on any given node, so it is literally impossible for the glusterfs and glusterfs_registry clusters to share nodes. Unless I am not interpreting this correctly, please close this BZ.

Comment 4 Michael Adam 2019-02-26 16:43:37 UTC
(In reply to Jose A. Rivera from comment #3)
> If I'm understanding the request correctly, neither of these two cases is a
> supported scenario, and they cannot be supported. Only one Gluster pod may
> run on any given node, so it is literally impossible for the glusterfs and
> glusterfs_registry clusters to share nodes. Unless I am not interpreting
> this correctly, please close this BZ.

I agree.
If the description means that the intention is to run two OCS/Gluster clusters ("glusterfs" and "glusterfs_registry") in parallel on the same set of nodes, then this is technically not possible (except perhaps by using different host network interfaces, though I am not sure about that), and it is certainly not supported...

Comment 8 RamaKasturi 2019-04-05 19:28:24 UTC
Hello Anjana,

    I see that this needs to be tested explicitly and cannot be taken in during 3.11.3. However, it would be good to have a draft document with the steps.

Thanks
kasturi

Comment 21 Anjana KD 2019-10-16 14:11:54 UTC
Hello Kasturi and Talur

Attaching a draft content for review

https://docs.google.com/document/d/1sVDVUqV9axl3tVpg8yFAPeZwE2Dx-nik3GbmVHZtLTY/edit#

