Bug 1653673 - Frontend IPs should not be used for peer probing
Summary: Frontend IPs should not be used for peer probing
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhiv-1.5
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Sahina Bose
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-11-27 11:47 UTC by SATHEESARAN
Modified: 2020-05-15 06:30 UTC (History)
3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-24 11:56:18 UTC
Embargoed:



Description SATHEESARAN 2018-11-27 11:47:19 UTC
Description of problem:
-----------------------
RHHI-V recommends that customers use a frontend network for ovirtmgmt and a backend network for gluster traffic.

When a host is added to a virt cluster with an FQDN/IP, the ovirtmgmt bridge is created on the corresponding network interface.

In the RHHI-V case, when the host is added to the virt+gluster cluster, ovirtmgmt is created on the corresponding NIC, and the peer probe is also initiated on that same network address.

There should be a proper solution to address this issue.

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
RHHI-V 1.5
RHV 4.2.7

How reproducible:
------------------
Always

Steps to Reproduce:
--------------------
1. Create a virt+gluster cluster (hc)
2. Add a host with 2 IPs to the cluster using IP1

Actual results:
---------------
The ovirtmgmt bridge is created on the network interface corresponding to IP1, and the peer probe is also initiated with the same IP.
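One way to confirm which address the peer probe actually used: glusterd records each peer's address in `/var/lib/glusterd/peers/<uuid>` as a `hostname1=` entry. The snippet below simulates this with a fake peer file (the UUID and IP are made-up examples), since a real check requires a gluster host:

```shell
# Simulate the glusterd peer store with a fake peer file; on a real node
# you would grep /var/lib/glusterd/peers/* directly.
peers_dir=$(mktemp -d)
printf 'uuid=2b6b1a1e-0000-0000-0000-000000000000\nstate=3\nhostname1=10.70.37.28\n' \
    > "$peers_dir/peer1"

# The hostname1= line shows which address the peer was probed with.
probed=$(grep -h '^hostname1=' "$peers_dir"/*)
echo "$probed"
```

If `hostname1=` shows the frontend (ovirtmgmt) address, the probe went over the wrong network.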

Expected results:
-----------------
The ovirtmgmt bridge should be created with IP1.
The peer probe should happen with IP2.

Only when a single IP is up can ovirtmgmt and the gluster network be one and the same.


Additional info:

Comment 1 Sahina Bose 2018-11-29 05:51:25 UTC
(In reply to SATHEESARAN from comment #0)
> Description of problem:
> -----------------------
> RHHI-V recommends that customers use a frontend network for ovirtmgmt
> and a backend network for gluster traffic.
> 
> When a host is added to a virt cluster with an FQDN/IP, the ovirtmgmt
> bridge is created on the corresponding network interface.
> 
> In the RHHI-V case, when the host is added to the virt+gluster cluster,
> ovirtmgmt is created on the corresponding NIC, and the peer probe is also
> initiated on that same network address.
> 
> There should be a proper solution to address this issue.
> 
> Version-Release number of selected component (if applicable):
> --------------------------------------------------------------
> RHHI-V 1.5
> RHV 4.2.7
> 
> How reproducible:
> ------------------
> Always
> 
> Steps to Reproduce:
> --------------------
> 1. Create a virt+gluster cluster (hc)
> 2. Add a host with 2 IPs to the cluster using IP1
> 
> Actual results:
> ---------------
> The ovirtmgmt bridge is created on the network interface corresponding to
> IP1, and the peer probe is also initiated with the same IP.
> 
> Expected results:
> -----------------
> The ovirtmgmt bridge should be created with IP1.
> The peer probe should happen with IP2.
> 
> Only when a single IP is up can ovirtmgmt and the gluster network be one
> and the same.
> 
> 
> Additional info:

When the additional NIC (i.e. IP2) is associated with the gluster network, the gluster cluster is also peer probed with IP2.
For data separation, the bricks need to be associated with IP2; the peer probe is not the issue.
Did you try this out?
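The point above, that data separation comes from the bricks rather than the probe, could be illustrated as follows (hypothetical backend hostnames and brick paths; the `gluster volume create` command is only echoed, since actually running it requires a live gluster cluster):

```shell
# Build a replica-3 volume-create command whose bricks use the backend
# (gluster network) FQDNs, so data traffic stays off ovirtmgmt.
# Hostnames and paths below are illustrative examples.
BACKEND_HOSTS="gluster1-backend.example.com gluster2-backend.example.com gluster3-backend.example.com"
BRICK_DIR=/gluster_bricks/data/data

BRICKS=""
for h in $BACKEND_HOSTS; do
    BRICKS="$BRICKS $h:$BRICK_DIR"
done

# Print the command instead of running it; a real run needs gluster installed.
echo gluster volume create data replica 3$BRICKS
```

With bricks addressed by the backend FQDNs, I/O between clients and bricks rides the gluster network regardless of which address the original probe used.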

Comment 2 Sahina Bose 2018-12-18 06:19:21 UTC
Sas, could you respond to the question?

Comment 3 Sahina Bose 2019-02-18 05:37:45 UTC
Have we added this to pre-checks? If so, can you update the tracker with this bug?
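A pre-check of this kind might look like the following sketch (hypothetical, not actual engine code; the two addresses are example values standing in for the resolved "Add Host" FQDN and the gluster-network FQDN):

```shell
# Hypothetical pre-check: warn when the address used to add the host to the
# engine is the same one the gluster peer would be probed on.
mgmt_ip="10.70.37.28"       # example: resolved from the FQDN used in "Add Host"
gluster_ip="192.168.10.28"  # example: resolved from the gluster-network FQDN

if [ "$mgmt_ip" = "$gluster_ip" ]; then
    echo "WARNING: gluster peer traffic would ride on ovirtmgmt ($mgmt_ip)"
else
    echo "OK: management ($mgmt_ip) and gluster ($gluster_ip) networks differ"
fi
```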

Comment 4 SATHEESARAN 2019-02-20 11:08:14 UTC
(In reply to Sahina Bose from comment #1)
> When the additional NIC (i.e. IP2) is associated with the gluster network,
> the gluster cluster is also peer probed with IP2.
> For data separation, the bricks need to be associated with IP2; the peer
> probe is not the issue.
> Did you try this out?

The problem here is that when someone adds the host with the gluster IP, the ovirtmgmt
bridge is created on that network interface. ovirtmgmt is used by VMs.

Comment 5 Sahina Bose 2019-05-07 14:16:52 UTC
(In reply to SATHEESARAN from comment #4)
> (In reply to Sahina Bose from comment #1)
> > When the additional NIC (i.e. IP2) is associated with the gluster network,
> > the gluster cluster is also peer probed with IP2.
> > For data separation, the bricks need to be associated with IP2; the peer
> > probe is not the issue.
> > Did you try this out?
> 
> The problem here is that when someone adds the host with the gluster IP,
> the ovirtmgmt bridge is created on that network interface. ovirtmgmt is
> used by VMs.

So, do you mean a check to ensure that the FQDN used to add the host to the engine is not the same as the one used to create the gluster cluster?

Comment 6 Sahina Bose 2019-10-24 11:56:18 UTC
Closing. Please re-open with the requested information if this needs to be taken up.

Comment 7 SATHEESARAN 2020-05-15 06:30:40 UTC
(In reply to Sahina Bose from comment #6)
> Closing. Please re-open with the requested information if this needs to be
> taken up.

I feel the better option here is that, when adding a host to the HC cluster, there should be
separate options for a backend/gluster/storage FQDN and a host FQDN.

Gluster peer management should happen using the storage FQDN, and the host FQDN should be used for ovirtmgmt.
I will discuss this further with Gobinda and team, and will raise a suitable BZ with the scenarios listed.
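The split proposed here could look like the following name layout (hypothetical addresses and hostnames), with the host FQDN resolving on the frontend network and the storage FQDN on the backend network:

```
# /etc/hosts on each hypervisor (illustrative addresses only)
10.70.37.28     host1.example.com           # host FQDN    -> ovirtmgmt (frontend)
192.168.10.28   host1-storage.example.com   # storage FQDN -> gluster (backend)
```

The engine would then add `host1.example.com` while gluster peers are probed as `host1-storage.example.com`.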

