Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1823144

Summary: Enabling second ethernet interface breaks connectivity to the host
Product: [oVirt] ovirt-node
Reporter: Bret McMillan <bretm>
Component: General
Assignee: Yuval Turgeman <yturgema>
Status: CLOSED NOTABUG
QA Contact: Wei Wang <weiwang>
Severity: high
Docs Contact:
Priority: unspecified
Version: 4.3
CC: bugs, cshao, lsvaty, mavital, michal.skrivanek, nlevy, peyu, qiyuan, sbonazzo, shlei, weiwang, yaniwang, yturgema
Target Milestone: ---
Flags: cshao: testing_ack?
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-04-14 07:57:36 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Node
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Bret McMillan 2020-04-12 00:06:02 UTC
Description of problem:


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Michal Skrivanek 2020-04-12 03:39:26 UTC
Please add more information.

Comment 2 Bret McMillan 2020-04-12 19:59:47 UTC
Huh, apologies, thought I did submit more info.  Weird.

I have a fairly simple homelab, and I have been installing ovirt-node on some of my spare machines to play around with it.  Each has at least two ethernet adapters.

Upon initial install, when I had both plugged into my home network, with the first representing the intended mgmt vlan and the second representing a dedicated storage traffic vlan, I had huge challenges connecting to the nodes at all.

When I disconnected the storage vlan adapter, connectivity resumed (via the mgmt vlan).

After getting through the install, and getting a hosted engine up and running, I am able to see the storage interfaces as disconnected.  When I plug them in at this point, nothing happens (in a good way):  I still have connectivity via the mgmt vlan.

I've yet to figure out, though, how to simply create a non-VM storage network and assign it to the second interface...
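[Editorial note: in oVirt 4.3 this is normally done in the Administration Portal via Network → Networks → New with the "VM network" checkbox cleared, then Compute → Hosts → Network Interfaces → Setup Host Networks to drag the network onto the second NIC. The same can also be done against the engine's REST API; the sketch below only builds an illustrative request body for POST /ovirt-engine/api/networks. The helper name and exact JSON shape are assumptions and should be checked against your engine's API documentation.]

```python
import json

def network_payload(name, data_center, vm_network=False):
    """Build a request body for creating a logical network via the
    oVirt REST API (POST /ovirt-engine/api/networks).

    Assumption: the JSON shape mirrors the XML schema
    (<usages><usage>vm</usage></usages>); verify against your engine
    version before use.
    """
    body = {
        "name": name,
        "data_center": {"name": data_center},
        # Omitting the "vm" usage makes this a non-VM network,
        # suitable for dedicated storage or migration traffic.
        "usages": {"usage": ["vm"] if vm_network else []},
    }
    return json.dumps(body)

# Example: a non-VM "storage" network in the Default data center.
print(network_payload("storage", "Default"))
```

After the network exists, attaching it to a specific host NIC is done with the host's "setup networks" action (the Setup Host Networks dialog in the portal drives the same operation).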

Comment 3 Wei Wang 2020-04-13 03:00:25 UTC
(In reply to Bret McMillan from comment #2)
> [...]

The engine VM will be created within the storage domain, and the host is then added to that engine VM; but since the host's VLAN differs from the storage VLAN, the host cannot ping the engine VM? Is that the situation? QE has not deployed hosted engine across different VLANs. I may be missing the point of the issue; could you please give QE detailed steps to reproduce, along with the actual and expected results?

Comment 4 Sandro Bonazzola 2020-04-14 07:57:36 UTC
This doesn't seem to be a supported scenario: the storage network should be defined before deploying the hosted engine.