Bug 1511580 - An IPv6 address for a RHV VM's NIC is incorrectly stored as an ipaddress attribute rather than ipv6address attribute
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat CloudForms Management Engine
Classification: Red Hat
Component: Providers
Version: 5.8.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: GA
Target Release: 5.10.0
Assignee: Alona Kaplan
QA Contact: Angelina Vasileva
URL:
Whiteboard:
Depends On:
Blocks: 1530739
 
Reported: 2017-11-09 15:36 UTC by Peter McGowan
Modified: 2019-07-31 08:58 UTC
CC List: 9 users

Fixed In Version: 5.10.0.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned to: 1530739
Environment:
Last Closed: 2018-06-21 21:10:28 UTC
Category: ---
Cloudforms Team: RHEVM
Target Upstream Version:
Embargoed:


Attachments: none

Links
Github ManageIQ/manageiq pull 16619 (last updated 2017-12-24 14:07:30 UTC)
Github ManageIQ/manageiq-providers-ovirt pull 170 (last updated 2017-12-06 07:13:31 UTC)

Description Peter McGowan 2017-11-09 15:36:55 UTC
Description of problem:
If a RHV VM's NIC has both IPv4 and IPv6 addresses, they are visible correctly as:

$evm.root['vm'].hardware.ipaddresses = ["10.19.137.131", "fe80::bcef:feff:feed:1cc", "2620:52:0:1388:bcef:feff:feed:1cc"]   (type: Array)

However, these 3 addresses are each represented as separate hardware.network objects, which store the addresses incorrectly:

$evm.root['vm'].hardware.network[0].ipaddress = 10.19.137.131
$evm.root['vm'].hardware.network[0].ipv6address = nil

$evm.root['vm'].hardware.network[1].ipaddress = fe80::bcef:feff:feed:1cc
$evm.root['vm'].hardware.network[1].ipv6address = nil

$evm.root['vm'].hardware.network[2].ipaddress = 2620:52:0:1388:bcef:feff:feed:1cc   
$evm.root['vm'].hardware.network[2].ipv6address = nil

By contrast, a VMware VM stores the details differently, in a single hardware.network object per NIC:

$evm.root['vm'].hardware.ipaddresses = ["10.39.167.137", "2620:52:0:27a7:250:56ff:febf:84a2"]

$evm.root['vm'].hardware.network[0].ipaddress = 10.39.167.137
$evm.root['vm'].hardware.network[0].ipv6address = 2620:52:0:27a7:250:56ff:febf:84a2
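The split the report expects can be sketched outside of automate with Ruby's stdlib IPAddr class; the addresses below are the ones from the report:

```ruby
require "ipaddr"

# Addresses as returned by $evm.root['vm'].hardware.ipaddresses in the report
addresses = ["10.19.137.131",
             "fe80::bcef:feff:feed:1cc",
             "2620:52:0:1388:bcef:feff:feed:1cc"]

# Partition into IPv4 and IPv6; this is the split that the ipaddress /
# ipv6address attributes are expected to reflect
ipv4, ipv6 = addresses.partition { |a| IPAddr.new(a).ipv4? }

puts "ipaddress candidates:   #{ipv4.inspect}"
puts "ipv6address candidates: #{ipv6.inspect}"
```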

Version-Release number of selected component (if applicable):
5.8.2.3

How reproducible:
Every time

Steps to Reproduce:
1. Find a RHV VM whose NIC has both an IPv4 and an IPv6 address
2. Examine the $evm.root['vm'].hardware.networks association from automate

Actual results:
Each IPv6 address is stored in the ipaddress attribute, and the ipv6address attribute is nil

Expected results:
Each IPv6 address should be stored in the ipv6address attribute

Additional info:
For consistency between providers we should have a predictable number of hardware.network objects per NIC: either one per address (as currently in RHV) or one per NIC (as currently in VMware). At present the representation differs between providers, which creates ambiguity when trying to access the details from automate.
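Until the representations converge, a caller has to tolerate both shapes. A minimal sketch of a provider-agnostic address lookup, using plain hashes to stand in for hardware.network rows (the `all_addresses` helper is hypothetical, not part of the product):

```ruby
# Hypothetical helper: flatten either representation into one address list.
# Works whether a provider emits one row per address or one row per NIC.
def all_addresses(networks)
  networks.flat_map { |n| [n[:ipaddress], n[:ipv6address]] }.compact
end

# RHV-style rows: one row per address, ipv6address always nil (the bug)
rhv_rows = [
  { ipaddress: "10.19.137.131",            ipv6address: nil },
  { ipaddress: "fe80::bcef:feff:feed:1cc", ipv6address: nil },
]

# VMware-style row: one row per NIC, both families populated
vmware_rows = [
  { ipaddress: "10.39.167.137", ipv6address: "2620:52:0:27a7:250:56ff:febf:84a2" },
]

puts all_addresses(rhv_rows).inspect
puts all_addresses(vmware_rows).inspect
```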

Comment 5 Alona Kaplan 2017-12-07 08:49:00 UTC
Hi Peter,

I changed the RHV implementation to be consistent with the VMware one.

Please note that the same NIC can have multiple IPv4/IPv6 addresses.

The new representation will be the following (the order is determined by an alphabetic sort):

$evm.root['vm'].hardware.ipaddresses = ["10.19.137.131", "fe80::bcef:feff:feed:1cc", "2620:52:0:1388:bcef:feff:feed:1cc"]

$evm.root['vm'].hardware.network[0].ipaddress = 10.19.137.131
$evm.root['vm'].hardware.network[0].ipv6address = 2620:52:0:1388:bcef:feff:feed:1cc  

$evm.root['vm'].hardware.network[1].ipaddress = nil
$evm.root['vm'].hardware.network[1].ipv6address = fe80::bcef:feff:feed:1cc


Please note that a guest device can link to only one network, so it will link only to the first one; the second network won't have a guest device.

$evm.root['vm'].hardware.guest_devices[?].network =  $evm.root['vm'].hardware.network[0]
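The pairing described above can be illustrated with a short Ruby sketch (this mimics the described behaviour with stdlib calls; it is not the merged provider code):

```ruby
require "ipaddr"

addresses = ["10.19.137.131",
             "fe80::bcef:feff:feed:1cc",
             "2620:52:0:1388:bcef:feff:feed:1cc"]

# Split by address family, then sort each list alphabetically,
# as described in the comment above
ipv4, ipv6 = addresses.partition { |a| IPAddr.new(a).ipv4? }
ipv4.sort!
ipv6.sort!

# Pair the sorted lists into network-like rows; the shorter list pads with nil
rows = Array.new([ipv4.length, ipv6.length].max) do |i|
  { ipaddress: ipv4[i], ipv6address: ipv6[i] }
end

rows.each_with_index do |row, i|
  puts "network[#{i}].ipaddress = #{row[:ipaddress].inspect}, " \
       "ipv6address = #{row[:ipv6address].inspect}"
end
```

Because "2620:…" sorts before "fe80:…", the first row carries the global IPv6 address and the second row carries only the link-local one, matching the representation shown above.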

Comment 6 Peter McGowan 2017-12-07 13:43:09 UTC
Looks good.

Comment 7 Alona Kaplan 2017-12-07 13:58:29 UTC
Peter, one more question.

It was fixed for regular refresh v4 and graph refresh.

Should regular refresh v3 be fixed as well?

Comment 8 Peter McGowan 2017-12-12 08:24:28 UTC
Hi

Ideally yes, it should be fixed for v3 as well.

Thanks

Comment 9 Alona Kaplan 2017-12-13 10:00:17 UTC
The fix was merged without v3.
Fixing v3 has lower priority; if you think it's worth it, please open a separate bug.
