Created attachment 1513702 [details]
migrate ipv6 to ipv4 logs
Description of problem:
Communication between hosts with different IP stacks (only IPv4, only IPv6, or both enabled) can fail, for example during VM migration.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. In host-1 go to: Compute | Hosts | Network Interfaces | Setup Host Networks
2. Edit 'ovirtmgmt' attached to NIC
3. Enable IPv6 and IPv4
4. In host-2 go to the same place and enable only IPv4, IPv6 is disabled
5. In DNS, make sure both hosts resolve to both IPv4 and IPv6 addresses
6. Run VM on host-1
7. Try to migrate VM from host-1 to host-2
Expected results:
Migration should succeed, or the system should alert about the mismatch.
Actual results:
1. Migration from host-2 to host-1 works.
2. The error in the log is: "No route to host"
3. It turned out to be a dual-stack issue: the host that supports both IPv4 and IPv6 tries to open an IPv6 connection to the host that supports only IPv4, and fails.
4. If the FQDN of the target host is resolved to an IPv4 address, the migration succeeds.
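The failure mode above can be sketched outside oVirt with the standard library: `socket.getaddrinfo` reports which address families a name resolves to, and the problem occurs when DNS advertises a family (IPv6) that the destination host does not actually serve. This is an illustrative sketch, not engine code; the `mismatch` helper and its inputs are hypothetical.

```python
import socket

def resolved_families(host):
    """Return the set of address families ('ipv4'/'ipv6') a name resolves to."""
    fams = set()
    for family, *_ in socket.getaddrinfo(host, None):
        if family == socket.AF_INET:
            fams.add("ipv4")
        elif family == socket.AF_INET6:
            fams.add("ipv6")
    return fams

def mismatch(dest_dns_families, dest_configured_families):
    """True if DNS advertises a family the destination host is not configured for.

    A dual-stack source may prefer the IPv6 record and then get
    'No route to host' when the destination only has IPv4 configured.
    """
    return bool(dest_dns_families - dest_configured_families)
```

For the reported scenario, the destination FQDN resolves to both families while the destination host is configured for IPv4 only, so `mismatch({"ipv4", "ipv6"}, {"ipv4"})` is true; once the FQDN resolves only to IPv4, the check passes and migration succeeds.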
*** Bug 1658639 has been marked as a duplicate of this bug. ***
Is this documented as a limitation?
Should we have an insights rule for this?
(In reply to Dominik Holler from comment #3)
> Should we have an insights rule for this?
but that would only be possible if we have the IP stack information aggregated somewhere. Can we see anywhere in the engine DB/API which stack a host is using?
(In reply to Michal Skrivanek from comment #4)
> (In reply to Dominik Holler from comment #3)
> > Should we have an insights rule for this?
> but that would only be possible if we have the IP stack information
> aggregated somewhere. Can we see anywhere in the engine DB/API which stack a
> host is using?
Indirectly via the API.
A detailed approach could look like this:

for every cluster:
    for every network role:
        check that all hosts have only an IPv4 address on the attachment of the role's network, or
        check that all hosts have only an IPv6 address on the attachment of the role's network, or
        check that all hosts have both IPv4 and IPv6 addresses on the attachment of the role's network
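The check above can be sketched as a pure function over already-fetched attachment data. The data layout here (a dict of role name to per-host attachments with optional "ipv4"/"ipv6" keys) is an assumption for illustration, not the engine API schema; a real rule would populate it from the REST API's host network attachments.

```python
def stack_of(attachment):
    """Classify one network attachment by the address families it configures."""
    fams = set()
    if attachment.get("ipv4"):
        fams.add("ipv4")
    if attachment.get("ipv6"):
        fams.add("ipv6")
    return frozenset(fams)

def inconsistent_roles(cluster):
    """Return network roles whose attachments do not use the same IP stack
    on every host in the cluster -- the condition the rule would flag."""
    bad = []
    for role, attachments in cluster.items():
        stacks = {stack_of(a) for a in attachments}
        if len(stacks) > 1:
            bad.append(role)
    return bad
```

With the reported setup (host-1 dual stack, host-2 IPv4-only on the migration network), the migration role would be flagged while a role configured identically on both hosts would not.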
Closing WONTFIX. The described scenario is not considered critical enough to be included in Insights.
Feel free to open a bug on rhv-log-collector-analyzer if detecting the issue when someone is actively looking for problems is enough, or on ovirt-engine if you think the administrator should be notified about the issue as soon as possible.