Bug 1567735 - all IPv6 tests failed on OVN / OVN-DVR deployment
Summary: all IPv6 tests failed on OVN / OVN-DVR deployment
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: python-networking-ovn
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: z3
Target Release: 13.0 (Queens)
Assignee: Assaf Muller
QA Contact: Eran Kuris
URL:
Whiteboard:
Depends On:
Blocks: ovn_ipv6_rs ovn_ipv6_ra ovn_ipv6_ra_python 1580460
 
Reported: 2018-04-16 06:36 UTC by Eran Kuris
Modified: 2019-09-09 13:14 UTC
CC: 18 users

Fixed In Version: python-networking-ovn-4.0.3-1.el7ost
Doc Type: Release Note
Doc Text:
OSP13 using OVN as the networking backend will not include IPv6 support in the first release. There is a problem with the responses to the Neighbor Solicitation requests coming from guest VMs that causes loss of the default routes.
Clone Of:
Environment:
Last Closed: 2018-11-13 23:32:54 UTC
Target Upstream Version:


Links
Red Hat Product Errata RHBA-2018:3614 (last updated 2018-11-13 23:34:31 UTC)

Description Eran Kuris 2018-04-16 06:36:02 UTC
Description of problem:
All IPv6 Tempest tests failed on the OVN-DVR deployment.

tempest.scenario.test_network_v6.TestGettingAddress.test_dhcp6_stateless_from_os[compute,id-d7e1f858-187c-45a6-89c9-bdafde619a9f,network,slow]
tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_dhcp6_stateless_from_os[compute,id-76f26acd-9688-42b4-bc3e-cd134c4cb09e,network,slow]
tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_dhcpv6_stateless[compute,id-cf1c4425-766b-45b8-be35-e2959728eb00,network,slow]
tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_slaac[compute,id-9178ad42-10e4-47e9-8987-e02b170cc5cd,network]
tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_slaac_from_os[compute,id-b6399d76-4438-4658-bcf5-0d6c8584fde2,network,slow]
tempest.scenario.test_network_v6.TestGettingAddress.test_multi_prefix_dhcpv6_stateless[compute,id-7ab23f41-833b-4a16-a7c9-5b42fe6d4123,network,slow]
tempest.scenario.test_network_v6.TestGettingAddress.test_multi_prefix_slaac[compute,id-dec222b1-180c-4098-b8c5-cc1b8342d611,network,slow]
tempest.scenario.test_network_v6.TestGettingAddress.test_slaac_from_os[compute,id-2c92df61-29f0-4eaa-bee3-7c65bef62a43,network,slow]

Version-Release number of selected component (if applicable):
13   -p 2018-04-10.2
python-networking-ovn-4.0.1-0.20180315174741.a57c70e.el7ost.noarch
python-networking-ovn-metadata-agent-4.0.1-0.20180315174741.a57c70e.el7ost.noarch
puppet-ovn-12.3.1-0.20180221062110.4b16f7c.el7ost.noarch
openvswitch-ovn-central-2.9.0-15.el7fdp.x86_64
openvswitch-ovn-common-2.9.0-15.el7fdp.x86_64
openvswitch-ovn-host-2.9.0-15.el7fdp.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Run the IPv6 Tempest tests on an OVN-DVR deployment (use the CI job).

Actual results:


Expected results:


Additional info:

Comment 1 Eran Kuris 2018-04-16 07:25:49 UTC
After a manual test, it looks like the router does not route IPv6 between the two IPv6 subnets:

                                 R
                                /  \
                    IPv6_subnet1     IPv6_subnet2
                       /                  \
               VM_net-64-1                VM_net-64-2 



IPv6 traffic from VM_1 reaches its gateway, but IPv6 traffic from VM_1 cannot reach VM_2.

(overcloud) [stack@undercloud-0 ~]$ openstack server list
+--------------------------------------+-------------+--------+----------------------------------------------------------+----------+----------+
| ID                                   | Name        | Status | Networks                                                 | Image    | Flavor   |
+--------------------------------------+-------------+--------+----------------------------------------------------------+----------+----------+
| 5d00b3ef-7759-43c7-ac08-fbce4c7ed764 | VM_net-64-2 | ACTIVE | net-64-2=2002::f816:3eff:fe51:73, 10.0.2.6, 10.0.0.213   | cirros35 | m1.smail |
| 7b424afa-b61a-413b-93dc-20e76372768e | VM_net-64-1 | ACTIVE | net-64-1=2001::f816:3eff:fef9:b470, 10.0.1.9, 10.0.0.219 | cirros35 | m1.smail |



ssh cirros@10.0.0.219
cirros@10.0.0.219's password: 
Permission denied, please try again.
cirros@10.0.0.219's password: 
$ ping6 2001::1
PING 2001::1 (2001::1): 56 data bytes
64 bytes from 2001::1: seq=0 ttl=254 time=2.644 ms
64 bytes from 2001::1: seq=1 ttl=254 time=0.714 ms
64 bytes from 2001::1: seq=2 ttl=254 time=0.614 ms
^C
--- 2001::1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.614/1.324/2.644 ms
$ ping6 2001::1
PING 2001::1 (2001::1): 56 data bytes
64 bytes from 2001::1: seq=0 ttl=254 time=0.774 ms
64 bytes from 2001::1: seq=1 ttl=254 time=0.823 ms
^C
--- 2001::1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.774/0.798/0.823 ms
$ ping6 2002::1
PING 2002::1 (2002::1): 56 data bytes
ping6: sendto: Network is unreachable
ping6 2002::f816:3eff:fe51:73
PING 2002::f816:3eff:fe51:73 (2002::f816:3eff:fe51:73): 56 data bytes
ping6: sendto: Network is unreachable
$ ping 10.0.2.6
PING 10.0.2.6 (10.0.2.6): 56 data bytes
64 bytes from 10.0.2.6: seq=0 ttl=63 time=2.619 ms
64 bytes from 10.0.2.6: seq=1 ttl=63 time=1.349 ms

Comment 3 Numan Siddique 2018-04-16 13:42:46 UTC
The reason for the failure you see in your manual testing is that ovn-controller, when sending the periodic IPv6 Router Advertisements, is not setting the Router Lifetime, because of which the VM does not add a default route. It needs a fix in ovn-controller.
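A rough model of the host-side behaviour described above (a sketch of RFC 4861 semantics for illustration only, not ovn-controller or kernel code): the guest installs a default route only when an RA carries a non-zero Router Lifetime, and drops it when that lifetime expires without a refresh.

```python
# Sketch of how a host treats the Router Lifetime field of an IPv6
# Router Advertisement (RFC 4861 section 6.3.4). Times are in seconds.

class HostRouteTable:
    def __init__(self):
        self.default_route_expiry = None  # absolute time, or None = no route

    def receive_ra(self, now, router_lifetime):
        # A non-zero Router Lifetime installs (or refreshes) a default
        # route; zero means "do not use this router as a default router",
        # which is also the effect of the field never being set.
        if router_lifetime > 0:
            self.default_route_expiry = now + router_lifetime
        else:
            self.default_route_expiry = None

    def has_default_route(self, now):
        return (self.default_route_expiry is not None
                and now < self.default_route_expiry)

host = HostRouteTable()
host.receive_ra(now=0, router_lifetime=0)      # RA with lifetime unset/zero
print(host.has_default_route(now=1))           # False: no default route
host.receive_ra(now=10, router_lifetime=1800)  # RA with lifetime set
print(host.has_default_route(now=100))         # True: route installed
```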

Comment 4 Numan Siddique 2018-04-16 13:58:02 UTC
Patch to fix the issue - https://patchwork.ozlabs.org/patch/898637/

Comment 5 Daniel Alvarez Sanchez 2018-04-17 14:11:18 UTC
The patch that Numan pointed out is already merged in both the master and 2.9 branches. Can we get it into the next OVS 2.9 downstream package, please?

Comment 13 Eran Kuris 2018-05-06 09:00:00 UTC
According to CI, we have some IPv6 tests that failed:
https://rhos-qe-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/DFG/view/network/view/networking-ovn/job/DFG-network-networking-ovn-13_director-rhel-virthost-3cont_2comp-ipv4-geneve-tempest-custom-image/19/testReport/

The tests failed on a job that uses a RHEL image.
I also verified it manually: ping6 between the two networks failed.

[root@vm-net-64-2 ~]# ip -6 route
unreachable ::/96 dev lo metric 1024 error -113 
unreachable ::ffff:0.0.0.0/96 dev lo metric 1024 error -113 
2002::/64 dev eth0 proto kernel metric 256 
unreachable 2002:a00::/24 dev lo metric 1024 error -113 
unreachable 2002:7f00::/24 dev lo metric 1024 error -113 
unreachable 2002:a9fe::/32 dev lo metric 1024 error -113 
unreachable 2002:ac10::/28 dev lo metric 1024 error -113 
unreachable 2002:c0a8::/32 dev lo metric 1024 error -113 
unreachable 2002:e000::/19 dev lo metric 1024 error -113 
unreachable 3ffe:ffff::/32 dev lo metric 1024 error -113 
fe80::/64 dev eth0 proto kernel metric 256 mtu 1442 
[root@vm-net-64-2 ~]# ip a 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:f0:52:e3 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.10/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 42891sec preferred_lft 42891sec
    inet6 2002::f816:3eff:fef0:52e3/64 scope global mngtmpaddr dynamic 
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fef0:52e3/64 scope link 
       valid_lft forever preferred_lft forever


[root@vm-net-64-1 ~]# ip -6 route
unreachable ::/96 dev lo metric 1024 error -113 
unreachable ::ffff:0.0.0.0/96 dev lo metric 1024 error -113 
2001::/64 dev eth0 proto kernel metric 256 
unreachable 2002:a00::/24 dev lo metric 1024 error -113 
unreachable 2002:7f00::/24 dev lo metric 1024 error -113 
unreachable 2002:a9fe::/32 dev lo metric 1024 error -113 
unreachable 2002:ac10::/28 dev lo metric 1024 error -113 
unreachable 2002:c0a8::/32 dev lo metric 1024 error -113 
unreachable 2002:e000::/19 dev lo metric 1024 error -113 
unreachable 3ffe:ffff::/32 dev lo metric 1024 error -113 
fe80::/64 dev eth0 proto kernel metric 256 mtu 1442 
[root@vm-net-64-1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:3a:d0:08 brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.12/24 brd 10.0.1.255 scope global dynamic eth0
       valid_lft 43000sec preferred_lft 43000sec
    inet6 2001::f816:3eff:fe3a:d008/64 scope global mngtmpaddr dynamic 
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe3a:d008/64 scope link 
       valid_lft forever preferred_lft forever
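The key symptom in both route tables above is the absence of any `default` entry. A minimal check against a captured `ip -6 route` dump (a sketch; the dump is abbreviated from the output above):

```python
# Abbreviated 'ip -6 route' output captured from the affected guest.
ROUTE_DUMP = """\
2002::/64 dev eth0 proto kernel metric 256
unreachable 2002:a00::/24 dev lo metric 1024 error -113
fe80::/64 dev eth0 proto kernel metric 256 mtu 1442
"""

def has_default_route(dump):
    # A usable default route starts with "default" (e.g. one learned from
    # an RA shows as "default via fe80::... dev eth0 proto ra ...");
    # "unreachable default" lines do not count.
    return any(line.startswith("default ") or line == "default"
               for line in dump.splitlines())

print(has_default_route(ROUTE_DUMP))  # False: guest has no IPv6 default route
```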

Comment 14 Eran Kuris 2018-05-06 12:01:27 UTC
I succeeded in reproducing the failure with both a RHEL 7.4 image and a Cirros image.
I see a strange behavior: after the instances booted, I pinged from VM2 to VM1; six packets reached VM1, and after that I got: "ping6: sendto: Network is unreachable".
login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.
vm-net-64-2 login: cirros
Password: 
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc pfifo_fast qlen 1000
    link/ether fa:16:3e:5b:06:5f brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.4/24 brd 10.0.2.255 scope global eth0
    inet6 2002::f816:3eff:fe5b:65f/64 scope global dynamic 
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe5b:65f/64 scope link 
       valid_lft forever preferred_lft forever
$ ping6 2001::f816:3eff:feb2:9272
PING 2001::f816:3eff:feb2:9272 (2001::f816:3eff:feb2:9272): 56 data bytes
64 bytes from 2001::f816:3eff:feb2:9272: seq=0 ttl=254 time=2.411 ms
64 bytes from 2001::f816:3eff:feb2:9272: seq=1 ttl=254 time=1.664 ms
64 bytes from 2001::f816:3eff:feb2:9272: seq=2 ttl=254 time=0.932 ms
64 bytes from 2001::f816:3eff:feb2:9272: seq=3 ttl=254 time=1.139 ms
64 bytes from 2001::f816:3eff:feb2:9272: seq=4 ttl=254 time=0.936 ms
64 bytes from 2001::f816:3eff:feb2:9272: seq=5 ttl=254 time=0.919 ms
ping6: sendto: Network is unreachable
$ # /sbin/ip -6 route show  dev eth0
$  /sbin/ip -6 route show  dev eth0
-sh: /sbin/ip: not found
$  ip -6 route
2002::/64 dev eth0  metric 256 
fe80::/64 dev eth0  metric 256 
unreachable default dev lo  metric -1  error -101
ff00::/8 dev eth0  metric 256 
unreachable default dev lo  metric -1  error -101
$ ping6 2002::1
PING 2002::1 (2002::1): 56 data bytes
64 bytes from 2002::1: seq=0 ttl=254 time=11.360 ms
64 bytes from 2002::1: seq=1 ttl=254 time=0.709 ms
64 bytes from 2002::1: seq=2 ttl=254 time=0.676 ms
64 bytes from 2002::1: seq=3 ttl=254 time=0.645 ms
64 bytes from 2002::1: seq=4 ttl=254 time=0.709 ms

--- 2002::1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.645/2.819/11.360 ms
$ ping6 2001::1
PING 2001::1 (2001::1): 56 data bytes
ping6: sendto: Network is unreachable

Comment 16 Numan Siddique 2018-05-07 07:41:20 UTC
Issue 1 : CI failures
-----------
https://rhos-qe-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/DFG/view/network/view/networking-ovn/job/DFG-network-networking-ovn-13_director-rhel-virthost-3cont_2comp-ipv4-geneve-tempest-custom-image/19/testReport/


If we see the link - https://rhos-qe-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/DFG/view/network/view/networking-ovn/job/DFG-network-networking-ovn-13_director-rhel-virthost-3cont_2comp-ipv4-geneve-tempest-custom-image/19/testReport/tempest.scenario.test_network_v6/TestGettingAddress/test_dualnet_multi_prefix_dhcpv6_stateless_compute_id_cf1c4425_766b_45b8_be35_e2959728eb00_network_slow_/

eth1 is not configured by network service 

*****
AssertionError: Address 2003::1:f816:3eff:fec2:acc9 not configured for instance 83aecc96-c22c-49a6-b6b3-ab57ec27ef1e, ip address output is
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:b7:b3:a1 brd ff:ff:ff:ff:ff:ff
    inet 10.100.0.9/28 brd 10.100.0.15 scope global dynamic eth0
       valid_lft 42878sec preferred_lft 42878sec
    inet6 fe80::f816:3eff:feb7:b3a1/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:c2:ac:c9 brd ff:ff:ff:ff:ff:ff
*****

And that is why the tests fail. I will verify whether we see the same behaviour with an OSP13 (ml2ovs) setup. It is probably a metadata issue.


Issue 2 : IPv6 ping not working
------------------------------

After testing in Eran's setup, we noticed that the default route gets deleted when its lifetime timer expires. If the next RA from ovn-controller does not arrive before the timer expires, connectivity is lost.

ovn-controller sends periodic RAs based on RFC 4861. We need to revisit the code in ovn-controller and see if there is any bug. Right now it uses random(mininterval, maxinterval) to select the next periodic RA interval.
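For reference, the RFC 4861 timer relationships can be sketched as follows (the constants are the RFC's defaults, assumed for illustration; they are not values read from ovn-controller). With a correctly advertised lifetime, the route is refreshed within its lifetime even if an RA or two is lost:

```python
import random

# RFC 4861 defaults, in seconds (assumed here for illustration).
MAX_RTR_ADV_INTERVAL = 600
MIN_RTR_ADV_INTERVAL = MAX_RTR_ADV_INTERVAL / 3   # RFC default: 0.33 * max
ADV_DEFAULT_LIFETIME = 3 * MAX_RTR_ADV_INTERVAL   # RFC default: 1800

def next_ra_delay(rng=random):
    # As described above, the next periodic RA is scheduled at a random
    # point in [min_interval, max_interval].
    return rng.uniform(MIN_RTR_ADV_INTERVAL, MAX_RTR_ADV_INTERVAL)

delay = next_ra_delay()
print(MIN_RTR_ADV_INTERVAL <= delay <= MAX_RTR_ADV_INTERVAL)  # True

# Even sending at the maximum interval and losing two consecutive RAs,
# the third RA still arrives within the advertised lifetime.
print(3 * MAX_RTR_ADV_INTERVAL <= ADV_DEFAULT_LIFETIME)  # True
```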

Comment 17 Eran Kuris 2018-05-07 08:01:07 UTC
(In reply to Numan Siddique from comment #16)
> Issue 1 : CI failures
> [...]
> And that is why the tests fail. I will verify whether we see the same
> behaviour with an OSP13 (ml2ovs) setup. It is probably a metadata issue.



Numan, regarding issue 1: we need to understand why it failed on the RHEL-based job while it passed on the Cirros-based jobs.

Comment 19 Numan Siddique 2018-05-07 19:05:09 UTC
I have submitted the patch to fix the IPv6 ping issue - https://patchwork.ozlabs.org/patch/909890/. More details in the commit message of the patch.

Comment 23 Daniel Alvarez Sanchez 2018-05-21 14:13:57 UTC
*** Bug 1580460 has been marked as a duplicate of this bug. ***

Comment 26 Eran Kuris 2018-08-09 05:50:10 UTC
It looks like we have a new traceback on z-stream 2: the IPv6 address was not set on the instances.

TestGettingAddress-212884709', u'x-compute-request-id': 'req-8ce9bd03-d040-48b9-a652-39928cb2731a', u'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version', u'server': 'Apache', u'openstack-api-version': 'compute 2.1', u'connection': 'close', u'x-openstack-nova-api-version': '2.1', u'date': 'Wed, 08 Aug 2018 13:55:43 GMT', u'content-type': 'application/json', u'x-openstack-request-id': 'req-8ce9bd03-d040-48b9-a652-39928cb2731a'}
        Body:
}}}

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/tempest/common/utils/__init__.py", line 88, in wrapper
    return f(*func_args, **func_kwargs)
  File "/usr/lib/python2.7/site-packages/tempest/scenario/test_network_v6.py", line 245, in test_dualnet_dhcp6_stateless_from_os
    self._prepare_and_test(address6_mode='dhcpv6-stateless', dualnet=True)
  File "/usr/lib/python2.7/site-packages/tempest/scenario/test_network_v6.py", line 196, in _prepare_and_test
    (ip, srv['id'], ssh.exec_command("ip address")))
  File "/usr/lib/python2.7/site-packages/unittest2/case.py", line 666, in fail
    raise self.failureException(msg)
AssertionError: Address 2003::f816:3eff:fef1:9924 not configured for instance d3e41c6d-fa4d-489e-9083-94f44c575c7f, ip address output is
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:c9:7a:10 brd ff:ff:ff:ff:ff:ff
    inet 10.100.0.9/28 brd 10.100.0.15 scope global dynamic eth0
       valid_lft 42939sec preferred_lft 42939sec
    inet6 fe80::f816:3eff:fec9:7a10/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc pfifo_fast state UP qlen 1000

Comment 29 Lon Hohberger 2018-08-29 19:53:03 UTC
According to our records, this should be resolved by openvswitch-2.9.0-54.el7fdp.  This build is available now.

Comment 30 Numan Siddique 2018-08-30 07:40:02 UTC
(In reply to Lon Hohberger from comment #29)
> According to our records, this should be resolved by
> openvswitch-2.9.0-54.el7fdp.  This build is available now.

We need another patch for this fix, which was recently merged in upstream master and upstream branch-2.9: https://github.com/openvswitch/ovs/commit/935a968d3413d032560ea66f7a5f88e0cfd4eafc

Is that fix available in openvswitch-2.9.0-54.el7fdp?

Comment 31 Eran Kuris 2018-08-30 07:56:40 UTC
If the fix is not available yet, please move the bug out of ON_QA.

thanks

Comment 32 Eran Kuris 2018-08-30 11:04:31 UTC
Tested on OSP13 -p 2018-08-22.2

RPM : openvswitch-2.9.0-54.el7fdp.x86_64

and the problem still exists:

$ ping6 2001::f816:3eff:fe27:d815
PING 2001::f816:3eff:fe27:d815 (2001::f816:3eff:fe27:d815): 56 data bytes
64 bytes from 2001::f816:3eff:fe27:d815: seq=0 ttl=254 time=4.019 ms
64 bytes from 2001::f816:3eff:fe27:d815: seq=1 ttl=254 time=1.851 ms
64 bytes from 2001::f816:3eff:fe27:d815: seq=2 ttl=254 time=1.131 ms
64 bytes from 2001::f816:3eff:fe27:d815: seq=3 ttl=254 time=1.112 ms
64 bytes from 2001::f816:3eff:fe27:d815: seq=4 ttl=254 time=1.227 ms
64 bytes from 2001::f816:3eff:fe27:d815: seq=5 ttl=254 time=1.144 ms
ping6: sendto: Network is unreachable
$ ping6 2001::f816:3eff:fe27:d815
PING 2001::f816:3eff:fe27:d815 (2001::f816:3eff:fe27:d815): 56 data bytes
ping6: sendto: Network is unreachable

Comment 33 Daniel Alvarez Sanchez 2018-10-22 07:11:37 UTC
Any updates on this?

Comment 34 Daniel Alvarez Sanchez 2018-10-22 07:38:25 UTC
Numan and I checked, and the fix upstream is [0]. It is not in the 4.0.2 upstream tag, so it is not in the downstream package used by the latest OSP13 puddle. However, it is included in 4.0.3, which is already available in Brew.

That patch should not be needed if we used OVS 2.9.0-77+, but since we have a fix/workaround in networking-ovn that is available in a build, I am flipping the component to networking-ovn and marking the bug as MODIFIED.

[0] https://github.com/openstack/networking-ovn/commit/52abcac1f2f55d93b34a36e4b8fbb6a56845b7a4

Comment 36 Eran Kuris 2018-10-29 12:54:09 UTC
Fix verified:

cat core_puddle_version 
2018-10-24.1
(overcloud) [stack@undercloud-0 ~]$ rpm -qa | grep networking-ovn
(overcloud) [stack@undercloud-0 ~]$ ssh heat-admin@192.168.24.18
Last login: Mon Oct 29 11:36:20 2018 from 192.168.24.254
[heat-admin@compute-0 ~]$ sudo -i 
[root@compute-0 ~]#  rpm -qa | grep networking-ovn
python-networking-ovn-4.0.3-1.el7ost.noarch

(overcloud) [stack@undercloud-0 ~]$ openstack server list 
+--------------------------------------+---------------+--------+-----------------------------------------------------------+--------+-------------+
| ID                                   | Name          | Status | Networks                                                  | Image  | Flavor      |
+--------------------------------------+---------------+--------+-----------------------------------------------------------+--------+-------------+
| 8ff160fb-644a-4152-8e72-dd37cbd73f0b | net-64-2_VM_1 | ACTIVE | net-64-2=2002::f816:3eff:fe71:d858, 10.0.2.10, 10.0.0.233 | rhel75 | rhel_flavor |
| dde21418-8d2d-4827-be31-139f079f7539 | net-64-1_VM_1 | ACTIVE | net-64-1=10.0.1.19, 2001::f816:3eff:fefa:74af, 10.0.0.220 | rhel75 | rhel_flavor |
+--------------------------------------+---------------+--------+-----------------------------------------------------------+--------+-------------+
(overcloud) [stack@undercloud-0 ~]$ ssh root@10.0.0.233
The authenticity of host '10.0.0.233 (10.0.0.233)' can't be established.
ECDSA key fingerprint is SHA256:v3mOvJEV9UL989pTrvydI7JMSTH7LedeCorGdJeHjJE.
ECDSA key fingerprint is MD5:91:4d:32:84:f1:ff:d8:ca:ed:4d:13:be:64:05:18:cb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.233' (ECDSA) to the list of known hosts.
root@10.0.0.233's password: 
[root@net-64-2-vm-1 ~]# ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1442
        inet 10.0.2.10  netmask 255.255.255.0  broadcast 10.0.2.255
        inet6 fe80::f816:3eff:fe71:d858  prefixlen 64  scopeid 0x20<link>
        inet6 2002::f816:3eff:fe71:d858  prefixlen 64  scopeid 0x0<global>
        ether fa:16:3e:71:d8:58  txqueuelen 1000  (Ethernet)
        RX packets 268  bytes 31483 (30.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 354  bytes 34502 (33.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 6  bytes 416 (416.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6  bytes 416 (416.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@net-64-2-vm-1 ~]# ping6 2001::f816:3eff:fefa:74af
PING 2001::f816:3eff:fefa:74af(2001::f816:3eff:fefa:74af) 56 data bytes
64 bytes from 2001::f816:3eff:fefa:74af: icmp_seq=1 ttl=254 time=3.42 ms
64 bytes from 2001::f816:3eff:fefa:74af: icmp_seq=2 ttl=254 time=2.80 ms
64 bytes from 2001::f816:3eff:fefa:74af: icmp_seq=3 ttl=254 time=2.13 ms
64 bytes from 2001::f816:3eff:fefa:74af: icmp_seq=4 ttl=254 time=1.64 ms
64 bytes from 2001::f816:3eff:fefa:74af: icmp_seq=5 ttl=254 time=1.93 ms
64 bytes from 2001::f816:3eff:fefa:74af: icmp_seq=6 ttl=254 time=1.21 ms
64 bytes from 2001::f816:3eff:fefa:74af: icmp_seq=7 ttl=254 time=2.43 ms
64 bytes from 2001::f816:3eff:fefa:74af: icmp_seq=8 ttl=254 time=1.62 ms
64 bytes from 2001::f816:3eff:fefa:74af: icmp_seq=9 ttl=254 time=1.86 ms
64 bytes from 2001::f816:3eff:fefa:74af: icmp_seq=10 ttl=254 time=1.66 ms
64 bytes from 2001::f816:3eff:fefa:74af: icmp_seq=11 ttl=254 time=1.56 ms
64 bytes from 2001::f816:3eff:fefa:74af: icmp_seq=12 ttl=254 time=1.58 ms
64 bytes from 2001::f816:3eff:fefa:74af: icmp_seq=13 ttl=254 time=1.61 ms
64 bytes from 2001::f816:3eff:fefa:74af: icmp_seq=14 ttl=254 time=1.49 ms
64 bytes from 2001::f816:3eff:fefa:74af: icmp_seq=15 ttl=254 time=1.52 ms
64 bytes from 2001::f816:3eff:fefa:74af: icmp_seq=16 ttl=254 time=1.56 ms
64 bytes from 2001::f816:3eff:fefa:74af: icmp_seq=17 ttl=254 time=1.43 ms
64 bytes from 2001::f816:3eff:fefa:74af: icmp_seq=18 ttl=254 time=1.60 ms
64 bytes from 2001::f816:3eff:fefa:74af: icmp_seq=19 ttl=254 time=1.49 ms
^C
--- 2001::f816:3eff:fefa:74af ping statistics ---
19 packets transmitted, 19 received, 0% packet loss, time 18043ms
rtt min/avg/max/mdev = 1.219/1.821/3.420/0.525 ms
[root@net-64-2-vm-1 ~]# ping6 2002::f816:3eff:fe71:d858
PING 2002::f816:3eff:fe71:d858(2002::f816:3eff:fe71:d858) 56 data bytes
64 bytes from 2002::f816:3eff:fe71:d858: icmp_seq=1 ttl=64 time=0.277 ms
64 bytes from 2002::f816:3eff:fe71:d858: icmp_seq=2 ttl=64 time=0.055 ms
64 bytes from 2002::f816:3eff:fe71:d858: icmp_seq=3 ttl=64 time=0.050 ms
64 bytes from 2002::f816:3eff:fe71:d858: icmp_seq=4 ttl=64 time=0.055 ms
64 bytes from 2002::f816:3eff:fe71:d858: icmp_seq=5 ttl=64 time=0.054 ms
64 bytes from 2002::f816:3eff:fe71:d858: icmp_seq=6 ttl=64 time=0.056 ms
64 bytes from 2002::f816:3eff:fe71:d858: icmp_seq=7 ttl=64 time=0.055 ms
64 bytes from 2002::f816:3eff:fe71:d858: icmp_seq=8 ttl=64 time=0.041 ms
64 bytes from 2002::f816:3eff:fe71:d858: icmp_seq=9 ttl=64 time=0.044 ms
64 bytes from 2002::f816:3eff:fe71:d858: icmp_seq=10 ttl=64 time=0.065 ms
64 bytes from 2002::f816:3eff:fe71:d858: icmp_seq=11 ttl=64 time=0.072 ms
64 bytes from 2002::f816:3eff:fe71:d858: icmp_seq=12 ttl=64 time=0.079 ms

Comment 38 errata-xmlrpc 2018-11-13 23:32:54 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3614

