Bug 1471623 - neutron.tests.tempest.scenario.test_dvr.NetworkDvrTest. tests failed on CI [NEEDINFO]
Status: NEW
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-neutron
Version: 11.0 (Ocata)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Release: 11.0 (Ocata)
Assigned To: Brent Eagles
QA Contact: Toni Freger
Keywords: Triaged, ZStream
Reported: 2017-07-17 02:00 EDT by Eran Kuris
Modified: 2017-10-04 09:14 EDT
CC: 6 users

Type: Bug
Flags: beagles: needinfo? (ekuris)


Attachments:
log (80.64 KB, text/plain), 2017-07-17 02:00 EDT, Eran Kuris
Description Eran Kuris 2017-07-17 02:00:52 EDT
Created attachment 1299659 [details]
log

Description of problem:
Two DVR tests failed on the CI run:
1. neutron.tests.tempest.scenario.test_dvr.NetworkDvrTest.test_vm_reachable_through_compute
2. neutron.tests.tempest.scenario.test_dvr.NetworkDvrTest.test_update_centralized_router_to_dvr

2017-07-16 10:33:49,202 4924 ERROR    [tempest.lib.common.ssh] Failed to establish authenticated ssh connection to cirros@10.0.0.211 after 17 attempts
2017-07-16 10:33:49.202 4924 ERROR tempest.lib.common.ssh Traceback (most recent call last):
2017-07-16 10:33:49.202 4924 ERROR tempest.lib.common.ssh   File "/usr/lib/python2.7/site-packages/tempest/lib/common/ssh.py", line 107, in _get_ssh_connection
2017-07-16 10:33:49.202 4924 ERROR tempest.lib.common.ssh     sock=proxy_chan)
2017-07-16 10:33:49.202 4924 ERROR tempest.lib.common.ssh   File "/usr/lib/python2.7/site-packages/paramiko/client.py", line 324, in connect
2017-07-16 10:33:49.202 4924 ERROR tempest.lib.common.ssh     raise NoValidConnectionsError(errors)
2017-07-16 10:33:49.202 4924 ERROR tempest.lib.common.ssh NoValidConnectionsError: [Errno None] Unable to connect to port 22 on 10.0.0.211
2017-07-16 10:33:49.202 4924 ERROR tempest.lib.common.ssh 
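The failure above is tempest's SSH helper giving up after repeated connection attempts to the guest. A minimal sketch of that retry pattern, using plain sockets instead of paramiko (the host, port, and retry budget here are illustrative, not tempest's actual configuration):

```python
import socket
import time

def wait_for_ssh(host, port=22, attempts=17, delay=1.0, timeout=2.0):
    """Retry a TCP connect to the SSH port until it succeeds or the
    attempt budget is exhausted, mirroring the 'after 17 attempts'
    behaviour in the tempest log above."""
    for _ in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True  # port 22 is reachable; SSH auth would follow
        except OSError:
            time.sleep(delay)
    return False  # analogous to tempest's NoValidConnectionsError case

# A port with no listener fails fast; with a tiny budget this returns False.
print(wait_for_ssh("127.0.0.1", port=59999, attempts=2, delay=0.05))
```

In the failing run the floating IP never answered on port 22 at all, which is consistent with traffic not being routed to the VM rather than an authentication problem.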

Version-Release number of selected component (if applicable):
python-neutron-10.0.2-1.el7ost.noarch
python-neutronclient-6.1.0-1.el7ost.noarch
python-neutron-lib-1.1.0-1.el7ost.noarch
openstack-neutron-ml2-10.0.2-1.el7ost.noarch
openstack-neutron-10.0.2-1.el7ost.noarch
openstack-neutron-openvswitch-10.0.2-1.el7ost.noarch
puppet-neutron-10.3.1-1.el7ost.noarch
How reproducible:


Steps to Reproduce:
1. Run the DVR job.

Actual results:
The tests failed.
Expected results:
The DVR tests should pass.

Additional info:
https://rhos-qe-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/DFG/view/network/view/neutron-lbaas/job/DFG-network-neutron-lbaas-11_director-rhel-7.3-virthost-3cont_2comp-ipv4-vxlan-dvr/
Comment 1 Brent Eagles 2017-09-29 13:52:48 EDT
I had a quick look at what appear to be the correct CI artifacts, and it doesn't look like the L3 agent was actually deployed on the compute nodes, implying DVR wasn't configured correctly. Can we retrigger this job to confirm?
Comment 2 Eran Kuris 2017-10-01 02:44:13 EDT
(In reply to Brent Eagles from comment #1)
> I had a quick look at what appears to be the correct CI artifacts and it
> doesn't look like the L3 agent was actually deployed correctly on the
> compute nodes implying DVR wasn't configured correctly. Can we retrigger
> this job to confirm?

Yes, I will retrigger the job.
Comment 3 Brent Eagles 2017-10-04 09:05:47 EDT
I looked at the artifacts for the job, including overcloud_deploy.sh, and I cannot find where DVR has been enabled. Usually I would expect to see either tht/environments/neutron-ovs-dvr.yaml included on the command line, or some kind of environment file that sets OS::TripleO::Services::ComputeNeutronL3Agent and OS::TripleO::Services::ComputeNeutronMetadataAgent, as well as variables like NeutronEnableDVR, etc. We need to take another look at the job and see if it has been misconfigured somehow.
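The check described above can be sketched as a small script that scans overcloud_deploy.sh (or any environment file it includes) for the DVR markers named in this comment. The sample fragment below is illustrative, not taken from the failing job:

```python
import re

# Markers from comment 3 that indicate DVR was enabled in the deployment.
DVR_MARKERS = {
    "dvr_env_file": "environments/neutron-ovs-dvr.yaml",
    "compute_l3_agent": "OS::TripleO::Services::ComputeNeutronL3Agent",
    "compute_metadata_agent": "OS::TripleO::Services::ComputeNeutronMetadataAgent",
}

def dvr_signals(text):
    """Report which DVR-related markers appear in a deploy script or
    environment file (a plain substring scan, not a Heat parser)."""
    found = {name: marker in text for name, marker in DVR_MARKERS.items()}
    found["enable_dvr_flag"] = bool(re.search(r"NeutronEnableDVR:\s*[Tt]rue", text))
    return found

# Illustrative deploy-script fragment with the DVR environment included:
sample = (
    "openstack overcloud deploy --templates \\\n"
    "  -e /usr/share/openstack-tripleo-heat-templates/"
    "environments/neutron-ovs-dvr.yaml\n"
)
print(dvr_signals(sample)["dvr_env_file"])  # True: the DVR env file is included
```

A scan like this over the job's artifacts returning no markers at all would match the observation that DVR was never enabled for the deployment.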
