Bug 1283154

Summary: [neutron] api error while running tempest scenario tests
Product: Red Hat OpenStack
Reporter: Daniel Mellado <dmellado>
Component: openstack-neutron
Assignee: lpeer <lpeer>
Status: CLOSED NOTABUG
QA Contact: Ofer Blaut <oblaut>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 5.0 (RHEL 6)
CC: amuller, chrisw, dmellado, majopela, nyechiel, yeylon
Target Milestone: ---
Keywords: Automation, ZStream
Target Release: 5.0 (RHEL 6)
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-15 14:04:10 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions: 
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Daniel Mellado 2015-11-18 11:53:46 UTC
Description of problem:

While running the neutron tempest tests on RHOS, there's an occasional server fault that makes the test fail. Restarting neutron-server (in order to enable debug logging) seems to fix the issue, so this could be related to a race condition in the database.


Version-Release number of selected component (if applicable):


How reproducible:

Run python -m testtools.run tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_subnet_details

Eventually the test will fail and you'll get a server fault with the trace below.
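Since the failure is intermittent, it can help to run the test in a loop until it faults. A minimal sketch; the TEST_CMD variable is a hypothetical placeholder for the testtools.run command above, and the default of `true` only exists so the sketch runs as-is:

```shell
# Hypothetical wrapper: repeat a flaky test and stop on the first failure.
# Set TEST_CMD to the "python -m testtools.run ..." invocation given above.
TEST_CMD="${TEST_CMD:-true}"   # placeholder default so the sketch is runnable
status=passed
for i in $(seq 1 20); do
    if ! $TEST_CMD; then
        status="failed on attempt $i"
        break
    fi
done
echo "$status"
```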

Actual results:

TRACE neutron.api.v2.resource     "This result object does not return rows. "
TRACE neutron.api.v2.resource DBError: This result object does not return rows. It has been closed automatically.
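For context, the quoted message appears to be SQLAlchemy's ResourceClosedError ("This result object does not return rows. It has been closed automatically."), surfaced through a DBError wrapper. That error fires when code tries to fetch rows from a result that was already consumed or closed underneath it, which is consistent with the suspected race. The following is only an illustrative stdlib analogue using sqlite3, not Neutron's actual code:

```python
import sqlite3

# Illustrative analogue (not Neutron code): one caller issues a query,
# but the shared cursor is closed underneath it before the rows are
# fetched, so the subsequent fetch finds an unusable result object.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE ports (id INTEGER)")
cur.execute("SELECT id FROM ports")
cur.close()  # simulates another worker tearing down shared DB state
try:
    cur.fetchall()
except sqlite3.ProgrammingError as exc:
    # the stdlib equivalent of "result object ... has been closed"
    print(f"fetch failed: {exc}")
```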


Expected results:

The test to pass


Additional info:
Might be related to the launchpad bug attached

Comment 1 Miguel Angel Ajo 2015-12-15 10:59:56 UTC
Could you provide a more detailed trace (tempest logs + neutron server logs)?

Comment 2 Daniel Mellado 2015-12-15 14:04:10 UTC
After running it more than 20 times with the latest puddle, I'm unable to reproduce the bug to post the logs, so I'll close it for now.