Bug 1671170 - Test cases succeed in isolation but fail when executed as a suite, from ostestr
Keywords:
Status: CLOSED DUPLICATE of bug 1639616
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tempest
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Linux
Priority: low
Severity: unspecified
Target Milestone: ---
Assignee: Chandan Kumar
QA Contact: Martin Kopec
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2019-01-30 23:59 UTC by Michele Valsecchi
Modified: 2022-03-13 17:04 UTC
CC: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-03-01 09:29:16 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
Red Hat Issue Tracker OSP-13849 (last updated 2022-03-13 17:03:59 UTC)

Description Michele Valsecchi 2019-01-30 23:59:53 UTC
Description of problem:
The same test gives different results (success vs. failure) depending on whether it is run in isolation or as part of a suite via ostestr and tempest.

Version-Release number of selected component (if applicable):
RHOSP13

How reproducible:
Quite reproducible in this environment.

Steps to Reproduce:

1. Run the test in isolation

[stack@rhosp-dir01 mytempest]$ python -m testtools.run tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_dhcpv6_stateless
Tests running...
/usr/lib/python2.7/site-packages/paramiko/rsakey.py:119: DeprecationWarning: signer and verifier have been deprecated. Please use sign and verify instead.
  algorithm=hashes.SHA1(),

Ran 1 test in 101.020s
OK
[stack@rhosp-dir01 mytempest]$

2. Run the test using $ ostestr -c 1 | tee tempest-output-`date '+%Y%m%d%H%M'`.log

{0} tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_dhcpv6_stateless [207.907819s] ... FAILED

Captured traceback:
~~~~~~~~~~~~~~~~~~~
    Traceback (most recent call last):
      File "/usr/lib/python2.7/site-packages/tempest/common/utils/__init__.py", line 88, in wrapper
        return f(*func_args, **func_kwargs)
      File "/usr/lib/python2.7/site-packages/tempest/scenario/test_network_v6.py", line 252, in test_dualnet_multi_prefix_dhcpv6_stateless
        dualnet=True)
      File "/usr/lib/python2.7/site-packages/tempest/scenario/test_network_v6.py", line 196, in _prepare_and_test
        (ip, srv['id'], ssh.exec_command("ip address")))
      File "/usr/lib/python2.7/site-packages/unittest2/case.py", line 666, in fail
        raise self.failureException(msg)
    AssertionError: Address 2000::0:aaaa:bbbb:cccc:0aa not configured for instance 00000000-zzzz-xxxx-yyyyy-zzzzzzzzzzzz, ip address output is
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue 
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast qlen 1000
        link/ether 00:aa:bb:cc:dd:ee brd ff:ff:ff:ff:ff:ff
        inet 10.100.0.12/28 brd 10.100.0.15 scope global eth0
        inet6 0000::aaaa:bbbb:cccc:dddd/64 scope link 
           valid_lft forever preferred_lft forever
    3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
        link/ether 00:00:00:00:00:a0 brd ff:ff:ff:ff:ff:ff
        inet6 aaaa::bbbb:cccc:dddd:eee/64 scope link 
           valid_lft forever preferred_lft forever


Actual results:
The test fails when run as part of the suite (see the output of step 2 above).

Expected results:
The test passes, as it does when run in isolation (see the output of step 1 above).

Additional info:

The environment in question is not using IPv6, so we suspect a race condition rather than a problem directly caused by IPv6.
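
A timeout-based polling loop is one common mitigation for this kind of race. A minimal sketch (a hypothetical helper, not tempest code), assuming the test can re-fetch the guest's `ip address` output instead of asserting on a single snapshot:

```python
import time

def wait_for_address(get_ip_output, expected_addr, timeout=120, interval=5):
    """Poll the guest's `ip address` output until the expected
    address appears; return True if it shows up within the timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if expected_addr in get_ip_output():
            return True
        time.sleep(interval)
    return False

# Simulated guest that configures the address on the third poll.
outputs = iter(["", "", "inet6 2000::aa/64 scope global"])
assert wait_for_address(lambda: next(outputs), "2000::aa",
                        timeout=10, interval=0)
```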

Comment 7 David Hill 2019-02-01 17:35:27 UTC
I'm not hitting that failure, but I had to modify the timeout values for tempest and haproxy:

{0} tempest.scenario.test_network_v6.TestGettingAddress.test_dhcp6_stateless_from_os [411.205818s] ... ok
{0} tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_dhcp6_stateless_from_os [440.424982s] ... ok
{0} tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_dhcpv6_stateless [497.544712s] ... ok
{0} tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_slaac [498.113864s] ... ok
{0} tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_slaac_from_os [430.807486s] ... ok
{0} tempest.scenario.test_network_v6.TestGettingAddress.test_multi_prefix_dhcpv6_stateless [489.179893s] ... ok
{0} tempest.scenario.test_network_v6.TestGettingAddress.test_multi_prefix_slaac [515.309020s] ... ok
{0} tempest.scenario.test_network_v6.TestGettingAddress.test_slaac_from_os [436.951512s] ... ok

Here are the steps with which I was unable to reproduce this BZ:

cd /home/stack/cloud
sudo yum install -y openstack-tempest python-glance-tests python-horizon-tests-tempest python-nova-tests python-swift-tests python-gnocchi-tests python-aodh-tests

source overcloudrc
tempest init mytempest
cat << EOF >> tempest-deployer-input.conf
[identity]
v2_admin_endpoint_type = publicURL
v2_public_endpoint_type = publicURL
v3_endpoint_type = publicURL
EOF

bash create_network.sh
cd mytempest

net_id=$(neutron net-list | grep ext | awk '{ print $2 }')
discover-tempest-config --deployer-input ../tempest-deployer-input.conf --debug --create identity.uri $OS_AUTH_URL identity.admin_password $OS_PASSWORD --network-id $net_id

cat << EOF >> etc/tempest.conf
[service-clients]
http_timeout = 600
EOF

source ../../stackrc
for server in $( nova list | grep control | awk '{ print $12 }' | sed -e 's/ctlplane=//'); do
  ssh heat-admin@$server "sudo sed -i 's/2m/10m/g' /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg"
done

server=$( nova list | grep control | awk '{ print $12 }' | sed -e 's/ctlplane=//' | tail -1)
ssh heat-admin@$server "sudo pcs resource restart haproxy-bundle"

source ../overcloudrc
ostestr -c1 | tee tempest-output-`date '+%Y%m%d%H%M'`.log
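
The haproxy timeout bump in the loop above can be illustrated against a throwaway local copy instead of the real haproxy.cfg on the controllers (the path and the 2m defaults are taken from the commands above):

```shell
# Apply the same sed substitution used on the controllers to a
# scratch file with two 2m timeouts, then count the bumped lines.
cfg=$(mktemp)
printf 'timeout client 2m\ntimeout server 2m\n' > "$cfg"
sed -i 's/2m/10m/g' "$cfg"      # same substitution as in the loop above
bumped=$(grep -c '10m' "$cfg")  # expect both lines to now read 10m
rm -f "$cfg"
echo "$bumped"
```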

Comment 9 Martin Kopec 2019-02-28 09:43:05 UTC
Hi mvalsecc, 

thank you for all the information you provided. 
I tried to reproduce the issue; I ran the tests in parallel in a loop for a few hours, but I couldn't reproduce it. Did you hit the issue repeatedly? If so, was it on one deployment or on different deployments?

Thanks,
Martin

Comment 12 Martin Kopec 2019-03-01 09:29:16 UTC

*** This bug has been marked as a duplicate of bug 1639616 ***

