Bug 1570841 - Test tempest.api.volume.admin.test_volume_services.VolumesServicesV1TestJSON.test_get_service_by_host_name failed after RHOS-8 to RHOS-9 upgrade
Summary: Test tempest.api.volume.admin.test_volume_services.VolumesServicesV1TestJSON.test_get_service_by_host_name failed after RHOS-8 to RHOS-9 upgrade
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tempest
Version: 9.0 (Mitaka)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Eric Harney
QA Contact: Martin Kopec
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-04-23 14:05 UTC by Yurii Prokulevych
Modified: 2018-08-15 14:20 UTC
CC: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-08-15 14:20:02 UTC
Target Upstream Version:
Embargoed:



Description Yurii Prokulevych 2018-04-23 14:05:24 UTC
Description of problem:
-----------------------
After an RHOS-8 to RHOS-9 upgrade on RHEL-7.5, the following test failed:
...
Traceback (most recent call last):
testtools.testresult.real._StringException: Empty attachments:
  stderr
  stdout

pythonlogging:'': {{{
2018-04-23 06:10:36,486 5660 INFO     [tempest.lib.common.rest_client] Request (VolumesServicesV1TestJSON:test_get_service_by_host_name): 200 GET https://10.0.0.101:13776/v1/30b11e0c054d4acea73503af82e5950d/os-services?host=hostgroup 0.912s
2018-04-23 06:10:36,486 5660 DEBUG    [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
        Body: None
    Response - Headers: {'status': '200', 'content-length': '450', 'content-location': 'https://10.0.0.101:13776/v1/30b11e0c054d4acea73503af82e5950d/os-services?host=hostgroup', 'x-compute-request-id': 'req-96a36c3d-1683-4b25-b2e2-d90d96f0f478', 'connection': 'close', 'date': 'Mon, 23 Apr 2018 10:10:36 GMT', 'content-type': 'application/json', 'x-openstack-request-id': 'req-96a36c3d-1683-4b25-b2e2-d90d96f0f478'}
        Body: {"services": [{"status": "enabled", "binary": "cinder-scheduler", "zone": "nova", "state": "up", "updated_at": "2018-04-23T10:10:27.000000", "host": "hostgroup", "disabled_reason": null}, {"status": "enabled", "binary": "cinder-volume", "zone": "nova", "frozen": false, "state": "up", "updated_at": "2018-04-23T10:10:28.000000", "host": "hostgroup@tripleo_ceph", "replication_status": "disabled", "active_backend_id": null, "disabled_reason": null}]}
}}}

Traceback (most recent call last):
  File "/home/stack/tempest_9/tempest/api/volume/admin/test_volume_services.py", line 74, in test_get_service_by_host_name
    self.assertEqual(sorted(s1), sorted(s2))
  File "/usr/lib/python2.7/site-packages/testtools/testcase.py", line 350, in assertEqual
    self.assertThat(observed, matcher, message)
  File "/usr/lib/python2.7/site-packages/testtools/testcase.py", line 435, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: [u'cinder-scheduler', u'cinder-volume'] != [u'cinder-scheduler']
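For reference, the assertion on line 74 of the test appears to compare two lists of service binaries: one built from the host-filtered os-services call shown in the log above, and one built client-side from the full service listing using exact host matching. Below is a minimal sketch of that comparison (an approximation of the tempest logic, not the exact test code), using the two entries from the response body above:

# Minimal sketch (assumption) approximating the comparison in
# tempest/api/volume/admin/test_volume_services.py, built from the
# service entries in the logged response body.
host_name = 'hostgroup'

# What the host-filtered API call (GET .../os-services?host=hostgroup)
# returned: entries for both "hostgroup" and "hostgroup@tripleo_ceph".
filtered_api_response = [
    {'binary': 'cinder-scheduler', 'host': 'hostgroup'},
    {'binary': 'cinder-volume', 'host': 'hostgroup@tripleo_ceph'},
]

# What an exact client-side match against the full listing keeps:
# only the entry whose host string equals "hostgroup" exactly.
all_services = filtered_api_response  # same entries appear in the full listing
services_on_host = [s for s in all_services if s['host'] == host_name]

s1 = sorted(s['binary'] for s in filtered_api_response)
s2 = sorted(s['binary'] for s in services_on_host)

# Reproduces the reported mismatch:
# ['cinder-scheduler', 'cinder-volume'] != ['cinder-scheduler']
assert s1 == s2, '%s != %s' % (s1, s2)

If that is what happens here, the mismatch would come from the API's host filter also matching the "hostgroup@tripleo_ceph" entry while the exact string comparison keeps only "hostgroup", rather than from the cinder-volume service being down.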

Additional info:
----------------
Opened based on the results of a Jenkins job.

Comment 2 Eric Harney 2018-06-07 16:53:17 UTC
This test failure indicates that a service listing shows only cinder-scheduler when previously it showed cinder-scheduler and cinder-volume.

Why this is the case depends on what the CI job that runs this upgrade process is doing.

Nothing here indicates that this is a Cinder bug -- it may be a Tempest test that doesn't match whatever this upgrade job does.
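To see what the deployment actually reports after the upgrade, the same os-services call the test makes (see the request log above) can be issued directly. A rough sketch follows; the project ID and token are placeholders, and only the endpoint host/port comes from the logged request:

# Rough sketch (assumption): query the cinder os-services API directly,
# mirroring the request logged by tempest. <tenant_id> and <admin-token>
# are placeholders, not values from this environment.
import requests

endpoint = 'https://10.0.0.101:13776/v1/<tenant_id>'  # host/port from the request log
token = '<admin-token>'                               # e.g. obtained from keystone

resp = requests.get(
    endpoint + '/os-services',
    params={'host': 'hostgroup'},
    headers={'X-Auth-Token': token, 'Accept': 'application/json'},
    verify=False,  # the CI endpoint uses TLS; adjust certificate verification as needed
)
resp.raise_for_status()

for svc in resp.json()['services']:
    print('%s %s %s %s' % (svc['binary'], svc['host'], svc['state'], svc['status']))

Comparing that output with an unfiltered listing (no host parameter) would show whether cinder-volume is registered at all after the upgrade, and under which host string.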

Comment 6 Alan Bishop 2018-08-15 14:20:02 UTC
Sorry to close this again, but I'm afraid the logs from the last completed run (back on Jun 2) are gone. [1] shows the test was run again on Aug 14, but quickly failed due to a CI provisioning error.

[1] https://rhos-qe-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/DFG/view/upgrades/view/upgrade/job/DFG-upgrades-upgrade-upgrade-8-9_director-rhel-virthost-3cont_2comp_3ceph-ipv4-vxlan-monolithic/

If the issue happens again, then I pledge to help Eric look at the logs before they're lost again.

