Bug 1291568 - When using Ceph, the 'storage_protocol' in tempest.conf should be set to 'ceph' | ceph is not discovered by tempest_config
Status: CLOSED CURRENTRELEASE
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhosp-director
Version: 7.0 (Kilo)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 11.0 (Ocata)
Assigned To: Angus Thomas
QA Contact: Omri Hochman
Keywords: Automation, AutomationBlocker, Reopened
Duplicates: 1310560
Depends On:
Blocks: 1318745 1388972
 
Reported: 2015-12-15 03:07 EST by Ariel Opincaru
Modified: 2017-03-13 19:51 EDT (History)
CC List: 17 users

See Also:
Fixed In Version: eharney@redhat.com, dsariel@redhat.com
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-03-13 19:51:39 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
Tempest log for case #01778587 (2.26 MB, text/plain), 2017-03-07 07:54 EST, Ganesh Kadam
tempest.conf for case #01778587 (4.87 KB, text/plain), 2017-03-07 07:57 EST, Ganesh Kadam
cinder.conf for case #01778587 (152.53 KB, text/plain), 2017-03-07 07:58 EST, Ganesh Kadam


External Trackers
Tracker ID Priority Status Summary Last Updated
Launchpad 1634499 None None None 2016-11-18 07:34 EST

Description Ariel Opincaru 2015-12-15 03:07:50 EST
Description of problem:
When using Ceph as the storage backend, 'storage_protocol' under the '[volume]' section should be 'ceph', not 'iscsi'.


Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1.
2.
3.

Actual results (tempest.conf):
[volume]
storage_protocol = iscsi

Expected results (tempest.conf):
[volume]
storage_protocol = ceph

Additional info:
Comment 2 Ariel Opincaru 2015-12-15 03:15:37 EST
Some tempest tests fail because of this.

For example:

tempest.api.volume.admin.test_volume_types.VolumeTypesV1Test.test_volume_crud_with_volume_type_and_extra_specs

request-id': 'req-11b0156e-bbf2-4252-9d68-47c96b70c2a5'}
Body: 
2015-12-03 04:36:01,023 28579 INFO [tempest_lib.common.rest_client] Request (VolumeTypesV1Test:_run_cleanups): 404 GET http://192.0.2.6:8776/v1/51b34f9bdb0e4ac28cd0ff9c418f9152/volumes/bc7e1c72-983e-4b9a-8a47-2adbfe9c0d89 0.039s
2015-12-03 04:36:01,024 28579 DEBUG [tempest_lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
Body: None
Response - Headers: {'status': '404', 'content-length': '78', 'x-compute-request-id': 'req-4f7ad129-40e1-4864-a0c6-6b26ea4b63a8', 'connection': 'close', 'date': 'Thu, 03 Dec 2015 09:36:00 GMT', 'content-type': 'application/json; charset=UTF-8', 'x-openstack-request-id': 'req-4f7ad129-40e1-4864-a0c6-6b26ea4b63a8'}
Body: {"itemNotFound": {"message": "The resource could not be found.", "code": 404}}
2015-12-03 04:36:01,050 28579 INFO [tempest_lib.common.rest_client] Request (VolumeTypesV1Test:_run_cleanups): 202 DELETE http://192.0.2.6:8776/v1/00b9822f5d304ccb87af33a25ca9d861/types/a2d8a7a3-bbe0-4a70-ab57-acc37d766ca8 0.026s
2015-12-03 04:36:01,050 28579 DEBUG [tempest_lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
Body: None
Response - Headers: {'status': '202', 'content-length': '0', 'connection': 'close', 'date': 'Thu, 03 Dec 2015 09:36:00 GMT', 'content-type': 'text/html; charset=UTF-8', 'x-openstack-request-id': 'req-ef92bfac-bc0e-42b8-9963-15e7ef5af9a3'}
Body: 
2015-12-03 04:36:01,083 28579 INFO [tempest_lib.common.rest_client] Request (VolumeTypesV1Test:_run_cleanups): 202 DELETE http://192.0.2.6:8776/v1/00b9822f5d304ccb87af33a25ca9d861/types/ca0d3a1a-1b7c-47ff-96c8-ba6810f73375 0.032s
2015-12-03 04:36:01,083 28579 DEBUG [tempest_lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
Body: None
Response - Headers: {'status': '202', 'content-length': '0', 'connection': 'close', 'date': 'Thu, 03 Dec 2015 09:36:00 GMT', 'content-type': 'text/html; charset=UTF-8', 'x-openstack-request-id': 'req-e990be36-546e-4e6a-9c24-452d3df44e16'}
Body: }}}

Traceback (most recent call last):
File "/home/stack/tempest/tempest/api/volume/admin/test_volume_types.py", line 70, in test_volume_crud_with_volume_type_and_extra_specs
self.volumes_client.wait_for_volume_status(volume['id'], 'available')
File "/home/stack/tempest/tempest/services/volume/json/volumes_client.py", line 173, in wait_for_volume_status
raise exceptions.VolumeBuildErrorException(volume_id=volume_id)
VolumeBuildErrorException: Volume bc7e1c72-983e-4b9a-8a47-2adbfe9c0d89 failed to build and is in ERROR status
Comment 4 tkammer 2016-02-29 03:24:30 EST
Adding more details:

When using Ceph or another non-default volume backend, tempest needs to be configured differently. The Cinder admin tests have backend-related options, and unless the tempest-config tool is given some hints, it cannot figure them out on its own.

Upstream, devstack is provided with the following options:
https://github.com/openstack-dev/devstack/blob/master/lib/tempest#L69-L74

For Ceph-based deployments, we need OSP-Director to add the following key=value pair to the tempest-deployer-input.conf file:

[volume]
storage_protocol = ceph
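
For illustration only (the tool invocation and paths below are my assumptions, not something this bug prescribes), a manual workaround on an existing undercloud could look roughly like this:

# Rough sketch, not a verified procedure: add the hint to the deployer-input
# file the director generates, then feed that file to the tempest config tool.
crudini --set ~/tempest-deployer-input.conf volume storage_protocol ceph
tools/config_tempest.py --deployer-input ~/tempest-deployer-input.conf \
    --debug --create identity.uri "$OS_AUTH_URL" identity.admin_password "$OS_PASSWORD"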
Comment 5 tkammer 2016-02-29 03:24:51 EST
*** Bug 1310560 has been marked as a duplicate of this bug. ***
Comment 6 tkammer 2016-02-29 03:25:45 EST
This fix needs to be made for both OSP-d 7 and OSP-d 8.
Comment 7 Mike Burns 2016-04-07 17:00:12 EDT
This bug did not make the OSP 8.0 release.  It is being deferred to OSP 10.
Comment 10 Ganesh Kadam 2017-03-07 07:51:53 EST
Hi, 

I am working on a case where a customer is facing this issue
while running tempest tests on RHOSP 10 with a Ceph backend.

Below are the errors from their tempest logs.



<snip>

tempest.api.volume.admin.test_volume_types.VolumeTypesV2Test.test_volume_crud_with_volume_type_and_extra_specs[id-c03cc62c-f4e9-4623-91ec-64ce2f9c1260]
-------------------------------------------------------------------------------------------------------------------------------------------------------

Captured traceback:
~~~~~~~~~~~~~~~~~~~
    Traceback (most recent call last):
      File "/tempest/tempest/api/volume/admin/test_volume_types.py", line 66, in test_volume_crud_with_volume_type_and_extra_specs
        volume['id'], 'available')
      File "/tempest/tempest/common/waiters.py", line 180, in wait_for_volume_status
        raise exceptions.VolumeBuildErrorException(volume_id=volume_id)
    tempest.exceptions.VolumeBuildErrorException: Volume f58e1c60-6096-41fc-9449-30cac0d31eb1 failed to build and is in ERROR status

>>>>>>

tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_cryptsetup[compute,id-cbc752ed-b716-4717-910f-956cce965722,image,volume]
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Captured traceback:
~~~~~~~~~~~~~~~~~~~
    Traceback (most recent call last):
      File "/tempest/tempest/test.py", line 107, in wrapper
        return f(self, *func_args, **func_kwargs)
      File "/tempest/tempest/scenario/test_encrypted_cinder_volumes.py", line 80, in test_encrypted_cinder_volumes_cryptsetup
        volume_type='cryptsetup')
      File "/tempest/tempest/scenario/test_encrypted_cinder_volumes.py", line 59, in create_encrypted_volume
        return self.create_volume(volume_type=volume_type['name'])
      File "/tempest/tempest/scenario/manager.py", line 238, in create_volume
        volume['id'], 'available')
      File "/tempest/tempest/common/waiters.py", line 180, in wait_for_volume_status
        raise exceptions.VolumeBuildErrorException(volume_id=volume_id)
    tempest.exceptions.VolumeBuildErrorException: Volume 8210805f-820d-4a01-9088-aa2b046c6396 failed to build and is in ERROR status


</snip>


After checking their cinder.conf and tempest.conf files, I found that
the [volume] section is missing from tempest.conf.

Also, after checking with upstream folks on the #openstack-cinder IRC channel,
I learned that the cinder v1 and v2 APIs are already deprecated. If these APIs are deprecated, is there any need to set the parameters below in tempest.conf?

#api_v1=true
#api_v2=true
Comment 11 Ganesh Kadam 2017-03-07 07:54 EST
Created attachment 1260791 [details]
Tempest log for case #01778587
Comment 12 Ganesh Kadam 2017-03-07 07:57 EST
Created attachment 1260793 [details]
tempest.conf for case #01778587
Comment 13 Ganesh Kadam 2017-03-07 07:58 EST
Created attachment 1260794 [details]
cinder.conf for case #01778587
Comment 14 Eric Harney 2017-03-07 10:48:31 EST
(In reply to Ganesh Kadam from comment #10)
> tempest.api.volume.admin.test_volume_types.VolumeTypesV2Test.
> test_volume_crud_with_volume_type_and_extra_specs[id-c03cc62c-f4e9-4623-91ec-
> 64ce2f9c1260]
> -----------------------------------------------------------------------------
> --------------------------------------------------------------------------
> 
> Captured traceback:
> ~~~~~~~~~~~~~~~~~~~
>     Traceback (most recent call last):
>       File "/tempest/tempest/api/volume/admin/test_volume_types.py", line
> 66, in test_volume_crud_with_volume_type_and_extra_specs
>         volume['id'], 'available')
>       File "/tempest/tempest/common/waiters.py", line 180, in
> wait_for_volume_status
>         raise exceptions.VolumeBuildErrorException(volume_id=volume_id)
>     tempest.exceptions.VolumeBuildErrorException: Volume
> f58e1c60-6096-41fc-9449-30cac0d31eb1 failed to build and is in ERROR status
> 

You'll need to provide Cinder logs to know why this failed.

Note that this customer appears to be running tempest tests with massive concurrency -- the log shows 48 workers running.

It's possible that this is hitting quota or space limits; again, we need to see the Cinder logs to determine why the failures are actually occurring.
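
For what it's worth, a quick way to rule that out (sketch only, the worker count is an arbitrary choice) is to rerun the volume tests with far less concurrency:

# Rerun only the volume API tests with a handful of workers instead of 48.
tempest run --regex 'tempest\.api\.volume' --concurrency 4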

> >>>>>>
> 
> tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.
> test_encrypted_cinder_volumes_cryptsetup[compute,id-cbc752ed-b716-4717-910f-
> 956cce965722,image,volume]
> -----------------------------------------------------------------------------
> -----------------------------------------------------------------------------
>     tempest.exceptions.VolumeBuildErrorException: Volume
> 8210805f-820d-4a01-9088-aa2b046c6396 failed to build and is in ERROR status
> 

This test is not going to function correctly on Ceph because the Cinder RBD driver does not yet support encrypted Cinder volumes.  The EncryptedCinderVolumes tests need to be skipped in tempest runs for Ceph.
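
One possible way to do that (a sketch only; the file name is arbitrary and --blacklist-file assumes a tempest release that ships that option) is a skip list:

# Skip the encrypted-volume scenario tests when Cinder is backed by RBD.
cat > ~/ceph-skip-list.txt <<'EOF'
tempest\.scenario\.test_encrypted_cinder_volumes.*
EOF
tempest run --blacklist-file ~/ceph-skip-list.txt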

> 
> After checking their cinder.conf and tempest.conf files, I found that
> [volume] section from tempest.conf is missing. 
> 
> Also, after checking with upstream folks on #openstack-cinder irc channel, 
> 
> I came to know that cinder v1 and v2 apis are already deprecated. If these
> apis are deprecated, is there any requirement to use below parameters in
> tempest.conf?
> 
> #api_v1=true
> #api_v2=true

OSP deploys both v1 and v2 and tempest uses both by default, so there shouldn't be a problem here.
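
For reference, those flags live under [volume-feature-enabled] in tempest.conf and default to true; if you want to pin them explicitly, something along these lines would do (a sketch only, assuming crudini is available and tempest.conf sits in the usual etc/ directory):

# Keep both volume API versions enabled, matching tempest's defaults.
crudini --set etc/tempest.conf volume-feature-enabled api_v1 true
crudini --set etc/tempest.conf volume-feature-enabled api_v2 true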
Comment 16 Paul Grist 2017-03-13 19:51:39 EDT
We need to close this out. I'm glad Eric was able to provide some feedback on this new case of a customer tempest run, but that really has nothing to do with the original bug that was closed. 

If you end up with more logs or issues, please try the openstack dev or rhos_tech mailing lists for help first. If there does turn out to be an actual bug in the ability to run tempest on OSP 10, it would be the subject of a new bug. But given that tempest is used a lot, it's likely there will be more configuration and environmental issues to debug, and the mailing list is a better place for that. Thanks.

Putting the original CLOSED state back for the original bug.
