Bug 1479751

Summary: [OSP12] tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_boot_server_from_encrypted_volume_luks fails due to "Key manager error"
Product: Red Hat OpenStack Reporter: Artem Hrechanychenko <ahrechan>
Component: puppet-tripleo Assignee: Alan Bishop <abishop>
Status: CLOSED ERRATA QA Contact: Avi Avraham <aavraham>
Severity: high Docs Contact:
Priority: high    
Version: 12.0 (Pike) CC: aavraham, abishop, aschultz, eharney, jjoyce, jschluet, lyarwood, m.andre, mariel, pgrist, rhallise, sasha, slinaber, tshefi, tvignaud
Target Milestone: rc Keywords: AutomationBlocker, Triaged
Target Release: 12.0 (Pike)   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: puppet-tripleo-7.2.1-0.20170807233007.4600842.el7ost Doc Type: Bug Fix
Doc Text:
Recent changes in Nova and Cinder resulted in Barbican being selected as the default encryption key manager even when TripleO is not deploying Barbican. TripleO, however, assumes the legacy (fixed key) manager is active for non-Barbican deployments, which left volume encryption broken in those deployments. With this fix, TripleO actively configures Nova and Cinder to use the legacy key manager for non-Barbican deployments.
Story Points: ---
Clone Of: Environment:
Last Closed: 2017-12-13 21:51:30 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1487920    
Bug Blocks:    
Attachments: Testdir (flags: none)

Description Artem Hrechanychenko 2017-08-09 10:44:16 UTC
OSP12 HA+OC_SSL+UC_SSL
3ctrl+2comp nodes
puddle - OpenStack-12.0-RHEL-7-20170808.4

Description of problem:
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_boot_server_from_encrypted_volume_luks

testtools.testresult.real._StringException: Empty attachments:
  stderr
  stdout

pythonlogging:'': {{{
2017-08-09 04:23:50,505 18217 DEBUG    [tempest.scenario.manager] Creating a volume type: tempest-scenario-type-luks-129707297 on backend None
2017-08-09 04:23:51,183 18217 INFO     [tempest.lib.common.rest_client] Request (TestVolumeBootPattern:test_boot_server_from_encrypted_volume_luks): 200 POST https://10.0.0.101:13776/v2/d4436f5dde0f4586afab9ab91ac95522/types 0.677s
2017-08-09 04:23:51,184 18217 DEBUG    [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
        Body: {"volume_type": {"extra_specs": {}, "name": "tempest-scenario-type-luks-129707297"}}
    Response - Headers: {'status': '200', u'content-length': '211', 'content-location': 'https://10.0.0.101:13776/v2/d4436f5dde0f4586afab9ab91ac95522/types', u'x-compute-request-id': 'req-795b30c2-5c6e-4e55-91a4-d8f72acac32b', u'vary': 'Accept-Encoding', u'server': 'Apache', u'connection': 'close', u'date': 'Wed, 09 Aug 2017 08:23:50 GMT', u'content-type': 'application/json', u'x-openstack-request-id': 'req-795b30c2-5c6e-4e55-91a4-d8f72acac32b'}
        Body: {"volume_type": {"name": "tempest-scenario-type-luks-129707297", "extra_specs": {}, "os-volume-type-access:is_public": true, "is_public": true, "id": "657eda75-e270-4b87-87ed-374eee673acc", "description": null}}
2017-08-09 04:23:51,185 18217 DEBUG    [tempest.scenario.manager] Creating an encryption type for volume type: 657eda75-e270-4b87-87ed-374eee673acc
2017-08-09 04:23:51,805 18217 INFO     [tempest.lib.common.rest_client] Request (TestVolumeBootPattern:test_boot_server_from_encrypted_volume_luks): 200 POST https://10.0.0.101:13776/v2/d4436f5dde0f4586afab9ab91ac95522/types/657eda75-e270-4b87-87ed-374eee673acc/encryption 0.620s
2017-08-09 04:23:51,806 18217 DEBUG    [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
        Body: {"encryption": {"control_location": "front-end", "key_size": 256, "cipher": "aes-xts-plain64", "provider": "nova.volume.encryptors.luks.LuksEncryptor"}}
    Response - Headers: {'status': '200', u'content-length': '267', 'content-location': 'https://10.0.0.101:13776/v2/d4436f5dde0f4586afab9ab91ac95522/types/657eda75-e270-4b87-87ed-374eee673acc/encryption', u'x-compute-request-id': 'req-6f1d73d7-2082-4304-b958-6e9cda87a949', u'vary': 'Accept-Encoding', u'server': 'Apache', u'connection': 'close', u'date': 'Wed, 09 Aug 2017 08:23:51 GMT', u'content-type': 'application/json', u'x-openstack-request-id': 'req-6f1d73d7-2082-4304-b958-6e9cda87a949'}
        Body: {"encryption": {"volume_type_id": "657eda75-e270-4b87-87ed-374eee673acc", "control_location": "front-end", "encryption_id": "893b9db7-c2e3-4545-950c-ee6e40c59ca0", "key_size": 256, "provider": "nova.volume.encryptors.luks.LuksEncryptor", "cipher": "aes-xts-plain64"}}
2017-08-09 04:23:52,397 18217 INFO     [tempest.lib.common.rest_client] Request (TestVolumeBootPattern:test_boot_server_from_encrypted_volume_luks): 400 POST https://10.0.0.101:13776/v2/49c7c931ea2e49a3b2efe6e3ed6b3c18/volumes 0.590s
2017-08-09 04:23:52,397 18217 DEBUG    [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
        Body: {"volume": {"snapshot_id": null, "display_name": "tempest-TestVolumeBootPattern-volume-1124969670", "imageRef": null, "volume_type": "tempest-scenario-type-luks-129707297", "size": 1}}
    Response - Headers: {'status': '400', u'content-length': '61', 'content-location': 'https://10.0.0.101:13776/v2/49c7c931ea2e49a3b2efe6e3ed6b3c18/volumes', u'x-compute-request-id': 'req-6b3d228c-033b-4924-93de-cfe21c2fab31', u'server': 'Apache', u'connection': 'close', u'date': 'Wed, 09 Aug 2017 08:23:51 GMT', u'content-type': 'application/json', u'x-openstack-request-id': 'req-6b3d228c-033b-4924-93de-cfe21c2fab31'}
        Body: {"badRequest": {"message": "Key manager error", "code": 400}}
2017-08-09 04:23:52,959 18217 INFO     [tempest.lib.common.rest_client] Request (TestVolumeBootPattern:_run_cleanups): 202 DELETE https://10.0.0.101:13776/v2/d4436f5dde0f4586afab9ab91ac95522/types/657eda75-e270-4b87-87ed-374eee673acc 0.557s
2017-08-09 04:23:52,960 18217 DEBUG    [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
        Body: None
    Response - Headers: {'status': '202', u'content-length': '0', 'content-location': 'https://10.0.0.101:13776/v2/d4436f5dde0f4586afab9ab91ac95522/types/657eda75-e270-4b87-87ed-374eee673acc', u'server': 'Apache', u'connection': 'close', u'date': 'Wed, 09 Aug 2017 08:23:52 GMT', u'content-type': 'text/html; charset=UTF-8', u'x-openstack-request-id': 'req-49dd2bc0-13dd-472b-b292-83cb64f1b54e'}
        Body:
}}}

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/tempest/test.py", line 103, in wrapper
    return f(self, *func_args, **func_kwargs)
  File "/usr/lib/python2.7/site-packages/tempest/scenario/test_volume_boot_pattern.py", line 224, in test_boot_server_from_encrypted_volume_luks
    volume_type='luks')
  File "/usr/lib/python2.7/site-packages/tempest/scenario/manager.py", line 1278, in create_encrypted_volume
    return self.create_volume(volume_type=volume_type['name'])
  File "/usr/lib/python2.7/site-packages/tempest/scenario/manager.py", line 227, in create_volume
    volume = self.volumes_client.create_volume(**kwargs)['volume']
  File "/usr/lib/python2.7/site-packages/tempest/lib/services/volume/v2/volumes_client.py", line 108, in create_volume
    resp, body = self.post('volumes', post_body)
  File "/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", line 270, in post
    return self.request('POST', url, extra_headers, headers, body, chunked)
  File "/usr/lib/python2.7/site-packages/tempest/lib/services/volume/base_client.py", line 38, in request
    method, url, extra_headers, headers, body, chunked)
  File "/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", line 659, in request
    self._error_checker(resp, resp_body)
  File "/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", line 770, in _error_checker
    raise exceptions.BadRequest(resp_body, resp=resp)
tempest.lib.exceptions.BadRequest: Bad request
Details: {u'message': u'Key manager error', u'code': 400}


Version-Release number of selected component (if applicable):
openstack-swift-container-2.15.1-0.20170803052833.0b22193.el7ost.noarch
openstack-nova-common-16.0.0-0.20170805120344.5971dde.el7ost.noarch
openstack-ironic-conductor-8.0.1-0.20170805084044.c08c21e.el7ost.noarch
openstack-tempest-16.1.1-0.20170805070640.112eeb1.el7ost.noarch
openstack-puppet-modules-10.0.0-0.20170315222135.0333c73.el7.1.noarch
openstack-tripleo-validations-7.2.1-0.20170728183725.1f60b6f.el7ost.noarch
openstack-swift-proxy-2.15.1-0.20170803052833.0b22193.el7ost.noarch
openstack-heat-api-cfn-9.0.0-0.20170804164328.d5f78c2.el7ost.noarch
openstack-ironic-inspector-5.1.1-0.20170801061021.b4391de.el7ost.noarch
openstack-tripleo-ui-7.2.1-0.20170805065336.ac9467f.el7ost.noarch
openstack-tripleo-image-elements-7.0.0-0.20170725091025.f3f06c7.el7ost.noarch
openstack-nova-compute-16.0.0-0.20170805120344.5971dde.el7ost.noarch
openstack-neutron-openvswitch-11.0.0-0.20170804190459.el7ost.noarch
puppet-openstack_extras-11.3.0-0.20170805105506.dae9508.el7ost.noarch
openstack-tripleo-common-containers-7.4.1-0.20170805212013.430242a.el7ost.noarch
openstack-mistral-api-5.0.0-0.20170804112907.abebc64.el7ost.noarch
python-openstackclient-3.12.0-0.20170728181821.f67ebce.el7ost.noarch
openstack-selinux-0.8.7-2.el7ost.noarch
openstack-nova-placement-api-16.0.0-0.20170805120344.5971dde.el7ost.noarch
openstack-tripleo-puppet-elements-7.0.0-0.20170803140906.4e7d35d.el7ost.noarch
openstack-neutron-common-11.0.0-0.20170804190459.el7ost.noarch
puppet-openstacklib-11.3.0-0.20170805105609.cd97f82.el7ost.noarch
python-openstacksdk-0.9.17-0.20170621195806.7946243.el7ost.noarch
openstack-tripleo-common-7.4.1-0.20170805212013.430242a.el7ost.noarch
openstack-swift-object-2.15.1-0.20170803052833.0b22193.el7ost.noarch
openstack-heat-common-9.0.0-0.20170804164328.d5f78c2.el7ost.noarch
openstack-ironic-api-8.0.1-0.20170805084044.c08c21e.el7ost.noarch
openstack-mistral-engine-5.0.0-0.20170804112907.abebc64.el7ost.noarch
openstack-nova-api-16.0.0-0.20170805120344.5971dde.el7ost.noarch
openstack-nova-conductor-16.0.0-0.20170805120344.5971dde.el7ost.noarch
openstack-keystone-12.0.0-0.20170805005310.9cbd6bc.el7ost.noarch
openstack-heat-api-9.0.0-0.20170804164328.d5f78c2.el7ost.noarch
openstack-tripleo-heat-templates-7.0.0-0.20170805163045.el7ost.noarch
openstack-mistral-executor-5.0.0-0.20170804112907.abebc64.el7ost.noarch
python-openstack-mistral-5.0.0-0.20170804112907.abebc64.el7ost.noarch
openstack-nova-scheduler-16.0.0-0.20170805120344.5971dde.el7ost.noarch
openstack-glance-15.0.0-0.20170805121150.94df7f8.el7ost.noarch
openstack-swift-account-2.15.1-0.20170803052833.0b22193.el7ost.noarch
openstack-neutron-ml2-11.0.0-0.20170804190459.el7ost.noarch
openstack-mistral-common-5.0.0-0.20170804112907.abebc64.el7ost.noarch
openstack-zaqar-5.0.0-0.20170803163737.088a08c.el7ost.noarch
openstack-neutron-11.0.0-0.20170804190459.el7ost.noarch
openstack-heat-engine-9.0.0-0.20170804164328.d5f78c2.el7ost.noarch
openstack-ironic-common-8.0.1-0.20170805084044.c08c21e.el7ost.noarch

Comment 6 Alan Bishop 2017-09-07 12:30:12 UTC
Neither of the two recent failures shows signs of the "Key manager error" that has been fixed, so I wonder if the failures are due to bug #1478161. The failure reported in comment #4 contains the "keymgr.fixed_key not defined" string associated with that bz. I don't know what happened in the comment #5 failure.

Comment 8 Tzach Shefi 2017-10-03 00:13:21 UTC
On version:
puppet-tripleo-7.4.1-0.20170925173839.bc3dfc5

This still fails, but it looks like it's due to a deployment/network issue.
I'll debug and update once I figure it out.

2017-10-02 12:45:28.267 18515 ERROR tempest.test     raise exceptions.Conflict(resp_body, resp=resp)
2017-10-02 12:45:28.267 18515 ERROR tempest.test Conflict: An object with that identifier already exists
2017-10-02 12:45:28.267 18515 ERROR tempest.test Details: {u'message': u'Unable to complete operation on subnet 2bdf6188-22be-4a2e-8516-f0d21079d98b: One or more ports have an IP allocation from this subnet.', u'type': u'SubnetInUse', u'detail': u''}
2017-10-02 12:45:28.267 18515 ERROR tempest.test

Comment 9 Tzach Shefi 2017-10-23 10:14:13 UTC
Odd, I've set fixed_key in cinder.conf, and also in nova.conf on the compute node. Do I need to set it on the Nova API server as well?

As I still fail, getting -> 'fixed_key not defined'

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/tempest/common/utils/__init__.py", line 89, in wrapper
    return f(self, *func_args, **func_kwargs)
  File "/usr/lib/python2.7/site-packages/tempest/scenario/test_volume_boot_pattern.py", line 232, in test_boot_server_from_encrypted_volume_luks
    delete_on_termination=False)
  File "/usr/lib/python2.7/site-packages/tempest/scenario/test_volume_boot_pattern.py", line 70, in _boot_instance_from_resource
    return self.create_server(image_id='', **create_kwargs)
  File "/usr/lib/python2.7/site-packages/tempest/scenario/manager.py", line 205, in create_server
    image_id=image_id, **kwargs)
  File "/usr/lib/python2.7/site-packages/tempest/common/compute.py", line 256, in create_test_server
    server['id'])
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/site-packages/tempest/common/compute.py", line 227, in create_test_server
    clients.servers_client, server['id'], wait_until)
  File "/usr/lib/python2.7/site-packages/tempest/common/waiters.py", line 76, in wait_for_server_status
    server_id=server_id)
tempest.exceptions.BuildErrorException: Server 1d5c363c-844a-4549-b0a1-a10aa2982113 failed to build and is in ERROR status
Details: {u'message': u'Build of instance 1d5c363c-844a-4549-b0a1-a10aa2982113 aborted: keymgr.fixed_key not defined', u'code': 500, u'created': u'2017-10-23T10:04:42Z'}
Ran 1 tests in 340.301s
FAILED (id=2, failures=1)
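For anyone reproducing this by hand: fixed_key must be a hex-encoded 256-bit value. A minimal sketch of generating one (this helper is hypothetical, for manual testing only, and not part of TripleO or Tempest):

```python
import binascii
import os

def generate_fixed_key():
    """Return a random 256-bit key as 64 hex characters, the format
    expected by the fixed_key option. Hypothetical helper for manual
    testing only -- not part of the deployment tooling."""
    return binascii.hexlify(os.urandom(32)).decode('ascii')

print(generate_fixed_key())
```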

Comment 11 Alan Bishop 2017-10-23 14:11:28 UTC
Tzach,

I looked at the system you left for me (thank you!), and the problem is clear. On the nova side, it's the nova-compute service that handles encrypted volumes. Its key manager api_class is properly set to match Cinder's, but the fixed_key is *not* set.

One thing to bear in mind is that Nova is containerized, so the fixed_key needs to appear inside the nova_compute container. I have no idea how Tempest configures things, but the required setting is definitely missing.

Here's what I'm seeing:

[root@compute-0 ~]# docker exec -ti nova_compute bash -c "grep fixed_key /etc/nova/*.conf"
/etc/nova/nova.conf:#fixed_key=<None>
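For contrast, a correctly configured section would look roughly like the sketch below (the section name and class path are the Pike-era ones, and the key value is a placeholder — nothing here is copied from the affected system):

```ini
# /etc/nova/nova.conf inside the nova_compute container (sketch)
[key_manager]
api_class = nova.keymgr.conf_key_mgr.ConfKeyManager
fixed_key = <64 hex characters, i.e. a 256-bit key>
```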

Comment 13 Tzach Shefi 2017-10-24 13:10:32 UTC
OK, I think I resolved the missing fixed_key: I had updated the wrong nova.conf file on the compute node.
Things look a little better, yet it's still failing.


traceback-1: {{{
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/tempest/lib/common/utils/test_utils.py", line 84, in call_and_ignore_notfound_exc
    return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/tempest/lib/services/volume/v2/volumes_client.py", line 136, in delete_volume
    resp, body = self.delete(url)
  File "/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", line 301, in delete
    return self.request('DELETE', url, extra_headers, headers, body)
  File "/usr/lib/python2.7/site-packages/tempest/lib/services/volume/base_client.py", line 38, in request
    method, url, extra_headers, headers, body, chunked)
  File "/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", line 659, in request
    self._error_checker(resp, resp_body)
  File "/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", line 770, in _error_checker
    raise exceptions.BadRequest(resp_body, resp=resp)
tempest.lib.exceptions.BadRequest: Bad request
Details: {u'message': u'Invalid volume: Volume status must be available or error or error_restoring or error_extending or error_managing and must not be migrating, attached, belong to a group, have snapshots or be disassociated from snapshots after volume transfer.', u'code': 400}
}}}

traceback-2: {{{
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", line 871, in wait_for_resource_deletion
    raise exceptions.TimeoutException(message)
tempest.lib.exceptions.TimeoutException: Request timed out
Details: (TestVolumeBootPattern:_run_cleanups) Failed to delete volume ff82f075-7c06-4a21-8cea-cf01d2d3bd58 within the required time (300 s).
}}}

traceback-3: {{{
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/tempest/lib/services/volume/v2/types_client.py", line 88, in delete_volume_type
    resp, body = self.delete("types/%s" % volume_type_id)
  File "/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", line 301, in delete
    return self.request('DELETE', url, extra_headers, headers, body)
  File "/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", line 659, in request
    self._error_checker(resp, resp_body)
  File "/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", line 770, in _error_checker
    raise exceptions.BadRequest(resp_body, resp=resp)
tempest.lib.exceptions.BadRequest: Bad request
Details: {u'message': u'Target volume type is still in use.', u'code': 400}
}}}

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/tempest/common/utils/__init__.py", line 89, in wrapper
    return f(self, *func_args, **func_kwargs)
  File "/usr/lib/python2.7/site-packages/tempest/scenario/test_volume_boot_pattern.py", line 232, in test_boot_server_from_encrypted_volume_luks
    delete_on_termination=False)
  File "/usr/lib/python2.7/site-packages/tempest/scenario/test_volume_boot_pattern.py", line 70, in _boot_instance_from_resource
    return self.create_server(image_id='', **create_kwargs)
  File "/usr/lib/python2.7/site-packages/tempest/scenario/manager.py", line 205, in create_server
    image_id=image_id, **kwargs)
  File "/usr/lib/python2.7/site-packages/tempest/common/compute.py", line 256, in create_test_server
    server['id'])
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/site-packages/tempest/common/compute.py", line 227, in create_test_server
    clients.servers_client, server['id'], wait_until)
  File "/usr/lib/python2.7/site-packages/tempest/common/waiters.py", line 96, in wait_for_server_status
    raise lib_exc.TimeoutException(message)
tempest.lib.exceptions.TimeoutException: Request timed out
Details: (TestVolumeBootPattern:test_boot_server_from_encrypted_volume_luks) Server 385abe20-0f07-44f4-9289-33c76c0f73ba failed to reach ACTIVE status and task state "None" within the required time (300 s). Current status: BUILD. Current task state: spawning.
======================================================================
FAIL: tearDownClass (tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern)
tags: worker-0
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/tempest/test.py", line 190, in tearDownClass
    six.reraise(etype, value, trace)
  File "/usr/lib/python2.7/site-packages/tempest/test.py", line 173, in tearDownClass
    teardown()
  File "/usr/lib/python2.7/site-packages/tempest/test.py", line 477, in clear_credentials
    cls._creds_provider.clear_creds()
  File "/usr/lib/python2.7/site-packages/tempest/lib/common/dynamic_creds.py", line 445, in clear_creds
    self._clear_isolated_net_resources()
  File "/usr/lib/python2.7/site-packages/tempest/lib/common/dynamic_creds.py", line 436, in _clear_isolated_net_resources
    creds.subnet['name'])
  File "/usr/lib/python2.7/site-packages/tempest/lib/common/dynamic_creds.py", line 386, in _clear_isolated_subnet
    client.delete_subnet(subnet_id)
  File "/usr/lib/python2.7/site-packages/tempest/lib/services/network/subnets_client.py", line 52, in delete_subnet
    return self.delete_resource(uri)
  File "/usr/lib/python2.7/site-packages/tempest/lib/services/network/base.py", line 41, in delete_resource
    resp, body = self.delete(req_uri)
  File "/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", line 301, in delete
    return self.request('DELETE', url, extra_headers, headers, body)
  File "/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", line 659, in request
    self._error_checker(resp, resp_body)
  File "/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", line 780, in _error_checker
    raise exceptions.Conflict(resp_body, resp=resp)
tempest.lib.exceptions.Conflict: An object with that identifier already exists
Details: {u'message': u'Unable to complete operation on subnet cc3fa5bb-3f7b-45b0-a23d-2849a7981a76: One or more ports have an IP allocation from this subnet.', u'type': u'SubnetInUse', u'detail': u''}
Ran 2 tests in 931.134s (+11.121s)
FAILED (id=4, failures=2)

Comment 15 Alan Bishop 2017-10-25 16:34:23 UTC
OK, lots of ground to cover. I signed onto Tzach's system and report the following:

1) The original "Key manager error" has been solved because the api_class is now being properly set in the key_manager section of /etc/cinder/cinder.conf and /etc/nova/nova.conf.

2) The "keymgr.fixed_key not defined" remains an issue, and updates should be tracked in bug #1478161

3) I do not know why Tzach's attempt to manually run the test with the fixed_key patched into the nova_compute container didn't work (comment #13).

- The primary error is this one

tempest.lib.exceptions.TimeoutException: Request timed out
Details: (TestVolumeBootPattern:test_boot_server_from_encrypted_volume_luks) Server 385abe20-0f07-44f4-9289-33c76c0f73ba failed to reach ACTIVE status and task state "None" within the required time (300 s). Current status: BUILD. Current task state: spawning.

- I signed onto the container, and see that nova_compute was able to access the encrypted volume, and both cinder and nova show the volume is attached. I do not know why the test timed out.

4) We still need to determine why the fixed_key isn't properly set in the nova configuration when the tests are run by CI. While it would be interesting to see the test pass after manually patching in the key, the real solution will be making sure the fixed_key is set automatically.

Recommendations:

- Close this BZ because the "Key manager error" has been resolved.
- Track the "keymgr.fixed_key not defined" issue in bug #1478161.
- As Lee pointed out in bug #1478161 comment #15, this is not a nova issue, so I'm not sure who should be the assignee. I took a stab at trying to understand the overcloud deployment and what sort of post-deployment configuration steps are made, but it's beyond me. Things like the fixed_key are not configurable via TripleO, so something must be responsible for configuring the encryption settings; I just don't know what does that in this CI job.
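The legacy key manager settings that the puppet-tripleo fix applies look roughly like this sketch (Pike-era class paths are an assumption here; exact values should be confirmed against the deployed configs):

```ini
# In /etc/cinder/cinder.conf:
[key_manager]
api_class = cinder.keymgr.conf_key_mgr.ConfKeyManager

# In /etc/nova/nova.conf:
[key_manager]
api_class = nova.keymgr.conf_key_mgr.ConfKeyManager
```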

Comment 16 Lee Yarwood 2017-11-01 12:37:59 UTC
The failures once fixed_key is configured might be related to bug #1487920 where we are seeing luksOpen calls hang within the n-cpu container. I'm going to close bug #1478161 out in favour of bug #1487920 as the fixed_key issue isn't something openstack-nova should handle.

Comment 17 Tzach Shefi 2017-11-06 09:29:34 UTC
Per comment #16, setting Depends On to bug 1487920,
as we shouldn't retest/verify before that one lands.

Comment 20 Tzach Shefi 2017-11-16 09:24:22 UTC
Created attachment 1353340 [details]
Testdir

Verified on:
openstack-tripleo-heat-templates-7.0.3-0.20171024200825.el7ost.noarch

The test passed without issue.

I did, however, have to switch Cinder from the LVM backend to another backend (XtremIO), as my LVM backing store was too small (10G) and volume creation kept failing. Once I switched the Cinder backend, the test passed.

Comment 21 Tzach Shefi 2017-11-19 08:18:29 UTC
See comment #20; I forgot to change the status to VERIFIED.

Comment 26 errata-xmlrpc 2017-12-13 21:51:30 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:3462