Bug 1566148 - we're not configuring the availability of DifferentHostFilter / SameHostFilter by default, which makes several tempest tests fail randomly
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tempest
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: Upstream M2
Target Release: 14.0 (Rocky)
Assignee: Artom Lifshitz
QA Contact: nova-maint
URL:
Whiteboard:
Duplicates: 1441809 1565122 1565532 1565533 (view as bug list)
Depends On:
Blocks: 1335596 1356451 1565532 1565533 1584518 1585720
 
Reported: 2018-04-11 15:51 UTC by Miguel Angel Ajo
Modified: 2019-09-09 13:51 UTC
27 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1585720 (view as bug list)
Environment:
Last Closed: 2018-11-04 13:31:06 UTC
Target Upstream Version:


Attachments


Links
System ID Private Priority Status Summary Last Updated
OpenStack gerrit 570207 0 None MERGED Deprecate available filters in favor of enabled 2021-02-09 10:38:44 UTC

Description Miguel Angel Ajo 2018-04-11 15:51:08 UTC
Description of problem:

In CI, this test fails randomly because nova does not seem to obey the kwargs["scheduler_hints"] = {'different_host': self.servers} hint set here:

https://github.com/openstack/tempest/blob/1dbff8a53085ae90c07ca83c6add623ade2bd0a8/tempest/scenario/test_security_groups_basic_ops.py#L270




tempest.scenario.test_security_groups_basic_ops.TestSecurityGroupsBasicOps.test_port_security_disable_security_group[compute,id-7c811dcc-263b-49a3-92d2-1b4d8405f50c,network,slow]
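For context, the different_host hint only has an effect when DifferentHostFilter is enabled in nova's scheduler. A minimal sketch of what the filter checks (simplified and renamed for illustration; not the exact nova code):

```python
# Simplified sketch of the DifferentHostFilter check (names are ours):
# a host passes only if it runs none of the instances listed in the
# different_host scheduler hint.
def different_host_passes(host_instance_uuids, scheduler_hints):
    wanted_elsewhere = set(scheduler_hints.get("different_host", []))
    return not (wanted_elsewhere & set(host_instance_uuids))
```

If the filter is missing from the scheduler's enabled filter list, this check is simply never invoked and the hint is silently ignored, which would explain both servers landing on the same compute node.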



https://rhos-qe-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/DFG-network-networking-ovn-13_director-rhel-virthost-3cont_2comp-ipv4-geneve/31/artifact/




Version-Release number of selected component (if applicable):


How reproducible:

30%  

Actual results:

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/tempest/common/utils/__init__.py", line 107, in wrapper
    return func(*func_args, **func_kwargs)
  File "/usr/lib/python2.7/site-packages/tempest/common/utils/__init__.py", line 88, in wrapper
    return f(*func_args, **func_kwargs)
  File "/usr/lib/python2.7/site-packages/tempest/scenario/test_security_groups_basic_ops.py", line 607, in test_port_security_disable_security_group
    [new_tenant.security_groups['default']])
  File "/usr/lib/python2.7/site-packages/tempest/scenario/test_security_groups_basic_ops.py", line 291, in _create_server
    message="Failed to boot servers on different "
  File "/usr/lib/python2.7/site-packages/testtools/testcase.py", line 394, in assertNotIn
    self.assertThat(haystack, matcher, message)
  File "/usr/lib/python2.7/site-packages/testtools/testcase.py", line 435, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: [u'compute-0.localdomain'] matches Contains(u'compute-0.localdomain'): Failed to boot servers on different Compute nodes.


Expected results:
Passing tests

Additional info:
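For reference, a sketch of the workaround: list the filters explicitly in nova.conf on the controllers. Illustrative fragment based on the Queens [filter_scheduler] options; the exact default list may differ per deployment:

```ini
[filter_scheduler]
# DifferentHostFilter / SameHostFilter must appear in enabled_filters for
# the different_host / same_host scheduler hints to have any effect.
enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,DifferentHostFilter,SameHostFilter
```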

Comment 1 Miguel Angel Ajo 2018-04-11 15:52:50 UTC
I'm triaging this to identify whether it's a quota/CI configuration issue, an issue in nova, or something wrong in the test.

Comment 2 Miguel Angel Ajo 2018-04-11 16:11:11 UTC
The first instance is requested as follows:

202 POST http://10.0.0.105:8774/v2.1/servers 2.164s
2018-04-06 07:21:20,036 11365 DEBUG    [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
        Body: {"os:scheduler_hints": {"different_host": []}, "server": {"name": "tempest-server-tempest-TestSecurityGroupsBasicOps-1700123291-access_point-1462341923", "imageRef": "993404e0-a4cb-4258-847b-1ef5c8117173", "key_name": "tempest-TestSecurityGroupsBasicOps-614671852", "flavorRef": "ab8416e3-2c19-43c6-aa31-01f3c8abbcc5", "networks": [{"uuid": "b97664a5-ca55-44c3-9ca2-b0b2af9936fe"}], "security_groups": [{"name": "tempest-secgroup_general--1053691802"}, {"name": "tempest-secgroup_access--1446019820"}]}}

response body:
       Body: {"server": {"security_groups": [{"name": "tempest-secgroup_general--1053691802"}, {"name": "tempest-secgroup_access--1446019820"}], "OS-DCF:diskConfig": "MANUAL", "id": "30d39f8d-055f-4f6e-a81a-6d79e17a178f", "links": [{"href": "http://10.0.0.105:8774/v2.1/servers/30d39f8d-055f-4f6e-a81a-6d79e17a178f", "rel": "self"}, {"href": "http://10.0.0.105:8774/servers/30d39f8d-055f-4f6e-a81a-6d79e17a178f", "rel": "bookmark"}], "adminPass": "dZQKFBnq3QhL"}}



Once scheduled, it lands on "hostId": "6fcb18d3e800b402a8f1ebb35a37d8ab0fe2b43d576b4b414bbb9638"


200 GET http://10.0.0.105:8774/v2.1/servers/30d39f8d-055f-4f6e-a81a-6d79e17a178f 0.162s
2018-04-06 07:21:21,388 11365 DEBUG    [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
        Body: None
    Response - Headers: {'status': '200', u'content-length': '1402', 'content-location': 'http://10.0.0.105:8774/v2.1/servers/30d39f8d-055f-4f6e-a81a-6d79e17a178f', u'x-compute-request-id': 'req-8d8042c6-4808-420f-b540-e6c83a4ca209', u'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding', u'server': 'Apache', u'openstack-api-version': 'compute 2.1', u'connection': 'close', u'x-openstack-nova-api-version': '2.1', u'date': 'Fri, 06 Apr 2018 11:21:21 GMT', u'content-type': 'application/json', u'x-openstack-request-id': 'req-8d8042c6-4808-420f-b540-e6c83a4ca209'}
        Body: {"server": {"OS-EXT-STS:task_state": "spawning", "addresses": {}, "links": [{"href": "http://10.0.0.105:8774/v2.1/servers/30d39f8d-055f-4f6e-a81a-6d79e17a178f", "rel": "self"}, {"href": "http://10.0.0.105:8774/servers/30d39f8d-055f-4f6e-a81a-6d79e17a178f", "rel": "bookmark"}], "image": {"id": "993404e0-a4cb-4258-847b-1ef5c8117173", "links": [{"href": "http://10.0.0.105:8774/images/993404e0-a4cb-4258-847b-1ef5c8117173", "rel": "bookmark"}]}, "OS-EXT-STS:vm_state": "building", "OS-SRV-USG:launched_at": null, "flavor": {"id": "ab8416e3-2c19-43c6-aa31-01f3c8abbcc5", "links": [{"href": "http://10.0.0.105:8774/flavors/ab8416e3-2c19-43c6-aa31-01f3c8abbcc5", "rel": "bookmark"}]}, "id": "30d39f8d-055f-4f6e-a81a-6d79e17a178f", "user_id": "8ce7cd1d3184427785019ed71bcdd967", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "BUILD", "updated": "2018-04-06T11:21:21Z", "hostId": "6fcb18d3e800b402a8f1ebb35a37d8ab0fe2b43d576b4b414bbb9638", "OS-SRV-USG:terminated_at": null, "key_name": "tempest-TestSecurityGroupsBasicOps-614671852", "name": "tempest-server-tempest-TestSecurityGroupsBasicOps-1700123291-access_point-1462341923", "created": "2018-04-06T11:21:20Z", "tenant_id": "05376e17925247ee8e322cbdb2217818", "os-extended-volumes:volumes_attached": [], "metadata": {}}}
...

2018-04-06 07:21:37,970 11365 INFO     [tempest.common.waiters] State transition "BUILD/spawning" ==> "ACTIVE/None" after 17 second wait


So the first instance has uuid: 30d39f8d-055f-4f6e-a81a-6d79e17a178f

The second instance is requested like this:

2018-04-06 07:21:47,594 11365 INFO     [tempest.lib.common.rest_client] Request (TestSecurityGroupsBasicOps:test_port_security_disable_security_group): 202 POST http://10.0.0.105:8774/v2.1/servers 1.794s
2018-04-06 07:21:47,595 11365 DEBUG    [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
        Body: {"os:scheduler_hints": {"different_host": ["30d39f8d-055f-4f6e-a81a-6d79e17a178f"]}, "server": {"name": "tempest-server-tempest-TestSecurityGroupsBasicOps-1700123291-gen-1-580712707", "imageRef": "993404e0-a4cb-4258-847b-1ef5c8117173", "key_name": "tempest-TestSecurityGroupsBasicOps-614671852", "flavorRef": "ab8416e3-2c19-43c6-aa31-01f3c8abbcc5", "networks": [{"uuid": "b97664a5-ca55-44c3-9ca2-b0b2af9936fe"}], "security_groups": [{"name": "tempest-secgroup_general--1053691802"}]}}
    Response - Headers: {'status': '202', u'content-length': '401', 'content-location': 'http://10.0.0.105:8774/v2.1/servers', u'x-compute-request-id': 'req-062683e9-e0bf-4f17-98dc-e017fdd77dc6', u'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version', u'server': 'Apache', u'openstack-api-version': 'compute 2.1', u'connection': 'close', u'location': 'http://10.0.0.105:8774/v2.1/servers/83d40cc7-4fce-470c-bbce-ceb3c2ebc754', u'x-openstack-nova-api-version': '2.1', u'date': 'Fri, 06 Apr 2018 11:21:45 GMT', u'content-type': 'application/json', u'x-openstack-request-id': 'req-062683e9-e0bf-4f17-98dc-e017fdd77dc6'}
        Body: {"server": {"security_groups": [{"name": "tempest-secgroup_general--1053691802"}], "OS-DCF:diskConfig": "MANUAL", "id": "83d40cc7-4fce-470c-bbce-ceb3c2ebc754", "links": [{"href": "http://10.0.0.105:8774/v2.1/servers/83d40cc7-4fce-470c-bbce-ceb3c2ebc754", "rel": "self"}, {"href": "http://10.0.0.105:8774/servers/83d40cc7-4fce-470c-bbce-ceb3c2ebc754", "rel": "bookmark"}], "adminPass": "oG9VDGNUDAMi"}}




We can see "different_host": ["30d39f8d-055f-4f6e-a81a-6d79e17a178f"] in the scheduler hints this time.


Eventually it is scheduled and built:

2018-04-06 07:21:59,835 11365 DEBUG    [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
        Body: None
    Response - Headers: {'status': '200', u'content-length': '1633', 'content-location': 'http://10.0.0.105:8774/v2.1/servers/83d40cc7-4fce-470c-bbce-ceb3c2ebc754', u'x-compute-request-id': 'req-fbd778cf-3c80-4004-920a-ee94186f38be', u'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding', u'server': 'Apache', u'openstack-api-version': 'compute 2.1', u'connection': 'close', u'x-openstack-nova-api-version': '2.1', u'date': 'Fri, 06 Apr 2018 11:21:59 GMT', u'content-type': 'application/json', u'x-openstack-request-id': 'req-fbd778cf-3c80-4004-920a-ee94186f38be'}
        Body: {"server": {"OS-EXT-STS:task_state": null, "addresses": {"tempest-network-smoke--1643056769": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ff:b1:77", "version": 4, "addr": "10.100.0.5", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://10.0.0.105:8774/v2.1/servers/83d40cc7-4fce-470c-bbce-ceb3c2ebc754", "rel": "self"}, {"href": "http://10.0.0.105:8774/servers/83d40cc7-4fce-470c-bbce-ceb3c2ebc754", "rel": "bookmark"}], "image": {"id": "993404e0-a4cb-4258-847b-1ef5c8117173", "links": [{"href": "http://10.0.0.105:8774/images/993404e0-a4cb-4258-847b-1ef5c8117173", "rel": "bookmark"}]}, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "2018-04-06T11:21:57.000000", "flavor": {"id": "ab8416e3-2c19-43c6-aa31-01f3c8abbcc5", "links": [{"href": "http://10.0.0.105:8774/flavors/ab8416e3-2c19-43c6-aa31-01f3c8abbcc5", "rel": "bookmark"}]}, "id": "83d40cc7-4fce-470c-bbce-ceb3c2ebc754", "security_groups": [{"name": "tempest-secgroup_general--1053691802"}], "user_id": "8ce7cd1d3184427785019ed71bcdd967", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "updated": "2018-04-06T11:21:58Z", "hostId": "6fcb18d3e800b402a8f1ebb35a37d8ab0fe2b43d576b4b414bbb9638", "OS-SRV-USG:terminated_at": null, "key_name": "tempest-TestSecurityGroupsBasicOps-614671852", "name": "tempest-server-tempest-TestSecurityGroupsBasicOps-1700123291-gen-1-580712707", "created": "2018-04-06T11:21:47Z", "tenant_id": "05376e17925247ee8e322cbdb2217818", "os-extended-volumes:volumes_attached": [], "metadata": {}}}
2018-04-06 07:21:59,844 11365 INFO     [tempest.common.waiters] State transition "BUILD/spawning" ==> "ACTIVE/None" after 12 second wait

On the same host ID: 6fcb18d3e800b402a8f1ebb35a37d8ab0fe2b43d576b4b414bbb9638
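Note on comparing hostIds: nova does not expose the raw compute host name to unprivileged tenants; to my recollection it publishes a per-tenant digest, roughly sha224(project_id + host) (see nova/api/openstack/common.py). So equal hostIds within one tenant do mean the same compute node. A sketch of that derivation, under the assumption above:

```python
import hashlib

def host_id(project_id, host):
    # Mirrors (to the best of my knowledge) nova's per-tenant host
    # obfuscation: sha224 over the concatenation of project id and
    # compute host name.
    return hashlib.sha224((project_id + host).encode()).hexdigest()

# Same tenant + same compute node => same digest; a different compute
# node would produce a different digest.
```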

I'm continuing to investigate the nova scheduler logs to find the culprit.

Comment 3 Miguel Angel Ajo 2018-04-11 17:48:34 UTC
Around the time of the request, I see:




* on compute0:

2018-04-06 11:21:11.787 1 INFO nova.compute.claims [req-d0906699-0d68-424c-9bc5-503a63cb4411 881b0d6219564319b7daf1da3e2e6a77 836283e35b4145acaa27eb9f09ba586a - default default] [instance: 1c395928-89a8-4d65-b101-51b77d40103d] memory limit not specified, defaulting to unlimited
2018-04-06 11:21:11.787 1 INFO nova.compute.claims [req-d0906699-0d68-424c-9bc5-503a63cb4411 881b0d6219564319b7daf1da3e2e6a77 836283e35b4145acaa27eb9f09ba586a - default default] [instance: 1c395928-89a8-4d65-b101-51b77d40103d] Total disk: 49 GB, used: 0.00 GB
2018-04-06 11:21:11.788 1 INFO nova.compute.claims [req-d0906699-0d68-424c-9bc5-503a63cb4411 881b0d6219564319b7daf1da3e2e6a77 836283e35b4145acaa27eb9f09ba586a - default default] [instance: 1c395928-89a8-4d65-b101-51b77d40103d] disk limit not specified, defaulting to unlimited
2018-04-06 11:21:11.788 1 INFO nova.compute.claims [req-d0906699-0d68-424c-9bc5-503a63cb4411 881b0d6219564319b7daf1da3e2e6a77 836283e35b4145acaa27eb9f09ba586a - default default] [instance: 1c395928-89a8-4d65-b101-51b77d40103d] Total vcpu: 8 VCPU, used: 0.00 VCPU




* on compute1:

2018-04-06 11:21:01.644 1 INFO nova.compute.claims [req-019287ca-fd73-40ef-b6d3-00d17cdb4070 80cf57b3a39d47208410055995f9991b 8758ee8bad30407e91fdb3618884afb5 - default default] [instance: 42e5b829-50f1-4028-bd78-1ceaa1d93cfa] Attempting claim on node compute-1.localdomain: memory 64 MB, disk 0 GB, vcpus 1 CPU
2018-04-06 11:21:01.645 1 INFO nova.compute.claims [req-019287ca-fd73-40ef-b6d3-00d17cdb4070 80cf57b3a39d47208410055995f9991b 8758ee8bad30407e91fdb3618884afb5 - default default] [instance: 42e5b829-50f1-4028-bd78-1ceaa1d93cfa] Total memory: 32767 MB, used: 4096.00 MB
2018-04-06 11:21:01.645 1 INFO nova.compute.claims [req-019287ca-fd73-40ef-b6d3-00d17cdb4070 80cf57b3a39d47208410055995f9991b 8758ee8bad30407e91fdb3618884afb5 - default default] [instance: 42e5b829-50f1-4028-bd78-1ceaa1d93cfa] memory limit not specified, defaulting to unlimited
2018-04-06 11:21:01.646 1 INFO nova.compute.claims [req-019287ca-fd73-40ef-b6d3-00d17cdb4070 80cf57b3a39d47208410055995f9991b 8758ee8bad30407e91fdb3618884afb5 - default default] [instance: 42e5b829-50f1-4028-bd78-1ceaa1d93cfa] Total disk: 49 GB, used: 0.00 GB
2018-04-06 11:21:01.646 1 INFO nova.compute.claims [req-019287ca-fd73-40ef-b6d3-00d17cdb4070 80cf57b3a39d47208410055995f9991b 8758ee8bad30407e91fdb3618884afb5 - default default] [instance: 42e5b829-50f1-4028-bd78-1ceaa1d93cfa] disk limit not specified, defaulting to unlimited
2018-04-06 11:21:01.646 1 INFO nova.compute.claims [req-019287ca-fd73-40ef-b6d3-00d17cdb4070 80cf57b3a39d47208410055995f9991b 8758ee8bad30407e91fdb3618884afb5 - default default] [instance: 42e5b829-50f1-4028-bd78-1ceaa1d93cfa] Total vcpu: 8 VCPU, used: 0.00 VCPU







Miguels-MBP:1566148 ajo$ os-log-merger -a3 controller-?/var/log/containers/nova/nova-*  | grep req-062683e9-e0bf-4f17-98dc-e017fdd77dc6
Detected controller-0/var/log/containers/nova/nova-api.log as OSLogParser
Detected controller-1/var/log/containers/nova/nova-placement-api.log as OSLogParser
Detected controller-2/var/log/containers/nova/nova-api-metadata.log as OSLogParser
Detected controller-2/var/log/containers/nova/nova-placement-api.log as OSLogParser
Detected controller-0/var/log/containers/nova/nova-consoleauth.log as OSLogParser
Detected controller-1/var/log/containers/nova/nova-consoleauth.log as OSLogParser
Detected controller-1/var/log/containers/nova/nova-api.log as OSLogParser
Detected controller-1/var/log/containers/nova/nova-novncproxy.log as OSLogParser
Detected controller-2/var/log/containers/nova/nova-novncproxy.log as OSLogParser
Detected controller-0/var/log/containers/nova/nova-scheduler.log as OSLogParser
Detected controller-2/var/log/containers/nova/nova-api.log as OSLogParser
Detected controller-0/var/log/containers/nova/nova-conductor.log as OSLogParser
Detected controller-1/var/log/containers/nova/nova-api-metadata.log as OSLogParser
Detected controller-2/var/log/containers/nova/nova-scheduler.log as OSLogParser
Detected controller-0/var/log/containers/nova/nova-novncproxy.log as OSLogParser
Detected controller-0/var/log/containers/nova/nova-api-metadata.log as OSLogParser
Detected controller-2/var/log/containers/nova/nova-conductor.log as OSLogParser
Detected controller-1/var/log/containers/nova/nova-scheduler.log as OSLogParser
Detected controller-0/var/log/containers/nova/nova-placement-api.log as OSLogParser
Detected controller-0/var/log/containers/nova/nova-manage.log as OSLogParser
Detected controller-1/var/log/containers/nova/nova-conductor.log as OSLogParser
Detected controller-2/var/log/containers/nova/nova-consoleauth.log as OSLogParser
2018-04-06 11:21:45.804 [0/N-API] 20 DEBUG nova.api.openstack.wsgi [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Action: 'create', calling method: <bound method ServersController.create of <nova.api.openstack.compute.servers.ServersController object at 0x7f74f7d38f10>>, body: {"os:scheduler_hints": {"different_host": ["30d39f8d-055f-4f6e-a81a-6d79e17a178f"]}, "server": {"name": "tempest-server-tempest-TestSecurityGroupsBasicOps-1700123291-gen-1-580712707", "imageRef": "993404e0-a4cb-4258-847b-1ef5c8117173", "key_name": "tempest-TestSecurityGroupsBasicOps-614671852", "flavorRef": "ab8416e3-2c19-43c6-aa31-01f3c8abbcc5", "networks": [{"uuid": "b97664a5-ca55-44c3-9ca2-b0b2af9936fe"}], "security_groups": [{"name": "tempest-secgroup_general--1053691802"}]}} _process_stack /usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:604
2018-04-06 11:21:47.477 [0/N-API] 20 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "00000000-0000-0000-0000-000000000000" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-04-06 11:21:47.477 [0/N-API] 20 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "00000000-0000-0000-0000-000000000000" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-04-06 11:21:47.484 [0/N-API] 20 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "0276a246-7d99-4b67-b6a5-ed165e02dee0" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-04-06 11:21:47.485 [0/N-API] 20 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "0276a246-7d99-4b67-b6a5-ed165e02dee0" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.001s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-04-06 11:21:47.496 [0/N-API] 20 DEBUG nova.quota [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Getting quotas for project 05376e17925247ee8e322cbdb2217818. Resources: set(['instances', 'ram', 'cores']) _get_quotas /usr/lib/python2.7/site-packages/nova/quota.py:409
2018-04-06 11:21:47.499 [0/N-API] 20 DEBUG nova.quota [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Getting quotas for user 8ce7cd1d3184427785019ed71bcdd967 and project 05376e17925247ee8e322cbdb2217818. Resources: set(['instances', 'ram', 'cores']) _get_quotas /usr/lib/python2.7/site-packages/nova/quota.py:401
2018-04-06 11:21:47.509 [0/N-API] 20 DEBUG nova.compute.api [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Going to run 1 instances... _provision_instances /usr/lib/python2.7/site-packages/nova/compute/api.py:883
2018-04-06 11:21:47.524 [0/N-API] 20 WARNING nova.utils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Hostname tempest-server-tempest-TestSecurityGroupsBasicOps-1700123291-gen-1-580712707 is longer than 63, truncate it to tempest-server-tempest-TestSecurityGroupsBasicOps-1700123291-ge
2018-04-06 11:21:47.525 [0/N-API] 20 DEBUG nova.compute.api [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] [instance: 83d40cc7-4fce-470c-bbce-ceb3c2ebc754] block_device_mapping [BlockDeviceMapping(attachment_id=<?>,boot_index=0,connection_info=None,created_at=<?>,delete_on_termination=True,deleted=<?>,deleted_at=<?>,destination_type='local',device_name=None,device_type='disk',disk_bus=None,guest_format=None,id=<?>,image_id='993404e0-a4cb-4258-847b-1ef5c8117173',instance=<?>,instance_uuid=<?>,no_device=False,snapshot_id=None,source_type='image',tag=None,updated_at=<?>,uuid=<?>,volume_id=None,volume_size=None)] _bdm_validate_set_size_and_instance /usr/lib/python2.7/site-packages/nova/compute/api.py:1263
2018-04-06 11:21:47.582 [0/N-API] 20 INFO nova.api.openstack.requestlog [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] 172.17.1.17 "POST /v2.1/servers" status: 202 len: 401 microversion: 2.1 time: 1.786538
2018-04-06 11:21:47.608 [0/N-SCH] 1 DEBUG nova.scheduler.manager [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Starting to schedule for instances: [u'83d40cc7-4fce-470c-bbce-ceb3c2ebc754'] select_destinations /usr/lib/python2.7/site-packages/nova/scheduler/manager.py:110
2018-04-06 11:21:47.672 [0/N-SCH] 1 DEBUG nova.scheduler.host_manager [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Getting compute nodes and services for cell 00000000-0000-0000-0000-000000000000(cell0) _get_computes_for_cells /usr/lib/python2.7/site-packages/nova/scheduler/host_manager.py:620
2018-04-06 11:21:47.673 [0/N-SCH] 1 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "00000000-0000-0000-0000-000000000000" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-04-06 11:21:47.673 [0/N-SCH] 1 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "00000000-0000-0000-0000-000000000000" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-04-06 11:21:47.684 [0/N-SCH] 1 DEBUG nova.scheduler.host_manager [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Getting compute nodes and services for cell 0276a246-7d99-4b67-b6a5-ed165e02dee0(default) _get_computes_for_cells /usr/lib/python2.7/site-packages/nova/scheduler/host_manager.py:620
2018-04-06 11:21:47.685 [0/N-SCH] 1 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "0276a246-7d99-4b67-b6a5-ed165e02dee0" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-04-06 11:21:47.685 [0/N-SCH] 1 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "0276a246-7d99-4b67-b6a5-ed165e02dee0" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.001s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-04-06 11:21:47.699 [0/N-SCH] 1 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "(u'compute-1.localdomain', u'compute-1.localdomain')" acquired by "nova.scheduler.host_manager._locked_update" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-04-06 11:21:47.700 [0/N-SCH] 1 DEBUG nova.scheduler.host_manager [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Update host state from compute node: ComputeNode(cpu_allocation_ratio=16.0,cpu_info='{"vendor": "Intel", "model": "Broadwell", "arch": "x86_64", "features": ["pge", "avx", "xsaveopt", "clflush", "sep", "rtm", "tsc_adjust", "tsc-deadline", "invpcid", "tsc", "fsgsbase", "xsave", "smap", "vmx", "erms", "hle", "cmov", "smep", "fpu", "pat", "arat", "lm", "msr", "adx", "3dnowprefetch", "nx", "fxsr", "syscall", "sse4.1", "pae", "sse4.2", "pclmuldq", "pcid", "fma", "vme", "mmx", "osxsave", "cx8", "mce", "de", "aes", "mca", "pse", "lahf_lm", "abm", "rdseed", "popcnt", "pdpe1gb", "apic", "sse", "f16c", "pni", "rdtscp", "avx2", "sse2", "ss", "hypervisor", "bmi1", "bmi2", "ssse3", "cx16", "pse36", "mtrr", "movbe", "rdrand", "x2apic"], "topology": {"cores": 1, "cells": 1, "threads": 1, "sockets": 8}}',created_at=2018-04-06T10:55:31Z,current_workload=0,deleted=False,deleted_at=None,disk_allocation_ratio=1.0,disk_available_least=43,free_disk_gb=49,free_ram_mb=28479,host='compute-1.localdomain',host_ip=172.17.1.25,hypervisor_hostname='compute-1.localdomain',hypervisor_type='QEMU',hypervisor_version=2010000,id=2,local_gb=49,local_gb_used=0,mapped=1,memory_mb=32767,memory_mb_used=4288,metrics='[]',numa_topology='{"nova_object.version": "1.2", "nova_object.changes": ["cells"], "nova_object.name": "NUMATopology", "nova_object.data": {"cells": [{"nova_object.version": "1.2", "nova_object.changes": ["cpu_usage", "memory_usage", "cpuset", "pinned_cpus", "siblings", "memory", "mempages", "id"], "nova_object.name": "NUMACell", "nova_object.data": {"cpu_usage": 0, "memory_usage": 0, "cpuset": [0, 1, 2, 3, 4, 5, 6, 7], "pinned_cpus": [], "siblings": [], "memory": 32767, "mempages": [{"nova_object.version": "1.1", "nova_object.changes": ["total", "used", "reserved", "size_kb"], "nova_object.name": 
"NUMAPagesTopology", "nova_object.data": {"used": 0, "total": 8388471, "reserved": 0, "size_kb": 4}, "nova_object.namespace": "nova"}, {"nova_object.version": "1.1", "nova_object.changes": ["total", "used", "reserved", "size_kb"], "nova_object.name": "NUMAPagesTopology", "nova_object.data": {"used": 0, "total": 0, "reserved": 0, "size_kb": 2048}, "nova_object.namespace": "nova"}, {"nova_object.version": "1.1", "nova_object.changes": ["total", "used", "reserved", "size_kb"], "nova_object.name": "NUMAPagesTopology", "nova_object.data": {"used": 0, "total": 0, "reserved": 0, "size_kb": 1048576}, "nova_object.namespace": "nova"}], "id": 0}, "nova_object.namespace": "nova"}]}, "nova_object.namespace": "nova"}',pci_device_pools=PciDevicePoolList,ram_allocation_ratio=1.0,running_vms=3,service_id=None,stats={io_workload='3',num_instances='3',num_os_type_None='3',num_proj_836283e35b4145acaa27eb9f09ba586a='1',num_proj_8758ee8bad30407e91fdb3618884afb5='2',num_task_None='3',num_vm_building='3'},supported_hv_specs=[HVSpec,HVSpec,HVSpec,HVSpec],updated_at=2018-04-06T11:21:30Z,uuid=ca81669d-8bbb-49ea-b22b-61aa014590fa,vcpus=8,vcpus_used=3) _locked_update /usr/lib/python2.7/site-packages/nova/scheduler/host_manager.py:172
2018-04-06 11:21:47.701 [0/N-SCH] 1 DEBUG nova.scheduler.host_manager [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Update host state with aggregates: [] _locked_update /usr/lib/python2.7/site-packages/nova/scheduler/host_manager.py:175
2018-04-06 11:21:47.701 [0/N-SCH] 1 DEBUG nova.scheduler.host_manager [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Update host state with service dict: {'binary': u'nova-compute', 'uuid': '566c2ff1-3be1-4942-b1dc-c19689c7deb1', 'deleted': False, 'created_at': datetime.datetime(2018, 4, 6, 10, 55, 31, tzinfo=<iso8601.Utc>), 'updated_at': datetime.datetime(2018, 4, 6, 11, 21, 39, tzinfo=<iso8601.Utc>), 'report_count': 154, 'topic': u'compute', 'host': u'compute-1.localdomain', 'version': 30, 'disabled': False, 'forced_down': False, 'last_seen_up': datetime.datetime(2018, 4, 6, 11, 21, 39, tzinfo=<iso8601.Utc>), 'deleted_at': None, 'disabled_reason': None, 'id': 14} _locked_update /usr/lib/python2.7/site-packages/nova/scheduler/host_manager.py:178
2018-04-06 11:21:47.702 [0/N-SCH] 1 DEBUG nova.scheduler.host_manager [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Update host state with instances: {'b12efbbb-32c8-4952-ad5b-b7464f968a7e': Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2018-04-06T11:21:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-191842624',display_name='tempest-TestNetworkBasicOps-server-191842624',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),host='compute-1.localdomain',hostname='tempest-testnetworkbasicops-server-191842624',id=98,image_ref='993404e0-a4cb-4258-847b-1ef5c8117173',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCtxqPHwZgU+TIogM6HtCbT8iT8uHElaHQSgDuhlRobwrM6YeLtve4sW6MiTp/EPEW2EzWnTf39H+EvZnWgoVr3PPY7usE4W69KS18DF/VZpdmgvpkEwRZTbgpdPkZj/WFqAupQNF717Tw4uOgrJ6tVZMXAPF958sC6BA1mM72N9qOr60bvry2dYr1gQAEM+vdGUL5P8eKTd3XNRauxWlTpTxm2aqpKUUKOuRJHcDBb00+lIamlqeuqI4Yj02YH7ZVEP2MTJcKxpf32rvCszbYY0eICIN4/Klpv6/p5MhEmNLuW6A4iasU2Pc00lNBvOpc2I7kfVwgeZ67tCpQ9uBTV 
Generated-by-Nova',key_name='tempest-TestNetworkBasicOps-1267237442',keypairs=KeyPairList,launch_index=0,launched_at=2018-04-06T11:21:39Z,launched_on='compute-1.localdomain',locked=False,locked_by=None,memory_mb=64,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8758ee8bad30407e91fdb3618884afb5',ramdisk_id='',reservation_id='r-q53p312t',root_device_name='/dev/vda',root_gb=0,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='swiftoperator',image_base_image_ref='993404e0-a4cb-4258-847b-1ef5c8117173',image_container_format='bare',image_disk_format='qcow2',image_min_disk='0',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1523713807',owner_user_name='tempest-TestNetworkBasicOps-1523713807'},tags=TagList,task_state=None,terminated_at=None,updated_at=2018-04-06T11:21:31Z,user_data=None,user_id='80cf57b3a39d47208410055995f9991b',uuid=b12efbbb-32c8-4952-ad5b-b7464f968a7e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active'), 'e933a387-6f4b-415c-a315-929bc1f26f02': Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2018-04-06T11:21:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-test-1119075800',display_name='tempest-server-test-1119075800',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),host='compute-1.localdomain',hostname='tempest-server-test-1119075800',id=92,image_ref='993404e0-a4cb-4258-847b-1ef5c8117173',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABAQDwjY/LzlAFc8SMRsEmf/Q9igBlCwSomiJuXQQYikzrftiPBqQdYOBASVHy/vsaSqmpwp3yKhLRkrdsSJ63FwdwxZOkZf5bB1uOShVD+ZMatF5ZVH3ZehYoRu1lhxI2bHxW94bqX2Qg7VrSvyZLC3ddvA2yeB+BnG9lACjb6p/bLT8gDBuM9il/ZUIYKcImmivRnP+4hA0NPulV8NMrSVDS6oIS4oWLBjk8rrWvz9Lb76BI1C3mUvo2W8jiWrrWWdkSakUlTQrf76fz35v3ahexGkU+VW0kLVxoyqkjp7RbAVQiiFBtKQFCpwVMm86xolXj6iQJ+T/ilJMAC9AwV6EP Generated-by-Nova',key_name='tempest-keypair-test-661384362',keypairs=KeyPairList,launch_index=0,launched_at=2018-04-06T11:21:30Z,launched_on='compute-1.localdomain',locked=False,locked_by=None,memory_mb=64,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='836283e35b4145acaa27eb9f09ba586a',ramdisk_id='',reservation_id='r-a5imdok1',root_device_name='/dev/vda',root_gb=0,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='swiftoperator',image_base_image_ref='993404e0-a4cb-4258-847b-1ef5c8117173',image_container_format='bare',image_disk_format='qcow2',image_min_disk='0',image_min_ram='0',owner_project_name='tempest-NetworkSecGroupTest-1449649998',owner_user_name='tempest-NetworkSecGroupTest-1449649998'},tags=TagList,task_state=None,terminated_at=None,updated_at=2018-04-06T11:21:13Z,user_data=None,user_id='881b0d6219564319b7daf1da3e2e6a77',uuid=e933a387-6f4b-415c-a315-929bc1f26f02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active'), '42e5b829-50f1-4028-bd78-1ceaa1d93cfa': 
Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2018-04-06T11:21:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1064841860',display_name='tempest-TestNetworkBasicOps-server-1064841860',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),host='compute-1.localdomain',hostname='tempest-testnetworkbasicops-server-1064841860',id=86,image_ref='993404e0-a4cb-4258-847b-1ef5c8117173',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDF5za+i9FpWEPB1pv3GtAlPfWsBQQRQAZ2bYQaH7hZYiq6XXQ2T9Skwd7KS/rokdM3ZEqVbWSys7kO7+mgzCqa0veN3Sowyt1tqxJOVxhLbAKjN6eNPrcz6eBv5T1nO2lHZ9v5F1vV6CbflvbYGZr3emrhYvAAZhGfcdvxIwM8OzTZLQXtxeWrj2FPuVnGWtACU/iIfuKtGaEtxbcoc7OmGdL2yMp8tSTdigqvQ8cRi6pt5FT5YIi6tnhhSRAs9fMkC7p9EcgJF0dH2ZV6oKPR8FMvCDaJQpPLLmdwSAyTEFspt1lUwjsiWB6HZZEyVqciCVXEecsGy57RC1HQVJq3 
Generated-by-Nova',key_name='tempest-TestNetworkBasicOps-1447359561',keypairs=KeyPairList,launch_index=0,launched_at=2018-04-06T11:21:16Z,launched_on='compute-1.localdomain',locked=False,locked_by=None,memory_mb=64,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8758ee8bad30407e91fdb3618884afb5',ramdisk_id='',reservation_id='r-0wqb9rk2',root_device_name='/dev/vda',root_gb=0,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='swiftoperator',image_base_image_ref='993404e0-a4cb-4258-847b-1ef5c8117173',image_container_format='bare',image_disk_format='qcow2',image_min_disk='0',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1523713807',owner_user_name='tempest-TestNetworkBasicOps-1523713807'},tags=TagList,task_state=None,terminated_at=None,updated_at=2018-04-06T11:21:02Z,user_data=None,user_id='80cf57b3a39d47208410055995f9991b',uuid=42e5b829-50f1-4028-bd78-1ceaa1d93cfa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active')} _locked_update /usr/lib/python2.7/site-packages/nova/scheduler/host_manager.py:181
2018-04-06 11:21:47.707 [0/N-SCH] 1 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "(u'compute-1.localdomain', u'compute-1.localdomain')" released by "nova.scheduler.host_manager._locked_update" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-04-06 11:21:47.708 [0/N-SCH] 1 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "(u'compute-0.localdomain', u'compute-0.localdomain')" acquired by "nova.scheduler.host_manager._locked_update" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-04-06 11:21:47.708 [0/N-SCH] 1 DEBUG nova.scheduler.host_manager [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Update host state from compute node: ComputeNode(cpu_allocation_ratio=16.0,cpu_info='{"vendor": "Intel", "model": "Broadwell", "arch": "x86_64", "features": ["pge", "avx", "xsaveopt", "clflush", "sep", "rtm", "tsc_adjust", "tsc-deadline", "invpcid", "tsc", "fsgsbase", "xsave", "smap", "vmx", "erms", "hle", "cmov", "smep", "fpu", "pat", "arat", "lm", "msr", "adx", "3dnowprefetch", "nx", "fxsr", "syscall", "sse4.1", "pae", "sse4.2", "pclmuldq", "pcid", "fma", "vme", "mmx", "osxsave", "cx8", "mce", "de", "aes", "mca", "pse", "lahf_lm", "abm", "rdseed", "popcnt", "pdpe1gb", "apic", "sse", "f16c", "pni", "rdtscp", "avx2", "sse2", "ss", "hypervisor", "bmi1", "bmi2", "ssse3", "cx16", "pse36", "mtrr", "movbe", "rdrand", "x2apic"], "topology": {"cores": 1, "cells": 1, "threads": 1, "sockets": 8}}',created_at=2018-04-06T10:55:38Z,current_workload=0,deleted=False,deleted_at=None,disk_allocation_ratio=1.0,disk_available_least=43,free_disk_gb=49,free_ram_mb=28543,host='compute-0.localdomain',host_ip=172.17.1.13,hypervisor_hostname='compute-0.localdomain',hypervisor_type='QEMU',hypervisor_version=2010000,id=5,local_gb=49,local_gb_used=0,mapped=1,memory_mb=32767,memory_mb_used=4224,metrics='[]',numa_topology='{"nova_object.version": "1.2", "nova_object.changes": ["cells"], "nova_object.name": "NUMATopology", "nova_object.data": {"cells": [{"nova_object.version": "1.2", "nova_object.changes": ["cpu_usage", "memory_usage", "cpuset", "pinned_cpus", "siblings", "memory", "mempages", "id"], "nova_object.name": "NUMACell", "nova_object.data": {"cpu_usage": 0, "memory_usage": 0, "cpuset": [0, 1, 2, 3, 4, 5, 6, 7], "pinned_cpus": [], "siblings": [], "memory": 32767, "mempages": [{"nova_object.version": "1.1", "nova_object.changes": ["total", "reserved", "size_kb", "used"], "nova_object.name": 
"NUMAPagesTopology", "nova_object.data": {"used": 0, "total": 8388471, "reserved": 0, "size_kb": 4}, "nova_object.namespace": "nova"}, {"nova_object.version": "1.1", "nova_object.changes": ["total", "reserved", "size_kb", "used"], "nova_object.name": "NUMAPagesTopology", "nova_object.data": {"used": 0, "total": 0, "reserved": 0, "size_kb": 2048}, "nova_object.namespace": "nova"}, {"nova_object.version": "1.1", "nova_object.changes": ["total", "reserved", "size_kb", "used"], "nova_object.name": "NUMAPagesTopology", "nova_object.data": {"used": 0, "total": 0, "reserved": 0, "size_kb": 1048576}, "nova_object.namespace": "nova"}], "id": 0}, "nova_object.namespace": "nova"}]}, "nova_object.namespace": "nova"}',pci_device_pools=PciDevicePoolList,ram_allocation_ratio=1.0,running_vms=2,service_id=None,stats={io_workload='2',num_instances='2',num_os_type_None='2',num_proj_05376e17925247ee8e322cbdb2217818='1',num_proj_836283e35b4145acaa27eb9f09ba586a='1',num_task_None='2',num_vm_building='2'},supported_hv_specs=[HVSpec,HVSpec,HVSpec,HVSpec],updated_at=2018-04-06T11:21:20Z,uuid=7bcea892-0eac-41ea-9d0a-533b1e464c39,vcpus=8,vcpus_used=2) _locked_update /usr/lib/python2.7/site-packages/nova/scheduler/host_manager.py:172
2018-04-06 11:21:47.710 [0/N-SCH] 1 DEBUG nova.scheduler.host_manager [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Update host state with aggregates: [] _locked_update /usr/lib/python2.7/site-packages/nova/scheduler/host_manager.py:175
2018-04-06 11:21:47.710 [0/N-SCH] 1 DEBUG nova.scheduler.host_manager [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Update host state with service dict: {'binary': u'nova-compute', 'uuid': 'c0d066aa-9668-440a-b5ab-298945be877e', 'deleted': False, 'created_at': datetime.datetime(2018, 4, 6, 10, 55, 37, tzinfo=<iso8601.Utc>), 'updated_at': datetime.datetime(2018, 4, 6, 11, 21, 38, tzinfo=<iso8601.Utc>), 'report_count': 154, 'topic': u'compute', 'host': u'compute-0.localdomain', 'version': 30, 'disabled': False, 'forced_down': False, 'last_seen_up': datetime.datetime(2018, 4, 6, 11, 21, 38, tzinfo=<iso8601.Utc>), 'deleted_at': None, 'disabled_reason': None, 'id': 20} _locked_update /usr/lib/python2.7/site-packages/nova/scheduler/host_manager.py:178
2018-04-06 11:21:47.710 [0/N-SCH] 1 DEBUG nova.scheduler.host_manager [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Update host state with instances: {'1c395928-89a8-4d65-b101-51b77d40103d': Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2018-04-06T11:21:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-test-576273337',display_name='tempest-server-test-576273337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),host='compute-0.localdomain',hostname='tempest-server-test-576273337',id=89,image_ref='993404e0-a4cb-4258-847b-1ef5c8117173',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDwjY/LzlAFc8SMRsEmf/Q9igBlCwSomiJuXQQYikzrftiPBqQdYOBASVHy/vsaSqmpwp3yKhLRkrdsSJ63FwdwxZOkZf5bB1uOShVD+ZMatF5ZVH3ZehYoRu1lhxI2bHxW94bqX2Qg7VrSvyZLC3ddvA2yeB+BnG9lACjb6p/bLT8gDBuM9il/ZUIYKcImmivRnP+4hA0NPulV8NMrSVDS6oIS4oWLBjk8rrWvz9Lb76BI1C3mUvo2W8jiWrrWWdkSakUlTQrf76fz35v3ahexGkU+VW0kLVxoyqkjp7RbAVQiiFBtKQFCpwVMm86xolXj6iQJ+T/ilJMAC9AwV6EP 
Generated-by-Nova',key_name='tempest-keypair-test-661384362',keypairs=KeyPairList,launch_index=0,launched_at=2018-04-06T11:21:28Z,launched_on='compute-0.localdomain',locked=False,locked_by=None,memory_mb=64,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='836283e35b4145acaa27eb9f09ba586a',ramdisk_id='',reservation_id='r-u1yri0y6',root_device_name='/dev/vda',root_gb=0,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='swiftoperator',image_base_image_ref='993404e0-a4cb-4258-847b-1ef5c8117173',image_container_format='bare',image_disk_format='qcow2',image_min_disk='0',image_min_ram='0',owner_project_name='tempest-NetworkSecGroupTest-1449649998',owner_user_name='tempest-NetworkSecGroupTest-1449649998'},tags=TagList,task_state=None,terminated_at=None,updated_at=2018-04-06T11:21:12Z,user_data=None,user_id='881b0d6219564319b7daf1da3e2e6a77',uuid=1c395928-89a8-4d65-b101-51b77d40103d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active'), '30d39f8d-055f-4f6e-a81a-6d79e17a178f': 
Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2018-04-06T11:21:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1700123291-access_point-1462341923',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1700123291-access_point-1462341923',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),host='compute-0.localdomain',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1700123291-ac',id=95,image_ref='993404e0-a4cb-4258-847b-1ef5c8117173',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDV5+SjM5DeIPqDcd/6KbAHMqJXr7epqIgbIgS+cWzoLUma3eDZI9CDVf9u4ND2GPLEH0W53bh09eXDXGEhGp++qUwbkWBYR0kiRDuyxaJYhUSe6sDTLHiUjZPo7LWEzQ0aUs52ZMWE/8GQ+0c5TzRYlFljyMfw7Iyh1tCeO85VxLPoPgv2dSlqb7g9zift1oHKG48f59GMZo8ny7YoXjPjnfywVuYZxpmWdvChnxVR3YiALvhP1D1Y5E9ZAJSJ83uJ+/5GmF06Lcl+5e1MZX8CRd3kcwzOBnLdOEB/qnfB045esFNvOO+Kh9zGmMlLSA43kfGD2Au/r4vCNM3EgXbR 
Generated-by-Nova',key_name='tempest-TestSecurityGroupsBasicOps-614671852',keypairs=KeyPairList,launch_index=0,launched_at=2018-04-06T11:21:37Z,launched_on='compute-0.localdomain',locked=False,locked_by=None,memory_mb=64,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='05376e17925247ee8e322cbdb2217818',ramdisk_id='',reservation_id='r-le2vzgn6',root_device_name='/dev/vda',root_gb=0,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='swiftoperator',image_base_image_ref='993404e0-a4cb-4258-847b-1ef5c8117173',image_container_format='bare',image_disk_format='qcow2',image_min_disk='0',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1700123291',owner_user_name='tempest-TestSecurityGroupsBasicOps-1700123291'},tags=TagList,task_state=None,terminated_at=None,updated_at=2018-04-06T11:21:21Z,user_data=None,user_id='8ce7cd1d3184427785019ed71bcdd967',uuid=30d39f8d-055f-4f6e-a81a-6d79e17a178f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active')} _locked_update /usr/lib/python2.7/site-packages/nova/scheduler/host_manager.py:181
2018-04-06 11:21:47.713 [0/N-SCH] 1 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "(u'compute-0.localdomain', u'compute-0.localdomain')" released by "nova.scheduler.host_manager._locked_update" :: held 0.005s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-04-06 11:21:47.714 [0/N-SCH] 1 DEBUG nova.filters [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Starting with 2 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:70
2018-04-06 11:21:47.714 [0/N-SCH] 1 DEBUG nova.scheduler.filters.retry_filter [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Re-scheduling is disabled host_passes /usr/lib/python2.7/site-packages/nova/scheduler/filters/retry_filter.py:38
2018-04-06 11:21:47.715 [0/N-SCH] 1 DEBUG nova.scheduler.filters.retry_filter [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Re-scheduling is disabled host_passes /usr/lib/python2.7/site-packages/nova/scheduler/filters/retry_filter.py:38
2018-04-06 11:21:47.715 [0/N-SCH] 1 DEBUG nova.filters [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Filter RetryFilter returned 2 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:104
2018-04-06 11:21:47.715 [0/N-SCH] 1 DEBUG nova.filters [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Filter AvailabilityZoneFilter returned 2 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:104
2018-04-06 11:21:47.716 [0/N-SCH] 1 DEBUG nova.filters [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Filter ComputeFilter returned 2 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:104
2018-04-06 11:21:47.717 [0/N-SCH] 1 DEBUG nova.filters [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Filter ComputeCapabilitiesFilter returned 2 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:104
2018-04-06 11:21:47.717 [0/N-SCH] 1 DEBUG nova.filters [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Filter ImagePropertiesFilter returned 2 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:104
2018-04-06 11:21:47.717 [0/N-SCH] 1 DEBUG nova.filters [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Filter ServerGroupAntiAffinityFilter returned 2 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:104
2018-04-06 11:21:47.718 [0/N-SCH] 1 DEBUG nova.filters [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Filter ServerGroupAffinityFilter returned 2 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:104
2018-04-06 11:21:47.718 [0/N-SCH] 1 DEBUG nova.scheduler.filter_scheduler [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Filtered [(compute-1.localdomain, compute-1.localdomain) ram: 28479MB disk: 44032MB io_ops: 3 instances: 3, (compute-0.localdomain, compute-0.localdomain) ram: 28543MB disk: 44032MB io_ops: 2 instances: 2] _get_sorted_hosts /usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py:404
2018-04-06 11:21:47.718 [0/N-SCH] 1 DEBUG nova.scheduler.filter_scheduler [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Weighed [(compute-0.localdomain, compute-0.localdomain) ram: 28543MB disk: 44032MB io_ops: 2 instances: 2, (compute-1.localdomain, compute-1.localdomain) ram: 28479MB disk: 44032MB io_ops: 3 instances: 3] _get_sorted_hosts /usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py:424
2018-04-06 11:21:47.719 [0/N-SCH] 1 DEBUG nova.scheduler.utils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Attempting to claim resources in the placement API for instance 83d40cc7-4fce-470c-bbce-ceb3c2ebc754 claim_resources /usr/lib/python2.7/site-packages/nova/scheduler/utils.py:786
2018-04-06 11:21:47.777 [0/N-SCH] 1 DEBUG nova.scheduler.filter_scheduler [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Selected host: (compute-0.localdomain, compute-0.localdomain) ram: 28543MB disk: 44032MB io_ops: 2 instances: 2 _consume_selected_host /usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py:324
2018-04-06 11:21:47.778 [0/N-SCH] 1 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "(u'compute-0.localdomain', u'compute-0.localdomain')" acquired by "nova.scheduler.host_manager._locked" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-04-06 11:21:47.780 [0/N-SCH] 1 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "(u'compute-0.localdomain', u'compute-0.localdomain')" released by "nova.scheduler.host_manager._locked" :: held 0.002s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-04-06 11:21:47.853 [1/N-COND] 27 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "0276a246-7d99-4b67-b6a5-ed165e02dee0" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-04-06 11:21:47.854 [1/N-COND] 27 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "0276a246-7d99-4b67-b6a5-ed165e02dee0" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.001s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-04-06 11:21:47.898 [1/N-COND] 27 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "00000000-0000-0000-0000-000000000000" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-04-06 11:21:47.898 [1/N-COND] 27 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "00000000-0000-0000-0000-000000000000" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.001s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-04-06 11:21:47.901 [1/N-COND] 27 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "0276a246-7d99-4b67-b6a5-ed165e02dee0" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-04-06 11:21:47.902 [1/N-COND] 27 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "0276a246-7d99-4b67-b6a5-ed165e02dee0" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-04-06 11:21:47.920 [1/N-COND] 27 DEBUG nova.quota [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Getting quotas for project 05376e17925247ee8e322cbdb2217818. Resources: set(['instances', 'ram', 'cores']) _get_quotas /usr/lib/python2.7/site-packages/nova/quota.py:409
2018-04-06 11:21:47.924 [1/N-COND] 27 DEBUG nova.quota [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Getting quotas for user 8ce7cd1d3184427785019ed71bcdd967 and project 05376e17925247ee8e322cbdb2217818. Resources: set(['instances', 'ram', 'cores']) _get_quotas /usr/lib/python2.7/site-packages/nova/quota.py:401
2018-04-06 11:21:47.936 [1/N-COND] 27 DEBUG nova.conductor.manager [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] [instance: 83d40cc7-4fce-470c-bbce-ceb3c2ebc754] Selected host: compute-0.localdomain; Selected node: compute-0.localdomain; Alternates: [(u'compute-1.localdomain', u'compute-1.localdomain')] schedule_and_build_instances /usr/lib/python2.7/site-packages/nova/conductor/manager.py:1210
2018-04-06 11:21:47.937 [1/N-COND] 27 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "0276a246-7d99-4b67-b6a5-ed165e02dee0" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-04-06 11:21:47.938 [1/N-COND] 27 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "0276a246-7d99-4b67-b6a5-ed165e02dee0" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-04-06 11:21:47.955 [1/N-COND] 27 DEBUG nova.conductor.manager [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] [instance: 83d40cc7-4fce-470c-bbce-ceb3c2ebc754] block_device_mapping [BlockDeviceMapping(attachment_id=<?>,boot_index=0,connection_info=None,created_at=<?>,delete_on_termination=True,deleted=<?>,deleted_at=<?>,destination_type='local',device_name=None,device_type='disk',disk_bus=None,guest_format=None,id=<?>,image_id='993404e0-a4cb-4258-847b-1ef5c8117173',instance=<?>,instance_uuid=<?>,no_device=False,snapshot_id=None,source_type='image',tag=None,updated_at=<?>,uuid=<?>,volume_id=None,volume_size=None)] _create_block_device_mapping /usr/lib/python2.7/site-packages/nova/conductor/manager.py:1017
2018-04-06 11:21:47.957 [1/N-COND] 27 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "0276a246-7d99-4b67-b6a5-ed165e02dee0" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-04-06 11:21:47.957 [1/N-COND] 27 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "0276a246-7d99-4b67-b6a5-ed165e02dee0" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-04-06 11:21:47.987 [1/N-COND] 27 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "0276a246-7d99-4b67-b6a5-ed165e02dee0" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-04-06 11:21:47.988 [1/N-COND] 27 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "0276a246-7d99-4b67-b6a5-ed165e02dee0" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.001s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-04-06 11:21:50.908 [0/N-API] 20 DEBUG nova.api.openstack.wsgi [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Calling method '<bound method Versions.index of <nova.api.openstack.compute.versions.Versions object at 0x7f74f8080590>>' _process_stack /usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:607
2018-04-06 11:21:50.909 [0/N-API] 20 INFO nova.api.openstack.requestlog [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] 172.17.1.12 "OPTIONS /" status: 200 len: 411 microversion: - time: 0.001207
2018-04-06 11:21:51.298 [0/N-API] 20 DEBUG nova.api.openstack.wsgi [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Calling method '<bound method Versions.index of <nova.api.openstack.compute.versions.Versions object at 0x7f74f8080590>>' _process_stack /usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:607
2018-04-06 11:21:51.300 [0/N-API] 20 INFO nova.api.openstack.requestlog [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] 172.17.1.15 "OPTIONS /" status: 200 len: 411 microversion: - time: 0.001665
2018-04-06 11:21:58.070 [2/N-SCH] 1 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "host_instance" acquired by "nova.scheduler.host_manager.update_instance_info" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-04-06 11:21:58.071 [2/N-SCH] 1 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "host_instance" released by "nova.scheduler.host_manager.update_instance_info" :: held 0.001s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-04-06 11:21:58.087 [0/N-SCH] 1 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "host_instance" acquired by "nova.scheduler.host_manager.update_instance_info" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-04-06 11:21:58.087 [0/N-SCH] 1 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "host_instance" released by "nova.scheduler.host_manager.update_instance_info" :: held 0.001s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-04-06 11:21:58.116 [1/N-SCH] 1 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "host_instance" acquired by "nova.scheduler.host_manager.update_instance_info" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-04-06 11:21:58.116 [1/N-SCH] 1 DEBUG oslo_concurrency.lockutils [req-062683e9-e0bf-4f17-98dc-e017fdd77dc6 8ce7cd1d3184427785019ed71bcdd967 05376e17925247ee8e322cbdb2217818 - default default] Lock "host_instance" released by "nova.scheduler.host_manager.update_instance_info" :: held 0.001s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285

Comment 4 Miguel Angel Ajo 2018-04-11 17:51:22 UTC
Moving to openstack-nova because I don't understand whether we're missing something, or why nova is ignoring the scheduling hints.

Please find the artifacts of the configuration here.
https://rhos-qe-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/DFG-network-networking-ovn-13_director-rhel-virthost-3cont_2comp-ipv4-geneve/31/artifact/
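For context on what the scheduler is ignoring: the tempest test's `kwargs["scheduler_hints"] = {'different_host': self.servers}` ends up in the server-create request body under the `os:scheduler_hints` key. A minimal sketch of that payload (the server name and UUIDs below are illustrative, taken from the log above):

```python
import json

# UUIDs of already-running servers the new instance must NOT share a
# host with (illustrative values copied from the scheduler log above).
existing_servers = [
    "b12efbbb-32c8-4952-ad5b-b7464f968a7e",
    "e933a387-6f4b-415c-a315-929bc1f26f02",
]

# Body of POST /v2.1/servers as tempest builds it when it sets
# kwargs["scheduler_hints"] = {'different_host': self.servers}.
body = {
    "server": {
        "name": "tempest-server-test",
        "imageRef": "993404e0-a4cb-4258-847b-1ef5c8117173",
        "flavorRef": "2",
    },
    # If DifferentHostFilter is not in enabled_filters, nova silently
    # ignores this hint rather than rejecting the request -- which is
    # why the failure is random instead of an outright error.
    "os:scheduler_hints": {"different_host": existing_servers},
}

print(json.dumps(body["os:scheduler_hints"]))
```

The key point is that an unknown/unenabled hint is not an API error, so the test only fails later, when both VMs happen to land on the same compute.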

Comment 5 Miguel Angel Ajo 2018-04-11 18:20:50 UTC
I see that ServerGroupAntiAffinityFilter and ServerGroupAffinityFilter are enabled (not sure if those scheduler hints depend on those filters)

2018-04-06 10:55:40.056 [0/N-SCH] 1 DEBUG oslo_service.service [req-7bf0333c-f216-4af1-bd6e-280e5539eb29 - - - - -] filter_scheduler.enabled_filters = ['RetryFilter', 'AvailabilityZoneFilter', 'ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2902

Comment 6 Miguel Angel Ajo 2018-04-11 18:23:51 UTC
Is it necessary to manually enable DifferentHostFilter? If so, how do we do that via director?

https://docs.openstack.org/nova/queens/admin/configuration/schedulers.html
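As a sketch of the director-side fix (parameter name per tripleo-heat-templates in Queens; verify against your template version), the filter list can be extended through a heat environment file passed to `openstack overcloud deploy -e ...`:

```
# environment file: scheduler-filters.yaml
# NovaSchedulerDefaultFilters maps to nova.conf
# [filter_scheduler]/enabled_filters on the controllers.
parameter_defaults:
  NovaSchedulerDefaultFilters:
    - RetryFilter
    - AvailabilityZoneFilter
    - ComputeFilter
    - ComputeCapabilitiesFilter
    - ImagePropertiesFilter
    - ServerGroupAntiAffinityFilter
    - ServerGroupAffinityFilter
    - DifferentHostFilter
    - SameHostFilter
```

Note this replaces the default list rather than appending to it, so the default seven filters have to be repeated.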

Comment 8 Miguel Angel Ajo 2018-04-12 10:07:23 UTC
I agree with you; I would make sure we enable
"SameHostFilter" and "DifferentHostFilter" by default, since tempest also has affinity/anti-affinity tests.

Comment 9 Miguel Angel Ajo 2018-04-12 10:11:13 UTC
Not sure if we need to move this to tripleo/director instead.
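
If it does move to tripleo/director, the knob for this is presumably the scheduler-filters parameter in an environment file. A hedged sketch (assuming the `NovaSchedulerDefaultFilters` parameter, which overrides filter_scheduler.enabled_filters on the controllers):

```yaml
parameter_defaults:
  NovaSchedulerDefaultFilters:
    - RetryFilter
    - AvailabilityZoneFilter
    - ComputeFilter
    - ComputeCapabilitiesFilter
    - ImagePropertiesFilter
    - ServerGroupAntiAffinityFilter
    - ServerGroupAffinityFilter
    - DifferentHostFilter
    - SameHostFilter
```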

Comment 10 Sylvain Bauza 2018-04-12 10:38:49 UTC
You're right, the internal CI configuration is missing those filters.
Here is the devstack upstream patch that noted those filters were missing for gating with Tempest, and hence enabled them:
https://bugzilla.redhat.com/show_bug.cgi?id=1566148

Moving the BZ to the Compute DFG, as Arie stated it's probably not only impacting OVN.

Comment 11 Miguel Angel Ajo 2018-04-12 10:52:03 UTC
Here is the right link from the email thread where we were discussing this bz, thanks @sylvain:
https://github.com/openstack-dev/devstack/commit/e0d61118f198e6a46af0956902485098f78e8d26

Comment 19 Miguel Angel Ajo 2018-04-16 12:34:11 UTC
Well, on the other hand, I believe we need those filters available; otherwise our testing capabilities are degraded, since we're not validating host-to-host networking.

Comment 20 Miguel Angel Ajo 2018-04-16 12:36:10 UTC
I mean, it's good that the tempest config is generated correctly, but still: until now we were not ensuring the host-to-host test actually ran, because the DifferentHostFilter has never been available, and we want that coverage.

Comment 21 Waldemar Znoinski 2018-04-16 22:38:46 UTC
my 10 cents...

nova scheduler has many filters (20+): https://github.com/openstack/nova/tree/master/nova/scheduler/filters but only 7 are enabled by default - currently in downstream CI as well: https://docs.openstack.org/nova/queens/configuration/config.html#filter_scheduler.enabled_filters

but I'm sure there are more in custom builds of openstack that different companies may have written to match their physical topology, e.g. depending on networking equipment, geolocation or whatever else comes to mind

obviously the default 7 are a sane set that matches the out-of-the-box situation and the most common/basic use cases, but which filters a particular deployment/provider/data centre wants to enable depends on their needs (e.g. if you want to request instances that have 2 NUMA nodes (some NFV use cases) you want to enable the NUMATopologyFilter; another NFV-related use case may be requesting instances with physical NICs passed through from host to VM for better performance and access to hardware offloads - there's a PciPassthroughFilter for that... and so on)

our use case is CI - testing all physically possible scenarios and features - which makes me think this is solely a configuration change in our own CI, not a candidate for a bug, a minor fix or even an RFE against the openstack defaults... the more nova scheduler filters we enable the better (probably), as it will allow us to run more test cases that use them...

most of the customizations we do are at the tempest.conf level (network-feature-enabled, compute-feature-enabled etc.), but in this case the tempest test relies on a feature that is not configurable from tempest.conf, and it is possibly a bug in the test itself that it doesn't warn about a wrong nova-scheduler configuration, as Ollie suggested above... in fact we do get test skips when, e.g.:
some extensions are not enabled - https://github.com/openstack/tempest/blob/master/tempest/api/network/admin/test_routers_dvr.py#L90
networking drivers are not configured - https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/api/admin/test_networks.py#L76

I don't know, though, whether that's possible to achieve with the current code, or whether we'd need, for example, a new tempest/lib module to check which nova scheduler filters are enabled

Comment 22 Assaf Muller 2018-04-18 13:49:17 UTC
*** Bug 1565122 has been marked as a duplicate of this bug. ***

Comment 23 Matthew Booth 2018-04-19 14:49:44 UTC
As Ollie mentioned above, we discussed this in the Compute DFG meeting and we see it as a config issue rather than a Nova bug. As such, we're not going to make any changes in Nova based on this bug.

That said, there's clearly a change to be made here by somebody - I suspect in infrared. I understand this might be the fix:

https://review.gerrithub.io/#/c/407794/3/plugins/tripleo-overcloud/vars/overcloud/templates/nova_schedulers.yml

Arie, which BZ component should I transfer this to? If it stays in nova I will recommend we close it again as there's no action for us.

Thanks,

Matt

Comment 24 Artom Lifshitz 2018-04-19 16:10:33 UTC
I did also propose to change the upstream default filters [1]. It could have had a chance if the operator survey (linked from the review) had shown folks were deploying (Same|Different)HostFilter in production, which they're not. So this really will have to be fixed in the CI deployment tool, not in nova.

[1] https://review.openstack.org/#/c/561651/

Comment 25 Arie Bregman 2018-04-19 16:12:13 UTC
If the decision is not to

- Change Nova conf defaults
- Fix tempest (clearly not all scheduler filters are enabled and yet, the test is still running)
- Change defaults in TripleO

Then you can proceed with closing the bug. IR issues are being tracked in Jira.

Comment 26 Miguel Angel Ajo 2018-04-20 09:16:12 UTC
(In reply to Arie Bregman from comment #25)
> If the decision is not to
> 
> - Change Nova conf defaults
> - Fix tempest (clearly not all scheduler filters are enabled and yet, the
> test is still running)
> - Change defaults in TripleO
> 
> Then you can proceed with closing the bug. IR issues are being tracked in
> Jira.

So one last comment, ... if we do it this way, then when customers run tempest, to verify a deployment, it will fail.

Are we sure we want that?

Comment 27 Arie Bregman 2018-04-20 09:33:04 UTC
(In reply to Miguel Angel Ajo from comment #26)
> (In reply to Arie Bregman from comment #25)
> > If the decision is not to
> > 
> > - Change Nova conf defaults
> > - Fix tempest (clearly not all scheduler filters are enabled and yet, the
> > test is still running)
> > - Change defaults in TripleO
> > 
> > Then you can proceed with closing the bug. IR issues are being tracked in
> > Jira.
> 
> So one last comment, ... if we do it this way, then when customers run
> tempest, to verify a deployment, it will fail.
> 
> Are we sure we want that?

Personally, I'm sure I don't want that. If a fix in TripleO or Nova is not acceptable, then at least tempest should somehow expose the real list of enabled filters.

Comment 28 Artom Lifshitz 2018-04-20 09:55:18 UTC
Why would something like [1] be unacceptable? Why does the "fix" have to be in nova or tripleo?

[1] https://review.openstack.org/#/c/563015/

Comment 29 Arie Bregman 2018-04-20 11:28:37 UTC
It doesn't have to be in nova or tripleo if you are saying it's not reflecting production environments. A fix in Tempest is fine by me, and a better solution than a workaround in CI. Thanks for submitting it.

In that case, we'll keep this open and switch component to tempest.

Comment 32 Assaf Muller 2018-05-23 13:56:27 UTC
*** Bug 1565532 has been marked as a duplicate of this bug. ***

Comment 33 Assaf Muller 2018-05-23 13:56:42 UTC
*** Bug 1565533 has been marked as a duplicate of this bug. ***

Comment 35 Waldemar Znoinski 2018-08-28 17:57:46 UTC
FYI
I'm hitting it in d/s opendaylight osp13 tempest CI jobs, especially with tempest's --concurrency greater than 1.

Comment 36 Artom Lifshitz 2018-09-12 07:13:20 UTC
Tempest is stuck with the misleading default of "all" for scheduler_available_filters. My proposal to change that [1] has justifiably been shot down as backward-incompatible, so it'll at best need a deprecation cycle before it's changed.

My proposal to add an option to is_scheduler_filter_enabled() so that tests can decide what kind of behaviour they want [2] has also been shot down, for being really ugly.

And we're definitely not changing the Nova defaults to accommodate CI [3].

With all of that in mind, I've amended my original proposal to just deprecate the 'all' value [4], but that's a long-term solution, and is the best we can do upstream.

I don't see any immediate solution besides modifying the CI deployment tooling to ensure that both scheduler_available_filters in tempest.conf and enabled_filters in nova.conf are explicitly set to the same value, thus correctly causing any tests that depend on filters that aren't enabled to skip.

[1] https://review.openstack.org/#/c/570207/4
[2] https://review.openstack.org/#/c/563015/
[3] https://review.openstack.org/#/c/561651/
[4] https://review.openstack.org/#/c/570207/5
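
To illustrate why the "all" default is misleading, here is a minimal pure-Python model (an illustrative sketch, not the actual tempest implementation) of what is_scheduler_filter_enabled() effectively does with that value:

```python
def is_scheduler_filter_enabled(filter_name, configured_filters):
    """Model of tempest's check against scheduler_available_filters.

    The default ["all"] makes every filter look enabled, so tests that
    need DifferentHostFilter run even on deployments whose scheduler
    silently ignores the hint - hence the random failures.
    """
    if configured_filters == ["all"]:
        return True
    return filter_name in configured_filters


# With the misleading default, nothing is ever reported as disabled:
assert is_scheduler_filter_enabled("DifferentHostFilter", ["all"])
# With an explicit list mirroring nova.conf, the test would skip instead:
assert not is_scheduler_filter_enabled("DifferentHostFilter",
                                       ["RetryFilter", "ComputeFilter"])
```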

Comment 37 Waldemar Znoinski 2018-09-12 08:14:03 UTC
Works for me (Opendaylight DFG); I'll remove the configuration in CI once this bug is solved (meaning: tempest will be able to recognize the enabled filters, including the DifferentHost/SameHost filters).

I guess the nova and neutron DFGs have to chime in here too.

Thanks

Comment 41 Daniel Alvarez Sanchez 2018-10-02 10:46:21 UTC
(In reply to Waldemar Znoinski from comment #37)
> Works for me (Opendaylight DFG), I'll remove the configuration in CI once
> this bug is solved (meaning: tempest will be able to recognize/amend the
> enabled filters so the Different/Same Host filter too).
> 
+1. Arie just proposed a workaround to the networking-ovn D/S CI, where we're hitting this bug as well.

Comment 42 Artom Lifshitz 2018-11-04 13:31:06 UTC
With [1] merged, scheduler_enabled_filters replaces scheduler_available_filters, and the defaults are taken from Nova's. Since Tempest is branchless and this fix can't be backported, I think closing this as UPSTREAM is the right thing to do.

[1] https://review.openstack.org/#/c/570207/
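
In practice this means a CI deployment can make the two configs agree. A hedged sketch of the tempest.conf side (option names per the merged change; the filter list should mirror nova.conf):

```ini
[compute_feature_enabled]
# Replaces the old scheduler_available_filters = all. Set this to the
# same value as filter_scheduler.enabled_filters in nova.conf, so tests
# depending on a disabled filter skip instead of failing randomly.
scheduler_enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,DifferentHostFilter,SameHostFilter
```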

Comment 43 Ollie Walsh 2018-11-06 22:29:13 UTC
*** Bug 1441809 has been marked as a duplicate of this bug. ***
