Bug 1421421

Summary: refresh fails on RHOS9 <Fog> excon.error #<Excon::Error::GatewayTimeout: Expected(200)
Product: Red Hat CloudForms Management Engine
Component: Providers
Version: 5.7.0
Status: CLOSED DUPLICATE
Severity: unspecified
Priority: unspecified
Target Milestone: GA
Target Release: cfme-future
Hardware: Unspecified
OS: Unspecified
Whiteboard: openstack
Reporter: Ronnie Rasouli <rrasouli>
Assignee: Tzu-Mainn Chen <tzumainn>
QA Contact: Dave Johnson <dajohnso>
CC: jfrey, jhardy, maufart, obarenbo
Doc Type: If docs needed, set a value
Type: Bug
Last Closed: 2017-02-13 15:56:04 UTC
Attachments: evm log

Description Ronnie Rasouli 2017-02-12 08:45:48 UTC
Created attachment 1249465 [details]
evm log

Description of problem:

Refresh of the infra provider fails on an RHOS9 setup.

The following error is shown:

Error - 2 Minutes Ago
Expected(200) <=> Actual(404 Not Found) excon.error.response :body => "{\"explanation\": \"The resource co... 

Version-Release number of selected component (if applicable):
 Version 5.7.1.1.20170206165110_3c42361 

How reproducible:
appears after a while

Steps to Reproduce:
1. Configure an OpenStack infra provider in CFME.
2. Refresh the provider (see the console sketch below).
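
For quicker iteration, the refresh can also be triggered from the appliance Rails console. A minimal sketch, assuming the provider name "uc_ci" seen in the evm.log excerpt in comment 3:

  # Hedged sketch, not part of the original report: trigger the refresh from
  # the appliance Rails console. The provider name "uc_ci" is taken from the
  # evm.log excerpt in comment 3.
  ems = ManageIQ::Providers::Openstack::InfraManager.find_by(:name => "uc_ci")
  EmsRefresh.refresh(ems)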

Actual results:

Infra provider refresh fails; no nodes or stacks are shown.

Expected results:

The refresh passes, and all nodes and stacks are revealed.

Additional info:
Method:[rescue in deliver]
[----] E, [2017-02-12T02:52:02.773719 #11092:12db134] ERROR -- : /var/www/miq/vmdb/app/models/ems_refresh/refreshers/ems_refresher_mixin.rb:50:in `refresh'
/var/www/miq/vmdb/app/models/manageiq/providers/base_manager/refresher.rb:10:in `refresh'
/var/www/miq/vmdb/app/models/ems_refresh.rb:91:in `block in refresh'
/var/www/miq/vmdb/app/models/ems_refresh.rb:90:in `each'
/var/www/miq/vmdb/app/models/ems_refresh.rb:90:in `refresh'
/var/www/miq/vmdb/app/models/miq_queue.rb:347:in `block in deliver'
/opt/rh/rh-ruby23/root/usr/share/ruby/timeout.rb:91:in `block in timeout'
/opt/rh/rh-ruby23/root/usr/share/ruby/timeout.rb:33:in `block in catch'
/opt/rh/rh-ruby23/root/usr/share/ruby/timeout.rb:33:in `catch'
/opt/rh/rh-ruby23/root/usr/share/ruby/timeout.rb:33:in `catch'
/opt/rh/rh-ruby23/root/usr/share/ruby/timeout.rb:106:in `timeout'
/var/www/miq/vmdb/app/models/miq_queue.rb:343:in `deliver'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:106:in `deliver_queue_message'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:134:in `deliver_message'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:152:in `block in do_work'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:146:in `loop'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:146:in `do_work'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:334:in `block in do_work_loop'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:331:in `loop'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:331:in `do_work_loop'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:153:in `run'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:128:in `start'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:21:in `start_worker'
/var/www/miq/vmdb/app/models/miq_worker.rb:343:in `block in start'
/opt/rh/cfme-gemset/gems/nakayoshi_fork-0.0.3/lib/nakayoshi_fork.rb:24:in `fork'
/opt/rh/cfme-gemset/gems/nakayoshi_fork-0.0.3/lib/nakayoshi_fork.rb:24:in `fork'
/var/www/miq/vmdb/app/models/miq_worker.rb:341:in `start'
/var/www/miq/vmdb/app/models/miq_worker.rb:270:in `start_worker'
/var/www/miq/vmdb/app/models/mixins/per_ems_worker_mixin.rb:68:in `start_worker_for_ems'
/var/www/miq/vmdb/app/models/mixins/per_ems_worker_mixin.rb:46:in `block in sync_workers'
/var/www/miq/vmdb/app/models/mixins/per_ems_worker_mixin.rb:45:in `each'
/var/www/miq/vmdb/app/models/mixins/per_ems_worker_mixin.rb:45:in `sync_workers'
/var/www/miq/vmdb/app/models/miq_server/worker_management/monitor.rb:52:in `block in sync_workers'
/var/www/miq/vmdb/app/models/miq_server/worker_management/monitor.rb:50:in `each'
/var/www/miq/vmdb/app/models/miq_server/worker_management/monitor.rb:50:in `sync_workers'
/var/www/miq/vmdb/app/models/miq_server/worker_management/monitor.rb:22:in `monitor_workers'
/var/www/miq/vmdb/app/models/miq_server.rb:346:in `block in monitor'
/var/www/miq/vmdb/gems/pending/util/extensions/miq-benchmark.rb:11:in `realtime_store'
/var/www/miq/vmdb/gems/pending/util/extensions/miq-benchmark.rb:30:in `realtime_block'
/var/www/miq/vmdb/app/models/miq_server.rb:346:in `monitor'
/var/www/miq/vmdb/app/models/miq_server.rb:368:in `block (2 levels) in monitor_loop'
/var/www/miq/vmdb/gems/pending/util/extensions/miq-benchmark.rb:11:in `realtime_store'
/var/www/miq/vmdb/gems/pending/util/extensions/miq-benchmark.rb:30:in `realtime_block'
/var/www/miq/vmdb/app/models/miq_server.rb:368:in `block in monitor_loop'
/var/www/miq/vmdb/app/models/miq_server.rb:367:in `loop'
/var/www/miq/vmdb/app/models/miq_server.rb:367:in `monitor_loop'
/var/www/miq/vmdb/app/models/miq_server.rb:250:in `start'
/var/www/miq/vmdb/lib/workers/evm_server.rb:65:in `start'
/var/www/miq/vmdb/lib/workers/evm_server.rb:92:in `start'
/var/www/miq/vmdb/lib/workers/bin/evm_server.rb:4:in `<main>'
[----] I, [2017-02-12T02:52:02.773840 #11092:12db134]  INFO -- : MIQ(MiqQueue#delivered) Message id: [234000000145191], State: [error], Delivered in [31.189156137] seconds
[----] I, [2017-02-12T02:52:04.836397 #13881:12db134]  INFO -- : MIQ(MiqScheduleWorker::Runner#do_work) Number of scheduled items to be processed: 2.
[----] I, [2017-02-12T02:52:04.843039 #13881:12db134]  INFO -- : MIQ(MiqQueue.put) Message id: [234000000145205],  id: [], Zone: [default], Role: [smartstate], Server: [], Ident: [generic], Target id: [], Instance id: [], Task id: [job_dispatcher], Command: [JobProxyDispatcher.dispatch], Timeout: [600], Priority: [20], State: [ready], Deliver On: [], Data: [], Args: []
[----] I, [2017-02-12T02:52:04.850106 #13881:12db134]  INFO -- : MIQ(MiqQueue.put) Message id: [234000000145206],  id: [], Zone: [default], Role: [], Server: [36128cba-eddb-11e6-a1ed-525400f652e7], Ident: [generic], Target id: [], Instance id: [], Task id: [], Command: [Session.check_session_timeout], Timeout: [600], Priority: [90], State: [ready], Deliver On: [], Data: [], Args: []
[----] E, [2017-02-12T02:52:05.733429 #5199:12db134] ERROR -- : <Fog> excon.error     #<Excon::Error::GatewayTimeout: Expected(200) <=> Actual(504 Gateway Timeout)
excon.error.response
  :body          => "<html><body><h1>504 Gateway Time-out</h1>\nThe server didn't respond in time.\n</body></html>\n"
  :cookies       => [
  ]
  :headers       => {
    "Cache-Control" => "no-cache"
    "Connection"    => "close"
    "Content-Type"  => "text/html"
  }
  :host          => "10.0.0.101"
  :local_address => "10.0.0.8"
  :local_port    => 34642
  :path          => "/v1/cdf6407ac4054241bfc3abe354ca6848/stacks"
  :port          => 13004
  :reason_phrase => "Gateway Time-out"
  :remote_ip     => "10.0.0.101"
  :status        => 504
  :status_line   => "HTTP/1.0 504 Gateway Time-out\r\n"
>

[----] E, [2017-02-12T02:52:05.733703 #5199:12db134] ERROR -- : MIQ(OpenstackHandle::OrchestrationDelegate#handled_list) Unable to obtain collection: 'stacks' in service: 'orchestration' using project scope: 'admin' in provider: '10.0.0.101'. Message=Expected(200) <=> Actual(504 Gateway Timeout)
excon.error.response
  :body          => "<html><body><h1>504 Gateway Time-out</h1>\nThe server didn't respond in time.\n</body></html>\n"
  :cookies       => [
  ]
  :headers       => {
    "Cache-Control" => "no-cache"
    "Connection"    => "close"
    "Content-Type"  => "text/html"
  }
  :host          => "10.0.0.101"
  :local_address => "10.0.0.8"
  :local_port    => 34642
  :path          => "/v1/cdf6407ac4054241bfc3abe354ca6848/stacks"
  :port          => 13004
  :reason_phrase => "Gateway Time-out"
  :remote_ip     => "10.0.0.101"
  :status        => 504
  :status_line   => "HTTP/1.0 504 Gateway Time-out\r\n"

[----] E, [2017-02-12T02:52:05.733797 #5199:12db134] ERROR -- : MIQ(OpenstackHandle::OrchestrationDelegate#handled_list) /opt/rh/cfme-gemset/gems/excon-0.54.0/lib/excon/middlewares/expects.rb:7:in `response_call'
/opt/rh/cfme-gemset/gems/excon-0.54.0/lib/excon/middlewares/response_parser.rb:9:in `response_call'
:
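
The 504 body above is a generic proxy timeout page rather than a Heat response, so the same request can be replayed outside CFME to check whether heat-api itself is slow to answer. A minimal sketch with Excon, assuming the host, port, path and tenant id from the dump above, a token exported as OS_TOKEN, and a plain-HTTP endpoint (the public endpoint may be TLS-terminated instead):

  # Hedged sketch, not from the report: replay the timed-out stacks request.
  # Host, port, path and tenant id come from the excon.error dump above; the
  # http scheme and the OS_TOKEN variable are assumptions.
  require 'excon'

  token = ENV["OS_TOKEN"]   # e.g. obtained with `openstack token issue`
  response = Excon.get(
    "http://10.0.0.101:13004/v1/cdf6407ac4054241bfc3abe354ca6848/stacks",
    :headers => { "X-Auth-Token" => token, "Accept" => "application/json" }
  )
  puts response.status      # 504 here reproduces the gateway timeout outside CFME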

Comment 2 Ronnie Rasouli 2017-02-13 13:14:28 UTC
heat -d stack-list
DEBUG (session) REQ: curl -g -i -X GET http://192.168.24.1:5000/v2.0 -H "Accept: application/json" -H "User-Agent: python-keystoneclient"
INFO (connectionpool) Starting new HTTP connection (1): 192.168.24.1
DEBUG (connectionpool) "GET /v2.0 HTTP/1.1" 200 229
DEBUG (session) RESP: [200] Date: Mon, 13 Feb 2017 13:13:50 GMT Server: Apache/2.4.6 (Red Hat Enterprise Linux) Vary: X-Auth-Token,Accept-Encoding x-openstack-request-id: req-ac8211e0-cf8d-4f98-be48-e20e6691da63 Content-Encoding: gzip Content-Length: 229 Connection: close Content-Type: application/json 
RESP BODY: {"version": {"status": "stable", "updated": "2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://192.168.24.1:5000/v2.0/", "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}}

DEBUG (v2) Making authentication request to http://192.168.24.1:5000/v2.0/tokens
INFO (connectionpool) Resetting dropped connection: 192.168.24.1
DEBUG (connectionpool) "POST /v2.0/tokens HTTP/1.1" 200 998
WARNING (shell) "heat stack-list" is deprecated, please use "openstack stack list" instead
DEBUG (session) REQ: curl -g -i -X GET http://192.168.24.1:8004/v1/005013daacd644ddb07289804c7532e2/stacks? -H "User-Agent: python-heatclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}e7664ebfe7ee1e03a538cc0a0e7b8d93f245902a"
INFO (connectionpool) Starting new HTTP connection (1): 192.168.24.1
DEBUG (connectionpool) "GET /v1/005013daacd644ddb07289804c7532e2/stacks HTTP/1.1" 200 815
DEBUG (session) RESP: [200] Content-Type: application/json; charset=UTF-8 Content-Length: 815 X-Openstack-Request-Id: req-f7d8d081-f127-4cc7-8f70-6fedc89585af Date: Mon, 13 Feb 2017 13:13:50 GMT Connection: keep-alive 
RESP BODY: {"stacks": [{"description": "Deploy an OpenStack environment, consisting of several node types (roles), Controller, Compute, BlockStorage, SwiftStorage and CephStorage.  The Storage roles enable independent scaling of the storage components, but the minimal deployment is one Controller and one Compute node.\n", "parent": null, "stack_status_reason": "Stack CREATE completed successfully", "stack_name": "overcloud", "stack_user_project_id": "6aaa4018ca24478fac93081d2375a1ed", "tags": null, "creation_time": "2017-02-12T09:47:51", "links": [{"href": "http://192.168.24.1:8004/v1/005013daacd644ddb07289804c7532e2/stacks/overcloud/39a71b41-ba6f-4fbb-880c-83d3fdfc3697", "rel": "self"}], "updated_time": null, "stack_owner": "admin", "stack_status": "CREATE_COMPLETE", "id": "39a71b41-ba6f-4fbb-880c-83d3fdfc3697"}]}

+--------------------------------------+------------+-----------------+---------------------+--------------+
| id                                   | stack_name | stack_status    | creation_time       | updated_time |
+--------------------------------------+------------+-----------------+---------------------+--------------+
| 39a71b41-ba6f-4fbb-880c-83d3fdfc3697 | overcloud  | CREATE_COMPLETE | 2017-02-12T09:47:51 | None         |
+--------------------------------------+------------+-----------------+---------------------+--------------+
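
The check above queries the undercloud Heat API directly with the heat CLI. For comparison, the equivalent listing through fog-openstack (the library CFME uses, per the backtraces) would look roughly like the following sketch; the credentials and auth URL are placeholders and the option names follow the fog-openstack 0.1.x gem referenced above:

  # Hedged sketch, not from the report: list Heat stacks via fog-openstack.
  # Credentials and auth URL are placeholders; option names follow the
  # fog-openstack 0.1.x gem seen in the backtraces.
  require 'fog/openstack'

  orchestration = Fog::Orchestration::OpenStack.new(
    :openstack_auth_url => "http://192.168.24.1:5000/v2.0/tokens",
    :openstack_username => "admin",
    :openstack_api_key  => "password",   # placeholder
    :openstack_tenant   => "admin"
  )
  orchestration.stacks.each { |s| puts "#{s.stack_name} #{s.stack_status}" }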

Comment 3 Marek Aufart 2017-02-13 13:27:06 UTC
The description of this BZ mentions a 404 error, which appears to be different from the error in the Additional info section. Pasting the 404 stack error from the attached log file:

[----] E, [2017-02-12T03:43:32.885226 #11092:12db134] ERROR -- : MIQ(ManageIQ::Providers::Openstack::InfraManager::Refresher#refresh) EMS: [uc_ci], id: [234000000000003] Refresh failed
[----] E, [2017-02-12T03:43:32.885420 #11092:12db134] ERROR -- : [Fog::Orchestration::OpenStack::NotFound]: Expected(200) <=> Actual(404 Not Found)
excon.error.response
  :body          => "{\"explanation\": \"The resource could not be found.\", \"code\": 404, \"error\": {\"message\": \"Not found\", \"traceback\": \"Traceback (most recent call last):\\n\\n  File \\\"/usr/lib/python2.7/site-packages/heat/common/context.py\\\", line 329, in wrapped\\n    return func(self, ctx, *args, **kwargs)\\n\\n  File \\\"/usr/lib/python2.7/site-packages/heat/engine/service.py\\\", line 1771, in list_stack_resources\\n    for resource in rsrcs]\\n\\n  File \\\"/usr/lib/python2.7/site-packages/heat/engine/stack.py\\\", line 318, in iter_resources\\n    for res in six.itervalues(self._find_resources(filters)):\\n\\n  File \\\"/usr/lib/python2.7/site-packages/heat/engine/stack.py\\\", line 300, in _find_resources\\n    self.context, self.id, True, filters)):\\n\\n  File \\\"/usr/lib/python2.7/site-packages/heat/objects/resource.py\\\", line 143, in get_all_by_stack\\n    filters)\\n\\n  File \\\"/usr/lib/python2.7/site-packages/heat/db/api.py\\\", line 117, in resource_get_all_by_stack\\n    return IMPL.resource_get_all_by_stack(context, stack_id, key_id, filters)\\n\\n  File \\\"/usr/lib/python2.7/site-packages/heat/db/sqlalchemy/api.py\\\", line 324, in resource_get_all_by_stack\\n    % stack_id)\\n\\nNotFound: no resources for stack_id eeeb2daa-f504-429a-85eb-03ed956e5e8e were found\\n\", \"type\": \"NotFound\"}, \"title\": \"Not Found\"}"
  :cookies       => [
  ]
  :headers       => {
    "Content-Length"         => "1283"
    "Content-Type"           => "application/json; charset=UTF-8"
    "Date"                   => "Sun, 12 Feb 2017 08:43:32 GMT"
    "X-Openstack-Request-Id" => "req-01921564-512f-4446-ae64-1d4df60d0abf"
  }
  :host          => "192.168.24.2"
  :local_address => "10.0.0.8"
  :local_port    => 58224
  :path          => "/v1/7171e09abbd74885ad04ced4a9c418a0/stacks/overcloud-Compute-pcktq33kjhrs-2-dfqz5yybn7lq-NodeExtraConfig-woowwhwsmd5v/eeeb2daa-f504-429a-85eb-03ed956e5e8e/resources"
  :port          => 13004
  :reason_phrase => "Not Found"
  :remote_ip     => "192.168.24.2"
  :status        => 404
  :status_line   => "HTTP/1.1 404 Not Found\r\n"
  Method:[rescue in block in refresh]
[----] E, [2017-02-12T03:43:32.885495 #11092:12db134] ERROR -- : /opt/rh/cfme-gemset/gems/excon-0.54.0/lib/excon/middlewares/expects.rb:7:in `response_call'
/opt/rh/cfme-gemset/gems/excon-0.54.0/lib/excon/middlewares/response_parser.rb:9:in `response_call'
/opt/rh/cfme-gemset/gems/excon-0.54.0/lib/excon/connection.rb:388:in `response'
/opt/rh/cfme-gemset/gems/excon-0.54.0/lib/excon/connection.rb:252:in `request'
/opt/rh/cfme-gemset/gems/fog-core-1.43.0/lib/fog/core/connection.rb:81:in `request'
/opt/rh/cfme-gemset/gems/fog-openstack-0.1.19/lib/fog/openstack/core.rb:81:in `request'
/opt/rh/cfme-gemset/gems/fog-openstack-0.1.19/lib/fog/orchestration/openstack/requests/list_resources.rb:27:in `list_resources'
/opt/rh/rh-ruby23/root/usr/share/ruby/delegate.rb:83:in `method_missing'
/var/www/miq/vmdb/app/models/manageiq/providers/openstack/infra_manager/refresh_parser.rb:102:in `all_stack_server_resources'
/var/www/miq/vmdb/app/models/manageiq/providers/openstack/infra_manager/refresh_parser.rb:88:in `block in all_server_resources'
/var/www/miq/vmdb/app/models/manageiq/providers/openstack/infra_manager/refresh_parser.rb:85:in `each'
/var/www/miq/vmdb/app/models/manageiq/providers/openstack/infra_manager/refresh_parser.rb:85:in `all_server_resources'
/var/www/miq/vmdb/app/models/manageiq/providers/openstack/infra_manager/refresh_parser.rb:152:in `load_hosts'
/var/www/miq/vmdb/app/models/manageiq/providers/openstack/infra_manager/refresh_parser.rb:70:in `ems_inv_to_hashes'
/var/www/miq/vmdb/app/models/manageiq/providers/openstack/infra_manager/refresh_parser.rb:12:in `ems_inv_to_hashes'
/var/www/miq/vmdb/app/models/manageiq/providers/openstack/infra_manager/refresher.rb:7:in `parse_legacy_inventory'
/var/www/miq/vmdb/app/models/ems_refresh/refreshers/ems_refresher_mixin.rb:122:in `block in parse_targeted_inventory'
/var/www/miq/vmdb/gems/pending/util/extensions/miq-benchmark.rb:11:in `realtime_store'
/var/www/miq/vmdb/gems/pending/util/extensions/miq-benchmark.rb:30:in `realtime_block'
/var/www/miq/vmdb/app/models/ems_refresh/refreshers/ems_refresher_mixin.rb:122:in `parse_targeted_inventory'
/var/www/miq/vmdb/app/models/ems_refresh/refreshers/ems_refresher_mixin.rb:87:in `block in refresh_targets_for_ems'
/var/www/miq/vmdb/gems/pending/util/extensions/miq-benchmark.rb:11:in `realtime_store'
/var/www/miq/vmdb/gems/pending/util/extensions/miq-benchmark.rb:30:in `realtime_block'
/var/www/miq/vmdb/app/models/ems_refresh/refreshers/ems_refresher_mixin.rb:86:in `refresh_targets_for_ems'
/var/www/miq/vmdb/app/models/ems_refresh/refreshers/ems_refresher_mixin.rb:24:in `block (2 levels) in refresh'
/var/www/miq/vmdb/gems/pending/util/extensions/miq-benchmark.rb:11:in `realtime_store'
/var/www/miq/vmdb/gems/pending/util/extensions/miq-benchmark.rb:30:in `realtime_block'
/var/www/miq/vmdb/app/models/ems_refresh/refreshers/ems_refresher_mixin.rb:24:in `block in refresh'
/var/www/miq/vmdb/app/models/ems_refresh/refreshers/ems_refresher_mixin.rb:14:in `each'
/var/www/miq/vmdb/app/models/ems_refresh/refreshers/ems_refresher_mixin.rb:14:in `refresh'
/var/www/miq/vmdb/app/models/manageiq/providers/base_manager/refresher.rb:10:in `refresh'
/var/www/miq/vmdb/app/models/ems_refresh.rb:91:in `block in refresh'
/var/www/miq/vmdb/app/models/ems_refresh.rb:90:in `each'
/var/www/miq/vmdb/app/models/ems_refresh.rb:90:in `refresh'
/var/www/miq/vmdb/app/models/miq_queue.rb:347:in `block in deliver'
/opt/rh/rh-ruby23/root/usr/share/ruby/timeout.rb:91:in `block in timeout'
/opt/rh/rh-ruby23/root/usr/share/ruby/timeout.rb:33:in `block in catch'
/opt/rh/rh-ruby23/root/usr/share/ruby/timeout.rb:33:in `catch'
/opt/rh/rh-ruby23/root/usr/share/ruby/timeout.rb:33:in `catch'
/opt/rh/rh-ruby23/root/usr/share/ruby/timeout.rb:106:in `timeout'
/var/www/miq/vmdb/app/models/miq_queue.rb:343:in `deliver'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:106:in `deliver_queue_message'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:134:in `deliver_message'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:152:in `block in do_work'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:146:in `loop'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:146:in `do_work'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:334:in `block in do_work_loop'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:331:in `loop'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:331:in `do_work_loop'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:153:in `run'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:128:in `start'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:21:in `start_worker'
/var/www/miq/vmdb/app/models/miq_worker.rb:343:in `block in start'
/opt/rh/cfme-gemset/gems/nakayoshi_fork-0.0.3/lib/nakayoshi_fork.rb:24:in `fork'
/opt/rh/cfme-gemset/gems/nakayoshi_fork-0.0.3/lib/nakayoshi_fork.rb:24:in `fork'
/var/www/miq/vmdb/app/models/miq_worker.rb:341:in `start'
/var/www/miq/vmdb/app/models/miq_worker.rb:270:in `start_worker'
/var/www/miq/vmdb/app/models/mixins/per_ems_worker_mixin.rb:68:in `start_worker_for_ems'
/var/www/miq/vmdb/app/models/mixins/per_ems_worker_mixin.rb:46:in `block in sync_workers'
/var/www/miq/vmdb/app/models/mixins/per_ems_worker_mixin.rb:45:in `each'
/var/www/miq/vmdb/app/models/mixins/per_ems_worker_mixin.rb:45:in `sync_workers'
/var/www/miq/vmdb/app/models/miq_server/worker_management/monitor.rb:52:in `block in sync_workers'
/var/www/miq/vmdb/app/models/miq_server/worker_management/monitor.rb:50:in `each'
/var/www/miq/vmdb/app/models/miq_server/worker_management/monitor.rb:50:in `sync_workers'
/var/www/miq/vmdb/app/models/miq_server/worker_management/monitor.rb:22:in `monitor_workers'
/var/www/miq/vmdb/app/models/miq_server.rb:346:in `block in monitor'
/var/www/miq/vmdb/gems/pending/util/extensions/miq-benchmark.rb:11:in `realtime_store'
/var/www/miq/vmdb/gems/pending/util/extensions/miq-benchmark.rb:30:in `realtime_block'
/var/www/miq/vmdb/app/models/miq_server.rb:346:in `monitor'
/var/www/miq/vmdb/app/models/miq_server.rb:368:in `block (2 levels) in monitor_loop'
/var/www/miq/vmdb/gems/pending/util/extensions/miq-benchmark.rb:11:in `realtime_store'
/var/www/miq/vmdb/gems/pending/util/extensions/miq-benchmark.rb:30:in `realtime_block'
/var/www/miq/vmdb/app/models/miq_server.rb:368:in `block in monitor_loop'
/var/www/miq/vmdb/app/models/miq_server.rb:367:in `loop'
/var/www/miq/vmdb/app/models/miq_server.rb:367:in `monitor_loop'
/var/www/miq/vmdb/app/models/miq_server.rb:250:in `start'
/var/www/miq/vmdb/lib/workers/evm_server.rb:65:in `start'
/var/www/miq/vmdb/lib/workers/evm_server.rb:92:in `start'
/var/www/miq/vmdb/lib/workers/bin/evm_server.rb:4:in `<main>'
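
The 404 is raised from list_resources for a single nested stack (.../eeeb2daa-f504-429a-85eb-03ed956e5e8e/resources) whose resources no longer exist, and the exception propagates out of refresh_parser.rb:102 (all_stack_server_resources), so the whole EMS refresh is marked failed. A hedged illustration of the kind of guard that would let the parser skip such a stale stack; this is not the actual fix tracked in the duplicate BZ, and orchestration_service/stack stand in for the Fog objects the parser uses:

  # Hedged illustration only, not the actual ManageIQ patch: skip stacks whose
  # resources vanish between the stack listing and the resource listing, so a
  # single stale nested stack cannot abort the whole refresh.
  # `orchestration_service` is assumed to be a Fog::Orchestration::OpenStack
  # connection and `stack` one of its stack objects.
  def safe_stack_resources(orchestration_service, stack)
    orchestration_service.list_resources(stack).body['resources']
  rescue Fog::Orchestration::OpenStack::NotFound
    []   # the stack (or its nested stack) was deleted mid-refresh
  end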

Comment 4 Marek Aufart 2017-02-13 13:47:20 UTC
Another correction to the description: the refresh fails for the Cloud provider (not Infra).

Comment 5 Marek Aufart 2017-02-13 13:59:29 UTC
The log in comment #2 is from the undercloud (which works); the overcloud heat stack-list output is at http://pastebin.test.redhat.com/454831

Comment 6 Tzu-Mainn Chen 2017-02-13 15:51:08 UTC
Looking at the machine, the overcloud refresh works. The undercloud refresh fails, but there are two BZs reporting issues around that which have been fixed upstream and are scheduled for a future build:

https://bugzilla.redhat.com/show_bug.cgi?id=1417273
https://bugzilla.redhat.com/show_bug.cgi?id=1420536

Comment 7 Tzu-Mainn Chen 2017-02-13 15:56:04 UTC

*** This bug has been marked as a duplicate of bug 1420536 ***