Description of problem:
Service retirement fails for a catalog bundle consisting of two Ansible Tower catalog items and a RHV catalog item. The VM gets deleted from the provider, but the service state never switches to 'retired'; it stays stuck in the 'retiring' state and eventually errors out. The following error is reported in the logs:

[----] I, [2019-02-04T03:28:41.845911 #14400:1177108]  INFO -- : MIQ(MiqQueue#deliver) Message id: [34000022294823], Delivering...
[----] E, [2019-02-04T03:28:41.957198 #14400:1177108] ERROR -- : MIQ(MiqQueue#deliver) Message id: [34000022294823], Error: [Couldn't find User with 'id'=34000000000019]
[----] E, [2019-02-04T03:28:41.957633 #14400:1177108] ERROR -- : [ActiveRecord::RecordNotFound]: Couldn't find User with 'id'=34000000000019  Method:[block in method_missing]
[----] E, [2019-02-04T03:28:41.957870 #14400:1177108] ERROR -- :
/opt/rh/cfme-gemset/gems/activerecord-5.0.6/lib/active_record/core.rb:173:in `find'
/var/www/miq/vmdb/app/models/mixins/retirement_mixin.rb:236:in `system_context_requester'
/var/www/miq/vmdb/app/models/mixins/retirement_mixin.rb:109:in `retirement_check'
/opt/rh/cfme-gemset/gems/activerecord-5.0.6/lib/active_record/relation/delegation.rb:38:in `each'
/opt/rh/cfme-gemset/gems/activerecord-5.0.6/lib/active_record/relation/delegation.rb:38:in `each'
/var/www/miq/vmdb/app/models/retirement_manager.rb:6:in `block in check'
/var/www/miq/vmdb/app/models/retirement_manager.rb:4:in `each'
/var/www/miq/vmdb/app/models/retirement_manager.rb:4:in `flat_map'
/var/www/miq/vmdb/app/models/retirement_manager.rb:4:in `check'
/var/www/miq/vmdb/app/models/miq_queue.rb:453:in `block in dispatch_method'
/opt/rh/rh-ruby23/root/usr/share/ruby/timeout.rb:91:in `block in timeout'
/opt/rh/rh-ruby23/root/usr/share/ruby/timeout.rb:33:in `block in catch'
/opt/rh/rh-ruby23/root/usr/share/ruby/timeout.rb:33:in `catch'
/opt/rh/rh-ruby23/root/usr/share/ruby/timeout.rb:33:in `catch'
/opt/rh/rh-ruby23/root/usr/share/ruby/timeout.rb:106:in `timeout'
/var/www/miq/vmdb/app/models/miq_queue.rb:452:in `dispatch_method'
/var/www/miq/vmdb/app/models/miq_queue.rb:429:in `block in deliver'
/var/www/miq/vmdb/app/models/user.rb:269:in `with_user_group'
/var/www/miq/vmdb/app/models/miq_queue.rb:429:in `deliver'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:104:in `deliver_queue_message'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:134:in `deliver_message'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:152:in `block in do_work'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:146:in `loop'
/var/www/miq/vmdb/app/models/miq_queue_worker_base/runner.rb:146:in `do_work'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:329:in `block in do_work_loop'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:326:in `loop'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:326:in `do_work_loop'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:153:in `run'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:127:in `start'
/var/www/miq/vmdb/app/models/miq_worker/runner.rb:22:in `start_worker'
/var/www/miq/vmdb/app/models/miq_worker.rb:376:in `block in start_runner_via_fork'
/opt/rh/cfme-gemset/gems/nakayoshi_fork-0.0.3/lib/nakayoshi_fork.rb:24:in `fork'
/opt/rh/cfme-gemset/gems/nakayoshi_fork-0.0.3/lib/nakayoshi_fork.rb:24:in `fork'
/var/www/miq/vmdb/app/models/miq_worker.rb:374:in `start_runner_via_fork'
/var/www/miq/vmdb/app/models/miq_worker.rb:368:in `start_runner'
/var/www/miq/vmdb/app/models/miq_worker.rb:415:in `start'
/var/www/miq/vmdb/app/models/miq_worker.rb:266:in `start_worker'
/var/www/miq/vmdb/app/models/miq_worker.rb:153:in `block in sync_workers'
/var/www/miq/vmdb/app/models/miq_worker.rb:153:in `times'
/var/www/miq/vmdb/app/models/miq_worker.rb:153:in `sync_workers'
/var/www/miq/vmdb/app/models/miq_server/worker_management/monitor.rb:53:in `block in sync_workers'
/var/www/miq/vmdb/app/models/miq_server/worker_management/monitor.rb:50:in `each'
/var/www/miq/vmdb/app/models/miq_server/worker_management/monitor.rb:50:in `sync_workers'
/var/www/miq/vmdb/app/models/miq_server/worker_management/monitor.rb:22:in `monitor_workers'
/var/www/miq/vmdb/app/models/miq_server.rb:343:in `block in monitor'
/opt/rh/cfme-gemset/bundler/gems/cfme-gems-pending-0cee44cbcd3f/lib/gems/pending/util/extensions/miq-benchmark.rb:11:in `realtime_store'
/opt/rh/cfme-gemset/bundler/gems/cfme-gems-pending-0cee44cbcd3f/lib/gems/pending/util/extensions/miq-benchmark.rb:28:in `realtime_block'
/var/www/miq/vmdb/app/models/miq_server.rb:343:in `monitor'
/var/www/miq/vmdb/app/models/miq_server.rb:382:in `block (2 levels) in monitor_loop'
/opt/rh/cfme-gemset/bundler/gems/cfme-gems-pending-0cee44cbcd3f/lib/gems/pending/util/extensions/miq-benchmark.rb:11:in `realtime_store'
/opt/rh/cfme-gemset/bundler/gems/cfme-gems-pending-0cee44cbcd3f/lib/gems/pending/util/extensions/miq-benchmark.rb:35:in `realtime_block'
/var/www/miq/vmdb/app/models/miq_server.rb:382:in `block in monitor_loop'
/var/www/miq/vmdb/app/models/miq_server.rb:381:in `loop'
/var/www/miq/vmdb/app/models/miq_server.rb:381:in `monitor_loop'
/var/www/miq/vmdb/app/models/miq_server.rb:244:in `start'
/var/www/miq/vmdb/lib/workers/evm_server.rb:27:in `start'
/var/www/miq/vmdb/lib/workers/evm_server.rb:48:in `start'
/var/www/miq/vmdb/lib/workers/bin/evm_server.rb:4:in `<main>'
[----] I, [2019-02-04T03:28:41.958051 #14400:1177108]  INFO -- : MIQ(MiqQueue#delivered) Message id: [34000022294823], State: [error], Delivered in [0.11217338] seconds
[----] D, [2019-02-04T03:28:44.310196 #11934:1177108] DEBUG -- : EMS [console.int.aws.bancsabadell.com] as [] Garbage collection took 0.2830837919609621 seconds

Version-Release number of selected component (if applicable):
5.9.7.2.20190109154935_c0d03d6

How reproducible:
Always, at the customer's end.
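The trace shows the failure mode: `system_context_requester` does an unguarded `User.find` on a requester id that may point at a user who has since been deleted, so `ActiveRecord::RecordNotFound` is raised on every retirement check. The sketch below is a minimal plain-Ruby illustration, not the real ManageIQ code: the `User` class, record store, and the `fallback_id` idea are simplified stand-ins to show the difference between a raising lookup and a guarded one.

```ruby
# Simplified stand-ins for ActiveRecord behavior (illustration only).
class RecordNotFound < StandardError; end

class User
  RECORDS = { 1 => "admin" }.freeze # pretend the original requester was deleted

  # Rails-style .find: raises when the row is gone
  def self.find(id)
    RECORDS.fetch(id) { raise RecordNotFound, "Couldn't find User with 'id'=#{id}" }
  end

  # Rails-style .find_by: returns nil instead of raising
  def self.find_by(id:)
    RECORDS[id]
  end
end

# What the retirement check effectively did: an unguarded find,
# which blows up when the requester user no longer exists.
def system_context_requester_unguarded(user_id)
  User.find(user_id)
end

# A defensive variant (hypothetical): fall back to a default requester
# when the original user record is missing.
def system_context_requester_guarded(user_id, fallback_id: 1)
  User.find_by(id: user_id) || User.find(fallback_id)
end
```

The guarded lookup keeps the retirement check running instead of erroring the queue message; the actual fix landed in the PRs linked below.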
This issue was opened as a result of log messages included here: https://bugzilla.redhat.com/show_bug.cgi?id=1672338
https://github.com/ManageIQ/manageiq/pull/18437
Please assess the impact of this issue and update the severity accordingly. Please refer to https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity for a reminder on each severity's definition. If it's something like a tracker bug where it doesn't matter, please set the severity to Low.
https://github.com/ManageIQ/manageiq/pull/18443
Also https://github.com/ManageIQ/manageiq/pull/18437
Sure, sorry Gellert. You can do all that through the logs, unless you mean something else? I'm not quite sure what you're looking for, but for a system_context retirement the logging should look like this: we run the retirement check at https://github.com/ManageIQ/manageiq/blob/master/app/models/mixins/retirement_mixin.rb#L124, which raises the retirement event. That should show ```Raising Retirement Event for [#{name}] with queue options: #{q_options.inspect}```, and it then raises the evm_event, which has a log message associated with it too. You should also see it hitting the system_context_requester log line at https://github.com/ManageIQ/manageiq/blob/master/app/models/mixins/retirement_mixin.rb#L257.
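Following that comment, a quick way to trace such a retirement is to filter evm.log for those messages. A minimal sketch, assuming the message texts above and an evm.log excerpt passed in as a string (the appliance log itself would live at /var/www/miq/vmdb/log/evm.log):

```ruby
# Patterns taken from the log messages described in the comment above.
RETIREMENT_PATTERNS = [
  /Raising Retirement Event/,
  /system_context_requester/
].freeze

# Return only the lines of a log excerpt relevant to a retirement run.
def retirement_log_lines(log_text)
  log_text.each_line.select do |line|
    RETIREMENT_PATTERNS.any? { |re| re.match?(line) }
  end
end
```

Running this over the window around a failed retirement should show whether the event was raised at all before the `RecordNotFound` error.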
Verified in Version 5.11.0.8.20190611155126_01e077e