Bug 1413575

Summary: ERROR during db_seed upgrade step: Actions::Candlepin::ListenOnCandlepinEvents::Reconnect[message: initialized...have not connected yet]
Product: Red Hat Satellite
Reporter: Roman Plevka <rplevka>
Component: Upgrades
Assignee: satellite6-bugs <satellite6-bugs>
Status: CLOSED DUPLICATE
QA Contact: Katello QA List <katello-qa-list>
Severity: high
Priority: unspecified
Version: 6.2.6
CC: bbuckingham, bkearney, inecas, mbacovsk, rplevka, shihliu, zhunting
Target Milestone: Unspecified
Target Release: Unused
Hardware: Unspecified
OS: Unspecified
Last Closed: 2017-01-26 09:30:56 UTC
Type: Bug
Bug Blocks: 1410795    
Attachments: forman-debug-info from satellite6.1.11 to satellite6.2.7.sp2

Description Roman Plevka 2017-01-16 12:26:40 UTC
Description of problem:
I performed an upgrade of a completely clean Satellite 6.1.11 to the recent 6.2.6.
After the upgrade finished restarting the services and started the db_seed step,
the client dispatcher threw an error:

[ INFO 2017-01-16 05:26:18 main] Upgrade Step: db_seed...
[DEBUG 2017-01-16 05:26:48 main] E, [2017-01-16T05:26:47.455508 #17587] ERROR -- /client-dispatcher: Could not find an executor for Dynflow::Dispatcher::Envelope[request_id: 2, sender_id: 469e1b64-d870-49e5-85f4-73db901e273a, receiver_id: Dynflow::Dispatcher::UnknownWorld, message: Dynflow::Dispatcher::Event[execution_plan_id: 2396c717-996d-42ca-a1b4-b24529b27c8f, step_id: 2, event: Actions::Candlepin::ListenOnCandlepinEvents::Reconnect[message: initialized...have not connected yet]]] (Dynflow::Error)

with a long traceback (see the attachment).
According to stdout, the db_seed step finished just fine afterwards.

However, after checking the state of the Satellite, I found that the 'Listen on candlepin events' service was not running (it was marked as stopped-success).

Restarting the Katello services once again resolved the issue.
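For reference, a minimal check-and-restart sequence on the Satellite host might look like the following; this is only a sketch, and the exact hammer search syntax is an assumption (the task label itself comes from the error above):

# check whether the Candlepin listener task is running (requires the foreman-tasks hammer plugin)
hammer task list --search 'label = Actions::Candlepin::ListenOnCandlepinEvents'

# restart the Katello service stack and verify it comes back up
katello-service restart
katello-service status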


Version-Release number of selected component (if applicable):
6.1.11 -> 6.2.6

How reproducible:
Nothing special; a clean 6.1.11 installation followed by an upgrade to 6.2.6.
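A rough sketch of the upgrade flow used (repository setup elided; the installer invocation is taken from the console output in comment 7, the package update step is assumed from the standard 6.1 -> 6.2 procedure):

# update packages to the 6.2 versions, then run the installer in upgrade mode
yum update -y
satellite-installer --scenario satellite --upgrade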


Actual results:
[ INFO 2017-01-16 05:26:18 main] Upgrade Step: db_seed...
[DEBUG 2017-01-16 05:26:48 main] E, [2017-01-16T05:26:47.455508 #17587] ERROR -- /client-dispatcher: Could not find an executor for Dynflow::Dispatcher::Envelope[request_id: 2, sender_id: 469e1b64-d870-49e5-85f4-73db901e273a, receiver_id: Dynflow::Dispatcher::UnknownWorld, message: Dynflow::Dispatcher::Event[execution_plan_id: 2396c717-996d-42ca-a1b4-b24529b27c8f, step_id: 2, event: Actions::Candlepin::ListenOnCandlepinEvents::Reconnect[message: initialized...have not connected yet]]] (Dynflow::Error)

Listen on candlepin events task not running

Expected results:
No traceback; Satellite fully functional after the upgrade.

Additional info:

Comment 7 Liushihui 2017-01-19 01:41:01 UTC
The same problem also occurs when upgrading satellite 6.1.11 to satellite 6.2.7-sp2.
Please see the foreman debug log in the attached foreman_debug.log:

[root@hp-dl2x170g6-01 yum.repos.d]# satellite-installer --scenario satellite --upgrade
Upgrading...
Upgrade Step: stop_services...
.............................
Tasks: TOP => db:migrate
(See full trace by running task with --trace)
== 20150930183738 MigrateContentHosts: migrating ==============================

false

Upgrade Step: remove_nodes_distributors...
MongoDB shell version: 2.6.11
connecting to: pulp_database
WriteResult({ "nRemoved" : 0 })

Upgrade Step: Running installer...
 /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]: Failed to call refresh: /usr/sbin/foreman-rake db:migrate returned 1 instead of one of [0]
 /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]: /usr/sbin/foreman-rake db:migrate returned 1 instead of one of [0]
 /Stage[main]/Foreman::Database/Foreman::Rake[db:seed]/Exec[foreman-rake-db:seed]: Failed to call refresh: /usr/sbin/foreman-rake db:seed returned 1 instead of one of [0]
 /Stage[main]/Foreman::Database/Foreman::Rake[db:seed]/Exec[foreman-rake-db:seed]: /usr/sbin/foreman-rake db:seed returned 1 instead of one of [0]
Installing             Done                                               [100%] [........................................................]
  Something went wrong! Check the log for ERROR-level output
  The full log is at /var/log/foreman-installer/satellite.log
Upgrade failed during the installation phase. Fix the error and re-run the upgrade.


[root@hp-dl2x170g6-01 yum.repos.d]# rpm -q satellite
satellite-6.2.7-1.0.el7sat.noarch

[root@hp-dl2x170g6-01 yum.repos.d]# rpm -q katello-installer-base
katello-installer-base-3.0.0.70-1.el7sat.noarch
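When the installer fails at this point, the ERROR-level entries it refers to can be pulled from the log it names; re-running the failing rake task by hand to get the full trace is an assumption based on the output above, not a step from this report:

# show the installer errors
grep -i error /var/log/foreman-installer/satellite.log

# re-run the failing migration by hand to get the full trace mentioned in the output
foreman-rake db:migrate --trace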

Comment 8 Liushihui 2017-01-19 01:42:14 UTC
Created attachment 1242320 [details]
forman-debug-info from satellite6.1.11 to satellite6.2.7.sp2

Comment 9 Martin Bacovsky 2017-01-25 11:43:33 UTC
Liushihui, from the log you've attached I don't see any relation to this bug. I can see issues there with gutterball removal, which are likely tracked in [1].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1410783

Comment 10 Martin Bacovsky 2017-01-25 11:48:32 UTC
Roman, were you able to reproduce this issue again? I have had no luck reproducing it. From the information I have, it seems to have the same root cause as [1], even though the message is slightly different. If you are not able to reproduce it, I'd suggest closing this bug as a duplicate of [1].


[1] https://bugzilla.redhat.com/show_bug.cgi?id=1413966

Comment 11 Roman Plevka 2017-01-26 09:30:56 UTC
Unfortunately, I was not able to reproduce this again. The suggested bug seems to be the same issue, so I'm closing this in favor of https://bugzilla.redhat.com/show_bug.cgi?id=1413966.

*** This bug has been marked as a duplicate of bug 1413966 ***