Description of problem:
On a freshly installed Sat6 with the snap13-c1 build, "katello-service status" on RHEL6 says "Some services failed: qpidd".

This issue doesn't block anything, and all of the following works fine:
a) restarting the qpidd service succeeds
b) "service qpidd status" prints nothing and just exits
c) we are able to install packages from the Sat6 web UI to the client side
d) Sat6-to-capsule sync also works fine

That said, this causes a lot of confusion for the user.

Version-Release number of selected component (if applicable):
sat6.1.1-SNAP13-c1

How reproducible:

Steps to Reproduce:
1. Install Sat6 and capsule on RHEL6 and check the qpidd service
2.
3.

Actual results:
Running "katello-service status" throws "Some services failed: qpidd".

Expected results:
Running "katello-service status" should say "Success".

Additional info:
The underlying issue here, I think, is this: `service qpidd status` returns nothing, so `katello-service status` gets nothing for qpidd, and katello-service therefore assumes qpidd is broken.
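To illustrate the failure mode described above, here is a minimal sketch (not the actual katello-service code) of how a wrapper that aggregates per-service status exit codes reports a healthy-but-silent service as failed. The `status_*` stub functions are hypothetical stand-ins for `service <name> status`:

```shell
#!/bin/bash
# Sketch: a wrapper marks a service as failed whenever its status
# command exits non-zero, regardless of whether the daemon is healthy.
check_services() {
  local failed=""
  for svc in "$@"; do
    # Hypothetical per-service status command; the real wrapper would
    # call "service $svc status" and inspect its exit code.
    if ! "status_$svc" >/dev/null 2>&1; then
      failed="$failed $svc"
    fi
  done
  if [ -n "$failed" ]; then
    echo "Some services failed:$failed"
    return 1
  fi
  echo "Success!"
}

# Demo stubs: mongod reports healthy; qpidd prints nothing and exits 1,
# mimicking the broken RHEL6 init script behavior.
status_mongod() { echo "mongod is running..."; return 0; }
status_qpidd()  { return 1; }

check_services mongod qpidd || true   # prints: Some services failed: qpidd
```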
I see this also on a fresh system. The error is SSL-related.

+ test -x /usr/bin/qpid-ha
+ /usr/bin/qpid-ha --config /etc/qpid/qpidd.conf ping
+ return 1

# /usr/bin/qpid-ha --config /etc/qpid/qpidd.conf ping
ConnectionError: connection aborted

# tail /var/log/messages
[...]
Jul 31 18:54:19 xxx qpidd[35179]: 2015-07-31 18:54:19 [Security] error Rej
Jul 31 18:54:19 xxx qpidd[35179]: 2015-07-31 18:54:19 [Protocol] error Cond: Connection must be encrypted.(320)
Reopening, as this seems like a valid bug with at least two customer cases opened against it; resetting flags.
Yeah, I can reproduce this with the latest 6.1.5 compose.

----
[root@sat614-qe-rhel67 ~]# katello-service status
mongod (pid 7834) is running...
listening on 127.0.0.1:27017
connection test successful
qdrouterd (pid 7921) is running...
tomcat6 (pid 8580) is running...                           [  OK  ]
celery init v10.0.
Using configuration: /etc/default/pulp_workers, /etc/default/pulp_celerybeat
pulp_celerybeat (pid 8145) is running.
celery init v10.0.
Using config script: /etc/default/pulp_resource_manager
node resource_manager (pid 8084) is running...
elasticsearch (pid 8203) is running...
celery init v10.0.
Using config script: /etc/default/pulp_workers
node reserved_resource_worker-0 (pid 8846) is running...
node reserved_resource_worker-1 (pid 8874) is running...
node reserved_resource_worker-2 (pid 8903) is running...
node reserved_resource_worker-3 (pid 8932) is running...
foreman-proxy (pid 8985) is running...
httpd (pid 9037) is running...
dynflow_executor is running.
dynflow_executor_monitor is running.
Some services failed: qpidd
--
I can reproduce it on RHEL6 only; Sat6.1 on RHEL7 is fine. The root cause really is that "service qpidd status" prints nothing but exits with return value 1. The cause:

# bash -x /etc/init.d/qpidd status
..
++ RC=0
++ '[' -z /var/run/qpidd.pid -a -z ' 14098' ']'
++ '[' -n ' 14098' ']'
++ echo 'qpidd (pid 14098) is running...'
++ return 0
+ MESSAGE='qpidd (pid 14098) is running...'
+ qpid_ping
+ test -x /usr/bin/qpid-ha
+ /usr/bin/qpid-ha --config /etc/qpid/qpidd.conf ping
+ return 1
+ RETVAL=1
+ exit 1
#

The qpid-ha ping fails since qpidd does not accept unencrypted connections on port 5672. Upgrading to qpid-cpp-server-0.34 with its updated /etc/init.d/qpidd script is the solution, since that script handles the qpid-ha test in a smarter way: it runs the HA test only when the broker is part of an HA cluster, a test that would always fail for a Sat6 deployment.
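The guard in the updated init script could be sketched roughly like this. This is a paraphrase, not the verbatim qpid-cpp-0.34 script; the `ha-cluster=yes` config check and the function shape are assumptions based on the description above:

```shell
#!/bin/bash
# Sketch of the fixed status check: only attempt the qpid-ha ping when
# the broker is actually configured as part of an HA cluster. On a
# non-HA Sat6 broker that requires encryption, an unencrypted ping
# would fail even though the broker is healthy.
QPIDD_CONF=${QPIDD_CONF:-/etc/qpid/qpidd.conf}

qpid_ping() {
  # Assumed config check: skip the HA ping unless ha-cluster is enabled.
  if ! grep -q '^ha-cluster=yes' "$QPIDD_CONF" 2>/dev/null; then
    return 0
  fi
  /usr/bin/qpid-ha --config "$QPIDD_CONF" ping
}

qpid_ping && echo "status check passed"
```

With this guard in place, the init script no longer exits 1 on a healthy non-HA broker, so katello-service stops flagging qpidd as failed.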
FYI, this has the 6.2+ and blocker+ flags, but 6.2 snap1 does _not_ contain qpid-cpp version 0.34 (it is still 0.30).
If this is required for Sat 6.2, we'll have to look at backporting to qpid-cpp 0.30
*** Bug 1193762 has been marked as a duplicate of this bug. ***
Builds completed for RHEL 6 and 7.

RHEL-6: https://brewweb.devel.redhat.com/buildinfo?buildID=484732 (qpid-cpp-0.30-11.el6)
RHEL-7: https://brewweb.devel.redhat.com/buildinfo?buildID=485022 (qpid-cpp-0.30-11.el7sat)

Packages from the builds are tagged Satellite-6.2.0-rhel-[67]-candidate, respectively.
Verified this issue in Satellite 6.2 snap 4. The issue is no longer reproducible, specifically on RHEL6.

The output:

# katello-service status
postmaster (pid 32313) is running...
mongod (pid 32598) is running...
listening on 127.0.0.1:27017
connection test successful
qdrouterd (pid 5051) is running...
qpidd (pid 2944) is running...
celery init v10.0.
Using config script: /etc/default/pulp_workers
node reserved_resource_worker-0 (pid 4437) is running...
node reserved_resource_worker-1 (pid 4456) is running...
node reserved_resource_worker-2 (pid 4475) is running...
node reserved_resource_worker-3 (pid 4496) is running...
node reserved_resource_worker-4 (pid 4521) is running...
node reserved_resource_worker-5 (pid 4546) is running...
node reserved_resource_worker-6 (pid 4571) is running...
node reserved_resource_worker-7 (pid 4596) is running...
foreman-proxy (pid 2098) is running...
celery init v10.0.
Using configuration: /etc/default/pulp_workers, /etc/default/pulp_celerybeat
pulp_celerybeat (pid 3918) is running.
celery init v10.0.
Using config script: /etc/default/pulp_resource_manager
node resource_manager (pid 4154) is running...
tomcat6 (pid 3230) is running...                           [  OK  ]
dynflow_executor is running.
dynflow_executor_monitor is running.
httpd (pid 4667) is running...
Success!

[root@dell-t7400-01 ~]# service qpidd status
qpidd (pid 2944) is running...

So closing this as Verified!
*** Bug 1300933 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2016:1501