Bug 1246152 - katello-service status on RHEL6 always says "Some services failed: qpidd"
Summary: katello-service status on RHEL6 always says "Some services failed: qpidd"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Hammer
Version: 6.1.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: Unspecified
Assignee: Mike Cressman
QA Contact: Jitendra Yejare
URL:
Whiteboard:
Duplicates: 1193762 1300933
Depends On:
Blocks:
 
Reported: 2015-07-23 14:47 UTC by Kedar Bidarkar
Modified: 2020-08-13 08:15 UTC
CC List: 18 users

Fixed In Version: qpid-cpp-0.30-11.el6/el7
Doc Type: Bug Fix
Doc Text:
Previously, the katello-service status command returned an incorrect message when using Red Hat Enterprise Linux 6. With this release, the qpidd daemon has been improved on Red Hat Enterprise Linux 6 and returns the correct status.
Clone Of:
Environment:
Last Closed: 2016-07-27 11:37:06 UTC
Target Upstream Version:
Embargoed:


Attachments


Links:
Red Hat Knowledge Base (Solution) 1581103

Description Kedar Bidarkar 2015-07-23 14:47:29 UTC
Description of problem:
On a freshly installed sat6 with the snap13-c1 build,
katello-service status on RHEL6 says "Some services failed: qpidd"

This issue doesn't block anything, and all of the following works fine:
a) restarting qpidd service goes fine
b) service qpidd status shows nothing, just exits
c) we are able to install packages from sat6 web-ui to the client side.
d) sat6-capsule sync works fine too.

Having said that, this causes a lot of confusion for the user.

Version-Release number of selected component (if applicable):
sat6.1.1-SNAP13-c1

How reproducible:


Steps to Reproduce:
1. Install Sat6 and a capsule on RHEL6 and check the qpidd service status

Actual results:
running "katello-service status" throws "Some services failed: qpidd".

Expected results:

running "katello-service status" should say "success".
Additional info:

Comment 2 Corey Welton 2015-07-24 13:43:35 UTC
The underlying issue here is, I think:

`service qpidd status` returns nothing, therefore
`katello-service status` gets nothing for qpidd, therefore,
qpidd is assumed by katello-service to be broken.
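For illustration only, a minimal sketch (not the actual katello-service code) of that kind of aggregation; the wrapper only looks at each init script's exit code, so an empty status message combined with a non-zero exit gets reported as a failed service. Service names here are examples, not the full Satellite list:

#!/bin/bash
# Hypothetical sketch of exit-code aggregation, for illustration only.
failed=""
for svc in mongod qdrouterd qpidd httpd; do
    service "$svc" status >/dev/null 2>&1 || failed="$failed $svc"
done
if [ -n "$failed" ]; then
    echo "Some services failed:$failed"
else
    echo "Success!"
fi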

Comment 3 Axel Thimm 2015-07-31 16:57:06 UTC
I see this also on a fresh system. The error is SSL-related.

+ test -x /usr/bin/qpid-ha
+ /usr/bin/qpid-ha --config /etc/qpid/qpidd.conf ping
+ return 1

# /usr/bin/qpid-ha --config /etc/qpid/qpidd.conf ping
ConnectionError: connection aborted
# tail /var/log/messages
[...]
Jul 31 18:54:19 xxx qpidd[35179]: 2015-07-31 18:54:19 [Security] error Rej
Jul 31 18:54:19 xxx qpidd[35179]: 2015-07-31 18:54:19 [Protocol] error Cond: Connection must be encrypted.(320)
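For context, a purely illustrative note: the Satellite broker is configured to require encrypted connections, while qpid-ha ping opens a plain AMQP connection on 5672, so the broker rejects it. Assuming the standard qpid-cpp broker option names, the relevant qpidd.conf lines look roughly like this (the installer-generated file contains more than this):

# illustrative qpidd.conf excerpt, not the actual Satellite-generated file
require-encryption=yes   # reject unencrypted client connections
ssl-port=5671            # encrypted listener used by Satellite clients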

Comment 4 Justin Sherrill 2015-12-02 12:26:52 UTC
Reopening, as this seems like a valid bug with at least 2 customer cases opened against it; resetting flags.

Comment 6 Sachin Ghai 2015-12-02 12:39:25 UTC
Yeah, I can reproduce this with the latest 6.1.5 compose.

----
[root@sat614-qe-rhel67 ~]# katello-service status
mongod (pid  7834) is running...
listening on 127.0.0.1:27017
connection test successful
qdrouterd (pid 7921) is running...

tomcat6 (pid 8580) is running...                           [  OK  ]
celery init v10.0.
Using configuration: /etc/default/pulp_workers, /etc/default/pulp_celerybeat
pulp_celerybeat (pid 8145) is running.
celery init v10.0.
Using config script: /etc/default/pulp_resource_manager
node resource_manager (pid 8084) is running...
elasticsearch (pid  8203) is running...
celery init v10.0.
Using config script: /etc/default/pulp_workers
node reserved_resource_worker-0 (pid 8846) is running...
node reserved_resource_worker-1 (pid 8874) is running...
node reserved_resource_worker-2 (pid 8903) is running...
node reserved_resource_worker-3 (pid 8932) is running...
foreman-proxy (pid  8985) is running...
httpd (pid  9037) is running...
dynflow_executor is running.
dynflow_executor_monitor is running.
Some services failed: qpidd
--

Comment 7 Pavel Moravec 2015-12-03 08:34:24 UTC
I can reproduce it on RHEL6 only. Sat6.1 on RHEL7 is fine.

The root cause is really that "service qpidd status" returns nothing but exits with return value 1.

The cause is:

# bash -x /etc/init.d/qpidd status
..
++ RC=0
++ '[' -z /var/run/qpidd.pid -a -z ' 14098' ']'
++ '[' -n ' 14098' ']'
++ echo 'qpidd (pid  14098) is running...'
++ return 0
+ MESSAGE='qpidd (pid  14098) is running...'
+ qpid_ping
+ test -x /usr/bin/qpid-ha
+ /usr/bin/qpid-ha --config /etc/qpid/qpidd.conf ping
+ return 1
+ RETVAL=1
+ exit 1
#

qpid-ha ping fails since qpidd does not accept unencrypted connections on 5672.

Upgrading to qpid-cpp-server-0.34 with the updated /etc/init.d/qpidd script is the solution, since that script handles the qpid-ha test in a smarter way (it runs the HA test only if the broker is in an HA cluster, which would otherwise fail for a Sat6 deployment).
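As a rough sketch of the idea behind the fixed init script (the actual /etc/init.d/qpidd shipped in the rebuilt qpid-cpp packages may differ in detail), the status check only runs the qpid-ha ping when HA is actually enabled, so a non-HA Satellite broker that rejects plain connections on 5672 no longer makes "service qpidd status" exit non-zero:

# hypothetical qpid_ping guard, for illustration only
qpid_ping() {
    # Assumption for this sketch: HA deployments enable ha-cluster in qpidd.conf
    if grep -q '^ha-cluster *= *yes' /etc/qpid/qpidd.conf 2>/dev/null; then
        test -x /usr/bin/qpid-ha && \
            /usr/bin/qpid-ha --config /etc/qpid/qpidd.conf ping
    else
        return 0   # not an HA broker: skip the HA ping
    fi
}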

Comment 10 Pavel Moravec 2016-03-01 09:51:44 UTC
FYI, this has 6.2+ and blocker+ flags, but 6.2 snap1 does _not_ contain qpid-cpp version 0.34 (it still has 0.30).

Comment 11 Mike Cressman 2016-03-01 16:44:59 UTC
If this is required for Sat 6.2, we'll have to look at backporting to qpid-cpp 0.30.

Comment 12 Pavel Moravec 2016-03-07 07:39:27 UTC
*** Bug 1193762 has been marked as a duplicate of this bug. ***

Comment 13 Mike Cressman 2016-03-08 19:02:51 UTC
Builds completed for RHEL 6 and 7.

RHEL-6: https://brewweb.devel.redhat.com/buildinfo?buildID=484732
(qpid-cpp-0.30-11.el6)

RHEL-7: https://brewweb.devel.redhat.com/buildinfo?buildID=485022
(qpid-cpp-0.30-11.el7sat)

Packages from the builds are tagged Satellite-6.2.0-rhel-[67]-candidate, respectively.

Comment 14 Jitendra Yejare 2016-03-21 09:12:10 UTC
Verified this issue in Satellite 6.2 snap 4.

The issue is no longer reproducible, specifically on RHEL6.

The output:
# katello-service status
postmaster (pid  32313) is running...
mongod (pid  32598) is running...
listening on 127.0.0.1:27017
connection test successful
qdrouterd (pid 5051) is running...
qpidd (pid 2944) is running...
celery init v10.0.
Using config script: /etc/default/pulp_workers
node reserved_resource_worker-0 (pid 4437) is running...
node reserved_resource_worker-1 (pid 4456) is running...
node reserved_resource_worker-2 (pid 4475) is running...
node reserved_resource_worker-3 (pid 4496) is running...
node reserved_resource_worker-4 (pid 4521) is running...
node reserved_resource_worker-5 (pid 4546) is running...
node reserved_resource_worker-6 (pid 4571) is running...
node reserved_resource_worker-7 (pid 4596) is running...
foreman-proxy (pid  2098) is running...
celery init v10.0.
Using configuration: /etc/default/pulp_workers, /etc/default/pulp_celerybeat
pulp_celerybeat (pid 3918) is running.
celery init v10.0.
Using config script: /etc/default/pulp_resource_manager
node resource_manager (pid 4154) is running...
tomcat6 (pid 3230) is running...[  OK  ]
dynflow_executor is running.
dynflow_executor_monitor is running.
httpd (pid  4667) is running...
Success!

[root@dell-t7400-01 ~]# service qpidd status
qpidd (pid 2944) is running...


So closing this as Verified!

Comment 17 Simon Reber 2016-06-21 07:15:04 UTC
*** Bug 1300933 has been marked as a duplicate of this bug. ***

Comment 18 Bryan Kearney 2016-07-27 11:37:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1501

