Bug 1247636 - When running katello-service stop foreman-proxy is still running on satellite 6.0.8
Summary: When running katello-service stop foreman-proxy is still running on satellite 6.0.8
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Other
Version: 6.0.8
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: Unspecified
Assignee: Brad Buckingham
QA Contact: Corey Welton
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-07-28 13:54 UTC by jnikolak
Modified: 2019-10-10 10:00 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-07-27 08:54:17 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID: Red Hat Product Errata RHBA-2016:1500
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Red Hat Satellite 6.2 Base Libraries
Last Updated: 2016-07-27 12:24:38 UTC

Description jnikolak 2015-07-28 13:54:26 UTC
I ran katello-service stop:

# katello-service stop
Shutting down Katello services...
Stopping httpd:                                            [  OK  ]
celery init v10.0.
Using configuration: /etc/default/pulp_workers, /etc/default/pulp_celerybeat
Stopping pulp_celerybeat... OK
celery init v10.0.
Using config script: /etc/default/pulp_workers
celery multi v3.1.11 (Cipater)
> Stopping nodes...
	> reserved_resource_worker-1.pek.redhat.com: TERM -> 2282
	> reserved_resource_worker-5.pek.redhat.com: TERM -> 2374
	> reserved_resource_worker-7.pek.redhat.com: TERM -> 2423
	> reserved_resource_worker-4.pek.redhat.com: TERM -> 2351
	> reserved_resource_worker-6.pek.redhat.com: TERM -> 2400
	> reserved_resource_worker-2.pek.redhat.com: TERM -> 2306
	> reserved_resource_worker-0.pek.redhat.com: TERM -> 2263
	> reserved_resource_worker-3.pek.redhat.com: TERM -> 2332
> Waiting for 8 nodes -> 2282, 2374, 2423, 2351, 2400, 2306, 2263, 2332............
	> reserved_resource_worker-1.pek.redhat.com: OK
> Waiting for 7 nodes -> 2374, 2423, 2351, 2400, 2306, 2263, 2332....
	> reserved_resource_worker-5.pek.redhat.com: OK
> Waiting for 6 nodes -> 2423, 2351, 2400, 2306, 2263, 2332....
	> reserved_resource_worker-7.pek.redhat.com: OK
> Waiting for 5 nodes -> 2351, 2400, 2306, 2263, 2332....
	> reserved_resource_worker-4.pek.redhat.com: OK
> Waiting for 4 nodes -> 2400, 2306, 2263, 2332....
	> reserved_resource_worker-6.pek.redhat.com: OK
> Waiting for 3 nodes -> 2306, 2263, 2332....
	> reserved_resource_worker-2.pek.redhat.com: OK
> Waiting for 2 nodes -> 2263, 2332....
	> reserved_resource_worker-0.pek.redhat.com: OK
> Waiting for 1 node -> 2332....
	> reserved_resource_worker-3.pek.redhat.com: OK

celery init v10.0.
Using config script: /etc/default/pulp_resource_manager
celery multi v3.1.11 (Cipater)
> Stopping nodes...
	> resource_manager.pek.redhat.com: TERM -> 2202
> Waiting for 1 node -> 2202.....
	> resource_manager.pek.redhat.com: OK

Stopping elasticsearch:                                    [  OK  ]
Stopping Qpid AMQP daemon:                                 [  OK  ]
Stopping mongod:                                           [  OK  ]
Stopping tomcat6:                                          [  OK  ]
Done.



There doesn't appear to be any option to have foreman-proxy stopped.


After running

ps aux | grep foreman-proxy
495       1677  0.2  0.6 161260 53168 ?        Sl   09:37   0:01 /usr/bin/ruby /usr/share/foreman-proxy/bin/smart-proxy
root      3470  0.0  0.0 103252   836 pts/0    S+   09:48   0:00 grep foreman-proxy


I still see the foreman-proxy process running.


I can shut it down manually with:
service foreman-proxy stop


Is this issue fixed in Satellite 6.1? It could affect customers who want to re-provision servers.
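
For hosts still on 6.0.x, a minimal workaround sketch along these lines (assuming the SysV init script name foreman-proxy and the smart-proxy path shown in the ps output above) is to stop the proxy explicitly and then confirm nothing is left behind:

# stop everything katello-service covers, then the proxy it misses on 6.0.8
katello-service stop
service foreman-proxy stop

# confirm no smart-proxy process survived
pgrep -f /usr/share/foreman-proxy/bin/smart-proxy || echo "foreman-proxy is down"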

Comment 2 Brad Buckingham 2015-09-02 18:53:36 UTC
I have tested this on a Satellite 6.1.1 GA install and am no longer observing
the behavior described.  'foreman-proxy' is now stopped as part of the katello-service command.

RPMs:
ruby193-rubygem-katello-2.2.0.66-1.el7sat.noarch
foreman-1.7.2.34-1.el7sat.noarch
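
A quick re-check of the fix could look like the following (a sketch, not part of the original verification; the bracketed grep pattern just keeps grep itself out of the match):

# on 6.1.1 GA, foreman-proxy should now be stopped by katello-service
katello-service stop
ps aux | grep '[f]oreman-proxy'    # expect no output if the proxy is down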

Comment 3 jnikolak 2015-09-16 07:35:20 UTC
I'm still seeing the same issue on Satellite 6.1.1.
It was upgraded from Satellite 6.0.x.


# katello-service stop
.....
celery init v10.0.
Using configuration: /etc/default/pulp_workers, /etc/default/pulp_celerybeat
Stopping pulp_celerybeat... OK
Stopping elasticsearch:                                    [  OK  ]
Stopping foreman-proxy:                                    [  OK  ]

Stopping httpd:                                            [  OK  ]
Success!
[root@jnikolaksat6rhel6 foreman-proxy]# ps aux | grep foreman-proxy
495       2246  0.2  0.6 162852 53980 ?        Sl   Sep15   3:20 ruby /usr/share/foreman-proxy/bin/smart-proxy
root      4247  0.0  0.0 103312   856 pts/0    S+   17:30   0:00 grep foreman-proxy
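
If the surviving process here is an instance left over from before the upgrade rather than one started by the current init script (an assumption, not confirmed in the comment), its start time should predate the stop. One way to check:

# list any running smart-proxy process with its full start time
ps -eo pid,lstart,args | grep '[s]mart-proxy'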

Comment 5 Tazim Kolhar 2016-04-03 06:26:16 UTC
VERIFIED:
# rpm -qa foreman
foreman-1.11.0.9-1.el7sat.noarch

Steps:
# katello-service stop
Redirecting to /bin/systemctl stop  foreman-tasks.service

Redirecting to /bin/systemctl stop  httpd.service

Redirecting to /bin/systemctl stop  pulp_workers.service

Redirecting to /bin/systemctl stop  pulp_resource_manager.service

Redirecting to /bin/systemctl stop  pulp_celerybeat.service

Redirecting to /bin/systemctl stop  foreman-proxy.service

Redirecting to /bin/systemctl stop  tomcat.service

Redirecting to /bin/systemctl stop  qdrouterd.service

Redirecting to /bin/systemctl stop  qpidd.service

Redirecting to /bin/systemctl stop  postgresql.service

Redirecting to /bin/systemctl stop  mongod.service

Success!

# ps aux | grep foreman-proxy
root     22917  0.0  0.0 112640   968 pts/0    S+   08:25   0:00 grep --color=auto foreman-proxy
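
On 6.2, where katello-service redirects to systemctl as shown above, the same verification can also be expressed against systemd directly (a sketch, not part of the original steps):

# the unit should report "inactive" after katello-service stop
systemctl is-active foreman-proxy.service

# and no smart-proxy process should remain
pgrep -f smart-proxy || echo "foreman-proxy is stopped"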

Comment 8 errata-xmlrpc 2016-07-27 08:54:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1500

