Bug 1848111 - systemctl status -l pulp_workers returns "Active: active (exited)"
Summary: systemctl status -l pulp_workers returns "Active: active (exited)"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Pulp
Version: 6.7.0
Hardware: All
OS: All
Priority: unspecified
Severity: medium
Target Milestone: 6.10.0
Assignee: satellite6-bugs
QA Contact: Lai
URL:
Whiteboard:
Duplicates: 1858982
Depends On:
Blocks:
 
Reported: 2020-06-17 17:19 UTC by Waldirio M Pinheiro
Modified: 2022-08-17 14:44 UTC (History)
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-11-16 14:09:12 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2021:4702 0 None None None 2021-11-16 14:09:23 UTC

Description Waldirio M Pinheiro 2020-06-17 17:19:16 UTC
Description of problem:
When checking the status of the "pulp_workers" service, it is reported as "active (exited)" instead of "active (running)", unlike all the other services that make up the Satellite suite.
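
A plausible explanation (an assumption here, not verified against the shipped unit file) is that pulp_workers.service is a one-shot wrapper: its ExecStart only spawns the per-worker units and then exits, and systemd reports such units as "active (exited)". A minimal sketch of what such a wrapper unit could look like, reusing the ExecStart/ExecStop commands visible in the status output further below:
---
# Hypothetical wrapper unit sketch, NOT the shipped pulp_workers.service.
# A Type=oneshot ExecStart that launches the real workers and returns is
# shown by systemd as "active (exited)"; RemainAfterExit keeps it "active".
[Unit]
Description=Pulp Celery Workers

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/python2 -m pulp.server.async.manage_workers start
ExecStop=/usr/bin/python2 -m pulp.server.async.manage_workers stop

[Install]
WantedBy=multi-user.target
---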

Version-Release number of selected component (if applicable):
6.7 (all 6.7 releases)

How reproducible:
100%

Steps to Reproduce:
1. Access your Satellite 6.7
2. Run the command "systemctl status -l pulp_workers"
3. Note the "Active: active (exited)" line in the output

Actual results:
---
[root@wallsat67 ~]# systemctl status -l pulp_workers
● pulp_workers.service - Pulp Celery Workers
   Loaded: loaded (/usr/lib/systemd/system/pulp_workers.service; enabled; vendor preset: disabled)
   Active: active (exited) since Tue 2020-05-05 15:48:29 EDT; 1 months 12 days ago
...
---

Expected results:
---
[root@wallsat67 ~]# systemctl status -l pulp_workers
● pulp_workers.service - Pulp Celery Workers
   Loaded: loaded (/usr/lib/systemd/system/pulp_workers.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-05-05 15:48:29 EDT; 1 months 12 days ago
...
---

Additional info:

Comment 1 Waldirio M Pinheiro 2020-06-17 17:23:12 UTC
// Complete foreman-maintain service status
---
[root@wallsat67 ~]# foreman-maintain service status
Running Status Services
================================================================================
Get status of applicable services: 
Displaying the following service(s):

rh-mongodb34-mongod, postgresql, qdrouterd, qpidd, squid, pulp_celerybeat, pulp_resource_manager, pulp_streamer, pulp_workers, smart_proxy_dynflow_core, tomcat, dynflowd, httpd, puppetserver, foreman-proxy
\ displaying rh-mongodb34-mongod                                                
● rh-mongodb34-mongod.service - High-performance, schema-free document-oriented database
   Loaded: loaded (/usr/lib/systemd/system/rh-mongodb34-mongod.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-05-05 15:48:26 EDT; 1 months 12 days ago
  Process: 2914 ExecStart=/opt/rh/rh-mongodb34/root/usr/libexec/mongodb-scl-helper enable $RH_MONGODB34_SCLS_ENABLED -- /opt/rh/rh-mongodb34/root/usr/bin/mongod $OPTIONS run (code=exited, status=0/SUCCESS)
 Main PID: 2919 (mongod)
    Tasks: 51
   CGroup: /system.slice/rh-mongodb34-mongod.service
           └─2919 /opt/rh/rh-mongodb34/root/usr/bin/mongod -f /etc/opt/rh/rh-mongodb34/mongod.conf run

Jun 12 17:25:27 wallsat67.usersys.redhat.com mongod.27017[2919]: [conn218] command pulp_database.repo_content_units command: getMore { getMore: 84020521863, collection: "repo_content_units" } originatingCommand: { find: "repo_content_units", filter: { repo_id: "63b918f9-7aca-44e9-86e2-8dd00334a151", unit_type_id: { $in: [ "rpm" ] } }, projection: { unit_id: 1, unit_type_id: 1 } } planSummary: IXSCAN { repo_id: -1, unit_type_id: -1 } cursorid:84020521863 keysExamined:29024 docsExamined:29024 cursorExhausted:1 numYields:229 nreturned:29024 reslen:2920433 locks:{ Global: { acquireCount: { r: 460 } }, Database: { acquireCount: { r: 230 } }, Collection: { acquireCount: { r: 230 } } } protocol:op_query 158ms
Jun 12 20:14:36 wallsat67.usersys.redhat.com mongod.27017[2919]: [conn223] received client metadata from 127.0.0.1:55148 conn223: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.4.9" }, os: { type: "Linux", name: 
...
planSummary: DISTINCT_SCAN { unit_id: 1 } keysExamined:1965 docsExamined:0 numYields:15 reslen:45940 locks:{ Global: { acquireCount: { r: 32 } }, Database: { acquireCount: { r: 16 } }, Collection: { acquireCount: { r: 16 } } } protocol:op_query 107ms
Jun 14 22:01:18 wallsat67.usersys.redhat.com mongod.27017[2919]: [conn216] command pulp_database.units_rpm command: getMore { getMore: 212491135696, collection: "units_rpm" } originatingCommand: { find: "units_rpm", filter: {}, projection: { _storage_path: 1, name: 1, checksum: 1, epoch: 1, version: 1, release: 1, _id: 1, arch: 1, checksumtype: 1 }, noCursorTimeout: true } planSummary: COLLSCAN cursorid:212491135696 keysExamined:0 docsExamined:29095 cursorExhausted:1 numYields:228 nreturned:29095 reslen:11291177 locks:{ Global: { acquireCount: { r: 458 } }, Database: { acquireCount: { r: 229 } }, Collection: { acquireCount: { r: 229 } } } protocol:op_query 164ms
Jun 17 13:10:55 wallsat67.usersys.redhat.com mongod.27017[2919]: [conn245] received client metadata from 127.0.0.1:53966 conn245: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.4.9" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.8 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-1062.el7.x86_64" } }
| displaying postgresql                                                         
● postgresql.service - PostgreSQL database server
   Loaded: loaded (/etc/systemd/system/postgresql.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-05-05 15:48:27 EDT; 1 months 12 days ago
  Process: 2787 ExecStop=/usr/bin/pg_ctl stop -D ${PGDATA} -s -m fast (code=exited, status=0/SUCCESS)
  Process: 2951 ExecStart=/usr/bin/pg_ctl start -D ${PGDATA} -s -o -p ${PGPORT} -w -t 300 (code=exited, status=0/SUCCESS)
  Process: 2945 ExecStartPre=/usr/bin/postgresql-check-db-dir ${PGDATA} (code=exited, status=0/SUCCESS)
 Main PID: 2954 (postgres)
    Tasks: 28
   CGroup: /system.slice/postgresql.service
           ├─ 2954 /usr/bin/postgres -D /var/lib/pgsql/data -p 5432
           ├─ 2955 postgres: logger process                        
           ├─ 2957 postgres: checkpointer process                  
           ├─ 2958 postgres: writer process                        
           ├─ 2959 postgres: wal writer process                    
           ├─ 2960 postgres: autovacuum launcher process           
           ├─ 2961 postgres: stats collector process               
           ├─ 3863 postgres: candlepin candlepin 127.0.0.1(53384) idl
           ├─ 3864 postgres: candlepin candlepin 127.0.0.1(53386) idl
           ├─ 3865 postgres: candlepin candlepin 127.0.0.1(53388) idl
           ├─ 3873 postgres: candlepin candlepin 127.0.0.1(53390) idl
           ├─ 3874 postgres: candlepin candlepin 127.0.0.1(53392) idl
           ├─ 4090 postgres: foreman foreman [local] idle          
           ├─ 4106 postgres: foreman foreman [local] idle          
           ├─ 4110 postgres: foreman foreman [local] idle          
           ├─ 4139 postgres: foreman foreman [local] idle          
           ├─12831 postgres: foreman foreman [local] idle          
           ├─12836 postgres: foreman foreman [local] idle          
           ├─12837 postgres: foreman foreman [local] idle          
           ├─14529 postgres: foreman foreman [local] idle          
           ├─24492 postgres: foreman foreman [local] idle          
           ├─24851 postgres: foreman foreman [local] idle          
           ├─24902 postgres: candlepin candlepin 127.0.0.1(42598) idl
           ├─25075 postgres: candlepin candlepin 127.0.0.1(42642) idl
           ├─25883 postgres: foreman foreman [local] idle          
           ├─26792 postgres: candlepin candlepin 127.0.0.1(43024) idl
           ├─26793 postgres: candlepin candlepin 127.0.0.1(43026) idl
           └─27315 postgres: candlepin candlepin 127.0.0.1(43162) idl

May 05 15:48:26 wallsat67.usersys.redhat.com systemd[1]: Starting PostgreSQL database server...
May 05 15:48:27 wallsat67.usersys.redhat.com systemd[1]: Started PostgreSQL database server.
| displaying qdrouterd                                                          
● qdrouterd.service - Qpid Dispatch router daemon
   Loaded: loaded (/usr/lib/systemd/system/qdrouterd.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/qdrouterd.service.d
           └─90-limits.conf
   Active: active (running) since Tue 2020-05-05 15:48:27 EDT; 1 months 12 days ago
 Main PID: 2965 (qdrouterd)
    Tasks: 3
   CGroup: /system.slice/qdrouterd.service
           └─2965 /usr/sbin/qdrouterd -c /etc/qpid-dispatch/qdrouterd.conf

May 05 15:48:33 wallsat67.usersys.redhat.com qdrouterd[2965]: 2020-05-05 15:48:33.805018 -0400 ROUTER_CORE (info) Link Route Activated 'linkRoute/3' on connection broker
May 05 15:48:39 wallsat67.usersys.redhat.com qdrouterd[2965]: 2020-05-05 15:48:39.870207 -0400 SERVER (info) [2]: Connection from 10.8.29.169:45602 (to :5647) failed: proton:io Connection reset by peer - on write to :5672 (SSL Failure: Unknown error)
May 05 15:48:39 wallsat67.usersys.redhat.com qdrouterd[2965]: 2020-05-05 15:48:39.935783 -0400 SERVER (info) [4]: Accepted connection to :5647 from 10.8.29.169:45604
May 05 22:57:50 wallsat67.usersys.redhat.com qdrouterd[2965]: 2020-05-05 22:57:45.493638 -0400 SERVER (info) [4]: Connection from 10.8.29.169:45604 (to :5647) failed: amqp:connection:framing-error SSL Failure: Unknown error
May 05 22:57:59 wallsat67.usersys.redhat.com qdrouterd[2965]: 2020-05-05 22:57:59.413782 -0400 SERVER (info) [5]: Accepted connection to :5647 from 10.8.29.169:45640
May 06 00:02:40 wallsat67.usersys.redhat.com qdrouterd[2965]: 2020-05-06 00:02:40.218800 -0400 SERVER (info) [5]: Connection from 10.8.29.169:45640 (to :5647) failed: amqp:resource-limit-exceeded local-idle-timeout expired
May 06 00:11:33 wallsat67.usersys.redhat.com qdrouterd[2965]: 2020-05-06 00:11:33.576919 -0400 SERVER (info) [6]: Accepted connection to :5647 from 10.8.29.169:45642
Jun 04 16:24:47 wallsat67.usersys.redhat.com qdrouterd[2965]: 2020-06-04 16:24:47.447452 -0400 SERVER (info) [6]: Connection from 10.8.29.169:45642 (to :5647) failed: amqp:connection:framing-error SSL Failure: Unknown error
Jun 04 16:24:49 wallsat67.usersys.redhat.com qdrouterd[2965]: 2020-06-04 16:24:49.059385 -0400 SERVER (info) [7]: Accepted connection to :5647 from 10.8.29.169:48344
Jun 04 16:24:49 wallsat67.usersys.redhat.com qdrouterd[2965]: 2020-06-04 16:24:49.367834 -0400 SERVER (info) [8]: Accepted connection to :5647 from 10.8.29.169:48346
| displaying qpidd                                                              
● qpidd.service - An AMQP message broker daemon.
   Loaded: loaded (/usr/lib/systemd/system/qpidd.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/qpidd.service.d
           └─90-limits.conf, wait-for-port.conf
   Active: active (running) since Tue 2020-05-05 15:48:29 EDT; 1 months 12 days ago
     Docs: man:qpidd(1)
           http://qpid.apache.org/
  Process: 2969 ExecStartPost=/bin/bash -c while ! nc -z 127.0.0.1 5671; do sleep 1; done (code=exited, status=0/SUCCESS)
 Main PID: 2968 (qpidd)
    Tasks: 4
   CGroup: /system.slice/qpidd.service
           └─2968 /usr/sbin/qpidd --config /etc/qpid/qpidd.conf

Jun 12 20:14:46 wallsat67.usersys.redhat.com qpidd[2968]: 2020-06-12 20:14:46 [Protocol] error Connection qpid.[::1]:5672-[::1]:56782 closed by error: connection-forced: Connection must be encrypted.(320)
Jun 12 20:14:46 wallsat67.usersys.redhat.com qpidd[2968]: 2020-06-12 20:14:46 [Protocol] error Connection qpid.[::1]:5672-[::1]:56782 closed by error: connection-forced: Connection must be encrypted.(320)
Jun 12 20:14:47 wallsat67.usersys.redhat.com qpidd[2968]: 2020-06-12 20:14:47 [Security] error Rejected un-encrypted connection.
Jun 12 20:14:47 wallsat67.usersys.redhat.com qpidd[2968]: 2020-06-12 20:14:47 [Security] error Rejected un-encrypted connection.
Jun 12 20:14:47 wallsat67.usersys.redhat.com qpidd[2968]: 2020-06-12 20:14:47 [Protocol] error Connection qpid.[::1]:5672-[::1]:56784 closed by error: connection-forced: Connection must be encrypted.(320)
Jun 12 20:14:47 wallsat67.usersys.redhat.com qpidd[2968]: 2020-06-12 20:14:47 [Protocol] error Connection qpid.[::1]:5672-[::1]:56784 closed by error: connection-forced: Connection must be encrypted.(320)
Jun 12 20:14:47 wallsat67.usersys.redhat.com qpidd[2968]: 2020-06-12 20:14:47 [Security] error Rejected un-encrypted connection.
Jun 12 20:14:47 wallsat67.usersys.redhat.com qpidd[2968]: 2020-06-12 20:14:47 [Security] error Rejected un-encrypted connection.
Jun 12 20:14:47 wallsat67.usersys.redhat.com qpidd[2968]: 2020-06-12 20:14:47 [Protocol] error Connection qpid.127.0.0.1:5672-127.0.0.1:35530 closed by error: connection-forced: Connection must be encrypted.(320)
Jun 12 20:14:47 wallsat67.usersys.redhat.com qpidd[2968]: 2020-06-12 20:14:47 [Protocol] error Connection qpid.127.0.0.1:5672-127.0.0.1:35530 closed by error: connection-forced: Connection must be encrypted.(320)
| displaying squid                                                              
● squid.service - Squid caching proxy
   Loaded: loaded (/usr/lib/systemd/system/squid.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-05-05 15:48:29 EDT; 1 months 12 days ago
  Process: 2776 ExecStop=/usr/sbin/squid -k shutdown -f $SQUID_CONF (code=exited, status=0/SUCCESS)
  Process: 19650 ExecReload=/usr/sbin/squid $SQUID_OPTS -k reconfigure -f $SQUID_CONF (code=exited, status=0/SUCCESS)
  Process: 2987 ExecStart=/usr/sbin/squid $SQUID_OPTS -f $SQUID_CONF (code=exited, status=0/SUCCESS)
  Process: 2981 ExecStartPre=/usr/libexec/squid/cache_swap.sh (code=exited, status=0/SUCCESS)
 Main PID: 2989 (squid)
    Tasks: 3
   CGroup: /system.slice/squid.service
           ├─ 2989 /usr/sbin/squid -f /etc/squid/squid.conf
           ├─ 2991 (squid-1) -f /etc/squid/squid.conf
           └─19653 (logfile-daemon) /var/log/squid/access.log

May 05 15:48:29 wallsat67.usersys.redhat.com systemd[1]: Starting Squid caching proxy...
May 05 15:48:29 wallsat67.usersys.redhat.com squid[2989]: Squid Parent: will start 1 kids
May 05 15:48:29 wallsat67.usersys.redhat.com squid[2989]: Squid Parent: (squid-1) process 2991 started
May 05 15:48:29 wallsat67.usersys.redhat.com systemd[1]: Started Squid caching proxy.
May 06 00:02:44 wallsat67.usersys.redhat.com systemd[1]: Reloading Squid caching proxy.
May 06 00:02:44 wallsat67.usersys.redhat.com systemd[1]: Reloaded Squid caching proxy.
May 06 00:02:46 wallsat67.usersys.redhat.com systemd[1]: Reloading Squid caching proxy.
May 06 00:02:46 wallsat67.usersys.redhat.com systemd[1]: Reloaded Squid caching proxy.
/ displaying pulp_celerybeat                                                    
● pulp_celerybeat.service - Pulp's Celerybeat
   Loaded: loaded (/usr/lib/systemd/system/pulp_celerybeat.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-05-05 15:48:29 EDT; 1 months 12 days ago
 Main PID: 2994 (celery)
    Tasks: 5
   CGroup: /system.slice/pulp_celerybeat.service
           └─2994 /usr/bin/python2 /usr/bin/celery beat --app=pulp.server.async.celery_instance.celery --scheduler=pulp.server.async.scheduler.Scheduler

Jun 17 11:33:43 wallsat67.usersys.redhat.com pulp[2994]: celery.beat:INFO: Scheduler: Sending due task download_deferred_content (pulp.server.controllers.repository.queue_download_deferred)
Jun 17 11:43:43 wallsat67.usersys.redhat.com pulp[2994]: celery.beat:INFO: Scheduler: Sending due task download_deferred_content (pulp.server.controllers.repository.queue_download_deferred)
Jun 17 11:53:43 wallsat67.usersys.redhat.com pulp[2994]: celery.beat:INFO: Scheduler: Sending due task download_deferred_content (pulp.server.controllers.repository.queue_download_deferred)
Jun 17 12:03:43 wallsat67.usersys.redhat.com pulp[2994]: celery.beat:INFO: Scheduler: Sending due task download_deferred_content (pulp.server.controllers.repository.queue_download_deferred)
Jun 17 12:13:43 wallsat67.usersys.redhat.com pulp[2994]: celery.beat:INFO: Scheduler: Sending due task download_deferred_content (pulp.server.controllers.repository.queue_download_deferred)
Jun 17 12:23:43 wallsat67.usersys.redhat.com pulp[2994]: celery.beat:INFO: Scheduler: Sending due task download_deferred_content (pulp.server.controllers.repository.queue_download_deferred)
Jun 17 12:33:43 wallsat67.usersys.redhat.com pulp[2994]: celery.beat:INFO: Scheduler: Sending due task download_deferred_content (pulp.server.controllers.repository.queue_download_deferred)
Jun 17 12:43:43 wallsat67.usersys.redhat.com pulp[2994]: celery.beat:INFO: Scheduler: Sending due task download_deferred_content (pulp.server.controllers.repository.queue_download_deferred)
Jun 17 12:53:43 wallsat67.usersys.redhat.com pulp[2994]: celery.beat:INFO: Scheduler: Sending due task download_deferred_content (pulp.server.controllers.repository.queue_download_deferred)
Jun 17 13:03:43 wallsat67.usersys.redhat.com pulp[2994]: celery.beat:INFO: Scheduler: Sending due task download_deferred_content (pulp.server.controllers.repository.queue_download_deferred)
/ displaying pulp_resource_manager                                              
● pulp_resource_manager.service - Pulp Resource Manager
   Loaded: loaded (/usr/lib/systemd/system/pulp_resource_manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-05-05 15:48:29 EDT; 1 months 12 days ago
 Main PID: 2997 (celery)
    Tasks: 15
   CGroup: /system.slice/pulp_resource_manager.service
           ├─2997 /usr/bin/python2 /usr/bin/celery worker -A pulp.server.async.app -n resource_manager@%h -Q resource_manager -c 1 --events --umask 18 --pidfile=/var/run/pulp/resource_manager.pid
           └─3225 /usr/bin/python2 /usr/bin/celery worker -A pulp.server.async.app -n resource_manager@%h -Q resource_manager -c 1 --events --umask 18 --pidfile=/var/run/pulp/resource_manager.pid

May 18 20:36:33 wallsat67.usersys.redhat.com pulp[2997]: celery.worker.strategy:INFO: Received task: pulp.server.async.tasks._queue_reserved_task[38a163a0-b27d-4a95-86b2-3ac564c5ef98]
May 18 20:36:33 wallsat67.usersys.redhat.com pulp[3225]: celery.app.trace:INFO: [38a163a0] Task pulp.server.async.tasks._queue_reserved_task[38a163a0-b27d-4a95-86b2-3ac564c5ef98] succeeded in 0.0591363422573s: None
Jun 12 16:25:22 wallsat67.usersys.redhat.com pulp[2997]: celery.worker.strategy:INFO: Received task: pulp.server.async.tasks._queue_reserved_task[bbffc411-a3d4-44d6-a9a0-b70b55405148]
Jun 12 16:25:22 wallsat67.usersys.redhat.com pulp[3225]: celery.app.trace:INFO: [bbffc411] Task pulp.server.async.tasks._queue_reserved_task[bbffc411-a3d4-44d6-a9a0-b70b55405148] succeeded in 0.123051095754s: None
Jun 12 16:25:22 wallsat67.usersys.redhat.com pulp[2997]: celery.worker.strategy:INFO: Received task: pulp.server.async.tasks._queue_reserved_task[660db825-3cd4-494d-9c05-fd87ac8fbc4e]
Jun 12 16:25:22 wallsat67.usersys.redhat.com pulp[3225]: celery.app.trace:INFO: [660db825] Task pulp.server.async.tasks._queue_reserved_task[660db825-3cd4-494d-9c05-fd87ac8fbc4e] succeeded in 0.0553151601925s: None
Jun 12 16:25:39 wallsat67.usersys.redhat.com pulp[2997]: celery.worker.strategy:INFO: Received task: pulp.server.async.tasks._queue_reserved_task[51ae6659-4ed4-4143-9993-f8cb28663a5e]
Jun 12 16:25:39 wallsat67.usersys.redhat.com pulp[3225]: celery.app.trace:INFO: [51ae6659] Task pulp.server.async.tasks._queue_reserved_task[51ae6659-4ed4-4143-9993-f8cb28663a5e] succeeded in 0.104989361949s: None
Jun 12 17:10:42 wallsat67.usersys.redhat.com pulp[2997]: celery.worker.strategy:INFO: Received task: pulp.server.async.tasks._queue_reserved_task[00448b7a-f9c1-4ffd-9a9b-458ed636f158]
Jun 12 17:10:42 wallsat67.usersys.redhat.com pulp[3225]: celery.app.trace:INFO: [00448b7a] Task pulp.server.async.tasks._queue_reserved_task[00448b7a-f9c1-4ffd-9a9b-458ed636f158] succeeded in 0.0534203695133s: None
/ displaying pulp_streamer                                                      
● pulp_streamer.service - The Pulp lazy content loading streamer
   Loaded: loaded (/usr/lib/systemd/system/pulp_streamer.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-05-05 15:48:29 EDT; 1 months 12 days ago
 Main PID: 3000 (pulp_streamer)
    Tasks: 3
   CGroup: /system.slice/pulp_streamer.service
           └─3000 /usr/bin/python /usr/bin/pulp_streamer --nodaemon --syslog --prefix=pulp_streamer --pidfile= --python /usr/share/pulp/wsgi/streamer.tac

May 05 15:48:35 wallsat67.usersys.redhat.com pulp_streamer[3000]: pulp.plugins.loader.manager:INFO: Loaded plugin yum_profiler for types: rpm,erratum,modulemd
May 05 15:48:35 wallsat67.usersys.redhat.com pulp_streamer[3000]: pulp.plugins.loader.manager:INFO: Loaded plugin puppet_whole_repo_profiler for types: puppet_module
May 05 15:48:35 wallsat67.usersys.redhat.com pulp_streamer[3000]: pulp.plugins.loader.manager:INFO: Loaded plugin yum for types: rpm
May 05 15:48:35 wallsat67.usersys.redhat.com pulp_streamer[3000]: pulp.plugins.loader.manager:INFO: Loaded plugin rhui for types: rpm
May 05 15:48:35 wallsat67.usersys.redhat.com pulp_streamer[3000]: [-] Loading /usr/share/pulp/wsgi/streamer.tac...
May 05 15:48:35 wallsat67.usersys.redhat.com pulp_streamer[3000]: [-] Loaded.
May 05 15:48:35 wallsat67.usersys.redhat.com pulp_streamer[3000]: [-] twistd 16.4.1 (/usr/bin/python 2.7.5) starting up.
May 05 15:48:35 wallsat67.usersys.redhat.com pulp_streamer[3000]: [-] reactor class: twisted.internet.epollreactor.EPollReactor.
May 05 15:48:35 wallsat67.usersys.redhat.com pulp_streamer[3000]: [-] Site starting on 8751
May 05 15:48:35 wallsat67.usersys.redhat.com pulp_streamer[3000]: [-] Starting factory <twisted.web.server.Site instance at 0x7fc862cd3b00>
/ displaying pulp_workers                                                       
● pulp_workers.service - Pulp Celery Workers
   Loaded: loaded (/usr/lib/systemd/system/pulp_workers.service; enabled; vendor preset: disabled)
   Active: active (exited) since Tue 2020-05-05 15:48:29 EDT; 1 months 12 days ago
  Process: 2725 ExecStop=/usr/bin/python2 -m pulp.server.async.manage_workers stop (code=exited, status=0/SUCCESS)
  Process: 3003 ExecStart=/usr/bin/python2 -m pulp.server.async.manage_workers start (code=exited, status=0/SUCCESS)
 Main PID: 3003 (code=exited, status=0/SUCCESS)
    Tasks: 0
   CGroup: /system.slice/pulp_workers.service

May 05 15:48:29 wallsat67.usersys.redhat.com systemd[1]: Starting Pulp Celery Workers...
May 05 15:48:29 wallsat67.usersys.redhat.com systemd[1]: Started Pulp Celery Workers.
/ displaying smart_proxy_dynflow_core                                           
● smart_proxy_dynflow_core.service - Foreman smart proxy dynflow core service
   Loaded: loaded (/usr/lib/systemd/system/smart_proxy_dynflow_core.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/smart_proxy_dynflow_core.service.d
           └─90-limits.conf
   Active: active (running) since Tue 2020-05-05 15:48:32 EDT; 1 months 12 days ago
     Docs: https://github.com/theforeman/smart_proxy_dynflow
  Process: 3019 ExecStart=/usr/bin/smart_proxy_dynflow_core -d -p /var/run/foreman-proxy/smart_proxy_dynflow_core.pid (code=exited, status=0/SUCCESS)
 Main PID: 3087 (ruby)
    Tasks: 6
   CGroup: /system.slice/smart_proxy_dynflow_core.service
           └─3087 ruby /usr/bin/smart_proxy_dynflow_core -d -p /var/run/foreman-proxy/smart_proxy_dynflow_core.pid

May 05 15:48:29 wallsat67.usersys.redhat.com systemd[1]: Starting Foreman smart proxy dynflow core service...
May 05 15:48:32 wallsat67.usersys.redhat.com systemd[1]: Started Foreman smart proxy dynflow core service.
- displaying tomcat                                                             
● tomcat.service - Apache Tomcat Web Application Container
   Loaded: loaded (/usr/lib/systemd/system/tomcat.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-05-05 15:48:32 EDT; 1 months 12 days ago
 Main PID: 3092 (java)
    Tasks: 106
   CGroup: /system.slice/tomcat.service
           └─3092 /usr/lib/jvm/jre/bin/java -Xms1024m -Xmx4096m -classpath /usr/share/tomcat/bin/bootstrap.jar:/usr/share/tomcat/bin/tomcat-juli.jar:/usr/share/java/commons-daemon.jar -Dcatalina.base=/usr/share/tomcat -Dcatalina.home=/usr/share/tomcat -Djava.endorsed.dirs= -Djava.io.tmpdir=/var/cache/tomcat/temp -Djava.util.logging.config.file=/usr/share/tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager org.apache.catalina.startup.Bootstrap start

May 05 15:49:09 wallsat67.usersys.redhat.com server[3092]: May 05, 2020 3:49:09 PM com.google.inject.internal.ProxyFactory <init>
May 05 15:49:09 wallsat67.usersys.redhat.com server[3092]: WARNING: Method [public org.candlepin.model.Persisted org.candlepin.model.RulesCurator.create(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@23700ce2]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
May 05 15:49:09 wallsat67.usersys.redhat.com server[3092]: May 05, 2020 3:49:09 PM com.google.inject.internal.ProxyFactory <init>
May 05 15:49:09 wallsat67.usersys.redhat.com server[3092]: WARNING: Method [public void org.candlepin.model.EntitlementCertificateCurator.delete(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@23700ce2]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
May 05 15:49:42 wallsat67.usersys.redhat.com server[3092]: May 05, 2020 3:49:42 PM org.apache.catalina.startup.HostConfig deployDirectory
May 05 15:49:42 wallsat67.usersys.redhat.com server[3092]: INFO: Deployment of web application directory /var/lib/tomcat/webapps/candlepin has finished in 63,711 ms
May 05 15:49:42 wallsat67.usersys.redhat.com server[3092]: May 05, 2020 3:49:42 PM org.apache.coyote.AbstractProtocol start
May 05 15:49:42 wallsat67.usersys.redhat.com server[3092]: INFO: Starting ProtocolHandler ["http-bio-8443"]
May 05 15:49:43 wallsat67.usersys.redhat.com server[3092]: May 05, 2020 3:49:42 PM org.apache.catalina.startup.Catalina start
May 05 15:49:43 wallsat67.usersys.redhat.com server[3092]: INFO: Server startup in 64362 ms
- displaying dynflowd                                                           
● dynflowd.service - Foreman jobs daemon
   Loaded: loaded (/usr/lib/systemd/system/dynflowd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-05-05 15:48:46 EDT; 1 months 12 days ago
     Docs: https://theforeman.org
  Process: 2302 ExecStop=/usr/sbin/dynflowd stop (code=exited, status=0/SUCCESS)
  Process: 3095 ExecStart=/usr/sbin/dynflowd start (code=exited, status=0/SUCCESS)
    Tasks: 12
   CGroup: /system.slice/dynflowd.service
           ├─3280 dynflow_executor             
           └─3282 dynflow_executor_monitor     

May 05 15:48:32 wallsat67.usersys.redhat.com systemd[1]: Starting Foreman jobs daemon...
May 05 15:48:45 wallsat67.usersys.redhat.com dynflowd[3095]: /usr/share/foreman/lib/foreman.rb:8: warning: already initialized constant Foreman::UUID_REGEXP
May 05 15:48:45 wallsat67.usersys.redhat.com dynflowd[3095]: /usr/share/foreman/lib/foreman.rb:8: warning: previous definition of UUID_REGEXP was here
May 05 15:48:46 wallsat67.usersys.redhat.com dynflowd[3095]: Dynflow Executor: start in progress
May 05 15:48:46 wallsat67.usersys.redhat.com dynflowd[3095]: /opt/theforeman/tfm/root/usr/share/gems/gems/daemons-1.2.3/lib/daemons/daemonize.rb:75: warning: conflicting chdir during another chdir block
May 05 15:48:46 wallsat67.usersys.redhat.com dynflowd[3095]: /opt/theforeman/tfm/root/usr/share/gems/gems/daemons-1.2.3/lib/daemons/daemonize.rb:108: warning: conflicting chdir during another chdir block
May 05 15:48:46 wallsat67.usersys.redhat.com dynflowd[3095]: dynflow_executor: process with pid 3280 started.
May 05 15:48:46 wallsat67.usersys.redhat.com systemd[1]: Started Foreman jobs daemon.
- displaying httpd                                                              
● httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-05-12 20:51:31 EDT; 1 months 5 days ago
     Docs: man:httpd(8)
           man:apachectl(8)
  Process: 18783 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, status=0/SUCCESS)
  Process: 24283 ExecReload=/usr/sbin/httpd $OPTIONS -k graceful (code=exited, status=0/SUCCESS)
 Main PID: 18942 (httpd)
   Status: "Total requests: 0; Current requests/sec: 0; Current traffic:   0 B/sec"
    Tasks: 187
   CGroup: /system.slice/httpd.service
           ├─10779 /usr/sbin/httpd -DFOREGROUND
           ├─10789 /usr/sbin/httpd -DFOREGROUND
           ├─10812 /usr/sbin/httpd -DFOREGROUND
           ├─12821 Passenger RackApp: /usr/share/foreman                                      
           ├─18942 /usr/sbin/httpd -DFOREGROUND
           ├─24346 (wsgi:pulp)     -DFOREGROUND
           ├─24347 (wsgi:pulp)     -DFOREGROUND
           ├─24348 (wsgi:pulp)     -DFOREGROUND
           ├─24349 (wsgi:pulp-cont -DFOREGROUND
           ├─24350 (wsgi:pulp-cont -DFOREGROUND
           ├─24351 (wsgi:pulp-cont -DFOREGROUND
           ├─24352 (wsgi:pulp_forg -DFOREGROUND
           ├─24353 PassengerWatchdog
           ├─24356 PassengerHelperAgent
           ├─24364 PassengerLoggingAgent
           ├─24372 /usr/sbin/httpd -DFOREGROUND
           ├─24373 /usr/sbin/httpd -DFOREGROUND
           ├─24374 /usr/sbin/httpd -DFOREGROUND
           ├─24375 /usr/sbin/httpd -DFOREGROUND
           ├─24376 /usr/sbin/httpd -DFOREGROUND
           ├─24377 /usr/sbin/httpd -DFOREGROUND
           ├─24378 /usr/sbin/httpd -DFOREGROUND
           ├─24379 /usr/sbin/httpd -DFOREGROUND
           └─24479 Passenger AppPreloader: /usr/share/foreman                                 

Jun 14 03:40:11 wallsat67.usersys.redhat.com pulp[24348]: gofer.messaging.adapter.connect:INFO: connected: qpid+ssl://localhost:5671
Jun 14 03:40:11 wallsat67.usersys.redhat.com pulp[24346]: gofer.messaging.adapter.qpid.connection:INFO: opened: qpid+ssl://localhost:5671
Jun 14 03:40:11 wallsat67.usersys.redhat.com pulp[24346]: gofer.messaging.adapter.connect:INFO: connected: qpid+ssl://localhost:5671
Jun 14 03:40:11 wallsat67.usersys.redhat.com pulp[24347]: gofer.messaging.adapter.qpid.connection:INFO: opened: qpid+ssl://localhost:5671
Jun 14 03:40:11 wallsat67.usersys.redhat.com pulp[24347]: gofer.messaging.adapter.connect:INFO: connected: qpid+ssl://localhost:5671
Jun 14 22:00:51 wallsat67.usersys.redhat.com pulp[24348]: kombu.transport.qpid:INFO: Connected to qpid with SASL mechanism ANONYMOUS
Jun 14 22:00:52 wallsat67.usersys.redhat.com pulp[24347]: kombu.transport.qpid:INFO: Connected to qpid with SASL mechanism ANONYMOUS
Jun 14 23:00:56 wallsat67.usersys.redhat.com pulp[24347]: kombu.transport.qpid:INFO: Connected to qpid with SASL mechanism ANONYMOUS
Jun 15 23:00:56 wallsat67.usersys.redhat.com pulp[24347]: kombu.transport.qpid:INFO: Connected to qpid with SASL mechanism ANONYMOUS
Jun 16 23:00:55 wallsat67.usersys.redhat.com pulp[24346]: kombu.transport.qpid:INFO: Connected to qpid with SASL mechanism ANONYMOUS
- displaying puppetserver                                                       
● puppetserver.service - puppetserver Service
   Loaded: loaded (/usr/lib/systemd/system/puppetserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-05-05 15:50:27 EDT; 1 months 12 days ago
  Process: 2663 ExecStop=/opt/puppetlabs/server/apps/puppetserver/bin/puppetserver stop (code=exited, status=0/SUCCESS)
  Process: 3348 ExecStart=/opt/puppetlabs/server/apps/puppetserver/bin/puppetserver start (code=exited, status=0/SUCCESS)
 Main PID: 3388 (java)
    Tasks: 52 (limit: 4915)
   CGroup: /system.slice/puppetserver.service
           └─3388 /usr/bin/java -Xms2G -Xmx2G -Djruby.logger.class=com.puppetlabs.jruby_utils.jruby.Slf4jLogger -Djava.security.egd=file:/dev/urandom -XX:OnOutOfMemoryError=kill -9 %p -cp /opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar:/opt/puppetlabs/server/apps/puppetserver/jruby-1_7.jar:/opt/puppetlabs/server/data/puppetserver/jars/* clojure.main -m puppetlabs.trapperkeeper.main --config /etc/puppetlabs/puppetserver/conf.d --bootstrap-config /etc/puppetlabs/puppetserver/services.d/,/opt/puppetlabs/server/apps/puppetserver/config/services.d/ --restart-file /opt/puppetlabs/server/data/puppetserver/restartcounter

May 05 15:48:48 wallsat67.usersys.redhat.com systemd[1]: Starting puppetserver Service...
May 05 15:50:27 wallsat67.usersys.redhat.com systemd[1]: Started puppetserver Service.
- displaying foreman-proxy                                                      
● foreman-proxy.service - Foreman Proxy
   Loaded: loaded (/usr/lib/systemd/system/foreman-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-05-05 15:50:31 EDT; 1 months 12 days ago
 Main PID: 4235 (ruby)
    Tasks: 8
   CGroup: /system.slice/foreman-proxy.service
           └─4235 ruby /usr/share/foreman-proxy/bin/smart-proxy --no-daemonize

Jun 15 01:54:53 wallsat67.usersys.redhat.com smart-proxy[4235]: wallsat67.usersys.redhat.com - - [15/Jun/2020:01:54:53 EDT] "GET /pulp/status/disk_usage HTTP/1.1" 200 375
Jun 15 01:54:53 wallsat67.usersys.redhat.com smart-proxy[4235]: - -> /pulp/status/disk_usage
Jun 15 13:55:08 wallsat67.usersys.redhat.com smart-proxy[4235]: wallsat67.usersys.redhat.com - - [15/Jun/2020:13:55:08 EDT] "GET /pulp/status/disk_usage HTTP/1.1" 200 375
Jun 15 13:55:08 wallsat67.usersys.redhat.com smart-proxy[4235]: - -> /pulp/status/disk_usage
Jun 16 01:55:11 wallsat67.usersys.redhat.com smart-proxy[4235]: wallsat67.usersys.redhat.com - - [16/Jun/2020:01:55:11 EDT] "GET /pulp/status/disk_usage HTTP/1.1" 200 375
Jun 16 01:55:11 wallsat67.usersys.redhat.com smart-proxy[4235]: - -> /pulp/status/disk_usage
Jun 16 13:55:26 wallsat67.usersys.redhat.com smart-proxy[4235]: wallsat67.usersys.redhat.com - - [16/Jun/2020:13:55:26 EDT] "GET /pulp/status/disk_usage HTTP/1.1" 200 375
Jun 16 13:55:26 wallsat67.usersys.redhat.com smart-proxy[4235]: - -> /pulp/status/disk_usage
Jun 17 01:55:41 wallsat67.usersys.redhat.com smart-proxy[4235]: wallsat67.usersys.redhat.com - - [17/Jun/2020:01:55:41 EDT] "GET /pulp/status/disk_usage HTTP/1.1" 200 375
Jun 17 01:55:41 wallsat67.usersys.redhat.com smart-proxy[4235]: - -> /pulp/status/disk_usage
\ All services are running                                            [OK]      
--------------------------------------------------------------------------------

[root@wallsat67 ~]#
---

Note: everything looks good; foreman-maintain reports all services as running.


// Here we can see each service name together with its status
---
[root@wallsat67 ~]# foreman-maintain service status | grep -E '(\.service - |Active)'
● rh-mongodb34-mongod.service - High-performance, schema-free document-oriented database
   Active: active (running) since Tue 2020-05-05 15:48:26 EDT; 1 months 12 days ago
● postgresql.service - PostgreSQL database server
   Active: active (running) since Tue 2020-05-05 15:48:27 EDT; 1 months 12 days ago
● qdrouterd.service - Qpid Dispatch router daemon
   Active: active (running) since Tue 2020-05-05 15:48:27 EDT; 1 months 12 days ago
● qpidd.service - An AMQP message broker daemon.
   Active: active (running) since Tue 2020-05-05 15:48:29 EDT; 1 months 12 days ago
● squid.service - Squid caching proxy
   Active: active (running) since Tue 2020-05-05 15:48:29 EDT; 1 months 12 days ago
● pulp_celerybeat.service - Pulp's Celerybeat
   Active: active (running) since Tue 2020-05-05 15:48:29 EDT; 1 months 12 days ago
● pulp_resource_manager.service - Pulp Resource Manager
   Active: active (running) since Tue 2020-05-05 15:48:29 EDT; 1 months 12 days ago
● pulp_streamer.service - The Pulp lazy content loading streamer
   Active: active (running) since Tue 2020-05-05 15:48:29 EDT; 1 months 12 days ago
● pulp_workers.service - Pulp Celery Workers
   Active: active (exited) since Tue 2020-05-05 15:48:29 EDT; 1 months 12 days ago
● smart_proxy_dynflow_core.service - Foreman smart proxy dynflow core service
   Active: active (running) since Tue 2020-05-05 15:48:32 EDT; 1 months 12 days ago
● tomcat.service - Apache Tomcat Web Application Container
   Active: active (running) since Tue 2020-05-05 15:48:32 EDT; 1 months 12 days ago
● dynflowd.service - Foreman jobs daemon
   Active: active (running) since Tue 2020-05-05 15:48:46 EDT; 1 months 12 days ago
● httpd.service - The Apache HTTP Server
   Active: active (running) since Tue 2020-05-12 20:51:31 EDT; 1 months 5 days ago
● puppetserver.service - puppetserver Service
   Active: active (running) since Tue 2020-05-05 15:50:27 EDT; 1 months 12 days ago
● foreman-proxy.service - Foreman Proxy
   Active: active (running) since Tue 2020-05-05 15:50:31 EDT; 1 months 12 days ago
[root@wallsat67 ~]#
---

Note: only pulp_workers reports the "exited" state

	---
	● pulp_workers.service - Pulp Celery Workers
	   Active: active (exited) since Tue 2020-05-05 15:48:29 EDT; 1 months 12 days ago
	---
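
A generic way to spot services in this state (a sketch, not Satellite-specific) is to ask systemd for exited service units only; on an affected system, pulp_workers.service should appear in the list:
---
# List only service units currently in the "exited" sub-state.
systemctl list-units --type=service --state=exited
---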


// Checking only this service
---
[root@wallsat67 ~]# systemctl status -l pulp_workers
● pulp_workers.service - Pulp Celery Workers
   Loaded: loaded (/usr/lib/systemd/system/pulp_workers.service; enabled; vendor preset: disabled)
   Active: active (exited) since Tue 2020-05-05 15:48:29 EDT; 1 months 12 days ago
  Process: 2725 ExecStop=/usr/bin/python2 -m pulp.server.async.manage_workers stop (code=exited, status=0/SUCCESS)
  Process: 3003 ExecStart=/usr/bin/python2 -m pulp.server.async.manage_workers start (code=exited, status=0/SUCCESS)
 Main PID: 3003 (code=exited, status=0/SUCCESS)
    Tasks: 0
   CGroup: /system.slice/pulp_workers.service

May 05 15:48:29 wallsat67.usersys.redhat.com systemd[1]: Starting Pulp Celery Workers...
May 05 15:48:29 wallsat67.usersys.redhat.com systemd[1]: Started Pulp Celery Workers.
[root@wallsat67 ~]#
---


// Here we can see the worker processes are up and running
---
[root@wallsat67 ~]# ps -ef | grep worke
root         4     2  0 Apr20 ?        00:00:00 [kworker/0:0H]
root        16     2  0 Apr20 ?        00:00:00 [kworker/1:0H]
root       615     2  0 Apr20 ?        00:00:29 [kworker/1:1H]
root      1229     2  0 Apr20 ?        00:00:54 [kworker/0:1H]
apache    2997     1  0 May05 ?        06:37:16 /usr/bin/python2 /usr/bin/celery worker -A pulp.server.async.app -n resource_manager@%h -Q resource_manager -c 1 --events --umask 18 --pidfile=/var/run/pulp/resource_manager.pid
apache    3225  2997  0 May05 ?        00:42:40 /usr/bin/python2 /usr/bin/celery worker -A pulp.server.async.app -n resource_manager@%h -Q resource_manager -c 1 --events --umask 18 --pidfile=/var/run/pulp/resource_manager.pid
apache   10555     1  0 Jun11 ?        00:54:01 /usr/bin/python2 /usr/bin/celery worker -n reserved_resource_worker-0@%h -A pulp.server.async.app -c 1 --events --umask 18 --pidfile=/var/run/pulp/reserved_resource_worker-0.pid
apache   10563     1  0 Jun11 ?        00:54:10 /usr/bin/python2 /usr/bin/celery worker -n reserved_resource_worker-1@%h -A pulp.server.async.app -c 1 --events --umask 18 --pidfile=/var/run/pulp/reserved_resource_worker-1.pid
apache   10615 10563  0 Jun11 ?        00:07:27 /usr/bin/python2 /usr/bin/celery worker -n reserved_resource_worker-1@%h -A pulp.server.async.app -c 1 --events --umask 18 --pidfile=/var/run/pulp/reserved_resource_worker-1.pid
apache   10625 10555  0 Jun11 ?        00:57:41 /usr/bin/python2 /usr/bin/celery worker -n reserved_resource_worker-0@%h -A pulp.server.async.app -c 1 --events --umask 18 --pidfile=/var/run/pulp/reserved_resource_worker-0.pid
root     23972     2  0 12:50 ?        00:00:00 [kworker/1:1]
root     25621     2  0 13:00 ?        00:00:00 [kworker/0:3]
root     25642     2  0 13:00 ?        00:00:00 [kworker/1:0]
root     26094     2  0 Jun16 ?        00:00:02 [kworker/u4:1]
root     26283     2  0 Jun16 ?        00:00:00 [kworker/u4:2]
root     26573     2  0 13:05 ?        00:00:00 [kworker/0:0]
root     27278     2  0 13:10 ?        00:00:00 [kworker/0:1]
root     28446     2  0 13:13 ?        00:00:00 [kworker/1:2]
root     28848 27342  0 13:14 pts/0    00:00:00 grep --color=auto worke
[root@wallsat67 ~]# 
---
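
To tie these celery processes back to the systemd units they actually belong to, the "unit" output column of ps can be used (a sketch; the names of the per-worker units generated by manage_workers are an assumption):
---
# Show the owning systemd unit next to each celery worker process; the
# point is that none of them live in pulp_workers.service's empty cgroup.
ps -eo pid,unit,cmd | grep -i [c]elery
---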

// And just to confirm, Satellite is up and running on this server
---
[root@wallsat67 ~]# hammer ping
database:       
    Status:          ok
    Server Response: Duration: 0ms
candlepin:      
    Status:          ok
    Server Response: Duration: 31ms
candlepin_auth: 
    Status:          ok
    Server Response: Duration: 28ms
pulp:           
    Status:          ok
    Server Response: Duration: 102ms
pulp_auth:      
    Status:          ok
    Server Response: Duration: 52ms
foreman_tasks:  
    Status:          ok
    Server Response: Duration: 7ms

[root@wallsat67 ~]#
---

Comment 2 Kavita 2020-07-21 06:58:28 UTC
*** Bug 1858982 has been marked as a duplicate of this bug. ***

Comment 3 Tanya Tereshchenko 2021-04-19 17:48:36 UTC
This is solved in Pulp 3, where each worker runs as an individual service.

● pulpcore-worker - Pulp RQ Worker
   Loaded: loaded (/usr/lib/systemd/system/pulpcore-worker@.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-04-14 16:55:15 UTC; 5 days ago
 Main PID: 6206 (rq)
   Memory: 106.1M
   CGroup: /system.slice/system-pulpcore\x2dworker.slice/pulpcore-worker
           └─6206 /usr/local/lib/pulp/bin/python3.6 /usr/local/lib/pulp/bin/rq worker -w pulpcore.tasking.worker.PulpWorker --pid=/var/run/pulpcore-worker-1/reserved-resource-wor...

Apr 19 14:39:23 pulp2-nightly-pulp3-source-centos7.rhtemp.example.com rq[6206]: pulp [None]: rq.worker:INFO: Cleaning registries for queue: 6206@pulp2-nightly-pulp3-sourc...mple.com
Apr 19 14:59:30 pulp2-nightly-pulp3-source-centos7.rhtemp.example.com rq[6206]: pulp [None]: rq.worker:INFO: Cleaning registries for queue: 6206@pulp2-nightly-pulp3-sourc...mple.com


● pulpcore-worker - Pulp RQ Worker
   Loaded: loaded (/usr/lib/systemd/system/pulpcore-worker@.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-04-14 16:55:15 UTC; 5 days ago
 Main PID: 6205 (rq)
   Memory: 108.3M
   CGroup: /system.slice/system-pulpcore\x2dworker.slice/pulpcore-worker
           └─6205 /usr/local/lib/pulp/bin/python3.6 /usr/local/lib/pulp/bin/rq worker -w pulpcore.tasking.worker.PulpWorker --pid=/var/run/pulpcore-worker-2/reserved-resource-wor...

Apr 19 14:39:23 pulp2-nightly-pulp3-source-centos7.rhtemp.example.com rq[6205]: pulp [None]: rq.worker:INFO: Cleaning registries for queue: 6205@pulp2-nightly-pulp3-sourc...mple.com
Apr 19 14:59:30 pulp2-nightly-pulp3-source-centos7.rhtemp.example.com rq[6205]: pulp [None]: rq.worker:INFO: Cleaning registries for queue: 6205@pulp2-nightly-pulp3-sourc...mple.com
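
Since pulpcore-worker@.service is a template unit, the individual worker instances can be enumerated directly (a sketch; instance names such as pulpcore-worker@1 are an assumption and may vary per install):
---
# List all instantiated Pulp 3 worker units and their state.
systemctl list-units 'pulpcore-worker@*'
# Or inspect one specific instance:
systemctl status -l pulpcore-worker@1
---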

Comment 4 Tanya Tereshchenko 2021-04-19 17:49:57 UTC
Keeping this BZ open to be verified against Pulp 3.
Pulp 2 is in maintenance mode; there are no plans to fix it there.

Comment 5 Lai 2021-07-09 19:46:56 UTC
Steps to retest.

1. On a 6.10 Satellite, run the following command: systemctl status -l pulpcore-worker
2. Check that pulpcore-worker is active (running)
3. Repeat steps 1-2 for each pulpcore-worker instance

Expected result:
pulp workers should have "Active: active (running)"

Actual:
pulpcore-worker - Pulp RQ Worker
   Loaded: loaded (/etc/systemd/system/pulpcore-worker@.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-07-02 11:02:15 EDT; 1 weeks 0 days ago
 Main PID: 9085 (rq)
   CGroup: /system.slice/system-pulpcore\x2dworker.slice/pulpcore-worker
           └─9085 /usr/bin/python3 /usr/bin/rq worker -w pulpcore.tasking.worker.PulpWorker -c pulpcore.rqconfig --disable-job-desc-logging

pulpcore-worker - Pulp RQ Worker
   Loaded: loaded (/etc/systemd/system/pulpcore-worker@.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-07-02 11:02:15 EDT; 1 weeks 0 days ago
 Main PID: 9103 (rq)
   CGroup: /system.slice/system-pulpcore\x2dworker.slice/pulpcore-worker
           └─9103 /usr/bin/python3 /usr/bin/rq worker -w pulpcore.tasking.worker.PulpWorker -c pulpcore.rqconfig --disable-job-desc-logging

Verified on 6.10_07

Comment 8 errata-xmlrpc 2021-11-16 14:09:12 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: Satellite 6.10 Release), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:4702

