Bug 1277456 - Make pulp processes more robust / recoverable / verbose on problems
Status: CLOSED UPSTREAM
Product: Red Hat Satellite 6
Classification: Red Hat
Component: Pulp
6.1.3
All Linux
high Severity high (vote)
: Unspecified
: --
Assigned To: satellite6-bugs
Katello QA List
:
Depends On:
Blocks:
Reported: 2015-11-03 06:22 EST by Pavel Moravec
Modified: 2017-07-26 15:37 EDT (History)
2 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-11-13 12:00:45 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Pavel Moravec 2015-11-03 06:22:26 EST
Description of problem:
pulp consists of various processes that are monitored to check whether they are running. BUT when a pulp process dies:
1) there is absolutely no log what happened, why the process exited or so
2) the monitoring just detects process is gone, but does not attempt to respawn it again.

This monitoring is of little use if its only activity is to log

Workers 'resource_manager@satellite.example.com' has gone missing, removing from list of workers

Please improve the monitoring (or replace it with something else, or implement something new) such that both 1) and 2) are fixed.

1) is required for root cause analysis of why a worker or resource manager died - currently there is no clue at all, and no way for either GSS or developers to investigate.

2) is required for a more stable and robust pulp that takes care of dead processes.
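The kind of watchdog pass that points 1) and 2) ask for can be sketched as follows. This is a minimal illustration, not Pulp's actual worker_watcher code; the names, the heartbeat table, and the respawn callback are all hypothetical, and only the 300-second timeout is taken from the discussion below.

```python
import time
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("worker_watchdog")

# Hypothetical threshold; Pulp's worker_watcher reportedly uses 300 s.
HEARTBEAT_TIMEOUT = 300  # seconds without a heartbeat before a worker is declared dead


def find_missing_workers(heartbeats, now, timeout=HEARTBEAT_TIMEOUT):
    """Return names of workers whose last heartbeat is older than `timeout`."""
    return [name for name, last in heartbeats.items() if now - last > timeout]


def watchdog_pass(heartbeats, now, respawn):
    """One monitoring pass: log each missing worker, then try to respawn it."""
    for name in find_missing_workers(heartbeats, now):
        # Point 1): record *that* and *when* the worker vanished,
        # so root-cause analysis has something to start from.
        log.error("Worker %r missed heartbeats for >%ds; last seen at %s",
                  name, HEARTBEAT_TIMEOUT, time.ctime(heartbeats[name]))
        # Point 2): attempt recovery instead of only removing the worker
        # from the list of known workers.
        respawn(name)
```

The key difference from the behavior described above is the `respawn(name)` call: detection alone only shrinks the worker list, while a recovery action restores capacity.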


Version-Release number of selected component (if applicable):
Sat 6.1.3


How reproducible:
n.a.


Steps to Reproduce:
somehow cause a pulp worker process or the resource manager to exit silently (exact trigger unknown)
check what is logged and whether pulp restarts it


Actual results:
no log why the worker is gone
no activity made to restore it


Expected results:
proper log why the worker is gone
an attempt to restart the worker


Additional info:
The lack of robustness is all the more surprising when compared to the fact that both the pulp celery worker thread and the resource worker thread send heartbeats via the qpid broker every second, and even _write_ that activity to mongo (the workers collection). Why are the processes probed like this, and why are the keepalive probes written to the backend database _every_second_ (!!!), if points 1) and 2) are not implemented?
Comment 2 Pavel Moravec 2015-11-03 07:14:22 EST
One curious observation:

pulp sends heartbeats every 2 seconds (at least by my observation), and writes them to mongo db every 2 seconds. But it takes long minutes to detect that a worker is gone, or to recover. Why?

Assume scenario:
1) pulp processes freshly restarted, everything working
2) select a worker and (via iptables) drop all packets between the process and the qpidd broker (there should be 2 TCP connections, so in the INPUT chain I block both --sport and --dport of the client ports of those connections)
3) remember time when starting the connection outage simulation (in my case 12:39:45)
4) observe /var/log/messages for connection-loss detection - in my case, pulp.server.async.scheduler:ERROR: .. was logged at 12:44:49, and celery.worker.consumer:WARNING: Connection to broker lost at 12:55:09. Only at 13:10:46 did the worker reconnect and worker_watcher discover it.
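Step 2) of the scenario could look like the commands below. The port numbers 53820 and 53821 are hypothetical placeholders for the worker's two client-side ports; find the real ones first, e.g. with `ss -tnp | grep qpidd`. Run as root.

```shell
# Block both directions of the worker<->qpidd traffic in the INPUT chain,
# matching the client-side port as source and as destination.
iptables -I INPUT -p tcp --sport 53820 -j DROP
iptables -I INPUT -p tcp --dport 53820 -j DROP
iptables -I INPUT -p tcp --sport 53821 -j DROP
iptables -I INPUT -p tcp --dport 53821 -j DROP

# Record when the outage simulation started (step 3).
date

# ...later, remove the rules to let the worker reconnect:
iptables -D INPUT -p tcp --sport 53820 -j DROP
iptables -D INPUT -p tcp --dport 53820 -j DROP
iptables -D INPUT -p tcp --sport 53821 -j DROP
iptables -D INPUT -p tcp --dport 53821 -j DROP
```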

So the pulp worker has a 2-second heartbeat interval at the application level, but the scheduler detects a worker is gone only after 5 minutes? And the worker itself detects it after 15 minutes? And reconnects after another 15 minutes?? So with a 2-second heartbeat, a worker thread recovers from a connection problem after 30 minutes?? What is the purpose of the heartbeats, then?
Comment 3 Pavel Moravec 2015-11-03 07:16:54 EST
.. and once the worker is registered again, worker_watcher gets confused:

Nov  3 13:10:46 pmoravec-sat61-rhel7 pulp: pulp.server.async.worker_watcher:INFO: New worker 'reserved_resource_worker-0@pmoravec-sat61-rhel7.gsslab.brq.redhat.com' discovered
Nov  3 13:10:46 pmoravec-sat61-rhel7 pulp: pulp.server.async.worker_watcher:INFO: Worker 'reserved_resource_worker-0@pmoravec-sat61-rhel7.gsslab.brq.redhat.com' shutdown
Nov  3 13:10:46 pmoravec-sat61-rhel7 pulp: py.warnings:WARNING: (13881-92544) /usr/lib/python2.7/site-packages/kombu/pidbox.py:75: UserWarning: A node named reserved_resource_worker-0@pmoravec-sat61-rhel7.gsslab.brq.redhat.com is already using this process mailbox!
Nov  3 13:10:46 pmoravec-sat61-rhel7 pulp: py.warnings:WARNING: (13881-92544)
Nov  3 13:10:46 pmoravec-sat61-rhel7 pulp: py.warnings:WARNING: (13881-92544) Maybe you forgot to shutdown the other node or did not do so properly?
Nov  3 13:10:46 pmoravec-sat61-rhel7 pulp: py.warnings:WARNING: (13881-92544) Or if you meant to start multiple nodes on the same host please make sure
Nov  3 13:10:46 pmoravec-sat61-rhel7 pulp: py.warnings:WARNING: (13881-92544) you give each node a unique node name!
Nov  3 13:10:46 pmoravec-sat61-rhel7 pulp: py.warnings:WARNING: (13881-92544)
Nov  3 13:10:46 pmoravec-sat61-rhel7 pulp: py.warnings:WARNING: (13881-92544)   warnings.warn(W_PIDBOX_IN_USE.format(node=self))
Nov  3 13:10:46 pmoravec-sat61-rhel7 pulp: py.warnings:WARNING: (13881-92544)
Nov  3 13:10:48 pmoravec-sat61-rhel7 pulp: pulp.server.async.worker_watcher:INFO: New worker 'reserved_resource_worker-0@pmoravec-sat61-rhel7.gsslab.brq.redhat.com' discovered
Comment 4 Brian Bouterse 2015-11-06 12:18:29 EST
I'm glad you filed this because we want Pulp to be awesome. Here are some comments.

The worker realizing it is no longer connected is not important. Once the worker_watcher notices a worker has been missing for 5 minutes, its work is cancelled. If the worker recovers before it is noticed missing, it is not considered a failure. If it recovers after being noticed missing, it rejoins just like a new worker would.

In a later version of pulp the heartbeat interval was raised to 30 seconds from the celery default of 2. This makes the mongo connection less chatty, and 2 seconds wasn't really benefiting us anyway. I'm not sure when this change will be picked up in sat6.

Regarding the 5-minute delay before a worker is taken out of service: this is an arbitrary time that Pulp sets. It could be any value desired, and sat6 could set it differently if they want. Note: it's not exactly 5 minutes. The worker_watcher wakes up every 90 seconds and deletes workers that have not checked in within the last 300 seconds. The worst case is therefore roughly 300 + 90 seconds, assuming the worker_watcher woke up just before a lost worker's last check-in was 300 seconds old.
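The worst-case arithmetic above can be written out explicitly. This is just a model of the timing described in this comment (90-second watcher passes, 300-second check-in window), not Pulp code:

```python
# Timing constants as described in this comment.
WAKE_INTERVAL = 90    # seconds between worker_watcher passes
MISSING_AFTER = 300   # seconds since last check-in before a worker is removed


def worst_case_detection_delay(wake_interval=WAKE_INTERVAL,
                               missing_after=MISSING_AFTER):
    """Upper bound on how long a dead worker can go unnoticed.

    If the watcher wakes just *before* the worker's last check-in turns
    `missing_after` seconds old, removal only happens on the next pass,
    so the delay can approach missing_after + wake_interval.
    """
    return missing_after + wake_interval


print(worst_case_detection_delay())  # 390 seconds, i.e. 6.5 minutes
```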

Finally, when a worker does recover or get re-registered the worker_watcher does not get confused. It does log in a confusing way though. I believe we fixed this in a newer version of Pulp but if you want me to check more on it, needsinfo me.
Comment 5 Brian Bouterse 2015-11-13 09:55:54 EST
Changes suggested in this BZ would not have helped the debugging of those customer cases. Also many of the suggested improvements are already included in sat6 or in upstream Pulp.
Comment 6 Pavel Moravec 2015-11-13 12:00:45 EST
Agreed with Brian that the changes in upstream are sufficient and will appear in downstream later on.
