Bug 1656340 (pulp-deadlock) - pulp celery worker stuck when forking new thread and reconnecting to mongo
Summary: pulp celery worker stuck when forking new thread and reconnecting to mongo
Status: CLOSED ERRATA
Alias: pulp-deadlock
Product: Red Hat Satellite 6
Classification: Red Hat
Component: Installer
Version: 6.4.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: high (1 vote)
Target Milestone: 6.5.0
Assignee: Chris Roberts
QA Contact: Mike McCune
URL: https://projects.theforeman.org/issue...
Whiteboard:
Keywords: Triaged
Depends On:
Blocks: 1122832
 
Reported: 2018-12-05 09:45 UTC by Pavel Moravec
Modified: 2019-05-14 19:57 UTC
CC List: 13 users

Clone Of:
Clones: 1693328
Last Closed: 2019-05-14 19:57:41 UTC




External Trackers
Tracker ID Priority Status Summary Last Updated
Github theforeman foreman-installer pull 327 None None None 2019-03-12 17:35 UTC
Github theforeman puppet-foreman_proxy_content pull 196 None None None 2019-03-12 17:36 UTC
Foreman Issue Tracker 26323 None None None 2019-03-12 17:35 UTC
Red Hat Knowledge Base (Solution) 3687321 None None None 2018-12-06 07:25 UTC
Github theforeman puppet-katello pull 282 None None None 2019-03-12 17:36 UTC
Github theforeman puppet-pulp pull 363 None None None 2019-04-17 13:36 UTC

Description Pavel Moravec 2018-12-05 09:45:40 UTC
Description of problem:
Pulp celery workers can fork a new worker process while another thread is reopening a connection to mongo. Since pymongo is not fork-safe (per http://api.mongodb.com/python/current/faq.html#id3), there is a race condition in which the worker child process gets stuck with either of the backtraces shown in Additional info.
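The failure mode can be illustrated outside of Pulp. Below is a minimal sketch (hypothetical, not Pulp or pymongo code, Linux/POSIX only): a background thread holds a lock at the moment of fork(). The child inherits the lock in its locked state but not the thread that would release it, so any blocking acquire in the child hangs forever; here the child probes the lock non-blockingly instead of hanging:

```python
import os
import threading
import time

def child_sees_locked_lock():
    """Fork while a sibling thread holds a lock; report whether the
    child's inherited copy of the lock is stuck in the locked state."""
    lock = threading.Lock()
    t = threading.Thread(
        target=lambda: (lock.acquire(), time.sleep(1.0), lock.release()))
    t.start()
    time.sleep(0.2)                 # holder thread owns the lock now
    pid = os.fork()
    if pid == 0:
        # Child: the holder thread was not copied by fork(), so nobody
        # will ever release the lock; probe non-blockingly to avoid hanging.
        os._exit(42 if not lock.acquire(False) else 0)
    _, status = os.waitpid(pid, 0)
    t.join()
    return os.WEXITSTATUS(status) == 42

print(child_sees_locked_lock())     # True: the child's lock is stuck
```

In the real bug the stuck lock is inside pymongo's connection machinery, and the child blocks on it (rather than probing), which is why the worker hangs forever.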

This happens at a customer site; no reproducer is known at the moment (I will try to come up with one).


Version-Release number of selected component (if applicable):
Sat 6.4.0
pulp-server-2.16.4.1-1.el7sat.noarch


How reproducible:
??? (happens regularly for a customer; no known in-house reproducer yet)


Steps to Reproduce:
1. ???


Actual results:
Some pulp tasks are stuck forever as "running" (and some potentially pending forever), queued for / "executed" by an idle celery worker. Coredumps show backtraces like the ones below (waiting on a lock in a pymongo class).


Expected results:
No such hang / deadlock occurs.


Additional info:
(in both options, the thread forking is visible around frames 100-110 of the backtrace)

one option:
Thread 1 (Thread 0x7f8da3b6d700 
#8  0x00007f8dc746545b in PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8db4db1430, for file /usr/lib64/python2.7/site-packages/pymongo/pool.py, line 479, in reset 
#11 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8db4dbb3b0, for file /usr/lib64/python2.7/site-packages/pymongo/server.py, line 51, in close 
#14 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8db4db55f0, for file /usr/lib64/python2.7/site-packages/pymongo/topology.py, line 260, in close 
#17 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8db4dbb200, for file /usr/lib64/python2.7/site-packages/pymongo/mongo_client.py, line 677, in close 
#20 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8da1b66590, for file /usr/lib/python2.7/site-packages/mongoengine/connection.py, line 84, in disconnect 
#24 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8da3bcdbe8, for file /usr/lib/python2.7/site-packages/pulp/server/db/connection.py, line 347, in reconnect 
#27 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8da08e4050, for file /usr/lib/python2.7/site-packages/pulp/server/async/app.py, line 189, in reconnect_services 
#32 PyEval_EvalFrameEx (f=f@entry=Frame 0x1a0fa10, for file /usr/lib/python2.7/site-packages/celery/utils/dispatch/signal.py, line 227, in send 
#36 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8d9c005460, for file /usr/lib/python2.7/site-packages/celery/concurrency/prefork.py, line 81, in process_initializer 
#41 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8db4eab790, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 395, in after_fork 
#45 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8d900253a0, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 286, in __call__ 
#54 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8d8c0030d0, for file /usr/lib64/python2.7/site-packages/billiard/process.py, line 114, in run 
#58 PyEval_EvalFrameEx (f=f@entry=Frame 0x2262530, for file /usr/lib64/python2.7/site-packages/billiard/process.py, line 327, in _bootstrap 
#62 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8da3b8c050, for file /usr/lib64/python2.7/site-packages/billiard/popen_fork.py, line 79, in _launch 
#66 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8da3b8a750, for file /usr/lib64/python2.7/site-packages/billiard/popen_fork.py, line 24, in __init__ 
#77 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8da3b8a590, for file /usr/lib64/python2.7/site-packages/billiard/context.py, line 333, in _Popen 
#81 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8da3b8b050, for file /usr/lib64/python2.7/site-packages/billiard/process.py, line 124, in start 
#85 PyEval_EvalFrameEx (f=f@entry=Frame 0x21cfad0, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 1116, in _create_worker_process 
#89 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8da00139d8, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 1284, in _repopulate_pool 
#93 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8da0020050, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 1299, in _maintain_pool 
#97 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8da3bd5da8, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 537, in body 
#101 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8da3b8f050, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 478, in run 
#105 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8d9c000910, for file /usr/lib64/python2.7/threading.py, line 812, in __bootstrap_inner 
#108 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8da3b8aad0, for file /usr/lib64/python2.7/threading.py, line 785, in __bootstrap 

another option:
Thread 1 (Thread 0x7f21f2351700 
#8  0x00007f2215c5545b in PyEval_EvalFrameEx (f=f@entry=Frame 0x7f22035a05f0, for file /usr/lib64/python2.7/site-packages/pymongo/topology.py, line 258, in close 
#11 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f22035aa560, for file /usr/lib64/python2.7/site-packages/pymongo/mongo_client.py, line 677, in close 
#14 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21f034a590, for file /usr/lib/python2.7/site-packages/mongoengine/connection.py, line 84, in disconnect 
#18 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21f23b2d90, for file /usr/lib/python2.7/site-packages/pulp/server/db/connection.py, line 347, in reconnect 
#21 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f22035a9208, for file /usr/lib/python2.7/site-packages/pulp/server/async/app.py, line 189, in reconnect_services 
#26 PyEval_EvalFrameEx (f=f@entry=Frame 0x1c74a10, for file /usr/lib/python2.7/site-packages/celery/utils/dispatch/signal.py, line 227, in send 
#30 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21d0013820, for file /usr/lib/python2.7/site-packages/celery/concurrency/prefork.py, line 81, in process_initializer 
#35 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21d8027290, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 395, in after_fork 
#39 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21d00165e0, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 286, in __call__ 
#48 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21e400a440, for file /usr/lib64/python2.7/site-packages/billiard/process.py, line 114, in run 
#52 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21e40086d0, for file /usr/lib64/python2.7/site-packages/billiard/process.py, line 327, in _bootstrap 
#56 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21f2370050, for file /usr/lib64/python2.7/site-packages/billiard/popen_fork.py, line 79, in _launch 
#60 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21f236e750, for file /usr/lib64/python2.7/site-packages/billiard/popen_fork.py, line 24, in __init__ 
#71 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21f236e590, for file /usr/lib64/python2.7/site-packages/billiard/context.py, line 333, in _Popen 
#75 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21f236f050, for file /usr/lib64/python2.7/site-packages/billiard/process.py, line 124, in start 
#79 PyEval_EvalFrameEx (f=f@entry=Frame 0x2434110, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 1116, in _create_worker_process 
#83 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21f001a608, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 1284, in _repopulate_pool 
#87 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21f001c220, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 1299, in _maintain_pool 
#91 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21f2374050, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 537, in body 
#95 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21f2373050, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 478, in run 
#99 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21ec000910, for file /usr/lib64/python2.7/threading.py, line 812, in __bootstrap_inner 
#102 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21f236ead0, for file /usr/lib64/python2.7/threading.py, line 785, in __bootstrap

Comment 2 Pavel Moravec 2018-12-06 09:48:31 UTC
Brian,
could you please advise me how to trigger a child process to have either of those backtraces?

Key problem I have:

In https://github.com/celery/billiard/blob/master/billiard/popen_fork.py#L72

the fork of the child should happen. The backtrace frames:

#58 PyEval_EvalFrameEx (f=f@entry=Frame 0x2262530, for file /usr/lib64/python2.7/site-packages/billiard/process.py, line 327, in _bootstrap 
#62 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8da3b8c050, for file /usr/lib64/python2.7/site-packages/billiard/popen_fork.py, line 79, in _launch 

confirm that this really happened.

BUT I cannot simulate this at all.

When I add debugging like:

    def _launch(self, process_obj):
        code = 1
        parent_r, child_w = os.pipe()
        print("PavelM: _launch: pre fork for self=%s" % (self))
        self.pid = os.fork()
        print("PavelM: _launch: post fork for self=%s and self.pid=%s" % (self, self.pid))
        if self.pid == 0:
..

then, when a child process is recycled, I would expect to see something like:

Dec  6 10:37:05 pmoravec-sat64-on-rhev celery: PavelM: _launch: pre fork for self=<billiard.popen_fork.Popen object at 0x7f05a5351610>
Dec  6 10:37:05 pmoravec-sat64-on-rhev celery: PavelM: _launch: post fork for self=<billiard.popen_fork.Popen object at 0x7f05a5351610> and self.pid=9118
Dec  6 10:37:05 pmoravec-sat64-on-rhev celery: PavelM: _launch: post fork for self=<billiard.popen_fork.Popen object at 0x7f05a5351610> and self.pid=0

in /var/log/messages: one log prior to the fork, and two logs (for the given "self" object) after the fork, one from the parent and one from the child process.

But I see *just* the parent process there, even though:
- "ps awfux" shows a new child process with exactly the PID from the above output (9118 in this example) has been spawned
- a coredump of that process shows it executed the "if self.pid == 0" branch and has self.pid=0 set there


So how is the child process really spawned, then? Does billiard somehow spoof os.fork()?



(read on only if you are not confused enough yet; to make it even more confusing, when I run:
- "celery -A pulp.server.async.app inspect active" to trigger bz1643650,
- restart pulp_workers,

then I *do* see both logs:

Dec  6 10:43:46 pmoravec-sat64-on-rhev celery: PavelM: _launch: pre fork for self=<billiard.popen_fork.Popen object at 0x7f05a53515d0>
Dec  6 10:43:46 pmoravec-sat64-on-rhev celery: PavelM: _launch: post fork for self=<billiard.popen_fork.Popen object at 0x7f05a53515d0> and self.pid=0
Dec  6 10:43:47 pmoravec-sat64-on-rhev celery: PavelM: _launch: pre fork for self=<billiard.popen_fork.Popen object at 0x7f05a53515d0>
Dec  6 10:43:47 pmoravec-sat64-on-rhev celery: PavelM: _launch: post fork for self=<billiard.popen_fork.Popen object at 0x7f05a53515d0> and self.pid=17739

BUT the child with PID 17739 was _recycled_ / killed at that time (observed by running "date; ps awfux; sleep 0.1" concurrently).

So why do I see logs from _forking_ a process at the _end_ of the process lifecycle?)

Comment 3 Brian Bouterse 2018-12-06 18:59:33 UTC
@pmoravec, I read your message a few times to make sure I understand it, and I think I do. Unfortunately I don't have insight into your questions. I'll share some thoughts, maybe they will help.

I don't think billiard has monkey-patched os.fork(); it would have had to do that to make the os.fork() call behave differently. I poked around the billiard repo, but I didn't see anything like that in my quick look.

You're seeing a difference in how it recycles in different scenarios. Is it possible that process recycling isn't occurring? How do you know the recycling is occurring?

The billiard developers would know a lot more about this. One thing to consider would be to ask the same question in an open issue in the billiard upstream issue tracker.

Comment 4 Pavel Moravec 2018-12-10 10:36:40 UTC
My understanding is that both this BZ and https://bugzilla.redhat.com/show_bug.cgi?id=1643650#c23 will be fixed only in pulp3, which is being developed now, with no known release date and no confirmed inclusion in a Sat 6.Y minor release.

So it does not make much sense to reproduce it if the whole of celery will be replaced by rq.

Comment 5 Brian Bouterse 2018-12-10 14:35:37 UTC
@pmoravec, yes, that is the plan. Pulp3 has used RQ since roughly April 2018.

Comment 6 Mike McCune 2019-02-05 20:06:44 UTC
WORKAROUND:

While we determine the best method for resolving this bug on Pulp 2 and Satellite 6, you can alleviate the stuck-worker problem by disabling Pulp process recycling:

1) Edit /etc/default/pulp_workers 

2) Comment out the PULP_MAX_TASKS_PER_CHILD setting with a #. If this line is already commented out, you are unlikely to be hitting this specific bug.

# PULP_MAX_TASKS_PER_CHILD=2 

3) Restart services

foreman-maintain service restart
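The susceptibility check in step 2 boils down to whether an uncommented PULP_MAX_TASKS_PER_CHILD line exists in /etc/default/pulp_workers. A small sketch of that check (a hypothetical helper, not part of Pulp or Satellite tooling):

```python
import re

def recycling_enabled(conf_text):
    """True if an uncommented PULP_MAX_TASKS_PER_CHILD=<n> line is present,
    i.e. Pulp process recycling is active and this deadlock can be hit."""
    return bool(re.search(r'^\s*PULP_MAX_TASKS_PER_CHILD\s*=\s*\d+',
                          conf_text, re.MULTILINE))

# Example: a config where the setting is commented out with '#'
sample = "# PULP_MAX_TASKS_PER_CHILD=2\nPULP_CONCURRENCY=8\n"
print(recycling_enabled(sample))   # False: recycling disabled, workaround in place
```

In practice you would feed it the contents of /etc/default/pulp_workers, e.g. `recycling_enabled(open('/etc/default/pulp_workers').read())`.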

Comment 14 David Davis 2019-02-27 17:46:25 UTC
For this bug, we are going to fix the deadlock issues in Pulp 2 by disabling process recycling (i.e. commenting out the PULP_MAX_TASKS_PER_CHILD setting) by default. Longer term, in Pulp 3, we are removing celery and replacing it with rq to better solve the problem.

Comment 16 Bryan Kearney 2019-03-12 22:06:38 UTC
Moving this bug to POST for triage into Satellite 6 since the upstream issue https://projects.theforeman.org/issues/26323 has been resolved.

Comment 18 Mike McCune 2019-04-16 22:11:21 UTC
Chris, I still see the configuration commented out in the latest snap, I thought it would get removed entirely from the template:

grep TASKS_PER_CHILD /etc/default/pulp_workers
# left commented, process recycling is disabled. PULP_MAX_TASKS_PER_CHILD must be > 0.
# PULP_MAX_TASKS_PER_CHILD=2


if this is as good as we are going to get for 6.5 we can live with it, but I thought we discussed it getting removed.

Comment 19 Eric Helms 2019-04-17 13:08:47 UTC
I believe that file is controlled by the Pulp project and shipped in its codebase by default (not by the installer or puppet modules), so we would need to make the change there and release it if we care about this file:

https://github.com/pulp/pulp/blob/2-master/server/etc/default/systemd_pulp_workers#L10

Comment 20 Mike McCune 2019-04-17 14:10:45 UTC
ACK, if that file is in the Pulp codebase, for this bug we can keep it commented out and I'll file a bug upstream in Pulp2 to remove this setting.

Moving this to VERIFIED as we now have this setting properly commented out.

Comment 21 Eric Helms 2019-04-17 16:37:10 UTC
Here comes the noise: while Pulp does ship that file, I missed that we are now overriding it in the installer and that the fix for this was done but not pulled into our builds. Moving back to POST; we will deliver this.

Comment 23 Mike McCune 2019-04-18 14:19:35 UTC
VERIFIED in SNAP 25:

# cat /etc/default/pulp_workers 
# Configuration file for Pulp's Celery workers

# Define the number of worker nodes you wish to have here. This defaults to the number of processors
# that are detected on the system if left commented here.
PULP_CONCURRENCY=8

# Configure Python's encoding for writing all logs, stdout and stderr
PYTHONIOENCODING="UTF-8"


no more PULP_MAX_TASKS...

Comment 24 Taft Sanders 2019-05-01 18:52:17 UTC
I had an issue recently where a Satellite would die from OOM errors due to this setting being commented out. As noted in the pulp upstream tuning guide, because pulp relies on python 2 we need this setting to keep memory usage down, unless this has changed in recent releases and the value is no longer needed. OOMs were seen in 6.4.2.

https://docs.pulpproject.org/user-guide/tuning.html?highlight=pulp_max_tasks_per_child#memory-issues

Comment 25 Mike McCune 2019-05-01 19:51:29 UTC
Taft, was the OOM killer killing the pulp_worker processes or killing something else?

Do you have 'ps' output with per-process RSS usage from prior to the OOM event, to determine what was using the most system memory?
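For future occurrences, a per-process RSS snapshot can also be taken directly from /proc when 'ps' output was not saved. A rough Linux-only sketch (field layout per proc(5); the helper name is hypothetical):

```python
import os

def rss_snapshot(top=10):
    """Return [(rss_kib, pid, comm)] for running processes, largest RSS first."""
    page_kib = os.sysconf('SC_PAGE_SIZE') // 1024
    rows = []
    for pid in filter(str.isdigit, os.listdir('/proc')):
        try:
            with open('/proc/%s/statm' % pid) as f:
                resident_pages = int(f.read().split()[1])  # 2nd statm field
            with open('/proc/%s/comm' % pid) as f:
                comm = f.read().strip()
        except (OSError, IOError, IndexError, ValueError):
            continue            # process exited while we were reading
        rows.append((resident_pages * page_kib, int(pid), comm))
    return sorted(rows, reverse=True)[:top]

for rss_kib, pid, comm in rss_snapshot():
    print('%8d KiB  %6d  %s' % (rss_kib, pid, comm))
```

Running this periodically (e.g. from cron) before an OOM would show whether the pulp_worker processes were the ones growing.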

Comment 27 Bryan Kearney 2019-05-14 19:57:41 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:1222

