Bug 1693328 - (pulp-deadlock) pulp celery worker stuck when forking new thread and reconnecting to mongo
Summary: (pulp-deadlock) pulp celery worker stuck when forking new thread and reconnecting to mongo
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Installation
Version: 6.4.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: unspecified
Target Milestone: 6.4.3
Assignee: Chris Roberts
QA Contact: Perry Gagne
URL: https://projects.theforeman.org/issue...
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-03-27 14:46 UTC by Mike McCune
Modified: 2021-01-18 14:06 UTC (History)
CC: 16 users

Fixed In Version: katello-installer-base-3.7.0.23-1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: pulp-deadlock
Environment:
Last Closed: 2019-04-30 16:35:30 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
1693328-hf.tar (2.34 MB, application/x-tar)
2019-04-17 20:50 UTC, Mike McCune


Links
System ID Private Priority Status Summary Last Updated
Foreman Issue Tracker 26323 0 High Closed pulp celery worker stuck when forking new thread and reconnecting to mongo, remove max tasks per child 2021-01-18 14:37:30 UTC
Foreman Issue Tracker 26563 0 High Closed Remove reference of Pulp Max Tasks Per Child in configs 2021-01-18 14:37:31 UTC
Github theforeman puppet-pulp pull 363 0 'None' closed Fixes #26563 - Remove reference of tasks in config 2021-01-18 14:37:30 UTC
Red Hat Product Errata RHBA-2019:0913 0 None None None 2019-04-30 16:35:32 UTC

Description Mike McCune 2019-03-27 14:46:42 UTC
+++ This bug was initially created as a clone of Bug #1656340 +++

Description of problem:
Pulp celery workers can fork a new thread while reopening a connection to mongo. Since pymongo isn't thread safe (per http://api.mongodb.com/python/current/faq.html#id3), there is a race condition in which the worker child process gets stuck with either of the backtraces shown under Additional info.
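To make the mechanism concrete, here is a minimal, self-contained sketch of the generic failure mode, using only the Python stdlib (not Pulp or pymongo code; names and timings are illustrative): a background thread holds a lock at the moment of fork(), so the child inherits a locked mutex that no surviving thread will ever release.

import os
import threading
import time

lock = threading.Lock()

def hold_lock_briefly():
    # Stands in for pymongo's internal pool/topology locks being held
    # while a (re)connect is in flight in another thread.
    with lock:
        time.sleep(2)

t = threading.Thread(target=hold_lock_briefly)
t.start()
time.sleep(0.1)          # make sure the background thread owns the lock

pid = os.fork()
if pid == 0:
    # Child: fork() copied the locked state of the mutex but not the
    # thread that owns it, so this acquire() blocks forever -- the same
    # shape of hang as in the backtraces under Additional info.
    lock.acquire()       # never returns
    os._exit(0)
else:
    t.join()             # the parent's copy of the lock gets released...
    time.sleep(1)        # ...but the child's copy never does
    print("parent: child %d is stuck on its copied lock; killing it" % pid)
    os.kill(pid, 9)
    os.waitpid(pid, 0)

Killing the child at the end is only cleanup for the demo; in the real worker, the stuck child is the idle "running" worker described under Actual results.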

This happens at a customer site; no reproducer is known at the moment (will try to come up with one).


Version-Release number of selected component (if applicable):
Sat 6.4.0
pulp-server-2.16.4.1-1.el7sat.noarch


How reproducible:
??? (regularly for a customer; no known in-house reproducer (yet))


Steps to Reproduce:
1. ???


Actual results:
Some pulp tasks are stuck forever as "running" (and some potentially pending forever), queued / "executed" by an idle celery worker. Coredumps show backtraces like the ones below (waiting on a lock in a pymongo class).


Expected results:
No such hang / deadlock occurs.


Additional info:
(in both options, the forking thread is visible around frames 100-110 of the backtrace)

one option:
Thread 1 (Thread 0x7f8da3b6d700 
#8  0x00007f8dc746545b in PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8db4db1430, for file /usr/lib64/python2.7/site-packages/pymongo/pool.py, line 479, in reset 
#11 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8db4dbb3b0, for file /usr/lib64/python2.7/site-packages/pymongo/server.py, line 51, in close 
#14 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8db4db55f0, for file /usr/lib64/python2.7/site-packages/pymongo/topology.py, line 260, in close 
#17 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8db4dbb200, for file /usr/lib64/python2.7/site-packages/pymongo/mongo_client.py, line 677, in close 
#20 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8da1b66590, for file /usr/lib/python2.7/site-packages/mongoengine/connection.py, line 84, in disconnect 
#24 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8da3bcdbe8, for file /usr/lib/python2.7/site-packages/pulp/server/db/connection.py, line 347, in reconnect 
#27 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8da08e4050, for file /usr/lib/python2.7/site-packages/pulp/server/async/app.py, line 189, in reconnect_services 
#32 PyEval_EvalFrameEx (f=f@entry=Frame 0x1a0fa10, for file /usr/lib/python2.7/site-packages/celery/utils/dispatch/signal.py, line 227, in send 
#36 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8d9c005460, for file /usr/lib/python2.7/site-packages/celery/concurrency/prefork.py, line 81, in process_initializer 
#41 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8db4eab790, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 395, in after_fork 
#45 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8d900253a0, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 286, in __call__ 
#54 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8d8c0030d0, for file /usr/lib64/python2.7/site-packages/billiard/process.py, line 114, in run 
#58 PyEval_EvalFrameEx (f=f@entry=Frame 0x2262530, for file /usr/lib64/python2.7/site-packages/billiard/process.py, line 327, in _bootstrap 
#62 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8da3b8c050, for file /usr/lib64/python2.7/site-packages/billiard/popen_fork.py, line 79, in _launch 
#66 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8da3b8a750, for file /usr/lib64/python2.7/site-packages/billiard/popen_fork.py, line 24, in __init__ 
#77 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8da3b8a590, for file /usr/lib64/python2.7/site-packages/billiard/context.py, line 333, in _Popen 
#81 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8da3b8b050, for file /usr/lib64/python2.7/site-packages/billiard/process.py, line 124, in start 
#85 PyEval_EvalFrameEx (f=f@entry=Frame 0x21cfad0, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 1116, in _create_worker_process 
#89 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8da00139d8, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 1284, in _repopulate_pool 
#93 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8da0020050, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 1299, in _maintain_pool 
#97 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8da3bd5da8, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 537, in body 
#101 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8da3b8f050, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 478, in run 
#105 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8d9c000910, for file /usr/lib64/python2.7/threading.py, line 812, in __bootstrap_inner 
#108 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8da3b8aad0, for file /usr/lib64/python2.7/threading.py, line 785, in __bootstrap 

another option:
Thread 1 (Thread 0x7f21f2351700 
#8  0x00007f2215c5545b in PyEval_EvalFrameEx (f=f@entry=Frame 0x7f22035a05f0, for file /usr/lib64/python2.7/site-packages/pymongo/topology.py, line 258, in close 
#11 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f22035aa560, for file /usr/lib64/python2.7/site-packages/pymongo/mongo_client.py, line 677, in close 
#14 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21f034a590, for file /usr/lib/python2.7/site-packages/mongoengine/connection.py, line 84, in disconnect 
#18 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21f23b2d90, for file /usr/lib/python2.7/site-packages/pulp/server/db/connection.py, line 347, in reconnect 
#21 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f22035a9208, for file /usr/lib/python2.7/site-packages/pulp/server/async/app.py, line 189, in reconnect_services 
#26 PyEval_EvalFrameEx (f=f@entry=Frame 0x1c74a10, for file /usr/lib/python2.7/site-packages/celery/utils/dispatch/signal.py, line 227, in send 
#30 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21d0013820, for file /usr/lib/python2.7/site-packages/celery/concurrency/prefork.py, line 81, in process_initializer 
#35 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21d8027290, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 395, in after_fork 
#39 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21d00165e0, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 286, in __call__ 
#48 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21e400a440, for file /usr/lib64/python2.7/site-packages/billiard/process.py, line 114, in run 
#52 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21e40086d0, for file /usr/lib64/python2.7/site-packages/billiard/process.py, line 327, in _bootstrap 
#56 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21f2370050, for file /usr/lib64/python2.7/site-packages/billiard/popen_fork.py, line 79, in _launch 
#60 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21f236e750, for file /usr/lib64/python2.7/site-packages/billiard/popen_fork.py, line 24, in __init__ 
#71 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21f236e590, for file /usr/lib64/python2.7/site-packages/billiard/context.py, line 333, in _Popen 
#75 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21f236f050, for file /usr/lib64/python2.7/site-packages/billiard/process.py, line 124, in start 
#79 PyEval_EvalFrameEx (f=f@entry=Frame 0x2434110, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 1116, in _create_worker_process 
#83 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21f001a608, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 1284, in _repopulate_pool 
#87 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21f001c220, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 1299, in _maintain_pool 
#91 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21f2374050, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 537, in body 
#95 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21f2373050, for file /usr/lib64/python2.7/site-packages/billiard/pool.py, line 478, in run 
#99 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21ec000910, for file /usr/lib64/python2.7/threading.py, line 812, in __bootstrap_inner 
#102 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f21f236ead0, for file /usr/lib64/python2.7/threading.py, line 785, in __bootstrap

--- Additional comment from  on 2018-12-05T09:45:45Z 

Since this issue was entered in Red Hat Bugzilla, the pm_ack has been set to + automatically for the next planned release.

--- Additional comment from  on 2018-12-06T09:48:31Z 

Brian,
could you please advise me how to trigger a child process to have either of those backtraces?

Key problem I have:

In https://github.com/celery/billiard/blob/master/billiard/popen_fork.py#L72

the fork of the child process should happen. The backtrace frames

#58 PyEval_EvalFrameEx (f=f@entry=Frame 0x2262530, for file /usr/lib64/python2.7/site-packages/billiard/process.py, line 327, in _bootstrap 
#62 PyEval_EvalFrameEx (f=f@entry=Frame 0x7f8da3b8c050, for file /usr/lib64/python2.7/site-packages/billiard/popen_fork.py, line 79, in _launch 

confirm that this really happened.

BUT I cannot simulate this at all.

When I add debugging like:

    def _launch(self, process_obj):
        code = 1
        parent_r, child_w = os.pipe()
        print("PavelM: _launch: pre fork for self=%s" % (self))
        self.pid = os.fork()
        print("PavelM: _launch: post fork for self=%s and self.pid=%s" % (self, self.pid))
        if self.pid == 0:
..

then - when child process is recycled - I would expect to see something like:

Dec  6 10:37:05 pmoravec-sat64-on-rhev celery: PavelM: _launch: pre fork for self=<billiard.popen_fork.Popen object at 0x7f05a5351610>
Dec  6 10:37:05 pmoravec-sat64-on-rhev celery: PavelM: _launch: post fork for self=<billiard.popen_fork.Popen object at 0x7f05a5351610> and self.pid=9118
Dec  6 10:37:05 pmoravec-sat64-on-rhev celery: PavelM: _launch: post fork for self=<billiard.popen_fork.Popen object at 0x7f05a5351610> and self.pid=0

in /var/log/messages - one log prior to the fork, and two logs (for the given "self" object) after the fork - one from the parent and one from the child process.

But I see *just* the parent process log there. Meanwhile:
- "ps awfux" shows that a new child process with exactly the PID from the above output (9118 in the example) has been spawned
- a coredump of that process shows it executed the "if self.pid == 0" branch and has self.pid=0 set there


So how is the child process really spawned, then? Does billiard somehow spoof os.fork()?



(read on only if you are not yet confused enough; to make it even more confusing, when I run:
- "celery -A pulp.server.async.app inspect active" to trigger bz1643650,
- restart pulp_workers,

then I *do* see both logs:

Dec  6 10:43:46 pmoravec-sat64-on-rhev celery: PavelM: _launch: pre fork for self=<billiard.popen_fork.Popen object at 0x7f05a53515d0>
Dec  6 10:43:46 pmoravec-sat64-on-rhev celery: PavelM: _launch: post fork for self=<billiard.popen_fork.Popen object at 0x7f05a53515d0> and self.pid=0
Dec  6 10:43:47 pmoravec-sat64-on-rhev celery: PavelM: _launch: pre fork for self=<billiard.popen_fork.Popen object at 0x7f05a53515d0>
Dec  6 10:43:47 pmoravec-sat64-on-rhev celery: PavelM: _launch: post fork for self=<billiard.popen_fork.Popen object at 0x7f05a53515d0> and self.pid=17739

BUT the child with PID 17739 was being _recycled_ / killed at that time (observed by running "date; ps awfux; sleep 0.1" concurrently).

So why do I see logs from _forking_ a process at the _end_ of its lifecycle?)
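One ordinary CPython effect that can produce exactly this "fork logs only show up at the end of the child's life" pattern -- offered only as a hedged possibility, not as a claim about what billiard or this deployment actually does -- is block buffering of stdout when it is a pipe (e.g. when captured for syslog/journald): the child's print output sits in a userspace buffer and is only flushed when that child exits. A minimal sketch (the file name and the "| cat" pipe are just for the illustration):

import os
import sys
import time

pid = os.fork()
if pid == 0:
    # With stdout attached to a pipe, this print is block-buffered and
    # does not reach the reader until the interpreter flushes at exit.
    print("child: printed right after fork")
    time.sleep(3)        # the child "lives" for a while
    sys.exit(0)          # normal exit: stdio is flushed now, so the line appears now
else:
    print("parent: forked child %d" % pid)
    sys.stdout.flush()   # parent flushes explicitly, visible immediately
    os.waitpid(pid, 0)

Running this as "python bufdemo.py | cat" prints the parent line immediately and the child line only about 3 seconds later, when the child exits.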

--- Additional comment from  on 2018-12-06T18:59:33Z 

@pmoravec, I read your message a few times to make sure I understand it, and I think I do. Unfortunately I don't have insight into your questions. I'll share some thoughts, maybe they will help.

I don't think billiard has monkey patched os.fork(). It would have had to do that in order to cause the os.fork() call to behave differently. I poked around the billiard repo, but I don't see anything like that in my quick look.

You're seeing a difference in how it recycles in different scenarios. Is it possible that process recycling isn't actually occurring? How do you know the recycling is occurring?

The billiard developers would know a lot more about this. One thing to consider would be to ask the same question in an open issue in the billiard upstream issue tracker.

--- Additional comment from  on 2018-12-10T10:36:40Z 

My understanding is that both this BZ and https://bugzilla.redhat.com/show_bug.cgi?id=1643650#c23 will be fixed only in Pulp 3, which is being developed now, with no known release date and not even a confirmed inclusion in a Sat 6.Y minor release.

So it does not make much sense to reproduce it if the whole of celery will be replaced by rq.

--- Additional comment from  on 2018-12-10T14:35:37Z 

@pmoravec, yes that is the plan. Pulp3 has used RQ since roughly April 2018.

--- Additional comment from  on 2019-02-05T20:06:44Z 

WORKAROUND:

While we determine the best method for resolving this bug in Pulp 2 and Satellite 6, you can alleviate the stuck-worker problem by disabling Pulp process recycling:

1) Edit /etc/default/pulp_workers 

2) Comment out the PULP_MAX_TASKS_PER_CHILD setting with a #. If this line is already commented out then you are unlikely to be hitting this specific bug.

# PULP_MAX_TASKS_PER_CHILD=2 

3) Restart services

foreman-maintain service restart

--- Additional comment from  on 2019-02-11T13:04:02Z 


Hello Mike,

The customer is informing us that the workaround in comment 6 was already given to them 3 months ago - 

https://access.redhat.com/support/cases/#/case/02260244?commentId=a0a2K00000OQ40zQAD


Thanks and Regards
Anand Jambhulkar

--- Additional comment from  on 2019-02-14T19:53:48Z 

The comment doesn't seem to indicate whether they actually applied it, though, or whether it worked. Or am I missing something?

--- Additional comment from  on 2019-02-25T17:48:53Z 

I believe the sat-6.5.0 flag ought to be removed from this bug. There's nothing we can do in the 6.5 time frame.

--- Additional comment from  on 2019-02-25T17:51:23Z 

David, do we know what the cause of this is?

--- Additional comment from  on 2019-02-25T23:15:33Z 

The cause is architectural and due to an incompatibility, around fork and thread safety, between celery (which uses pre-forking) and mongo (which uses threads). At startup time there is a tiny but not insignificant chance that, when celery forks, the mongo client is left in a dirty state because the mongo connection is being set up concurrently. To state it more technically, fork() only copies the calling thread, so any mutexes held by other threads at fork time stay locked forever in the child. Generally, projects shouldn't mix threading and forking, but Celery's position on this is that "driver code" inside Celery should not use threading (which PyMongo and qpid.messaging both do, and we are not in control of that). The matter is further complicated by the fact that Celery itself uses a non-standard forking library, so getting Python community support is almost impossible.
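For reference, the pattern the PyMongo documentation generally recommends for forked workers is to construct the client only after the fork rather than carrying a connected client across it. Below is a minimal, hedged sketch of that pattern; the localhost URI and the "ping" command are placeholders, and this is not Pulp's actual reconnect code (Pulp goes through mongoengine in pulp.server.db.connection.reconnect):

import os
from pymongo import MongoClient

def child_work():
    # Client created *after* the fork: no pool/topology locks copied
    # from the parent can be left permanently held here.
    client = MongoClient("mongodb://localhost:27017", connect=False)
    print(client.admin.command("ping"))
    client.close()

pid = os.fork()
if pid == 0:
    child_work()
    os._exit(0)
else:
    os.waitpid(pid, 0)

As the workaround and later comments note, guaranteeing this inside celery's prefork model is hard, which is why the chosen mitigation is to stop re-forking (disable process recycling) rather than to make every post-fork reconnect safe.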

--- Additional comment from  on 2019-02-26T13:50:16Z 

To elaborate, the problem manifested itself when we enabled process recycling (via PULP_MAX_TASKS_PER_CHILD), which meant that celery was re-forking its worker processes after every 2 or so tasks. I think for Pulp 2 we should advise users to unset PULP_MAX_TASKS_PER_CHILD if they experience deadlocking (see comment 6).
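To illustrate mechanically what process recycling means, here is a hedged sketch using the stdlib multiprocessing.Pool, which exposes the same maxtasksperchild knob that billiard/celery use underneath (the pool size and task counts are arbitrary): each worker process is torn down and a fresh one is forked once it has run its quota of tasks, and every such re-fork is a window for the deadlock described above.

import os
from multiprocessing import Pool

def task(i):
    # Return the worker pid so the recycling is visible in the output.
    return (i, os.getpid())

if __name__ == "__main__":
    pool = Pool(processes=2, maxtasksperchild=2)
    try:
        for i, pid in pool.map(task, range(8), 1):   # chunksize=1
            print("task %d ran in worker pid %d" % (i, pid))
    finally:
        pool.close()
        pool.join()
    # The pids rotate every couple of tasks: old workers exit and new
    # ones are forked to replace them, as with PULP_MAX_TASKS_PER_CHILD=2.

Leaving PULP_MAX_TASKS_PER_CHILD unset roughly corresponds to maxtasksperchild=None here: workers are forked once at startup and then reused, so the risky re-fork never happens again.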

In Pulp 3, we've already solved this problem by using a different job queue technology (rq) that doesn't fork its child processes.

--- Additional comment from  on 2019-02-27T16:18:16Z 

Chris,

We are going to need to update the configuration in Satellite 6.5 to remove the PULP_MAX_TASKS_PER_CHILD setting.

I'd recommend we remove the setting entirely from the file so customers don't mistakenly un-comment it thinking it will help their situation.

--- Additional comment from  on 2019-02-27T17:46:25Z 

For this bug, we are going to fix the deadlock issues in Pulp 2 by disabling process recycling (i.e. commenting out the PULP_MAX_TASKS_PER_CHILD setting) by default. Longer term, in Pulp 3, we are removing celery and replacing it with rq to solve the problem more thoroughly.

--- Additional comment from  on 2019-03-12T15:32:51Z 

/// internal

I don't see this setting in RHSAT 6.5 HTB. So can I consider this closed?

--- Additional comment from  on 2019-03-12T22:06:38Z 

Moving this bug to POST for triage into Satellite 6 since the upstream issue https://projects.theforeman.org/issues/26323 has been resolved.

Comment 3 Mike McCune 2019-04-05 19:53:38 UTC
Chris, as Perry states, we want this removed entirely from the configuration if at all possible to remove any confusion around its setting.

Comment 6 Bryan Kearney 2019-04-08 18:01:20 UTC
Upstream bug assigned to chrobert

Comment 7 Bryan Kearney 2019-04-08 18:01:21 UTC
Upstream bug assigned to chrobert

Comment 9 Perry Gagne 2019-04-17 17:56:30 UTC
Verified fix in 6.4.3 snap 3. 

Both katello::max_tasks_per_pulp_worker: (in custom hiera) and --katello-max-tasks-per-pulp-worker (satellite-installer) have been removed. 

Mention of PULP_MAX_TASKS_PER_CHILD has been removed from /etc/default/pulp_workers

Comment 10 Mike McCune 2019-04-17 20:49:43 UTC
*** HOTFIX AVAILABLE for Satellite 6.4.2 (all sub-versions) *** 

We have a hotfix available that removes this configuration; I'm attaching it to this bug.

Please execute the following:

1) download 1693328-hf.tar to your Satellite 6 server

2) tar xf 1693328-hf.tar

3) rpm -Uvh *.rpm

4) satellite-installer

5) verify that PULP_MAX_TASKS_PER_CHILD no longer appears in the configuration file:

# cat /etc/default/pulp_workers 
# Configuration file for Pulp's Celery workers

# Define the number of worker nodes you wish to have here. This defaults to the number of processors
# that are detected on the system if left commented here.
PULP_CONCURRENCY=8

# Configure Python's encoding for writing all logs, stdout and stderr
PYTHONIOENCODING="UTF-8"

6) resume operations.

Comment 11 Mike McCune 2019-04-17 20:50:05 UTC
Created attachment 1556018 [details]
1693328-hf.tar

Comment 14 errata-xmlrpc 2019-04-30 16:35:30 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0913

