Bug 1298873 - AssertionError: Calling waitall() from within one of the GreenPool's greenthreads will never terminate
Summary: AssertionError: Calling waitall() from within one of the GreenPool's greenthreads will never terminate
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-keystone
Version: 6.0 (Juno)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: async
Target Release: 6.0 (Juno)
Assignee: Adam Young
QA Contact: Rodrigo Duarte
URL:
Whiteboard:
Depends On: 1298598
Blocks:
 
Reported: 2016-01-15 10:39 UTC by Robin Cernin
Modified: 2019-09-12 09:45 UTC
CC List: 11 users

Fixed In Version: openstack-keystone-2014.2.3-2.el7ost
Doc Type: Bug Fix
Doc Text:
Clone Of: 1298598
Environment:
Last Closed: 2016-02-08 14:17:21 UTC
Target Upstream Version:
Embargoed:


Attachments
Juno backport of Kilo patch that uses Greenlet Threadpool (2.43 KB, patch)
2016-01-15 18:53 UTC, Adam Young
no flags


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1423250 0 None None None 2016-01-15 10:39:02 UTC
OpenStack gerrit 160720 0 None None None 2016-01-15 17:39:51 UTC
Red Hat Product Errata RHBA-2016:0132 0 normal SHIPPED_LIVE openstack-keystone bug fix advisory 2016-02-08 19:16:18 UTC

Description Robin Cernin 2016-01-15 10:39:03 UTC
Description of problem:

2016-01-13 16:36:02.116 48087 TRACE keystone.common.environment.eventlet_server AssertionError: Calling waitall() from within one of the GreenPool's greenthreads will never terminate.
2016-01-13 16:36:02.116 48089 TRACE keystone.common.environment.eventlet_server AssertionError: Calling waitall() from within one of the GreenPool's greenthreads will never terminate.
2016-01-13 16:36:02.119 48087 CRITICAL keystone [-] AssertionError: Calling waitall() from within one of the GreenPool's greenthreads will never terminate.
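
For reference, the assertion comes from eventlet itself: GreenPool.waitall() refuses to run from inside one of the pool's own greenthreads, because such a wait could never complete (the caller would be waiting on itself to finish). A minimal, keystone-independent sketch that triggers the same message (names here are illustrative only):

import eventlet

pool = eventlet.GreenPool()

def worker():
    # waitall() from inside one of the pool's own greenthreads can never
    # finish (the caller is a member of the set being waited on), so
    # eventlet raises AssertionError instead of hanging forever.
    pool.waitall()

pool.spawn(worker)
pool.waitall()  # drives the pool; worker() hits the AssertionError

In keystone's case the same self-wait happened inside the eventlet server's signal-handling/shutdown path rather than in user code.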


Version-Release number of selected component (if applicable):

openstack-keystone-2014.2.3-1.el7ost.noarch
python-keystone-2014.2.3-1.el7ost.noarch
python-keystoneclient-0.11.1-2.el7ost.noarch
python-keystonemiddleware-1.3.2-1.el7ost.noarch


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:



Expected results:


Additional info:

Could we request a backport of the fix https://review.openstack.org/#/c/160720/ to OSP 6 / Juno?

Thank you,
Regards,
Robin Černín

Comment 1 Adam Young 2016-01-15 17:39:51 UTC
Updating the Gerrit link to the corresponding Keystone review

Comment 3 Adam Young 2016-01-15 18:53:42 UTC
Created attachment 1115241 [details]
Juno backport of Kilo patch that uses Greenlet Threadpool
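
For context, here is a hedged sketch of the rule such a patch has to enforce (an illustration of the general pattern only, not the actual Juno/Kilo change, which moves the work onto a separate greenlet thread pool): the shutdown wait may only run from a context that is not itself a member of the pool being drained.

import eventlet
from eventlet import greenthread

pool = eventlet.GreenPool()

def drain(pool):
    # Mirror the check behind eventlet's assertion: only wait on the pool
    # when the current greenthread is not one of the pool's own workers;
    # otherwise the wait could never terminate.
    if greenthread.getcurrent() not in pool.coroutines_running:
        pool.waitall()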

Comment 8 Rodrigo Duarte 2016-02-03 15:31:46 UTC
Verified for openstack-keystone-2014.2.3-2.el7ost

The bug happened while termination signals were being handled. To verify that the problem no longer occurs, I sent a SIGTERM signal to the keystone-all parent process (which then stops its Eventlet worker children) while watching the log files:

# ps aux | grep keystone
keystone  7277  1.3  0.7 349752 59128 ?        Ss   09:32   0:38 /usr/bin/python /usr/bin/keystone-all
keystone  7296  0.1  0.8 458040 66256 ?        S    09:32   0:04 /usr/bin/python /usr/bin/keystone-all
keystone  7297  0.1  0.8 458300 66512 ?        S    09:32   0:05 /usr/bin/python /usr/bin/keystone-all
keystone  7298  0.1  0.8 457476 65620 ?        S    09:32   0:03 /usr/bin/python /usr/bin/keystone-all
keystone  7299  0.2  0.8 463976 68048 ?        S    09:32   0:06 /usr/bin/python /usr/bin/keystone-all
keystone  7300  0.0  0.7 455876 64044 ?        S    09:32   0:00 /usr/bin/python /usr/bin/keystone-all
keystone  7301  0.0  0.7 455596 63744 ?        S    09:32   0:00 /usr/bin/python /usr/bin/keystone-all
keystone  7302  0.0  0.6 349752 53720 ?        S    09:32   0:00 /usr/bin/python /usr/bin/keystone-all
keystone  7303  0.0  0.7 455596 63748 ?        S    09:32   0:00 /usr/bin/python /usr/bin/keystone-all
root     27096  0.0  0.0 107932   624 pts/1    S+   10:11   0:00 tail -f /var/log/keystone/keystone.log

# kill 7277

# ps aux | grep keystone
keystone  7277  1.4  0.7 349752 59128 ?        Rs   09:32   0:42 /usr/bin/python /usr/bin/keystone-all
keystone  7296  0.1  0.8 458040 66256 ?        S    09:32   0:04 /usr/bin/python /usr/bin/keystone-all
keystone  7297  0.1  0.8 458300 66512 ?        S    09:32   0:05 /usr/bin/python /usr/bin/keystone-all
keystone  7298  0.1  0.8 457476 65620 ?        S    09:32   0:03 /usr/bin/python /usr/bin/keystone-all
keystone  7299  0.2  0.8 463976 68048 ?        S    09:32   0:06 /usr/bin/python /usr/bin/keystone-all
root     27096  0.0  0.0 107932   624 pts/1    S+   10:11   0:00 tail -f /var/log/keystone/keystone.log

No errors were found in the log files, only the expected handling of the SIGTERM signal:

2016-02-03 10:20:09.250 7277 INFO keystone.openstack.common.service [-] Caught SIGTERM, stopping children
2016-02-03 10:20:09.251 7277 INFO keystone.openstack.common.service [-] Waiting on 8 children to exit
2016-02-03 10:20:09.251 7303 INFO keystone.openstack.common.service [-] Child caught SIGTERM, exiting
2016-02-03 10:20:09.251 7297 INFO keystone.openstack.common.service [-] Child caught SIGTERM, exiting
2016-02-03 10:20:09.252 7302 INFO keystone.openstack.common.service [-] Child caught SIGTERM, exiting
2016-02-03 10:20:09.252 7303 INFO eventlet.wsgi.server [-] (7303) wsgi exited, is_accepting=True
2016-02-03 10:20:09.252 7302 INFO eventlet.wsgi.server [-] (7302) wsgi exited, is_accepting=True
2016-02-03 10:20:09.253 7296 INFO keystone.openstack.common.service [-] Child caught SIGTERM, exiting
2016-02-03 10:20:09.254 7299 INFO keystone.openstack.common.service [-] Child caught SIGTERM, exiting
2016-02-03 10:20:09.257 7277 INFO keystone.openstack.common.service [-] Child 7302 exited with status 1
2016-02-03 10:20:09.258 7301 INFO keystone.openstack.common.service [-] Child caught SIGTERM, exiting
2016-02-03 10:20:09.259 7301 INFO eventlet.wsgi.server [-] (7301) wsgi exited, is_accepting=True
2016-02-03 10:20:09.260 7298 INFO keystone.openstack.common.service [-] Child caught SIGTERM, exiting
2016-02-03 10:20:09.260 7300 INFO keystone.openstack.common.service [-] Child caught SIGTERM, exiting
2016-02-03 10:20:09.260 7277 INFO keystone.openstack.common.service [-] Child 7303 exited with status 1
2016-02-03 10:20:09.261 7300 INFO eventlet.wsgi.server [-] (7300) wsgi exited, is_accepting=True
2016-02-03 10:20:09.266 7277 INFO keystone.openstack.common.service [-] Child 7301 exited with status 1
2016-02-03 10:20:09.268 7277 INFO keystone.openstack.common.service [-] Child 7300 exited with status 1

Comment 10 errata-xmlrpc 2016-02-08 14:17:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0132.html

