Bug 1298873 - AssertionError: Calling waitall() from within one of the GreenPool's greenthreads will never terminate
Status: CLOSED ERRATA
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-keystone
Version: 6.0 (Juno)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: async
Target Release: 6.0 (Juno)
Assigned To: Adam Young
QA Contact: Rodrigo Duarte
Keywords: ZStream
Depends On: 1298598
Blocks:
Reported: 2016-01-15 05:39 EST by Robin Cernin
Modified: 2016-04-26 12:36 EDT
CC: 11 users

Fixed In Version: openstack-keystone-2014.2.3-2.el7ost
Doc Type: Bug Fix
Clone Of: 1298598
Last Closed: 2016-02-08 09:17:21 EST
Type: Bug


Attachments
Juno backport of Kilo patch that uses Greenlet Threadpool (2.43 KB, patch)
2016-01-15 13:53 EST, Adam Young


External Trackers
Tracker ID Priority Status Summary Last Updated
Launchpad 1423250 None None None 2016-01-15 05:39 EST
OpenStack gerrit 160720 None None None 2016-01-15 12:39 EST

Description Robin Cernin 2016-01-15 05:39:03 EST
Description of problem:

2016-01-13 16:36:02.116 48087 TRACE keystone.common.environment.eventlet_server AssertionError: Calling waitall() from within one of the GreenPool's greenthreads will never terminate.
2016-01-13 16:36:02.116 48089 TRACE keystone.common.environment.eventlet_server AssertionError: Calling waitall() from within one of the GreenPool's greenthreads will never terminate.
2016-01-13 16:36:02.119 48087 CRITICAL keystone [-] AssertionError: Calling waitall() from within one of the GreenPool's greenthreads will never terminate.
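For context, this AssertionError comes from a guard inside eventlet's GreenPool.waitall(): a greenthread that belongs to the pool must not wait for the pool to drain, because it would be waiting for itself. The following stdlib-only sketch (a toy thread-based Pool, not eventlet itself) mimics that guard to show why the shutdown path trips it:

```python
import threading

class Pool:
    """Minimal thread-based stand-in for eventlet's GreenPool, used only
    to illustrate the guard that raises the AssertionError in this bug."""

    def __init__(self):
        self.running = set()            # threads spawned by this pool
        self._idle = threading.Event()
        self._idle.set()                # no work yet

    def spawn(self, fn):
        t = threading.Thread(target=self._run, args=(fn,))
        self._idle.clear()
        self.running.add(t)
        t.start()
        return t

    def _run(self, fn):
        try:
            fn()
        finally:
            self.running.discard(threading.current_thread())
            if not self.running:
                self._idle.set()

    def waitall(self):
        # The same sanity check eventlet performs: a worker waiting on
        # its own pool would wait for itself to finish, i.e. a deadlock.
        assert threading.current_thread() not in self.running, (
            "Calling waitall() from within one of the GreenPool's "
            "greenthreads will never terminate.")
        self._idle.wait()

pool = Pool()
errors = []

def worker():
    try:
        pool.waitall()                  # illegal: called from inside the pool
    except AssertionError as exc:
        errors.append(str(exc))

pool.spawn(worker)
pool.waitall()                          # legal: called from the main thread
print(errors[0])
```

In keystone's case, the signal handler ended up invoking the wait from inside one of the pool's own greenthreads; the Kilo patch referenced below avoids that by moving the wait out of the pool.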


Version-Release number of selected component (if applicable):

openstack-keystone-2014.2.3-1.el7ost.noarch
python-keystone-2014.2.3-1.el7ost.noarch
python-keystoneclient-0.11.1-2.el7ost.noarch
python-keystonemiddleware-1.3.2-1.el7ost.noarch


Additional info:

Could we request a backport of the fix https://review.openstack.org/#/c/160720/ to OSP 6 / Juno?

Thank you,
Regards,
Robin Černín
Comment 1 Adam Young 2016-01-15 12:39:51 EST
Updating Gerrit to the corresponding Keystone review
Comment 3 Adam Young 2016-01-15 13:53 EST
Created attachment 1115241 [details]
Juno backport of Kilo patch that uses Greenlet Threadpool
Comment 8 Rodrigo Duarte 2016-02-03 10:31:46 EST
Verified for openstack-keystone-2014.2.3-2.el7ost

The bug occurred while termination signals were being handled. To test whether the problem persists, a SIGTERM signal was sent to the main keystone-all process (PID 7277, which manages the eventlet workers) while observing the log files:

# ps aux | grep keystone
keystone  7277  1.3  0.7 349752 59128 ?        Ss   09:32   0:38 /usr/bin/python /usr/bin/keystone-all
keystone  7296  0.1  0.8 458040 66256 ?        S    09:32   0:04 /usr/bin/python /usr/bin/keystone-all
keystone  7297  0.1  0.8 458300 66512 ?        S    09:32   0:05 /usr/bin/python /usr/bin/keystone-all
keystone  7298  0.1  0.8 457476 65620 ?        S    09:32   0:03 /usr/bin/python /usr/bin/keystone-all
keystone  7299  0.2  0.8 463976 68048 ?        S    09:32   0:06 /usr/bin/python /usr/bin/keystone-all
keystone  7300  0.0  0.7 455876 64044 ?        S    09:32   0:00 /usr/bin/python /usr/bin/keystone-all
keystone  7301  0.0  0.7 455596 63744 ?        S    09:32   0:00 /usr/bin/python /usr/bin/keystone-all
keystone  7302  0.0  0.6 349752 53720 ?        S    09:32   0:00 /usr/bin/python /usr/bin/keystone-all
keystone  7303  0.0  0.7 455596 63748 ?        S    09:32   0:00 /usr/bin/python /usr/bin/keystone-all
root     27096  0.0  0.0 107932   624 pts/1    S+   10:11   0:00 tail -f /var/log/keystone/keystone.log

# kill 7277

# ps aux | grep keystone
keystone  7277  1.4  0.7 349752 59128 ?        Rs   09:32   0:42 /usr/bin/python /usr/bin/keystone-all
keystone  7296  0.1  0.8 458040 66256 ?        S    09:32   0:04 /usr/bin/python /usr/bin/keystone-all
keystone  7297  0.1  0.8 458300 66512 ?        S    09:32   0:05 /usr/bin/python /usr/bin/keystone-all
keystone  7298  0.1  0.8 457476 65620 ?        S    09:32   0:03 /usr/bin/python /usr/bin/keystone-all
keystone  7299  0.2  0.8 463976 68048 ?        S    09:32   0:06 /usr/bin/python /usr/bin/keystone-all
root     27096  0.0  0.0 107932   624 pts/1    S+   10:11   0:00 tail -f /var/log/keystone/keystone.log

The log files showed no errors, only the normal handling of the SIGTERM signal:

2016-02-03 10:20:09.250 7277 INFO keystone.openstack.common.service [-] Caught SIGTERM, stopping children
2016-02-03 10:20:09.251 7277 INFO keystone.openstack.common.service [-] Waiting on 8 children to exit
2016-02-03 10:20:09.251 7303 INFO keystone.openstack.common.service [-] Child caught SIGTERM, exiting
2016-02-03 10:20:09.251 7297 INFO keystone.openstack.common.service [-] Child caught SIGTERM, exiting
2016-02-03 10:20:09.252 7302 INFO keystone.openstack.common.service [-] Child caught SIGTERM, exiting
2016-02-03 10:20:09.252 7303 INFO eventlet.wsgi.server [-] (7303) wsgi exited, is_accepting=True
2016-02-03 10:20:09.252 7302 INFO eventlet.wsgi.server [-] (7302) wsgi exited, is_accepting=True
2016-02-03 10:20:09.253 7296 INFO keystone.openstack.common.service [-] Child caught SIGTERM, exiting
2016-02-03 10:20:09.254 7299 INFO keystone.openstack.common.service [-] Child caught SIGTERM, exiting
2016-02-03 10:20:09.257 7277 INFO keystone.openstack.common.service [-] Child 7302 exited with status 1
2016-02-03 10:20:09.258 7301 INFO keystone.openstack.common.service [-] Child caught SIGTERM, exiting
2016-02-03 10:20:09.259 7301 INFO eventlet.wsgi.server [-] (7301) wsgi exited, is_accepting=True
2016-02-03 10:20:09.260 7298 INFO keystone.openstack.common.service [-] Child caught SIGTERM, exiting
2016-02-03 10:20:09.260 7300 INFO keystone.openstack.common.service [-] Child caught SIGTERM, exiting
2016-02-03 10:20:09.260 7277 INFO keystone.openstack.common.service [-] Child 7303 exited with status 1
2016-02-03 10:20:09.261 7300 INFO eventlet.wsgi.server [-] (7300) wsgi exited, is_accepting=True
2016-02-03 10:20:09.266 7277 INFO keystone.openstack.common.service [-] Child 7301 exited with status 1
2016-02-03 10:20:09.268 7277 INFO keystone.openstack.common.service [-] Child 7300 exited with status 1
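The log check above can be scripted. This is a hypothetical sketch that runs against a copy of the shutdown log excerpt so it is self-contained; in a real deployment the file would be /var/log/keystone/keystone.log:

```shell
# Write a copy of the shutdown log excerpt to a temporary file; a real
# check would read the live keystone log instead.
log=$(mktemp)
cat > "$log" <<'EOF'
2016-02-03 10:20:09.250 7277 INFO keystone.openstack.common.service [-] Caught SIGTERM, stopping children
2016-02-03 10:20:09.268 7277 INFO keystone.openstack.common.service [-] Child 7300 exited with status 1
EOF

# The bug manifests as an AssertionError traceback during shutdown,
# so its absence after a SIGTERM indicates a clean exit.
if grep -q AssertionError "$log"; then
    result="bug still present"
else
    result="clean shutdown"
fi
echo "$result"
rm -f "$log"
```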
Comment 10 errata-xmlrpc 2016-02-08 09:17:21 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0132.html
