Description of problem:

2016-01-13 16:36:02.116 48087 TRACE keystone.common.environment.eventlet_server AssertionError: Calling waitall() from within one of the GreenPool's greenthreads will never terminate.
2016-01-13 16:36:02.116 48089 TRACE keystone.common.environment.eventlet_server AssertionError: Calling waitall() from within one of the GreenPool's greenthreads will never terminate.
2016-01-13 16:36:02.119 48087 CRITICAL keystone [-] AssertionError: Calling waitall() from within one of the GreenPool's greenthreads will never terminate.

Version-Release number of selected component (if applicable):
openstack-keystone-2014.2.3-1.el7ost.noarch
python-keystone-2014.2.3-1.el7ost.noarch
python-keystoneclient-0.11.1-2.el7ost.noarch
python-keystonemiddleware-1.3.2-1.el7ost.noarch

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
Could we request a possible backport of the fix https://review.openstack.org/#/c/160720/ to OSP6 / Juno?

Thank you,
Regards,
Robin Černín
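For context, the assertion in the traceback guards against a genuine deadlock: a task running inside the pool that calls waitall() ends up waiting on a set of tasks that includes itself, so the call could never return. The following is a minimal stdlib sketch of that guard; TinyPool is a hypothetical stand-in for illustration, not eventlet's actual GreenPool:

```python
import threading

class TinyPool:
    """Simplified stdlib analog (not eventlet) of GreenPool's guard:
    waitall() called from one of the pool's own workers can never
    return, because the caller is among the tasks being waited on."""

    def __init__(self):
        self._workers = set()
        self._lock = threading.Lock()

    def spawn(self, fn, *args):
        t = threading.Thread(target=self._run, args=(fn,) + args)
        with self._lock:
            self._workers.add(t)
        t.start()
        return t

    def _run(self, fn, *args):
        try:
            fn(*args)
        finally:
            with self._lock:
                self._workers.discard(threading.current_thread())

    def waitall(self):
        # The same idea as eventlet's guard: fail loudly instead of hanging.
        assert threading.current_thread() not in self._workers, (
            "Calling waitall() from within one of the pool's workers "
            "will never terminate.")
        while True:
            with self._lock:
                if not self._workers:
                    return
                t = next(iter(self._workers))
            t.join()

errors = []

def bad_worker(pool):
    try:
        pool.waitall()  # waiting on a pool that includes ourselves
    except AssertionError as e:
        errors.append(str(e))

pool = TinyPool()
t = pool.spawn(bad_worker, pool)
t.join()
print(errors[0])
```

Calling `pool.waitall()` from the main thread instead would return normally once the workers finish; only the call from inside a worker trips the assertion, which is exactly the situation keystone's signal-handling code hit.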
Updating Gerrit to the corresponding Keystone review
Created attachment 1115241 [details]
Juno backport of Kilo patch that uses Greenlet Threadpool
Verified for openstack-keystone-2014.2.3-2.el7ost

The bug occurred during the handling of termination signals. To test whether the problem persists, a SIGTERM signal was sent to the parent keystone-all process (PID 7277) while observing the log files:

# ps aux | grep keystone
keystone  7277  1.3  0.7 349752 59128 ?      Ss   09:32   0:38 /usr/bin/python /usr/bin/keystone-all
keystone  7296  0.1  0.8 458040 66256 ?      S    09:32   0:04 /usr/bin/python /usr/bin/keystone-all
keystone  7297  0.1  0.8 458300 66512 ?      S    09:32   0:05 /usr/bin/python /usr/bin/keystone-all
keystone  7298  0.1  0.8 457476 65620 ?      S    09:32   0:03 /usr/bin/python /usr/bin/keystone-all
keystone  7299  0.2  0.8 463976 68048 ?      S    09:32   0:06 /usr/bin/python /usr/bin/keystone-all
keystone  7300  0.0  0.7 455876 64044 ?      S    09:32   0:00 /usr/bin/python /usr/bin/keystone-all
keystone  7301  0.0  0.7 455596 63744 ?      S    09:32   0:00 /usr/bin/python /usr/bin/keystone-all
keystone  7302  0.0  0.6 349752 53720 ?      S    09:32   0:00 /usr/bin/python /usr/bin/keystone-all
keystone  7303  0.0  0.7 455596 63748 ?      S    09:32   0:00 /usr/bin/python /usr/bin/keystone-all
root     27096  0.0  0.0 107932   624 pts/1  S+   10:11   0:00 tail -f /var/log/keystone/keystone.log

# kill 7277
# ps aux | grep keystone
keystone  7277  1.4  0.7 349752 59128 ?      Rs   09:32   0:42 /usr/bin/python /usr/bin/keystone-all
keystone  7296  0.1  0.8 458040 66256 ?      S    09:32   0:04 /usr/bin/python /usr/bin/keystone-all
keystone  7297  0.1  0.8 458300 66512 ?      S    09:32   0:05 /usr/bin/python /usr/bin/keystone-all
keystone  7298  0.1  0.8 457476 65620 ?      S    09:32   0:03 /usr/bin/python /usr/bin/keystone-all
keystone  7299  0.2  0.8 463976 68048 ?      S    09:32   0:06 /usr/bin/python /usr/bin/keystone-all
root     27096  0.0  0.0 107932   624 pts/1  S+   10:11   0:00 tail -f /var/log/keystone/keystone.log

No error was found in the log files, only the expected capture of the SIGTERM signal:

2016-02-03 10:20:09.250 7277 INFO keystone.openstack.common.service [-] Caught SIGTERM, stopping children
2016-02-03 10:20:09.251 7277 INFO keystone.openstack.common.service [-] Waiting on 8 children to exit
2016-02-03 10:20:09.251 7303 INFO keystone.openstack.common.service [-] Child caught SIGTERM, exiting
2016-02-03 10:20:09.251 7297 INFO keystone.openstack.common.service [-] Child caught SIGTERM, exiting
2016-02-03 10:20:09.252 7302 INFO keystone.openstack.common.service [-] Child caught SIGTERM, exiting
2016-02-03 10:20:09.252 7303 INFO eventlet.wsgi.server [-] (7303) wsgi exited, is_accepting=True
2016-02-03 10:20:09.252 7302 INFO eventlet.wsgi.server [-] (7302) wsgi exited, is_accepting=True
2016-02-03 10:20:09.253 7296 INFO keystone.openstack.common.service [-] Child caught SIGTERM, exiting
2016-02-03 10:20:09.254 7299 INFO keystone.openstack.common.service [-] Child caught SIGTERM, exiting
2016-02-03 10:20:09.257 7277 INFO keystone.openstack.common.service [-] Child 7302 exited with status 1
2016-02-03 10:20:09.258 7301 INFO keystone.openstack.common.service [-] Child caught SIGTERM, exiting
2016-02-03 10:20:09.259 7301 INFO eventlet.wsgi.server [-] (7301) wsgi exited, is_accepting=True
2016-02-03 10:20:09.260 7298 INFO keystone.openstack.common.service [-] Child caught SIGTERM, exiting
2016-02-03 10:20:09.260 7300 INFO keystone.openstack.common.service [-] Child caught SIGTERM, exiting
2016-02-03 10:20:09.260 7277 INFO keystone.openstack.common.service [-] Child 7303 exited with status 1
2016-02-03 10:20:09.261 7300 INFO eventlet.wsgi.server [-] (7300) wsgi exited, is_accepting=True
2016-02-03 10:20:09.266 7277 INFO keystone.openstack.common.service [-] Child 7301 exited with status 1
2016-02-03 10:20:09.268 7277 INFO keystone.openstack.common.service [-] Child 7300 exited with status 1
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0132.html