| Summary: | Heat gets stuck in DELETE_IN_PROGRESS for some input data | |||
|---|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Aviv Guetta <aguetta> | |
| Component: | openstack-heat | Assignee: | Zane Bitter <zbitter> | |
| Status: | CLOSED ERRATA | QA Contact: | Amit Ugol <augol> | |
| Severity: | high | Docs Contact: | ||
| Priority: | high | |||
| Version: | 8.0 (Liberty) | CC: | agurenko, dmaley, lmiccini, mbayer, mburns, pablo.iranzo, rhel-osp-director-maint, sbaker, shardy, srevivo, zbitter | |
| Target Milestone: | async | Keywords: | Triaged, ZStream | |
| Target Release: | 8.0 (Liberty) | |||
| Hardware: | x86_64 | |||
| OS: | Linux | |||
| Whiteboard: | ||||
| Fixed In Version: | openstack-heat-5.0.1-9.el7ost | Doc Type: | If docs needed, set a value | |
| Doc Text: | | Story Points: | --- | |
| Clone Of: | ||||
| : | 1384667 (view as bug list) | Environment: | ||
| Last Closed: | 2016-12-21 16:43:37 UTC | Type: | Bug | |
| Regression: | --- | Mount Type: | --- | |
| Documentation: | --- | CRM: | ||
| Verified Versions: | | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
| Cloudforms Team: | --- | Target Upstream Version: | ||
| Bug Depends On: | ||||
| Bug Blocks: | 1384667 | |||
Description
Aviv Guetta
2016-09-05 08:34:02 UTC
I believe this is caused by a database transaction not being closed when a thread is cancelled. (When you start a delete in the middle of a create/update, the create/update gets stopped so that you don't have to wait for it to finish in order to start your delete.) This particular reproducer is kind of the perfect storm - create a lot of resources very quickly (so lots of transactions going on), then cancel it in the middle. This greatly increases the chances of encountering the bug.

Unfortunately we have yet to find the source of the problem upstream. Until we have, it's difficult to say whether the fix will be readily applicable to Liberty.

I'm still investigating, but it's looking increasingly possible that this could be due to a bug in SQLAlchemy. This thread describes a similar issue: http://lists.openstack.org/pipermail/openstack-dev/2016-September/103674.html

After discussion with Mike Bayer yesterday[1] I believe we've tracked down the problem. When we cancel the greenthread, an exception is raised inside PyMySQL while it is communicating with the database server. PyMySQL mistakenly believes that only IOError exceptions can occur here, and therefore does not handle it. SQLAlchemy likewise does not have any special handling. This means that from the server's perspective the connection is in the middle of a write; from the SQLAlchemy/PyMySQL perspective, however, it's just another connection that can be used either to roll back the failed transaction or to be returned to the connection pool. The next time we attempt to use the connection, the DB operation will fail.

Opinions... differ on where the best place to handle this is ;). In theory, a signal handler can raise an exception between any two Python opcodes, so libraries like PyMySQL and SQLAlchemy ought to take this into account. In practice, a co-operative multithreading implementation like eventlet makes the problem much easier to encounter, by ensuring that the exception is effectively always raised while either sleeping or doing I/O, and in the latter case this will always trigger the failure.

Steve has added a fix in Newton (https://review.openstack.org/#/c/373518/) that should almost completely prevent us from ever cancelling a greenthread. However, it depends on extensive changes that occurred during Newton development that are certainly not backportable to Liberty. It remains to be seen what the shape of a solution at the DB connection level might look like. Until we've established that, it's difficult to say how (or even if) we can backport the fix to Liberty. That's what we'll try to figure out next.

[1] http://eavesdrop.openstack.org/irclogs/%23heat/%23heat.2016-09-20.log.html#t2016-09-20T20:25:07

From the oslo_db / SQLAlchemy side, oslo_db can be made to handle these GreenletExit exceptions automatically, but it requires an adjustment to SQLAlchemy, and I'd prefer to keep this as a SQLA 1.1 thing as it's a new feature. The basic issue is that GreenletExit is not a subclass of Exception and SQLAlchemy's DB-exception-interception logic only goes as low as Exception, so this would need a configuration hook. It could also be monkeypatched into known SQLAlchemy 1.0.x versions, which is something we've done in oslo_db in the past.
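To make the exception-hierarchy point above concrete, here is a minimal sketch (an illustration only, not Heat or oslo_db code; it assumes the greenlet package that eventlet is built on, where GreenletExit derives from BaseException rather than Exception):

```python
# Minimal sketch: why handlers that catch only Exception never see the exit
# exception raised when a greenthread is killed. Assumes the greenlet package
# (the basis of eventlet) is installed; this is not Heat code.
from greenlet import GreenletExit

print(issubclass(GreenletExit, Exception))      # False
print(issubclass(GreenletExit, BaseException))  # True


def fake_db_write():
    # Stand-in for a PyMySQL socket write that gets interrupted when the
    # greenthread is cancelled in the middle of I/O.
    raise GreenletExit()


try:
    fake_db_write()
except Exception:
    # Never reached: this is the level at which the DB layers intercept
    # errors, so nothing rolls back or invalidates the connection.
    print("handled by the DB error-handling path")
except BaseException as exc:
    # Only a BaseException-level handler sees the exit exception.
    print("escaped the DB error-handling path:", type(exc).__name__)
```

Running this prints False, True and then the second message, which matches the behaviour described above: the connection is left mid-write on the server side but is still treated as reusable by the client.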
It's worth noting that "stuck" is probably the wrong term here. The root stack fails to notice that the child stack hasn't started deleting - which is difficult to address without introducing race conditions. However, the parent stack will still time out as usual and move to the DELETE_FAILED state. The user can also initiate another delete at any time (before or after the timeout), and this second delete will generally succeed.

OK, there are 3 things going on here:

1) Database errors due to exit exceptions not being handled by SQLAlchemy/PyMySQL. Mike is fixing that in SQLAlchemy here: https://bitbucket.org/zzzeek/sqlalchemy/issues/3803/dbapi-connections-go-invalid-on and there's a good chance the fix will be in SQLAlchemy 1.1. It's unlikely anybody is going to want to go back and use that with Liberty OpenStack, though. Fortunately this isn't the biggest problem here - losing writes to the database sucks, but Heat almost always handles it pretty gracefully.

2) When we fail to start deleting the ResourceGroup nested stack, the parent stack should immediately transition to DELETE_FAILED, rather than remain DELETE_IN_PROGRESS until it times out. I've proposed a fix at https://review.openstack.org/374442 - it's trivial and will be easy to backport.

3) The *real* problem. When we cancel the in-progress creation of the ResourceGroup nested stack, there is a super-short timeout (2s by default) during which all of the tasks have to be cancelled, and if they're not cancelled by then Heat gives up and won't start the delete. On a stack this size, that just isn't long enough. You can work around this by setting the config option "engine_life_check_timeout" to a larger value (e.g. 30s); a heat.conf sketch appears at the end of this report. I've proposed a patch to not use that option for this particular timeout, and to use 30s instead: https://review.openstack.org/374443 - it's also very simple and will be easy to backport.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2989.html
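For reference, the workaround mentioned in item 3 above would look roughly like this in heat.conf (a sketch only: 30 is the example value suggested in that comment, the option sits in the [DEFAULT] section in the Heat releases I'm aware of, and heat-engine would typically need a restart to pick it up):

```ini
# Sketch of the workaround from item 3: give in-progress tasks longer to be
# cancelled before Heat gives up and refuses to start the delete.
[DEFAULT]
# Default is 2 seconds (per the comment above), too short for a stack this size.
engine_life_check_timeout = 30
```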