Bug 1289287
Summary: cron job to clean out heat.raw_templates
Product: Red Hat OpenStack
Component: openstack-puppet-modules
Version: 7.0 (Kilo)
Target Release: 7.0 (Kilo)
Hardware: All
OS: All
Status: CLOSED ERRATA
Severity: high
Priority: unspecified
Reporter: Dan Yocum <dyocum>
Assignee: Emilien Macchi <emacchi>
QA Contact: Amit Ugol <augol>
CC: augol, ealcaniz, emacchi, ggillies, hbrock, jcoufal, jguiditt, jjoyce, jschluet, mburns, mlopes, ochalups, rhel-osp-director-maint, sbaker, shardy, srevivo, vcojot, zbitter
Keywords: Triaged, ZStream
Doc Type: Bug Fix
Type: Bug
Clones: 1313392, 1313403
Bug Blocks: 1313403, 1313405, 1339488
Last Closed: 2017-06-20 12:25:11 UTC
Description by Dan Yocum, 2015-12-07 20:16:35 UTC:
Specifically, the command to run is: heat-manage purge_deleted

This cron job is something which the heat puppet module needs to configure, just like the keystone-manage token_flush cron job (which I think is also missing).

I'm moving this back to openstack-heat so that the cron job can be added in packaging. Keystone's token flush cron needs to be created in the puppet module instead of in packaging because knowledge of the configured token type is needed. No such config-time information is needed for the heat purge_deleted cron, so it can be created by the package.

No way should a cron job be set up by the package; in general it is entirely up to the operator to decide if/when/how they should purge deleted stacks from the DB. For a start, you only want to run this on one machine, not on every machine that Heat is installed on. If we're talking about this problem purely in the undercloud context, then perhaps this should be assigned to instack-undercloud.

It looks like the best place for setting up a heat-manage purge_deleted cron job is in the heat puppet modules. Both the overcloud and undercloud heat need this so that the DB tables don't blow out.

I'm going to write Puppet code for that. The usage is:

    heat-manage purge_deleted [-g {days,hours,minutes,seconds}] [age]

What are the defaults you want to see?

I think 1 day is sufficient.

I think 1 day for the undercloud is more than fine; for the overcloud it might want to be longer (30 days, maybe?).

Agree with Graeme. I think 30 days seems about right for the overcloud, but for the undercloud we probably want to clean them out a lot quicker, so one or two days would be good there.

Yes, default to 30 days in the puppet module, and we'll set it to 1 on the undercloud (an illustrative crontab sketch follows below).

Emilien, could you please confirm that this change is what is required to install the cron job on the undercloud? https://review.openstack.org/#/c/279338/ And here is the corresponding overcloud change: https://review.openstack.org/#/c/279342/

It looks like this missed Director 7.3; can we please get this prioritised to make it into 7.4? Regards, Graeme

Steve, Graeme: to have the patches in OSP 8, they'll have to be backported to stable/liberty and then rebased in the product. Here is a patch for the OPM backport: https://review.openstack.org/#/c/286290 I'll let you manage the TripleO patches.

Moving this to ASSIGNED, as the current patches are for master and liberty/OSP 8. I have cloned the bug to OSP 8 to cover those, as well as split out the instack-undercloud portion so the fixes there can be tracked as well (for Kilo/7 and Liberty/8), resulting in a total of 4 bugs for this issue.

As the other half of this bug was closed deferred to OSP 8, I think this should be as well. Mike, do you agree?

Not necessarily; it's still useful to have a cron job clearing out deleted stacks older than 30 days, especially in the overcloud. The fact that we're not reducing the age to 1 day in the undercloud is neither here nor there.

Agree with Zane; this isn't dependent on the undercloud part being done and has value by itself. Whether we actually fix it, though, is a question for PM.

Could you update this BZ, please? Thanks, Edu Alcaniz

We need info from PM.

Agreed, though at this point I will say I have been told that in general only major security issues are to be considered, as all changes are risky this far into the lifecycle.

Has it been addressed in v10? If not, then update the version for this BZ. If it has, then close this BZ, since this should be documented in the Director Install and Config guide in the Tuning section.
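For illustration, here is a minimal crontab sketch of the defaults agreed above (30 days on the overcloud, 1 day on the undercloud), using the heat-manage usage quoted earlier. The daily schedule and the choice of crontab are assumptions made for the example, not details taken from the actual patches:

    # Purge stacks deleted more than 30 days ago (overcloud default).
    # The 04:00 daily schedule is an illustrative assumption.
    0 4 * * * heat-manage purge_deleted -g days 30

    # Undercloud variant: purge after only 1 day.
    0 4 * * * heat-manage purge_deleted -g days 1

As noted earlier in the thread, an entry like this should run on only one machine, not on every node where Heat is installed.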
Is this addressed in v10? If so, close it for v7 and move on. When the heat.raw_templates table grows too large, SQL queries time out, which cascades into RabbitMQ queries timing out, which prevents TripleO from completing the provisioning successfully.

I'd say to just move on, as long as the problem is documented in the Tuning section of the guide (IIRC, it is).

This should be fixed for v8 on both the undercloud and the overcloud. The only caveat would be an undercloud that was upgraded from v7; in that case, they will still need to follow the documented workaround to manually create the cron entry.

Hi Steve, I see no cron jobs. I looked at crontab -e as root and as stack on all nodes and in the undercloud, and it's not there, nor in /etc/cron.*

Is this with a fresh install of RHOS-8.0? These changes need to be on the undercloud for this to work: https://review.openstack.org/#/c/279338/ https://review.openstack.org/#/c/286290/

This bug is about creating a cron job, and it is created where needed. Actually testing that the job does what it's told is a different issue.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:1538
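A note on the "I see no cron jobs" exchange above: cron entries created by Puppet land in the crontab of whatever user the cron resource specifies, which for a service like Heat is typically the service user rather than root or stack. That the job runs as the heat user is an assumption here, not confirmed in this thread:

    # List cron entries for the heat service user (assumed user; adjust as needed).
    sudo crontab -l -u heat

If the entry was created by the Puppet change, it would appear there rather than under root, stack, or /etc/cron.*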