Description of problem:
Red Hat Enterprise Linux OpenStack Platform uses a MariaDB database in the OpenStack control plane for data persistence. This database acts as a system backend, storing configuration and historical data for certain OpenStack services, including Compute, Identity Service, and Block Storage. To keep these tables from growing without bound, the purge commands below should run from cron on a recurring schedule (at least hourly for token flushing), and the cron jobs should be created automatically during director installation.

Version-Release number of selected component (if applicable):
OSP 7
OSP 8
OSP 9

++++++++++++++++++++++++++++
Identity Service (Keystone)
++++++++++++++++++++++++++++
~~~~~~~~~~
/usr/bin/keystone-manage token_flush

Example:
0 */1 * * * /usr/bin/keystone-manage token_flush >/dev/null 2>&1
~~~~~~~~~~

++++++++++++++++++++++++++++
Compute Service (Nova)
++++++++++++++++++++++++++++
~~~~~~~~~~
/usr/bin/nova-manage db archive_deleted_rows

Example:
0 */12 * * * /usr/bin/nova-manage db archive_deleted_rows >/dev/null 2>&1
~~~~~~~~~~

++++++++++++++++++++++++++++
Block Storage (Cinder)
++++++++++++++++++++++++++++
~~~~~~~~~~
/usr/bin/cinder-manage db purge 1

Example:
0 */24 * * * /usr/bin/cinder-manage db purge 1 >/dev/null 2>&1
~~~~~~~~~~

++++++++++++++++++++++++++++
Image Service (Glance)
++++++++++++++++++++++++++++
~~~~~~~~~~
Glance also makes use of "soft-deleted" rows; however, these can currently only be removed using manual SQL commands or custom scripting (see the sketch at the end of this description).
~~~~~~~~~~

++++++++++++++++++++++++++++
Telemetry (Ceilometer)
++++++++++++++++++++++++++++
~~~~~~~~~~
The value can be set manually in /etc/ceilometer/ceilometer.conf:

time_to_live = 2592000

A cron job can then remove the expired data:

0 0 * * * ceilometer-expirer --config-file /etc/ceilometer/ceilometer.conf

There is already a configurable in the templates to perform the same:
https://access.redhat.com/solutions/2219091
~~~~~~~~~~

Additional info:
There is already an article covering this, but all of its sections need to be revised:
https://access.redhat.com/solutions/2219091
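Since Glance ships no purge command in these releases, the cleanup referenced in the Glance section would have to be scripted by hand. A minimal sketch follows, assuming the standard Glance schema (the images table plus child tables, each carrying deleted/deleted_at columns) and a 30-day retention window; verify the table names against your deployment and take a database backup before running anything like this:

~~~~~~~~~~
#!/bin/bash
# Sketch only: purge Glance rows soft-deleted more than RETENTION_DAYS ago.
# Assumes the mysql client can authenticate to the "glance" database
# (e.g. via /root/.my.cnf on the controller).
RETENTION_DAYS=30
mysql glance <<SQL
-- Child tables first, to avoid foreign-key violations against images.
-- Other child tables (e.g. image_locations) may also need purging.
DELETE FROM image_properties WHERE deleted = 1 AND deleted_at < NOW() - INTERVAL ${RETENTION_DAYS} DAY;
DELETE FROM image_members    WHERE deleted = 1 AND deleted_at < NOW() - INTERVAL ${RETENTION_DAYS} DAY;
DELETE FROM image_tags       WHERE deleted = 1 AND deleted_at < NOW() - INTERVAL ${RETENTION_DAYS} DAY;
DELETE FROM images           WHERE deleted = 1 AND deleted_at < NOW() - INTERVAL ${RETENTION_DAYS} DAY;
SQL
~~~~~~~~~~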
The customer is asking about the progress of this bugzilla. Can we have an update on this?
There is already a KCS article for the same, but it needs to be revised, and these jobs should be configured automatically during OSP deployment:
https://access.redhat.com/articles/1553233

There is no cron job configuration for Glance and Network; these sections should be revised:
Glance - https://access.redhat.com/articles/1553233#Glance
Network - https://access.redhat.com/articles/1553233#Neutron
Yes, some items have already been included in newer versions. I see that Cinder, Nova, Heat, and Keystone all have their crons set up automatically. It doesn't seem that we have anything for Glance and Network, however.
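For reference, a quick way to confirm which jobs the deployment created is to list each service user's crontab on a controller. This assumes the jobs are installed under the service accounts, which may vary by release:

~~~~~~~~~~
# List the cron entries created by the deployment for each service user.
crontab -l -u keystone   # expect: keystone-manage token_flush
crontab -l -u nova       # expect: nova-manage db archive_deleted_rows
crontab -l -u cinder     # expect: cinder-manage db purge
crontab -l -u heat       # expect: heat-manage purge_deleted
~~~~~~~~~~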
Nilesh, if we are asking only for Glance and Network, can you please create separate BZs for the Storage and Network teams? They will need to work on those independently. I will leave it up to you whether you want to re-use this one for storage and create a new one for network, or close this one and create two new BZs, but we will need them separate. Thanks, Jarda

PS: Tag them accordingly in the Internal Whiteboard as DFG:Storage or DFG:Networking, so they get attention from the right teams.
Hello, I am not asking specifically for Glance and Network; I am asking for all components, and the cron jobs should be configured automatically during director installation. I will open separate bugs for Glance and Network. For now, my question is: can we apply this configuration during director installation via heat templates, so that no manual configuration is needed?
Correct, but if I understand correctly, the identified missing ones are those two, right? Absolutely, make it part of the BZ request that you want this enabled via director. Each team now owns the A-Z solution (including deployment), and since director is our official deployment & management tool, it is more than reasonable to request enablement through it.
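For illustration, director enablement would presumably mean exposing the cron schedules as template parameters that operators can override in an environment file at deploy time. The parameter names in this sketch are hypothetical placeholders, not confirmed tripleo-heat-templates parameters:

~~~~~~~~~~
# Illustrative only: the parameter names below are hypothetical
# placeholders, not verified tripleo-heat-templates parameters.
cat > cron-tuning.yaml <<'EOF'
parameter_defaults:
  KeystoneCronTokenFlushHour: '*/1'       # hypothetical knob for token_flush
  NovaCronArchiveDeletedRowsHour: '*/12'  # hypothetical knob for archive_deleted_rows
  CinderCronDbPurgeAge: '1'               # hypothetical knob for purge age (days)
EOF
openstack overcloud deploy --templates -e cron-tuning.yaml
~~~~~~~~~~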
Yes, correct. The two BZs are:
[1] Glance - https://bugzilla.redhat.com/show_bug.cgi?id=1427765
[2] Network - https://bugzilla.redhat.com/show_bug.cgi?id=1427766
Closing out as the bugs that this depended on are closed. If future RFEs are requested around this topic, please file bugs separately for each affected service.