Description of problem:

After upgrading from OSP11 to the OSP12 containerized architecture, the cron jobs used by keystone, nova, etc. are still running as commands on the host. I'd expect them to be run inside the newly created containers.

Cron jobs:

[root@controller-0 cron]# ls -l /var/spool/cron/
total 16
-rw-------. 1 root root 476 Jul 31 11:43 ceilometer
-rw-------. 1 root root 494 Jul 31 11:49 heat
-rw-------. 1 root root 676 Jul 31 11:50 keystone
-rw-------. 1 root root 510 Jul 31 11:52 nova

[root@controller-0 cron]# cat /var/spool/cron/heat
# HEADER: This file was autogenerated at 2017-07-31 11:49:24 +0000 by puppet.
# HEADER: While it can still be managed manually, it is definitely not recommended.
# HEADER: Note particularly that the comments starting with 'Puppet Name' should
# HEADER: not be deleted, as doing so could cause duplicate cron jobs.
# Puppet Name: heat-manage purge_deleted
PATH=/bin:/usr/bin:/usr/sbin
SHELL=/bin/sh
1 0 * * * sleep `expr ${RANDOM} \% 3600`; heat-manage purge_deleted -g days 30 >>/dev/null 2>&1

[root@controller-0 cron]# cat /var/spool/cron/keystone
# HEADER: This file was autogenerated at 2017-07-31 11:50:18 +0000 by puppet.
# HEADER: While it can still be managed manually, it is definitely not recommended.
# HEADER: Note particularly that the comments starting with 'Puppet Name' should
# HEADER: not be deleted, as doing so could cause duplicate cron jobs.
# Puppet Name: keystone-manage token_flush
PATH=/bin:/usr/bin:/usr/sbin
SHELL=/bin/sh
1 * * * * sleep `expr ${RANDOM} \% 0`; keystone-manage token_flush >>/var/log/keystone/keystone-tokenflush.log 2>&1
# Puppet Name: cinder-manage db purge
PATH=/bin:/usr/bin:/usr/sbin
SHELL=/bin/sh
1 0 * * * cinder-manage db purge 0 >>/var/log/cinder/cinder-rowsflush.log 2>&1

[root@controller-0 cron]# cat /var/spool/cron/nova
# HEADER: This file was autogenerated at 2017-07-31 11:52:44 +0000 by puppet.
# HEADER: While it can still be managed manually, it is definitely not recommended.
# HEADER: Note particularly that the comments starting with 'Puppet Name' should
# HEADER: not be deleted, as doing so could cause duplicate cron jobs.
# Puppet Name: nova-manage db archive_deleted_rows
PATH=/bin:/usr/bin:/usr/sbin
SHELL=/bin/sh
1 0 * * * nova-manage db archive_deleted_rows --max_rows 100 >>/var/log/nova/nova-rowsflush.log 2>&1

Version-Release number of selected component (if applicable):
openstack-tripleo-heat-templates-7.0.0-0.20170721174554.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Deploy OSP11
2. Upgrade to OSP12
3. Check the OpenStack-related cron jobs set up by OSPd

Actual results:
The cron jobs are running as regular commands on the host.

Expected results:
The cron jobs are running inside the containers created during the upgrade.

Additional info:
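A quick way to run the check in step 3 above, as a sketch (the service-user list is an assumption based on the crontabs shown; adjust it to the deployed services):

# Hypothetical check: list host-level crontabs for the OpenStack service
# users and the running cron containers; after the upgrade the host
# crontabs should be gone and only the containers should carry the jobs.
for user in ceilometer heat keystone nova; do
    echo "== host crontab for ${user} =="
    crontab -l -u "${user}" 2>/dev/null || echo "(none)"
done
docker ps --format '{{.Names}}' | grep cron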
It looks like we're missing the cron job removal during the upgrade. Currently, with the latest puddle, we have the nova_api_cron container running but the cron job still exists on the host:

[root@controller-1 heat-admin]# docker ps | grep cron
053adcc4b586  192.168.24.1:8787/rhosp12/openstack-nova-api-docker:2017-07-26.10  "/usr/sbin/crond -n"  14 hours ago  Up 14 hours  nova_api_cron

[root@controller-1 heat-admin]# ls -l /var/spool/cron/
total 12
-rw-------. 1 root root 494 Jul 31 15:30 heat
-rw-------. 1 root root 676 Jul 31 15:31 keystone
-rw-------. 1 root root 510 Jul 31 15:32 nova
How are the cron containers running the cron jobs? I checked /var/spool/cron/ inside the containers and it's empty:

[root@controller-1 heat-admin]# docker exec -it nova_api_cron bash -c 'ls /var/spool/cron/'
Should be resolved by https://review.openstack.org/485858
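Once that change is in, one way to confirm the jobs actually moved into the container would be something like the following (assuming the nova user and the container name seen above):

docker exec nova_api_cron crontab -l -u nova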
After the upgrade, cron jobs are running both on the host and inside the containers. I believe we're missing removing the ones running on the host during the upgrade:

[root@controller-0 ~]# ls -l /var/spool/cron/
total 12
-rw-------. 1 root root 494 Nov 8 12:54 heat
-rw-------. 1 root root 647 Nov 8 12:54 keystone
-rw-------. 1 root root 510 Nov 8 12:58 nova

[root@controller-0 ~]# docker ps | grep cron
4bb5f4a42888  rhos-qe-mirror-tlv.usersys.redhat.com:5000/rhosp12/openstack-heat-api-docker:20171103.1  "kolla_start"  11 minutes ago  Up 11 minutes (healthy)  heat_api_cron
ea63fc2ded3a  rhos-qe-mirror-tlv.usersys.redhat.com:5000/rhosp12/openstack-cron-docker:20171103.1  "kolla_start"  11 minutes ago  Up 11 minutes  logrotate_crond
a01afae02e7c  rhos-qe-mirror-tlv.usersys.redhat.com:5000/rhosp12/openstack-nova-api-docker:20171103.1  "kolla_start"  11 minutes ago  Up 11 minutes (healthy)  nova_api_cron
93ded582b6d8  rhos-qe-mirror-tlv.usersys.redhat.com:5000/rhosp12/openstack-keystone-docker:20171103.1  "/bin/bash -c '/usr/l"  14 minutes ago  Up 14 minutes (healthy)  keystone_cron

[root@controller-0 ~]# grep autogenerated /var/spool/cron/nova
# HEADER: This file was autogenerated at 2017-11-08 12:58:19 +0000 by puppet.
[root@controller-0 ~]# docker exec -it nova_api_cron grep autogenerated /var/spool/cron/nova
# HEADER: This file was autogenerated at 2017-11-08 16:15:23 +0000 by puppet.
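A small sketch to flag this double-scheduling state, assuming the service-user/container pairs seen above (hypothetical pairs; adjust to the deployment, and this assumes the crontab command is available inside the images):

# Flag services whose crontab is present both on the host and inside
# the corresponding cron container (i.e. whose jobs would run twice).
for pair in heat:heat_api_cron keystone:keystone_cron nova:nova_api_cron; do
    user=${pair%%:*}; container=${pair##*:}
    if crontab -l -u "${user}" >/dev/null 2>&1 && \
       docker exec "${container}" crontab -l -u "${user}" >/dev/null 2>&1; then
        echo "${user}: crontab present on the host AND in ${container}"
    fi
done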
Removing untriaged; to be discussed with the team.
Carlos, can you take a look and see if any remaining cleanup is needed?
Hey Mike, checking the code, this bug should already be fixed.

The cron cleanup was merged with this review: https://review.openstack.org/#/c/490496/

The following cron files are cleaned up by the upgrade tasks in:

docker/services/cinder-api.yaml
docker/services/heat-api.yaml
docker/services/keystone.yaml
docker/services/nova-api.yaml

Ceilometer services were removed.

Marius, can you confirm that the fix was tested?
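For illustration, the host-side effect of that cleanup is roughly the following (a sketch only; the merged change implements this as upgrade tasks in the templates listed above, so the actual mechanism may differ):

# Remove the stale host crontabs for the now-containerized services.
for user in cinder heat keystone nova; do
    crontab -r -u "${user}" 2>/dev/null || true
done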
(In reply to Carlos Camacho from comment #14)
> Marius, can you confirm that the fix was tested?

https://review.openstack.org/#/c/490496/ had been merged for some time (since August), so it was already part of the build used for testing at the time of comment #10. This issue was addressed by https://review.openstack.org/#/c/517082/, which is present in the latest puddle build.
[root@controller-0 ~]# ls -l /var/spool/cron/
total 4
-rw-------. 1 root root 474 Nov 21 22:04 cinder

[root@controller-0 ~]# docker ps | grep cron
4ec0c3ac4038  rhos-qe-mirror-brq.usersys.redhat.com:5000/rhosp12/openstack-heat-api-docker:20171121.1  "kolla_start"  31 minutes ago  Up 31 minutes (healthy)  heat_api_cron
94301582305f  rhos-qe-mirror-brq.usersys.redhat.com:5000/rhosp12/openstack-cron-docker:20171121.1  "kolla_start"  31 minutes ago  Up 31 minutes  logrotate_crond
7a8073813cc4  rhos-qe-mirror-brq.usersys.redhat.com:5000/rhosp12/openstack-nova-api-docker:20171121.1  "kolla_start"  31 minutes ago  Up 31 minutes (healthy)  nova_api_cron
fbda5319036e  rhos-qe-mirror-brq.usersys.redhat.com:5000/rhosp12/openstack-keystone-docker:20171121.1  "/bin/bash -c '/usr/l"  34 minutes ago  Up 34 minutes (healthy)  keystone_cron

[root@controller-0 ~]# docker exec nova_api_cron ls -l /var/spool/cron/
total 4
-rw-------. 1 root root 510 Nov 22 13:23 nova
[root@controller-0 ~]# docker exec heat_api_cron ls -l /var/spool/cron/
total 4
-rw-------. 1 root root 494 Nov 22 13:23 heat
[root@controller-0 ~]# docker exec keystone_cron ls -l /var/spool/cron/
total 4
-rw-------. 1 root root 487 Nov 22 13:23 keystone
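The per-container checks above can also be collapsed into one loop (container/user pairs as shown above):

for pair in heat_api_cron:heat keystone_cron:keystone nova_api_cron:nova; do
    echo "== ${pair%%:*} =="
    docker exec "${pair%%:*}" ls -l /var/spool/cron/
done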
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:3462