Description of problem: For HA, we must have ntp configured on the overcloud nodes. This ability exists in the CLI but needs testing.
Ana, can you add the options for setting the ntp server on the hosts?
No ntp rpm is installed on the overcloud, and there is no ntpstat:

[root@overcloud-controller-2 heat-admin]# rpm -qa | grep ntp
fontpackages-filesystem-1.44-8.el7.noarch
Is this the right package name?
It depends on what puppet is configuring. There was a question of whether the image had chrony installed, since that is the RHEL 7 default, but it appears neither is installed today. openstack-puppet-modules has many references to ntpd but none to chrony, so this should be installing the ntpd and ntpdate packages.
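A quick way to confirm what the image actually has (a sketch; the /usr/share/openstack-puppet/modules path is an assumption about where openstack-puppet-modules is unpacked):

rpm -q chrony ntp ntpdate                                    # what is installed
yum info ntp                                                 # is the package even available?
grep -rl chrony /usr/share/openstack-puppet/modules | head   # any chrony support in the modules?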
I poked at this this afternoon and here are my notes, in case they are useful for anyone else; otherwise I will revisit on Monday.

* There is no ntp/ntpd package available for my RHEL overcloud box, and indeed chrony is available and already installed.
* We are wired up to configure ntp (unified CLI, heat templates, etc.): passing --ntp-server "0.fedora.pool.ntp.org" will make it so that we include ::ntp, see https://github.com/rdo-management/python-rdomanager-oscplugin/blob/master/rdomanager_oscplugin/v1/overcloud_deploy.py#L610 and https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/manifests/overcloud_controller.pp#L43
* BUT when you enable --ntp-server, the deploy fails because it tries to install ntp and there is no such package on my box.
* I am trying to trick puppet-ntp into configuring chrony (there is no official-looking puppet-chrony). The latest attempt, in the controller/compute hieradata (right now we just have include ::ntp), looks like:

if count(hiera('ntp::servers')) > 0 {
  class { 'ntp':
    config       => '/etc/chrony.conf',
    keys_file    => '/etc/chrony.keys',
    service_name => 'chronyd',
  }
}

but that is failing like this (I'll keep poking next week, unless someone else picks it up):

4425:Jun 26 11:27:45 overcloud-controller-0.localdomain chronyd[20165]: Fatal error : Invalid command at line 7 in file /etc/chrony.conf
4559:Jun 26 11:28:31 overcloud-controller-0.localdomain os-collect-config[4368]: The package type's allow_virtual parameter will be changing its default value from false to true in a future release. If you do not want to allow virtual packages, please explicitly set allow_virtual to false. (at /usr/share/ruby/vendor_ruby/puppet/type.rb:816:in `set_default')
4660:Jun 26 11:28:31 overcloud-controller-0.localdomain os-collect-config[4368]: Error: /Stage[main]/Ntp::Service/Service[ntp]: Failed to call refresh: Could not restart Service[ntp]: Execution of '/usr/bin/systemctl restart chronyd' returned 1: Job for chronyd.service failed. See 'systemctl status chronyd.service' and 'journalctl -xn' for details.
4661:Jun 26 11:28:31 overcloud-controller-0.localdomain os-collect-config[4368]: Error: /Stage[main]/Ntp::Service/Service[ntp]: Could not restart Service[ntp]: Execution of '/usr/bin/systemctl restart chronyd' returned 1: Job for chronyd.service failed. See 'systemctl status chronyd.service' and 'journalctl -xn' for details.

and the deploy ends with "deploy_status_code": 6.
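For anyone hitting the same failure, the error message itself suggests the next debugging steps; a minimal sketch, run as root on the failed controller (the likely culprit, though this is an inference, is that puppet-ntp renders its ntp.conf template into /etc/chrony.conf, and chronyd rejects the ntpd-only directives):

systemctl status chronyd.service   # why the restart failed
journalctl -xn                     # the most recent journal entries
sed -n '1,10p' /etc/chrony.conf    # line 7 holds the "Invalid command"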
chrony may be the default, but we don't have to use it. It's probably simplest to include ntpd and ntpdate in the image and not include chrony. FWIW, I didn't see chrony on my image (though I could certainly have typoed).
(In reply to marios from comment #5)
> I poked at this this afternoon and here are my notes [...]

marios, I'd probably just forgo trying to make puppet-ntp work with chrony. I've pushed up a patch to just install ntp on all the images.
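Once that patch lands, a rough way to confirm the packages made it into the image before deploying (assuming libguestfs tools on the undercloud and the usual overcloud-full.qcow2 image name, both of which are assumptions here):

virt-ls -a overcloud-full.qcow2 /etc | grep -i ntp   # ntp.conf should be present if the ntp rpm is in
rpm -q ntp ntpdate                                   # and after deploy, on any overcloud node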
*** Bug 1236338 has been marked as a duplicate of this bug. ***
Awesome, thanks for the update.
*** Bug 1194571 has been marked as a duplicate of this bug. ***
*** Bug 1237108 has been marked as a duplicate of this bug. ***
*** Bug 1238528 has been marked as a duplicate of this bug. ***
To verify, the deployment command must be run with --ntp-server 0.au.pool.ntp.org.
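For example, a sketch of the unified CLI invocation referenced in comment #5 (assuming the --templates workflow; any other deploy flags for your environment go alongside it):

openstack overcloud deploy --templates --ntp-server 0.au.pool.ntp.org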
Tested on controllers and computes with openstack-tripleo-puppet-elements-0.0.1-4.el7ost.noarch.
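A quick post-deploy check on each node (ntpd is the service name the RHEL 7 ntp package ships):

rpm -qa | grep ntp       # should now list ntp/ntpdate, not just fontpackages-filesystem
systemctl is-active ntpd
ntpstat                  # reports whether the clock is synchronised
ntpq -p                  # lists the configured peers, e.g. 0.au.pool.ntp.org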
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2015:1549