rhel-osp-director: OSP10 minor update fails with:

Error: Duplicate declaration: Package[python-memcache] is already declared; cannot redeclare at /etc/puppet/modules/oslo/manifests/cache.pp:159 on node overcloud-serviceapi-0.localdomain

Environment:
openstack-puppet-modules-9.3.0-1.el7ost.noarch
instack-undercloud-5.0.0-2.el7ost.noarch
openstack-tripleo-heat-templates-5.0.0-1.2.el7ost.noarch

Steps to reproduce:
1. Deploy overcloud with composable roles like so:

openstack overcloud deploy --templates $THT \
  -r ~/openstack_deployment/roles/roles_data.yaml \
  -e $THT/environments/network-isolation-v6.yaml \
  -e $THT/environments/network-management.yaml \
  -e $THT/environments/storage-environment.yaml \
  -e $THT/environments/tls-endpoints-public-ip.yaml \
  -e ~/openstack_deployment/environments/nodes.yaml \
  -e ~/openstack_deployment/environments/network-environment.yaml \
  -e ~/openstack_deployment/environments/disk-layout.yaml \
  -e ~/openstack_deployment/environments/public_vip.yaml \
  -e ~/openstack_deployment/environments/enable-tls.yaml \
  -e ~/openstack_deployment/environments/inject-trust-anchor.yaml \
  -e ~/openstack_deployment/environments/neutron-settings.yaml \
  --log-file overcloud_deployment.log &> overcloud_install.log

2. Try a minor update.
Result:

[stack@undercloud-0 ~]$ heat resource-list -n5 overcloud|grep -v COMPLE
WARNING (shell) "heat resource-list" is deprecated, please use "openstack stack resource list" instead
| resource_name              | physical_resource_id                 | resource_type                       | resource_status | updated_time         | stack_name |
| AllNodesDeploySteps        | 24a55be6-94eb-4603-9cb0-6ac16a2e8d4f | OS::TripleO::PostDeploySteps        | UPDATE_FAILED   | 2016-11-07T19:33:18Z | overcloud |
| 2                          | ac2a28b9-1acf-431c-a805-676fba08314b | OS::Heat::StructuredDeployment      | UPDATE_FAILED   | 2016-11-07T19:39:27Z | overcloud-AllNodesDeploySteps-txdhx6vakfgj-ServiceApiDeployment_Step4-wna25eveio7h |
| ControllerDeployment_Step4 | 9310423f-998d-4ecb-9de1-6f6d8cb7f518 | OS::Heat::StructuredDeploymentGroup | UPDATE_FAILED   | 2016-11-07T19:39:27Z | overcloud-AllNodesDeploySteps-txdhx6vakfgj |
| ServiceApiDeployment_Step4 | 2d323006-acd7-42c1-8ff2-07f7ee042dee | OS::Heat::StructuredDeploymentGroup | UPDATE_FAILED   | 2016-11-07T19:39:27Z | overcloud-AllNodesDeploySteps-txdhx6vakfgj |
| 1                          | c52615a2-4471-4a10-ab73-aa99d5f0e6c3 | OS::Heat::StructuredDeployment      | UPDATE_FAILED   | 2016-11-07T19:39:33Z | overcloud-AllNodesDeploySteps-txdhx6vakfgj-ServiceApiDeployment_Step4-wna25eveio7h |
| ComputeDeployment_Step4    | 6adf4d84-9f28-4625-b721-1190f1db9fc4 | OS::Heat::StructuredDeploymentGroup | UPDATE_FAILED   | 2016-11-07T19:39:33Z | overcloud-AllNodesDeploySteps-txdhx6vakfgj |
| 0                          | 05562849-edf6-4df3-b144-c3a17a01f39d | OS::Heat::StructuredDeployment      | UPDATE_FAILED   | 2016-11-07T19:39:35Z | overcloud-AllNodesDeploySteps-txdhx6vakfgj-ServiceApiDeployment_Step4-wna25eveio7h |
| 1                          | 921e276c-50ab-449a-aed0-b45adccf63c2 | OS::Heat::StructuredDeployment      | UPDATE_FAILED   | 2016-11-07T19:39:35Z | overcloud-AllNodesDeploySteps-txdhx6vakfgj-ControllerDeployment_Step4-pdydalow7rqy |

[stack@undercloud-0 ~]$ heat deployment-show 05562849-edf6-4df3-b144-c3a17a01f39d
WARNING (shell) "heat deployment-show" is deprecated, please use "openstack software deployment show" instead
{
"status": "FAILED",
"server_id": "010f2182-1c28-4b72-b870-052cd4658ae3",
"config_id": "2c81aedb-8f24-43a3-9771-1a9fced753a9",
"output_values": {
"deploy_stdout": "Matching apachectl 'Server version: Apache/2.4.6 (Red Hat Enterprise Linux)\nServer built: Aug 3 2016 08:33:27'\n\u001b[mNotice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.\u001b[0m\n",
"deploy_stderr": "exception: connect failed\n\u001b[1;31mWarning: Scope(Class[Cinder::Api]): keystone_enabled is deprecated, use auth_strategy instead.\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Keystone]): Fernet token is recommended in Mitaka release.
The default for token_provider will be changed to 'fernet' in O release.\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Keystone]): admin_password is required, please set admin_password to a value != admin_token. admin_token will be removed in a later release\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Keystone::Roles::Admin]): the main class is setting the admin password differently from this\\\n class when calling bootstrap. This will lead to the password\\\n flip-flopping and cause authentication issues for the admin user.\\\n Please ensure that keystone::roles::admin::password and\\\n keystone::admin_password are set the same.\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Glance::Api]): default_store not provided, it will be automatically set to glance.store.http.Store\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Heat]): keystone_user_domain_id is deprecated, use the name option instead.\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Heat]): keystone_project_domain_id is deprecated, use the name option instead.\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Neutron::Agents::L3]): parameter external_network_bridge is deprecated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Neutron::Server::Notifications]): nova_url is deprecated and will be removed after Newton cycle.\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova]): Could not look up qualified variable '::nova::scheduler::filter::cpu_allocation_ratio'; class ::nova::scheduler::filter has not been evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova]): Could not look up qualified variable '::nova::scheduler::filter::ram_allocation_ratio'; class ::nova::scheduler::filter has not been evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova]): Could not look up qualified variable '::nova::scheduler::filter::disk_allocation_ratio'; class ::nova::scheduler::filter has not been evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Mongodb::Server]): Replset specified, but no replset_members or replset_config 
provided.\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova::Keystone::Authtoken]): Could not look up qualified variable '::nova::api::admin_user'; class ::nova::api has not been evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova::Keystone::Authtoken]): Could not look up qualified variable '::nova::api::admin_password'; class ::nova::api has not been evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova::Keystone::Authtoken]): Could not look up qualified variable '::nova::api::admin_tenant_name'; class ::nova::api has not been evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova::Keystone::Authtoken]): Could not look up qualified variable '::nova::api::auth_uri'; class ::nova::api has not been evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova::Keystone::Authtoken]): Could not look up qualified variable '::nova::api::auth_version'; class ::nova::api has not been evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova::Keystone::Authtoken]): Could not look up qualified variable '::nova::api::identity_uri'; class ::nova::api has not been evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova::Scheduler::Filter]): ram_allocation_ratio is deprecated in nova::scheduler::filter, please add to nova::init instead\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova::Vncproxy::Common]): Could not look up qualified variable '::nova::compute::vncproxy_host'; class ::nova::compute has not been evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova::Vncproxy::Common]): Could not look up qualified variable '::nova::compute::vncproxy_protocol'; class ::nova::compute has not been evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova::Vncproxy::Common]): Could not look up qualified variable '::nova::compute::vncproxy_port'; class ::nova::compute has not been evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova::Vncproxy::Common]): Could not look up qualified variable '::nova::compute::vncproxy_path'; class ::nova::compute has not been 
evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Ceilometer]): Both $metering_secret and $telemetry_secret defined, using $telemetry_secret\u001b[0m\n\u001b[1;31mWarning: You cannot collect exported resources without storeconfigs being set; the collection will be ignored on line 166 in file /etc/puppet/modules/gnocchi/manifests/api.pp\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Gnocchi::Api]): gnocchi:api::keystone_identity_uri is deprecated, use gnocchi::keystone::authtoken::auth_url instead\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Gnocchi::Api]): gnocchi::api::keystone_auth_uri is deprecated, use gnocchi::keystone::authtoken::auth_uri instead\u001b[0m\n\u001b[1;31mWarning: Not collecting exported resources without storeconfigs\u001b[0m\n\u001b[1;31mError: Duplicate declaration: Package[python-memcache] is already declared; cannot redeclare at /etc/puppet/modules/oslo/manifests/cache.pp:159 on node overcloud-serviceapi-0.localdomain\u001b[0m\n\u001b[1;31mError: Duplicate declaration: Package[python-memcache] is already declared; cannot redeclare at /etc/puppet/modules/oslo/manifests/cache.pp:159 on node overcloud-serviceapi-0.localdomain\u001b[0m\n", "deploy_status_code": 1 }, "creation_time": "2016-11-07T18:24:51Z", "updated_time": "2016-11-07T19:40:48Z", "input_values": { "step": 4, "update_identifier": "1478545534" },
Assigning to Lucas for initial triage. lbezdick, can you please try to find some time today to have a look and see if you can work out what's going on? There's not much to go on here (no logs etc.), so you may have to sync up with Sasha when he comes on later, and we can discuss it in scrum this evening, as it was added to the QE blocker list. (@lbezdick: the needinfo flag is just to ping you about this.) Thanks.
Sasha, does this fail straight away? Is it failing as soon as you try updating the service node, or does the update start and successfully update some nodes before failing on one particular node (the service-api node, judging from the trace in comment #0)? Also, what is different between this environment, where the update of a stand-alone service-api deployment fails, and https://bugzilla.redhat.com/show_bug.cgi?id=1391716#c8 where, though it is an unrelated BZ, you also seem to be using a service-api node and it updates successfully?
19:43:12 stdout: starting package update on stack overcloud
19:43:12 WAITING
19:43:12 on_breakpoint: [u'overcloud-compute-0', u'overcloud-serviceapi-1', u'overcloud-controller-2', u'overcloud-cephstorage-0', u'overcloud-controller-1', u'overcloud-objectstorage-0', u'overcloud-controller-0', u'overcloud-serviceapi-0', u'overcloud-serviceapi-2']
19:43:12 Breakpoint reached, continue? Regexp or Enter=proceed (will clear f4ff235d-6607-4835-9b43-f2be407d8cfe), no=cancel update, C-c=quit interactive mode: IN_PROGRESS
19:43:12 WAITING
19:43:12 completed: [u'overcloud-serviceapi-2']
19:43:12 on_breakpoint: [u'overcloud-compute-0', u'overcloud-serviceapi-1', u'overcloud-controller-2', u'overcloud-cephstorage-0', u'overcloud-controller-1', u'overcloud-objectstorage-0', u'overcloud-controller-0', u'overcloud-serviceapi-0']
19:43:12 Breakpoint reached, continue? Regexp or Enter=proceed (will clear da35e770-88ea-493c-9d3a-655d378bd46d), no=cancel update, C-c=quit interactive mode: IN_PROGRESS
19:43:12 WAITING
19:43:12 completed: [u'overcloud-serviceapi-2', u'overcloud-serviceapi-0']
19:43:12 on_breakpoint: [u'overcloud-compute-0', u'overcloud-serviceapi-1', u'overcloud-controller-2', u'overcloud-cephstorage-0', u'overcloud-controller-1', u'overcloud-objectstorage-0', u'overcloud-controller-0']
19:43:12 Breakpoint reached, continue? Regexp or Enter=proceed (will clear 6a22ab2a-5ddc-4b01-bea0-e0374ce9b3fd), no=cancel update, C-c=quit interactive mode: IN_PROGRESS
19:43:12 WAITING
19:43:12 completed: [u'overcloud-controller-0', u'overcloud-serviceapi-2', u'overcloud-serviceapi-0']
19:43:12 on_breakpoint: [u'overcloud-compute-0', u'overcloud-serviceapi-1', u'overcloud-controller-2', u'overcloud-cephstorage-0', u'overcloud-controller-1', u'overcloud-objectstorage-0']
19:43:12 Breakpoint reached, continue? Regexp or Enter=proceed (will clear 8ce56357-2c37-4606-8f80-47e83f284dec), no=cancel update, C-c=quit interactive mode: WAITING
19:43:12 completed: [u'overcloud-objectstorage-0', u'overcloud-controller-0', u'overcloud-serviceapi-2', u'overcloud-serviceapi-0']
19:43:12 on_breakpoint: [u'overcloud-compute-0', u'overcloud-controller-2', u'overcloud-cephstorage-0', u'overcloud-controller-1', u'overcloud-serviceapi-1']
19:43:12 Breakpoint reached, continue? Regexp or Enter=proceed (will clear 11bbc3cc-4a6f-4c3a-9ca4-ec363f70cb46), no=cancel update, C-c=quit interactive mode: WAITING
19:43:12 completed: [u'overcloud-objectstorage-0', u'overcloud-controller-0', u'overcloud-serviceapi-2', u'overcloud-serviceapi-0', u'overcloud-serviceapi-1']
19:43:12 on_breakpoint: [u'overcloud-compute-0', u'overcloud-controller-2', u'overcloud-cephstorage-0', u'overcloud-controller-1']
19:43:12 Breakpoint reached, continue? Regexp or Enter=proceed (will clear dbb1297e-a13f-4a23-95da-4525448399e5), no=cancel update, C-c=quit interactive mode: IN_PROGRESS
19:43:12 WAITING
19:43:12 completed: [u'overcloud-serviceapi-1', u'overcloud-controller-1', u'overcloud-objectstorage-0', u'overcloud-controller-0', u'overcloud-serviceapi-0', u'overcloud-serviceapi-2']
19:43:12 on_breakpoint: [u'overcloud-compute-0', u'overcloud-controller-2', u'overcloud-cephstorage-0']
19:43:12 Breakpoint reached, continue? Regexp or Enter=proceed (will clear a2133ee2-fffd-4712-9c7d-81e91492c9e2), no=cancel update, C-c=quit interactive mode: WAITING
19:43:12 completed: [u'overcloud-serviceapi-1', u'overcloud-cephstorage-0', u'overcloud-controller-1', u'overcloud-objectstorage-0', u'overcloud-controller-0', u'overcloud-serviceapi-0', u'overcloud-serviceapi-2']
19:43:12 on_breakpoint: [u'overcloud-compute-0', u'overcloud-controller-2']
19:43:12 Breakpoint reached, continue? Regexp or Enter=proceed (will clear 95a53b37-4400-4896-ba11-b6fb2cf4e7b7), no=cancel update, C-c=quit interactive mode: IN_PROGRESS
19:43:12 WAITING
19:43:12 completed: [u'overcloud-serviceapi-1', u'overcloud-controller-2', u'overcloud-cephstorage-0', u'overcloud-controller-1', u'overcloud-objectstorage-0', u'overcloud-controller-0', u'overcloud-serviceapi-0', u'overcloud-serviceapi-2']
19:43:12 on_breakpoint: [u'overcloud-compute-0']
19:43:12 Breakpoint reached, continue? Regexp or Enter=proceed (will clear 5c3b159d-ac9b-4e4b-85c6-623458f59dbd), no=cancel update, C-c=quit interactive mode: IN_PROGRESS
19:43:12 IN_PROGRESS
19:43:12 IN_PROGRESS
19:43:12 IN_PROGRESS
19:43:12 IN_PROGRESS
19:43:12 IN_PROGRESS
19:43:12 IN_PROGRESS
19:43:12 IN_PROGRESS
19:43:12 IN_PROGRESS
19:43:12 IN_PROGRESS
19:43:12 IN_PROGRESS
19:43:12 IN_PROGRESS
19:43:12 FAILED
19:43:12 update finished with status FAILED

I need to see if the issue reproduces with the same kind of deployment. I noticed that I haven't seen a successful update of a standalone pcmk deployment. Two more deployments failed earlier due to env issues, so I didn't even get to the same stage.
This is updating from one pre-release to another. Not really a bug IMHO. Running a deploy fixes this.
Verified.
Environment:
openstack-tripleo-common-5.3.0-6.el7ost.noarch
Was able to minor update OSP 10.
Reproduced it with:
openstack-puppet-modules-9.3.0-1.el7ost.noarch
instack-undercloud-5.1.0-3.el7ost.noarch
openstack-tripleo-heat-templates-5.1.0-5.el7ost.noarch
Reproduced again - reopening.
Created attachment 1224180 [details] the failing puppet manifest from service node 2
I just had a look at the environment Sasha sent by mail - indeed I see the error on service node 2 (it is on all of the service nodes):

/var/log/messages-Nov 24 14:30:39 host-192-168-0-23 os-collect-config: #033[1;31mError: Duplicate declaration: Package[python-memcache] is already declared; cannot redeclare at /etc/puppet/modules/oslo/manifests/cache.pp:159 on node overcloud-serviceapi-2.localdomain#033[0m
/var/log/messages-Nov 24 14:30:39 host-192-168-0-23 os-collect-config: #033[1;31mError: Duplicate declaration: Package[python-memcache] is already declared; cannot redeclare at /etc/puppet/modules/oslo/manifests/cache.pp:159 on node overcloud-serviceapi-2.localdomain#033[0m
/var/log/messages-Nov 24 14:30:39 host-192-168-0-23 os-collect-config: [2016-11-24 19:30:39,203] (heat-config) [ERROR] Error running /var/lib/heat-config/heat-config-puppet/091bab2d-e596-48b2-a8c3-45528922d759.pp. [1]

That ^^ .pp is https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/manifests/overcloud_role.pp - on the box it has the full list of includes; I attached that in comment #12.

I suspect this would then happen on any stack update operation against this type of OSP 10 deployment, so removing DFG:Lifecycle and adding DFG:DF.
We'll need more info to debug this. IIRC I hit this locally and updating all the puppet modules fixed it, so I suspect we need an update to either puppet-tripleo or puppet-oslo (or possibly puppet-keystone or puppet-horizon, which also reference memcache).

Please can we have the full package list, e.g.:

rpm -qa | grep puppet

Also, please see where else it's declared, e.g.:

cd /etc/puppet/modules/
grep -R "python-memcache" ./*

Also, if you've got the failing manifest on the node, it'd make sense to run it in debug mode, which may help pinpoint where the conflicting definition is. Add "step: 4" to a hieradata file in /etc/puppet/hieradata, check that it's read by "hiera step", then run:

puppet apply --debug /var/lib/heat-config/heat-config-puppet/091bab2d-e596-48b2-a8c3-45528922d759.pp

(or whatever the failing manifest name is when you reproduce).
This is caused by ensure_packages defaulting ensure to 'present'. During an update we have "Package <| |> { ensure => latest }", which collides with the second ensure_packages declaration of the same resource. I suggest switching ensure_packages to ensure_resource('package', '....'), as that does not enforce ensure => present.
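The collision described above can be sketched with a toy model (plain Python, purely illustrative; the `catalog`, `declare`, and `ensure_packages` helpers here are hypothetical and real Puppet catalog semantics are more involved): once the update's collector has rewritten the existing declaration to ensure => latest, a second hard-coded ensure => present declaration no longer matches the existing one and is rejected as a duplicate.

```python
# Toy model of the duplicate-declaration collision (illustration only;
# these helpers are made up for this sketch, not real Puppet internals).
catalog = {}

def declare(title, params):
    """Redeclaring a resource is only tolerated if the params match exactly."""
    if title in catalog and catalog[title] != params:
        raise RuntimeError(f"Duplicate declaration: {title} is already declared")
    catalog[title] = dict(params)

def ensure_packages(pkg):
    # Like stdlib's ensure_packages: hard-codes ensure => 'present'
    declare(f"Package[{pkg}]", {"ensure": "present"})

# Step 1: one module declares the package
ensure_packages("python-memcache")

# Step 2: the minor update applies "Package <| |> { ensure => latest }",
# rewriting every package resource already in the catalog
for params in catalog.values():
    params["ensure"] = "latest"

# Step 3: a second module calls ensure_packages for the same package;
# 'present' no longer matches 'latest', so the declaration is rejected
error = ""
try:
    ensure_packages("python-memcache")
except RuntimeError as exc:
    error = str(exc)

print(error)  # Duplicate declaration: Package[python-memcache] is already declared
```

In this model, a declaration that omits ensure altogether would simply be skipped when the resource already exists, rather than conflicting - which is the behavior the suggested ensure_resource change relies on.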
Clearing needinfo as Lukas has now debugged the problem (nice work, thanks!) :)
Lukas, can you write the doc_text for this one as a known issue?
Added stable/newton to trackers and setting to POST
Verified.
Environment:
puppet-oslo-9.4.0-2.el7ost.noarch
puppet-horizon-9.4.1-2.el7ost.noarch
The reported issue doesn't reproduce.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-2978.html