Description of problem:
Nodes in Ironic still point to the old deploy ramdisk and kernel after the images are updated in Glance.

Environment:
python2-glanceclient-2.10.0-1.el7ost.noarch
openstack-ironic-common-10.1.2-3.el7ost.noarch
python-ironic-inspector-client-3.1.1-1.el7ost.noarch
puppet-ironic-12.4.0-0.20180329034302.8285d85.el7ost.noarch
openstack-ironic-inspector-7.2.1-0.20180409163359.2435d97.el7ost.noarch
python-glance-16.0.1-2.el7ost.noarch
puppet-glance-12.5.0-0.20180329032353.e5a1256.el7ost.noarch
python2-ironicclient-2.2.0-1.el7ost.noarch
instack-undercloud-8.4.1-2.el7ost.noarch
openstack-ironic-staging-drivers-0.9.0-4.el7ost.noarch
openstack-glance-16.0.1-2.el7ost.noarch
openstack-ironic-api-10.1.2-3.el7ost.noarch
python2-glance-store-0.23.1-0.20180213060248.ad7df98.el7ost.noarch
python2-ironic-neutron-agent-1.0.0-1.el7ost.noarch
openstack-ironic-conductor-10.1.2-3.el7ost.noarch
python-ironic-lib-2.12.1-1.el7ost.noarch

Steps to reproduce:
1. Have nodes with a deploy kernel and ramdisk assigned in Ironic.
2. Update the images (openstack overcloud image upload --update-existing).
3. Try to deploy the overcloud.

Result:
Started Mistral Workflow tripleo.validations.v1.check_pre_deployment_validations. Execution ID: 0174f323-d875-4217-9d72-64b56e7b0448
Waiting for messages on queue 'tripleo' with no timeout.

The pre-deployment validation fails. Each of the following nodes reports the same two errors (and no warnings):

  16ba4f61-5020-4d28-b161-b802160ef51d
  957f5b77-89dd-4b24-94ca-c6c8a3eef6c1
  8aad5403-b951-432f-bc44-f9fdc66a5816
  912e888e-5627-43db-8ca2-a8889c3688f3
  92ddbc82-400d-45e7-a367-8fbfd13ad2d1
  3355be67-acb5-4baf-9323-823c65dcd268
  a0008f5b-9c98-4f16-9b85-44bed47b26fd

  Node <UUID> has an incorrectly configured driver_info/deploy_ramdisk. Expected "48fc374f-52bb-413f-965d-94f5cf9e4b0c" but got "855761a7-fdb2-4b75-8167-cd367f308743".
  Node <UUID> has an incorrectly configured driver_info/deploy_kernel. Expected "a3d9c52b-901c-4388-8fc5-9f7884ca88d6" but got "3f19ddde-441a-461e-8c30-80f1f02d38ee".

ERRORS

Workaround: manually assign the new kernel and ramdisk with the command below:

for node in `openstack baremetal node list -f value -c Name`; do
  openstack overcloud node configure --deploy-kernel <UUID> --deploy-ramdisk <UUID> $node
done
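The workaround above still leaves the two <UUID> placeholders to be filled in by hand. A minimal sketch of automating that lookup (untested against a live undercloud; it assumes the default undercloud image names "bm-deploy-kernel" and "bm-deploy-ramdisk", and the helper name is hypothetical):

```shell
#!/bin/bash
# Hypothetical helper: after `openstack overcloud image upload --update-existing`,
# look up the new deploy image UUIDs in Glance and repoint every Ironic node.
update_deploy_images() {
    local kernel ramdisk node
    # Resolve the current UUIDs of the deploy images by name.
    kernel=$(openstack image show bm-deploy-kernel -f value -c id)
    ramdisk=$(openstack image show bm-deploy-ramdisk -f value -c id)
    # Rewrite driver_info/deploy_kernel and driver_info/deploy_ramdisk
    # on each registered node.
    for node in $(openstack baremetal node list -f value -c UUID); do
        openstack overcloud node configure \
            --deploy-kernel "$kernel" --deploy-ramdisk "$ramdisk" "$node"
    done
}
```

Run `update_deploy_images` once after each image upload instead of pasting UUIDs into the loop by hand.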
Isn't this by design, since we don't want to affect nodes when we make changes to Glance?
Bob is right, this is by design. However, I also agree that it is extremely confusing. Maybe we should make the image upload command update the nodes as well? Or maybe we should at least make it print "Deploy images updated, run `openstack overcloud node configure` for affected nodes"? Ideas are welcome.
I like the message to update the nodes.
Verified.

Environment:
python-tripleoclient-9.2.6-2.el7ost.noarch

This is resolved with a message being shown to the user/admin upon uploading new images:

openstack overcloud image upload --update-existing
Image "overcloud-full-vmlinuz" is up-to-date, skipping.
Image "overcloud-full-initrd" is up-to-date, skipping.
Image "overcloud-full" was uploaded.
+--------------------------------------+----------------+-------------+------------+--------+
| ID                                   | Name           | Disk Format | Size       | Status |
+--------------------------------------+----------------+-------------+------------+--------+
| 70731c18-f118-41e6-8643-a2510d5cf9f1 | overcloud-full | qcow2       | 1215299584 | active |
+--------------------------------------+----------------+-------------+------------+--------+
Image "bm-deploy-kernel" is up-to-date, skipping.
Image "bm-deploy-ramdisk" is up-to-date, skipping.
Image file "/httpboot/agent.kernel" is up-to-date, skipping.
Image file "/httpboot/agent.ramdisk" is up-to-date, skipping.
Some images have been updated in Glance, make sure to rerun openstack overcloud node configure to reflect the changes on the nodes
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:3587