Description of problem:
When a new OverCloud node is deployed using OSP-d and the operator chooses to register it to a Red Hat Satellite (e.g. by passing rhel_reg_method: satellite), the newly deployed node should then be updated to the software revisions the operator has defined in the Satellite Content View to which he chooses to assign his node.

Version-Release number of selected component (if applicable):
openstack-tripleo-heat-templates-0.8.6-121.el7ost.noarch

Additional info:
Currently a newly deployed OverCloud node stays with the same software revisions that were on the overcloud image. This is likely not what an operator who chooses to register his nodes to a Satellite expects. Further, if an operator runs an update of his OverCloud after deployment (openstack overcloud update) to apply the software revisions he has specified in the Satellite and then scales out, the newly deployed node would have older software than the rest of the OpenStack deployment. These problems would be remedied if an OverCloud node, upon initial deployment, applied any errata the operator had specified should be applied in the Satellite.
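For reference, a minimal sketch of the kind of registration environment file being discussed (based on the rhel-registration environment file shipped in tripleo-heat-templates under extraconfig/pre_deploy/rhel-registration/; the URL, organization and activation key values below are placeholders, not values from this report):

  parameter_defaults:
    rhel_reg_method: "satellite"
    rhel_reg_sat_url: "https://satellite.example.com"   # placeholder Satellite URL
    rhel_reg_org: "example-org"                          # placeholder organization
    rhel_reg_activation_key: "overcloud-key"             # placeholder activation key
    rhel_reg_force: "true"

The activation key determines the Content View (and therefore the software revisions) the node is assigned to, which is why the expectation is that deployment should bring the node up to that Content View's patch level.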
Other alternatives would be to document a procedure for updating the overcloud deployment image, and also to document that this needs to be done every time the relevant content view is published in the Satellite. Taking it one step further, a feature could be added to Red Hat Satellite 6 to automatically produce an overcloud image every time the relevant content view is updated.
This bug did not make the OSP 8.0 release. It is being deferred to OSP 10.
Documentation for updating images was added. Please reopen if something else is required.
Not quite sure which documentation you are referring to, could you please post a link? Also, let me re-phrase the problematic scenarios for which I opened this BZ.

*) Fresh deployment
When an operator uses a Satellite to manage the patch level of his cloud infrastructure, he expects that a newly deployed cloud will have the patch level he defined in the Satellite. As it stands now, to achieve this he additionally needs to run "openstack overcloud update stack" manually after deploying his cloud, and then likely reboot the nodes as well to make the updates effective. In my opinion, the deployment process should do this out of the box, i.e. if a cloud operator chooses to use our Satellite for managing the patch level of the cloud, the cloud deployment mechanism should also ensure that the patch level defined in the Satellite is applied.

*) Scale out
This scenario is probably more serious. Once a cloud operator has applied any updates to his cloud (openstack overcloud update stack), after he scales out, the newly deployed node will have a different patch level from the rest of the cloud until the operator again executes the "openstack overcloud update stack" procedure. Until the newly deployed node has been brought up to the same patch level as the rest of the cloud, cloud user workloads could behave erratically due to the patch-level mismatch. Also, please keep in mind that the new node could be missing security errata while the scale-out process makes it operational. This could expose the cloud to security vulnerabilities before the patch level defined in the Satellite has been applied to the new node.
Gotcha, will try to address in OSP11 since 10 is already closed.
I am surprised this is not the case. Basically the idea is to deploy the image and make sure that the content is up to date with what is available in the channel, correct? This probably applies regardless of Satellite (especially in the second scenario), but Satellite is the most obvious driver. We will research further.
Indeed, that is also what I'd expect. Also, do note that in many cases a reboot will also be required; I guess this could somewhat complicate the orchestration process.
USER STORIES:
* As a cloud operator, I want changes to packages in Satellite to be applied at deployment/update time so that I have control over which packages are available on nodes.
So the workaround for this is quite simple I think; basically pass an environment file that looks like:

  parameter_defaults:
    UpdateIdentifier: updateme123

Whenever the value of UpdateIdentifier changes, we'll run the yum update, but it's not set on initial deployment. I guess the question is what interface we should provide for users. E.g. we could set this to a value when initially registering to the Satellite (e.g. via the environment files we already use for this in t-h-t), but that doesn't solve the scale-out problem because we need to update the value in that case. I'll look into ways we could do this automatically, but for now I'd suggest the workaround above.
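To illustrate the "value must change" point: since the yum update only runs when UpdateIdentifier differs from its previous value, the environment file has to carry a new, unique string on each run where an update is wanted (e.g. each scale-out). A sketch of one way to do that, assuming the same file is simply passed again to the deploy command; the timestamp-style value is hypothetical, any unique string works:

  parameter_defaults:
    # Bump this to a fresh value (e.g. a timestamp) before every
    # deploy/scale-out run that should trigger a yum update.
    UpdateIdentifier: update-2016-06-01-1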
Considering this procedure is for _new_ nodes only, isn't it overly complicated and also time-consuming to run this through yum_update? In my opinion, just adding a "yum update" to extraconfig/pre_deploy/rhel-registration/scripts/rhel-registration would be both simpler and faster, since at that point we don't need to bother with cluster restarts and whatnot. Also, please keep in mind that more often than not a reboot will also be required, since the update will likely pull in a new kernel, glibc, audit etc. I'm guessing this can also be solved from extraconfig/pre_deploy/rhel-registration/scripts/rhel-registration. Or is there some more elegant way of orchestrating this than just a reboot command from the script?
Alex added a new boolean parameter for RHEL Registration called 'UpdateOnRHELRegistration' that when enabled will trigger a yum update on the node after the registration process completes.
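A minimal sketch of enabling the new parameter, assuming it is set through the same environment file used for the rhel_reg_* registration parameters:

  parameter_defaults:
    # Boolean, defaults to false. When true, a yum update runs on the
    # node after the RHEL registration process completes; combine with
    # the existing rhel_reg_* parameters (method, Satellite URL,
    # activation key, etc.).
    UpdateOnRHELRegistration: true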
Hey Dan, could we document the UpdateOnRHELRegistration parameter (False by default) for RHEL Registration? When set to True, it will trigger a yum update on the node after the registration process completes. Thanks!
Verified on puddle 2018-05-01.6. A yum update was triggered upon registration with the Satellite.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2018:2086