Description of problem:

When we run:

    curl -L https://registry.access.redhat.com/v2/rhosp13/openstack-aodh-notifier/tags/list

it returns:

    {"name": "rhosp13/openstack-aodh-notifier", "tags": ["13.0-85", "13.0-86", "13.0-83", "13.0-100", "13.0-109", "13.0", "13.0-56", "13.0-55", "13.0-76", "13.0-72", "13.0-39", "13.0-100.1582098610", "13.0-96", "13.0-100.1580118203", "13.0-62.1543534128", "13.0-68", "13.0-72.1557945115", "13.0-46", "13.0-62", "13.0-60", "13.0-61", "13.0-66", "13.0-64", "13.0-68.1554788878", "latest"]}

If we want a specific RHOSP z-stream release so we can pull the images for that release, which tag should we use as the identifier for that z-release? For example, if we do not always want "latest" but instead want one of the previous z-releases (e.g. z9, z10), which tag should we use? There appears to be no clear mapping between the tags above and a given z-release. Is there another mechanism to facilitate this mapping? Can we create a tag for each z-release? If not, can we document a matrix with the mapping?
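One way to see what a given tag actually contains (a workaround sketch, not an official mapping) is to inspect the image labels, assuming skopeo is available on the host:

    skopeo inspect docker://registry.access.redhat.com/rhosp13/openstack-aodh-notifier:13.0-100

The "version", "release" and "build-date" labels in the output identify the exact build, but they still do not say which z-release that build shipped with, which is the gap described above.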
The problem here is that some customers always specify the "latest" tag for containers. Because the Satellite server keeps synchronizing with the CDN every day or week, and we cannot freeze containers the way we can freeze RPM packages with a content view, the "latest" tag will point to a newer version by the time we deploy in production. I know we have z11 code now and that is what we are testing, but when we deploy in another region, "latest" will be something else, maybe z12 or higher.
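As an illustration of how "latest" can be avoided today (a sketch only, using the OSP13 prepare workflow and the 13.0-100 tag from the listing above purely as an example), a fixed tag can be pinned when generating the image environment file:

    openstack overcloud container image prepare \
      --namespace registry.access.redhat.com/rhosp13 \
      --prefix openstack- \
      --tag 13.0-100 \
      --output-env-file ~/templates/overcloud_images.yaml

This keeps every region on the same build regardless of where "latest" points when Satellite next syncs, but it still requires knowing which 13.0-XX build belongs to which z-release, which is the original question here.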
Maybe a proper workflow here would be to freeze the RPMs along with the specific {version}-{release} ... rather than having other processes keep track of which versions are within which z-release.
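If the goal is exactly that, the prepare command can (if I recall the OSP13 option correctly, so treat this as an assumption) resolve a floating tag into the concrete {version}-{release} at prepare time, so that the generated environment file records fixed per-image tags:

    openstack overcloud container image prepare \
      --namespace registry.access.redhat.com/rhosp13 \
      --prefix openstack- \
      --tag-from-label '{version}-{release}' \
      --output-env-file ~/templates/overcloud_images.yaml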
yeah we should probably tag with zX imho.
(In reply to David Hill from comment #3)
> Maybe a proper workflow here would be to freeze the RPMs along with the specific
> {version}-{release} ... rather than having other processes keep track of
> which versions are within which z-release.

Slightly off-topic:
Regarding overcloud containers, even if an operator has no understanding/knowledge of z-releases, they can "take a snapshot of the containers" deployed in a certain env at any given point in time. That can be done by running `openstack object save overcloud environments/containers-default-parameters.yaml --file overcloud_images.yaml` with undercloud credentials, and then using the output file in future deployments (in a `pip freeze` fashion).

Is any of you aware of a similar workaround for the undercloud container images?

The use case is operators managing multiple environments that have the hard requirement to be 1:1 (down to the container 16.X-Y tag) with the original one even if deployed later in time. IIUC, in those cases using "tag: 16.1.2" for `container image prepare` would not be enough, as the 16.1.X tag might later point to containers included with async releases, etc. and diverge from the original env.
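For clarity, the snapshot workflow described above looks roughly like this (a sketch; the Swift container and object names are the defaults mentioned in the comment, the other file names are examples):

    source ~/stackrc
    openstack object save overcloud environments/containers-default-parameters.yaml --file overcloud_images.yaml
    # later, reuse the frozen image list alongside the other -e files used in the original deploy
    openstack overcloud deploy --templates -e overcloud_images.yaml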
(In reply to Michele Valsecchi from comment #12)
> (In reply to David Hill from comment #3)
> > Maybe a proper workflow here would be to freeze the RPMs along with the specific
> > {version}-{release} ... rather than having other processes keep track of
> > which versions are within which z-release.
>
> Slightly off-topic:
> Regarding overcloud containers, even if an operator has no understanding/knowledge
> of z-releases, they can "take a snapshot of the containers" deployed in a certain
> env at any given point in time. That can be done by running `openstack object save
> overcloud environments/containers-default-parameters.yaml --file overcloud_images.yaml`
> with undercloud credentials, and then using the output file in future deployments
> (in a `pip freeze` fashion).
>
> Is any of you aware of a similar workaround for the undercloud container images?
>
> The use case is operators managing multiple environments that have the hard
> requirement to be 1:1 (down to the container 16.X-Y tag) with the original one
> even if deployed later in time. IIUC, in those cases using "tag: 16.1.2" for
> `container image prepare` would not be enough, as the 16.1.X tag might later point
> to containers included with async releases, etc. and diverge from the original env.

Our customer wants to do this. We can add the overcloud_images.yaml to the deployment, but the images won't be pulled down to the director if they are not in the container_images_prepare.yaml, right? How can we do this and ensure the images are pulled to the director before deployment?
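Not an authoritative answer, but a sketch of the OSP13-era local-registry workflow that should make the director pull and serve the pinned images before the overcloud deploy; the file names and push destination below are examples, and on OSP16 the ContainerImagePrepare parameter handles this step instead:

    openstack overcloud container image prepare \
      --namespace registry.access.redhat.com/rhosp13 \
      --prefix openstack- \
      --tag 13.0-100 \
      --push-destination 192.168.24.1:8787 \
      --output-images-file ~/local_registry_images.yaml \
      --output-env-file ~/templates/overcloud_images.yaml
    # pull the listed images and push them into the undercloud registry before deploying
    openstack overcloud container image upload --config-file ~/local_registry_images.yaml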
OSP13 support officially ended on 27 June 2023