Bug 1411227 - Prevent the RHGS docker image in a production setup from getting replaced with any new image with "latest" TAG pushed to the public Registry.
Summary: Prevent the RHGS docker image in a production setup from getting replaced with any new image with "latest" TAG pushed to the public Registry.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: CNS-deployment
Version: cns-3.4
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: CNS 3.4
Assignee: Mohamed Ashiq
QA Contact: Prasanth
URL:
Whiteboard:
Depends On:
Blocks: 1385247
 
Reported: 2017-01-09 08:10 UTC by Prasanth
Modified: 2017-01-18 21:58 UTC
CC: 15 users

Fixed In Version: cns-deploy-3.1.0-14
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1400965
Environment:
Last Closed: 2017-01-18 21:58:31 UTC
Target Upstream Version:




Links:
Red Hat Product Errata RHEA-2017:0148 (normal, SHIPPED_LIVE): heketi bug fix and enhancement update, last updated 2017-01-19 02:53:24 UTC

Description Prasanth 2017-01-09 08:10:26 UTC
Description of problem:

We should prevent the RHGS/Volmanager docker image in a production setup from getting replaced by any new image tagged "latest" that is pushed at a later stage to the public registry, registry.access.redhat.com.

If a new image tagged "latest" is pushed to the registry, there is a high chance that the RHGS pods running in ANY existing customer deployment will try to pull the "latest" image on a node reboot, a docker restart, etc. Since the "latest" image will be pulled automatically even though there is no specific requirement at that moment (see the illustrative snippet after this list), there is a high chance that this might even break the production setup if:
* the gluster pods come back online with the "latest" image, which may contain major code changes/bug fixes, RHEL base image updates or anything else that could break the existing setup,
* the gluster pods fail to come back online for any reason, or
* the gluster pods come back online with the "latest" image but end up having different "IMAGE ID"s on different nodes, as only the rebooted nodes will have the "latest" image, and the end result is UNKNOWN.


Ideally, the image upgrade should have happened ONLY when the user was actually doing a proper major/minor version upgrade by following the official documented steps. The point to note is that here it happened outside of a normal maintenance window.

So I believe this should somehow be restricted from happening to avoid any post-deployment issues, either by hard-coding the specific "TAG" in the respective template files or by some other, better means.


Additional info:

--- Additional comment from Humble Chirammal on 2016-12-02 07:57:40 EST ---

Need to revisit the template image policy for our templates. Let's discuss and reach a consensus on this.

--- Additional comment from Humble Chirammal on 2016-12-02 08:02:40 EST ---

(In reply to Humble Chirammal from comment #1)
> Need to revisit the template image policy for our templates. Let's discuss
> and reach a consensus on this.

Just to be clear, `Tag + image policy` has to be adjusted based on our plan to support upgrades and such.

--- Additional comment from Mohamed Ashiq on 2016-12-02 08:23:15 EST ---

(In reply to Humble Chirammal from comment #2)
> (In reply to Humble Chirammal from comment #1)
> > Need to revisit the template image policy for our templates. Let's discuss
> > and reach a consensus on this.
> 
> Just to be clear, `Tag + image policy` has to be adjusted based on our plan
> to support upgrades and such.

Yes. Tags are really important to maintain in the templates, based on the releases of the container images and of CNS itself.

--- Additional comment from Michael Adam on 2016-12-12 12:36:37 EST ---

The decision was made not to put the tags into the templates in the heketi RPMs, because this creates a circular dependency.

Instead, keep the un-tagged images in the RPM and document how to pin them to specific tags manually (see the sketch below).
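A minimal sketch of what that manual pinning could look like, assuming the templates shipped under /usr/share/heketi/templates (as shown in comment 3 below) and using 3.1.3-27 as an example tag; the command and tag are hypothetical and should be adapted to the release actually being deployed:

    # Pin the volume-manager image in the heketi template to a specific tag
    # (replaces whatever follows the image name, including ":latest" or no tag at all).
    sed -i 's|rhgs3/rhgs-volmanager-rhel7.*|rhgs3/rhgs-volmanager-rhel7:3.1.3-27|' \
        /usr/share/heketi/templates/heketi-template.yaml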

--- Additional comment from Humble Chirammal on 2016-12-13 07:07:40 EST ---

The container images (volume manager and glusterfs) are tagged (for example: 3-1-3-19) from now on. If an admin wants to deploy a particular version of the heketi and glusterfs templates, they need to edit the template files as shown below:


from

 "image": "<IMAGE NAME>:latest",

to

 "image": "<IMAGE NAME>:3-1-3-19",

where `latest` and `3-1-3-19` are tags of the container image.

--- Additional comment from Mohamed Ashiq on 2016-12-13 07:13:03 EST ---

(In reply to Humble Chirammal from comment #5)
> The container images (volume manager and glusterfs) are tagged (for example:
> 3-1-3-19) from now on. If an admin wants to deploy a particular version of the
> heketi and glusterfs templates, they need to edit the template files as shown
> below:
> 
> 
> from
> 
>  "image": "<IMAGE NAME>:latest",
> 
> to
> 
>  "image": "<IMAGE NAME>:3-1-3-19",
> 
> where `latest` and `3-1-3-19` are tags of the container image.

from

   "image": "<IMAGE_NAME>",

to

   "image": "<IMAGE_NAME>:3.1.3-19"

The image will be without any tag, which internally means "latest"; just add the tag as specified above.
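For illustration only, the relevant part of a template after pinning could look like the snippet below (the image name, tag and pull policy match the verification output shown in comment 3 below; the container name is hypothetical and the exact tag depends on the release):

    containers:
      - name: heketi
        # Pinned tag instead of "latest", so a reboot or docker restart never silently upgrades the image.
        image: rhgs3/rhgs-volmanager-rhel7:3.1.3-27
        # With a non-"latest" tag, IfNotPresent skips the pull when the image already exists on the node.
        imagePullPolicy: IfNotPresent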

Comment 3 Prasanth 2017-01-10 14:50:08 UTC
################
# pwd
/usr/share/heketi/templates

# ll
total 24
-rw-r--r--. 1 root root 3984 Jan  9 22:39 deploy-heketi-template.yaml
-rw-r--r--. 1 root root 3200 Jan  9 22:39 glusterfs-template.yaml
-rw-r--r--. 1 root root   77 Dec 20 23:05 heketi-service-account.yaml
-rw-r--r--. 1 root root 3888 Jan  9 22:39 heketi-template.yaml
-rw-r--r--. 1 root root  247 Jan  9 22:39 sample-gluster-endpoints.yaml
-rw-r--r--. 1 root root   93 Jan  9 22:39 sample-gluster-service.yaml


# grep -i image *
deploy-heketi-template.yaml:          image: rhgs3/rhgs-volmanager-rhel7:3.1.3-27
glusterfs-template.yaml:        - image: rhgs3/rhgs-server-rhel7:3.1.3-17
glusterfs-template.yaml:          imagePullPolicy: IfNotPresent
heketi-template.yaml:          image: rhgs3/rhgs-volmanager-rhel7:3.1.3-27
heketi-template.yaml:          imagePullPolicy: IfNotPresent


# oc get dc
NAME                     REVISION   DESIRED   CURRENT   TRIGGERED BY
heketi                   1          1         1         config
storage-project-router   1          1         1         config


#  oc export dc heketi -o yaml |grep -i recreate
    recreateParams:
    type: Recreate


#  oc export dc heketi -o yaml |grep -i image
        image: rhgs3/rhgs-volmanager-rhel7:3.1.3-27
        imagePullPolicy: IfNotPresent
################

Verified as fixed in cns-deploy-3.1.0-14

Comment 4 errata-xmlrpc 2017-01-18 21:58:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2017-0148.html

