Bug 1507031 - Duplicate Image Stream Tags After oc apply
Summary: Duplicate Image Stream Tags After oc apply
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: ImageStreams
Version: 3.7.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Ben Parees
QA Contact: Dongbo Yan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-10-27 13:05 UTC by Devan Goodwin
Modified: 2017-10-30 18:41 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-10-30 16:40:26 UTC
Target Upstream Version:
Embargoed:


Attachments
oc get output of image stream in question (8.28 KB, text/plain)
2017-10-27 13:05 UTC, Devan Goodwin
source config file we're applying (2.86 KB, text/plain)
2017-10-27 21:02 UTC, Devan Goodwin

Description Devan Goodwin 2017-10-27 13:05:18 UTC
Created attachment 1344303 [details]
oc get output of image stream in question

Description of problem:

While updating image streams on the OpenShift Online int clusters, we found that certain tags ended up with duplicate entries in the image stream status.


Version-Release number of selected component (if applicable):


atomic-openshift-3.7.0-0.176.0.git.0.11e0f84.el7.x86_64

How reproducible:

Unsure.

Steps to Reproduce:

The Ansible role we ran executed the following task:

- name: Update imagestreams
  shell: "oc apply -n openshift -f {{ osotis_imagestream_dir }}/"
  changed_when: false
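
For reference, the same thing can be reproduced without Ansible by running the underlying commands directly (a minimal sketch; the local imagestreams/ directory stands in for {{ osotis_imagestream_dir }}):

# Apply every imagestream definition in the directory to the openshift namespace
oc apply -n openshift -f imagestreams/

# Then inspect the tag history on one of the affected streams
oc get is jboss-webserver30-tomcat7-openshift -n openshift -o yaml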

The specific imagestream json in question:

https://github.com/openshift/online/blob/a0b08ae43e3afe953cd080f926bbb46997574dbc/ansible/roles/oso_template_imagestream_sync/files/starter/imagestreams/jboss-webserver30-tomcat8-openshift-rhel7.json



Actual results:

The full output of "oc get is jboss-webserver30-tomcat7-openshift -o yaml" is attached; the relevant portion is below.

status:
  dockerImageRepository: 172.30.215.46:5000/openshift/jboss-webserver30-tomcat7-openshift
  tags:
  - items:
    - created: 2017-10-23T19:22:03Z
      dockerImageReference: registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift@sha256:51f16a7fb2c0b9fb54655e6e6135dff57b1d05e6567d0a376fff170e49904928
      generation: 8
      image: sha256:51f16a7fb2c0b9fb54655e6e6135dff57b1d05e6567d0a376fff170e49904928
    - created: 2017-09-24T12:26:51Z
      dockerImageReference: registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift@sha256:ade48868bdc01fc4e927a097fdcc39a58c01a11098db0ec1f8a91c6fc05859ec
      generation: 6
      image: sha256:ade48868bdc01fc4e927a097fdcc39a58c01a11098db0ec1f8a91c6fc05859ec
    tag: latest
  - items:
    - created: 2017-10-26T22:04:24Z
      dockerImageReference: registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift@sha256:f23030e400e37ef8ba200750935f8b7a561588ced6cce74b0f03f3ee2b39f741
      generation: 10
      image: sha256:f23030e400e37ef8ba200750935f8b7a561588ced6cce74b0f03f3ee2b39f741
    - created: 2017-10-23T19:22:03Z
      dockerImageReference: registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift@sha256:4bbac402ed7392c8446dbd754f48b563fc7c34e49d2cd8186094ff89220605e0
      generation: 8
      image: sha256:4bbac402ed7392c8446dbd754f48b563fc7c34e49d2cd8186094ff89220605e0
    - created: 2017-09-24T12:26:51Z
      dockerImageReference: registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift@sha256:6ee8de68a744d820e249f784b5ba41d059f91a5554fce47c7e0b998aa88c97cb
      generation: 2
      image: sha256:6ee8de68a744d820e249f784b5ba41d059f91a5554fce47c7e0b998aa88c97cb
    tag: "1.3"


Expected results:

Expected just one entry per tag, unless we're misunderstanding how this should behave.

It's currently unclear how badly this breaks the images, if at all; we will post updates when we have them.


Additional info:

More debug info is available in the openshift namespace on the "free-int" cluster for the time being.

via Clayton: "Apply should work on anything. It's probably that we're missing the right strategic merge patch labels.  There has to be a merge patchStrategy and a patchKey of name on the spec tags field."
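
(For context: without a merge patchStrategy and a patchMergeKey of "name" on spec.tags, a strategic merge patch falls back to replacing the whole list. A hypothetical way to observe that, using an illustrative 1.4 tag:)

oc patch is jboss-webserver30-tomcat7-openshift -n openshift --type=strategic \
  -p '{"spec":{"tags":[{"name":"1.4","from":{"kind":"DockerImage","name":"registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:1.4"}}]}}'
# With list-replace semantics, spec.tags would now contain only the 1.4 entry.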

Comment 2 Devan Goodwin 2017-10-27 21:02:48 UTC
Created attachment 1344523 [details]
source config file we're applying

Comment 3 Ben Parees 2017-10-27 21:08:04 UTC
I think this is working as expected. You modified the imagestreamtag, so it got reimported.  That's why you see a second set of tag events in the status.  The previous tag events are retained by design in case you want to revert to them.

I do think there may be a different bug here in how we handle spec tag names: if you deleted the 1.3 tag from the spec list and added a 1.4 tag in the imagestream file you are applying, what you're going to see is that the 1.3 tag gets deleted and the 1.4 tag is added (replace behavior). What you probably should see is both the 1.3 and 1.4 tags showing up (merge).
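
To illustrate the replace behavior with apply (the file names and the 1.4 tag are hypothetical):

# First apply: the file's spec.tags lists "latest" and "1.3"
oc apply -n openshift -f jboss-webserver-with-1.3.json
# Second apply: the same file edited so spec.tags lists "latest" and "1.4"
oc apply -n openshift -f jboss-webserver-with-1.4.json
# Observed (replace): 1.3 is gone from the spec
# Arguably expected (merge): both 1.3 and 1.4 remain
oc get is jboss-webserver30-tomcat7-openshift -n openshift -o jsonpath='{.spec.tags[*].name}'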

But I need to talk to Clayton to confirm what we would want/expect to happen there.

Comment 4 Clayton Coleman 2017-10-27 21:22:51 UTC
Yeah status tags shouldn't be cleared by apply.  Only prune should do that.

Comment 5 Ben Parees 2017-10-27 21:27:36 UTC
Clayton, I was referring to spec tags. (I haven't looked at what happens to status tags; I don't think they're being cleared, but that's another investigation.)

Comment 7 Ben Parees 2017-10-30 16:40:26 UTC
I have a PR [1] to change our behavior regarding spec+status tag merging, but what you're seeing (multiple status tag events for a given tag) is working as expected.

[1] https://github.com/openshift/origin/pull/17091

Comment 8 Devan Goodwin 2017-10-30 17:35:45 UTC
Ben, will this list grow unbounded, or is it just the last 1-5 tags?

Comment 9 Ben Parees 2017-10-30 17:53:52 UTC
The tag event history grows unbounded unless you prune (oc adm prune images with some revision-history value).
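
For example (this keeps the three most recent revisions per tag; run it without --confirm first to see what would be removed):

oc adm prune images --keep-tag-revisions=3 --confirm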

It's no different from what happens today if you run oc import-image and keep importing new image IDs.

The only reason it looks new to you is because before you were deleting the imagestream and thus all of the history along with it.

Comment 10 Devan Goodwin 2017-10-30 18:19:42 UTC
I'm wondering now what we should be doing with the Ansible that updates the image streams.

oc adm prune images looks like it hits all images in the cluster, whereas I wouldn't want to touch anything outside of the openshift namespace. --keep-tag-revisions=3 would be nice if we could limit it to a single namespace.

If not, it seems like we should continue fully deleting and re-creating them.
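
(The delete-and-recreate approach, for reference; the imagestreams/ directory is illustrative. Deleting a stream also drops its accumulated tag history:)

oc delete -n openshift -f imagestreams/
oc create -n openshift -f imagestreams/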

Comment 11 Ben Parees 2017-10-30 18:41:03 UTC
Having the extra history events around is not a big deal, especially for our limited set of images. And users should be running pruning on their own as a rule anyway.

