Bug 1835112 - s390x/ppc64le: Failed to upgrade Cluster from 4.2.29 to 4.3.18: unable to sync: open /opt/openshift/operator/ocp-s390x: no such file or directory
Summary: s390x/ppc64le: Failed to upgrade Cluster from 4.2.29 to 4.3.18: unable to sync: open /opt/openshift/operator/ocp-s390x: no such file or directory
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Samples
Version: 4.3.z
Hardware: s390x
OS: Other
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.5.0
Assignee: Gabe Montero
QA Contact: Barry Donahue
URL:
Whiteboard: multi-arch
Depends On:
Blocks: OCP/Z_4.2 1835995
 
Reported: 2020-05-13 06:42 UTC by jschinta
Modified: 2021-04-05 17:47 UTC
CC: 14 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: Earlier versions (4.2.x) of the samples operator on s390x/ppc64le did not bootstrap as removed, since samples content had not yet been made available on those architectures.
Consequence: An upgrade to later versions would go degraded, because the later versions assumed the samples operator was already marked removed.
Fix: Newer versions of the samples operator now force samples to removed, if needed, during upgrade on s390x/ppc64le.
Result: Upgrade of the samples operator on s390x/ppc64le succeeds.
Clone Of:
Environment:
Last Closed: 2020-07-13 17:38:07 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-samples-operator pull 271 0 None closed Bug 1835112: avoid listing file system content for unsupported architures 2021-02-09 11:32:07 UTC
Github openshift cluster-samples-operator pull 277 0 None closed Bug 1835112: ensure s390/ppc64le platforms bootstrap as removed following upgrade 2021-02-09 11:32:06 UTC
Red Hat Product Errata RHBA-2020:2409 0 None None None 2020-07-13 17:38:26 UTC

Description jschinta 2020-05-13 06:42:57 UTC
Description of problem:
When upgrading the zVM cluster from 4.2.29 to 4.3.18, the upgrade fails with a message that the openshift cluster samples operator was not rolled out.
The container log shows the following error:


time="2020-05-13T06:27:05Z" level=info msg="watch event tests not part of operators inventory"
time="2020-05-13T06:28:33Z" level=info msg="Spec is valid because this operator has not processed a config yet"
time="2020-05-13T06:28:33Z" level=info msg="error reading in content : open /opt/openshift/operator/ocp-s390x: no such file or directory"
time="2020-05-13T06:28:33Z" level=info msg="CRDUPDATE file list err update"
time="2020-05-13T06:28:36Z" level=error msg="unable to sync: open /opt/openshift/operator/ocp-s390x: no such file or directory, requeuing"

When rsh'ing into the pod, only the directory /opt/openshift/operator/ocp-x86_64 is present.
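
A quick way to confirm the missing per-arch directory from outside the pod (a sketch; the pod name is a placeholder and must be looked up first):

# Find the samples operator pod (the name differs per cluster):
oc get pods -n openshift-cluster-samples-operator

# List the content directories shipped in the operator image
# (substitute the pod name from the previous command):
oc rsh -n openshift-cluster-samples-operator <samples-operator-pod> ls /opt/openshift/operator
# Here only ocp-x86_64 is present; ocp-s390x is missing.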

Version-Release number of selected component (if applicable):
4.3.18

How reproducible:


Steps to Reproduce:
1. Start an upgrade from 4.2.29 to 4.3.18

Actual results:
Upgrade fails

Expected results:
Expect the upgrade to work

Additional info:

Deployment-YAML cluster-samples-operator:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: cluster-samples-operator
  namespace: openshift-cluster-samples-operator
  selfLink: >-
    /apis/apps/v1/namespaces/openshift-cluster-samples-operator/deployments/cluster-samples-operator
  uid: c4a61f65-586c-11ea-b504-02462c000005
  resourceVersion: '46056999'
  generation: 6
  creationTimestamp: '2020-02-26T07:51:16Z'
  annotations:
    deployment.kubernetes.io/revision: '4'
spec:
  replicas: 1
  selector:
    matchLabels:
      name: cluster-samples-operator
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: cluster-samples-operator
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ''
      restartPolicy: Always
      serviceAccountName: cluster-samples-operator
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
      securityContext: {}
      containers:
        - resources:
            requests:
              cpu: 10m
          terminationMessagePath: /dev/termination-log
          name: cluster-samples-operator
          command:
            - cluster-samples-operator
          env:
            - name: WATCH_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: OPERATOR_NAME
              value: cluster-samples-operator
            - name: RELEASE_VERSION
              value: 4.3.18
            - name: IMAGE_JENKINS
              value: >-
                quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7688ecdcb88ff3b29abf0180da08cd26e42d285151cb399c0e3af160c1b2305e
            - name: IMAGE_AGENT_NODEJS
              value: >-
                quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:50fa2dbf44ac1ab0487b6c69a2eb7f3513325a99dda91799af306c35a9f39ac4
            - name: IMAGE_AGENT_MAVEN
              value: >-
                quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c052eb17a15cb35c1253d4fbd849959047b06b478e628e4601609c1b25bd178
          ports:
            - name: metrics
              containerPort: 60000
              protocol: TCP
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: samples-operator-tls
              mountPath: /etc/secrets
          terminationMessagePolicy: File
          image: >-
            quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b6920adfb6ca19d9105d292feac51867eba3ee46825c0a4d187024c4695e790
        - name: cluster-samples-operator-watch
          image: >-
            quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b6920adfb6ca19d9105d292feac51867eba3ee46825c0a4d187024c4695e790
          command:
            - cluster-samples-operator-watch
            - file-watcher-watchdog
          args:
            - '--namespace=openshift-cluster-samples-operator'
            - '--process-name=cluster-samples-operator'
            - '--termination-grace-period=30s'
            - '--files=/etc/secrets/tls.crt,/etc/secrets/tls.key'
          resources:
            requests:
              cpu: 10m
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      serviceAccount: cluster-samples-operator
      volumes:
        - name: samples-operator-tls
          secret:
            secretName: samples-operator-tls
            defaultMode: 420
      dnsPolicy: ClusterFirst
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
        - key: node.kubernetes.io/unreachable
          operator: Exists
          effect: NoExecute
          tolerationSeconds: 120
        - key: node.kubernetes.io/not-ready
          operator: Exists
          effect: NoExecute
          tolerationSeconds: 120
      priorityClassName: system-cluster-critical
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
status:
  observedGeneration: 6
  replicas: 1
  updatedReplicas: 1
  readyReplicas: 1
  availableReplicas: 1
  conditions:
    - type: Progressing
      status: 'True'
      lastUpdateTime: '2020-05-12T12:51:11Z'
      lastTransitionTime: '2020-02-26T07:51:16Z'
      reason: NewReplicaSetAvailable
      message: >-
        ReplicaSet "cluster-samples-operator-64fc49d87" has successfully
        progressed.
    - type: Available
      status: 'True'
      lastUpdateTime: '2020-05-12T14:05:30Z'
      lastTransitionTime: '2020-05-12T14:05:30Z'
      reason: MinimumReplicasAvailable
      message: Deployment has minimum availability.

Comment 1 Gabe Montero 2020-05-14 18:24:26 UTC
Yep, there is an upgrade-specific (vs. initial install) error path on s390x/ppc64le that I now see based on the data provided with the bug.
It stems from those platforms bootstrapping as removed while payload imagestreams like tests and must-gather still come into
the samples operator.

Now, a couple of notes:
1) A process reminder: per the OCP process, I would need to fix this in 4.5 first, then 4.4.z, then 4.3.z, so it will take a bit.
2) Jeremy Poulin and Renin Jose are in the process of validating the s390x image in early 4.5 payloads to see if we can finally include
some samples. It is not yet clear if/when that would move back to 4.4 and 4.3, but if it did, it would obviate the need for this fix.

I've cc:ed Jeremy on this bug and will send a needinfo to him for comment.

But in the interim, I'll start on 1)
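
For reference, the bootstrap state in question can be read directly off the samples config (a minimal check; assumes the config object from the 4.2.x install is present):

# Print the samples operator management state; on s390x/ppc64le it should
# read Removed once bootstrapping behaves correctly:
oc get configs.samples cluster -o jsonpath='{.spec.managementState}'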

Comment 2 W. Trevor King 2020-05-14 19:07:06 UTC
We're asking the following questions to evaluate whether or not this bug warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z. The ultimate goal is to avoid delivering an update which introduces new risk or reduces cluster functionality in any way. Sample answers are provided to give more context and the UpgradeBlocker flag has been added to this bug. It will be removed if the assessment indicates that this should not block upgrade edges.

Who is impacted?  If we have to block upgrade edges based on this issue, which edges would need blocking?
  example: 100% of customers upgrading from 4.2 to 4.3 running s390/ppc64le.  Is there also an impact from 4.2 -> 4.2, 4.3 -> 4.4, etc.?
What is the impact?  Is it serious enough to warrant blocking edges?
  example: Samples sticks on arch-specific bug, CVO sticks on samples, update hangs.
How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
  example: Clearing the attempted update resolves the issue.  There is no other remediation procedure.
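
For reference, clearing an attempted (not yet applied) update can be done as follows (a sketch; whether it unsticks the CVO depends on how far the update has progressed):

# Cancel the requested update target on the ClusterVersion object:
oc adm upgrade --clear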

Comment 3 Gabe Montero 2020-05-14 19:23:53 UTC
(In reply to W. Trevor King from comment #2)
> We're asking the following questions to evaluate whether or not this bug
> warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z. The
> ultimate goal is to avoid delivering an update which introduces new risk or
> reduces cluster functionality in any way. Sample answers are provided to
> give more context and the UpgradeBlocker flag has been added to this bug. It
> will be removed if the assessment indicates that this should not block
> upgrade edges.
> 
> Who is impacted?  If we have to block upgrade edges based on this issue,
> which edges would need blocking?
>   example: 100% of customers upgrading from 4.2 to 4.3 running s390/ppc64le.

I believe this is 100% of customers upgrading from 4.2 to 4.3 running s390/ppc64le.

> Is there also an impact from 4.2 -> 4.2, 4.3 -> 4.4, etc.?
> What is the impact?  Is it serious enough to warrant blocking edges?
>   example: Samples sticks on arch-specific bug, CVO sticks on samples,
> update hangs.

I would expect these results on 4.2 -> 4.2, 4.3 -> 4.4.

We are currently working with the multiarch team to get samples vetted 
on s390 and perhaps ppc64le for 4.5.  But most likely that is several
weeks out for ppc64le and maybe a few days to a week for s390x.  So 
I would not expect this for 4.4 -> 4.5.

Then, a discussion on backporting content to 4.4 or 4.3 could occur, though
it is not a given that would be agreed upon.

> How involved is remediation (even moderately serious impacts might be
> acceptable if they are easy to mitigate)?
>   example: Clearing the attempted update resolves the issue.  There is no
> other remediation procedure.

Running `oc delete configs.samples cluster` should reset the samples operator; when it comes back up, it will treat things
like an initial install and should bootstrap as removed, without misguided attempts to read non-existent content.

Comment 4 Gabe Montero 2020-05-14 19:36:28 UTC
Also, on my needinfo to Jeremy (though if anybody on cc: knows, please feel free to chime in): are there 4.3 -> 4.4 upgrade tests on s390x coming anytime soon?

Comment 5 Gabe Montero 2020-05-14 19:44:17 UTC
Lastly, to the originator and QA contact: can either of you reproduce the upgrade issue, then run `oc delete configs.samples cluster` and observe the result?

Ultimately, an `oc get clusteroperator openshift-samples -o yaml` should confirm the reset worked and that samples is available==true, progressing==false, degraded==false, with the version set,
like you would get on an initial install.
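
One way to check those conditions without scanning the full YAML (a sketch using jsonpath):

# Print each condition as type=status, one per line; expect
# Available=True, Progressing=False, Degraded=False:
oc get clusteroperator openshift-samples \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'

# Confirm the reported operator version is set:
oc get clusteroperator openshift-samples \
  -o jsonpath='{.status.versions[?(@.name=="operator")].version}'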

Comment 6 Barry Donahue 2020-05-14 20:19:48 UTC
This would only affect s390x; 4.2 was not released on ppc64le.

Comment 9 Gabe Montero 2020-05-14 23:32:03 UTC
Given

a) the 4.5 payloads are just starting out, and 
b) we are ultimately bringing in s390x samples, so the upgrade to 4.5 won't hit the error path,

I'm moving this to VERIFIED to accelerate the backport to 4.3.z, where the upgrade hiccup (removed state, no s390x samples) was observed.

Comment 10 jschinta 2020-05-15 13:16:26 UTC
(In reply to Gabe Montero from comment #5)
> Lastly, to the originator and QA contact, can either of you reproduce the
> upgrade issue, and 
> then run `oc delete configs.samples cluster` and observe the result.
> 
> Ultimately, an `oc get clusteroperator openshift-samples -o yaml` should
> confirm the reset worked and samples is available==true progressing==false
> degraded==false version set,
> like you would get on an initial install.

Unfortunately I needed the cluster and had to reinstall it with 4.3.18. Since I don't have enough machines for a second cluster, I can't reproduce the issue.

Comment 11 Gabe Montero 2020-05-18 21:34:45 UTC
I'm holding off on the doc update for now.  If we provide s390x samples in 4.5, this bug/change will be rendered inert.

Comment 13 Gabe Montero 2020-05-28 14:32:05 UTC
Per the last go-around, I am manually verifying this, since s390x 4.5 is not available for an upgrade test and there is no viable x86 approximation.

At least this time, we were able to vet these changes with the multi-arch team on a manually built 4.3 payload, using a 4.2 to 4.3 upgrade.
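
For anyone reproducing that kind of verification, the upgrade is typically driven against the custom payload with something like the following (a sketch; the payload pullspec is hypothetical):

# Point the cluster at a manually built release payload; --force skips
# signature verification, which unsigned custom payloads require:
oc adm upgrade --to-image=registry.example.com/ocp-release:4.3-custom-s390x --force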

Comment 14 Douglas Slavens 2020-07-07 17:24:03 UTC
*** Bug 1766287 has been marked as a duplicate of this bug. ***

Comment 15 errata-xmlrpc 2020-07-13 17:38:07 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

Comment 16 Cheryl A Fillekes 2020-07-13 20:48:07 UTC
The errata is only for:

Red Hat OpenShift Container Platform 4.5 for RHEL 8 x86_64
Red Hat OpenShift Container Platform 4.5 for RHEL 7 x86_64

but the bug is against OCP 4.2, 4.3 on Z. 

The catalog sources appear to install some correct images in 4.5 on Z, and some of them appear to be s390x images, but they don't seem to start the expected workloads and error out in strange ways; see https://bugzilla.redhat.com/show_bug.cgi?id=1766364

Comment 17 Cheryl A Fillekes 2020-07-13 20:56:30 UTC
Also, given that errata https://access.redhat.com/errata/RHBA-2020:2409 only applies to 4.5 on x86: is that only because 4.5 has not GA'd yet on Z? That is, should we be testing the upgrade paths from 4.4 to 4.5 looking for this fix, or should it appear in a nightly such as 4.5.0-0.nightly-s390x-2020-07-03-213659? And do we close the bug if the samples appear even if none of the samples we've tried on Z seem to work, or do we open a separate bug for each sample that does not work? Directing this as a needinfo to dgilmore, because it's a policies-and-procedures question as to what we should be doing with this class of bug in general. There seem to be a lot of them.

Comment 18 Dennis Gilmore 2020-11-09 18:24:41 UTC
removing needinfo

Comment 19 W. Trevor King 2021-04-05 17:47:01 UTC
Removing UpgradeBlocker from this older bug, to remove it from the suspect queue described in [1].  If you feel this bug still needs to be a suspect, please add the keyword again.

[1]: https://github.com/openshift/enhancements/pull/475

