Bug 2082210 - Upgrade from ODF4.10.0 to ODF4.10.1 is failing on IBM Power platform
Summary: Upgrade from ODF4.10.0 to ODF4.10.1 is failing on IBM Power platform
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: Multi-Cloud Object Gateway
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Nimrod Becker
QA Contact: Ben Eli
URL:
Whiteboard:
Depends On:
Blocks: 2083015
 
Reported: 2022-05-05 15:15 UTC by Aaruni Aggarwal
Modified: 2023-08-09 16:49 UTC
CC: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 2083015 (view as bug list)
Environment:
Last Closed: 2022-05-10 06:01:01 UTC
Embargoed:


Attachments
Log file for ODF4.10.0 to ODF4.10.1 upgrade (339.95 KB, text/plain)
2022-05-05 15:15 UTC, Aaruni Aggarwal

Description Aaruni Aggarwal 2022-05-05 15:15:41 UTC
Created attachment 1877353 [details]
Log file for ODF4.10.0 to ODF4.10.1 upgrade

Description of problem (please be as detailed as possible and provide log
snippets):

While upgrading from ODF4.10.0 to ODF4.10.1, one of the noobaa pods is not getting an upgraded image. 

Version of all relevant components (if applicable): 
ODF operator 4.10.1


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?

Yes

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. 
2.
3.


Actual results:


Expected results:


Additional info:

Comment 2 Aaruni Aggarwal 2022-05-05 15:20:02 UTC
CSVs and pods after upgrading:

[root@rdr-upg410-1-syd04-bastion-0 scripts]# oc get csv -n openshift-storage
NAME                              DISPLAY                       VERSION   REPLACES                          PHASE
mcg-operator.v4.10.1              NooBaa Operator               4.10.1    mcg-operator.v4.10.0              Succeeded
ocs-operator.v4.10.1              OpenShift Container Storage   4.10.1    ocs-operator.v4.10.0              Succeeded
odf-csi-addons-operator.v4.10.1   CSI Addons                    4.10.1    odf-csi-addons-operator.v4.10.0   Succeeded
odf-operator.v4.10.1              OpenShift Data Foundation     4.10.1    odf-operator.v4.10.0              Succeeded

[root@rdr-upg410-1-syd04-bastion-0 scripts]# oc get pods -n openshift-storage
NAME                                                              READY   STATUS      RESTARTS   AGE
csi-addons-controller-manager-778b5b7b4f-hthz9                    2/2     Running     0          54m
csi-cephfsplugin-cbm6q                                            3/3     Running     0          53m
csi-cephfsplugin-j5kv6                                            3/3     Running     0          52m
csi-cephfsplugin-provisioner-55f845d4bd-lrtlh                     6/6     Running     0          53m
csi-cephfsplugin-provisioner-55f845d4bd-x2m89                     6/6     Running     0          53m
csi-cephfsplugin-rqdpj                                            3/3     Running     0          53m
csi-rbdplugin-h65lv                                               4/4     Running     0          52m
csi-rbdplugin-mtdxj                                               4/4     Running     0          52m
csi-rbdplugin-provisioner-7c7dff448f-qx84p                        7/7     Running     0          53m
csi-rbdplugin-provisioner-7c7dff448f-wxrrx                        7/7     Running     0          53m
csi-rbdplugin-w6knr                                               4/4     Running     0          53m
noobaa-core-0                                                     1/1     Running     0          51m
noobaa-db-pg-0                                                    1/1     Running     0          52m
noobaa-default-backing-store-noobaa-pod-80b69322                  1/1     Running     0          80m
noobaa-endpoint-75976ddddb-t5t55                                  1/1     Running     0          52m
noobaa-operator-8549dbc96-2mkv4                                   1/1     Running     0          54m
ocs-metrics-exporter-7f6ccd544-8rlm8                              1/1     Running     0          54m
ocs-operator-849d49d7b8-4z5fh                                     1/1     Running     0          54m
odf-console-645974fd9b-vrd7z                                      1/1     Running     0          54m
odf-operator-controller-manager-68cfcc75ff-8rhqz                  2/2     Running     0          54m
rook-ceph-crashcollector-syd04-worker-0.rdr-upg410-1.ibm.cfskmw   1/1     Running     0          53m
rook-ceph-crashcollector-syd04-worker-1.rdr-upg410-1.ibm.cvmz4c   1/1     Running     0          53m
rook-ceph-crashcollector-syd04-worker-2.rdr-upg410-1.ibm.c4n75m   1/1     Running     0          53m
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-7f498f658f6h4   2/2     Running     0          82m
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-55cbb96bkcqmq   2/2     Running     0          82m
rook-ceph-mgr-a-5d5b5ff694-q9qv9                                  2/2     Running     0          83m
rook-ceph-mon-a-7f46bbb685-gvw2l                                  2/2     Running     0          85m
rook-ceph-mon-b-5f6575bdd9-dx7tl                                  2/2     Running     0          84m
rook-ceph-mon-c-79568b6fdb-8tx95                                  2/2     Running     0          84m
rook-ceph-operator-dcb45f975-5mvmv                                1/1     Running     0          54m
rook-ceph-osd-0-86777d8987-clb55                                  2/2     Running     0          52m
rook-ceph-osd-1-6584fc6cf4-vlrwh                                  2/2     Running     0          51m
rook-ceph-osd-2-7f44f58495-mnd4s                                  2/2     Running     0          51m
rook-ceph-osd-prepare-bab58b58e1a94f7493f3bf110614b14b-wpqjl      0/1     Completed   0          83m
rook-ceph-osd-prepare-c3c2debd40986393ce3ee41142eb104e-7sw8r      0/1     Completed   0          83m
rook-ceph-osd-prepare-e5de13b714d8ea5a5caaef419130e686-qlm6w      0/1     Completed   0          83m
rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-74bd665jmfqk   2/2     Running     0          82m
rook-ceph-tools-6b9786c9f5-cg4dc                                  1/1     Running     0          53m

We have 5 noobaa pods in our cluster:

[root@rdr-upg410-1-syd04-bastion-0 scripts]# oc get pods -n openshift-storage |grep noobaa
noobaa-core-0                                                     1/1     Running     0          58m
noobaa-db-pg-0                                                    1/1     Running     0          58m
noobaa-default-backing-store-noobaa-pod-80b69322                  1/1     Running     0          86m
noobaa-endpoint-75976ddddb-t5t55                                  1/1     Running     0          59m
noobaa-operator-8549dbc96-2mkv4                                   1/1     Running     0          60m


The upgrade failed with the following error:

```
        if timeout < (time.time() - start_time):
>           raise TimeoutException(msg)
E           ocs_ci.ocs.exceptions.TimeoutException: Timeout 60 reached! Error: Waiting for 5 pods with selector: app=noobaa to be running and upgraded.
```

noobaa-default-backing-store-noobaa-pod-* isn't getting upgraded, as the log file shows:

08:41:40 - MainThread - ocs_ci.ocs.resources.pod - WARNING - Images: {'quay.io/rhceph-dev/odf4-mcg-core-rhel8@sha256:4ff2d65ea16dd1026fe278a0f8ca920f300dfcee205b4b8ede0ab28be1aa43a6'} weren't upgraded in: noobaa-default-backing-store-noobaa-pod-80b69322!
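
For context, the check that timed out is ocs-ci polling the pods behind the app=noobaa selector until all of them are Running on the upgraded image. Below is a minimal sketch of that kind of wait loop, using the kubernetes Python client rather than the actual ocs_ci code; the namespace, selector, expected digest, and pod count are the ones appearing in this report:

```
import time
from kubernetes import client, config

def wait_for_upgraded_pods(namespace, selector, expected_digest, count, timeout=60):
    """Poll until `count` pods matching `selector` are Running with the expected image digest."""
    config.load_kube_config()
    v1 = client.CoreV1Api()
    start_time = time.time()
    while True:
        pods = v1.list_namespaced_pod(namespace, label_selector=selector).items
        upgraded = [
            p for p in pods
            if p.status.phase == "Running"
            and p.status.container_statuses
            and all(expected_digest in cs.image for cs in p.status.container_statuses)
        ]
        if len(upgraded) >= count:
            return upgraded
        if timeout < (time.time() - start_time):
            raise TimeoutError(
                f"Timeout {timeout} reached! Waiting for {count} pods with "
                f"selector: {selector} to be running and upgraded."
            )
        time.sleep(10)

# Values from this report:
# wait_for_upgraded_pods(
#     "openshift-storage", "app=noobaa",
#     "sha256:40da9aaf63ead51b72bc2a25e36f2a178fbbc404653b7c90dc1f36e1c191316b", 5)
```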

Comment 3 Nitin Goyal 2022-05-05 15:34:37 UTC
Moving it to NooBaa.

Comment 4 Aaruni Aggarwal 2022-05-05 16:13:12 UTC
Must-gather logs: https://drive.google.com/file/d/1_uztOAnmZ7tOdCjfit0itdKsAKQg9ANj/view?usp=sharing

Comment 5 Liran Mauda 2022-05-09 12:59:54 UTC
Looking at the must-gather, I can't see the issue.
All the pods (core-based), and specifically noobaa-default-backing-store-noobaa-pod-80b69322, are at the latest 4.10 version.

In the logs you can see the hash of the upstream code: `Version is: 5.10.0-e90ffa7`

Why do we think it was not upgraded?

Comment 6 Aaruni Aggarwal 2022-05-09 13:14:29 UTC
After upgrading ODF from 4.10.0 to 4.10.1, all pods should restart. But looking at the age of the noobaa-default-backing-store-noobaa-pod-80b69322 pod, which is 80m, it doesn't seem this pod got restarted, while the pods that did restart after the upgrade have an age of around 52m.
Also, the log file shows the following error:

08:41:40 - MainThread - ocs_ci.ocs.resources.pod - WARNING - Images: {'quay.io/rhceph-dev/odf4-mcg-core-rhel8@sha256:4ff2d65ea16dd1026fe278a0f8ca920f300dfcee205b4b8ede0ab28be1aa43a6'} weren't upgraded in: noobaa-default-backing-store-noobaa-pod-80b69322!

I have attached the upgrade log file with the name: "Log file for ODF4.10.0 to ODF4.10.1 upgrade".
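
For reference, which pods were actually recreated during the upgrade window can also be read from their creation timestamps (the basis of the AGE column) together with their images; a small sketch using the kubernetes Python client, with the same namespace and selector as above:

```
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# List every noobaa pod with its creation time (what the AGE column is derived from) and images.
for pod in v1.list_namespaced_pod("openshift-storage", label_selector="app=noobaa").items:
    images = [cs.image for cs in (pod.status.container_statuses or [])]
    print(pod.metadata.name, pod.metadata.creation_timestamp, images)
```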

Comment 7 Liran Mauda 2022-05-09 13:27:49 UTC
The age does not tell me anything; the logs do.

The code is at the latest version.

What is the image of the pod?

As far as I know, the latest image should be:
odf4-mcg-core-rhel8@sha256:40da9aaf63ead51b72bc2a25e36f2a178fbbc404653b7c90dc1f36e1c191316b

I am trying to figure out whether the image didn't change (and again, the code is at the latest version, as you can see from the hash written in the log in the must-gather).
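
For reference, the image and imageID that the kubelet is actually running can be read straight off the pod status and compared with the digest above; a minimal sketch using the kubernetes Python client, with the pod name from this bug:

```
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = v1.read_namespaced_pod(
    name="noobaa-default-backing-store-noobaa-pod-80b69322",
    namespace="openshift-storage",
)

# Image requested in the spec vs. image/imageID reported for the running container.
print("spec image:    ", pod.spec.containers[0].image)
print("status image:  ", pod.status.container_statuses[0].image)
print("status imageID:", pod.status.container_statuses[0].image_id)
```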

Comment 8 Aaruni Aggarwal 2022-05-09 14:09:38 UTC
Before collecting the must-gather, when I described noobaa-default-backing-store-noobaa-pod-*, it was as follows:

[root@rdr-upg410-1-syd04-bastion-0 scripts]# oc describe pod noobaa-default-backing-store-noobaa-pod-80b69322 -n openshift-storage
Name:         noobaa-default-backing-store-noobaa-pod-80b69322
Namespace:    openshift-storage
Priority:     0
Node:         syd04-worker-0.rdr-upg410-1.ibm.com/192.168.25.162
Start Time:   Thu, 05 May 2022 08:01:40 -0400
Labels:       app=noobaa
              pool=noobaa-default-backing-store
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "openshift-sdn",
                    "interface": "eth0",
                    "ips": [
                        "10.128.2.49"
                    ],
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "openshift-sdn",
                    "interface": "eth0",
                    "ips": [
                        "10.128.2.49"
                    ],
                    "default": true,
                    "dns": {}
                }]
              openshift.io/scc: restricted
Status:       Running
IP:           10.128.2.49
IPs:
  IP:           10.128.2.49
Controlled By:  BackingStore/noobaa-default-backing-store
Containers:
  noobaa-agent:
    Container ID:  cri-o://56a6734e9207df06595ab2afe88f4c4a0cb9f112a82af8b6a52053ea3137d752
    Image:         quay.io/rhceph-dev/odf4-mcg-core-rhel8@sha256:4ff2d65ea16dd1026fe278a0f8ca920f300dfcee205b4b8ede0ab28be1aa43a6
    Image ID:      quay.io/rhceph-dev/odf4-mcg-core-rhel8@sha256:4ed1db89bdd6150f28721a3c18bcfa7e221b1b621c12ae8830fd35923dea08ca
    Port:          60101/TCP
    Host Port:     0/TCP
    Command:
      /noobaa_init_files/noobaa_init.sh
      agent
    State:          Running
      Started:      Thu, 05 May 2022 08:01:55 -0400
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  400Mi
    Requests:
      cpu:     100m
      memory:  400Mi
    Environment:
      CONTAINER_PLATFORM:  KUBERNETES
      AGENT_CONFIG:        <set to the key 'AGENT_CONFIG' in secret 'backing-store-pv-pool-noobaa-default-backing-store'>  Optional: false
    Mounts:
      /noobaa_storage from noobaastorage (rw)
      /usr/local/noobaa/logs from tmp-logs-vol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-df6t7 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  tmp-logs-vol:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  noobaastorage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  noobaa-default-backing-store-noobaa-pvc-80b69322
    ReadOnly:   false
  kube-api-access-df6t7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
                             node.ocs.openshift.io/storage=true:NoSchedule
Events:
  Type     Reason                  Age   From                     Message
  ----     ------                  ----  ----                     -------
  Warning  FailedScheduling        85m   default-scheduler        0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled               85m   default-scheduler        Successfully assigned openshift-storage/noobaa-default-backing-store-noobaa-pod-80b69322 to syd04-worker-0.rdr-upg410-1.ibm.com
  Normal   SuccessfulAttachVolume  85m   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-59726088-2936-4842-a16a-5d72e24f81a1"
  Normal   AddedInterface          85m   multus                   Add eth0 [10.128.2.49/23] from openshift-sdn
  Normal   Pulled                  85m   kubelet                  Container image "quay.io/rhceph-dev/odf4-mcg-core-rhel8@sha256:4ff2d65ea16dd1026fe278a0f8ca920f300dfcee205b4b8ede0ab28be1aa43a6" already present on machine
  Normal   Created                 85m   kubelet                  Created container noobaa-agent
  Normal   Started                 85m   kubelet                  Started container noobaa-agent



Now, when I checked noobaa-default-backing-store-noobaa-pod-* again on the same cluster:

[root@rdr-upg410-1-syd04-bastion-0 ~]# oc get pods -n openshift-storage |grep noobaa
noobaa-core-0                                                     1/1     Running     0             4d1h
noobaa-db-pg-0                                                    1/1     Running     0             4d1h
noobaa-default-backing-store-noobaa-pod-80b69322                  1/1     Running     0             3d5h
noobaa-endpoint-75976ddddb-t5t55                                  1/1     Running     0             4d1h
noobaa-operator-8549dbc96-2mkv4                                   1/1     Running     2 (43h ago)   4d1h

It seems it restarted 3 days ago and the image got upgraded. Describing the pod shows the latest odf4-mcg-core-rhel8 image:

[root@rdr-upg410-1-syd04-bastion-0 ~]# oc describe pod noobaa-default-backing-store-noobaa-pod-80b69322 -n openshift-storage
Name:         noobaa-default-backing-store-noobaa-pod-80b69322
Namespace:    openshift-storage
Priority:     0
Node:         syd04-worker-0.rdr-upg410-1.ibm.com/192.168.25.162
Start Time:   Fri, 06 May 2022 04:14:43 -0400
Labels:       app=noobaa
              pool=noobaa-default-backing-store
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "openshift-sdn",
                    "interface": "eth0",
                    "ips": [
                        "10.128.2.238"
                    ],
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "openshift-sdn",
                    "interface": "eth0",
                    "ips": [
                        "10.128.2.238"
                    ],
                    "default": true,
                    "dns": {}
                }]
              openshift.io/scc: restricted
Status:       Running
IP:           10.128.2.238
IPs:
  IP:           10.128.2.238
Controlled By:  BackingStore/noobaa-default-backing-store
Containers:
  noobaa-agent:
    Container ID:  cri-o://b8176b0cf28c66c36a8398a9110faa907b9bd7ff6886d95333f052d49179ac14
    Image:         quay.io/rhceph-dev/odf4-mcg-core-rhel8@sha256:40da9aaf63ead51b72bc2a25e36f2a178fbbc404653b7c90dc1f36e1c191316b
    Image ID:      quay.io/rhceph-dev/odf4-mcg-core-rhel8@sha256:1dd893fd1b62cc70d2006fb4bf26a6af302b4052626f0a27849217c111537215
    Port:          60101/TCP
    Host Port:     0/TCP
    Command:
      /noobaa_init_files/noobaa_init.sh
      agent
    State:          Running
      Started:      Fri, 06 May 2022 04:14:58 -0400
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  400Mi
    Requests:
      cpu:     100m
      memory:  400Mi
    Environment:
      CONTAINER_PLATFORM:  KUBERNETES
      AGENT_CONFIG:        <set to the key 'AGENT_CONFIG' in secret 'backing-store-pv-pool-noobaa-default-backing-store'>  Optional: false
    Mounts:
      /noobaa_storage from noobaastorage (rw)
      /usr/local/noobaa/logs from tmp-logs-vol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7fgms (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  tmp-logs-vol:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  noobaastorage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  noobaa-default-backing-store-noobaa-pvc-80b69322
    ReadOnly:   false
  kube-api-access-7fgms:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
                             node.ocs.openshift.io/storage=true:NoSchedule
Events:                      <none>

Comment 9 Liran Mauda 2022-05-09 14:18:08 UTC
I am probably missing something.

What is the status now?
And if it is the correct image now, did you do anything manually to change the image?

I am asking because I don't see any issue here; the pod is at the correct version per the must-gather.

Comment 10 Aaruni Aggarwal 2022-05-09 14:26:32 UTC
In the must-gather logs, inside namespaces/openshift-storage/pods/noobaa-default-backing-store-noobaa-pod-80b69322/noobaa-default-backing-store-noobaa-pod-80b69322.yaml, I am seeing the following image:

image: quay.io/rhceph-dev/odf4-mcg-core-rhel8@sha256:4ff2d65ea16dd1026fe278a0f8ca920f300dfcee205b4b8ede0ab28be1aa43a6


At this time, container status is as follows:

containerStatuses:
  - containerID: cri-o://b8176b0cf28c66c36a8398a9110faa907b9bd7ff6886d95333f052d49179ac14
    image: quay.io/rhceph-dev/odf4-mcg-core-rhel8@sha256:40da9aaf63ead51b72bc2a25e36f2a178fbbc404653b7c90dc1f36e1c191316b
    imageID: quay.io/rhceph-dev/odf4-mcg-core-rhel8@sha256:1dd893fd1b62cc70d2006fb4bf26a6af302b4052626f0a27849217c111537215
    lastState: {}
    name: noobaa-agent
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2022-05-06T08:14:58Z"


It seems the image got changed later and not at the time of the upgrade. I upgraded the cluster on 5th May, and the pod got restarted with the latest image on 6th May.


No, I didn't do anything manually.
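
For reference, the same spec-versus-status comparison can be done offline against the must-gather copy of that pod YAML (path from the comment above); a small sketch assuming a local extract of the must-gather and PyYAML:

```
import yaml

# Path inside the must-gather, as quoted in the comment above.
path = ("namespaces/openshift-storage/pods/"
        "noobaa-default-backing-store-noobaa-pod-80b69322/"
        "noobaa-default-backing-store-noobaa-pod-80b69322.yaml")

with open(path) as f:
    pod = yaml.safe_load(f)

status = pod["status"]["containerStatuses"][0]
print("spec image:    ", pod["spec"]["containers"][0]["image"])
print("status image:  ", status["image"])
print("status imageID:", status["imageID"])
print("started at:    ", status["state"]["running"]["startedAt"])
```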

Comment 11 Aaruni Aggarwal 2022-05-09 18:43:10 UTC
Hi Liran, 

I ran upgrade testing of ODF from 4.10.0 to 4.10.1 using the latest 4.10.1 build, and it completed successfully.

[root@rdr-upg410-2-sao01-bastion-0 scripts]# oc get csv -n openshift-storage
NAME                              DISPLAY                       VERSION   REPLACES                          PHASE
mcg-operator.v4.10.1              NooBaa Operator               4.10.1    mcg-operator.v4.10.0              Succeeded
ocs-operator.v4.10.1              OpenShift Container Storage   4.10.1    ocs-operator.v4.10.0              Succeeded
odf-csi-addons-operator.v4.10.1   CSI Addons                    4.10.1    odf-csi-addons-operator.v4.10.0   Succeeded
odf-operator.v4.10.1              OpenShift Data Foundation     4.10.1    odf-operator.v4.10.0              Succeeded

[root@rdr-upg410-2-sao01-bastion-0 scripts]# oc get csv odf-operator.v4.10.1 -n openshift-storage -o yaml |grep full_version
    full_version: 4.10.1-7

Also, after the upgrade, the image inside noobaa-default-backing-store-noobaa-pod-* got updated to the latest one, i.e. quay.io/rhceph-dev/odf4-mcg-core-rhel8@sha256:40da9aaf63ead51b72bc2a25e36f2a178fbbc404653b7c90dc1f36e1c191316b

[root@rdr-upg410-2-sao01-bastion-0 scripts]# oc get pods -n openshift-storage |grep noobaa
noobaa-core-0                                                     1/1     Running     0          25m
noobaa-db-pg-0                                                    1/1     Running     0          25m
noobaa-default-backing-store-noobaa-pod-86081465                  1/1     Running     0          23m
noobaa-endpoint-86b88ff659-bgxn7                                  1/1     Running     0          26m
noobaa-operator-5699ff4fd-f7dlm                                   1/1     Running     0          27m

[root@rdr-upg410-2-sao01-bastion-0 scripts]# oc get pod noobaa-default-backing-store-noobaa-pod-86081465 -n openshift-storage -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "openshift-sdn",
          "interface": "eth0",
          "ips": [
              "10.129.2.44"
          ],
          "default": true,
          "dns": {}
      }]
    k8s.v1.cni.cncf.io/networks-status: |-
      [{
          "name": "openshift-sdn",
          "interface": "eth0",
          "ips": [
              "10.129.2.44"
          ],
          "default": true,
          "dns": {}
      }]
    openshift.io/scc: restricted
  creationTimestamp: "2022-05-09T18:14:46Z"
  labels:
    app: noobaa
    pool: noobaa-default-backing-store
  name: noobaa-default-backing-store-noobaa-pod-86081465
  namespace: openshift-storage
  ownerReferences:
  - apiVersion: noobaa.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: BackingStore
    name: noobaa-default-backing-store
    uid: cd82ad14-0cc2-4791-a726-6b37a19e39bc
  resourceVersion: "84881"
  uid: 84725a74-4f6d-4e4f-a378-eaab69e5c55c
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: cluster.ocs.openshift.io/openshift-storage
            operator: Exists
  containers:
  - command:
    - /noobaa_init_files/noobaa_init.sh
    - agent
    env:
    - name: CONTAINER_PLATFORM
      value: KUBERNETES
    - name: AGENT_CONFIG
      valueFrom:
        secretKeyRef:
          key: AGENT_CONFIG
          name: backing-store-pv-pool-noobaa-default-backing-store
    image: quay.io/rhceph-dev/odf4-mcg-core-rhel8@sha256:40da9aaf63ead51b72bc2a25e36f2a178fbbc404653b7c90dc1f36e1c191316b
    imagePullPolicy: IfNotPresent
    name: noobaa-agent
    ports:
    - containerPort: 60101
      protocol: TCP
    resources:
      limits:
        cpu: 100m
        memory: 400Mi
      requests:
        cpu: 100m
        memory: 400Mi
    securityContext:
      capabilities:
        drop:
        - KILL
        - MKNOD
        - SETGID
        - SETUID
      runAsUser: 1000680000
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /noobaa_storage
      name: noobaastorage
    - mountPath: /usr/local/noobaa/logs
      name: tmp-logs-vol
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-rcjx6
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  imagePullSecrets:
  - name: default-dockercfg-xxgxb
  nodeName: sao01-worker-2.rdr-upg410-2.ibm.com
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1000680000
    seLinuxOptions:
      level: s0:c26,c15
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoSchedule
    key: node.ocs.openshift.io/storage
    operator: Equal
    value: "true"
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  - effect: NoSchedule
    key: node.kubernetes.io/memory-pressure
    operator: Exists
  volumes:
  - emptyDir: {}
    name: tmp-logs-vol
  - name: noobaastorage
    persistentVolumeClaim:
      claimName: noobaa-default-backing-store-noobaa-pvc-86081465
  - name: kube-api-access-rcjx6
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
      - configMap:
          items:
          - key: service-ca.crt
            path: service-ca.crt
          name: openshift-service-ca.crt
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-05-09T18:14:46Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2022-05-09T18:14:53Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2022-05-09T18:14:53Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2022-05-09T18:14:46Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: cri-o://b37c0f996ac5d463989eb30247cf5f6b4ee258f9c68f2fc0b31b1249d45d3d8c
    image: quay.io/rhceph-dev/odf4-mcg-core-rhel8@sha256:40da9aaf63ead51b72bc2a25e36f2a178fbbc404653b7c90dc1f36e1c191316b
    imageID: quay.io/rhceph-dev/odf4-mcg-core-rhel8@sha256:1dd893fd1b62cc70d2006fb4bf26a6af302b4052626f0a27849217c111537215
    lastState: {}
    name: noobaa-agent
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2022-05-09T18:14:52Z"
  hostIP: 192.168.0.112
  phase: Running
  podIP: 10.129.2.44
  podIPs:
  - ip: 10.129.2.44
  qosClass: Guaranteed
  startTime: "2022-05-09T18:14:46Z"



Thanks Liran

Comment 12 Mudit Agarwal 2022-05-10 05:52:47 UTC
Can we close the bug?

Comment 13 Aaruni Aggarwal 2022-05-10 05:55:42 UTC
Yes, Mudit. This can be closed now.

