Created attachment 1710864 [details]
Noobaa UI

Description of problem (please be detailed as possible and provide log snippets):
Creating an OBC fails. The OBC remains in the Pending state.

Version of all relevant components (if applicable):
4.5.0-518.ci

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
Yes

Is there any workaround available to the best of your knowledge?
Haven't found one so far.

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
Yes, every single time.

Can this issue be reproduced from the UI?
Haven't tried.

If this is a regression, please provide more details to justify this:
The attached YAML file used to work; it was used to validate builds and during customer demos.

Steps to Reproduce:
1. Use the YAML file attached
2. oc create -f {yaml_file_provided}

Actual results:
The OBC stays Pending and the pod goes into an error state.

Expected results:
The pod starts.

Additional info:

$ oc version
Client Version: 4.5.3
Server Version: 4.5.3
Kubernetes Version: v1.18.3+3107688

$ oc get csv -n openshift-storage
NAME                         DISPLAY                       VERSION        REPLACES   PHASE
ocs-operator.v4.5.0-518.ci   OpenShift Container Storage   4.5.0-518.ci              Succeeded

$ cat ./obc-loop.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: webinar
spec: {}
---
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: webinarbucket
  namespace: webinar
spec:
  generateBucketName: "webinarbucket"
  storageClassName: openshift-storage.noobaa.io
---
apiVersion: batch/v1
kind: Job
metadata:
  name: batch1
  namespace: webinar
  labels:
    app: batch1
spec:
  template:
    metadata:
      labels:
        app: batch1
    spec:
      restartPolicy: OnFailure
      containers:
        - image: amazon/aws-cli:latest
          command: ["sh"]
          args:
            - '-c'
            - 'export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID ; export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY ; while true; do export mystamp=$(date +%Y%m%d_%H%M%S); dd if=/dev/urandom of=/tmp/file_${mystamp} bs=1M count=1; set -x && aws --no-verify-ssl --endpoint https://$BUCKET_HOST:$BUCKET_PORT s3 cp /tmp/file_${mystamp} s3://$BUCKET_NAME; sleep 60; rm /tmp/file_${mystamp}; done'
          name: batch1
          env:
            - name: BUCKET_NAME
              valueFrom:
                configMapKeyRef:
                  name: webinarbucket
                  key: BUCKET_NAME
            - name: BUCKET_HOST
              valueFrom:
                configMapKeyRef:
                  name: webinarbucket
                  key: BUCKET_HOST
            - name: BUCKET_PORT
              valueFrom:
                configMapKeyRef:
                  name: webinarbucket
                  key: BUCKET_PORT
            - name: AWS_DEFAULT_REGION
              valueFrom:
                configMapKeyRef:
                  name: webinarbucket
                  key: BUCKET_REGION
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: webinarbucket
                  key: AWS_ACCESS_KEY_ID
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: webinarbucket
                  key: AWS_SECRET_ACCESS_KEY

$ oc get pod,pvc,obc
NAME               READY   STATUS                       RESTARTS   AGE
pod/batch1-2l27c   0/1     CreateContainerConfigError   0          15m

NAME                                              STORAGE-CLASS                 PHASE     AGE
objectbucketclaim.objectbucket.io/webinarbucket   openshift-storage.noobaa.io   Pending   15m

$ oc describe pod/batch1-2l27c
Name:         batch1-2l27c
Namespace:    webinar
Priority:     0
Node:         ip-10-0-147-128.us-east-2.compute.internal/10.0.147.128
Start Time:   Fri, 07 Aug 2020 14:01:46 -0700
Labels:       app=batch1
              controller-uid=01575954-5b63-47ac-9a2c-3b7e2850bca9
              job-name=batch1
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "openshift-sdn",
                    "interface": "eth0",
                    "ips": [
                        "10.131.0.31"
                    ],
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "openshift-sdn",
                    "interface": "eth0",
                    "ips": [
                        "10.131.0.31"
                    ],
                    "default": true,
                    "dns": {}
                }]
              openshift.io/scc: restricted
Status:       Pending
IP:           10.131.0.31
IPs:
  IP:           10.131.0.31
Controlled By:  Job/batch1
Containers:
  batch1:
    Container ID:
    Image:         amazon/aws-cli:latest
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
    Args:
      -c
      export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID ; export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY ; while true; do export mystamp=$(date +%Y%m%d_%H%M%S); dd if=/dev/urandom of=/tmp/file_${mystamp} bs=1M count=1; set -x && aws --no-verify-ssl --endpoint https://$BUCKET_HOST:$BUCKET_PORT s3 cp /tmp/file_${mystamp} s3://$BUCKET_NAME; sleep 60; rm /tmp/file_${mystamp}; done
    State:          Waiting
      Reason:       CreateContainerConfigError
    Ready:          False
    Restart Count:  0
    Environment:
      BUCKET_NAME:            <set to the key 'BUCKET_NAME' of config map 'webinarbucket'>      Optional: false
      BUCKET_HOST:            <set to the key 'BUCKET_HOST' of config map 'webinarbucket'>      Optional: false
      BUCKET_PORT:            <set to the key 'BUCKET_PORT' of config map 'webinarbucket'>      Optional: false
      AWS_DEFAULT_REGION:     <set to the key 'BUCKET_REGION' of config map 'webinarbucket'>    Optional: false
      AWS_ACCESS_KEY_ID:      <set to the key 'AWS_ACCESS_KEY_ID' in secret 'webinarbucket'>    Optional: false
      AWS_SECRET_ACCESS_KEY:  <set to the key 'AWS_SECRET_ACCESS_KEY' in secret 'webinarbucket'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cl869 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-cl869:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-cl869
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason          Age                 From                                                 Message
  ----     ------          ----                ----                                                 -------
  Normal   Scheduled       <unknown>           default-scheduler                                    Successfully assigned webinar/batch1-2l27c to ip-10-0-147-128.us-east-2.compute.internal
  Normal   AddedInterface  15m                 multus                                               Add eth0 [10.131.0.31/23]
  Normal   Pulled          13m (x8 over 15m)   kubelet, ip-10-0-147-128.us-east-2.compute.internal  Successfully pulled image "amazon/aws-cli:latest"
  Warning  Failed          13m (x8 over 15m)   kubelet, ip-10-0-147-128.us-east-2.compute.internal  Error: configmap "webinarbucket" not found
  Normal   Pulling         11s (x69 over 15m)  kubelet, ip-10-0-147-128.us-east-2.compute.internal  Pulling image "amazon/aws-cli:latest"

$ oc describe objectbucketclaim.objectbucket.io/webinarbucket
Name:         webinarbucket
Namespace:    webinar
Labels:       app=noobaa
              bucket-provisioner=openshift-storage.noobaa.io-obc
              noobaa-domain=openshift-storage.noobaa.io
Annotations:  <none>
API Version:  objectbucket.io/v1alpha1
Kind:         ObjectBucketClaim
Metadata:
  Creation Timestamp:  2020-08-07T21:01:46Z
  Finalizers:
    objectbucket.io/finalizer
  Generation:  2
  Managed Fields:
    API Version:  objectbucket.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"objectbucket.io/finalizer":
        f:labels:
          .:
          f:app:
          f:bucket-provisioner:
          f:noobaa-domain:
      f:spec:
        f:ObjectBucketName:
        f:bucketName:
      f:status:
        .:
        f:phase:
    Manager:      noobaa-operator
    Operation:    Update
    Time:         2020-08-07T21:01:46Z
    API Version:  objectbucket.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        .:
        f:generateBucketName:
        f:storageClassName:
    Manager:         oc
    Operation:       Update
    Time:            2020-08-07T21:01:46Z
  Resource Version:  101264
  Self Link:         /apis/objectbucket.io/v1alpha1/namespaces/webinar/objectbucketclaims/webinarbucket
  UID:               c7eb7b50-a62c-4eb0-8a18-389e122b08d9
Spec:
  Object Bucket Name:
  Bucket Name:
  Generate Bucket Name:  webinarbucket
  Storage Class Name:    openshift-storage.noobaa.io
Status:
  Phase:  Pending
Events:   <none>

$ oc get cm
No resources found in webinar namespace.
$ oc get cm -A | grep webinar

$ oc get secret
NAME                       TYPE                                  DATA   AGE
builder-dockercfg-twk2c    kubernetes.io/dockercfg               1      16m
builder-token-9rdq5        kubernetes.io/service-account-token   4      16m
builder-token-xzrtf        kubernetes.io/service-account-token   4      16m
default-dockercfg-mf6fr    kubernetes.io/dockercfg               1      16m
default-token-cl869        kubernetes.io/service-account-token   4      16m
default-token-mpqsh        kubernetes.io/service-account-token   4      16m
deployer-dockercfg-6jglz   kubernetes.io/dockercfg               1      16m
deployer-token-4qvrb       kubernetes.io/service-account-token   4      16m
deployer-token-gkrnm       kubernetes.io/service-account-token   4      16m

$ oc get sc
NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2 (default)                 kubernetes.io/aws-ebs                   Delete          WaitForFirstConsumer   true                   4h10m
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   94m
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   94m
openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate              false                  88m

$ oc describe sc openshift-storage.noobaa.io
Name:                  openshift-storage.noobaa.io
IsDefaultClass:        No
Annotations:           <none>
Provisioner:           openshift-storage.noobaa.io/obc
Parameters:            bucketclass=noobaa-default-bucket-class
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

$ oc get sc openshift-storage.noobaa.io -o yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: "2020-08-07T19:57:43Z"
  managedFields:
  - apiVersion: storage.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:parameters:
        .: {}
        f:bucketclass: {}
      f:provisioner: {}
      f:reclaimPolicy: {}
      f:volumeBindingMode: {}
    manager: noobaa-operator
    operation: Update
    time: "2020-08-07T19:57:43Z"
  name: openshift-storage.noobaa.io
  resourceVersion: "69815"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/openshift-storage.noobaa.io
  uid: 055bc66e-8801-487a-9819-19a4bad57020
parameters:
  bucketclass: noobaa-default-bucket-class
provisioner: openshift-storage.noobaa.io/obc
reclaimPolicy: Delete
volumeBindingMode: Immediate

$ oc get storagecluster -n openshift-storage
NAME                 AGE   PHASE   EXTERNAL   CREATED AT             VERSION
ocs-storagecluster   96m   Ready              2020-08-07T19:51:19Z   4.5.0

$ oc get nodes
NAME                                         STATUS   ROLES    AGE     VERSION
ip-10-0-136-92.us-east-2.compute.internal    Ready    master   4h18m   v1.18.3+3107688
ip-10-0-147-128.us-east-2.compute.internal   Ready    worker   4h9m    v1.18.3+3107688
ip-10-0-188-103.us-east-2.compute.internal   Ready    worker   4h9m    v1.18.3+3107688
ip-10-0-190-245.us-east-2.compute.internal   Ready    master   4h17m   v1.18.3+3107688
ip-10-0-216-15.us-east-2.compute.internal    Ready    worker   4h9m    v1.18.3+3107688
ip-10-0-219-225.us-east-2.compute.internal   Ready    master   4h17m   v1.18.3+3107688

$ oc get machines -A
NAMESPACE               NAME                                  PHASE     TYPE         REGION      ZONE         AGE
openshift-machine-api   ocp45-d85r6-master-0                  Running   m5.2xlarge   us-east-2   us-east-2a   4h21m
openshift-machine-api   ocp45-d85r6-master-1                  Running   m5.2xlarge   us-east-2   us-east-2b   4h21m
openshift-machine-api   ocp45-d85r6-master-2                  Running   m5.2xlarge   us-east-2   us-east-2c   4h21m
openshift-machine-api   ocp45-d85r6-worker-us-east-2a-nsddt   Running   m5.4xlarge   us-east-2   us-east-2a   4h12m
openshift-machine-api   ocp45-d85r6-worker-us-east-2b-px9s9   Running   m5.4xlarge   us-east-2   us-east-2b   4h12m
openshift-machine-api   ocp45-d85r6-worker-us-east-2c-fwkw8   Running   m5.4xlarge   us-east-2   us-east-2c   4h12m
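Note on the secondary failure: the pod's CreateContainerConfigError is a downstream symptom, since the Job is created in the same apply as the OBC and starts before the provisioner has produced the `webinarbucket` ConfigMap/Secret. Independent of the provisioning bug itself, a small poll-until-ready guard between the OBC and Job creation avoids that race. The sketch below is a hypothetical helper (`wait_for` is not part of `oc` or NooBaa); in practice it would wrap a check such as `oc get configmap webinarbucket -n webinar`:

```shell
#!/bin/sh
# Hypothetical helper: retry a command until it succeeds or the retry
# budget is exhausted. Intended use (assumed, not from the report):
#   wait_for 60 oc get configmap webinarbucket -n webinar
# before running "oc create -f" for the Job manifest.
wait_for() {
  # $1 = maximum number of attempts; remaining args = command to run
  max=$1
  shift
  tries=0
  until "$@" >/dev/null 2>&1; do
    tries=$((tries + 1))
    if [ "$tries" -ge "$max" ]; then
      return 1   # gave up: resource never appeared
    fi
    sleep 1      # back off briefly between attempts
  done
  return 0       # command succeeded: resource exists
}
```

An initContainer running an equivalent loop inside the Job pod would achieve the same effect without splitting the manifest.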
Created attachment 1710865 [details]
OCS Dashboard
*** This bug has been marked as a duplicate of bug 1866781 ***