When attempting to create a StorageCluster on OCP 4.6 in AWS GovCloud, the Ceph cluster is created, but the NooBaa operator pod logs the following errors:

time="2021-02-04T16:07:13Z" level=info msg="❌ Not Found: BackingStore \"noobaa-default-backing-store\"\n"
time="2021-02-04T16:07:13Z" level=info msg="CredentialsRequest \"noobaa-aws-cloud-creds\" created. Creating default backing store on AWS objectstore" func=ReconcileDefaultBackingStore sys=openshift-storage/noobaa
time="2021-02-04T16:07:13Z" level=info msg="❌ Not Found: \"noobaa-aws-cloud-creds-secret\"\n"
time="2021-02-04T16:07:13Z" level=info msg="Secret \"noobaa-aws-cloud-creds-secret\" was not created yet by cloud-credentials operator. retry on next reconcile.." sys=openshift-storage/noobaa
time="2021-02-04T16:07:13Z" level=info msg="SetPhase: temporary error during phase \"Configuring\"" sys=openshift-storage/noobaa
time="2021-02-04T16:07:13Z" level=warning msg="⏳ Temporary Error: cloud credentials secret \"noobaa-aws-cloud-creds-secret\" is not ready yet" sys=openshift-storage/noobaa

The cloud-credential-operator pod log shows:

time="2021-02-04T18:02:50Z" level=error msg="error syncing creds in mint-mode" actuator=aws cr=openshift-storage/noobaa-aws-cloud-creds error="AWS Error: MalformedPolicyDocument: Partition \"aws\" is not valid for resource \"arn:aws:s3:::nb.1612453984737.apps.ocp4.sbx2.dso.ncps.us-cert.gov\".\n\tstatus code: 400, request id: de22fe50-05a2-4f81-b2e8-930233ea51c1"
time="2021-02-04T18:02:50Z" level=error msg="error syncing credentials: error syncing creds in mint-mode: AWS Error: MalformedPolicyDocument: Partition \"aws\" is not valid for resource \"arn:aws:s3:::nb.1612453984737.apps.ocp4.sbx2.dso.ncps.us-cert.gov\".\n\tstatus code: 400, request id: de22fe50-05a2-4f81-b2e8-930233ea51c1" controller=credreq cr=openshift-storage/noobaa-aws-cloud-creds secret=openshift-storage/noobaa-aws-cloud-creds-secret

We were able to work around the issue by not installing NooBaa,
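The MalformedPolicyDocument error points at the cause: the minted credentials policy references resources in the standard "aws" partition, but GovCloud resources live in the "aws-us-gov" partition. A minimal sketch of the difference (the bucket name is hypothetical):

```shell
# The partition is the second colon-separated field of an ARN.
# GovCloud rejects policies whose resource ARNs use the "aws" partition.
commercial_arn='arn:aws:s3:::example-bucket'
govcloud_arn='arn:aws-us-gov:s3:::example-bucket'
echo "$commercial_arn" | cut -d: -f2   # prints: aws
echo "$govcloud_arn" | cut -d: -f2     # prints: aws-us-gov
```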
with the following changes to the StorageCluster YAML:

... truncated ...
spec:
  managedResources:
    cephBlockPools: {}
    cephFilesystems: {}
    snapshotClasses: {}
    storageClasses: {}
  multiCloudGateway:
    reconcileStrategy: ignore
... truncated ...
  monDataDirHostPath: /var/lib/rook
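The workaround amounts to setting spec.multiCloudGateway.reconcileStrategy to ignore so the OCS operator skips NooBaa reconciliation entirely. A sketch that prints just that override fragment (field names are taken from the YAML above; indentation is assumed):

```shell
# Emit only the StorageCluster spec fragment that disables
# MultiCloudGateway (NooBaa) reconciliation.
cat <<'EOF'
spec:
  multiCloudGateway:
    reconcileStrategy: ignore
EOF
```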
Flipped the NooBaa operator image to a custom build docker.io/dannyzaken/noobaa-operator@sha256:9856918557a6f3958879f195a636176b4b1e4ecd85d55babdee02245d52154e6 and configured NooBaa with:

cat <<EOF | oc apply -f -
apiVersion: noobaa.io/v1alpha1
kind: NooBaa
metadata:
  name: noobaa
  namespace: openshift-storage
spec:
  dbResources:
    requests:
      cpu: '0.1'
      memory: 1Gi
  coreResources:
    requests:
      cpu: '0.1'
      memory: 1Gi
EOF

Result:

root@cloudctl nooba$ oc get all -n openshift-storage
NAME                                        READY   STATUS    RESTARTS   AGE
pod/noobaa-core-0                           1/1     Running   0          2m41s
pod/noobaa-db-0                             1/1     Running   0          2m41s
pod/noobaa-endpoint-5f86f9d965-8cqv5        1/1     Running   0          87s
pod/noobaa-operator-847c58cbc-x22b7         1/1     Running   0          55m
pod/ocs-metrics-exporter-579f5c8b94-p787k   1/1     Running   0          55m
pod/ocs-operator-6555f86f69-kf6jn           1/1     Running   0          55m
pod/rook-ceph-operator-6c4d68c-sbjtz        1/1     Running   0          55m

NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP                                                                  PORT(S)                                                    AGE
service/noobaa-db     ClusterIP      172.30.169.73    <none>                                                                       27017/TCP                                                  2m40s
service/noobaa-mgmt   LoadBalancer   172.30.254.165   a2d719e087f7a462eb86b7bb8b0c7766-643224321.us-gov-west-1.elb.amazonaws.com   80:31197/TCP,443:32104/TCP,8445:32579/TCP,8446:30918/TCP   2m41s
service/s3            LoadBalancer   172.30.184.126   a82418a6d99414cbc9fc6196dffeae2b-320072286.us-gov-west-1.elb.amazonaws.com   80:31644/TCP,443:30639/TCP,8444:30967/TCP                  2m40s

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/noobaa-endpoint        1/1     1            1           87s
deployment.apps/noobaa-operator        1/1     1            1           58m
deployment.apps/ocs-metrics-exporter   1/1     1            1           58m
deployment.apps/ocs-operator           1/1     1            1           58m
deployment.apps/rook-ceph-operator     1/1     1            1           58m

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/noobaa-endpoint-5f86f9d965        1         1         1       87s
replicaset.apps/noobaa-operator-78c74bbf48        0         0         0       58m
replicaset.apps/noobaa-operator-847c58cbc         1         1         1       55m
replicaset.apps/ocs-metrics-exporter-577d57cb89   0         0         0       58m
replicaset.apps/ocs-metrics-exporter-579f5c8b94   1         1         1       55m
replicaset.apps/ocs-operator-5474959848           0         0         0       58m
replicaset.apps/ocs-operator-6555f86f69           1         1         1       55m
replicaset.apps/rook-ceph-operator-576687f8b8     0         0         0       58m
replicaset.apps/rook-ceph-operator-6c4d68c        1         1         1       55m

NAME                           READY   AGE
statefulset.apps/noobaa-core   1/1     2m41s
statefulset.apps/noobaa-db     1/1     2m41s

NAME                                                  REFERENCE                    TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/noobaa-endpoint   Deployment/noobaa-endpoint   <unknown>/80%   1         2         1          87s

NAME                                   HOST/PORT                                                PATH   SERVICES      PORT         TERMINATION          WILDCARD
route.route.openshift.io/noobaa-mgmt   noobaa-mgmt-openshift-storage.apps.falcon.millenium.io          noobaa-mgmt   mgmt-https   reencrypt/Redirect   None
route.route.openshift.io/s3            s3-openshift-storage.apps.falcon.millenium.io                   s3            s3-https     reencrypt            None
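The LoadBalancer hostnames in the service output above embed the cluster's actual region; extracting it confirms the cluster sits in us-gov-west-1, not the us-east-1 the operator later falls back to:

```shell
# Pull the region component out of an ELB hostname (taken from the
# noobaa-mgmt service above); it is the second dot-separated field.
host='a2d719e087f7a462eb86b7bb8b0c7766-643224321.us-gov-west-1.elb.amazonaws.com'
echo "$host" | cut -d. -f2   # prints: us-gov-west-1
```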
Upon further review, the region is reverting back to us-east-1 for some reason:

root@cloudctl nooba$ oc logs pod/noobaa-operator-847c58cbc-x22b7
time="2021-02-09T00:19:29Z" level=info msg="✅ RPC: redirector.register_to_cluster() Response OK: took 0.1ms"
time="2021-02-09T00:19:29Z" level=info msg="❌ Not Found: BackingStore \"noobaa-default-backing-store\"\n"
time="2021-02-09T00:19:29Z" level=info msg="CredentialsRequest \"noobaa-aws-cloud-creds\" created. Creating default backing store on AWS objectstore" func=ReconcileDefaultBackingStore sys=openshift-storage/noobaa
time="2021-02-09T00:19:29Z" level=info msg="✅ Exists: \"noobaa-aws-cloud-creds-secret\"\n"
time="2021-02-09T00:19:29Z" level=info msg="Secret noobaa-aws-cloud-creds-secret was created successfully by cloud-credentials operator" sys=openshift-storage/noobaa
time="2021-02-09T00:19:29Z" level=info msg="identified aws region us-east-1" sys=openshift-storage/noobaa

root@cloudctl nooba$ oc get -oyaml secret noobaa-aws-cloud-creds-secret
apiVersion: v1
data:
  aws_access_key_id: QUXXXXXXXXXXXXXVU=
  aws_secret_access_key: TmFzQXXXXXXXXXXXR3Sw==
  credentials: W2RlZmXXXXXXXXXXXXXXXXXJ3QmR3Sw==
kind: Secret
metadata:
  annotations:
    cloudcredential.openshift.io/aws-policy-last-applied: ""
    cloudcredential.openshift.io/credentials-request: openshift-storage/noobaa-aws-cloud-creds
  creationTimestamp: "2021-02-08T23:56:11Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:aws_access_key_id: {}
        f:aws_secret_access_key: {}
        f:credentials: {}
      f:metadata:
        f:annotations:
          .: {}
          f:cloudcredential.openshift.io/aws-policy-last-applied: {}
          f:cloudcredential.openshift.io/credentials-request: {}
      f:type: {}
    manager: cloud-credential-operator
    operation: Update
    time: "2021-02-08T23:56:11Z"
  name: noobaa-aws-cloud-creds-secret
  namespace: openshift-storage
  resourceVersion: "1265669"
  selfLink: /api/v1/namespaces/openshift-storage/secrets/noobaa-aws-cloud-creds-secret
  uid: 34158740-997b-4698-b067-e07272e35fb1
type: Opaque

root@cloudctl nooba$ cat <<EOF | base64 -d
> W2RlZmXXXXXXXXXXXXXXXXXJ3QmR3Sw==
> EOF
[default]
aws_access_key_id = AKXXXXXXXXXXXXXXNEU
aws_secret_access_key = NasB6XXXXXXXXXXXXXXXXXXXBdwK
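The same decode works on a one-line sample. The value below is a stand-in (it encodes only the `[default]` INI header), since the real secret data is redacted above:

```shell
# Decode a base64 sample the way the heredoc above does.
# 'W2RlZmF1bHRd' is a hypothetical stand-in, not the real secret value.
sample='W2RlZmF1bHRd'
echo "$sample" | base64 -d   # prints: [default]
```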
root@cloudctl nooba$ oc -n openshift-storage describe noobaa
Events:
  Type     Reason                      Age                  From             Message
  ----     ------                      ----                 ----             -------
  Normal   NooBaaImage                 54m                  noobaa-operator  Using NooBaa image "registry.redhat.io/ocs4/mcg-core-rhel8@sha256:cef0031fe36242679707fa6660e967d28f431f43827b3b9ebfa185f4cc02b54a" for the creation of "noobaa"
  Warning  DefaultBackingStoreFailure  11m (x22 over 53m)   noobaa-operator  Failed to get AWSRegion. using us-east-1 as the default region. "The parsed AWS region is invalid: \"\""
  Warning  DefaultBackingStoreFailure  55s (x4 over 6m56s)  noobaa-operator  Failed to get AWSRegion. using us-east-1 as the default region. "The parsed AWS region is invalid: \"\""
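One plausible reading of the "parsed AWS region is invalid" warning (a sketch only; the operator's actual parsing code may differ) is that the region is extracted with a pattern that only recognizes commercial region names, so GovCloud names fall through and the result is an empty string:

```shell
# A commercial-only region pattern (hypothetical, for illustration)
# matches us-east-1 but not us-gov-west-1, leaving the parse empty
# and triggering the us-east-1 fallback seen in the events above.
pattern='(us|eu|ap|sa|ca)-(east|west|north|south|central|northeast|southeast)-[0-9]'
echo 'us-east-1'     | grep -Eo "$pattern"                     # prints: us-east-1
echo 'us-gov-west-1' | grep -Eo "$pattern" || echo 'no match'  # prints: no match
```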
Verification should be done based on the regression testing results
Moving to VERIFIED based on the regression testing results with BUILD ID: v4.7.0-262.ci. For reference, RUN ID: 1613433449.
I can't answer that. Redirecting the needinfo
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat OpenShift Container Storage 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2041