Description of problem:
The image-registry pod keeps running when an invalid S3 bucket is configured; there is no error to warn about it.

Version-Release number of selected component (if applicable):
openshift-install v0.7.0-master-6-g8f02020b59147c933a08c5e248a8e2c69dad24ae

How reproducible:
always

Steps to Reproduce:
1. Edit the imageregistries resource image-registry to use an invalid bucket
2. Check the pod status after it restarts

Actual results:
Pod is running without any error in the log.

Expected results:
It should warn that no usable bucket is available.

Additional info:
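A rough sketch of the reproduction with oc (the spec.storage.s3.bucket field path and the deployment name are assumptions, not confirmed against the operator's schema):

    # Point the registry at a bucket that does not exist / is invalid,
    # e.g. by changing spec.storage.s3.bucket (assumed field path)
    oc edit imageregistries image-registry

    # Watch the operand pod after it restarts and check its logs
    oc get pods -n openshift-image-registry
    oc logs -n openshift-image-registry deploy/image-registry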
This issue will be fixed by https://github.com/openshift/cluster-image-registry-operator/pull/106. If you update the image registry resource with a new bucket name that does not exist on S3, the bucket will be created (if we have access to do so). If we can't create the bucket and don't have the rights to access it, a condition stating that will be set on the image registry resource.
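To see the condition the operator sets, inspecting the resource status should be enough (the exact condition layout under .status.conditions is an assumption):

    oc get imageregistries image-registry -o yaml
    # or just the conditions:
    oc get imageregistries image-registry -o jsonpath='{.status.conditions}'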
https://github.com/openshift/cluster-image-registry-operator/pull/106 has merged. Can you please check this again and see if it's still an issue?
If I set an invalid bucket name which cannot be created, the image-registry pod is still running, but there is an error message when I describe the CVO. Is that expected?

    Last Transition Time:  2019-01-14T10:00:26Z
    Message:               unable to apply resources: unable to sync storage configuration: InvalidBucketName: The specified bucket is not valid.
                           status code: 400, request id: 39192C2EE4B06A18, host id: eaEQoJ94joa1jkvMcy6+IXMLJBg/3UKtIiGiUC10/DD5wes8Mtf2905zNe49qOu/5Lnj8Tp1Jnc=
    Status:                True
    Type:                  Progressing
    Last Transition Time:  2019-01-14T04:51:20Z
    Status:                False
    Type:                  Failing
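(For reference, the conditions above can be viewed with something like the following; the exact resource name "version" is an assumption:

    oc describe clusterversion version
)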
Yes, that is expected for now.
Per comments 3 and 4, verifying this bug.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758