Description of problem (please be as detailed as possible and provide log snippets):
Today, in order to know there is an S3 bucket access issue between managed clusters, the ramen pod logs or VRG status must be inspected. This issue, where S3 bucket access fails to upload or retrieve metadata, is very common, yet there is no alerting visible to the user.

Version of all relevant components (if applicable):
OCP 4.14
ODF 4.14
ACM 2.9

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
Yes. If the user is unaware that there is an S3 bucket access issue, they will be stuck when attempting to fail over with no visible reason.

Is there any workaround available to the best of your knowledge?
Inspection of ramen logs and/or VRG status can be useful (see the sketch under Additional info below), but there is no documented process for customers to use this method.

Steps to Reproduce:
1. Create an RDR or MDR test environment.
2. Create the first DRPolicy to create OBCs on each managed cluster.
3. Break noobaa with "oc scale deployment noobaa-endpoint --replicas=0 -n openshift-storage".
4. Deploy a test app using cephrbd volume(s) and apply the DRPolicy.

Actual results:
The VRG has a message (such as the one below), but no alerts are fired.

error uploading PV to s3Profile s3profile-perf4-ocs-storagecluster, failed to protect cluster data for PVC upload-pvc, failed to upload data of odrbucket-d1040a2e7dc0:busybox/busybox-as-placement-drpc/v1.PersistentVolume/pvc-fc9785f3-53e6-4aa8-a807-e080583c7cf4, RequestError: send request failed

Expected results:
The VRG has an error message for S3 access AND OCP alert(s) are fired with cluster and bucket details in each alert (maximum 2 alerts per cluster for bucket access).

Additional info:
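As a rough sketch of the manual workaround, the following commands can be run on the managed cluster to surface the S3 protection error. The namespace and VRG name are taken from the reproduction above as placeholders, and the ramen deployment/namespace names are assumptions that may differ per installation:

# List VolumeReplicationGroups in the workload namespace
# (namespace "busybox" taken from the log snippet above; adjust as needed)
oc get volumereplicationgroup -n busybox

# Dump the VRG status conditions; an S3 access failure typically surfaces as a
# condition with status False and an "error uploading PV to s3Profile ..." message
oc get volumereplicationgroup busybox-as-placement-drpc -n busybox \
  -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'

# Check the ramen operator logs on the managed cluster for upload/retrieve failures
# (deployment and namespace names assumed; adjust to your install)
oc logs deployment/ramen-dr-cluster-operator -n openshift-dr-system | grep -i s3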
DRPC now has an additional condition named Protected. The Protected condition provides the latest available observation regarding the protection status of the workload on the cluster it is expected to be available on.

The Protected condition summarizes VRG conditions from the primary cluster (and at times the secondary cluster, in cases where a relocation is in progress) to report a status of True if all required resources are protected. This includes resource protection with the required replication schemes, resource protection in the S3 store that Ramen uses, and resource readiness for use as a primary or when transitioning to secondary. The Protected condition reports False if any of the required workload resources are inadequately protected.

The Protected condition does not ensure workload health, i.e., whether the workload is able to mount and use the protected volumes or whether the workload pods are healthy. As workloads are managed by either ACM Subscriptions or ArgoCD ApplicationSets (or are imperatively defined on the cluster), workload health needs to be monitored and validated using the tools available from those workload management layers.

Additionally, an alert named "WorkloadUnprotected" has been added to the set of DR alerts. It generates a warning-level alert on the hub cluster if a workload that is protected by a DRPlacementControl reports the Protected condition as False for more than 10 minutes.

----------------------------

The above helps address the problem posted as part of this BZ, pending merge of this backport to 4.16: https://github.com/red-hat-storage/ramen/pull/247
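A minimal sketch of how this could be verified from the hub cluster once the backport lands; the DRPC name/namespace are placeholders from the reproduction above, and the second command assumes the alert is evaluated by the hub's platform Prometheus in openshift-monitoring, which may differ per setup:

# Inspect the new Protected condition on the DRPC
oc get drpc busybox-as-placement-drpc -n busybox \
  -o jsonpath='{.status.conditions[?(@.type=="Protected")].status}{"  "}{.status.conditions[?(@.type=="Protected")].message}{"\n"}'

# Look for a firing WorkloadUnprotected warning alert on the hub
oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- \
  curl -s 'http://localhost:9090/api/v1/alerts' | grep -i workloadunprotected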
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.16.0 security, enhancement & bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2024:4591