Bug 1952344
| Summary: | OCS 4.8: v4.8.0-359 - storagecluster is in progressing state | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Container Storage | Reporter: | Vijay Avuthu <vavuthu> |
| Component: | Multi-Cloud Object Gateway | Assignee: | Danny <dzaken> |
| Status: | CLOSED ERRATA | QA Contact: | Vijay Avuthu <vavuthu> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.8 | CC: | ebenahar, etamir, madam, muagarwa, nbecker, ocs-bugs |
| Target Milestone: | --- | Keywords: | Automation, Regression |
| Target Release: | OCS 4.8.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | 4.8.0-361.ci | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-08-03 18:15:57 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Vijay Avuthu 2021-04-22 06:06:46 UTC
> From rook-ceph-operator-55f9f45b79-8x5pb log:

2021-04-22 05:22:41.667702 E | ceph-cluster-controller: failed to retrieve ceph cluster "ocs-storagecluster-cephcluster" in namespace "openshift-storage" to update status to &{Health:{Status:HEALTH_OK Checks:map[]} FSID:2a2c311d-a484-48ad-ba58-8a466e229ced ElectionEpoch:12 Quorum:[0 1 2] QuorumNames:[a b c] MonMap:{Epoch:3 FSID:2a2c311d-a484-48ad-ba58-8a466e229ced CreatedTime:2021-04-22 05:11:13.191104 ModifiedTime:2021-04-22 05:12:05.515110 Mons:[{Name:a Rank:0 Address:172.30.127.81:6789/0 PublicAddr:172.30.127.81:6789/0 PublicAddrs:{Addrvec:[{Type:v2 Addr:172.30.127.81:3300 Nonce:0} {Type:v1 Addr:172.30.127.81:6789 Nonce:0}]}} {Name:b Rank:1 Address:172.30.13.140:6789/0 PublicAddr:172.30.13.140:6789/0 PublicAddrs:{Addrvec:[{Type:v2 Addr:172.30.13.140:3300 Nonce:0} {Type:v1 Addr:172.30.13.140:6789 Nonce:0}]}} {Name:c Rank:2 Address:172.30.215.129:6789/0 PublicAddr:172.30.215.129:6789/0 PublicAddrs:{Addrvec:[{Type:v2 Addr:172.30.215.129:3300 Nonce:0} {Type:v1 Addr:172.30.215.129:6789 Nonce:0}]}}]} OsdMap:{OsdMap:{Epoch:61 NumOsd:3 NumUpOsd:3 NumInOsd:3 Full:false NearFull:false NumRemappedPgs:0}} PgMap:{PgsByState:[{StateName:active+clean Count:176}] Version:0 NumPgs:176 DataBytes:448340658 UsedBytes:3765305344 AvailableBytes:318357241856 TotalBytes:322122547200 ReadBps:1279 WriteBps:92812 ReadOps:2 WriteOps:1 RecoveryBps:0 RecoveryObjectsPerSec:0 RecoveryKeysPerSec:0 CacheFlushBps:0 CacheEvictBps:0 CachePromoteBps:0} MgrMap:{Epoch:11 ActiveGID:24521 ActiveName:a ActiveAddr:10.129.2.18:6801/20 Available:true Standbys:[]} Fsmap:{Epoch:11 ID:1 Up:1 In:1 Max:1 ByRank:[{FilesystemID:1 Rank:0 Name:ocs-storagecluster-cephfilesystem-a Status:up:active Gid:4392} {FilesystemID:1 Rank:0 Name:ocs-storagecluster-cephfilesystem-b Status:up:standby-replay Gid:15073}] UpStandby:0}}

E0422 05:22:42.677150 7 reflector.go:138] pkg/mod/k8s.io/client-go.0/tools/cache/reflector.go:167: Failed to watch *v1.CephObjectRealm: the server has received too many requests and has asked us to try again later (get cephobjectrealms.ceph.rook.io)
E0422 05:22:42.677150 7 reflector.go:138] pkg/mod/k8s.io/client-go.0/tools/cache/reflector.go:167: Failed to watch *v1.ConfigMap: the server has received too many requests and has asked us to try again later (get configmaps)
E0422 05:22:42.677150 7 reflector.go:138] pkg/mod/k8s.io/client-go.0/tools/cache/reflector.go:167: Failed to watch *v1.Pod: the server has received too many requests and has asked us to try again later (get pods)
E0422 05:22:42.677156 7 reflector.go:138] pkg/mod/k8s.io/client-go.0/tools/cache/reflector.go:167: Failed to watch *v1.CephNFS: the server has received too many requests and has asked us to try again later (get cephnfses.ceph.rook.io)

> must gather logs: http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/vavuthu-ocs48/vavuthu-ocs48_20210422T040618/logs/failed_testcase_ocs_logs_1619064742/test_deployment_ocs_logs/

Verified with ocs-registry:4.8.0-417.ci and the deployment is successful.

Job: https://ocs4-jenkins-csb-ocsqe.apps.ocp4.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/3781/consoleFull

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenShift Container Storage 4.8.0 container images bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3003
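The symptom being verified here is the StorageCluster `status.phase` moving from `Progressing` to `Ready` once the fixed build is installed. A minimal sketch of how that check could be scripted, assuming a standard OCS install in the `openshift-storage` namespace (the `check_phase` helper is hypothetical, not part of any OCS tooling):

```shell
# Hypothetical helper: classify a reported StorageCluster phase string.
check_phase() {
  case "$1" in
    Ready)       echo "ok" ;;
    Progressing) echo "still progressing" ;;
    *)           echo "unexpected phase: $1" ;;
  esac
}

# On a live cluster, the phase would be fetched with something like:
#   phase=$(oc get storagecluster -n openshift-storage \
#             -o jsonpath='{.items[0].status.phase}')
#   check_phase "$phase"
```

A deployment stuck in `Progressing`, as in this report, would also typically warrant checking the rook-ceph-operator logs (as the reporter did) for repeated reconcile or API-throttling errors.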