Bug 1612013
| Summary: | creation of block pvcs is in pending state since 20 hours | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Nitin Goyal <nigoyal> |
| Component: | heketi | Assignee: | John Mulligan <jmulligan> |
| Status: | CLOSED INSUFFICIENT_DATA | QA Contact: | Nitin Goyal <nigoyal> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | cns-3.10 | CC: | hchiramm, kramdoss, madam, nchilaka, nigoyal, rhs-bugs, rtalur, sankarshan, storage-qa-internal |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-08-10 14:53:25 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1568862 | | |
Description
Nitin Goyal, 2018-08-03 09:01:27 UTC
Logs and sosreports: http://rhsqe-repo.lab.eng.blr.redhat.com/cns/bugs/BZ-1612013/

It looks like the gluster cluster is not in a good state; this needs to be looked into in detail.

```
[cmdexec] INFO 2018/08/03 09:40:01 Check Glusterd service status in node dhcp47-6.lab.eng.blr.redhat.com
[kubeexec] ERROR 2018/08/03 09:40:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:298: Get https://172.31.0.1:443/api/v1/namespaces/glusterfs/pods?labelSelector=glusterfs-node: unexpected EOF
[kubeexec] ERROR 2018/08/03 09:40:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:299: Failed to get list of pods
[cmdexec] ERROR 2018/08/03 09:40:38 /src/github.com/heketi/heketi/executors/cmdexec/peer.go:76: Failed to get list of pods
[heketi] INFO 2018/08/03 09:40:38 Periodic health check status: node 25a62f65f04d287530d0022f17c9a439 up=false
[cmdexec] INFO 2018/08/03 09:40:38 Check Glusterd service status in node dhcp46-167.lab.eng.blr.redhat.com
[kubeexec] ERROR 2018/08/03 09:40:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:298: Get https://172.31.0.1:443/api/v1/namespaces/glusterfs/pods?labelSelector=glusterfs-node: dial tcp 172.31.0.1:443: getsockopt: connection refused
[kubeexec] ERROR 2018/08/03 09:40:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:299: Failed to get list of pods
[cmdexec] ERROR 2018/08/03 09:40:38 /src/github.com/heketi/heketi/executors/cmdexec/peer.go:76: Failed to get list of pods
[heketi] INFO 2018/08/03 09:40:38 Periodic health check status: node 2cec134709d4eedd115df888a98dfc99 up=false
[cmdexec] INFO 2018/08/03 09:40:38 Check Glusterd service status in node dhcp46-174.lab.eng.blr.redhat.com
[heketi] INFO 2018/08/03 09:40:38 Periodic health check status: node 406044bb661bef3017c56dd25fdb01b4 up=false
[heketi] INFO 2018/08/03 09:40:38 Cleaned 0 nodes from health cache
[kubeexec] ERROR 2018/08/03 09:40:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:298: Get https://172.31.0.1:443/api/v1/namespaces/glusterfs/pods?labelSelector=glusterfs-node: dial tcp 172.31.0.1:443: getsockopt: connection refused
[kubeexec] ERROR 2018/08/03 09:40:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:299: Failed to get list of pods
[cmdexec] ERROR 2018/08/03 09:40:38 /src/github.com/heketi/heketi/executors/cmdexec/peer.go:76: Failed to get list of pods
[heketi] INFO 2018/08/03 09:42:01 Starting Node Health Status refresh
[cmdexec] INFO 2018/08/03 09:42:01 Check Glusterd service status in node dhcp47-6.lab.eng.blr.redhat.com
[heketi] INFO 2018/08/03 09:42:01 Periodic health check status: node 25a62f65f04d287530d0022f17c9a439 up=false
[cmdexec] INFO 2018/08/03 09:42:01 Check Glusterd service status in node dhcp46-167.lab.eng.blr.redhat.com
[kubeexec] ERROR 2018/08/03 09:42:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:298: Get https://172.31.0.1:443/api/v1/namespaces/glusterfs/pods?labelSelector=glusterfs-node: dial tcp 172.31.0.1:443: getsockopt: connection refused
[kubeexec] ERROR 2018/08/03 09:42:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:299: Failed to get list of pods
[cmdexec] ERROR 2018/08/03 09:42:01 /src/github.com/heketi/heketi/executors/cmdexec/peer.go:76: Failed to get list of pods
[kubeexec] ERROR 2018/08/03 09:42:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:298: Get https://172.31.0.1:443/api/v1/namespaces/glusterfs/pods?labelSelector=glusterfs-node: dial tcp 172.31.0.1:443: getsockopt: connection refused
[kubeexec] ERROR 2018/08/03 09:42:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:299: Failed to get list of pods
[cmdexec] ERROR 2018/08/03 09:42:01 /src/github.com/heketi/heketi/executors/cmdexec/peer.go:76: Failed to get list of pods
[heketi] INFO 2018/08/03 09:42:01 Periodic health check status: node 2cec134709d4eedd115df888a98dfc99 up=false
[cmdexec] INFO 2018/08/03 09:42:01 Check Glusterd service status in node dhcp46-174.lab.eng.blr.redhat.com
[kubeexec] ERROR 2018/08/03 09:42:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:298: Get https://172.31.0.1:443/api/v1/namespaces/glusterfs/pods?labelSelector=glusterfs-node: dial tcp 172.31.0.1:443: getsockopt: connection refused
[kubeexec] ERROR 2018/08/03 09:42:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:299: Failed to get list of pods
[cmdexec] ERROR 2018/08/03 09:42:01 /src/github.com/heketi/heketi/executors/cmdexec/peer.go:76: Failed to get list of pods
[heketi] INFO 2018/08/03 09:42:01 Periodic health check status: node 406044bb661bef3017c56dd25fdb01b4 up=false
```

Humble, agreed. To me it may be more than just the gluster components of the cluster. This heketi instance does not appear to be able to connect to the k8s API to determine what pods to talk to. Can you curl from inside the heketi pod to https://172.31.0.1:443 ?
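The suggested connectivity check can be sketched as follows. The namespace (`glusterfs`) and the API service IP (`172.31.0.1:443`) come from the logs above; the heketi pod-name lookup is an assumption about this deployment, and the commands require access to the affected cluster:

```shell
# Locate the heketi pod in the glusterfs namespace
# (matching on "heketi" in the pod name is an assumption for this deployment)
HEKETI_POD=$(oc get pods -n glusterfs -o name | grep heketi | head -n1)

# From inside the heketi pod, probe the Kubernetes API service IP seen in the errors.
# Any HTTP status code (even 401/403) means the endpoint is reachable and the problem
# is elsewhere; "connection refused" reproduces the failure heketi is logging.
oc exec -n glusterfs "${HEKETI_POD#pod/}" -- \
    curl -k -sS --connect-timeout 5 -o /dev/null -w '%{http_code}\n' https://172.31.0.1:443
```

If the probe fails from inside the pod but the API responds from the node itself, that points at pod networking or the kubernetes service endpoints rather than heketi or gluster.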