Red Hat Bugzilla – Attachment 1485230 Details for
Bug 1631481
Upgrade fails because monitoring fails to deploy
pod output
monitoring_pod_fail.txt (text/plain), 6.89 KB, created by Michael Gugino on 2018-09-20 16:47:17 UTC
[root@fedora1 ~]# oc get pods --all-namespaces
NAMESPACE                           NAME                                          READY   STATUS             RESTARTS   AGE
default                             docker-registry-2-cgmqf                       1/1     Running            0          16m
default                             registry-console-1-hg4w6                      1/1     Running            2          53m
default                             router-2-bd47x                                1/1     Running            0          16m
glusterfs                           glusterblock-storage-provisioner-dc-1-d27q9   1/1     Running            1          55m
glusterfs                           glusterfs-storage-46f5t                       1/1     Running            0          1h
glusterfs                           glusterfs-storage-87rd7                       1/1     Running            0          1h
glusterfs                           glusterfs-storage-tsxpf                       1/1     Running            0          19m
glusterfs                           heketi-storage-1-xsftj                        1/1     Running            3          55m
kube-service-catalog                apiserver-vzt7w                               1/1     Running            0          16m
kube-service-catalog                controller-manager-4prh6                      1/1     Running            0          16m
kube-system                         master-api-fedora1.mguginolocal.com           1/1     Running            0          19m
kube-system                         master-controllers-fedora1.mguginolocal.com   1/1     Running            0          19m
kube-system                         master-etcd-fedora1.mguginolocal.com          1/1     Running            0          19m
openshift-ansible-service-broker    asb-1-8pglp                                   1/1     Running            0          15m
openshift-console                   console-5677c7c58d-bpc2f                      1/1     Running            0          17m
openshift-monitoring                cluster-monitoring-operator-5cf8fccc6-9wqfj   1/1     Running            0          15m
openshift-monitoring                prometheus-operator-6c9fddd47f-qdhz8          0/1     CrashLoopBackOff   7          14m
openshift-node                      sync-45mv4                                    1/1     Running            1          21m
openshift-node                      sync-pnpwn                                    1/1     Running            0          21m
openshift-node                      sync-zmq8b                                    1/1     Running            0          21m
openshift-sdn                       ovs-h6bll                                     1/1     Running            0          38m
openshift-sdn                       ovs-m7gqr                                     1/1     Running            1          39m
openshift-sdn                       ovs-v6t54                                     1/1     Running            0          41m
openshift-sdn                       sdn-bhvx8                                     1/1     Running            0          41m
openshift-sdn                       sdn-kxlwc                                     1/1     Running            0          38m
openshift-sdn                       sdn-mf62s                                     1/1     Running            1          41m
openshift-template-service-broker   apiserver-dlvv4                               1/1     Running            0          15m
openshift-web-console               webconsole-7df4f9f689-fvnv9                   1/1     Running            0          18m

[root@fedora1 ~]# oc describe prometheus-operator-6c9fddd47f-qdhz8 -n openshift-monitoring
error: the server doesn't have a resource type "prometheus-operator-6c9fddd47f-qdhz8"

[root@fedora1 ~]# oc describe pod prometheus-operator-6c9fddd47f-qdhz8 -n openshift-monitoring
Name:               prometheus-operator-6c9fddd47f-qdhz8
Namespace:          openshift-monitoring
Priority:           0
PriorityClassName:  <none>
Node:               fedora3.mguginolocal.com/192.168.124.169
Start Time:         Thu, 20 Sep 2018 16:16:58 +0000
Labels:             k8s-app=prometheus-operator
                    pod-template-hash=2759888039
Annotations:        openshift.io/scc=restricted
Status:             Running
IP:                 10.129.0.8
Controlled By:      ReplicaSet/prometheus-operator-6c9fddd47f
Containers:
  prometheus-operator:
    Container ID:  docker://93bcb808c7f9b95c315253d29e962c70eea397037b41542bdc55b1d12a821952
    Image:         quay.io/coreos/prometheus-operator:v0.22.0
    Image ID:      docker-pullable://quay.io/coreos/prometheus-operator@sha256:96541fa4ea179ba11b46be66e6b2beb4bf6781140fed937b25fa58a134cd3386
    Port:          8080/TCP
    Host Port:     0/TCP
    Args:
      --kubelet-service=kube-system/kubelet
      -logtostderr=true
      --config-reloader-image=quay.io/coreos/configmap-reload:v0.0.1
      --prometheus-config-reloader=quay.io/coreos/prometheus-config-reloader:v0.22.0
      --namespace=openshift-monitoring
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 20 Sep 2018 16:28:08 +0000
      Finished:     Thu, 20 Sep 2018 16:28:08 +0000
    Ready:          False
    Restart Count:  7
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from prometheus-operator-token-vr8jw (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  prometheus-operator-token-vr8jw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-operator-token-vr8jw
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  beta.kubernetes.io/os=linux
Tolerations:     <none>
Events:
  Type     Reason     Age                From                               Message
  ----     ------     ----               ----                               -------
  Normal   Scheduled  14m                default-scheduler                  Successfully assigned openshift-monitoring/prometheus-operator-6c9fddd47f-qdhz8 to fedora3.mguginolocal.com
  Normal   Pulling    14m                kubelet, fedora3.mguginolocal.com  pulling image "quay.io/coreos/prometheus-operator:v0.22.0"
  Normal   Pulled     14m                kubelet, fedora3.mguginolocal.com  Successfully pulled image "quay.io/coreos/prometheus-operator:v0.22.0"
  Normal   Created    13m (x5 over 14m)  kubelet, fedora3.mguginolocal.com  Created container
  Normal   Pulled     13m (x4 over 14m)  kubelet, fedora3.mguginolocal.com  Container image "quay.io/coreos/prometheus-operator:v0.22.0" already present on machine
  Normal   Started    13m (x5 over 14m)  kubelet, fedora3.mguginolocal.com  Started container
  Warning  BackOff    4m (x46 over 14m)  kubelet, fedora3.mguginolocal.com  Back-off restarting failed container
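Note that the first `oc describe` invocation above fails because `describe` requires a resource type ("pod") before the name. Even the corrected `oc describe pod` output only shows *that* the container keeps exiting with code 1, not *why*. A minimal sketch of the usual follow-up for a CrashLoopBackOff (the `collect_pod_diagnostics` helper name is hypothetical, not part of `oc`; the commands themselves are standard `oc`/`kubectl` subcommands):

```shell
#!/bin/sh
# Sketch: gather the standard diagnostics for a crash-looping pod.
# collect_pod_diagnostics is an illustrative helper, not an oc command.
collect_pod_diagnostics() {
    ns="$1"
    pod="$2"
    # "describe" needs the resource type ("pod") before the name.
    oc describe pod "$pod" -n "$ns"
    # --previous prints logs from the last *terminated* container
    # instance, i.e. the run that exited with code 1.
    oc logs "$pod" -n "$ns" --previous
    # Events scoped to just this pod (the BackOff warnings seen above).
    oc get events -n "$ns" --field-selector "involvedObject.name=$pod"
}

# Example, against the pod from the attachment:
# collect_pod_diagnostics openshift-monitoring prometheus-operator-6c9fddd47f-qdhz8
```

The `--previous` log is the key piece missing from the attachment: it captures the stderr of the failed run rather than the (empty) current restart attempt.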