Description of problem:
When there is a running docker registry pod and a terminated one (for whatever reason), the openshift diagnostic ClusterRegistry mistakes these for two working pods and reports a false error about image store consistency. This error message is unwanted.

Component: Origin / CLI

Version-Release number of selected component (if applicable):
oc v1.1.1-186-gec8fe7e
kubernetes v1.1.0-origin-1107-g4c8e6f4

Steps to Reproduce:
1. Log in to openshift.
2. Deploy the registry pod with a wrong deployer image in the dc, e.g. attempt to deploy with the OSE image openshift3/ose-docker-registry:v3.1.1.6 on Origin; this leaves the docker registry pod in Terminating state.
3. Edit the dc with the correct deployer image for Origin: openshift/origin-docker-registry.
4. Run "oc get po -n default"; there is now a running docker registry pod and a terminated one:
NAME                      READY     STATUS        RESTARTS   AGE
docker-registry-1-kshi4   0/1       Terminating   0          1h
docker-registry-2-o3094   1/1       Running       0          1h
5. Run "openshift ex diagnostics".

Actual results:
The ClusterRegistry diagnostic reports a false error about image store consistency, because it mistakes the two pods from step 4 for two working pods:

[Note] Running diagnostic: ClusterRegistry
       Description: Check that there is a working Docker registry

ERROR: [DClu1007 from diagnostic ClusterRegistry@openshift/origin/pkg/diagnostics/cluster/registry.go:209]
       The "docker-registry" service has multiple associated pods each using ephemeral storage.
       These are likely to have inconsistent stores of images. Builds and deployments
       that use images from the registry may fail sporadically. Use a single registry or
       add a shared storage volume to the registries.

Expected results:
No false error from the diagnostic; a Terminating pod should not be counted as a working registry pod.

Additional info:
The rc of the docker registry was 1 the whole time.
For the purposes of this check (DClu1007), the diagnostic should be changed to exclude pods that are not Ready.
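A minimal sketch of the proposed fix, filtering on the pod Ready condition. The types here are simplified stand-ins for illustration, not the actual Kubernetes API structs that registry.go would use (the real diagnostic would check kapi.Pod's PodReady condition):

```go
package main

import "fmt"

// PodCondition is a simplified stand-in for the Kubernetes API type.
type PodCondition struct {
	Type   string // e.g. "Ready"
	Status string // "True", "False", or "Unknown"
}

// Pod is a simplified stand-in carrying only what this check needs.
type Pod struct {
	Name       string
	Conditions []PodCondition
}

// isReady reports whether the pod has a Ready condition with status "True".
func isReady(pod Pod) bool {
	for _, c := range pod.Conditions {
		if c.Type == "Ready" && c.Status == "True" {
			return true
		}
	}
	return false
}

// readyPods drops pods that are not Ready, so a Terminating pod
// (Ready=False) is not counted as a working registry instance.
func readyPods(pods []Pod) []Pod {
	var out []Pod
	for _, p := range pods {
		if isReady(p) {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	// The two pods from step 4 of the reproduction.
	pods := []Pod{
		{Name: "docker-registry-1-kshi4", Conditions: []PodCondition{{Type: "Ready", Status: "False"}}},
		{Name: "docker-registry-2-o3094", Conditions: []PodCondition{{Type: "Ready", Status: "True"}}},
	}
	// Only the Running pod survives the filter, so DClu1007 would see
	// a single registry pod and stay quiet.
	for _, p := range readyPods(pods) {
		fmt.Println(p.Name)
	}
}
```

With this filter in place, the diagnostic would count one registry pod in the scenario above and the ephemeral-storage warning would not fire.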