Bug 1426534 - Failed to deploy logging pod via "oc cluster up"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: oc
Version: 3.5.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Cesar Wong
QA Contact: Dongbo Yan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-02-24 08:31 UTC by Dongbo Yan
Modified: 2017-07-24 14:11 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-04-12 19:13:42 UTC
Target Upstream Version:
Embargoed:




Links
  System: Red Hat Product Errata
  ID: RHBA-2017:0884
  Private: No
  Priority: normal
  Status: SHIPPED_LIVE
  Summary: Red Hat OpenShift Container Platform 3.5 RPM Release Advisory
  Last Updated: 2017-04-12 22:50:07 UTC

Description Dongbo Yan 2017-02-24 08:31:10 UTC
Description of problem:
Failed to deploy logging pod via "oc cluster up"

Version-Release number of selected component (if applicable):
oc v3.5.0.33
kubernetes v1.5.2+43a9be4
features: Basic-Auth GSSAPI Kerberos SPNEGO

How reproducible:
Always

Steps to Reproduce:
1. Create an OpenShift cluster with metrics and logging via "oc cluster up":
$ sudo oc cluster up --image=registry.access.redhat.com/openshift3/ose --version=v3.4 --metrics=true --logging=true
2. Wait for the cluster to come up, then check the logging pods (see the diagnostic sketch after these steps):
$ sudo oc get pod -n logging
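One way to see exactly which image the deployer pod is trying to pull (a diagnostic sketch, not from the original report; the jsonpath expression is illustrative):
$ sudo oc get pods -n logging -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'
On affected builds this should show the deployer pod referencing registry.access.redhat.com/openshift3/ose-logging-deployment:v3.4.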

Actual results:
$ sudo oc get all -n logging
NAME                        READY     STATUS             RESTARTS   AGE
po/logging-deployer-0clye   0/1       ImagePullBackOff   0          4m

Expected results:
The logging pods are running.

Additional info:
$ sudo oc get event -n logging
LASTSEEN   FIRSTSEEN   COUNT     NAME                     KIND      SUBOBJECT                   TYPE      REASON             SOURCE                    MESSAGE
5m         5m          5         logging-deployer-0clye   Pod                                   Warning   FailedScheduling   {default-scheduler }      no nodes available to schedule pods
5m         5m          1         logging-deployer-0clye   Pod                                   Normal    Scheduled          {default-scheduler }      Successfully assigned logging-deployer-0clye to 10.66.131.231
1m         5m          5         logging-deployer-0clye   Pod       spec.containers{deployer}   Normal    Pulling            {kubelet 10.66.131.231}   pulling image "registry.access.redhat.com/openshift3/ose-logging-deployment:v3.4"
1m         5m          5         logging-deployer-0clye   Pod       spec.containers{deployer}   Warning   Failed             {kubelet 10.66.131.231}   Failed to pull image "registry.access.redhat.com/openshift3/ose-logging-deployment:v3.4": image pull failed for registry.access.redhat.com/openshift3/ose-logging-deployment:v3.4, this may be because there are no credentials on this request.  details: (Error response from daemon: {"message":"unknown: Not Found"})
1m         5m          5         logging-deployer-0clye   Pod                                   Warning   FailedSync         {kubelet 10.66.131.231}   Error syncing pod, skipping: failed to "StartContainer" for "deployer" with ErrImagePull: "image pull failed for registry.access.redhat.com/openshift3/ose-logging-deployment:v3.4, this may be because there are no credentials on this request.  details: (Error response from daemon: {\"message\":\"unknown: Not Found\"})"

3s        5m        17        logging-deployer-0clye   Pod       spec.containers{deployer}   Normal    BackOff      {kubelet 10.66.131.231}   Back-off pulling image "registry.access.redhat.com/openshift3/ose-logging-deployment:v3.4"
3s        5m        17        logging-deployer-0clye   Pod                                   Warning   FailedSync   {kubelet 10.66.131.231}   Error syncing pod, skipping: failed to "StartContainer" for "deployer" with ImagePullBackOff: "Back-off pulling image \"registry.access.redhat.com/openshift3/ose-logging-deployment:v3.4\""
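The events above point at a nonexistent image rather than a credentials problem. One way to confirm the repository name does not exist in the registry (a sketch using skopeo, which is not part of this report):
$ skopeo inspect docker://registry.access.redhat.com/openshift3/ose-logging-deployment:v3.4
This should fail with a "not found"-style error, while the same command against openshift3/ose-logging-deployer:v3.4 should print the image metadata.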

Comment 1 Cesar Wong 2017-02-24 13:58:55 UTC
Troy, are we going to tag logging deployer images for v3.4 with the ose- prefix? Or is that only in 3.5?

Comment 2 Troy Dawson 2017-02-24 14:12:38 UTC
All 3.2, 3.3, and 3.4 logging and metrics images are available under both the openshift3/ose-{logging,metrics}-<name> and the openshift3/{logging,metrics}-<name> names.

The problem in this bug is that the client is trying to pull down openshift3/ose-logging-deployment instead of openshift3/ose-logging-deployer.

The image has been named logging-deployer since OCP 3.3.
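Until a fixed client is available, a possible local workaround (an assumption, not suggested in this report) is to pre-pull the correctly named image and retag it under the name the deployer pod asks for; since the pod references a fixed tag (v3.4), the kubelet's default IfNotPresent pull policy should then use the local copy:
$ sudo docker pull registry.access.redhat.com/openshift3/ose-logging-deployer:v3.4
$ sudo docker tag registry.access.redhat.com/openshift3/ose-logging-deployer:v3.4 registry.access.redhat.com/openshift3/ose-logging-deployment:v3.4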

Comment 3 Cesar Wong 2017-03-01 13:09:11 UTC
Fix merged in origin with PR https://github.com/openshift/origin/pull/13151

Comment 4 Cesar Wong 2017-03-01 13:34:57 UTC
Actually needs a 1.5 backport ... PR submitted:
https://github.com/openshift/origin/pull/13165

1.4 backport:
https://github.com/openshift/origin/pull/13166

Comment 5 Cesar Wong 2017-03-02 14:08:31 UTC
The 1.5 backport has merged:
https://github.com/openshift/origin/pull/13165

Comment 6 Troy Dawson 2017-03-03 17:57:13 UTC
This has been merged into OCP and is available in OCP v3.5.0.38 or newer.
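To check whether a given client already includes the fix (a quick sketch based on the version noted above):
$ oc version
The first line should report oc v3.5.0.38 or newer.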

Comment 8 Dongbo Yan 2017-03-06 07:51:48 UTC
Verified

$ oc version
oc v3.5.0.39
kubernetes v1.5.2+43a9be4

$ sudo oc cluster up --image=registry.access.redhat.com/openshift3/ose --version=v3.4 --metrics=true --logging=true

The logging pods now deploy successfully:
$ sudo oc get pod -n logging
NAME                          READY     STATUS      RESTARTS   AGE
logging-curator-1-z1435       1/1       Running     0          8m
logging-deployer-qz4ef        0/1       Completed   0          8m
logging-es-83jmlwhp-1-thcp3   1/1       Running     0          8m
logging-fluentd-rxx2r         1/1       Running     0          8m
logging-kibana-1-55afe        2/2       Running     0          8m

Comment 10 errata-xmlrpc 2017-04-12 19:13:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:0884

