Bug 1653563 - [Next_gen_installer]Cannot go through oc new-app process for cannot communicate with integrated registry
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Severity: high
Priority: high
Target Milestone: ---
Target Release: 4.1.0
Assignee: Dan Mace
QA Contact: Meng Bo
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-11-27 06:37 UTC by Wenjing Zheng
Modified: 2019-06-04 10:41 UTC
5 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:41:04 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:0758 None None None 2019-06-04 10:41:09 UTC

Description Wenjing Zheng 2018-11-27 06:37:31 UTC
Description of problem:
Cannot go through oc new-app process with default settings:
$ oc get builds
NAME        TYPE      FROM      STATUS                         STARTED   DURATION
ruby-ex-1   Source    Git       New (InvalidOutputReference)             


Version-Release number of selected component (if applicable):
# bin/openshift-install version
bin/openshift-install v0.4.0-10-ge15d801ad69481da18d409bec5fa1c7bd7998f3a
Terraform v0.11.8


How reproducible:
always

Steps to Reproduce:
1.Create a project
2.oc new-app ruby:2.5~https://github.com/openshift/ruby-ex
3.

Actual results:
$ oc get builds
NAME        TYPE      FROM      STATUS                         STARTED   DURATION
ruby-ex-1   Source    Git       New (InvalidOutputReference)             


Expected results:
Build should be triggered.

Additional info:
When pushing directly to a DockerImage with a Docker Hub push secret added to the buildconfig, the New (InvalidOutputReference) status disappears.
Here is my buildconfig:
$ oc describe bc ruby-ex
Name:        ruby-ex
Namespace:    wzheng1
Created:    3 hours ago
Labels:        app=ruby-ex
Annotations:    openshift.io/generated-by=OpenShiftNewApp
Latest Version:    3

Strategy:    Source
URL:        https://github.com/sclorg/ruby-ex.git
From Image:    DockerImage docker.io/centos/ruby-25-centos7
Output to:    DockerImage docker.io/wzheng/ruby-ex:latest
Push Secret:    dockerhub
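The workaround above can be sketched as the following command sequence. This is a hypothetical reconstruction, not the exact commands used: the Docker Hub account name "wzheng" and the secret name "dockerhub" are taken from the buildconfig output above, and the credential placeholder must be filled in.

```shell
# Hedged sketch of the workaround: push build output directly to Docker Hub
# instead of the (broken) internal registry.

# Create a push secret from Docker Hub credentials (secret name "dockerhub",
# matching the buildconfig above; replace <password> with real credentials)
oc create secret docker-registry dockerhub \
  --docker-server=docker.io \
  --docker-username=wzheng \
  --docker-password=<password>

# Point the build output at an external DockerImage and attach the push secret
oc patch bc/ruby-ex --type=merge -p '{"spec":{"output":{
  "to":{"kind":"DockerImage","name":"docker.io/wzheng/ruby-ex:latest"},
  "pushSecret":{"name":"dockerhub"}}}}'

# Trigger a new build against the patched output
oc start-build ruby-ex
```

With the output pointed at an external registry, the build no longer needs to resolve the internal registry hostname, which is why the InvalidOutputReference status disappears.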

Comment 2 Ben Parees 2018-11-27 15:08:30 UTC
Does your cluster have a running internal registry?

What cloud platform are you running on? (Only AWS gets a running registry automatically today; on any other platform you'd have to configure storage for the registry.)
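The checks Ben is asking about can be run with commands along these lines. A hedged checklist, assuming the OCP 4.x operator and namespace names; it confirms the internal registry is deployed and has storage before debugging the build itself:

```shell
# Is the image registry operator reporting healthy?
oc get clusteroperator image-registry

# Is the registry pod actually running?
oc get pods -n openshift-image-registry

# Which storage backend is configured? (s3 is set up automatically on AWS)
oc get configs.imageregistry.operator.openshift.io cluster \
  -o jsonpath='{.spec.storage}'
```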

Comment 3 Ben Parees 2018-11-27 16:50:38 UTC
Never mind, it appears this has regressed such that even when the registry is deployed, internalRegistryHostname is not being set properly.

Comment 5 Wenjing Zheng 2018-11-28 09:55:46 UTC
Yes, we use AWS for the next-gen install; the storage backend is S3 by default and the registry pod is running.

Comment 6 Ben Parees 2018-11-28 15:40:23 UTC
new PR w/ fix is https://github.com/openshift/cluster-image-registry-operator/pull/92

Comment 7 Ben Parees 2018-12-03 14:40:28 UTC
This is just a dummy pull reference to make the bot happy: https://github.com/openshift/origin/pull/1

Comment 8 Wenjing Zheng 2018-12-04 06:19:03 UTC
Checked with the latest version of the next-gen install on an AWS cluster: the build completes successfully, but the pod fails with an ImagePullBackOff error:
registry.svc.ci.openshift.org/openshift/origin-v4.0-2018-12-04-043233@sha256:c8c110b8733d0d352ddc5fe35ba9eeac913b7609c2c9c778586f2bb74f281681

Events:
  Type     Reason     Age               From                                   Message
  ----     ------     ----              ----                                   -------
  Normal   Scheduled  23s               default-scheduler                      Successfully assigned wzheng1/ruby-ex-1-k89kq to ip-10-0-173-144.ec2.internal
  Normal   BackOff    20s               kubelet, ip-10-0-173-144.ec2.internal  Back-off pulling image "image-registry.openshift-image-registry.svc:5000/wzheng1/ruby-ex@sha256:1d97235d3526a90ceef2849e172c65d7080400a38d028c0f7b3925d44106c6ed"
  Warning  Failed     20s               kubelet, ip-10-0-173-144.ec2.internal  Error: ImagePullBackOff
  Normal   Pulling    6s (x2 over 21s)  kubelet, ip-10-0-173-144.ec2.internal  pulling image "image-registry.openshift-image-registry.svc:5000/wzheng1/ruby-ex@sha256:1d97235d3526a90ceef2849e172c65d7080400a38d028c0f7b3925d44106c6ed"
  Warning  Failed     6s (x2 over 21s)  kubelet, ip-10-0-173-144.ec2.internal  Failed to pull image "image-registry.openshift-image-registry.svc:5000/wzheng1/ruby-ex@sha256:1d97235d3526a90ceef2849e172c65d7080400a38d028c0f7b3925d44106c6ed": rpc error: code = Unknown desc = pinging docker registry returned: Get https://image-registry.openshift-image-registry.svc:5000/v2/: dial tcp: lookup image-registry.openshift-image-registry.svc on 10.0.0.2:53: no such host
  Warning  Failed     6s (x2 over 21s)  kubelet, ip-10-0-173-144.ec2.internal  Error: ErrImagePull

Comment 9 Ben Parees 2018-12-04 14:45:28 UTC
The image pull issue is due to cluster DNS not being set up properly, so the nodes cannot resolve the internal registry service name. That is being fixed by the networking team, so transferring the bug there.
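The DNS symptom from comment 8 can be reproduced with checks along these lines. A hedged sketch: the node name is taken from the events above, and the debug-pod image is an assumption.

```shell
# Resolve the registry service name from inside the cluster DNS domain
oc run dns-test --rm -it --restart=Never --image=busybox -- \
  nslookup image-registry.openshift-image-registry.svc

# Check from the node itself, since the kubelet (not a pod) is the failing
# resolver in the events above
oc debug node/ip-10-0-173-144.ec2.internal -- \
  chroot /host getent hosts image-registry.openshift-image-registry.svc
```

If the in-cluster lookup succeeds but the node-level lookup fails, that matches the diagnosis here: pods resolve the service via cluster DNS, but the node's resolver (10.0.0.2:53 in the events) knows nothing about the `*.svc` domain.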

Comment 10 Wenjing Zheng 2018-12-07 09:01:44 UTC
This is not a test blocker now; I will verify this bug.

Comment 11 Wenjing Zheng 2018-12-07 09:25:06 UTC
Verified with the version below:
$ oc get clusterversion version 
NAME      VERSION                           AVAILABLE   PROGRESSING   SINCE     STATUS
version   4.0.0-0.alpha-2018-12-07-060250   True        False         1h        Cluster version is 4.0.0-0.alpha-2018-12-07-060250

Comment 14 errata-xmlrpc 2019-06-04 10:41:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

