Bug 1526147 - [DOCS] apb push, test, run fails when executed against a remotely hosted cluster
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Documentation
Version: 3.7.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.10.0
Assignee: Alex Dellapenta
QA Contact: sunzhaohua
Docs Contact: Vikram Goyal
URL:
Whiteboard:
Duplicates: 1519193 1537599 1541903 1557462 (view as bug list)
Depends On:
Blocks: 1533318 1537599 1541903
 
Reported: 2017-12-14 20:34 UTC by Wolfgang Kulhanek
Modified: 2021-08-02 21:41 UTC (History)
CC List: 16 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-08-08 19:21:48 UTC
Target Upstream Version:
Embargoed:


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker AEROGEAR-3286 0 Critical Resolved [Mobile CI|CD]Can not push to docker internal registry 2020-05-29 06:56:07 UTC
Red Hat Knowledge Base (Solution) 3309411 0 None None None 2018-01-04 10:12:09 UTC

Description Wolfgang Kulhanek 2017-12-14 20:34:50 UTC
Description of problem:

When trying to push an Ansible Playbook Bundle (APB) container from a development workstation to an Ansible Service Broker, the tooling discovers the Docker registry to push to. However, it discovers the service IP (172.30....) instead of the route. Consequently, it is not possible to push a built APB container from a workstation that is not part of the OpenShift SDN.

This should be changed to use the registry route rather than the service.

Version-Release number of selected component (if applicable):
Latest in the 3.7 OCP repos (yum -y install apb)

How reproducible:


Steps to Reproduce:
From a RHEL Machine (with OCP Subscription):
1. yum -y install apb
2. git clone https://github.com/ansibleplaybookbundle/etherpad-apb
3. cd etherpad-apb
4. apb build
5. apb push --openshift --broker https://asb-1338-openshift-ansible-service-broker.apps.rdu.example.opentlc.com/ansible-service-broker (replacing with the correct ASB Route)



Actual results (only output from apb push is shown):
ocplab-07ef ~/etherpad-apb (master) $ apb push --openshift --broker https://asb-1338-openshift-ansible-service-broker.apps.rdu.example.opentlc.com/ansible-service-broker
version: 1.0
name: etherpad-apb
description: Note taking web application
bindable: True
async: optional
metadata:
  documentationUrl: https://github.com/ether/etherpad-lite/wiki
  imageUrl: https://translatewiki.net/images/thumb/6/6f/Etherpad_lite.svg/200px-Etherpad_lite.svg.png
  dependencies: ['docker.io/mariadb:latest', 'docker.io/tvelocity/etherpad-lite:latest']
  displayName: Etherpad (APB)
  longDescription: An apb that deploys Etherpad Lite
  providerDisplayName: "Red Hat, Inc."
plans:
  - name: default
    description: A single etherpad application with no DB
    free: true
    metadata:
      displayName: Default
      longDescription: This plan provides a single Etherpad application with no database
      cost: $0.00
    parameters:
      - name: mariadb_name
        required: true
        default: etherpad
        type: string
        title: MariaDB Database Name
      - name: mariadb_user
        required: true
        default: etherpad
        title: MariaDB User
        type: string
        maxlength: 63
      - name: mariadb_password
        default: admin
        type: string
        description: A random alphanumeric string if left blank
        title: MariaDB Password
      - name: mariadb_root_password
        default: admin
        type: string
        description: root password for mariadb
        title: Root Password

/usr/lib/python2.7/site-packages/urllib3/connectionpool.py:852: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  InsecureRequestWarning)
Found registry IP at: 172.30.27.190:5000
Building image with the tag: 172.30.27.190:5000/openshift/etherpad-apb
Error accessing the docker API. Is the daemon running?
Exception occurred! 500 Server Error: Internal Server Error ("Get https://172.30.27.190:5000/v1/users/: dial tcp 172.30.27.190:5000: i/o timeout")



Expected results:
[...]
Found registry at: https://docker-registry-default.apps.rdu.example.opentlc.com


Additional info:
This same sequence of commands works when executed on the master, which is of course unacceptable; developers will never get access to the master.

Comment 1 Zhang Cheng 2017-12-15 02:53:32 UTC
This issue appears to be a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1519193.

Comment 4 John Matthews 2018-01-05 16:55:28 UTC
Dylan please help answer comment #3

Comment 5 Dylan Murray 2018-01-05 17:07:23 UTC
Hey Gajanan,

There is a workaround for this, and I am in the process of submitting some patches upstream to make this easier. We have a similar bug filed here: https://bugzilla.redhat.com/show_bug.cgi?id=1519193.

To work around this, the customer can take note of the registry route:
$ oc get route docker-registry -n default
docker_registry.default.<cluster_ip>.nip.io

Then build the container image with `apb build`, specifying the proper route:
$ apb build --tag <registry_route>/openshift/<my-apb>

$ docker login <registry_route> -u unused -p $(oc whoami -t)
$ docker push <registry_route>/openshift/<my-apb>

$ apb bootstrap

This will push the image to the local OpenShift Container Registry and then force the OpenShift Ansible Broker to bootstrap and relist the Service Catalog.

Comment 8 Wolfgang Kulhanek 2018-01-09 14:13:45 UTC
Good start for a workaround. However, it fails at docker login because my cluster has self-signed certificates, which a lot of development clusters will have.

docker login docker-registry-default.apps.wk.example.opentlc.com -u unused -p $(oc whoami -t)
Error response from daemon: Get https://docker-registry-default.apps.wk.example.opentlc.com/v1/users/: x509: certificate signed by unknown authority

Comment 9 Dylan Murray 2018-01-09 19:01:07 UTC
Wolfgang,

Is it out of the question for the user to add the docker-registry route to the list of insecure registries in /etc/sysconfig/docker? This is really more an issue with the Docker client than with the APB tooling. The tooling (even if we weren't using the workaround) expects all of the registry configuration to be done in the Docker config files.

I tested adding the route to the insecure registry list (--insecure-registry docker-registry-default.apps.wk.example.opentlc.com in /etc/sysconfig/docker for you), and restarting Docker solved the problem. Please let me know if that approach doesn't work for you. Unfortunately it's our only workaround, since we are dependent on the Docker client.
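For reference, on hosts where Docker reads /etc/sysconfig/docker, the flag is appended to the OPTIONS line; a rough sketch (the registry hostname is the example route from this comment, and the other flags are placeholders that vary by install):

```
# /etc/sysconfig/docker
OPTIONS='--selinux-enabled --log-driver=journald --insecure-registry docker-registry-default.apps.wk.example.opentlc.com'
```

Then restart Docker (systemctl restart docker) for the change to take effect.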

Comment 10 Wolfgang Kulhanek 2018-01-09 19:31:02 UTC
Dylan, we are getting somewhere. :-) 

In the current Docker 1.12.6, though, the location is /etc/containers/registries.conf instead of /etc/sysconfig/docker. Regardless, that worked. :-)
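For reference, in the registries.conf format of that Docker version the same registry would be listed roughly like this (a sketch; the hostname is the example route from the comments above):

```
# /etc/containers/registries.conf (v1 format)
[registries.insecure]
registries = ['docker-registry-default.apps.wk.example.opentlc.com']
```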

Now, to push to .../openshift/..., the OpenShift user attempting the push seems to need cluster-admin authorization.

Also the example above needs the full broker URL to succeed:

apb bootstrap --broker https://asb-1338-openshift-ansible-service-broker.apps.wk.example.opentlc.com/ansible-service-broker

Finally, while the push and bootstrap succeed, `apb list` does not show the new APBs (I should have a mariadb-apb, not the rh- one, and a rocketchat-apb).

apb list --broker https://asb-1338-openshift-ansible-service-broker.apps.wk.example.opentlc.com/ansible-service-broker
ID                                NAME               DESCRIPTION
2c259ddd8059b9bc65081e07bf20058f  rh-mariadb-apb     Mariadb apb implementation
03b69500305d9859bb9440d9f9023784  rh-mediawiki-apb   Mediawiki123 apb implementation
73ead67495322cc462794387fa9884f5  rh-mysql-apb       Software Collections MySQL APB
d5915e05b253df421efe6e41fb6a66ba  rh-postgresql-apb  SCL PostgreSQL apb implementation

I feel this is something different altogether though.

Comment 11 Dylan Murray 2018-01-09 19:40:09 UTC
Wolfgang,

Yes, it is true that the user you use to log in to the registry needs to have cluster-admin permissions. The APB tooling is heavily geared towards a user with cluster-admin access. I also submitted a PR to this bug https://bugzilla.redhat.com/show_bug.cgi?id=1523252, which will allow you to do:

apb bootstrap --broker https://asb-1338-openshift-ansible-service-broker.apps.wk.example.opentlc.com

instead of the full route with the suffix.

I am willing to bet this is an issue with the local_openshift registry whitelist. By default, a cluster has an empty whitelist for this registry adapter. Please see https://github.com/openshift/ansible-service-broker/blob/master/docs/config.md#local-openshift-registry and note the `white_list` config value.

As a user with access to the ansible-service-broker namespace (or openshift-ansible-service-broker using openshift-ansible) you can do:

oc edit configmap broker-config -n ansible-service-broker and add the whitelist value so that all images ending in `-apb` will be bootstrapped. This should resolve your problem.
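A sketch of what the local_openshift registry section in the broker-config ConfigMap might look like with the whitelist set (based on the config docs linked above; the adapter name and surrounding keys vary by install):

```yaml
registry:
  - type: local_openshift
    name: localregistry
    namespaces:
      - openshift
    white_list:
      - ".*-apb$"
```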

Comment 12 Dylan Murray 2018-01-09 19:40:53 UTC
Also, I added a PR to openshift-docs https://github.com/openshift/openshift-docs/pull/6780 forever ago to add these instructions into the openshift-ansible installer docs. Hopefully the info in there helps.

Comment 14 Wolfgang Kulhanek 2018-01-09 19:52:29 UTC
Awesome! Got it running! I did add the whitelist to the local_openshift stanza. And sure enough bootstrap/list (and even deploy of the RocketChat apb) worked like a charm.

Question: rather than using just 'openshift' under 'namespaces' for local_openshift is it possible to use '*'? Or a setting that would scan every OpenShift project? For my own testing I could certainly add a project or two - but for a "real" developer environment I feel that might be a bit tedious.

Thanks for all your help getting this running!!

Comment 15 Dylan Murray 2018-01-09 20:02:15 UTC
Great!

Unfortunately we don't currently have that ability, but it seems trivial to add. I created a GitHub issue here: https://github.com/openshift/ansible-service-broker/issues/623. We default to the openshift namespace because it is unique in that it makes all of its images/imagestreams available to the entire cluster. A newly created project's images will not be globally accessible by default.

Regardless you bring up a valid use case so we can track that functionality in the open issue.

Comment 16 Dylan Murray 2018-01-11 20:20:31 UTC
https://github.com/ansibleplaybookbundle/ansible-playbook-bundle/pull/187

The above PR adds the ability to specify a registry route so that apb push can be used outside the SDN.

Comment 17 Dylan Murray 2018-01-19 13:08:58 UTC
Moving to ON_QA. The important thing to test is being able to specify --registry-route when pushing.

Comment 19 Dylan Murray 2018-01-22 14:04:24 UTC
Jian,

--registry-route is used if you do not have access to the `default` namespace where the OpenShift registry lives. This way you can push to the internal registry without relying on the IP address of the registry service. You are using it properly, but it looks like your Docker daemon is not running on your host. You need Docker installed where the APB tool is in order to use the Docker client to log in to the internal registry.

Please start the Docker daemon and try again.

Comment 20 Jian Zhang 2018-01-23 01:31:49 UTC
Dylan,

Sorry, I forgot to post my Docker running info. My docker daemon was running well. Like below:

[root@localhost hello-world-apb]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@localhost hello-world-apb]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@localhost hello-world-apb]# docker images
REPOSITORY                                                                                         TAG                 IMAGE ID            CREATED             SIZE
docker.io/ansibleplaybookbundle/apb-tools                                                          sprint143.1         79b05404db24        3 days ago          730.2 MB
docker.io/zjianbjz/hello-world-db-apb                                                              latest              3ad4a49a0bf9        6 days ago          672.4 MB

Comment 21 Jian Zhang 2018-01-23 01:38:51 UTC
#FYI

[root@localhost hello-world-apb]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@localhost hello-world-apb]# ps -elf | grep docker
4 S root     12730     1  0  80   0 - 124204 futex_ Jan22 ?       00:00:02 /usr/libexec/docker/docker-containerd-current --listen unix:///run/containerd.sock --shim /usr/libexec/docker/docker-containerd-shim-current --start-timeout 2m
4 S root     29579     1  1  80   0 - 145609 futex_ 09:37 ?       00:00:00 /usr/bin/dockerd-current --add-runtime oci=/usr/libexec/docker/docker-runc-current --default-runtime=oci --containerd /run/containerd.sock --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --selinux-enabled --log-driver=journald --insecure-registry 172.30.0.0/16 --insecure-registry asb-registry.usersys.redhat.com:5000
0 S root     29723 24258  0  80   0 - 29843 pipe_w 09:37 pts/1    00:00:00 grep --color=auto docker

Comment 22 Dylan Murray 2018-01-23 13:52:25 UTC
Jian,

Ah okay, I actually think the problem here is that you are using an https route. The --registry-route flag prefixes the image tag with the route, and Docker does not accept http:// or https:// in a tag. Hence:

Exception occurred! 500 Server Error: Internal Server Error ("Error parsing reference: "https://docker-registry-default.apps.0122-0tk.qe.rhcloud.com/openshift/hello-world-apb" is not a valid repository/tag")

If your registry is secured with TLS (by default in OpenShift it is a self-signed cert), then you must be sure to update your Docker configuration so that docker-registry-default.apps.0122-0tk.qe.rhcloud.com is added as an insecure registry.

These settings are in /etc/sysconfig/docker and you want to add the flag `--insecure-registry docker-registry-default.apps.0122-0tk.qe.rhcloud.com`.

Hope that helps. Please retest without the https:// prefix on the tag.
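As a quick illustration, the scheme can be stripped from the route value in the shell before it is passed to --registry-route (a sketch; the hostname is the example route from this comment, and the final apb command is commented out since it needs a live cluster):

```shell
# Docker image tags must not contain a URL scheme, so strip it first.
route="https://docker-registry-default.apps.0122-0tk.qe.rhcloud.com"
route="${route#http://}"    # drop a leading http:// if present
route="${route#https://}"   # drop a leading https:// if present
echo "$route"               # docker-registry-default.apps.0122-0tk.qe.rhcloud.com
# apb push --registry-route "$route"   # needs a live cluster; shown for context
```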

Comment 24 Dylan Murray 2018-01-25 07:31:40 UTC
Jian,

Please reread comment #22. The certificate warning is because you need to add the docker registry route as an insecure registry in the Docker config. We are currently tracking a way to avoid this step by using the native Docker client on the remote host, but the work is not done yet. Can you please retry with the proper Docker config?

Comment 27 Dylan Murray 2018-01-26 07:46:24 UTC
Jian,

Thanks for retesting! This looks more like the bug I would expect to see if the login is failing.

Comment 28 Dylan Murray 2018-01-26 07:49:27 UTC
I have changed this bug to describe the timeout errors seen when using `apb push` against a cluster that does not exist on the same host where the tool is run. This is seen in a Minishift environment as well. The proper way to solve this would be to connect to the remote host's docker daemon from the APB tooling the same way that Minishift is supported now. This will be difficult because we need to get the certs from the remote host. I am going to use this bug to track what changes are necessary to improve this process for a developer.

Comment 29 Dylan Murray 2018-01-26 14:23:52 UTC
*** Bug 1519193 has been marked as a duplicate of this bug. ***

Comment 31 Dylan Murray 2018-04-25 17:14:54 UTC
We have documented a workaround for working with remote clusters here: https://github.com/ansibleplaybookbundle/ansible-playbook-bundle/blob/master/docs/developers.md#alternative-to-using-apb-push

Instead of using `apb push`, the developer can follow this documented approach to populate their image onto the OpenShift cluster.

Comment 32 Dylan Murray 2018-04-25 17:23:15 UTC
Changing component to Documentation since we have properly documented the workaround here: https://github.com/ansibleplaybookbundle/ansible-playbook-bundle/blob/master/docs/developers.md#alternative-to-using-apb-push. We need this documented for the 3.10 release, showing how to populate the internal OpenShift registry with APB images using `oc new-app`.

Comment 33 Dylan Murray 2018-04-25 17:24:08 UTC
*** Bug 1537599 has been marked as a duplicate of this bug. ***

Comment 34 Dylan Murray 2018-04-25 17:24:12 UTC
*** Bug 1541903 has been marked as a duplicate of this bug. ***

Comment 35 Jian Zhang 2018-04-26 08:04:36 UTC
Dylan,

I followed your workaround but got nothing when running `oc get images | grep <bundle_name>`. I think we should change it to `oc get imagestreams -n openshift | grep <bundle_name>`. Details:

1) #oc new-app <path_to_bundle_source> --name <bundle_name> -n openshift
[jzhang@localhost hello-world-apb]$ oc new-app . --name jian -n openshift
--> Found Docker image f29c340 (15 hours old) from Docker Hub for "ansibleplaybookbundle/apb-base"

    * An image stream will be created as "apb-base:latest" that will track the source image
    * A Docker build using source code from git:ansibleplaybookbundle/hello-world-apb.git#master will be created
      * The resulting image will be pushed to image stream "jian:latest"
      * Every time "apb-base:latest" changes a new build will be triggered
      * WARNING: this source repository may require credentials.
                 Create a secret with your git credentials and use 'set build-secret' to assign it to the build config.
    * This image will be deployed in deployment config "jian"
    * The image does not expose any ports - if you want to load balance or send traffic to this component
      you will need to create a service with 'expose dc/jian --port=[port]' later
    * WARNING: Image "ansibleplaybookbundle/apb-base" runs as the 'root' user which may not be permitted by your cluster administrator

--> Creating resources ...
    imagestream "apb-base" created
    imagestream "jian" created
    buildconfig "jian" created
    deploymentconfig "jian" created
--> Success
    Build scheduled, use 'oc logs -f bc/jian' to track its progress.
    Run 'oc status' to view your app.

2) Got nothing:
[root@host-172-16-120-36 ~]# oc get images| grep jian

[root@host-172-16-120-36 ~]# oc get imagestreams -n openshift| grep jian
jian                                  docker-registry.default.svc:5000/openshift/jian

Comment 36 Dylan Murray 2018-04-26 18:07:41 UTC
Jian,

If the imagestream was created, that means the image was more than likely still building. You'll see the last line is "Build scheduled". You can watch the logs of the build; when it's completed, the image will be pushed to the internal registry, at which point `oc get images` will show the APB.

Comment 37 Dylan Murray 2018-04-26 18:13:12 UTC
This bug is being updated to reflect a documentation update for 3.10 that will explain the limitation of pushing images to a registry on a remote host. To work around this, a developer can use `oc new-app` as documented here: https://github.com/ansibleplaybookbundle/ansible-playbook-bundle/blob/master/docs/developers.md#alternative-to-using-apb-push.

The `apb run` and `apb test` commands directly depend on `apb push` working properly. This means that if a user wants to do something similar to `apb run`, they should instead use `oc new-app` to get the image onto the internal registry, then use `oc run` to deploy the pod manually.

`apb test` is really no different from `apb run`, except that it runs the specific action `test`, so it has the same limitations as `apb run`.

Comment 38 Dylan Murray 2018-04-30 12:56:47 UTC
This bug needs to include instructions on how to `oc run` an APB to work around `apb run` not working. I will include a document upstream and link it here.

Comment 39 Dylan Murray 2018-05-01 13:44:40 UTC
*** Bug 1541903 has been marked as a duplicate of this bug. ***

Comment 41 sunzhaohua 2018-05-03 10:14:57 UTC
Dylan,

I followed your workaround; I think `cat apb.yml | grep base64` should be changed to `cat apb.yml | base64`.
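For context, a minimal sketch of what the corrected command does, using a stand-in apb.yml (the real spec is the one embedded in the image's com.redhat.apb.spec label as base64):

```shell
# Stand-in spec file for illustration; the real apb.yml comes from `apb init`.
printf 'version: 1.0\nname: my-apb\n' > apb.yml

# Base64-encode the spec (the command is `base64`, not a grep pattern);
# `tr -d '\n'` joins the wrapped lines into the single-line form the label needs.
spec_b64=$(base64 apb.yml | tr -d '\n')
echo "$spec_b64"
```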

Comment 42 Dylan Murray 2018-05-03 11:16:23 UTC
sunzhaohua,

Thanks for catching that! I submitted a fix along with this PR: https://github.com/ansibleplaybookbundle/ansible-playbook-bundle/pull/281

Comment 43 sunzhaohua 2018-05-28 08:30:39 UTC
Dylan,

1) I hit the same problem as Jian in comment 35.

[szh@localhost my-doc-apb]$ oc new-app ./ --name my-doc -n openshift
--> Found Docker image 57595a3 (2 weeks old) from Docker Hub for "ansibleplaybookbundle/apb-base"

    * An image stream will be created as "apb-base:latest" that will track the source image
    * A Docker build using binary input will be created
      * The resulting image will be pushed to image stream "my-doc:latest"
      * A binary build was created, use 'start-build --from-dir' to trigger a new build
    * This image will be deployed in deployment config "my-doc"
    * The image does not expose any ports - if you want to load balance or send traffic to this component
      you will need to create a service with 'expose dc/my-doc --port=[port]' later
    * WARNING: Image "ansibleplaybookbundle/apb-base" runs as the 'root' user which may not be permitted by your cluster administrator

--> Creating resources ...
    imagestream "apb-base" created
    imagestream "my-doc" created
    buildconfig "my-doc" created
    deploymentconfig "my-doc" created
--> Success
    Build scheduled, use 'oc logs -f bc/my-doc' to track its progress.
    Run 'oc status' to view your app.

[szh@localhost my-doc-apb]$ oc logs -f bc/my-doc -n openshift
error: no builds found for "my-doc"

[szh@localhost my-doc-apb]$ curl -H "Authorization: Bearer $(oc whoami -t)" -k https://asb-1338-openshift-ansible-service-broker.apps.0528-l3l.qe.rhcloud.com/ansible-service-broker/v2/catalog | grep my-doc
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  3713    0  3713    0     0   3306      0 --:--:--  0:00:01 --:--:--  3309

2) In step3, "oc get route -n ansible-service-broker" should be changed to "oc get route -n openshift-ansible-service-broker"

Comment 44 Dylan Murray 2018-05-29 13:27:02 UTC
Yes for step 2 the namespace should be `openshift-ansible-service-broker`. This change should be in the docs that Alex submits, but upstream our namespace is `ansible-service-broker`.

I also do not see any errors in the `oc new-app` command, which tells me that a buildconfig was created. I am not sure why you are seeing "no builds found" on the next call. Can you give me more information about what is created after that call? You should see imagestreams for `apb-base` and `my-doc`. Can you return the output of the following:
$ oc get is --all-namespaces | grep my-doc
$ oc get bc --all-namespaces | grep my-doc
$ oc get images | grep apb-base
$ oc get images | grep my-doc

It's also possible that on your cluster the user is not allowed to create buildconfigs in the `openshift` namespace, but I did not have this problem on my cluster. Any more info you can give me would be helpful, because the actual `new-app` command succeeded and on my cluster it creates an associated buildconfig.

Comment 45 Alex Dellapenta 2018-05-29 21:19:22 UTC
https://github.com/openshift/openshift-docs/pull/9648

Comment 46 sunzhaohua 2018-05-30 07:54:48 UTC
The issues in comment 35 and comment 43 verified successfully using hello-world-apb (https://github.com/ansibleplaybookbundle/hello-world-apb).

It failed using my own APB.

1) use hello-world-apb
[szh@localhost hello-world-apb]$ oc new-app ./ --name hellooooo -n openshift
--> Found Docker image 486a293 (7 days old) from Docker Hub for "ansibleplaybookbundle/apb-base:canary"

    * An image stream will be created as "apb-base:canary" that will track the source image
    * A Docker build using source code from https://github.com/ansibleplaybookbundle/hello-world-apb.git#master will be created
      * The resulting image will be pushed to image stream "hellooooo:latest"
      * Every time "apb-base:canary" changes a new build will be triggered
    * This image will be deployed in deployment config "hellooooo"
    * The image does not expose any ports - if you want to load balance or send traffic to this component
      you will need to create a service with 'expose dc/hellooooo --port=[port]' later
    * WARNING: Image "ansibleplaybookbundle/apb-base:canary" runs as the 'root' user which may not be permitted by your cluster administrator

--> Creating resources ...
    imagestream "hellooooo" created
    buildconfig "hellooooo" created
    deploymentconfig "hellooooo" created
--> Success
    Build scheduled, use 'oc logs -f bc/hellooooo' to track its progress.
    Run 'oc status' to view your app.

[szh@localhost hello-world-apb]$ oc logs -f bc/hellooooo -n openshift
Cloning "https://github.com/ansibleplaybookbundle/hello-world-apb.git" ...
	Commit:	7132053ccc8ea64b856074247638cfde54feaaff (Skip last operation if not in cluster (#6))
	Author:	David Zager <dzager>
	Date:	Thu Apr 12 12:06:42 2018 -0400

Pulling image ansibleplaybookbundle/apb-base@sha256:a501e750921fad25f8ef3a498a72694722ad5b11457d989eaefacf854f44d41b ...
Pulled 0/14 layers, 0% complete
Pulled 1/14 layers, 7% complete
Pulled 2/14 layers, 16% complete
Pulled 3/14 layers, 25% complete
Pulled 4/14 layers, 34% complete
Pulled 5/14 layers, 41% complete
Pulled 6/14 layers, 49% complete
Pulled 7/14 layers, 57% complete
Pulled 8/14 layers, 59% complete
Pulled 9/14 layers, 66% complete
Pulled 10/14 layers, 73% complete
Pulled 11/14 layers, 80% complete
Pulled 12/14 layers, 88% complete
Pulled 13/14 layers, 95% complete
Pulled 14/14 layers, 100% complete
Extracting
Step 1/9 : FROM ansibleplaybookbundle/apb-base@sha256:a501e750921fad25f8ef3a498a72694722ad5b11457d989eaefacf854f44d41b
 ---> 486a2936c6a8
Step 2/9 : ENV "HTTP_PROXY" "http://file.rdu.redhat.com:3128" "HTTPS_PROXY" "http://file.rdu.redhat.com:3128" "NO_PROXY" ".centralci.eng.rdu2.redhat.com,.cluster.local,.svc,169.254.169.254,172.16.120.62,172.30.0.1,qe-zhsun-999-manualmaster-etcd-1,qe-zhsun-999-manualnrr-1" "http_proxy" "http://file.rdu.redhat.com:3128" "https_proxy" "http://file.rdu.redhat.com:3128" "no_proxy" ".centralci.eng.rdu2.redhat.com,.cluster.local,.svc,169.254.169.254,172.16.120.62,172.30.0.1,qe-zhsun-999-manualmaster-etcd-1,qe-zhsun-999-manualnrr-1"
 ---> Running in 150bb9c3aa88
 ---> f276df8b890f
Removing intermediate container 150bb9c3aa88
Step 3/9 : LABEL "com.redhat.apb.spec" "LS0tCnZlcnNpb246IDEuMApuYW1lOiBoZWxsby13b3JsZC1hcGIKZGVzY3JpcHRpb246IGRlcGxveXMgaGVsbG8td29ybGQgd2ViIGFwcGxpY2F0aW9uCmJpbmRhYmxlOiAiRmFsc2UiCmFzeW5jOiBvcHRpb25hbAptZXRhZGF0YToKICBkaXNwbGF5TmFtZTogSGVsbG8gV29ybGQgKEFQQikKICBsb25nRGVzY3JpcHRpb246CiAgICBBIHNhbXBsZSBBUEIgd2hpY2ggZGVwbG95cyBhIGNvbnRhaW5lcml6ZWQgSGVsbG8gV29ybGQgd2ViIGFwcGxpY2F0aW9uCiAgZGVwZW5kZW5jaWVzOiBbJ2RvY2tlci5pby9hbnNpYmxlcGxheWJvb2tidW5kbGUvaGVsbG8td29ybGQ6bGF0ZXN0J10KICBwcm92aWRlckRpc3BsYXlOYW1lOiAiUmVkIEhhdCwgSW5jLiIKcGxhbnM6CiAgLSBuYW1lOiBkZWZhdWx0CiAgICBkZXNjcmlwdGlvbjogQSBzYW1wbGUgQVBCIHdoaWNoIGRlcGxveXMgSGVsbG8gV29ybGQKICAgIGZyZWU6ICJUcnVlIgogICAgbWV0YWRhdGE6CiAgICAgIGRpc3BsYXlOYW1lOiBEZWZhdWx0CiAgICAgIGxvbmdEZXNjcmlwdGlvbjoKICAgICAgICBUaGlzIHBsYW4gZGVwbG95cyBhIFB5dGhvbiB3ZWIgYXBwbGljYXRpb24gZGlzcGxheWluZyBIZWxsbyBXb3JsZAogICAgICBjb3N0OiAkMC4wMAogICAgcGFyYW1ldGVyczogW10K"
 ---> Running in 1554a25d0970
 ---> 13a7e4c5b342
Removing intermediate container 1554a25d0970
Step 4/9 : ADD playbooks /opt/apb/actions
 ---> 491c7cc16f05
Removing intermediate container 8106488f7e69
Step 5/9 : ADD . /opt/ansible/roles/hello-world-apb
 ---> cdebbf4ebd01
Removing intermediate container cf6754f25c71
Step 6/9 : RUN chmod -R g=u /opt/{ansible,apb}
 ---> Running in d0acd25f12f3

 ---> 1b316a658bc8
Removing intermediate container d0acd25f12f3
Step 7/9 : USER apb
 ---> Running in 0de84630a189
 ---> 687bd041e538
Removing intermediate container 0de84630a189
Step 8/9 : ENV "OPENSHIFT_BUILD_NAME" "hellooooo-1" "OPENSHIFT_BUILD_NAMESPACE" "openshift" "OPENSHIFT_BUILD_SOURCE" "https://github.com/ansibleplaybookbundle/hello-world-apb.git" "OPENSHIFT_BUILD_REFERENCE" "master" "OPENSHIFT_BUILD_COMMIT" "7132053ccc8ea64b856074247638cfde54feaaff"
 ---> Running in d8642ea92a71
 ---> 3f1d962b12ac
Removing intermediate container d8642ea92a71
Step 9/9 : LABEL "io.openshift.build.commit.author" "David Zager \u003cdzager\u003e" "io.openshift.build.commit.date" "Thu Apr 12 12:06:42 2018 -0400" "io.openshift.build.commit.id" "7132053ccc8ea64b856074247638cfde54feaaff" "io.openshift.build.commit.message" "Skip last operation if not in cluster (#6)" "io.openshift.build.commit.ref" "master" "io.openshift.build.name" "hellooooo-1" "io.openshift.build.namespace" "openshift" "io.openshift.build.source-location" "https://github.com/ansibleplaybookbundle/hello-world-apb.git"
 ---> Running in 7105edef86a3
 ---> 7847b40b76ee
Removing intermediate container 7105edef86a3
Successfully built 7847b40b76ee

Pushing image docker-registry.default.svc:5000/openshift/hellooooo:latest ...
Pushed 0/17 layers, 0% complete
Pushed 1/17 layers, 29% complete
Pushed 2/17 layers, 29% complete
Pushed 3/17 layers, 29% complete
Pushed 4/17 layers, 47% complete
Pushed 5/17 layers, 47% complete
Pushed 6/17 layers, 59% complete
Pushed 7/17 layers, 59% complete
Pushed 8/17 layers, 59% complete
Pushed 9/17 layers, 54% complete
Pushed 10/17 layers, 60% complete
Pushed 11/17 layers, 67% complete
Pushed 12/17 layers, 73% complete
Pushed 13/17 layers, 78% complete
Pushed 14/17 layers, 86% complete
Pushed 15/17 layers, 92% complete
Pushed 16/17 layers, 96% complete
Pushed 16/17 layers, 100% complete
Pushed 17/17 layers, 100% complete
Push successful

[szh@localhost hello-world-apb]$ oc get images | grep hellooooo
sha256:ed059ac605dde64fd4be9e3af8b6a761f4d9cf5f240ab38e2245f0a7c575b0e6   docker-registry.default.svc:5000/openshift/hellooooo@sha256:ed059ac605dde64fd4be9e3af8b6a761f4d9cf5f240ab38e2245f0a7c575b0e6

2) Using my own APB, I couldn't get the image.

[szh@localhost tes]$ apb init my-999-apb
Initializing /home/szh/code/tes/my-999-apb for an APB.
Generating playbook files
Successfully initialized project directory at: /home/szh/code/tes/my-999-apb
Please run *apb prepare* inside of this directory after editing files.

[szh@localhost tes]$ cd my-999-apb/
[szh@localhost my-999-apb]$ apb prepare
Finished writing dockerfile.

[szh@localhost my-999-apb]$ oc new-app ./ --name my-999-apb -n openshift
--> Found Docker image 57595a3 (3 weeks old) from Docker Hub for "ansibleplaybookbundle/apb-base"

    * An image stream will be created as "apb-base:latest" that will track the source image
    * A Docker build using binary input will be created
      * The resulting image will be pushed to image stream "my-999-apb:latest"
      * A binary build was created, use 'start-build --from-dir' to trigger a new build
    * This image will be deployed in deployment config "my-999-apb"
    * The image does not expose any ports - if you want to load balance or send traffic to this component
      you will need to create a service with 'expose dc/my-999-apb --port=[port]' later
    * WARNING: Image "ansibleplaybookbundle/apb-base" runs as the 'root' user which may not be permitted by your cluster administrator

--> Creating resources ...
    imagestream "my-999-apb" created
    buildconfig "my-999-apb" created
    deploymentconfig "my-999-apb" created
--> Success
    Build scheduled, use 'oc logs -f bc/my-999-apb' to track its progress.
    Run 'oc status' to view your app.

[szh@localhost my-999-apb]$ oc logs -f bc/my-999-apb -n openshift
error: no builds found for "my-999-apb"

[szh@localhost my-999-apb]$ oc get images | grep apb-base
sha256:a501e750921fad25f8ef3a498a72694722ad5b11457d989eaefacf854f44d41b   ansibleplaybookbundle/apb-base@sha256:a501e750921fad25f8ef3a498a72694722ad5b11457d989eaefacf854f44d41b

[szh@localhost my-999-apb]$ oc get is --all-namespaces | grep my-999-apb
openshift        my-999-apb                            docker-registry.default.svc:5000/openshift/my-999-apb       
                                                    
[szh@localhost my-999-apb]$ oc get bc --all-namespaces | grep my-999-apb
openshift      my-999-apb               Docker    Binary       0
[szh@localhost my-999-apb]$ oc get images | grep my-999-apb
[szh@localhost my-999-apb]$

Comment 47 sunzhaohua 2018-05-30 08:17:20 UTC
`oc run` failed; it couldn't pull the image.

[szh@localhost hello-world-apb]$ oc new-project apb-project
Now using project "apb-project" on server "https://host-8-243-101.host.centralci.eng.rdu2.redhat.com:8443".

[szh@localhost hello-world-apb]$ oc create serviceaccount apb
serviceaccount "apb" created
[szh@localhost hello-world-apb]$ oc create rolebinding apb --clusterrole=admin --serviceaccount=apb-project:apb
rolebinding "apb" created

[szh@localhost hello-world-apb]$   oc run pod-hello \
>       --env="POD_NAME=pod-hello" \
>       --env="POD_NAMESPACE=apb-project" \
>       --image=172.30.46.231:5000/openshift/hellooooo \
>       --restart=Never \
>       --attach=true \
>       --serviceaccount=apb \
>       -- provision -e namespace=apb-project -e cluster=$CLUSTER

[szh@localhost hello-world-apb]$  oc get pods -n apb-project
NAME         READY     STATUS             RESTARTS   AGE
pod-hello    0/1       ImagePullBackOff   0          59m

[szh@localhost hello-world-apb]$ oc describe pod pod-hello
Events:
  Type     Reason          Age                From                               Message
  ----     ------          ----               ----                               -------
  Normal   Scheduled       1h                 default-scheduler                  Successfully assigned pod-hello to qe-zhsun-999-manualnrr-1
  Normal   SandboxChanged  1h (x13 over 1h)   kubelet, qe-zhsun-999-manualnrr-1  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulling         1h (x3 over 1h)    kubelet, qe-zhsun-999-manualnrr-1  pulling image "172.30.46.231:5000/openshift/hellooooo"
  Warning  Failed          1h (x3 over 1h)    kubelet, qe-zhsun-999-manualnrr-1  Failed to pull image "172.30.46.231:5000/openshift/hellooooo": rpc error: code = Unknown desc = Get https://172.30.46.231:5000/v1/_ping: Forbidden
  Warning  Failed          1h (x3 over 1h)    kubelet, qe-zhsun-999-manualnrr-1  Error: ErrImagePull
  Normal   BackOff         6m (x250 over 1h)  kubelet, qe-zhsun-999-manualnrr-1  Back-off pulling image "172.30.46.231:5000/openshift/hellooooo"
  Warning  Failed          1m (x270 over 1h)  kubelet, qe-zhsun-999-manualnrr-1  Error: ImagePullBackOff


[root@qe-zhsun-999-manualmaster-etcd-1 ~]# docker pull 172.30.46.231:5000/openshift/hellooooo
Using default tag: latest
Trying to pull repository 172.30.46.231:5000/openshift/hellooooo ... 
Pulling repository 172.30.46.231:5000/openshift/hellooooo

Comment 48 Dylan Murray 2018-05-30 13:55:31 UTC
From what I can tell the problem is that your image is named:
docker-registry.default.svc:5000/openshift/hellooooo

Yet in `oc run` you used:
--image 172.30.46.231:5000/openshift/hellooooo

I would specify the image name you find from `oc get images | grep hellooooo`, since the registry determines the fully qualified name.

Alex, I would include this note in the documentation and I can submit a note upstream as well.
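For illustration, the suggestion above can be sketched as a small shell snippet. The sample line mirrors the `oc get images` output from comment 46, and the awk/cut parsing is a hypothetical convenience, not part of the documented workflow:

```shell
# Use the registry-qualified name reported by `oc get images` instead of
# hard-coding the service IP (172.30.x.x), which nodes may refuse to pull from.
line='sha256:ed059ac605dde64f   docker-registry.default.svc:5000/openshift/hellooooo@sha256:ed059ac605dde64f'
# The second column is the full pull spec; strip the digest to get a
# name usable with --image in `oc run`.
image=$(echo "$line" | awk '{print $2}' | cut -d@ -f1)
echo "$image"   # docker-registry.default.svc:5000/openshift/hellooooo
```

The resulting `$image` can then be passed as `--image=$image` in the `oc run` invocation from comment 47.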

Comment 49 Alex Dellapenta 2018-05-31 14:32:54 UTC
Updated per comment 48:

https://github.com/openshift/openshift-docs/pull/9648

@Dylan, PTAL

Comment 50 sunzhaohua 2018-06-01 05:47:48 UTC
Verification failed: `oc run` failed.

[szh@localhost hello-world-apb]$ oc get images |grep hello
sha256:37ba76423a825a5902ddbe36baf0f2615ddb15e6220ebd994a28727cc0424c16   docker-registry.default.svc:5000/openshift/hello@sha256:37ba76423a825a5902ddbe36baf0f2615ddb15e6220ebd994a28727cc0424c16

[szh@localhost hello-world-apb]$ oc run pod-hello       --env="POD_NAME=pod-hello"        --env="POD_NAMESPACE=apb-project"        --image=docker-registry.default.svc:5000/openshift/hello        --restart=Never        --attach=true        --serviceaccount=apb        -- provision -e namespace=apb-project -e cluster=$CLUSTER
If you don't see a command prompt, try pressing enter.
[DEPRECATION WARNING]: openshift_raw is kept for backwards compatibility but 
usage is discouraged. The module documentation details page may explain more 
about this rationale.. This feature will be removed in a future release. 
Deprecation warnings can be disabled by setting deprecation_warnings=False in 
ansible.cfg.
[DEPRECATION WARNING]: k8s_raw is kept for backwards compatibility but usage is
 discouraged. The module documentation details page may explain more about this
 rationale.. This feature will be removed in a future release. Deprecation 
warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
 [WARNING]: Found variable using reserved name: name

PLAY [hello-world-apb provision] ***********************************************

TASK [ansibleplaybookbundle.asb-modules : debug] *******************************
skipping: [localhost]

TASK [hello-world-apb : Update last operation] *********************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Error attempting to update pod with last operation annotation: (403)\nReason: Forbidden\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 01 Jun 2018 05:31:59 GMT', 'Content-Length': '303', 'Content-Type': 'application/json', 'Cache-Control': 'no-store'})\nHTTP response body: {\n  \"kind\": \"Status\",\n  \"apiVersion\": \"v1\",\n  \"metadata\": {},\n  \"status\": \"Failure\",\n  \"message\": \"pods \\\"pod-hello\\\" is forbidden: unable to validate against any security context constraint: []\",\n  \"reason\": \"Forbidden\",\n  \"details\": {\n    \"name\": \"pod-hello\",\n    \"kind\": \"pods\"\n  },\n  \"code\": 403\n}\n"}

PLAY RECAP *********************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1   

pod apb-project/pod-hello terminated (Error)

[szh@localhost hello-world-apb]$ oc get pod -n apb-project
NAME         READY     STATUS    RESTARTS   AGE
pod-hello    0/1       Error     0          32m

[szh@localhost hello-world-apb]$ oc describe pod -n apb-project
...
Events:
  Type    Reason     Age   From                                          Message
  ----    ------     ----  ----                                          -------
  Normal  Scheduled  10m   default-scheduler                             Successfully assigned pod-hello to qe-zhsun-gceenode-registry-router-1
  Normal  Pulled     10m   kubelet, qe-zhsun-gceenode-registry-router-1  Container image "docker-registry.default.svc:5000/openshift/hello" already present on machine
  Normal  Created    10m   kubelet, qe-zhsun-gceenode-registry-router-1  Created container
  Normal  Started    10m   kubelet, qe-zhsun-gceenode-registry-router-1  Started container

Comment 51 Dylan Murray 2018-06-01 13:08:49 UTC
This means the actual `oc run` succeeded, but the hello-world-apb expects you to pass in an extra variable when running outside the context of the broker. You have two options:

1. run the exact same `oc run` command with an extra variable passed in:
oc run pod-hello       --env="POD_NAME=pod-hello"        --env="POD_NAMESPACE=apb-project"        --image=docker-registry.default.svc:5000/openshift/hello        --restart=Never        --attach=true        --serviceaccount=apb        -- provision -e namespace=apb-project -e cluster=$CLUSTER -e in_cluster=false

2. You can simply not pass POD_NAME and POD_NAMESPACE as env vars. I added those environment variables to the documentation because that is what the broker does, so if an APB needs those variables they are supplied. However, in the hello-world-apb case, it sets in_cluster=true when POD_NAME and POD_NAMESPACE are defined.

I can submit some changes to get the last_operation command disabled by default, but this bug should be verified. This is a problem with the variables being supplied to the APB, not with executing the pod in the cluster. Any APB that does not use the `asb_last_operation` task will succeed with these instructions. The `asb_last_operation` task is broker-dependent, so we should not execute it by default.
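A minimal sketch of the behavior described in option 2, assuming the APB treats the presence of POD_NAME/POD_NAMESPACE as an in-cluster signal (the variable names come from this thread; the branching logic is an illustration, not the APB's actual source):

```shell
# The broker always sets POD_NAME and POD_NAMESPACE, so the
# hello-world-apb infers in_cluster=true when both are present;
# pass -e in_cluster=false (or omit the env vars) to override.
POD_NAME="pod-hello" POD_NAMESPACE="apb-project"
if [ -n "$POD_NAME" ] && [ -n "$POD_NAMESPACE" ]; then
  in_cluster=true
else
  in_cluster=false
fi
echo "$in_cluster"   # true
```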

Comment 52 openshift-github-bot 2018-06-01 17:23:49 UTC
Commit pushed to master at https://github.com/openshift/openshift-docs

https://github.com/openshift/openshift-docs/commit/fc8bc12f5dcbc94557e5cf2ae08e0f87094336d1
Merge pull request #9648 from adellape/apb_push

Bug 1526147: Add 'Working with Remote Clusters' APB doc

Comment 53 sunzhaohua 2018-06-04 08:36:46 UTC
Dylan,

Sorry for that I couldn't verify this bug. 

1) I used hello-world-apb as an example.

$ oc run pod-hello7       --env="POD_NAME=pod-hello7"       --env="POD_NAMESPACE=apb-project"       --image=docker-registry.default.svc:5000/openshift/hello --restart=Never       --attach=true       --serviceaccount=apb       -- provision -e namespace=apb-project -e cluster=$CLUSTER  -e in_cluster=false

fatal: [localhost]: FAILED! => {"ansible_facts": {"deployment": [], "deployment_config": [], "route": [], "service": {"apiVersion": "v1", "kind": "Service", "metadata": {"creationTimestamp": "2018-06-04T07:57:29Z", "labels": {"apb_name": "hello-world", "apb_plan_id": "default", "apb_service_class_id": "0", "apb_service_instance_id": "0"}, "name": "hello-world-0", "namespace": "apb-project", "resourceVersion": "5526", "selfLink": "/api/v1/namespaces/apb-project/services/hello-world-0", "uid": "ed7a94cb-67cc-11e8-bdbe-fa163ea59861"}, "spec": {"clusterIP": "172.30.0.153", "ports": [{"name": "web", "port": 8080, "protocol": "TCP", "targetPort": 8080}], "selector": {"app": "hello-world-0", "service": "hello-world-0"}, "sessionAffinity": "None", "type": "ClusterIP"}, "status": {"loadBalancer": {}}}}, "attempts": 10, "changed": false}

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=1   

pod apb-project/pod-hello7 terminated (Error)

$ oc run pod-hello8      --image=docker-registry.default.svc:5000/openshift/hello --restart=Never       --attach=true       --serviceaccount=apb  -- provision -e namespace=apb-project -e cluster=$CLUSTER -e in_cluster=false
If you don't see a command prompt, try pressing enter.
fatal: [localhost]: FAILED! => {"ansible_facts": {"deployment": [], "deployment_config": [], "route": [], "service": {"apiVersion": "v1", "kind": "Service", "metadata": {"creationTimestamp": "2018-06-04T07:57:29Z", "labels": {"apb_name": "hello-world", "apb_plan_id": "default", "apb_service_class_id": "0", "apb_service_instance_id": "0"}, "name": "hello-world-0", "namespace": "apb-project", "resourceVersion": "5526", "selfLink": "/api/v1/namespaces/apb-project/services/hello-world-0", "uid": "ed7a94cb-67cc-11e8-bdbe-fa163ea59861"}, "spec": {"clusterIP": "172.30.0.153", "ports": [{"name": "web", "port": 8080, "protocol": "TCP", "targetPort": 8080}], "selector": {"app": "hello-world-0", "service": "hello-world-0"}, "sessionAffinity": "None", "type": "ClusterIP"}, "status": {"loadBalancer": {}}}}, "attempts": 10, "changed": false}

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=1   

pod apb-project/pod-hello8 terminated (Error)

$ oc get pod -n apb-project
NAME         READY     STATUS             RESTARTS   AGE
pod-hello7   0/1       Error              0          11m
pod-hello8   0/1       Error              0          3m

2) I used my own APB as an example. After running `apb init` and `apb prepare`, `oc new-app ./ --name my-07 -n openshift` succeeded,
but `oc get images | grep my-07` returned no image.

$ oc logs -f bc/my-07 -n openshift
error: no builds found for "my-07"

$ oc get images | grep my-07

Based on this doc, I couldn't get a completely normal result. Could you provide an APB example in the doc? Thanks.

Comment 54 Dylan Murray 2018-06-04 13:24:36 UTC
Sunzhaohua,

I just tested this on my cluster using the hello-world-apb from our dockerhub org with no problem. This is the command I run:

$ oc run hello8 --image=docker.io/ansibleplaybookbundle/hello-world-apb --restart=Never --attach=true --serviceaccount=apb -- provision -e namespace=dylan-test -e cluster=openshift -e in_cluster=false

I'm not sure what the issue you are hitting is, but the main thing I noticed is that you are setting `cluster=$CLUSTER`... what is $CLUSTER set to? It should be `openshift`. I'm also not sure from your comment which task actually failed. The above command ran all of the APB actions without error.

Please make sure you are passing in the same values that I am above and ensure you are using the latest hello-world example.

Comment 55 sunzhaohua 2018-06-05 10:12:55 UTC
Verified.
I think we'd better update `cluster=$CLUSTER` to `cluster=openshift` in the doc.

[szh@localhost hello-world-apb]$ oc new-app ./ --name hello -n openshift
--> Found Docker image 486a293 (13 days old) from Docker Hub for "ansibleplaybookbundle/apb-base:canary"

    * An image stream will be created as "apb-base:canary" that will track the source image
    * A Docker build using source code from https://github.com/ansibleplaybookbundle/hello-world-apb.git#master will be created
      * The resulting image will be pushed to image stream "hello:latest"
      * Every time "apb-base:canary" changes a new build will be triggered
    * This image will be deployed in deployment config "hello"
    * The image does not expose any ports - if you want to load balance or send traffic to this component
      you will need to create a service with 'expose dc/hello --port=[port]' later
    * WARNING: Image "ansibleplaybookbundle/apb-base:canary" runs as the 'root' user which may not be permitted by your cluster administrator

--> Creating resources ...
    imagestream "apb-base" created
    imagestream "hello" created
    buildconfig "hello" created
    deploymentconfig "hello" created
--> Success
    Build scheduled, use 'oc logs -f bc/hello' to track its progress.
    Run 'oc status' to view your app.
[szh@localhost hello-world-apb]$ oc logs -f bc/hello -n openshift
Cloning "https://github.com/ansibleplaybookbundle/hello-world-apb.git" ...
	Commit:	aacda7eb2d90307f04090fb1b1f68bf8769fa5ab (Update hello-world-apb for ansible 2.6 (#8))
	Author:	David Zager <dzager>
	Date:	Mon Jun 4 13:17:42 2018 -0400

Pulling image ansibleplaybookbundle/apb-base@sha256:a501e750921fad25f8ef3a498a72694722ad5b11457d989eaefacf854f44d41b ...
Pulled 0/14 layers, 0% complete
Pulled 1/14 layers, 7% complete
Pulled 2/14 layers, 23% complete
Pulled 3/14 layers, 31% complete
Pulled 4/14 layers, 39% complete
Pulled 5/14 layers, 49% complete
Pulled 6/14 layers, 56% complete
Pulled 7/14 layers, 64% complete
Pulled 8/14 layers, 64% complete
Pulled 9/14 layers, 71% complete
Pulled 10/14 layers, 71% complete
Pulled 11/14 layers, 79% complete
Pulled 12/14 layers, 86% complete
Pulled 13/14 layers, 93% complete
Pulled 14/14 layers, 100% complete
Extracting
Step 1/8 : FROM ansibleplaybookbundle/apb-base@sha256:a501e750921fad25f8ef3a498a72694722ad5b11457d989eaefacf854f44d41b
 ---> 486a2936c6a8
Step 2/8 : LABEL "com.redhat.apb.spec" "LS0tCnZlcnNpb246IDEuMApuYW1lOiBoZWxsby13b3JsZC1hcGIKZGVzY3JpcHRpb246IGRlcGxveXMgaGVsbG8td29ybGQgd2ViIGFwcGxpY2F0aW9uCmJpbmRhYmxlOiAiRmFsc2UiCmFzeW5jOiBvcHRpb25hbAptZXRhZGF0YToKICBkaXNwbGF5TmFtZTogSGVsbG8gV29ybGQgKEFQQikKICBsb25nRGVzY3JpcHRpb246CiAgICBBIHNhbXBsZSBBUEIgd2hpY2ggZGVwbG95cyBhIGNvbnRhaW5lcml6ZWQgSGVsbG8gV29ybGQgd2ViIGFwcGxpY2F0aW9uCiAgZGVwZW5kZW5jaWVzOiBbJ2RvY2tlci5pby9hbnNpYmxlcGxheWJvb2tidW5kbGUvaGVsbG8td29ybGQ6bGF0ZXN0J10KICBwcm92aWRlckRpc3BsYXlOYW1lOiAiUmVkIEhhdCwgSW5jLiIKcGxhbnM6CiAgLSBuYW1lOiBkZWZhdWx0CiAgICBkZXNjcmlwdGlvbjogQSBzYW1wbGUgQVBCIHdoaWNoIGRlcGxveXMgSGVsbG8gV29ybGQKICAgIGZyZWU6ICJUcnVlIgogICAgbWV0YWRhdGE6CiAgICAgIGRpc3BsYXlOYW1lOiBEZWZhdWx0CiAgICAgIGxvbmdEZXNjcmlwdGlvbjoKICAgICAgICBUaGlzIHBsYW4gZGVwbG95cyBhIFB5dGhvbiB3ZWIgYXBwbGljYXRpb24gZGlzcGxheWluZyBIZWxsbyBXb3JsZAogICAgICBjb3N0OiAkMC4wMAogICAgcGFyYW1ldGVyczogW10K"
 ---> Running in a0c8ac6bd74d
 ---> d1dea285e6e5
Removing intermediate container a0c8ac6bd74d
Step 3/8 : ADD playbooks /opt/apb/actions
 ---> 008b06b449b0
Removing intermediate container d3a0b08089ed
Step 4/8 : ADD . /opt/ansible/roles/hello-world-apb
 ---> 68023e8fe870
Removing intermediate container 21844d3f268a
Step 5/8 : RUN chmod -R g=u /opt/{ansible,apb}
 ---> Running in ed926dfb7084

 ---> e4bd36d27a97
Removing intermediate container ed926dfb7084
Step 6/8 : USER apb
 ---> Running in 3fb6212120d3
 ---> 3b4ed71f0d98
Removing intermediate container 3fb6212120d3
Step 7/8 : ENV "OPENSHIFT_BUILD_NAME" "hello-1" "OPENSHIFT_BUILD_NAMESPACE" "openshift" "OPENSHIFT_BUILD_SOURCE" "https://github.com/ansibleplaybookbundle/hello-world-apb.git" "OPENSHIFT_BUILD_REFERENCE" "master" "OPENSHIFT_BUILD_COMMIT" "aacda7eb2d90307f04090fb1b1f68bf8769fa5ab"
 ---> Running in 97075ffaa590
 ---> b1215c2c79b1
Removing intermediate container 97075ffaa590
Step 8/8 : LABEL "io.openshift.build.commit.author" "David Zager \u003cdzager\u003e" "io.openshift.build.commit.date" "Mon Jun 4 13:17:42 2018 -0400" "io.openshift.build.commit.id" "aacda7eb2d90307f04090fb1b1f68bf8769fa5ab" "io.openshift.build.commit.message" "Update hello-world-apb for ansible 2.6 (#8)" "io.openshift.build.commit.ref" "master" "io.openshift.build.name" "hello-1" "io.openshift.build.namespace" "openshift" "io.openshift.build.source-location" "https://github.com/ansibleplaybookbundle/hello-world-apb.git"
 ---> Running in b54da26aeac5
 ---> 1a2d388357c2
Removing intermediate container b54da26aeac5
Successfully built 1a2d388357c2

Pushing image docker-registry.default.svc:5000/openshift/hello:latest ...
Pushed 0/17 layers, 0% complete
Pushed 1/17 layers, 6% complete
Pushed 2/17 layers, 12% complete
Pushed 3/17 layers, 18% complete
Pushed 4/17 layers, 24% complete
Pushed 5/17 layers, 32% complete
Pushed 6/17 layers, 38% complete
Pushed 7/17 layers, 54% complete
Pushed 8/17 layers, 60% complete
Pushed 9/17 layers, 71% complete
Pushed 10/17 layers, 76% complete
Pushed 11/17 layers, 82% complete
Pushed 12/17 layers, 88% complete
Pushed 13/17 layers, 94% complete
Pushed 14/17 layers, 100% complete
Pushed 15/17 layers, 100% complete
Pushed 16/17 layers, 100% complete
Pushed 17/17 layers, 100% complete
Push successful
[szh@localhost hello-world-apb]$ oc get images | grep hello
sha256:8e01328b77844c02977d82342c9b66cf6f57258b46446263d3efce3aec48b785   docker-registry.default.svc:5000/openshift/hello@sha256:8e01328b77844c02977d82342c9b66cf6f57258b46446263d3efce3aec48b785
[szh@localhost hello-world-apb]$ oc new-project apb-project
Now using project "apb-project" on server "https://host-8-250-138.host.centralci.eng.rdu2.redhat.com:8443".

[szh@localhost hello-world-apb]$ oc create rolebinding apb --clusterrole=admin --serviceaccount=apb-project:apb
rolebinding "apb" created

[szh@localhost hello-world-apb]$ oc create serviceaccount apb
serviceaccount "apb" created

[szh@localhost hello-world-apb]$ oc run hello --image=docker.io/ansibleplaybookbundle/hello-world-apb --restart=Never --attach=true --serviceaccount=apb -- provision -e namespace=apb-project -e cluster=openshift -e in_cluster=false

If you don't see a command prompt, try pressing enter.
changed: [localhost]

TASK [hello-world-apb : route state=present] ***********************************
changed: [localhost]

TASK [hello-world-apb : Update last operation] *********************************
skipping: [localhost]

TASK [hello-world-apb : include_tasks] *****************************************
included: /opt/ansible/roles/hello-world-apb/tasks/verify_provision.yml for localhost

TASK [hello-world-apb : Verify hello-world-0 objects exist] ********************
ok: [localhost]

TASK [hello-world-apb : Wait for deployment to become available] ***************
skipping: [localhost]

TASK [hello-world-apb : Wait for deployment config to become available] ********
FAILED - RETRYING: Wait for deployment config to become available (12 retries left).Result was: {
    "attempts": 1, 
    "changed": false, 
    "msg": "DeploymentConfig available status: False", 
    "retries": 13
}
ok: [localhost] => {
    "attempts": 2, 
    "msg": "DeploymentConfig available status: False"
}

TASK [hello-world-apb : Update last operation] *********************************
skipping: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=6    changed=3    unreachable=0    failed=0   


[szh@localhost hello-world-apb]$ oc get pod
NAME                    READY     STATUS      RESTARTS   AGE
hello                   0/1       Completed   0          3m
hello-world-0-1-sptnd   1/1       Running     0          2m

Comment 56 Dylan Murray 2018-06-05 12:37:22 UTC
Thanks for retesting!

I submitted a PR so that there is no confusion over the CLUSTER param:
https://github.com/openshift/openshift-docs/pull/9850

Comment 57 openshift-github-bot 2018-06-05 15:20:46 UTC
Commits pushed to master at https://github.com/openshift/openshift-docs

https://github.com/openshift/openshift-docs/commit/378478eed03616849997754f204e893c305c359d
Bug 1526147 - Update oc run instructions to include cluster name

https://github.com/openshift/openshift-docs/commit/23d4a9eee7c16a750cfd7be05e388bd30d89df44
Merge pull request #9850 from dymurray/open

Bug 1526147 - Update oc run instructions to include cluster name

Comment 58 Vitalii Chepeliuk 2018-06-14 08:27:12 UTC
I also get an error when trying to do `apb test`.
Steps:
1. git clone git:aerogearcatalog/aerogear-digger-apb.git
2. oc new-app aerogear-digger-apb  --name aerogear-digger-apb -n openshift
3. oc logs -f bc/aerogear-digger-apb -n openshift
info: Logs available at https://jenkins-openshift.192.168.37.1.nip.io/blue/organizations/jenkins/openshift%2Fopenshift-aerogear-digger-apb/detail/openshift-aerogear-digger-apb/1/
4. open https://jenkins-openshift.192.168.37.1.nip.io/blue/organizations/jenkins/openshift%2Fopenshift-aerogear-digger-apb/detail/openshift-aerogear-digger-apb/1/
Output:
OpenShift Build openshift/aerogear-digger-apb-1 from git:aerogearcatalog/aerogear-digger-apb.git
Checking out git git:aerogearcatalog/aerogear-digger-apb.git into /var/lib/jenkins/jobs/openshift/jobs/openshift-aerogear-digger-apb/workspace@script to read Jenkinsfile
Cloning the remote Git repository
Cloning repository git:aerogearcatalog/aerogear-digger-apb.git
 > git init /var/lib/jenkins/jobs/openshift/jobs/openshift-aerogear-digger-apb/workspace@script # timeout=10
Fetching upstream changes from git:aerogearcatalog/aerogear-digger-apb.git
 > git --version # timeout=10
 > git fetch --tags --progress git:aerogearcatalog/aerogear-digger-apb.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Command "git fetch --tags --progress git:aerogearcatalog/aerogear-digger-apb.git +refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout:
stderr: Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1990)
	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1709)
	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$300(CliGitAPIImpl.java:72)
	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:400)
	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$2.execute(CliGitAPIImpl.java:609)
	at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1120)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1160)
	at org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:109)
	at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:130)
	at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:59)
	at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:263)
	at hudson.model.ResourceController.execute(ResourceController.java:97)
	at hudson.model.Executor.run(Executor.java:429)
ERROR: Error cloning remote repo 'origin'
Finished: FAILURE

Comment 59 Alex Dellapenta 2018-06-28 20:54:35 UTC
Sunzhaohua, are you experiencing the issue cited in comment 58 using the latest doc here?

https://docs.openshift.org/latest/apb_devel/writing/reference.html#apb-devel-writing-ref-remote-clusters

Dylan, any ideas?

Comment 60 sunzhaohua 2018-06-29 06:23:43 UTC
Alex, I tested hello-world-apb and it works well; refer to comment 55.
I also tested your APB and got errors, but not the same ones as you.

1. $ git clone https://github.com/aerogearcatalog/aerogear-digger-apb.git
2. $ cat apb.yml | base64
   Copy and paste the result into the Dockerfile.
3. $ oc new-app aerogear-digger-apb  --name aerogear-digger-apb -n openshift
    * A pipeline build using source code from https://github.com/aerogearcatalog/aerogear-digger-apb.git#master will be created
      * Use 'start-build' to trigger a new build

--> Creating resources ...
    buildconfig "aerogear-digger-apb" created
--> Success
    Build scheduled, use 'oc logs -f bc/aerogear-digger-apb' to track its progress.
    Run 'oc status' to view your app.

4. $ oc logs -f bc/aerogear-digger-apb -n openshift
info: Logs available at https://jenkins-openshift.apps.0628-9fi.qe.rhcloud.com/blue/organizations/jenkins/openshift%2Fopenshift-aerogear-digger-apb/detail/openshift-aerogear-digger-apb/1/

open https://jenkins-openshift.apps.0628-9fi.qe.rhcloud.com/blue/organizations/jenkins/openshift%2Fopenshift-aerogear-digger-apb/detail/openshift-aerogear-digger-apb/1/

Output:
OpenShift Build openshift/aerogear-digger-apb-1 from https://github.com/aerogearcatalog/aerogear-digger-apb.git
Checking out git https://github.com/aerogearcatalog/aerogear-digger-apb.git into /var/lib/jenkins/jobs/openshift/jobs/openshift-aerogear-digger-apb/workspace@script to read Jenkinsfile
Cloning the remote Git repository
Cloning repository https://github.com/aerogearcatalog/aerogear-digger-apb.git
 > git init /var/lib/jenkins/jobs/openshift/jobs/openshift-aerogear-digger-apb/workspace@script # timeout=10
Fetching upstream changes from https://github.com/aerogearcatalog/aerogear-digger-apb.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/aerogearcatalog/aerogear-digger-apb.git +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/aerogearcatalog/aerogear-digger-apb.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/aerogearcatalog/aerogear-digger-apb.git # timeout=10
Fetching upstream changes from https://github.com/aerogearcatalog/aerogear-digger-apb.git
 > git fetch --tags --progress https://github.com/aerogearcatalog/aerogear-digger-apb.git +refs/heads/*:refs/remotes/origin/*
 > git rev-parse origin/master^{commit} # timeout=10
Checking out Revision baf7479751448673e1a8cdb3acd0a0446137a3df (origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f baf7479751448673e1a8cdb3acd0a0446137a3df
Commit message: "Merge pull request #27 from philbrookes/add-labels"
First time build. Skipping changelog.
ERROR: Could not find any definition of libraries [fh-pipeline-library]
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: Loading libraries failed

1 error

	at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
	at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1085)
	at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:603)
	at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:581)
	at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:558)
	at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
	at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
	at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
	at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
	at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.doParse(CpsGroovyShell.java:129)
	at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.reparse(CpsGroovyShell.java:123)
	at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.parseScript(CpsFlowExecution.java:517)
	at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.start(CpsFlowExecution.java:480)
	at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:269)
	at hudson.model.ResourceController.execute(ResourceController.java:97)
	at hudson.model.Executor.run(Executor.java:429)
Finished: FAILURE


Comment 61 sunzhaohua 2018-06-29 08:41:02 UTC
Dylan, I re-tested some APBs.

For the pushing APBs:
1) Using rhpam-apb (https://github.com/ansibleplaybookbundle/rhpam-apb.git) and import-vm-apb (https://github.com/ansibleplaybookbundle/import-vm-apb.git), it works well.

2) If I use mysql-apb (https://github.com/ansibleplaybookbundle/mysql-apb.git),
it returns "no language matched":
$ oc new-app ./ --name mysql-apb -n openshift
error: No language matched the source repository

3) If I first run `apb init my-01-apb` and `apb prepare`, then run `oc new-app ./ --name my-01-apb -n openshift`, I can't get the image.
$ oc logs -f bc/my-01-apb -n openshift
error: no builds found for "my-01-apb"

I don't know the difference between these APBs; could you help to confirm? If I run `apb push` in the cluster, these APBs are pushed successfully. I've changed the status to ASSIGNED; if I am wrong, please correct me. Thanks.

Comment 62 Dylan Murray 2018-06-29 12:35:52 UTC
sunzhaohua,

The mysql-apb repo has 3 Dockerfiles (`Dockerfile-canary`, `Dockerfile-latest`, `Dockerfile-nightly`). This is to help us in the build process. `oc new-app` expects the Dockerfile to simply be named `Dockerfile`, as it is in rhpam-apb and import-vm-apb, because source-to-image uses the `docker` strategy, which it can only know to use if it finds a `Dockerfile` in the repo.

For 3), I do not have enough information as to why it failed. I would refer you to the documentation for `oc new-app` to get a better understanding of the s2i functionality. Another option is to simply run `oc new-build . --to <name>` to test the build step out manually.

https://docs.openshift.com/enterprise/3.1/dev_guide/new_app.html
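As a concrete sketch of the Dockerfile-naming workaround (file names come from the mysql-apb repo as listed above; copying the `-latest` variant into place is an assumption about which tag is wanted, not a documented step):

```shell
# oc new-app only auto-selects the docker build strategy when the repo
# root contains a file named exactly "Dockerfile". Simulate the
# mysql-apb layout and put one variant into place:
demo=$(mktemp -d)
cd "$demo"
touch Dockerfile-canary Dockerfile-latest Dockerfile-nightly
[ -f Dockerfile ] || cp Dockerfile-latest Dockerfile   # pick a variant
ls Dockerfile   # with this in place, `oc new-app ./` can detect the strategy
```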

Comment 63 Vitalii Chepeliuk 2018-07-25 06:42:00 UTC
@sunzhaohua, 

Ah, the build fails because we use a pipeline library in the Jenkinsfile, which needs additional configuration for Jenkins. Thanks for testing it.

Comment 64 Alex Dellapenta 2018-07-25 21:29:05 UTC
Does this still need to be on ASSIGNED?

Comment 65 sunzhaohua 2018-07-30 02:05:01 UTC
Alex,
Per Dylan's explanation, I think it's OK.

Comment 66 sunzhaohua 2018-07-30 02:10:18 UTC
Per Dylan's explanation and the doc updated via this PR: https://github.com/openshift/openshift-docs/pull/9648, I've verified this bug.

Comment 68 Alex Dellapenta 2021-08-02 21:41:34 UTC
*** Bug 1557462 has been marked as a duplicate of this bug. ***
