Bug 1532972 - [APB] Deprovision failed via the `apb run` command
Status: CLOSED ERRATA
Product: OpenShift Container Platform
Classification: Red Hat
Component: Service Broker
Version: 3.9.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Release: 3.9.0
Assigned To: David Zager
QA Contact: Jian Zhang
Reported: 2018-01-10 01:53 EST by Jian Zhang
Modified: 2018-03-28 10:18 EDT
CC: 4 users

Doc Type: No Doc Update
Last Closed: 2018-03-28 10:18:26 EDT
Type: Bug
External Tracker: Red Hat Product Errata RHBA-2018:0489 (Last Updated: 2018-03-28 10:18 EDT)

Description Jian Zhang 2018-01-10 01:53:23 EST
Description of problem:
Errors occur when running the `apb run --action deprovision` command.

Version-Release number of selected component (if applicable):
apb version: apb-1.1.2-1.20171221180811.fc25.noarch

How reproducible:
Always

Steps to Reproduce:

1. Install the apb tool:
su -c 'wget https://copr.fedorainfracloud.org/coprs/g/ansible-service-broker/ansible-service-broker-latest/repo/epel-7/group_ansible-service-broker-ansible-service-broker-latest-epel-7.repo -O /etc/yum.repos.d/ansible-service-broker.repo'

sudo yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo yum -y install apb


2. Switch to the root user (su) and enable the docker daemon.

3. Clone the Hello World APB from https://github.com/ansibleplaybookbundle/hello-world-db-apb

4. Log in to a cluster.

5. Provision it via the `apb run` command.
[root@localhost hello-world-apb]# apb run --project jian --tag docker.io/ansibleplaybookbundle/hello-world-apb
Finished writing dockerfile.
Building APB using tag: [docker.io/ansibleplaybookbundle/hello-world-apb]
Successfully built APB image: docker.io/ansibleplaybookbundle/hello-world-apb
Creating project jian
Created project
Creating service account in jian
Created service account
Creating role binding for apb-run-hello-world-apbwdrjm in jian
Created Role Binding
Creating pod with image docker.io/ansibleplaybookbundle/hello-world-apb in jian
Created Pod
APB run started
APB run complete: Succeeded

6. Deprovision it via the `apb run --action deprovision` command.

Actual results:
[root@localhost hello-world-apb]# apb run --project jian --tag docker.io/ansibleplaybookbundle/hello-world-apb --action deprovision
Finished writing dockerfile.
Building APB using tag: [docker.io/ansibleplaybookbundle/hello-world-apb]
Successfully built APB image: docker.io/ansibleplaybookbundle/hello-world-apb
Creating project jian
Project jian already exists
Creating service account in jian
Created service account
Creating role binding for apb-run-hello-world-apbvlxq8 in jian
Exception occurred! (409)
Reason: Conflict
HTTP response headers: HTTPHeaderDict({'Date': 'Wed, 10 Jan 2018 05:19:47 GMT', 'Content-Length': '240', 'Content-Type': 'application/json', 'Cache-Control': 'no-store'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"rolebindings \"apb-run-hello-world-apb\" already exists","reason":"AlreadyExists","details":{"name":"apb-run-hello-world-apb","kind":"rolebindings"},"code":409}
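
Note the name mismatch above: the log prints the suffixed pod name apb-run-hello-world-apbvlxq8, but the 409 is for the unsuffixed role binding apb-run-hello-world-apb left over from the earlier provision run. A minimal sketch of the tolerant pattern that would avoid the failure, using the kubernetes Python client (an assumption about the fix, not apb's actual source):

from kubernetes.client.rest import ApiException

def ensure_role_binding(rbac_api, namespace, binding):
    # rbac_api: a kubernetes.client.RbacAuthorizationV1Api instance
    # binding:  the V1RoleBinding body for this APB run
    try:
        rbac_api.create_namespaced_role_binding(namespace, binding)
        print("Created Role Binding")
    except ApiException as exc:
        if exc.status == 409:  # AlreadyExists: reuse the leftover binding
            print("Role binding {} already exists".format(binding.metadata.name))
        else:
            raise

This matches the behavior seen later with apb 1.1.5 and 1.1.6, where the run logs "Role binding ... already exists" and continues.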


Expected results:
Should deprovision successfully.

Additional info:
Comment 3 Jian Zhang 2018-01-25 02:27:10 EST
The apb tool version:
[root@localhost mediawiki-apb]# rpm -qa | grep apb
apb-1.1.5-1.20180117190645.el7.centos.noarch

Following the above steps 1-5, the APB runs well, as shown below:
[root@host-172-16-120-121 ~]# oc get pods -n jian
NAME                         READY     STATUS             RESTARTS   AGE
apb-run-mediawiki-apbkxsmb   1/1       Running            0          44m

Then, execute the "deprovision" action:
[root@localhost mediawiki-apb]# apb run --project jian --tag registry.access.redhat.com/openshift3/mediawiki-123:latest --dockerfile Dockerfile-latest --action deprovision
Finished writing dockerfile.
Building APB using tag: [registry.access.redhat.com/openshift3/mediawiki-123:latest]
Successfully built APB image: registry.access.redhat.com/openshift3/mediawiki-123:latest
mediawiki_db_schema(required)[default: mediawiki]: 
mediawiki_site_name(required)[default: MediaWiki]: 
mediawiki_site_lang(required)[default: en]: 
mediawiki_admin_user(required)[default: admin]: 
mediawiki_admin_pass(required): test
Creating project jian
Project jian already exists
Creating service account in jian
Service account apb-run-mediawiki-apb already exists
Creating role binding for apb-run-mediawiki-apb in jian
Role binding apb-run-mediawiki-apb already exists
Creating pod with image registry.access.redhat.com/openshift3/mediawiki-123:latest in jian
Created Pod
APB run started
^CTraceback (most recent call last):
  File "/usr/bin/apb", line 9, in <module>
    load_entry_point('apb==1.1.5', 'console_scripts', 'apb')()
  File "/usr/lib/python2.7/site-packages/apb/cli.py", line 532, in main
    u'cmdrun_{}'.format(args.subcommand))(**vars(args))
  File "/usr/lib/python2.7/site-packages/apb/engine.py", line 1294, in cmdrun_run
    pod_completed = watch_pod(name, namespace)
  File "/usr/lib/python2.7/site-packages/apb/engine.py", line 687, in watch_pod
    sleep(WATCH_POD_SLEEP)
KeyboardInterrupt

Deprovision failed, and it actually created a new running APB pod!
[root@host-172-16-120-121 ~]# oc get pods -n jian
NAME                         READY     STATUS             RESTARTS   AGE
apb-run-mediawiki-apbkxsmb   1/1       Running            0          46m
apb-run-mediawiki-apbtxsqd   1/1       Running            0          48s
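
The traceback above shows the client parked in watch_pod in apb/engine.py, sleeping between polls with no visible timeout, which is why ^C was the only way out. A bounded variant might look like this sketch (the signature, constant value, and wiring are assumptions, not apb's actual implementation):

from time import sleep, time

WATCH_POD_SLEEP = 5  # seconds between polls (value assumed)

def watch_pod(name, namespace, core_api, timeout=600):
    # core_api: a kubernetes.client.CoreV1Api instance
    deadline = time() + timeout
    while time() < deadline:
        phase = core_api.read_namespaced_pod(name, namespace).status.phase
        if phase in ("Succeeded", "Failed"):
            return phase == "Succeeded"
        sleep(WATCH_POD_SLEEP)
    raise RuntimeError("Timed out waiting for pod {}/{}".format(namespace, name))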
Comment 4 David Zager 2018-01-25 04:04:08 EST
Looking at your output, I see two things that seem strange.

1) It looks like you hit ^C, which would exit the program.
2) Two APB pods are running. Did you run the deprovision action before the provision pod completed?

Both of these could cause unexpected behavior. Could you explain more about what you are doing?

1) Are you waiting for the provision to finish before moving on to the deprovision action?
2) Are you allowing the process to complete? I don't see an exception being thrown, but maybe I'm missing something.
Comment 5 Jian Zhang 2018-01-31 21:18:10 EST
David,

Answers below:
1) I did not get any response after a long time, so I hit ^C to interrupt it.
2) Yes, I ran the deprovision action after the provision completed.

Now, I used the latest version 1.1.6 to retest it.
[root@localhost postgresql-apb]# rpm -qa | grep apb
apb-1.1.6-1.20180123164703.el7.centos.noarch

Details below:
1. Provision an APB.

[root@localhost postgresql-apb]# apb run --project jian3 --tag registry.access.stage.redhat.com/openshift3/postgresql-apb:v3.9 --dockerfile Dockerfile-latest
Finished writing dockerfile.
Building APB using tag: [registry.access.stage.redhat.com/openshift3/postgresql-apb:v3.9]
Successfully built APB image: registry.access.stage.redhat.com/openshift3/postgresql-apb:v3.9
Select plan [dev, prod]: 
ERROR: Please enter valid plan
Select plan [dev, prod]: 
ERROR: Please enter valid plan
Select plan [dev, prod]: 
ERROR: Please enter valid plan
Select plan [dev, prod]: dev
postgresql_database(required)[default: admin]: 
postgresql_user(required)[default: admin]: 
postgresql_password(required): test
postgresql_version(required)[default: 9.6]: 
Creating project jian3
Created project
Creating service account in jian3
Created service account
Creating role binding for apb-run-postgresql-apb in jian3
Created Role Binding
Creating pod with image registry.access.stage.redhat.com/openshift3/postgresql-apb:v3.9 in jian3
Created Pod
APB run started
APB run complete: Succeeded

[root@host-172-16-120-21 ~]# oc get pods -n jian3
NAME                                  READY     STATUS      RESTARTS   AGE
apb-run-postgresql-apbkbnvw           0/1       Completed   0          1m
postgresql-9.6-dev-7c4cd6ff4d-6kkvn   1/1       Running     0          1m

2. Deprovision it.
[root@localhost postgresql-apb]# apb run --project jian3 --tag registry.access.stage.redhat.com/openshift3/postgresql-apb:v3.9 --dockerfile Dockerfile-latest --action deprovision
Finished writing dockerfile.
Building APB using tag: [registry.access.stage.redhat.com/openshift3/postgresql-apb:v3.9]
Successfully built APB image: registry.access.stage.redhat.com/openshift3/postgresql-apb:v3.9
Select plan [dev, prod]: dev
postgresql_database(required)[default: admin]: 
postgresql_user(required)[default: admin]: 
postgresql_password(required): test
postgresql_version(required)[default: 9.6]: 
Creating project jian3
Project jian3 already exists
Creating service account in jian3
Service account apb-run-postgresql-apb already exists
Creating role binding for apb-run-postgresql-apb in jian3
Role binding apb-run-postgresql-apb already exists
Creating pod with image registry.access.stage.redhat.com/openshift3/postgresql-apb:v3.9 in jian3
Created Pod
APB run started
APB run complete: Succeeded

[root@host-172-16-120-21 ~]# oc get pods -n jian3
NAME                          READY     STATUS      RESTARTS   AGE
apb-run-postgresql-apbkbnvw   0/1       Completed   0          4m
apb-run-postgresql-apbs2mht   0/1       Completed   0          2m

3. Delete these deploy pods.

[root@host-172-16-120-21 ~]# oc delete pods --all -n jian3
pod "apb-run-postgresql-apbkbnvw" deleted
pod "apb-run-postgresql-apbs2mht" deleted
[root@host-172-16-120-21 ~]# oc get all -n jian3
No resources found.

Overall, it looks good to me. But I think it would be better if we added the action info to the names of these deploy pods; that would be more readable. Maybe we need to open a new bug for this. What do you think?
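
For illustration, the suggestion could be as simple as folding the action into the generated pod name (a hypothetical sketch, not apb's actual naming code):

import random
import string

def run_pod_name(apb_name, action):
    # e.g. run_pod_name("mediawiki-apb", "deprovision")
    #  -> "apb-run-deprovision-mediawiki-apb" plus a random 5-char suffix
    suffix = "".join(random.choice(string.ascii_lowercase + string.digits)
                     for _ in range(5))
    return "apb-run-{}-{}{}".format(action, apb_name, suffix)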
Comment 6 David Zager 2018-02-01 08:47:44 EST
I think that creating a new bug to track the apb naming on `apb run` is a good idea; please do create one.
Comment 10 errata-xmlrpc 2018-03-28 10:18:26 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0489
