Bug 1332432

Summary: Router creation is reported as failed in CLI output even when the router was created successfully
Product: OKD Reporter: Meng Bo <bmeng>
Component: oc Assignee: Clayton Coleman <ccoleman>
Status: CLOSED CURRENTRELEASE QA Contact: Meng Bo <bmeng>
Severity: medium Docs Contact:
Priority: medium    
Version: 3.x CC: aos-bugs, ghuang, mmccomas, pruan, pweil, xxia
Target Milestone: ---   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2016-12-09 21:51:23 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:

Description Meng Bo 2016-05-03 07:51:59 UTC
Description of problem:
Create a router on a cluster that previously had a router.
The CLI reports failure because the serviceaccount and rolebinding already exist.

But the router was actually created successfully.


Version-Release number of selected component (if applicable):
oc v1.3.0-alpha.0-267-gcd62e58


How reproducible:
always

Steps to Reproduce:
1. Create router on the cluster
# oadm policy add-scc-to-user hostnetwork -z router
# oadm router

2. Delete the router above
# oc delete dc/router svc/router

3. Create the router again
# oadm router



Actual results:
The CLI reports that the operation failed.
[root@master ~]# oadm router 
info: password for stats user admin has been set to 9pNTg5lJwT
--> Creating router router ...
    error: serviceaccounts "router" already exists
    error: rolebinding "router-router-role" already exists
    deploymentconfig "router" created
    service "router" created
--> Failed

But the router was created successfully.
[root@master ~]# oc get po
NAME             READY     STATUS    RESTARTS   AGE
router-1-oejeh   1/1       Running   0          55s



Expected results:
The command should not report failure if the intermediate errors do not prevent the resources from being created.

Additional info:

Comment 1 Fabiano Franz 2016-05-03 17:58:25 UTC
I think we should tolerate the existing serviceaccount and rolebinding, warn about it but exit successfully.

Comment 2 Xingxing Xia 2016-05-04 04:21:11 UTC
Agreed with Fabiano. The same applies to `oadm registry`.

Comment 3 Xingxing Xia 2016-05-16 07:07:28 UTC
It seems some oc sub-commands have a similar issue too:
$ oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-hello-world.git
---- snipped ----
--> Creating resources with label app=ruby-hello-world ...
    error: imagestreams "ruby-22-centos7" already exists
    imagestream "ruby-hello-world" created
    buildconfig "ruby-hello-world" created
    deploymentconfig "ruby-hello-world" created
    service "ruby-hello-world" created
--> Failed

Comment 4 Gan Huang 2016-10-10 10:24:57 UTC
After a fresh install, the CLI reported

"serviceaccounts \"ipfailover\" already exists" 

when running 

"oadm ipfailover --create --service-account=ipfailover --interface=eth0 --selector='region=infra' --replicas=2 --virtual-ips=\"192.168.0.4\" --credentials=/etc/origin/master/openshift-router.kubeconfig"

I suspect this issue was caused by the same bug.

This worked well in OCP 3.2.

Comment 5 Gan Huang 2016-10-10 10:30:10 UTC
Setting the severity to medium, as it might block the Ansible installation when this command is run in an Ansible playbook.

Please see https://bugzilla.redhat.com/show_bug.cgi?id=1383233

Comment 6 Paul Weil 2016-10-28 13:38:23 UTC
proposed fix: https://github.com/openshift/origin/pull/11639

Comment 7 Xingxing Xia 2016-11-02 11:03:04 UTC
Tested with openshift/oc v1.4.0-alpha.0+90d8c62-1000.
`oadm router` now outputs correctly.
`oadm registry` and `oc new-app` still output the same as before.
See below:

# CA=/openshift.local.config/master

# oc delete dc/router svc/router --config=$CA/admin.kubeconfig
deploymentconfig "router" deleted
service "router" deleted
# oadm router --service-account=router -n default --config=$CA/admin.kubeconfig # > openshift-oadm_router_password.txt
info: password for stats user admin has been set to HpEIjF1F1G
--> Creating router router ...
    warning: serviceaccounts "router" already exists
    warning: clusterrolebinding "router-router-role" already exists
    deploymentconfig "router" created
    service "router" created
--> Success
# echo $?
0

# oc delete dc/docker-registry svc/docker-registry --config=$CA/admin.kubeconfig
deploymentconfig "docker-registry" deleted
service "docker-registry" deleted
# oadm registry --service-account=registry -n default --config=$CA/admin.kubeconfig
--> Creating registry registry ...
    error: serviceaccounts "registry" already exists
    error: clusterrolebinding "registry-registry-role" already exists
    deploymentconfig "docker-registry" created
    service "docker-registry" created
--> Failed
# echo $?
1

# oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-hello-world.git --config=$CA/admin.kubeconfig -n xxia-test
--> Found Docker image a74366c (13 hours old) from Docker Hub for "centos/ruby-22-centos7"

    Ruby 2.2 
    -------- 
    Platform for building and running Ruby 2.2 applications

    Tags: builder, ruby, ruby22

    * An image stream will be created as "ruby-22-centos7:latest" that will track the source image
    * A source build using source code from https://github.com/openshift/ruby-hello-world.git will be created
      * The resulting image will be pushed to image stream "ruby-hello-world:latest"
      * Every time "ruby-22-centos7:latest" changes a new build will be triggered
    * This image will be deployed in deployment config "ruby-hello-world"
    * Port 8080/tcp will be load balanced by service "ruby-hello-world"
      * Other containers can access this service through the hostname "ruby-hello-world"

--> Creating resources ...
    error: imagestreams "ruby-22-centos7" already exists
    imagestream "ruby-hello-world" created
    buildconfig "ruby-hello-world" created
    deploymentconfig "ruby-hello-world" created
    service "ruby-hello-world" created
--> Failed
# echo $?
1

Comment 8 Paul Weil 2016-11-07 13:20:15 UTC
Xingxing Xia - since this issue was specifically about the router, that is all that is fixed. We can update the registry in a subsequent issue; we now have the infrastructure available to make that change fairly easily.

For new-app, I think there are far too many variables to consider to provide a concrete rule for when to emit warnings versus errors. That command should not be changed.

Let's scope this to router only.  Thanks!

Comment 9 Xingxing Xia 2016-11-08 02:21:19 UTC
Paul, thank you for the clarification. Moving to VERIFIED.

Comment 10 Ben Bennett 2016-12-20 15:06:21 UTC
*** Bug 1381378 has been marked as a duplicate of this bug. ***