Bug 1597259 - [Kubevirt APB] No meaningful error message when trying to deploy twice
Summary: [Kubevirt APB] No meaningful error message when trying to deploy twice
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Installation
Version: 1.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 1.3
Assignee: Ryan Hallisey
QA Contact: Lukas Bednar
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2018-07-02 12:05 UTC by Nelly Credi
Modified: 2018-11-09 14:58 UTC
CC: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-11-09 14:58:01 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
oc get serviceinstances --all-namespaces -o yaml (5.11 KB, text/plain)
2018-08-13 14:42 UTC, Lukas Bednar

Description Nelly Credi 2018-07-02 12:05:17 UTC
Description of problem:
If you try to deploy the kubevirt APB a second time (with the same registry), it fails with a meaningless error.


Version-Release number of selected component (if applicable):
0.7.0-alpha.2

How reproducible:
100%

Steps to Reproduce:
1. Deploy kubevirt once.
2. Try to deploy it a second time with the same registry.

Actual results:
Error provisioning ServiceInstance of ClusterServiceClass (K8S: "fd9b21a9caa8bf8b42b27bb0c90d3b74" ExternalName: "dh-virtualization") at ClusterServiceBroker "ansible-service-broker": Status: 400; ErrorMessage: <nil>; Description: not found; ResponseError: <nil>

Expected results:
should give a meaningful error message

Additional info:

Comment 1 Tommy Hughes 2018-07-06 17:34:33 UTC
While I was able to reproduce the issue with v0.7.0-alpha.2 ... the issue appears to have been resolved with release v0.7.0. Repeated deployments of the kubevirt-apb to the same namespace will complete/provision successfully without error.

Comment 3 Steve Reichard 2018-07-17 12:58:01 UTC
Ryan, 
Looks like that PR was for 0.7.0; do we need a backport for 0.6.*?

Comment 4 Ryan Hallisey 2018-07-17 13:00:45 UTC
I branched after it was merged, so 0.6.2 already has it.

Comment 5 Nelly Credi 2018-07-19 09:14:20 UTC
@ryan, I tried it on the latest d/s build,
and I am getting
Error provisioning ServiceInstance of ClusterServiceClass (K8S: "fd9b21a9caa8bf8b42b27bb0c90d3b74" ExternalName: "dh-virtualization") at ClusterServiceBroker "ansible-service-broker": Status: 400; ErrorMessage: <nil>; Description: not found; ResponseError: <nil>

are you sure the fix is in?

Comment 6 Ryan Hallisey 2018-07-19 12:07:47 UTC
It worked for me yesterday.  Are you using 'brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/cnv11-tech-preview/kubevirt-apb:1.1' ?

Comment 7 Lukas Bednar 2018-08-13 14:42:30 UTC
Created attachment 1475589 [details]
oc get serviceinstances --all-namespaces  -o yaml

Comment 8 Lukas Bednar 2018-08-13 14:44:32 UTC
I was able to create two service instances without any error.

[root@cnv-executor-lbednar-master1 ~]# oc get serviceinstances -n kube-system 
NAME        AGE
kubevirt    21m
kubevirt2   5m

Comment 9 Lukas Bednar 2018-08-31 08:37:03 UTC
Based on https://bugzilla.redhat.com/show_bug.cgi?id=1597259#c8 moving back to assigned.

Comment 10 Ryan Hallisey 2018-11-06 17:12:58 UTC
Is this bug about a meaningless error being reported, or about the need to fail with a meaningful error on a second deployment? If it's the former, then we can close this.

Comment 11 Lukas Bednar 2018-11-08 09:04:09 UTC
I didn't encounter any meaningless error; it just let me provision two kubevirt service instances (https://bugzilla.redhat.com/show_bug.cgi?id=1597259#c8).

I don't know what the proper solution is, but I believe that having multiple kubevirt service instances deployed is not appropriate.

Comment 12 Ryan Hallisey 2018-11-09 14:58:01 UTC
The broker doesn't have a mechanism to block this. We would have to implement something on the kubevirt-apb side. I think we can close this, at least for the 1.3 timeframe.
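A guard on the kubevirt-apb side, as Comment 12 suggests, could be a pre-flight check in the APB's provision role. The following is only a hypothetical sketch, not part of the actual APB: the `k8s_info` module, the `kube-system` namespace, and the `virt-controller` deployment name are all assumptions for illustration.

```yaml
# Hypothetical pre-flight guard for the kubevirt-apb provision role.
# Assumptions: KubeVirt deploys a "virt-controller" Deployment into
# kube-system, and the Ansible k8s_info module is available.
- name: Check whether KubeVirt is already deployed
  k8s_info:
    api_version: apps/v1
    kind: Deployment
    namespace: kube-system
    name: virt-controller
  register: existing_kubevirt

- name: Refuse a second deployment with a meaningful error
  fail:
    msg: >-
      KubeVirt is already provisioned in this cluster;
      refusing to deploy a second service instance.
  when: existing_kubevirt.resources | length > 0
```

With a guard like this, a second provision attempt would fail early with a clear message instead of the opaque "Status: 400 ... Description: not found" error from the broker.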
