Bug 1597263
| Summary: | [Kubevirt APB] Failed installation shows up in the UI as successful | | |
|---|---|---|---|
| Product: | Container Native Virtualization (CNV) | Reporter: | Nelly Credi <ncredi> |
| Component: | Installation | Assignee: | Ryan Hallisey <rhallise> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Lukas Bednar <lbednar> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 1.1 | CC: | fsimonce, rhallise |
| Target Milestone: | --- | | |
| Target Release: | 1.1.1 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-11-09 12:29:11 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
Created attachment 1455947 [details]
pods state
@ryan did you handle this one already?

https://github.com/kubevirt/kubevirt-ansible/pull/306 Fixed upstream.

I am not sure how exactly PR#306 solves this issue. I believe we need to add a health check at the end of the kubevirt-apb tasks. We could start with one of the following options:

    curl -X GET -H "Authorization: Bearer $(oc whoami -t)" -k https://localhost:8443/apis/subresources.kubevirt.io/v1alpha2/version
    curl -X GET -H "Authorization: Bearer $(oc whoami -t)" -k https://localhost:8443/apis/kubevirt.io/v1alpha2/healthz

The health checks are a good addition. Do you want to add a retry loop around them and push them to kubevirt-ansible, Lukas? #306 was the last PR before 1.1 was cut, which is stable.

OK, it is not a problem to add it. Unfortunately, I found out that the health check is not working at the moment; I opened an issue about it here: https://github.com/kubevirt/kubevirt/issues/1442

That issue [1] doesn't seem to be moving, so I am marking this bug as verified and opening issue [2] on kubevirt-ansible to add the health check once [1] is implemented.

[1] https://github.com/kubevirt/kubevirt/issues/1442
[2] https://github.com/kubevirt/kubevirt-ansible/issues/370
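The retry loop suggested in the comments above could be sketched as a small POSIX shell helper. This is only a sketch: `retry`, `MAX_TRIES`, and `SLEEP_SECS` are names invented here, not part of the APB or kubevirt-ansible, and the commented usage assumes a live cluster reachable on localhost:8443.

```shell
# retry CMD...: run CMD up to MAX_TRIES times, sleeping SLEEP_SECS between
# attempts; succeed as soon as CMD does, fail if it never does.
MAX_TRIES=${MAX_TRIES:-30}
SLEEP_SECS=${SLEEP_SECS:-10}
retry() {
  i=0
  while ! "$@"; do
    i=$((i + 1))
    if [ "$i" -ge "$MAX_TRIES" ]; then
      echo "command failed after $MAX_TRIES attempts: $*" >&2
      return 1
    fi
    sleep "$SLEEP_SECS"
  done
}

# Usage against a live cluster (endpoint taken from the comments above):
# retry curl -sf -H "Authorization: Bearer $(oc whoami -t)" -k \
#   https://localhost:8443/apis/kubevirt.io/v1alpha2/healthz
```

Wrapping the health check this way lets the APB task fail the install outright instead of reporting success while the deployment is still broken.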
Created attachment 1455946 [details]
service events screenshot

Description of problem:
Failed installation shows up in the UI as successful. Pods are in ImagePullBackOff/ErrImagePull, but the KubeVirt provisioning service shows up as provisioned successfully.

Version-Release number of selected component (if applicable):
0.7.0-alpha.2

How reproducible:
100%

Steps to Reproduce:
1. Deploy KubeVirt with a bad registry value

Actual results:
Pods are in ImagePullBackOff/ErrImagePull state and the service claims to be successful.

Expected results:
The service should indicate that there was a failure.

Additional info:
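The mismatch described above (failed pods, "successful" service) could be caught with a post-install check on pod state. A sketch of such a check follows; `pods_pulled_ok` is a name invented here, and the namespace in the commented usage is an assumption, not taken from this bug:

```shell
# Sketch of a check the installer could run before reporting success:
# given "NAME<TAB>WAITING_REASON" lines (as produced by an "oc get pods"
# jsonpath query), fail if any container is stuck pulling its image.
pods_pulled_ok() {
  ! grep -qE 'ImagePullBackOff|ErrImagePull'
}

# Usage against a live cluster (namespace "kubevirt" is an assumption):
# oc get pods -n kubevirt -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[*].state.waiting.reason}{"\n"}{end}' \
#   | pods_pulled_ok || echo "installation failed: image pull errors"
```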