Bug 1664560 - virt-api pod cannot reach ready state on kubevirt namespace (downstream build)
Summary: virt-api pod cannot reach ready state on kubevirt namespace (downstream build)
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Installation
Version: 1.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 1.4
Assignee: Marcin Franczyk
QA Contact: Nelly Credi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-01-09 06:59 UTC by Juan Manuel Parrilla Madrid
Modified: 2019-03-05 14:46 UTC (History)

Fixed In Version: virt-api-container-v1.4.0-7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-03-05 14:46:58 UTC
Target Upstream Version:
Embargoed:



Description Juan Manuel Parrilla Madrid 2019-01-09 06:59:20 UTC
Created attachment 1519356 [details]
Kubevirt Manifest

Description of problem:

The problem I am facing is that the virt-api pod never reaches the READY state.

Version-Release number of selected component (if applicable):

- kubevirt-manifests-0.12.0-alpha.2.2.gee3e763.cd09f01.noarch.rpm
- kubevirt-virtctl-0.12.0-alpha.2.2.gee3e763.cd09f01.x86_64.rpm
- kubevirt-cdi-manifests-1.4.0-1.211c0a0.noarch.rpm
- ovs-cni-manifests-0.2.0-10.noarch.rpm
- RHEL 7.6

How reproducible:


Steps to Reproduce:
1. Add the usual privileged permission:
  - "oc adm policy add-scc-to-user privileged -n kubevirt -z kubevirt-privileged"
  - "oc adm policy add-scc-to-user privileged -n kubevirt -z kubevirt-controller"
  - "oc adm policy add-scc-to-user privileged -n kubevirt -z kubevirt-apiserver"

2. Change the registry to point to an internal one, on the manifest

3. Deploy CNV on OpenShift 3.11 using the manifest from the RPM listed above:
  - "oc create -f /usr/share/kubevirt/manifests/release/kubevirt.yaml"


Actual results:

In the virt-api pod log we can see this trace:

{"component":"virt-api","contentLength":28,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/2.0","remoteAddress":"10.130.0.1","statusCode":401,"timestamp":"2019-01-08T14:59:41.040165Z","url":"/apis/subresources.kubevirt.io/v1alpha2/healthz","username":"-"}
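The statusCode 401 on the /healthz URL suggests the readiness probe's request is being rejected as unauthenticated, which would keep the pod from going READY. As a quick illustration (a hypothetical helper, not part of KubeVirt; the sample line is copied from the trace above), failing probe requests can be filtered out of the structured virt-api log like this:

```python
import json

# Sample virt-api log line, copied verbatim from the report above.
line = ('{"component":"virt-api","contentLength":28,"level":"info",'
        '"method":"GET","pos":"filter.go:46","proto":"HTTP/2.0",'
        '"remoteAddress":"10.130.0.1","statusCode":401,'
        '"timestamp":"2019-01-08T14:59:41.040165Z",'
        '"url":"/apis/subresources.kubevirt.io/v1alpha2/healthz","username":"-"}')

def failed_probes(lines):
    """Yield (timestamp, url, statusCode) for healthz requests that failed."""
    for raw in lines:
        try:
            entry = json.loads(raw)
        except ValueError:
            continue  # skip non-JSON log lines
        if entry.get("url", "").endswith("/healthz") and entry.get("statusCode", 0) >= 400:
            yield entry["timestamp"], entry["url"], entry["statusCode"]

for ts, url, code in failed_probes([line]):
    print(ts, url, code)
```

Running this against the captured log line prints the single 401 healthz failure, confirming the probe (not the pod's own process) is what is reporting unhealthy.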

Expected results:

Pod reaching READY state

Additional info:

[root@dhcp8-120-110 ~]# oc get pods -n kubevirt
NAME                               READY     STATUS    RESTARTS   AGE
virt-api-85dd68c9dc-jfkvq          0/1       Running   0          6h
virt-api-85dd68c9dc-zkw4w          0/1       Running   0          6h
virt-controller-74f7f86987-llqrn   1/1       Running   0          6h
virt-controller-74f7f86987-rr2cz   1/1       Running   0          6h
virt-handler-hb6c7                 1/1       Running   0          6h
virt-handler-hn2hr                 1/1       Running   0          6h
virt-handler-vswv8                 1/1       Running   0          6h
virt-handler-wwv9h                 1/1       Running   0          6h
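Only the two virt-api replicas show 0/1 in the READY column; every other component is fully ready. A minimal sketch (a hypothetical helper, not a KubeVirt or oc feature; the sample output is a truncated copy of the table above) of checking that column mechanically:

```python
# Truncated sample of the "oc get pods -n kubevirt" output above.
PODS = """\
NAME                               READY     STATUS    RESTARTS   AGE
virt-api-85dd68c9dc-jfkvq          0/1       Running   0          6h
virt-api-85dd68c9dc-zkw4w          0/1       Running   0          6h
virt-controller-74f7f86987-llqrn   1/1       Running   0          6h
virt-handler-hb6c7                 1/1       Running   0          6h
"""

def not_ready(output):
    """Return names of pods whose READY column shows fewer ready containers than expected."""
    names = []
    for row in output.splitlines()[1:]:  # skip the header row
        name, ready = row.split()[:2]
        done, total = (int(n) for n in ready.split("/"))
        if done < total:
            names.append(name)
    return names

print(not_ready(PODS))
```

On the sample table this flags exactly the two virt-api pods, matching what the readiness-probe 401 in the log would predict.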

Comment 4 Marcin Franczyk 2019-01-09 13:47:48 UTC
A corrected container, virt-api-container-v1.4.0-7, has been delivered; please check whether it works for you.

Comment 5 Juan Manuel Parrilla Madrid 2019-01-09 15:14:54 UTC
The patch is working fine here.

Thanks

