Created attachment 1519356 [details]
Kubevirt Manifest

Description of problem:
The virt-api pod cannot reach (and stay in) the READY state.

Version-Release number of selected component (if applicable):
- kubevirt-manifests-0.12.0-alpha.2.2.gee3e763.cd09f01.noarch.rpm
- kubevirt-virtctl-0.12.0-alpha.2.2.gee3e763.cd09f01.x86_64.rpm
- kubevirt-cdi-manifests-1.4.0-1.211c0a0.noarch.rpm
- ovs-cni-manifests-0.2.0-10.noarch.rpm
- RHEL 7.6

How reproducible:

Steps to Reproduce:
1. Grant the usual privileged permissions:
   - oc adm policy add-scc-to-user privileged -n kubevirt -z kubevirt-privileged
   - oc adm policy add-scc-to-user privileged -n kubevirt -z kubevirt-controller
   - oc adm policy add-scc-to-user privileged -n kubevirt -z kubevirt-apiserver
2. Edit the manifest so that the image registry points to an internal one.
3. Deploy CNV on OpenShift 3.11 using the manifest from the RPM listed above:
   oc create -f /usr/share/kubevirt/manifests/release/kubevirt.yaml

Actual results:
The virt-api pod log shows this trace:
{"component":"virt-api","contentLength":28,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/2.0","remoteAddress":"10.130.0.1","statusCode":401,"timestamp":"2019-01-08T14:59:41.040165Z","url":"/apis/subresources.kubevirt.io/v1alpha2/healthz","username":"-"}

Expected results:
The pod reaches the READY state.

Additional info:
[root@dhcp8-120-110 ~]# oc get pods -n kubevirt
NAME                               READY     STATUS    RESTARTS   AGE
virt-api-85dd68c9dc-jfkvq          0/1       Running   0          6h
virt-api-85dd68c9dc-zkw4w          0/1       Running   0          6h
virt-controller-74f7f86987-llqrn   1/1       Running   0          6h
virt-controller-74f7f86987-rr2cz   1/1       Running   0          6h
virt-handler-hb6c7                 1/1       Running   0          6h
virt-handler-hn2hr                 1/1       Running   0          6h
virt-handler-vswv8                 1/1       Running   0          6h
virt-handler-wwv9h                 1/1       Running   0          6h
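For reference, step 2 (repointing the manifest at an internal registry) can be sketched as a simple sed rewrite of the image references before running oc create. This is only an illustration: the source registry prefix (docker.io) and the internal registry hostname below are hypothetical placeholders, not values taken from this report, and the sample manifest fragment stands in for the real kubevirt.yaml.

```shell
# Sketch of step 2: rewrite image references in the KubeVirt manifest to
# point at an internal registry mirror before deploying.
# NOTE: registry hosts and the sample image line are hypothetical.

manifest=$(mktemp)
cat > "$manifest" <<'EOF'
        image: docker.io/kubevirt/virt-api:v0.12.0
EOF

internal_registry=registry.example.internal:5000

# Replace the upstream registry prefix with the internal one, in place.
sed -i "s|image: docker.io/|image: ${internal_registry}/|" "$manifest"

# Verify every image reference now points at the mirror.
grep 'image:' "$manifest"
```

On a real deployment the same substitution would be applied to /usr/share/kubevirt/manifests/release/kubevirt.yaml (after backing it up) before the oc create step.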
A new, fixed container image, virt-api-container-v1.4.0-7, has been delivered; please check whether it works for you.
The patch is working fine here. Thanks.