Bug 1644687

Summary: kubevirt-apb doesn't contain ovs-cni-manifests rpm
Product: Container Native Virtualization (CNV)
Reporter: Lukas Bednar <lbednar>
Component: Installation
Assignee: Ryan Hallisey <rhallise>
Status: CLOSED WONTFIX
QA Contact: Irina Gulina <igulina>
Severity: high
Priority: high
Version: 1.3
CC: cnv-qe-bugs, danken, fsimonce, ncredi, ohadlevy, rhallise
Target Milestone: ---
Target Release: 1.3
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: kubevirt-apb:v3.11-5
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-11-22 17:06:56 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Lukas Bednar 2018-10-31 12:09:58 UTC
Description of problem:

The kubevirt-apb container image does not have the ovs-cni-manifests rpm installed.

As a result, the APB uses the manifests from kubevirt-ansible/roles/network-multus/templates, which are not what was shipped.

Version-Release number of selected component (if applicable):
kubevirt-apb-v1.3.0-3


How reproducible: 100%


Steps to Reproduce:
1. Deploy KubeVirt using APB

Actual results:
kubevirt-apb deploys ovs-cni using the manifests bundled in kubevirt-ansible, instead of the manifests from the ovs-cni-manifests rpm:

/usr/share/ovs-cni/manifests/openshift-multus.yml
/usr/share/ovs-cni/manifests/openshift-ovs-vsctl.yml
/usr/share/ovs-cni/manifests/ovs-cni.yml

Expected results:

kubevirt-apb should deploy ovs-cni using the manifests from the ovs-cni-manifests rpm, not the ones bundled in kubevirt-ansible:

/usr/share/ovs-cni/manifests/openshift-multus.yml
/usr/share/ovs-cni/manifests/openshift-ovs-vsctl.yml
/usr/share/ovs-cni/manifests/ovs-cni.yml
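For illustration only, a minimal sketch of how the APB image could consume the rpm-provided manifests. This is a hypothetical Dockerfile fragment, not the shipped build file; it assumes the rpm installs its files under /usr/share/ovs-cni/manifests and that the APB renders templates from the network-multus role path mentioned later in this bug.

```
# Hypothetical fragment, not the shipped Dockerfile.
# Install the shipped manifests rpm, then overwrite the bundled
# kubevirt-ansible templates with the rpm's files so the APB
# deploys exactly what was shipped in ovs-cni-manifests.
RUN yum install -y ovs-cni-manifests && \
    cp /usr/share/ovs-cni/manifests/* \
       /etc/ansible/roles/kubevirt-ansible/roles/network-multus/templates/
```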

Additional info:

Comment 1 Nelly Credi 2018-11-06 13:58:24 UTC
@Dan, do you know what we are missing here to get it working?

Comment 2 Ryan Hallisey 2018-11-06 14:33:24 UTC
The latest downstream build uses ovs-cni-manifests.

Comment 3 Federico Simoncelli 2018-11-07 11:08:27 UTC
(In reply to Ryan Hallisey from comment #2)
> The latest downstream build uses ovs-cni-manifests.

What's the next step then? ON_QA?

Comment 4 Ohad Levy 2018-11-07 11:24:49 UTC
Based on the comment, I believe so. Updating accordingly.

Comment 5 Lukas Bednar 2018-11-07 11:43:08 UTC
We (QE) cannot test it at the moment; we are waiting for another rebuild of kubevirt-apb. The current APB is still in the cnv13-tech-preview namespace, and we need it in the cnv-tech-preview namespace, since our ASB is configured to pick up APBs from there.

Comment 6 Ryan Hallisey 2018-11-07 11:56:35 UTC
Build blocked on https://projects.engineering.redhat.com/browse/FACTORY-3482.

Comment 7 Nelly Credi 2018-11-07 14:45:28 UTC
Moving back to MODIFIED until the build is done.
Please move it to ON_QA once ready and fill in the 'Fixed In Version' field.

Comment 8 Ryan Hallisey 2018-11-08 15:33:14 UTC
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/cnv-tech-preview/kubevirt-apb:v1.3.0-5

Comment 9 Lukas Bednar 2018-11-13 11:45:15 UTC
Looking into kubevirt-apb-v3.11-4 ...

Looking at the Dockerfile, I am worried about the following line:

RUN cp /usr/share/ovs-cni/manifests/* /etc/ansible/roles/kubevirt-ansible/roles/kubevirt/templates/


I believe it should be:
RUN cp /usr/share/ovs-cni/manifests/* /etc/ansible/roles/kubevirt-ansible/roles/network-multus/templates/

Ryan, please correct me if I am wrong.

Comment 10 Ryan Hallisey 2018-11-13 16:37:44 UTC
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/cnv-tech-preview/kubevirt-apb:v3.11-5

Comment 11 Nelly Credi 2018-11-22 15:55:05 UTC
@Ryan, from what I understand, you are still using the templates from kubevirt-ansible and not the rpm,

so we should either put it back on ASSIGNED or close it as WONTFIX;
either way, QE cannot verify it.

WDYT?

Comment 12 Ryan Hallisey 2018-11-22 17:06:56 UTC
The APB was using them, but they have hard-coded values for things like registry/namespace/tag/image_name, so I went back to the kubevirt-ansible templates. I think closing as WONTFIX works.
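To illustrate the distinction Ryan describes (a hypothetical fragment; the actual manifest contents are not reproduced in this bug): the rpm ships fully rendered manifests with fixed values, while the kubevirt-ansible templates parameterize them, which is what the APB needs to retarget registry, namespace, and tag at deploy time.

```
# Hypothetical illustration only, not the shipped files.
# Rendered manifest from the ovs-cni-manifests rpm: value is baked in.
image: some-registry.example.com/some-namespace/ovs-cni:v1.3.0

# Equivalent kubevirt-ansible Jinja2 template: value comes from
# Ansible variables (variable names here are assumptions).
# image: "{{ docker_prefix }}/{{ image_name }}:{{ docker_tag }}"
```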