Description of problem:
The RHEL templates are using 'bridge' as the pod network interface while they should be using 'masquerade'. VMs using 'bridge' on the pod network cannot be migrated.

Version-Release number of selected component (if applicable):

Additional info:

oc get templates -n openshift rhel8-server-tiny-v0.6.2 -oyaml

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  annotations:
    defaults.template.kubevirt.io/disk: rootdisk
    description: This template can be used to create a VM suitable for Red Hat Enterprise Linux 7 and newer. The template assumes that a PVC is available which is providing the necessary RHEL disk image.
    iconClass: icon-rhel
    name.os.template.kubevirt.io/rhel8.0: Red Hat Enterprise Linux 8.0
    openshift.io/display-name: Red Hat Enterprise Linux 7.0+ VM
    openshift.io/documentation-url: https://github.com/kubevirt/common-templates
    openshift.io/provider-display-name: KubeVirt
    openshift.io/support-url: https://github.com/kubevirt/common-templates/issues
    tags: kubevirt,virtualmachine,linux,rhel
    template.kubevirt.io/editable: |
      /objects[0].spec.template.spec.domain.cpu.sockets
      /objects[0].spec.template.spec.domain.cpu.cores
      /objects[0].spec.template.spec.domain.cpu.threads
      /objects[0].spec.template.spec.domain.resources.requests.memory
      /objects[0].spec.template.spec.domain.devices.disks
      /objects[0].spec.template.spec.volumes
      /objects[0].spec.template.spec.networks
    template.kubevirt.io/version: v1alpha1
    template.openshift.io/bindable: "false"
    validations: |
      [
        {
          "name": "minimal-required-memory",
          "path": "jsonpath::.spec.domain.resources.requests.memory",
          "rule": "integer",
          "message": "This VM requires more memory.",
          "min": 2147483648
        },
      ]
  creationTimestamp: "2019-09-12T13:17:30Z"
  labels:
    flavor.template.kubevirt.io/tiny: "true"
    os.template.kubevirt.io/rhel8.0: "true"
    template.kubevirt.io/type: base
    workload.template.kubevirt.io/server: "true"
  name: rhel8-server-tiny-v0.6.2
  namespace: openshift
  ownerReferences:
  - apiVersion: kubevirt.io/v1
    kind: KubevirtCommonTemplatesBundle
    name: common-templates-hyperconverged-cluster
    uid: 49e5a996-d55f-11e9-b3ac-fa163ee93bc1
  resourceVersion: "838178"
  selfLink: /apis/template.openshift.io/v1/namespaces/openshift/templates/rhel8-server-tiny-v0.6.2
  uid: ac664b41-d55f-11e9-b1ec-0a580a80002f
objects:
- apiVersion: kubevirt.io/v1alpha3
  kind: VirtualMachine
  metadata:
    labels:
      app: ${NAME}
      vm.kubevirt.io/template: rhel8-server-tiny
      vm.kubevirt.io/template.revision: "1"
      vm.kubevirt.io/template.version: v0.6.2
    name: ${NAME}
  spec:
    running: false
    template:
      metadata:
        labels:
          kubevirt.io/domain: ${NAME}
          kubevirt.io/size: tiny
      spec:
        domain:
          cpu:
            cores: 1
            sockets: 1
            threads: 1
          devices:
            disks:
            - disk:
                bus: virtio
              name: rootdisk
            - disk:
                bus: virtio
              name: cloudinitdisk
            interfaces:
            - bridge: {}
              name: default
            rng: {}
          resources:
            requests:
              memory: 1G
        evictionStrategy: LiveMigrate
        networks:
        - name: default
          pod: {}
        terminationGracePeriodSeconds: 0
        volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: ${PVCNAME}
        - cloudInitNoCloud:
            userData: |-
              #cloud-config
              password: redhat
              chpasswd: { expire: False }
          name: cloudinitdisk
parameters:
- description: VM name
  from: rhel8-[a-z0-9]{16}
  generate: expression
  name: NAME
- description: Name of the PVC with the disk image
  name: PVCNAME
  required: true
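For reference, the fix is expected to switch the default interface binding on the pod network from bridge to masquerade, roughly like this (a sketch of the relevant fragment only; the exact merged template may differ):

            interfaces:
            - masquerade: {}   # masquerade on the pod network keeps the VM migratable
              name: default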
The templates we ship should use `masquerade`, but this should not block cnv-2.1.
In this case we should at least document it as a known issue, but I would like to get Steve's ack about pushing it out first. Just to be clear: when using the RHEL template to create a VM, it won't be migratable.
Right now there are two patches handling this bug: one in kubevirt/common-templates and one in kubevirt/kubevirt. Both of them change the templates to use a masquerade interface instead of bridge. The one in kubevirt/common-templates is already merged U/S (https://github.com/MarSik/kubevirt-ssp-operator/commits/master) and D/S (https://code.engineering.redhat.com/gerrit/gitweb?p=kubevirt-ssp-operator.git;a=shortlog;h=refs/heads/cnv-2.1-rhel-7). The one in kubevirt/kubevirt is still not merged.
Both patches are merged
https://github.com/kubevirt/kubevirt/pull/2701
Is this really ON_QA? Was it backported from master? What's the 'Fixed in version'?
It wasn't backported from master, but it is part of the D/S release. kubevirt-ssp-operator-container-v2.1.0-12 probably has the change.
I have a silly wording-related note: it was not 'backported', as that implies a backport to an older version (like 2.0). But it was released as part of 2.1.
Moving to 2.2.0 as we would like to avoid an HCO rebuild in 2.2.1.
Changing target release to 2.2.0 per previous comment.
Moving back to ON_QA. The "Fixed in version" is already set since 2.1.
Martin, based on https://bugzilla.redhat.com/show_bug.cgi?id=1751869#c10 I assume this is also a part of 2.2, right?
Yes, it is part of every release since 2.1 and that includes 2.2.
Testing on OCP 4.2 + CNV 2.2 with image container-native-virtualization-kubevirt-ssp-operator:v2.2.0-5, but I can still see it is using bridge:

$ oc get templates -n openshift rhel8-server-tiny-v0.6.2 -oyaml | grep -A 5 interface
            interfaces:
            - bridge: {}
              name: default
            rng: {}
          resources:
            requests:
Was that a clean setup? The name should be rhel8-server-tiny-v0.7.0. We stopped using v0.6.2 a long time ago. Both CNV 2.1 and CNV 2.2 have v0.7.0 in the name.
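A quick way to see which template versions actually ended up installed (just an illustrative check):

$ oc get templates -n openshift | grep rhel8-server-tiny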
Ah, I think I know what the issue is. The installed common templates are too old due to one forgotten default in the cPaaS-related files.
There is a simple workaround: the customer can redeploy the proper templates by providing the proper version in the CR. See here for an example: https://github.com/MarSik/kubevirt-ssp-operator/blob/master/deploy/crds/kubevirt_v1_commontemplatesbundle_cr.yaml

CNV 2.1 contains templates v0.7.0 and CNV 2.2 contains templates v0.8.0, both of which contain the masquerade fix. The only issue is that HCO is not providing the version (intentionally) and the default was kept at 0.6.2.
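A minimal sketch of such a CR, assuming the spec carries the version field shown in the linked example (the version value is the only workaround-specific part; kind and name are taken from the owner reference in the dump above):

apiVersion: kubevirt.io/v1
kind: KubevirtCommonTemplatesBundle
metadata:
  name: common-templates-hyperconverged-cluster
spec:
  version: v0.7.0   # v0.7.0 for CNV 2.1, v0.8.0 for CNV 2.2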
I will fix this for CNV 2.2. Clean installs will be fine, but upgrades might end up with both versions installed. We can either invest more time into it (somehow removing the old templates) or just add a release note.
SSP operator 2.2.0-7 should install the newest templates by default.
Would future updates to the template have the same problem? If not, I would not bother even with a release note.
(In reply to Dan Kenigsberg from comment #22)
> Would future updates to the template have the same problem? If not, I would
> not bother even with a release note.

No, all future updates will properly replace the existing templates (the names are stable since v0.7.0, which should have been the default in CNV 2.1). Any extra templates you might have (custom or from the past, like here) will stay though, so a one-time deletion might be needed to get into a sane state. The issue here is that the UI does not know how to handle this and will select a random template for a given OS, and that might mean the old one.
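For example, the stale templates from the old bundle could be cleaned up once with something along these lines (template names are illustrative; list them first and double-check before deleting anything):

$ oc get templates -n openshift | grep v0.6.2
$ oc delete template -n openshift rhel8-server-tiny-v0.6.2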
OCP 4.3 and CNV 2.2, new deployment: all templates have masquerade.
# oc get template -n openshift rhel7-server-tiny-v0.7.0 -o yaml | grep -i -A 3 interfaces
            interfaces:
            - masquerade: {}
              name: default
            networkInterfaceMultiqueue: true
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:0307