Bug 1741626
| Summary: | VM masquerade binding not working on RHEL7 worker nodes (OCP 4.2) | | |
|---|---|---|---|
| Product: | Container Native Virtualization (CNV) | Reporter: | Jenifer Abrams <jhopper> |
| Component: | Documentation | Assignee: | Andrew Burden <aburden> |
| Status: | CLOSED ERRATA | QA Contact: | Irina Gulina <igulina> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 2.0 | CC: | aburden, atragler, cnv-qe-bugs, danken, dcbw, egarver, myakove, ncredi, phoracek, psutter, sscheink |
| Target Milestone: | --- | Keywords: | Reopened |
| Target Release: | 2.2.0 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | 2.1, 2.2 | Doc Type: | Release Note |
| Doc Text: | The `masquerade` VM binding method does not work and is not supported on RHEL7 worker nodes. | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-01-30 16:27:13 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Jenifer Abrams
2019-08-15 15:54:34 UTC
Sebastian Scheinkman:

I was debugging this issue; here is some more context.

The CNV images we are using are based on the ubi8 image and RHEL 8. That image ships iptables v1.8.2 (nf_tables), which means the iptables binary is a wrapper around the nftables commands. The RHEL 7.6 kernel is:

    Linux vm-2 3.10.0-957.el7.x86_64 #1 SMP Thu Oct 4 20:48:51 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

nftables is supported from kernel >= 3.13. I tried to find an iptables-legacy package but was not able to find one.

We have also had an issue with another combination: the virt-launcher pod was fedora30 and the host was CoreOS8. From the fedora30 pod, `iptables --version` reports: iptables v1.8.2 (legacy). To fix that issue we introduced nftables rule creation as a fallback when iptables fails: https://github.com/kubevirt/kubevirt/pull/2430

The question here is whether we have a way to support the legacy iptables binary in the ubi8 image.

From fedora30:

    yum provides iptables-legacy
    Last metadata expiration check: 0:43:31 ago on Thu 15 Aug 2019 02:21:32 PM UTC.
    iptables-1.8.2-1.fc30.x86_64 : Tools for managing Linux kernel packet filtering capabilities
    Repo         : @System
    Matched from:
    Filename     : /usr/sbin/iptables-legacy

From ubi8:

    yum provides iptables-legacy
    Updating Subscription Management repositories.
    Unable to read consumer identity
    This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
    Last metadata expiration check: 0:01:25 ago on Thu Aug 15 15:01:34 2019.
    Error: No Matches found

Dan Kenigsberg:

Better ask Anita if el8 still carries any userland code that speaks to the iptables kernel interface.

Eric Garver:

(In reply to Dan Kenigsberg from comment #2)
> Better ask Anita if el8 still carries any userland code that speaks to the iptables kernel interface.

It does not. RHEL-8 only has iptables-nft.

IIRC, OpenShift's solution is to mount the host's rootfs and call the host's native version of iptables. Is CNV different?

Sebastian Scheinkman:

Hi Eric,

Thanks for the comment. Right now we can't do that. The iptables rules are created in our virt-launcher pod (which represents the running virtual machine), so we can't mount the host rootfs into that pod: the user has access to it, and that could lead to a security issue.

Phil Sutter:

Hi Sebastian,

(In reply to Sebastian Scheinkman from comment #4)
> The iptables rules are created in our virt-launcher pod (which represents
> the running virtual machine), so we can't mount the host rootfs into that
> pod: the user has access to it, and that could lead to a security issue.

So providing (read-only) access to the host's rootfs to a container that manipulates the host's firewall configuration may lead to a security issue? Who's auditing that setup?

Cheers, Phil

Eric Garver:

Can you explicitly state which RHEL versions are used on the host, the containers/pods, and the virt-launcher pods?

In general, if there is a mismatch between container and host, then the host's iptables must be used. This is the case in OpenShift. See here:

- https://github.com/openshift/sdn/blob/master/images/node/Dockerfile#L22
- https://github.com/openshift/cluster-network-operator/blob/master/bindata/network/openshift-sdn/sdn.yaml#L126 (note the read-only mount)
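To make the version-mixing discussion concrete, here is a minimal sketch of the fallback approach from Sebastian's first comment (introduced upstream in kubevirt/kubevirt PR #2430): detect which backend the image's iptables binary speaks, try iptables first, and fall back to the nft CLI if it fails. This is an illustration under assumptions, not KubeVirt's actual code; the function names, the "ip nat" table/chain, and the example rule are invented for the sketch. Note that on an el7 (3.10) kernel both paths can fail because the kernel lacks nf_tables support, which is why this bug was ultimately documented as a limitation rather than fixed.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// iptablesBackend parses `iptables --version` output such as
// "iptables v1.8.2 (nf_tables)" to report which backend the binary
// in this image talks to.
func iptablesBackend() (string, error) {
	out, err := exec.Command("iptables", "--version").CombinedOutput()
	if err != nil {
		return "", err
	}
	s := string(out)
	switch {
	case strings.Contains(s, "nf_tables"):
		return "nf_tables", nil
	case strings.Contains(s, "legacy"):
		return "legacy", nil
	default:
		// Pre-1.8 iptables (e.g. v1.4.21 on el7) prints no backend
		// suffix; it always speaks the legacy kernel interface.
		return "legacy", nil
	}
}

// addMasqueradeRule tries iptables first and, if that fails (as the
// nf_tables wrapper does on a 3.10 el7 kernel), falls back to the nft
// CLI: the same "create nftables rules if iptables fails" idea as the
// PR above. The rule itself is a generic NAT masquerade example.
func addMasqueradeRule(sourceCIDR string) error {
	ipt := exec.Command("iptables", "-t", "nat", "-A", "POSTROUTING",
		"-s", sourceCIDR, "-j", "MASQUERADE")
	if err := ipt.Run(); err == nil {
		return nil
	}
	// Assumes an "ip nat" table with a "postrouting" chain already exists.
	nft := exec.Command("nft", "add", "rule", "ip", "nat", "postrouting",
		"ip", "saddr", sourceCIDR, "masquerade")
	return nft.Run()
}

func main() {
	backend, err := iptablesBackend()
	if err != nil {
		fmt.Println("could not detect iptables backend:", err)
	} else {
		fmt.Println("iptables backend:", backend)
	}
	if err := addMasqueradeRule("10.0.2.0/24"); err != nil {
		fmt.Println("adding masquerade rule failed:", err)
	}
}
```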
Jenifer Abrams:

(In reply to Eric Garver from comment #6)
> Can you explicitly state which RHEL versions are used on the host, the
> containers/pods, and the virt-launcher pods?

My combination was a RHEL 7.6 worker node: 3.10.0-957.10.1.el7.x86_64

With CNV 2.0, the virt-launcher uses ubi8: https://access.redhat.com/containers/#/registry.access.redhat.com/container-native-virtualization/virt-launcher/images/v2.0.0-39

I believe Sebastian's report about the other mixing issue is for upstream KubeVirt with virt-launcher using fc30 on a CoreOS 4.2 node. I will let the CNV team speak to how they want to handle the mismatch cases.

Dan Kenigsberg:

Dan W is correct about how CNV can fix this deficiency, but we are unlikely to address it soon. Please document that, in the context of CNV 2.x, the `masquerade` binding method is not supported on el7 nodes.

Andrew Burden:

Thanks, Dan. Known Issue added to the 2.1 Release Notes: "The `masquerade` binding method for virtual machines cannot be used in clusters with RHEL 7 compute nodes." PR: https://github.com/openshift/openshift-docs/pull/18255

Nelly, there doesn't seem to be a QE contact assigned to this bug. Can you please assign someone for review?

Please add the Fixed In Version.

Andrew Burden:

Right, yes, I forgot I need to show the 'advanced fields' now for the QE contact. Fixed In Version is 2.1 and 2.2 because this release note will be published for 2.1, and I've made a note to retain it for subsequent versions, as it will continue to be relevant.

If there is an RFE opened to fix this in the future, as comment #1741626#c19 by Dan suggests, I would recommend referring to that RFE in the release note instead of the current BZ. Otherwise, the current release note looks good.

We don't have an RFE opened for this on BZ; it is tracked only internally in our Jira.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:0307

Update: it was decided not to fix this issue at the moment.

I think we should remove it from the known issues if we believe it is not relevant for our customers.

The needinfo request[s] on this closed bug have been removed, as they have been unresolved for 1000 days.
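Since the release note leaves it to cluster administrators to know whether any compute nodes run RHEL 7 before relying on the `masquerade` binding, one way to check is `oc get nodes -o wide` (see the OS-IMAGE and KERNEL-VERSION columns). The sketch below does the same programmatically; it is a hypothetical helper, not part of CNV, and assumes client-go v0.18+ signatures and a kubeconfig at ~/.kube/config.

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Load the usual out-of-cluster config from ~/.kube/config.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		// OSImage looks like "Red Hat Enterprise Linux Server 7.6 (Maipo)"
		// on the nodes this bug concerns; KernelVersion like "3.10.0-957...".
		info := node.Status.NodeInfo
		fmt.Printf("%s\t%s\tkernel %s\n", node.Name, info.OSImage, info.KernelVersion)
	}
}
```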