Bug 1936926 - VM is created with kubevirt.io/v1 version, virt-template-validator fails to validate the VM
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: SSP
Version: 4.8.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 4.8.0
Assignee: Andrej Krejcir
QA Contact: Sarah Bennert
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-03-09 13:52 UTC by Ruth Netser
Modified: 2022-01-10 08:20 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-07-27 14:28:41 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github kubevirt ssp-operator pull 138 0 None Waiting on Red Hat [bz 1770053] Unable to add 3rd cluster node (DR) in pacemaker with SAP Hana Multi-tier System Replication 2022-05-06 20:27:35 UTC
Github kubevirt ssp-operator pull 139 0 None open (v0.1) Template validator webhook rule for v1 VMs 2021-03-30 08:49:00 UTC
Github kubevirt ssp-operator pull 166 0 None open Update webhook version for template validator to v1 2021-05-26 13:27:16 UTC
Github kubevirt ssp-operator pull 176 0 None open [v0.11] Update webhook version for template validator to v1 2021-05-26 14:46:18 UTC
Red Hat Product Errata RHSA-2021:2920 0 None None None 2021-07-27 14:29:15 UTC

Description Ruth Netser 2021-03-09 13:52:48 UTC
Description of problem:
VMs in 4.8 are created with API version kubevirt.io/v1; the template validator fails to validate them.

Version-Release number of selected component (if applicable):
CNV 4.8.0

How reproducible:
100%

Steps to Reproduce:
1. Create a VM using a template


Actual results:
Template validator does not validate the VM.

Expected results:
VM should be validated.

Additional info:
================= VM yaml ========================
apiVersion: v1
items:
- apiVersion: kubevirt.io/v1alpha3
  kind: VirtualMachine
  metadata:
    annotations:
      vm.kubevirt.io/flavor: tiny
      vm.kubevirt.io/os: fedora
      vm.kubevirt.io/validations: |
        [
          {
            "name": "minimal-required-memory",
            "path": "jsonpath::.spec.domain.resources.requests.memory",
            "rule": "integer",
            "message": "This VM requires more memory.",
            "min": 1073741824
          }
        ]
      vm.kubevirt.io/workload: server
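As a rough illustration (not the actual kubevirt-template-validator code) of what the "minimal-required-memory" rule above checks, in Python:

```python
# Rough sketch (NOT the actual kubevirt-template-validator code) of the
# "minimal-required-memory" rule: resolve the memory request, convert the
# Kubernetes quantity to bytes, and compare it against "min".
UNITS = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3,
         "K": 1000, "M": 1000**2, "G": 1000**3}

def parse_quantity(q):
    """Parse a Kubernetes quantity such as '1Gi' or '900Ki' into bytes."""
    # Check two-letter binary suffixes before one-letter decimal ones.
    for suffix, factor in sorted(UNITS.items(), key=lambda kv: -len(kv[0])):
        if q.endswith(suffix):
            return int(q[:-len(suffix)]) * factor
    return int(q)

def check_min_memory(template_spec, minimum=1073741824):
    """Apply the 'integer'/'min' rule to .spec.domain.resources.requests.memory."""
    mem = template_spec["domain"]["resources"]["requests"]["memory"]
    return parse_quantity(mem) >= minimum

spec = {"domain": {"resources": {"requests": {"memory": "1Mi"}}}}
print(check_min_memory(spec))  # prints False: 1Mi is below the 1Gi minimum
```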


================== VM created =======================
$ oc get vm -n default fed-vm -oyaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  annotations:
    kubevirt.io/latest-observed-api-version: v1
    kubevirt.io/storage-observed-api-version: v1alpha3
    vm.kubevirt.io/flavor: tiny
    vm.kubevirt.io/os: fedora
    vm.kubevirt.io/validations: |
      [
        {
          "name": "minimal-required-memory",
          "path": "jsonpath::.spec.domain.resources.requests.memory",
          "rule": "integer",
          "message": "This VM requires more memory.",
          "min": 1073741824
        }
      ]
    vm.kubevirt.io/workload: server
  creationTimestamp: "2021-03-09T13:30:07Z"



=========================================
$ oc logs -n openshift-cnv virt-template-validator-5487554889-tx9t6
{"component":"kubevirt-template-validator","level":"info","msg":"kubevirt-template-validator v0.8.0-4-g57dd230 (revision: REVISION_placeholder) starting","pos":"app.go:75","timestamp":"2021-03-09T05:02:04.193153Z"}
{"component":"kubevirt-template-validator","level":"info","msg":"kubevirt-template-validator using kubevirt client-go (v0.0.0-master+$Format:%h$ $Format:%H$ 1970-01-01T00:00:00Z)","pos":"app.go:76","timestamp":"2021-03-09T05:02:04.193243Z"}
W0309 05:02:04.193370       1 client_config.go:549] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
{"component":"kubevirt-template-validator","level":"info","msg":"certificate from /etc/webhook/certs with common name 'virt-template-validator.openshift-cnv.svc' retrieved.","pos":"tlsinfo.go:131","timestamp":"2021-03-09T05:02:04.196515Z"}
{"component":"kubevirt-template-validator","level":"info","msg":"validator app: started informers","pos":"app.go:97","timestamp":"2021-03-09T05:02:04.220981Z"}
{"component":"kubevirt-template-validator","level":"info","msg":"validator app: synched informers","pos":"app.go:102","timestamp":"2021-03-09T05:02:04.422066Z"}
{"component":"kubevirt-template-validator","level":"info","msg":"validator app: running with TLSInfo.CertsDirectory/etc/webhook/certs","pos":"app.go:105","timestamp":"2021-03-09T05:02:04.422154Z"}
{"component":"kubevirt-template-validator","level":"info","msg":"validator app: TLS configured, serving over HTTPS on 0.0.0.0:8443","pos":"app.go:113","timestamp":"2021-03-09T05:02:04.422191Z"}
E0309 07:24:56.192190       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""
E0309 07:31:56.702554       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=1, ErrCode=NO_ERROR, debug=""
E0309 13:27:03.870058       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=93, ErrCode=NO_ERROR, debug=""
E0309 13:30:20.411414       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=1, ErrCode=NO_ERROR, debug=""

Comment 2 Omer Yahud 2021-03-17 11:45:45 UTC
@akrejcir IIRC you already have a PR for this one

Comment 3 Andrej Krejcir 2021-03-18 10:09:42 UTC
I don't, but it should be a simple patch.

Comment 4 Andrej Krejcir 2021-03-19 13:58:12 UTC
I tried to reproduce this issue, but could not. For me, it works as expected.

Versions:
OCP: 4.8.0
Kubevirt: v0.39.0
Template validator: v0.9.0


Ruth, on which versions of components did you see this issue?

Comment 5 Ruth Netser 2021-03-22 07:03:07 UTC
Andrej, because of bug 1937307, I cannot extract the API version.
The cluster is with OCP 4.8.0-0.nightly-2021-03-17-045622, kubevirt-template-validator-container-v4.8.0-3

Comment 6 Andrej Krejcir 2021-03-26 11:58:34 UTC
I have created a PR that should fix this bug, but I cannot verify it because I cannot reproduce the issue.
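For reference, the fix amounts to widening the validator's webhook rule so that it also matches v1 VMs. A minimal sketch of the relevant ValidatingWebhookConfiguration excerpt (field values assumed, not the exact manifest from the PR):

```yaml
# Hypothetical excerpt, not the exact manifest from the PR. Before the fix
# the rule listed only v1alpha3, so VMs created as kubevirt.io/v1 never
# reached the validator.
webhooks:
- name: virt-template-admission.kubevirt.io
  rules:
  - apiGroups: ["kubevirt.io"]
    apiVersions: ["v1alpha3", "v1"]   # "v1" added by the fix
    operations: ["CREATE", "UPDATE"]
    resources: ["virtualmachines"]
```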

Comment 8 Sarah Bennert 2021-05-10 22:38:31 UTC
# Openshift cluster
4.8.0-fc.2

# HCO Operator
$ oc -n openshift-cnv get deploy hco-operator -oyaml
        image: registry.redhat.io/container-native-virtualization/hyperconverged-cluster-operator@sha256:7a233fab66fe0e38258ab2876cbb2a8394fae9a9f108f724cc92bc76b9a56c91

hyperconverged-cluster-operator-container-v4.8.0-46

# SSP Operator
$ oc -n openshift-cnv get deploy ssp-operator -oyaml
        image: registry.redhat.io/container-native-virtualization/kubevirt-ssp-operator@sha256:2a97a915503dafbe583bfe5a202ab39f72acb32f69dc094a8aa78845971e1f2d

kubevirt-ssp-operator-container-v4.8.0-27



The validation for fedora-server-tiny is for a minimum requested memory of 1Gi.

$ oc get template -n openshift fedora-server-tiny -oyaml
...
- apiVersion: kubevirt.io/v1
  kind: VirtualMachine
  metadata:
    annotations:
      vm.kubevirt.io/validations: |
        [
          {
            "name": "minimal-required-memory",
            "path": "jsonpath::.spec.domain.resources.requests.memory",
            "rule": "integer",
            "message": "This VM requires more memory.",
            "min": 1073741824
          }
        ]
    labels:
      app: ${NAME}
      vm.kubevirt.io/template: fedora-server-tiny
      vm.kubevirt.io/template.revision: "1"
      vm.kubevirt.io/template.version: v0.14.0
...



Example 1 (Requesting 1Mi):

cat << EOF > test-vm.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora
  annotations:
    vm.kubevirt.io/flavor: tiny
    vm.kubevirt.io/os: fedora
    vm.kubevirt.io/workload: server
spec:
  running: false
  template:
    spec:
      domain:
        cpu:
          sockets: 1
          cores: 1
          threads: 1
        resources:
          requests:
            memory: 1Mi
        devices: {}
EOF



$ oc create -f test-vm.yaml

Expected:
The request is invalid: spec.template.spec.domain.resources.requests.memory: spec.template.spec.domain.resources.requests.memory '1Mi': must be greater than or equal to 1Gi.

Actual:
virtualmachine.kubevirt.io/fedora created


Example 2 (Requesting 900Ki):

cat << EOF > test-vm2.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora
  annotations:
    vm.kubevirt.io/flavor: tiny
    vm.kubevirt.io/os: fedora
    vm.kubevirt.io/workload: server
spec:
  running: false
  template:
    spec:
      domain:
        cpu:
          sockets: 1
          cores: 1
          threads: 1
        resources:
          requests:
            memory: 900Ki
        devices: {}
EOF

$ oc create -f test-vm2.yaml

Expected:
The request is invalid: spec.template.spec.domain.resources.requests.memory: spec.template.spec.domain.resources.requests.memory '900Ki': must be greater than or equal to 1Gi.

Actual:
The request is invalid: spec.template.spec.domain.resources.requests.memory: spec.template.spec.domain.resources.requests.memory '900Ki': must be greater than or equal to 1M.

Comment 12 Andrej Krejcir 2021-05-26 13:37:09 UTC
The verification method as described in Comment 8 is incorrect.

The VMs are missing the required labels or annotation; without them, the template validator ignores the VM. Surprisingly, the "Example 2" VM should also have been ignored by the validator, but it was not.

The VM needs these labels to reference an existing template:
- vm.kubevirt.io/template: "fedora-server-tiny",
- vm.kubevirt.io/template.revision: "1",
- vm.kubevirt.io/template.version: "v0.14.0"

Or an annotation that contains validation rules:
- vm.kubevirt.io/validations: ...



While testing the new validator, I came across a related issue: the validator response was missing some required fields, so the Kubernetes webhook call failed with the following error:

$ oc create -f test-vm.yaml

Error from server (InternalError): error when creating "test-vm.yaml": Internal error occurred: failed calling webhook "virt-template-admission.kubevirt.io": expected webhook response of admission.k8s.io/v1, Kind=AdmissionReview, got /, Kind=


I have posted a PR to fix it.
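For context, admission.k8s.io/v1 requires the webhook response to carry the apiVersion and kind fields and to echo the request's uid; a minimal well-formed rejection looks roughly like this (illustrative values):

```yaml
# Minimal admission.k8s.io/v1 response; omitting apiVersion/kind produces
# exactly the "got /, Kind=" error quoted above.
apiVersion: admission.k8s.io/v1
kind: AdmissionReview
response:
  uid: "<uid copied from the AdmissionReview request>"
  allowed: false
  status:
    message: "This VM requires more memory."
```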

Comment 13 Sarah Bennert 2021-06-08 16:54:31 UTC
Verified.

$ cat <<EOF > test-vm.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-test-vm
  namespace: sarah-test
  annotations:
      vm.kubevirt.io/validations: '[
        {
            "name": "minimal-required-memory",
            "path": "jsonpath::.spec.domain.resources.requests.memory",
            "rule": "integer",
            "message": "This VM requires more memory.",
            "min": 1073741824
        }
      ]'
  labels:
      vm.kubevirt.io/template: "fedora-server-tiny"
      vm.kubevirt.io/template.revision: "1"
      vm.kubevirt.io/template.version: "v0.14.0"
spec:
  running: false
  template:
    spec:
      domain:
        cpu:
          sockets: 1
          cores: 1
          threads: 1
        resources:
          requests:
            memory: 1000Mi
        devices: {}
EOF

$ oc create -f test-vm.yaml

Expected/Actual:
The request is invalid: .spec.domain.resources.requests.memory: This VM requires more memory.: value 1048576000 is lower than minimum [1073741824]


Same test performed with memory request at "1Gi" and "1073741824".
In both cases, Expected/Actual:
virtualmachine.kubevirt.io/fedora-test-vm created

Comment 16 errata-xmlrpc 2021-07-27 14:28:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Virtualization 4.8.0 Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2920

