Bug 1881658 - Fail to start VM from template - "PVC default/fedora-dv owned by DataVolume fedora-dv cannot be used as a volume source. Use DataVolume instead"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Storage
Version: 2.5.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 2.5.0
Assignee: Alex Kalenyuk
QA Contact: Alex Kalenyuk
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-09-22 20:12 UTC by Ruth Netser
Modified: 2020-11-17 13:24 UTC
CC List: 7 users

Fixed In Version: hco-bundle-registry-container-v2.5.0-329 virt-operator-container-v2.5.0-79
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-11-17 13:24:24 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
- GitHub kubevirt/kubevirt pull 4277 (closed): Allow PVC as volume source with a DV populating the PVC. Last updated 2020-12-29 10:02:06 UTC
- GitHub kubevirt/kubevirt pull 4315 (closed): [release-0.34] Allow PVC as volume source with a DV populating the PVC. Last updated 2020-12-29 10:02:06 UTC
- Red Hat Product Errata RHEA-2020:5127. Last updated 2020-11-17 13:24:39 UTC

Description Ruth Netser 2020-09-22 20:12:38 UTC
Description of problem:
Cannot start a VM which was created from common templates:
"PVC default/fedora-dv owned by DataVolume fedora-dv cannot be used as a volume source. Use DataVolume instead"

Version-Release number of selected component (if applicable):
CNV 2.5.0

How reproducible:
100%

Steps to Reproduce:
1. Create a fedora DV

2. Create a Fedora VM using fedora common template
oc process -n openshift fedora-server-tiny-v0.11.3 -p NAME="fedora32" -p PVCNAME="fedora-dv" | oc create -f -

3. Start the VM
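
For reference, a minimal end-to-end reproduction might look like the following (a sketch only: the manifest filename is illustrative, the DV manifest is the one shown in comment 2, and virtctl is assumed to be installed):

$ oc create -f fedora-dv.yaml    # DataVolume manifest from comment 2; filename is illustrative
$ oc process -n openshift fedora-server-tiny-v0.11.3 -p NAME="fedora32" -p PVCNAME="fedora-dv" | oc create -f -
$ virtctl start fedora32
$ oc get vm fedora32 -oyaml      # the Failure condition appears under status.conditions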


Actual results:
VM fails to start

Expected results:
VM should be running

Additional info:
=========================================================
$ oc get dv -oyaml
apiVersion: v1
items:
- apiVersion: cdi.kubevirt.io/v1beta1
  kind: DataVolume
  metadata:
    creationTimestamp: "2020-09-22T19:57:30Z"
    generation: 40
    managedFields:
    - apiVersion: cdi.kubevirt.io/v1alpha1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          .: {}
          f:pvc:
            .: {}
            f:accessModes: {}
            f:resources:
              .: {}
              f:requests:
                .: {}
                f:storage: {}
            f:storageClassName: {}
            f:volumeMode: {}
          f:source:
            .: {}
            f:http:
              .: {}
              f:url: {}
      manager: kubectl-create
      operation: Update
      time: "2020-09-22T19:57:30Z"
    - apiVersion: cdi.kubevirt.io/v1beta1
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          .: {}
          f:conditions: {}
          f:phase: {}
          f:progress: {}
      manager: virt-cdi-controller
      operation: Update
      time: "2020-09-22T19:58:51Z"
    name: fedora-dv
    namespace: default
    resourceVersion: "13202357"
    selfLink: /apis/cdi.kubevirt.io/v1beta1/namespaces/default/datavolumes/fedora-dv
    uid: 77d0fde7-fb34-410f-8b2f-3d376142180b
  spec:
    pvc:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 25Gi
      storageClassName: hostpath-provisioner
      volumeMode: Filesystem
    source:
      http:
        url: http://cnv-qe-server.rhevdev.lab.eng.rdu2.redhat.com/files/cnv-tests/fedora-images/Fedora-Cloud-Base-32-1.6.x86_64.qcow2
  status:
    conditions:
    - lastHeartbeatTime: "2020-09-22T19:57:30Z"
      lastTransitionTime: "2020-09-22T19:57:30Z"
      message: PVC fedora-dv Bound
      reason: Bound
      status: "True"
      type: Bound
    - lastHeartbeatTime: "2020-09-22T19:58:51Z"
      lastTransitionTime: "2020-09-22T19:58:51Z"
      status: "True"
      type: Ready
    - lastHeartbeatTime: "2020-09-22T19:58:51Z"
      lastTransitionTime: "2020-09-22T19:58:51Z"
      message: Import Complete
      reason: Completed
      status: "False"
      type: Running
    phase: Succeeded
    progress: 100.0%
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""


=========================================================
$ oc get vm fedora32 -oyaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  annotations:
    kubevirt.io/latest-observed-api-version: v1alpha3
    kubevirt.io/storage-observed-api-version: v1alpha3
  creationTimestamp: "2020-09-22T20:00:22Z"
  generation: 2
  labels:
    app: fedora32
    vm.kubevirt.io/template: fedora-server-tiny-v0.11.3
    vm.kubevirt.io/template.revision: "1"
    vm.kubevirt.io/template.version: v0.12.0
  managedFields:
  - apiVersion: kubevirt.io/v1alpha3
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
          f:vm.kubevirt.io/template: {}
          f:vm.kubevirt.io/template.revision: {}
          f:vm.kubevirt.io/template.version: {}
      f:spec:
        .: {}
        f:template:
          .: {}
          f:metadata:
            .: {}
            f:labels:
              .: {}
              f:kubevirt.io/domain: {}
              f:kubevirt.io/size: {}
          f:spec:
            .: {}
            f:domain:
              .: {}
              f:cpu:
                .: {}
                f:cores: {}
                f:sockets: {}
                f:threads: {}
              f:devices:
                .: {}
                f:disks: {}
                f:interfaces: {}
                f:networkInterfaceMultiqueue: {}
                f:rng: {}
              f:machine:
                .: {}
                f:type: {}
              f:resources:
                .: {}
                f:requests:
                  .: {}
                  f:memory: {}
            f:evictionStrategy: {}
            f:networks: {}
            f:terminationGracePeriodSeconds: {}
            f:volumes: {}
    manager: kubectl-create
    operation: Update
    time: "2020-09-22T20:00:22Z"
  - apiVersion: kubevirt.io/v1alpha3
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:running: {}
    manager: virt-api
    operation: Update
    time: "2020-09-22T20:00:34Z"
  - apiVersion: kubevirt.io/v1alpha3
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubevirt.io/latest-observed-api-version: {}
          f:kubevirt.io/storage-observed-api-version: {}
      f:status:
        .: {}
        f:conditions: {}
    manager: virt-controller
    operation: Update
    time: "2020-09-22T20:00:34Z"
  name: fedora32
  namespace: default
  resourceVersion: "13205685"
  selfLink: /apis/kubevirt.io/v1alpha3/namespaces/default/virtualmachines/fedora32
  uid: 87222a9b-c44a-4ddf-b8d0-db33388f7929
spec:
  running: true
  template:
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/domain: fedora32
        kubevirt.io/size: tiny
    spec:
      domain:
        cpu:
          cores: 1
          sockets: 1
          threads: 1
        devices:
          disks:
          - disk:
              bus: virtio
            name: rootdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
          interfaces:
          - masquerade: {}
            name: default
          networkInterfaceMultiqueue: true
          rng: {}
        machine:
          type: pc-q35-rhel8.2.0
        resources:
          requests:
            memory: 1Gi
      evictionStrategy: LiveMigrate
      networks:
      - name: default
        pod: {}
      terminationGracePeriodSeconds: 180
      volumes:
      - name: rootdisk
        persistentVolumeClaim:
          claimName: fedora-dv
      - cloudInitNoCloud:
          userData: |-
            #cloud-config
            password: fedora
            chpasswd: { expire: False }
        name: cloudinitdisk
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-09-22T20:00:34Z"
    message: PVC default/fedora-dv owned by DataVolume fedora-dv cannot be used as a volume source. Use DataVolume instead
    reason: FailedCreate
    status: "True"
    type: Failure


=========================================================
$ oc logs -n openshift-cnv virt-controller-7f5554b5c5-gml7v
...
...
{"component":"virt-controller","kind":"","level":"error","msg":"Invalid VM spec","name":"fedora32","namespace":"default","pos":"util.go:133","reason":"PVC default/fedora-dv 
owned by DataVolume fedora-dv cannot be used as a volume source. Use DataVolume instead","service":"http","timestamp":"2020-09-22T20:06:02.112378Z","uid":"87222a9b-c44a-4ddf-b8d0-db33388f7929"}
{"component":"virt-controller","kind":"","level":"error","msg":"Creating the VirtualMachine failed.","name":"fedora32","namespace":"default","pos":"vm.go:265","service":"http","timestamp":"2020-09-22T20:06:02.112581Z","uid":"87222a9b-c44a-4ddf-b8d0-db33388f7929"}
{"component":"virt-controller","level":"info","msg":"re-enqueuing VirtualMachine default/fedora32","pos":"vm.go:138","reason":"PVC default/fedora-dv owned by DataVolume fedora-dv cannot be used as a volume source. Use DataVolume instead","service":"http","timestamp":"2020-09-22T20:06:02.112874Z"}


=========================================================


=========================================================

Comment 1 Fabian Deutsch 2020-09-22 20:31:35 UTC
The PVC that is getting passed to the VM upon creation is managed by a DV.
The error message clearly states that this is not supported; instead of the PVC, the DV needs to be provided to the VM.

This is not possible with a single CLI call.

In 2.5 the common templates will continue to reference PVCs.
In 2.6 we plan to move the templates to reference PVCs as well, but as a source for a DVTemplate within the VM definition.
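
For illustration, a VM that consumes the PVC as a clone source inside a DVTemplate (the 2.6 direction described above) might look roughly like this. This is a sketch only, using the kubevirt.io/v1alpha3 dataVolumeTemplates field and the CDI PVC-clone source, with illustrative names:

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: fedora32
spec:
  running: false
  dataVolumeTemplates:
  - metadata:
      name: fedora32-rootdisk
    spec:
      pvc:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 25Gi
      source:
        pvc:                  # clone the existing PVC instead of mounting it directly
          namespace: default
          name: fedora-dv
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: rootdisk
        resources:
          requests:
            memory: 1Gi
      volumes:
      - name: rootdisk
        dataVolume:
          name: fedora32-rootdisk    # references the DVTemplate above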

Comment 2 Ruth Netser 2020-09-23 06:59:55 UTC
This is the same flow that worked in previous versions; the VM is created using a PVC.
1. Create a DV:

apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: fedora-dv
spec:
  source:
    http:
      url: "http://cnv-qe-server.rhevdev.lab.eng.rdu2.redhat.com/files/cnv-tests/fedora-images/Fedora-Cloud-Base-32-1.6.x86_64.qcow2"
  pvc:
    storageClassName: hostpath-provisioner
    volumeMode: Filesystem
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 25Gi


2. Create a VM from the template, using the PVC that was created.
The yaml was generated by: oc process -n openshift fedora-server-tiny-v0.11.3 -p NAME="fedora32" -p PVCNAME="fedora-dv" -oyaml > fed_vm.yaml

apiVersion: v1
items:
- apiVersion: kubevirt.io/v1alpha3
  kind: VirtualMachine
  metadata:
    labels:
      app: fedora32
      vm.kubevirt.io/template: fedora-server-tiny-v0.11.3
      vm.kubevirt.io/template.revision: "1"
      vm.kubevirt.io/template.version: v0.12.0
    name: fedora32
  spec:
    running: false
    template:
      metadata:
        labels:
          kubevirt.io/domain: fedora32
          kubevirt.io/size: tiny
      spec:
        domain:
          cpu:
            cores: 1
            sockets: 1
            threads: 1
          devices:
            disks:
            - disk:
                bus: virtio
              name: rootdisk
            - disk:
                bus: virtio
              name: cloudinitdisk
            interfaces:
            - masquerade: {}
              name: default
            networkInterfaceMultiqueue: true
            rng: {}
          machine:
            type: pc-q35-rhel8.2.0
          resources:
            requests:
              memory: 1Gi
        evictionStrategy: LiveMigrate
        networks:
        - name: default
          pod: {}
        terminationGracePeriodSeconds: 180
        volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: fedora-dv
        - cloudInitNoCloud:
            userData: |-
              #cloud-config
              password: fedora
              chpasswd: { expire: False }
          name: cloudinitdisk
kind: List
metadata: {}

Comment 3 Tomasz Barański 2020-09-23 09:34:10 UTC
This is the new intended behavior. If the PVC is created (and owned) by a DataVolume, a VM must reference the DV in the template, not the PVC:

apiVersion: v1
items:
- apiVersion: kubevirt.io/v1alpha3
  kind: VirtualMachine
  spec:
    template:
      spec:

        volumes:
        - name: rootdisk
          dataVolume:          # <-- reference the DV, not the PVC
            name: fedora-dv
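
One way to verify whether a PVC is owned by a DataVolume is to inspect its ownerReferences, which should list a DataVolume for a DV-managed PVC:

$ oc get pvc fedora-dv -o jsonpath='{.metadata.ownerReferences[*].kind}'
DataVolume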

Comment 5 Fabian Deutsch 2020-09-23 10:32:27 UTC
To summarize the offlist discussion:

This will likely affect the UI as well: Import using CDI, then create a VM but select a PVC created by a DV.

Then the question is: What is the defined/intended output of a CDI import? Is it a DV or PVC?
With this change https://github.com/kubevirt/kubevirt/pull/3406 it looks like the output is a DV, and we ask consumers to consume the DV.

However, in https://github.com/kubevirt/kubevirt/pull/3406#discussion_r425308259 it is mentioned that there will be a remaining method to also consume the (unmanaged) PVCs.
How can we get unmanaged PVCs?

In order to clarify the output: Would it make sense to let a user define what the intended output is? DV or PVC?

Then there is the problem that today templates expect PVCs; in the future this changes slightly, but we will have to decide whether the templates will then take a PVC or a DV as input.

For now I'd propose to revert the check for owned PVCs in order to give us time to find a good solution for all impacted flows.

Comment 6 Alexander Wels 2020-09-23 11:43:01 UTC
No, this is a feature, not a bug. If you want to create an unmanaged PVC (why? specify a DV in the VM template instead of a PVC), add the appropriate annotations to the PVC. The error message clearly indicates that you, as the user, are doing something wrong and how to fix it. This fixes a bug where people were creating a DV, then using the PVC it created, and expecting it to work as if they had passed a DV. In the not-so-distant future we will make sure that the DV name != PVC name, and then what you are currently doing will not work either. So the solution is simple: instead of

- name: rootdisk
  persistentVolumeClaim:
    claimName: fedora-dv

use:

- name: rootdisk
  dataVolume:
    name: fedora-dv

assuming you have created a DV for use with your template.
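
For completeness, a sketch of the annotation route mentioned above for populating an unmanaged PVC, i.e. one with no owning DataVolume (assuming CDI's legacy storage.import annotations; the exact annotation set may vary by CDI version):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fedora-unmanaged
  annotations:
    cdi.kubevirt.io/storage.import.source: http
    cdi.kubevirt.io/storage.import.endpoint: "http://cnv-qe-server.rhevdev.lab.eng.rdu2.redhat.com/files/cnv-tests/fedora-images/Fedora-Cloud-Base-32-1.6.x86_64.qcow2"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 25Gi

Such a PVC can then be referenced directly via persistentVolumeClaim in the VM spec, since no DataVolume owns it.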

Comment 8 Alexander Wels 2020-10-05 14:34:16 UTC
Merged to master, and backported to 0.34.

Comment 9 Adam Litke 2020-10-05 20:08:15 UTC
Backport PR is still open.  Will move to ON_QA once a build including this fix is available.

Comment 13 errata-xmlrpc 2020-11-17 13:24:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Virtualization 2.5.0 Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:5127

