Bug 1917124 - Custom template - clone existing PVC - the name of the target VM's data volume is hard-coded; only one VM can be created
Summary: Custom template - clone existing PVC - the name of the target VM's data volume is hard-coded; only one VM can be created
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Console Kubevirt Plugin
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 4.7.0
Assignee: Rastislav Wagner
QA Contact: Guohua Ouyang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-01-17 11:54 UTC by Ruth Netser
Modified: 2021-02-24 15:54 UTC
CC: 4 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-02-24 15:53:53 UTC
Target Upstream Version:
Embargoed:


Attachments
Screenshot (33.12 KB, image/png)
2021-01-17 11:54 UTC, Ruth Netser


Links
System ID Private Priority Status Summary Last Updated
Github openshift console pull 7870 0 None closed Bug 1917124: Use name parameter to every DVTemplate in VM Template 2021-01-28 11:22:19 UTC
Red Hat Product Errata RHSA-2020:5633 0 None None None 2021-02-24 15:54:11 UTC

Description Ruth Netser 2021-01-17 11:54:56 UTC
Created attachment 1748258 [details]
Screenshot

Description of problem:
Creating a custom template from the CNV-provided common templates and using "Clone existing" PVC as the boot source results in the VM's data volume name being hard-coded.
As a result, only one VM can be created from this template.
When a VM starts, a DV with the hard-coded name is created; any further VMs created from the template will fail to start with:

Failed to create DataVolume: datavolumes.cdi.kubevirt.io "ten-dv-rootdisk" already exists

(Because of bug 1917118, the DV is currently created when the template is saved; once that bug is fixed, the failure will occur when the VM is started and the DV is created.)
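
For illustration, once bug 1917118 is fixed the collision would surface at VM start rather than at template save. A minimal command-line reproduction (hypothetical VM names; assumes the template below is saved as "ten-dv" in the default namespace) could look like:

    oc process ten-dv -n default -p NAME=vm-one | oc create -f -
    oc process ten-dv -n default -p NAME=vm-two | oc create -f -
    virtctl start vm-one    # virt-controller creates DV "ten-dv-rootdisk"
    virtctl start vm-two    # fails: DV "ten-dv-rootdisk" already exists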

Version-Release number of selected component (if applicable):
OCP 4.7.0-fc.2, CNV 2.6

How reproducible:
100%

Steps to Reproduce:
1. Create a custom template using an existing template
2. Use "Clone existing" PVC as boot source

Actual results:
The data volume name is hard-coded in the YAML.
Only one VM can be created using this template.

Expected results:
The data volume name should be generated during VM creation so that each VM gets a unique DV (see the parameterized sketch after the template YAML below).

Additional info:
==========================================

            - name: rootdisk
              dataVolume:
                name: ten-dv-rootdisk

==========================================

kind: Template
apiVersion: template.openshift.io/v1
metadata:
  annotations:
    iconClass: icon-rhel
    name.os.template.kubevirt.io/rhel8.3: Red Hat Enterprise Linux 8.0 or higher
    template.kubevirt.ui/parent-provider: Red Hat
    template.kubevirt.ui/parent-provider-url: 'https://www.redhat.com'
    template.kubevirt.ui/parent-support-level: Full
  selfLink: /apis/template.openshift.io/v1/namespaces/default/templates/ten-dv
  resourceVersion: '5013254'
  name: ten-dv
  uid: 79a90dcb-75f0-4dd5-b3fb-756ebb15fd9c
  creationTimestamp: '2021-01-17T11:07:34Z'
  managedFields:
    - manager: Mozilla
      operation: Update
      apiVersion: template.openshift.io/v1
      time: '2021-01-17T11:07:34Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:iconClass': {}
            'f:name.os.template.kubevirt.io/rhel8.3': {}
            'f:template.kubevirt.ui/parent-provider': {}
            'f:template.kubevirt.ui/parent-provider-url': {}
            'f:template.kubevirt.ui/parent-support-level': {}
          'f:labels':
            .: {}
            'f:flavor.template.kubevirt.io/small': {}
            'f:os.template.kubevirt.io/rhel8.3': {}
            'f:template.kubevirt.io/type': {}
            'f:vm.kubevirt.io/template': {}
            'f:vm.kubevirt.io/template.namespace': {}
            'f:workload.template.kubevirt.io/server': {}
        'f:objects': {}
        'f:parameters': {}
  namespace: default
  labels:
    flavor.template.kubevirt.io/small: 'true'
    os.template.kubevirt.io/rhel8.3: 'true'
    template.kubevirt.io/type: vm
    vm.kubevirt.io/template: rhel8-server-small
    vm.kubevirt.io/template.namespace: openshift
    workload.template.kubevirt.io/server: 'true'
objects:
  - apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachine
    metadata:
      annotations:
        vm.kubevirt.io/flavor: small
        vm.kubevirt.io/os: rhel8
        vm.kubevirt.io/validations: |
          [
            {
              "name": "minimal-required-memory",
              "path": "jsonpath::.spec.domain.resources.requests.memory",
              "rule": "integer",
              "message": "This VM requires more memory.",
              "min": 1610612736
            }
          ]
        vm.kubevirt.io/workload: server
      labels:
        app: '${NAME}'
        vm.kubevirt.io/template: rhel8-server-small
        vm.kubevirt.io/template.revision: '1'
        vm.kubevirt.io/template.version: v0.13.0
      name: '${NAME}'
    spec:
      dataVolumeTemplates:
        - metadata:
            name: ten-dv-rootdisk
          spec:
            pvc:
              resources:
                requests:
                  storage: 15Gi
              volumeMode: Filesystem
              accessModes:
                - ReadWriteOnce
              storageClassName: standard
            source:
              pvc:
                name: fedora-dv-n
                namespace: default
      running: false
      template:
        metadata:
          labels:
            kubevirt.io/domain: '${NAME}'
            kubevirt.io/size: small
        spec:
          domain:
            cpu:
              cores: 1
              sockets: 1
              threads: 1
            devices:
              disks:
                - name: cloudinitdisk
                  disk:
                    bus: virtio
                - name: rootdisk
                  bootOrder: 1
                  disk:
                    bus: virtio
              interfaces:
                - masquerade: {}
                  name: default
                  model: virtio
              networkInterfaceMultiqueue: true
              rng: {}
            machine:
              type: pc-q35-rhel8.3.0
            resources:
              requests:
                memory: 2Gi
          evictionStrategy: LiveMigrate
          networks:
            - name: default
              pod: {}
          terminationGracePeriodSeconds: 180
          volumes:
            - name: cloudinitdisk
              cloudInitNoCloud:
                userData: |
                  #cloud-config
                  user: cloud-user
                  password: fg8n-6tzp-qig0
                  chpasswd:
                    expire: false
            - name: rootdisk
              dataVolume:
                name: ten-dv-rootdisk
          hostname: '${NAME}'
parameters:
  - name: NAME
    description: Name for the new VM
    required: true
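
The linked PR ("Use name parameter to every DVTemplate in VM Template") indicates the fix is to derive each data volume template's name from a template parameter. A sketch of what the relevant sections could look like after such a change (the exact naming scheme here is an assumption, not taken from the actual fix):

==========================================

      dataVolumeTemplates:
        - metadata:
            name: '${NAME}-rootdisk'    # derived from the NAME parameter

==========================================

            - name: rootdisk
              dataVolume:
                name: '${NAME}-rootdisk'    # matches the generated DV name

==========================================

Because NAME is a required parameter and unique per VM, each processed VM would get its own DataVolume, and the "already exists" collision would not occur.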

Comment 1 Ruth Netser 2021-01-17 12:03:22 UTC
1st VM yaml:

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  annotations:
    kubevirt.io/latest-observed-api-version: v1alpha3
    kubevirt.io/storage-observed-api-version: v1alpha3
    name.os.template.kubevirt.io/rhel8.3: Red Hat Enterprise Linux 8.0 or higher
    vm.kubevirt.io/flavor: small
    vm.kubevirt.io/os: rhel8
    vm.kubevirt.io/validations: |
      [
        {
          "name": "minimal-required-memory",
          "path": "jsonpath::.spec.domain.resources.requests.memory",
          "rule": "integer",
          "message": "This VM requires more memory.",
          "min": 1610612736
        }
      ]
    vm.kubevirt.io/workload: server
  selfLink: >-
    /apis/kubevirt.io/v1alpha3/namespaces/default/virtualmachines/ten-dv-imaginative-carp
  resourceVersion: '5014253'
  name: ten-dv-imaginative-carp
  uid: fb29224b-7cb9-4f77-b381-8f780dacbc93
  creationTimestamp: '2021-01-17T11:09:00Z'
  generation: 1
  managedFields:
    - apiVersion: kubevirt.io/v1alpha3
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:name.os.template.kubevirt.io/rhel8.3': {}
            'f:vm.kubevirt.io/flavor': {}
            'f:vm.kubevirt.io/os': {}
            'f:vm.kubevirt.io/validations': {}
            'f:vm.kubevirt.io/workload': {}
          'f:labels':
            'f:vm.kubevirt.io/template.version': {}
            'f:os.template.kubevirt.io/rhel8.3': {}
            'f:vm.kubevirt.io/template.namespace': {}
            'f:app': {}
            .: {}
            'f:vm.kubevirt.io/template.revision': {}
            'f:workload.template.kubevirt.io/server': {}
            'f:flavor.template.kubevirt.io/small': {}
            'f:vm.kubevirt.io/template': {}
        'f:spec':
          .: {}
          'f:dataVolumeTemplates': {}
          'f:running': {}
          'f:template':
            .: {}
            'f:metadata':
              .: {}
              'f:labels':
                .: {}
                'f:flavor.template.kubevirt.io/small': {}
                'f:kubevirt.io/domain': {}
                'f:kubevirt.io/size': {}
                'f:os.template.kubevirt.io/rhel8.3': {}
                'f:vm.kubevirt.io/name': {}
                'f:workload.template.kubevirt.io/server': {}
            'f:spec':
              .: {}
              'f:domain':
                .: {}
                'f:cpu':
                  .: {}
                  'f:cores': {}
                  'f:sockets': {}
                  'f:threads': {}
                'f:devices':
                  .: {}
                  'f:disks': {}
                  'f:interfaces': {}
                  'f:networkInterfaceMultiqueue': {}
                  'f:rng': {}
                'f:machine':
                  .: {}
                  'f:type': {}
                'f:resources':
                  .: {}
                  'f:requests':
                    .: {}
                    'f:memory': {}
              'f:evictionStrategy': {}
              'f:hostname': {}
              'f:networks': {}
              'f:terminationGracePeriodSeconds': {}
              'f:volumes': {}
      manager: Mozilla
      operation: Update
      time: '2021-01-17T11:09:00Z'
    - apiVersion: kubevirt.io/v1alpha3
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            'f:kubevirt.io/latest-observed-api-version': {}
            'f:kubevirt.io/storage-observed-api-version': {}
        'f:status':
          .: {}
          'f:volumeSnapshotStatuses': {}
      manager: virt-controller
      operation: Update
      time: '2021-01-17T11:09:00Z'
  namespace: default
  labels:
    app: ten-dv-imaginative-carp
    flavor.template.kubevirt.io/small: 'true'
    os.template.kubevirt.io/rhel8.3: 'true'
    vm.kubevirt.io/template: ten-dv
    vm.kubevirt.io/template.namespace: default
    vm.kubevirt.io/template.revision: '1'
    vm.kubevirt.io/template.version: v0.13.0
    workload.template.kubevirt.io/server: 'true'
spec:
  dataVolumeTemplates:
    - metadata:
        creationTimestamp: null
        name: ten-dv-rootdisk
      spec:
        pvc:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 15Gi
          storageClassName: standard
          volumeMode: Filesystem
        source:
          pvc:
            name: fedora-dv-n
            namespace: default
  running: false
  template:
    metadata:
      creationTimestamp: null
      labels:
        flavor.template.kubevirt.io/small: 'true'
        kubevirt.io/domain: ten-dv-imaginative-carp
        kubevirt.io/size: small
        os.template.kubevirt.io/rhel8.3: 'true'
        vm.kubevirt.io/name: ten-dv-imaginative-carp
        workload.template.kubevirt.io/server: 'true'
    spec:
      domain:
        cpu:
          cores: 1
          sockets: 1
          threads: 1
        devices:
          disks:
            - disk:
                bus: virtio
              name: cloudinitdisk
            - bootOrder: 1
              disk:
                bus: virtio
              name: rootdisk
          interfaces:
            - masquerade: {}
              model: virtio
              name: default
          networkInterfaceMultiqueue: true
          rng: {}
        machine:
          type: pc-q35-rhel8.3.0
        resources:
          requests:
            memory: 2Gi
      evictionStrategy: LiveMigrate
      hostname: ten-dv-imaginative-carp
      networks:
        - name: default
          pod: {}
      terminationGracePeriodSeconds: 180
      volumes:
        - cloudInitNoCloud:
            userData: |
              #cloud-config
              user: cloud-user
              password: fg8n-6tzp-qig0
              chpasswd:
                expire: false
          name: cloudinitdisk
        - dataVolume:
            name: ten-dv-rootdisk
          name: rootdisk
status:
  volumeSnapshotStatuses:
    - enabled: false
      name: cloudinitdisk
      reason: Volume type does not suport snapshots
    - enabled: false
      name: rootdisk
      reason: 'No Volume Snapshot Storage Class found for volume [rootdisk]'






2nd VM yaml:

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  annotations:
    kubevirt.io/latest-observed-api-version: v1alpha3
    kubevirt.io/storage-observed-api-version: v1alpha3
    name.os.template.kubevirt.io/rhel8.3: Red Hat Enterprise Linux 8.0 or higher
    vm.kubevirt.io/flavor: small
    vm.kubevirt.io/os: rhel8
    vm.kubevirt.io/validations: |
      [
        {
          "name": "minimal-required-memory",
          "path": "jsonpath::.spec.domain.resources.requests.memory",
          "rule": "integer",
          "message": "This VM requires more memory.",
          "min": 1610612736
        }
      ]
    vm.kubevirt.io/workload: server
  selfLink: >-
    /apis/kubevirt.io/v1alpha3/namespaces/default/virtualmachines/ten-dv-external-horse
  resourceVersion: '5014716'
  name: ten-dv-external-horse
  uid: b2bbb621-99e0-463c-9f54-09de45e3bfcc
  creationTimestamp: '2021-01-17T11:09:39Z'
  generation: 1
  managedFields:
    - apiVersion: kubevirt.io/v1alpha3
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:name.os.template.kubevirt.io/rhel8.3': {}
            'f:vm.kubevirt.io/flavor': {}
            'f:vm.kubevirt.io/os': {}
            'f:vm.kubevirt.io/validations': {}
            'f:vm.kubevirt.io/workload': {}
          'f:labels':
            'f:vm.kubevirt.io/template.version': {}
            'f:os.template.kubevirt.io/rhel8.3': {}
            'f:vm.kubevirt.io/template.namespace': {}
            'f:app': {}
            .: {}
            'f:vm.kubevirt.io/template.revision': {}
            'f:workload.template.kubevirt.io/server': {}
            'f:flavor.template.kubevirt.io/small': {}
            'f:vm.kubevirt.io/template': {}
        'f:spec':
          .: {}
          'f:dataVolumeTemplates': {}
          'f:running': {}
          'f:template':
            .: {}
            'f:metadata':
              .: {}
              'f:labels':
                .: {}
                'f:flavor.template.kubevirt.io/small': {}
                'f:kubevirt.io/domain': {}
                'f:kubevirt.io/size': {}
                'f:os.template.kubevirt.io/rhel8.3': {}
                'f:vm.kubevirt.io/name': {}
                'f:workload.template.kubevirt.io/server': {}
            'f:spec':
              .: {}
              'f:domain':
                .: {}
                'f:cpu':
                  .: {}
                  'f:cores': {}
                  'f:sockets': {}
                  'f:threads': {}
                'f:devices':
                  .: {}
                  'f:disks': {}
                  'f:interfaces': {}
                  'f:networkInterfaceMultiqueue': {}
                  'f:rng': {}
                'f:machine':
                  .: {}
                  'f:type': {}
                'f:resources':
                  .: {}
                  'f:requests':
                    .: {}
                    'f:memory': {}
              'f:evictionStrategy': {}
              'f:hostname': {}
              'f:networks': {}
              'f:terminationGracePeriodSeconds': {}
              'f:volumes': {}
      manager: Mozilla
      operation: Update
      time: '2021-01-17T11:09:39Z'
    - apiVersion: kubevirt.io/v1alpha3
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            'f:kubevirt.io/latest-observed-api-version': {}
            'f:kubevirt.io/storage-observed-api-version': {}
        'f:status':
          .: {}
          'f:conditions': {}
          'f:volumeSnapshotStatuses': {}
      manager: virt-controller
      operation: Update
      time: '2021-01-17T11:09:39Z'
  namespace: default
  labels:
    app: ten-dv-external-horse
    flavor.template.kubevirt.io/small: 'true'
    os.template.kubevirt.io/rhel8.3: 'true'
    vm.kubevirt.io/template: ten-dv
    vm.kubevirt.io/template.namespace: default
    vm.kubevirt.io/template.revision: '1'
    vm.kubevirt.io/template.version: v0.13.0
    workload.template.kubevirt.io/server: 'true'
spec:
  dataVolumeTemplates:
    - metadata:
        creationTimestamp: null
        name: ten-dv-rootdisk
      spec:
        pvc:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 15Gi
          storageClassName: standard
          volumeMode: Filesystem
        source:
          pvc:
            name: fedora-dv-n
            namespace: default
  running: false
  template:
    metadata:
      creationTimestamp: null
      labels:
        flavor.template.kubevirt.io/small: 'true'
        kubevirt.io/domain: ten-dv-external-horse
        kubevirt.io/size: small
        os.template.kubevirt.io/rhel8.3: 'true'
        vm.kubevirt.io/name: ten-dv-external-horse
        workload.template.kubevirt.io/server: 'true'
    spec:
      domain:
        cpu:
          cores: 1
          sockets: 1
          threads: 1
        devices:
          disks:
            - disk:
                bus: virtio
              name: cloudinitdisk
            - bootOrder: 1
              disk:
                bus: virtio
              name: rootdisk
          interfaces:
            - masquerade: {}
              model: virtio
              name: default
          networkInterfaceMultiqueue: true
          rng: {}
        machine:
          type: pc-q35-rhel8.3.0
        resources:
          requests:
            memory: 2Gi
      evictionStrategy: LiveMigrate
      hostname: ten-dv-external-horse
      networks:
        - name: default
          pod: {}
      terminationGracePeriodSeconds: 180
      volumes:
        - cloudInitNoCloud:
            userData: |
              #cloud-config
              user: cloud-user
              password: fg8n-6tzp-qig0
              chpasswd:
                expire: false
          name: cloudinitdisk
        - dataVolume:
            name: ten-dv-rootdisk
          name: rootdisk
status:
  conditions:
    - lastProbeTime: null
      lastTransitionTime: '2021-01-17T11:09:39Z'
      message: >-
        Failed to create DataVolume: datavolumes.cdi.kubevirt.io
        "ten-dv-rootdisk" already exists
      reason: FailedDelete
      status: 'True'
      type: Failure
  volumeSnapshotStatuses:
    - enabled: false
      name: cloudinitdisk
      reason: Volume type does not suport snapshots
    - enabled: false
      name: rootdisk
      reason: 'No Volume Snapshot Storage Class found for volume [rootdisk]'
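
The Failure condition above shows the second VM colliding with the first VM's DataVolume. Listing the DataVolumes in the namespace (command only; output omitted) should show a single shared "ten-dv-rootdisk":

    oc get datavolumes -n default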

Comment 2 Yaacov Zamir 2021-01-18 11:17:09 UTC
Setting to blocker+; this bug blocks the normal flow of creating a VM as a non-admin user, and there is no simple workaround.

Comment 6 errata-xmlrpc 2021-02-24 15:53:53 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633

