Description of problem:
When creating a VMI from vmi-ephemeral.yaml, the created VM's hostname is the name of the hosting node (e.g. cnv-executor-ysegev-node2).

Version-Release number of selected component (if applicable):
Client/server version: v0.12.0-alpha.2

How reproducible:
Always

Steps to Reproduce:
1. Create a VMI from vmi-ephemeral.yaml
   # oc create -f cluster/examples/vmi-ephemeral.yaml
2. Start a console to the VMI
   # virtctl console vmi-ephemeral
3. Once logged in to the VM console, check the hostname.
   # hostname

Actual results:
The hostname is the hosting node's name, e.g. cnv-executor-ysegev-node2.

Expected results:
The VM's hostname should not depend on the hosting node, and should reflect the created VM.

Additional info:
1. Same result when creating a VMI from vmi-flavor-small.yaml.
2. When creating VMs from vmi-fedora.yaml and vm-cirros.yaml, the behavior is as expected, i.e. the hostnames are "vmi-fedora" and "vm-cirros", respectively.
It looks like, if the VM yaml doesn't have a cloudinitdisk, the guest will not pick up the VMI name from the metadata.
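For reference, this is roughly the cloudInitNoCloud disk/volume pair that the working examples carry and the failing ones lack. This is a minimal sketch mirroring the upstream examples (the userData content here is just an illustration, not the exact upstream payload):

spec:
  domain:
    devices:
      disks:
      - disk:
          bus: virtio
        name: cloudinitdisk
        volumeName: cloudinitvolume
  volumes:
  - name: cloudinitvolume
    cloudInitNoCloud:
      userData: |
        #!/bin/sh
        echo 'hello from cloud-init userdata'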
Is VMI.spec.hostname set in the yamls?

See also:
https://kubevirt.io/user-guide/docs/latest/using-virtual-machines/dns.html
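For context, spec.hostname is a plain field on the VMI spec which, per the DNS docs linked above, should default to the VMI name when unset. A minimal sketch of setting it explicitly, modeled on the ephemeral example (the name "my-vmi" is hypothetical, for illustration only):

apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachineInstance
metadata:
  name: my-vmi
spec:
  hostname: my-vmi            # explicit guest hostname; defaults to the VMI name when unset
  domain:
    devices:
      disks:
      - disk:
          bus: virtio
        name: containerdisk
        volumeName: registryvolume
    resources:
      requests:
        memory: 64M
  volumes:
  - name: registryvolume
    containerDisk:
      image: kubevirt/cirros-container-disk-demo:latest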
(In reply to Fabian Deutsch from comment #2)
> Is VMI.spec.hostname set in the yamls?
>
> See also:
> https://kubevirt.io/user-guide/docs/latest/using-virtual-machines/dns.html

No, spec.hostname is not set in any of these yamls (vm-cirros.yaml, vmi-fedora.yaml, vmi-flavor-small.yaml, vmi-ephemeral.yaml) - neither those that end up with the VMI name as the hostname nor those that end up with the node name.
Yossi, for every yaml, please provide the image, whether cloud-init is configured, and what the VM name finally is, so we can understand the scheme and impact of this bug, i.e.:

- yaml: bar.yaml
  image: cirros
  cloudinit: no
- …
vm-cirros.yaml:
  containerDisk.image: cirros-container-disk-demo
  cloudInitNoCloud exists

vmi-fedora.yaml:
  containerDisk.image: fedora-cloud-container-disk-demo
  cloudInitNoCloud exists

vmi-ephemeral.yaml:
  containerDisk.image: cirros-container-disk-demo
  cloudInitNoCloud doesn't exist

vmi-flavor-small.yaml:
  containerDisk.image: cirros-container-disk-demo
  cloudInitNoCloud doesn't exist
Right. Now, for each of these, I need to understand what the resulting node/hostname is.
vm-cirros.yaml:
  containerDisk.image: cirros-container-disk-demo
  cloudInitNoCloud exists
  hostname: vm-cirros

vmi-fedora.yaml:
  containerDisk.image: fedora-cloud-container-disk-demo
  cloudInitNoCloud exists
  hostname: vmi-fedora

vmi-ephemeral.yaml:
  containerDisk.image: cirros-container-disk-demo
  cloudInitNoCloud doesn't exist
  hostname: hosting node's name (cnv-executor-ysegev-node2)

vmi-flavor-small.yaml:
  containerDisk.image: cirros-container-disk-demo
  cloudInitNoCloud doesn't exist
  hostname: hosting node's name (cnv-executor-ysegev-node2)
Thanks. Please provide `kubectl get vmi $VMI` for vm-cirros.yaml and vmi-ephemeral.yaml.
[cloud-user@cnv-executor-ysegev-master1 ~]$ kubectl get vmi $VMI
NAME            AGE   PHASE     IP            NODENAME
vm-cirros       3d    Running   10.130.0.76   cnv-executor-ysegev-node1.example.com
vmi-ephemeral   10d   Running   10.129.0.62   cnv-executor-ysegev-node2.example.com
Sorry, I meant: kubectl get -o yaml $VMI
Are you sure that yaml is the correct resource you want to get? When I try to run this command, it fails with an error saying that yaml is not a known resource.

[cloud-user@cnv-executor-ysegev-master1 ~]$ kubectl get -o yaml $VMI
You must specify the type of resource to get. Use "kubectl api-resources" for a complete list of supported resources.
error: Required resource not specified.
Use "kubectl explain <resource>" for a detailed description of that resource (e.g. kubectl explain pods).
See 'kubectl get -h' for help and examples.

[cloud-user@cnv-executor-ysegev-master1 ~]$ kubectl get yaml $VMI
error: the server doesn't have a resource type "yaml"

[cloud-user@cnv-executor-ysegev-master1 ~]$ kubectl get yaml vm-cirros
error: the server doesn't have a resource type "yaml"
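For the record, the failures above are just kubectl syntax: -o yaml is an output flag, not a resource type, so the resource kind (vmi) still has to be named. Assuming $VMI holds the VMI name, either form should work:

kubectl get vmi $VMI -o yaml
kubectl get -o yaml vmi $VMI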
[cloud-user@cnv-executor-ysegev-master1 ~]$ kubectl get vmi -o yaml vm-cirros
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachineInstance
metadata:
  creationTimestamp: 2019-01-06T14:34:05Z
  finalizers:
  - foregroundDeleteVirtualMachine
  generateName: vm-cirros
  generation: 1
  labels:
    kubevirt.io/nodeName: cnv-executor-ysegev-node1.example.com
    kubevirt.io/vm: vm-cirros
  name: vm-cirros
  namespace: kubevirt
  ownerReferences:
  - apiVersion: kubevirt.io/v1alpha2
    blockOwnerDeletion: true
    controller: true
    kind: VirtualMachine
    name: vm-cirros
    uid: b6de343e-0d11-11e9-b528-fa163eadcce0
  resourceVersion: "2856394"
  selfLink: /apis/kubevirt.io/v1alpha2/namespaces/kubevirt/virtualmachineinstances/vm-cirros
  uid: 1e46a38f-11c0-11e9-b528-fa163eadcce0
spec:
  domain:
    devices:
      disks:
      - disk:
          bus: virtio
        name: containerdisk
        volumeName: registryvolume
      - disk:
          bus: virtio
        name: cloudinitdisk
        volumeName: cloudinitvolume
      interfaces:
      - bridge: {}
        name: default
    features:
      acpi:
        enabled: true
    firmware:
      uuid: 0d2a2043-41c0-59c3-9b17-025022203668
    machine:
      type: q35
    resources:
      requests:
        memory: 64M
  networks:
  - name: default
    pod: {}
  terminationGracePeriodSeconds: 0
  volumes:
  - containerDisk:
      image: kubevirt/cirros-container-disk-demo:latest
    name: registryvolume
  - cloudInitNoCloud:
      userData: |
        #!/bin/sh
        echo 'printed from cloud-init userdata'
    name: cloudinitvolume
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: null
    status: "True"
    type: LiveMigratable
  interfaces:
  - ipAddress: 10.130.0.76
    mac: 0a:58:0a:82:00:4c
    name: default
  migrationMethod: BlockMigration
  nodeName: cnv-executor-ysegev-node1.example.com
  phase: Running
[cloud-user@cnv-executor-ysegev-master1 ~]$ kubectl get vmi -o yaml vmi-ephemeral
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachineInstance
metadata:
  creationTimestamp: 2018-12-31T12:10:54Z
  finalizers:
  - foregroundDeleteVirtualMachine
  generation: 1
  labels:
    kubevirt.io/nodeName: cnv-executor-ysegev-node2.example.com
    special: vmi-ephemeral
  name: vmi-ephemeral
  namespace: kubevirt
  resourceVersion: "1441707"
  selfLink: /apis/kubevirt.io/v1alpha2/namespaces/kubevirt/virtualmachineinstances/vmi-ephemeral
  uid: 1f381856-0cf5-11e9-b528-fa163eadcce0
spec:
  domain:
    devices:
      disks:
      - disk:
          bus: virtio
        name: containerdisk
        volumeName: registryvolume
      interfaces:
      - bridge: {}
        name: default
    features:
      acpi:
        enabled: true
    firmware:
      uuid: 091426af-12ad-4048-837d-251b9551a86b
    machine:
      type: q35
    resources:
      requests:
        memory: 64M
  networks:
  - name: default
    pod: {}
  terminationGracePeriodSeconds: 0
  volumes:
  - containerDisk:
      image: kubevirt/cirros-container-disk-demo:latest
    name: registryvolume
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: null
    status: "True"
    type: LiveMigratable
  interfaces:
  - ipAddress: 10.129.0.62
    mac: 0a:58:0a:81:00:3e
    name: default
  migrationMethod: BlockMigration
  nodeName: cnv-executor-ysegev-node2.example.com
  phase: Running
Hm, that's odd. cloud-init is the only real difference; I wonder why it matters here. Reassigning to network.
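One plausible explanation (an assumption, not verified against the code): when a cloudInitNoCloud volume is present, the generated NoCloud datasource carries a local-hostname key derived from the VMI name, which cloud-init in the guest applies at boot; without that disk, the guest keeps whatever hostname the image or DHCP gives it. A sketch of what such NoCloud meta-data might look like (values illustrative):

instance-id: vm-cirros.kubevirt
local-hostname: vm-cirros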
Yossi, please further debug where the guest receives its faulty hostname from.
Is it applied by cloud-init?
Is it received over DHCP?
Does it occur only with the default bridge, or also with masquerade and L2?
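A minimal way to start answering this from inside the guest console; the paths below are assumptions about the image, so treat this as a sketch rather than exact instructions:

# inside the guest (via virtctl console):
hostname                        # what the guest currently reports
cat /etc/hostname 2>/dev/null   # what, if anything, was written at boot
# if cloud-init ran, its state/logs would show whether it set the hostname:
ls /var/lib/cloud 2>/dev/null && cat /var/log/cloud-init.log 2>/dev/null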
The bug looks a bit different now: I'm testing on OCP 4.1/CNV 2.0, so my cluster is deployed on bare metal. The "problematic" VMs no longer show the hosting node as the hostname, but rather the OS name, i.e. "cirros". The non-problematic VMs show the VM name as the hostname, as expected.

Attached are the domxmls of the virt-launchers of 3 VMs:
- vm-cirros.domxml - where the hostname is "vm-cirros" (valid).
- vmi-pxe-boot.domxml - where the hostname is "vmi-pxe-boot" (valid).
- vmi-ephemeral.domxml - where the hostname is "cirros" (buggy).
Created attachment 1560694 [details] vm-cirros.domxml - where hostname is "vm-cirros" (valid).
Created attachment 1560695 [details] vmi-pxe-boot.domxml - where hostname is "vmi-pxe-boot" (valid).
Created attachment 1560696 [details] vmi-ephemeral.domxml - where hostname is "cirros" (BUG).
(In reply to Dan Kenigsberg from comment #17)
> Yossi, please further debug where the guest receives its faulty hostname from.
> Is it applied by cloud-init?

vmi-ephemeral, which is the erroneous one, doesn't include cloud-init, which is also reflected in the attached vmi-ephemeral.domxml. The valid examples - vmi-pxe-boot and vm-cirros - both include cloud-init.

> Is it received over DHCP?

All examples except vmi-pxe-boot are deployed using the original, basic spec yaml files taken from u/s, hence all get a dynamic IP from DHCP. The difference in the vmi-pxe-boot spec is that it includes a non-default secondary L2 bridge interface.

> Does it occur only with the default bridge, or also with masquerade and L2?

vm-cirros (valid hostname) is configured with only the default interface, whereas vmi-pxe-boot (also valid hostname) is configured with a non-default secondary L2 bridge. vmi-ephemeral and vmi-flavor-small, which both get an invalid hostname, include only the default bridge interface.
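For completeness, the masquerade variant asked about above would only differ in the interface binding on the default pod network; a minimal sketch of the two bindings discussed (not taken from the attached yamls):

spec:
  domain:
    devices:
      interfaces:
      - name: default
        bridge: {}          # binding used by the examples above
      # - name: default
      #   masquerade: {}    # the untested variant asked about
  networks:
  - name: default
    pod: {}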
The current scenario, where VMs with no cloud-init use a hard-coded default as the hostname, is acceptable. The "cirros" hostname, which is now seen when deploying vmi-ephemeral.yaml and vmi-flavor-small.yaml (both don't include cloud-init, and both deploy a Cirros image), is acceptable as this default. Therefore, this bug can be closed.