Bug 1662674

Summary: Some VMIs are created with the hosting node's name as hostname
Product: Container Native Virtualization (CNV)
Reporter: Yossi Segev <ysegev>
Component: Networking
Assignee: Sebastian Scheinkman <sscheink>
Status: CLOSED NOTABUG
QA Contact: Meni Yakove <myakove>
Severity: low
Docs Contact:
Priority: medium
Version: 1.4
CC: cnv-qe-bugs, gouyang, ncredi, sgordon, vparekh, ysegev
Target Milestone: ---
Target Release: 2.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-05-02 13:51:28 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
- vm-cirros.domxml - where hostname is "vm-cirros" (valid). (flags: none)
- vmi-pxe-boot.domxml - where hostname is "vmi-pxe-boot" (valid). (flags: none)
- vmi-ephemeral.domxml - where hostname is "cirros" (BUG). (flags: none)

Description Yossi Segev 2018-12-31 14:06:01 UTC
Description of problem:
When creating VMI from vmi-ephemeral.yaml, the created VM's hostname is the name of the hosting node (e.g. cnv-executor-ysegev-node2).


Version-Release number of selected component (if applicable):
Client/server version: v0.12.0-alpha.2


How reproducible:
Always


Steps to Reproduce:
1. Create a VMI from vmi-ephemeral.yaml:
 # oc create -f cluster/examples/vmi-ephemeral.yaml
2. Start a console to the VMI:
 # virtctl console vmi-ephemeral
3. Once logged in to the VM console, check the hostname:
 # hostname


Actual results:
Hostname is the hosting node's name, e.g. cnv-executor-ysegev-node2.


Expected results:
The VM's hostname should not depend on the hosting node; it should reflect the created VM's name.


Additional info:
1. Same result when creating VMI from vmi-flavor-small.yaml
2. When creating VMs from vmi-fedora.yaml and vm-cirros.yaml, the behavior is as expected, i.e. the hostnames are "vmi-fedora" and "vm-cirros", respectively.

Comment 1 Guohua Ouyang 2019-01-02 08:17:33 UTC
It looks like if the VM yaml doesn't have a cloudinitdisk, the guest will not pick up the VMI name from metadata.

Comment 2 Fabian Deutsch 2019-01-02 13:52:52 UTC
Is VMI.spec.hostname set in the yamls?

See also: https://kubevirt.io/user-guide/docs/latest/using-virtual-machines/dns.html
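For reference, a VMI's hostname can be set explicitly via spec.hostname. A minimal sketch, assuming the kubevirt.io/v1alpha2 API used elsewhere in this bug; only the relevant fields are shown:

```yaml
# Sketch: setting an explicit guest hostname on a VMI.
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachineInstance
metadata:
  name: vmi-ephemeral
spec:
  hostname: vmi-ephemeral   # explicit guest hostname, independent of the node
  # ...rest of the spec (domain, volumes, networks) as in the example yaml
```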

Comment 3 Yossi Segev 2019-01-03 12:53:28 UTC
(In reply to Fabian Deutsch from comment #2)
> Is VMI.spec.hostname set in the yamls?
> 
> See also:
> https://kubevirt.io/user-guide/docs/latest/using-virtual-machines/dns.html

No, spec.hostname is not set in any of these yamls (vm-cirros.yaml, vmi-fedora.yaml, vmi-flavor-small.yaml, vmi-ephemeral.yaml) - neither those which end up with the VMI name as the hostname, nor those that end up with the node name as the hostname.

Comment 4 Fabian Deutsch 2019-01-08 13:30:04 UTC
Yossi, for every yaml, please provide the image, whether cloud-init is configured, and what the VM name finally is, so we can understand the scheme and impact of this bug, i.e.:

- yaml: bar.yaml
  image: cirros
  cloudinit: no
- …

Comment 5 Yossi Segev 2019-01-09 07:36:16 UTC
vm-cirros.yaml:
containerDisk.image: cirros-container-disk-demo
cloudInitNoCloud exists

vmi-fedora.yaml:
containerDisk.image: fedora-cloud-container-disk-demo
cloudInitNoCloud exists

vmi-ephemeral.yaml:
containerDisk.image: cirros-container-disk-demo
cloudInitNoCloud doesn't exist

vmi-flavor-small.yaml:
containerDisk.image: cirros-container-disk-demo
cloudInitNoCloud doesn't exist
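The cloudInitNoCloud difference above corresponds to a disk/volume pair; a sketch of the fragment the failing examples lack, reconstructed from the vm-cirros dump in comment 13:

```yaml
# Fragment present in vm-cirros.yaml / vmi-fedora.yaml but absent from
# vmi-ephemeral.yaml and vmi-flavor-small.yaml:
spec:
  domain:
    devices:
      disks:
      - disk:
          bus: virtio
        name: cloudinitdisk
        volumeName: cloudinitvolume
  volumes:
  - cloudInitNoCloud:
      userData: |
        #!/bin/sh
        echo 'printed from cloud-init userdata'
    name: cloudinitvolume
```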

Comment 6 Fabian Deutsch 2019-01-09 16:59:11 UTC
Right. Now, for each of these, I need to understand what the node/hostname is.

Comment 7 Yossi Segev 2019-01-10 13:17:52 UTC
vm-cirros.yaml:
containerDisk.image: cirros-container-disk-demo
cloudInitNoCloud exists
hostname: vm-cirros

vmi-fedora.yaml:
containerDisk.image: fedora-cloud-container-disk-demo
cloudInitNoCloud exists
hostname: vmi-fedora

vmi-ephemeral.yaml:
containerDisk.image: cirros-container-disk-demo
cloudInitNoCloud doesn't exist
hostname: hosting node's name (cnv-executor-ysegev-node2)

vmi-flavor-small.yaml:
containerDisk.image: cirros-container-disk-demo
cloudInitNoCloud doesn't exist
hostname: hosting node's name (cnv-executor-ysegev-node2)

Comment 8 Fabian Deutsch 2019-01-10 13:21:49 UTC
Thanks.

Please provide `kubectl get vmi $VMI` for vm-cirros.yaml and vmi-ephemeral.yaml.

Comment 9 Yossi Segev 2019-01-10 13:33:52 UTC
[cloud-user@cnv-executor-ysegev-master1 ~]$ kubectl get vmi $VMI
NAME            AGE       PHASE     IP            NODENAME
vm-cirros       3d        Running   10.130.0.76   cnv-executor-ysegev-node1.example.com
vmi-ephemeral   10d       Running   10.129.0.62   cnv-executor-ysegev-node2.example.com

Comment 10 Fabian Deutsch 2019-01-10 13:37:35 UTC
Sorry, I meant: kubectl get -o yaml $VMI

Comment 11 Yossi Segev 2019-01-10 13:53:00 UTC
Are you sure that yaml is the correct resource you want to get?
When I try to run this command, it fails with an error saying that yaml is not a known resource.

[cloud-user@cnv-executor-ysegev-master1 ~]$ kubectl get -o yaml $VMI
You must specify the type of resource to get. Use "kubectl api-resources" for a complete list of supported resources.

error: Required resource not specified.
Use "kubectl explain <resource>" for a detailed description of that resource (e.g. kubectl explain pods).
See 'kubectl get -h' for help and examples.
[cloud-user@cnv-executor-ysegev-master1 ~]$ 
[cloud-user@cnv-executor-ysegev-master1 ~]$ kubectl get yaml $VMI
error: the server doesn't have a resource type "yaml"
[cloud-user@cnv-executor-ysegev-master1 ~]$ 
[cloud-user@cnv-executor-ysegev-master1 ~]$ kubectl get yaml vm-cirros
error: the server doesn't have a resource type "yaml"

Comment 13 Yossi Segev 2019-01-10 15:09:49 UTC
[cloud-user@cnv-executor-ysegev-master1 ~]$ kubectl get vmi -o yaml vm-cirros
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachineInstance
metadata:
  creationTimestamp: 2019-01-06T14:34:05Z
  finalizers:
  - foregroundDeleteVirtualMachine
  generateName: vm-cirros
  generation: 1
  labels:
    kubevirt.io/nodeName: cnv-executor-ysegev-node1.example.com
    kubevirt.io/vm: vm-cirros
  name: vm-cirros
  namespace: kubevirt
  ownerReferences:
  - apiVersion: kubevirt.io/v1alpha2
    blockOwnerDeletion: true
    controller: true
    kind: VirtualMachine
    name: vm-cirros
    uid: b6de343e-0d11-11e9-b528-fa163eadcce0
  resourceVersion: "2856394"
  selfLink: /apis/kubevirt.io/v1alpha2/namespaces/kubevirt/virtualmachineinstances/vm-cirros
  uid: 1e46a38f-11c0-11e9-b528-fa163eadcce0
spec:
  domain:
    devices:
      disks:
      - disk:
          bus: virtio
        name: containerdisk
        volumeName: registryvolume
      - disk:
          bus: virtio
        name: cloudinitdisk
        volumeName: cloudinitvolume
      interfaces:
      - bridge: {}
        name: default
    features:
      acpi:
        enabled: true
    firmware:
      uuid: 0d2a2043-41c0-59c3-9b17-025022203668
    machine:
      type: q35
    resources:
      requests:
        memory: 64M
  networks:
  - name: default
    pod: {}
  terminationGracePeriodSeconds: 0
  volumes:
  - containerDisk:
      image: kubevirt/cirros-container-disk-demo:latest
    name: registryvolume
  - cloudInitNoCloud:
      userData: |
        #!/bin/sh

        echo 'printed from cloud-init userdata'
    name: cloudinitvolume
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: null
    status: "True"
    type: LiveMigratable
  interfaces:
  - ipAddress: 10.130.0.76
    mac: 0a:58:0a:82:00:4c
    name: default
  migrationMethod: BlockMigration
  nodeName: cnv-executor-ysegev-node1.example.com
  phase: Running

Comment 14 Yossi Segev 2019-01-10 15:10:08 UTC
[cloud-user@cnv-executor-ysegev-master1 ~]$ kubectl get vmi -o yaml vmi-ephemeral
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachineInstance
metadata:
  creationTimestamp: 2018-12-31T12:10:54Z
  finalizers:
  - foregroundDeleteVirtualMachine
  generation: 1
  labels:
    kubevirt.io/nodeName: cnv-executor-ysegev-node2.example.com
    special: vmi-ephemeral
  name: vmi-ephemeral
  namespace: kubevirt
  resourceVersion: "1441707"
  selfLink: /apis/kubevirt.io/v1alpha2/namespaces/kubevirt/virtualmachineinstances/vmi-ephemeral
  uid: 1f381856-0cf5-11e9-b528-fa163eadcce0
spec:
  domain:
    devices:
      disks:
      - disk:
          bus: virtio
        name: containerdisk
        volumeName: registryvolume
      interfaces:
      - bridge: {}
        name: default
    features:
      acpi:
        enabled: true
    firmware:
      uuid: 091426af-12ad-4048-837d-251b9551a86b
    machine:
      type: q35
    resources:
      requests:
        memory: 64M
  networks:
  - name: default
    pod: {}
  terminationGracePeriodSeconds: 0
  volumes:
  - containerDisk:
      image: kubevirt/cirros-container-disk-demo:latest
    name: registryvolume
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: null
    status: "True"
    type: LiveMigratable
  interfaces:
  - ipAddress: 10.129.0.62
    mac: 0a:58:0a:81:00:3e
    name: default
  migrationMethod: BlockMigration
  nodeName: cnv-executor-ysegev-node2.example.com
  phase: Running

Comment 15 Fabian Deutsch 2019-01-11 10:14:56 UTC
Hm, that's odd.
cloud-init is the only real difference; I wonder why it makes a difference. Reassigning to Networking.

Comment 17 Dan Kenigsberg 2019-03-26 19:19:35 UTC
Yossi, please further debug where the guest receives its faulty hostname.
Is it applied by cloud-init?
Is it received over DHCP?

Does it occur only with the bridge default, or also with masquerade and L2?
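For reference, the two pod-network bindings being compared differ only in the binding key on the default interface; a sketch, assuming the KubeVirt VMI spec layout seen in the dumps above:

```yaml
# Default pod-network interface with the bridge binding (as in the dumps):
spec:
  domain:
    devices:
      interfaces:
      - name: default
        bridge: {}
      # masquerade variant for comparison:
      # - name: default
      #   masquerade: {}
  networks:
  - name: default
    pod: {}
```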

Comment 18 Yossi Segev 2019-05-01 09:50:09 UTC
The bug looks a bit different now:
I'm testing on OCP 4.1/CNV 2.0, so my cluster is deployed on bare metal.
The "problematic" VMs no longer show the hosting node as the hostname, but rather the OS name, i.e. "cirros".
The non-problematic VMs show the VM name as the hostname, as expected.
Attached are the domxmls of the virt-launchers of 3 VMs:
- vm-cirros.domxml - where hostname is "vm-cirros" (valid).
- vmi-pxe-boot.domxml - where hostname is "vmi-pxe-boot" (valid).
- vmi-ephemeral.domxml - where hostname is "cirros" (buggy).

Comment 19 Yossi Segev 2019-05-01 09:51:47 UTC
Created attachment 1560694 [details]
vm-cirros.domxml - where hostname is "vm-cirros" (valid).

Comment 20 Yossi Segev 2019-05-01 09:53:39 UTC
Created attachment 1560695 [details]
vmi-pxe-boot.domxml - where hostname is "vmi-pxe-boot" (valid).

Comment 21 Yossi Segev 2019-05-01 09:54:34 UTC
Created attachment 1560696 [details]
vmi-ephemeral.domxml - where hostname is "cirros" (BUG).

Comment 22 Yossi Segev 2019-05-01 12:18:16 UTC
(In reply to Dan Kenigsberg from comment #17)
> Yossi, please further debug where does the guest receives its faulty
> hostname.
> Is it applied by cloud-init?
vmi-ephemeral, which is the erroneous one, doesn't include cloud-init; this is also reflected in the attached vmi-ephemeral.domxml.
The valid examples - vmi-pxe-boot and vm-cirros - both include cloud-init.

> Is it received over DHCP?
All examples except vmi-pxe-boot are deployed using the original, basic spec yaml files taken from upstream, hence all get a dynamic IP from DHCP.
The difference in the vmi-pxe-boot spec is that it includes a non-default secondary L2 bridge interface.
> 
> Does it occur only with the bridge default, or also with masquerade and L2?
vm-cirros (valid hostname) is configured with only the default interface, whereas vmi-pxe-boot (also valid hostname) is configured with non-default secondary L2 bridge.
vmi-ephemeral and vmi-flavor-small, which both get invalid hostname, include only the default bridge interface.

Comment 23 Yossi Segev 2019-05-02 13:49:43 UTC
The current scenario, where VMs with no cloud-init use some hard-coded default as the hostname, is acceptable.
The "cirros" hostname, now seen when deploying vmi-ephemeral.yaml and vmi-flavor-small.yaml (neither includes cloud-init, and both deploy the Cirros image), is acceptable as this default.
Therefore, this bug can be closed.