Bug 1662674
| Summary: | Some VMIs are created with the hosting node's name as hostname | | |
|---|---|---|---|
| Product: | Container Native Virtualization (CNV) | Reporter: | Yossi Segev <ysegev> |
| Component: | Networking | Assignee: | Sebastian Scheinkman <sscheink> |
| Status: | CLOSED NOTABUG | QA Contact: | Meni Yakove <myakove> |
| Severity: | low | Docs Contact: | |
| Priority: | medium | | |
| Version: | 1.4 | CC: | cnv-qe-bugs, gouyang, ncredi, sgordon, vparekh, ysegev |
| Target Milestone: | --- | | |
| Target Release: | 2.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-05-02 13:51:28 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | vm-cirros.domxml, vmi-pxe-boot.domxml, vmi-ephemeral.domxml | | |
Description (Yossi Segev, 2018-12-31 14:06:01 UTC)
It looks like, if the VM yaml doesn't have a cloudinitdisk, the guest will not pick up the VMI name from the metadata.

Is VMI.spec.hostname set in the yamls?

See also: https://kubevirt.io/user-guide/docs/latest/using-virtual-machines/dns.html

(In reply to Fabian Deutsch from comment #2)
> Is VMI.spec.hostname set in the yamls?
>
> See also:
> https://kubevirt.io/user-guide/docs/latest/using-virtual-machines/dns.html

No, spec.hostname is not set in any of these yamls (vm-cirros.yaml, vmi-fedora.yaml, vmi-flavor-small.yaml, vmi-ephemeral.yaml), neither those which end up with the VMI name as the hostname nor those that end up with the node name as the hostname.

Yossi, for every yaml, please provide the image, whether cloud-init is configured, and what the VM name finally is, to understand the scheme and impact of this bug, i.e.:

- yaml: bar.yaml
  image: cirros
  cloudinit: no
- …

- vm-cirros.yaml: containerDisk.image: cirros-container-disk-demo; cloudInitNoCloud exists
- vmi-fedora.yaml: containerDisk.image: fedora-cloud-container-disk-demo; cloudInitNoCloud exists
- vmi-ephemeral.yaml: containerDisk.image: cirros-container-disk-demo; cloudInitNoCloud doesn't exist
- vmi-flavor-small.yaml: containerDisk.image: cirros-container-disk-demo; cloudInitNoCloud doesn't exist

Right, now for each of these I need to understand what the node/hostname is.

- vm-cirros.yaml: containerDisk.image: cirros-container-disk-demo; cloudInitNoCloud exists; hostname: vm-cirros
- vmi-fedora.yaml: containerDisk.image: fedora-cloud-container-disk-demo; cloudInitNoCloud exists; hostname: vmi-fedora
- vmi-ephemeral.yaml: containerDisk.image: cirros-container-disk-demo; cloudInitNoCloud doesn't exist; hostname: hosting node's name (cnv-executor-ysegev-node2)
- vmi-flavor-small.yaml: containerDisk.image: cirros-container-disk-demo; cloudInitNoCloud doesn't exist; hostname: hosting node's name (cnv-executor-ysegev-node2)

Thanks. Please provide `kubectl get vmi $VMI` for vm-cirros.yaml and vmi-ephemeral.yaml.
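For reference, the KubeVirt user guide linked above documents a `spec.hostname` field that pins the guest hostname regardless of cloud-init. A minimal hedged sketch of setting it (v1alpha2 shapes mirrored from the resource dumps in this report; the names are illustrative, not one of the shipped example yamls):

```yaml
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachineInstance
metadata:
  name: vmi-ephemeral            # the VMI name alone did not become the
                                 # guest hostname in the buggy cases here
spec:
  hostname: vmi-ephemeral        # explicit guest hostname (spec.hostname)
  domain:
    devices:
      disks:
      - disk:
          bus: virtio
        name: containerdisk
        volumeName: registryvolume
    machine:
      type: q35
    resources:
      requests:
        memory: 64M
  volumes:
  - containerDisk:
      image: kubevirt/cirros-container-disk-demo:latest
    name: registryvolume
```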
```
[cloud-user@cnv-executor-ysegev-master1 ~]$ kubectl get vmi $VMI
NAME            AGE   PHASE     IP            NODENAME
vm-cirros       3d    Running   10.130.0.76   cnv-executor-ysegev-node1.example.com
vmi-ephemeral   10d   Running   10.129.0.62   cnv-executor-ysegev-node2.example.com
```

Sorry, I meant: `kubectl get -o yaml $VMI`

Are you sure that yaml is the correct resource you want to get? When I try to run this command, it fails with an error saying that yaml is not a known resource.

```
[cloud-user@cnv-executor-ysegev-master1 ~]$ kubectl get -o yaml $VMI
You must specify the type of resource to get. Use "kubectl api-resources" for a complete list of supported resources.
error: Required resource not specified.
Use "kubectl explain <resource>" for a detailed description of that resource (e.g. kubectl explain pods).
See 'kubectl get -h' for help and examples.

[cloud-user@cnv-executor-ysegev-master1 ~]$ kubectl get yaml $VMI
error: the server doesn't have a resource type "yaml"

[cloud-user@cnv-executor-ysegev-master1 ~]$ kubectl get yaml vm-cirros
error: the server doesn't have a resource type "yaml"
```

```
[cloud-user@cnv-executor-ysegev-master1 ~]$ kubectl get vmi -o yaml vm-cirros
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachineInstance
metadata:
  creationTimestamp: 2019-01-06T14:34:05Z
  finalizers:
  - foregroundDeleteVirtualMachine
  generateName: vm-cirros
  generation: 1
  labels:
    kubevirt.io/nodeName: cnv-executor-ysegev-node1.example.com
    kubevirt.io/vm: vm-cirros
  name: vm-cirros
  namespace: kubevirt
  ownerReferences:
  - apiVersion: kubevirt.io/v1alpha2
    blockOwnerDeletion: true
    controller: true
    kind: VirtualMachine
    name: vm-cirros
    uid: b6de343e-0d11-11e9-b528-fa163eadcce0
  resourceVersion: "2856394"
  selfLink: /apis/kubevirt.io/v1alpha2/namespaces/kubevirt/virtualmachineinstances/vm-cirros
  uid: 1e46a38f-11c0-11e9-b528-fa163eadcce0
spec:
  domain:
    devices:
      disks:
      - disk:
          bus: virtio
        name: containerdisk
        volumeName: registryvolume
      - disk:
          bus: virtio
        name: cloudinitdisk
        volumeName: cloudinitvolume
      interfaces:
      - bridge: {}
        name: default
    features:
      acpi:
        enabled: true
    firmware:
      uuid: 0d2a2043-41c0-59c3-9b17-025022203668
    machine:
      type: q35
    resources:
      requests:
        memory: 64M
  networks:
  - name: default
    pod: {}
  terminationGracePeriodSeconds: 0
  volumes:
  - containerDisk:
      image: kubevirt/cirros-container-disk-demo:latest
    name: registryvolume
  - cloudInitNoCloud:
      userData: |
        #!/bin/sh
        echo 'printed from cloud-init userdata'
    name: cloudinitvolume
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: null
    status: "True"
    type: LiveMigratable
  interfaces:
  - ipAddress: 10.130.0.76
    mac: 0a:58:0a:82:00:4c
    name: default
  migrationMethod: BlockMigration
  nodeName: cnv-executor-ysegev-node1.example.com
  phase: Running
```

```
[cloud-user@cnv-executor-ysegev-master1 ~]$ kubectl get vmi -o yaml vmi-ephemeral
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachineInstance
metadata:
  creationTimestamp: 2018-12-31T12:10:54Z
  finalizers:
  - foregroundDeleteVirtualMachine
  generation: 1
  labels:
    kubevirt.io/nodeName: cnv-executor-ysegev-node2.example.com
    special: vmi-ephemeral
  name: vmi-ephemeral
  namespace: kubevirt
  resourceVersion: "1441707"
  selfLink: /apis/kubevirt.io/v1alpha2/namespaces/kubevirt/virtualmachineinstances/vmi-ephemeral
  uid: 1f381856-0cf5-11e9-b528-fa163eadcce0
spec:
  domain:
    devices:
      disks:
      - disk:
          bus: virtio
        name: containerdisk
        volumeName: registryvolume
      interfaces:
      - bridge: {}
        name: default
    features:
      acpi:
        enabled: true
    firmware:
      uuid: 091426af-12ad-4048-837d-251b9551a86b
    machine:
      type: q35
    resources:
      requests:
        memory: 64M
  networks:
  - name: default
    pod: {}
  terminationGracePeriodSeconds: 0
  volumes:
  - containerDisk:
      image: kubevirt/cirros-container-disk-demo:latest
    name: registryvolume
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: null
    status: "True"
    type: LiveMigratable
  interfaces:
  - ipAddress: 10.129.0.62
    mac: 0a:58:0a:81:00:3e
    name: default
  migrationMethod: BlockMigration
  nodeName: cnv-executor-ysegev-node2.example.com
  phase: Running
```

Hm, that's odd. cloud-init is the only real difference; I wonder why it makes a difference. Reassigning it to network.

Yossi, please further debug where the guest receives its faulty hostname:
- Is it applied by cloud-init?
- Is it received over DHCP?
- Does it occur only with the default bridge, or also with masquerade and L2?

The bug looks a bit different now: I'm testing on OCP 4.1/CNV 2.0, so my cluster is deployed on bare metal. The "problematic" VMs don't show the hosting node as hostname anymore, but rather the OS name, i.e. "cirros". The non-problematic VMs show the VM name as hostname, as expected.

Attached are the domxmls of the virt-launchers of 3 VMs:
- vm-cirros.domxml, where hostname is "vm-cirros" (valid).
- vmi-pxe-boot.domxml, where hostname is "vmi-pxe-boot" (valid).
- vmi-ephemeral.domxml, where hostname is "cirros" (buggy).

Created attachment 1560694 [details]
vm-cirros.domxml - where hostname is "vm-cirros" (valid).
Created attachment 1560695 [details]
vmi-pxe-boot.domxml - where hostname is "vmi-pxe-boot" (valid).
Created attachment 1560696 [details]
vmi-ephemeral.domxml - where hostname is "cirros" (BUG).
(In reply to Dan Kenigsberg from comment #17)
> Yossi, please further debug where the guest receives its faulty hostname.
> Is it applied by cloud-init?

vmi-ephemeral, which is the erroneous one, doesn't include cloud-init, which is also reflected in the attached vmi-ephemeral.domxml. The valid examples, vmi-pxe-boot and vm-cirros, both include cloud-init.

> Is it received over DHCP?

All examples except vmi-pxe-boot are deployed using the original, basic spec yaml files taken from upstream, hence all get a dynamic IP from DHCP. The difference in the vmi-pxe-boot spec is that it includes a non-default secondary L2 bridge interface.

> Does it occur only with the bridge default, or also with masquerade and L2?

vm-cirros (valid hostname) is configured with only the default interface, whereas vmi-pxe-boot (also valid hostname) is configured with a non-default secondary L2 bridge. vmi-ephemeral and vmi-flavor-small, which both get an invalid hostname, include only the default bridge interface.

The current scenario, where VMs with no cloud-init use some hard-coded default as the hostname, is acceptable. The "cirros" hostname, which is now seen when deploying vmi-ephemeral.yaml and vmi-flavor-small.yaml (both don't include cloud-init, and both deploy a Cirros image), is acceptable as this default. Therefore this bug can be closed.
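The thread's conclusion is that the guest hostname tracks cloud-init when a cloudInitNoCloud volume is present (as in vm-cirros.yaml) and falls back to an image default otherwise. For readers who want the VMI name as the guest hostname, a hedged sketch of wiring it through cloud-init; the disk/volume shape mirrors the vm-cirros dump above, but the `#cloud-config` userData is an illustrative assumption, not taken from the shipped example yamls:

```yaml
# Fragment of a VMI spec: a cloudInitNoCloud volume attached as a disk.
# The userData pins the hostname via standard cloud-init cloud-config.
spec:
  domain:
    devices:
      disks:
      - disk:
          bus: virtio
        name: cloudinitdisk
        volumeName: cloudinitvolume
  volumes:
  - cloudInitNoCloud:
      userData: |
        #cloud-config
        hostname: vmi-ephemeral   # explicit guest hostname via cloud-init
    name: cloudinitvolume
```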