Description of problem:
Even after importing a VM as a managed host from KubeVirt, the VM doesn't show a MAC address even though the VMI has one on the KubeVirt side. Here are the manifests of the VMI and of the Host on the Foreman side.

[cloud-user@cnv-executor-vatsal-master1 ~]$ oc get vmi test-e2e-vm -o yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  creationTimestamp: 2019-04-17T07:21:58Z
  finalizers:
  - foregroundDeleteVirtualMachine
  generateName: test-e2e-vm
  generation: 1
  labels:
    kubevirt.io/nodeName: cnv-executor-vatsal-node2.example.com
    special: test-e2e-key
    vm.cnv.io/name: test-e2e-vm
  name: test-e2e-vm
  namespace: default
  ownerReferences:
  - apiVersion: kubevirt.io/v1alpha3
    blockOwnerDeletion: true
    controller: true
    kind: VirtualMachine
    name: test-e2e-vm
    uid: 520bce68-5f50-11e9-85fd-fa163e387aec
  resourceVersion: "6346993"
  selfLink: /apis/kubevirt.io/v1alpha3/namespaces/default/virtualmachineinstances/test-e2e-vm
  uid: 7cb6db54-60e1-11e9-85fd-fa163e387aec
spec:
  domain:
    cpu:
      cores: 1
    devices:
      disks:
      - disk:
          bus: virtio
        name: rootdisk
      interfaces:
      - bridge: {}
        name: default
    features:
      acpi:
        enabled: true
    firmware:
      uuid: 6b0c5a8b-2f12-5a83-b8e0-a153b2c46f44
    machine:
      type: q35
    resources:
      requests:
        memory: 64Mi
  networks:
  - name: default
    pod: {}
  volumes:
  - name: rootdisk
    persistentVolumeClaim:
      claimName: test-e2e-pvc
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: null
    message: cannot migrate VMI with non-shared PVCs
    reason: DisksNotLiveMigratable
    status: "False"
    type: LiveMigratable
  - lastProbeTime: null
    lastTransitionTime: 2019-04-17T07:22:26Z
    status: "True"
    type: Ready
  interfaces:
  - ipAddress: 10.130.0.180
    mac: 0a:58:0a:82:00:b4
    name: default
  migrationMethod: BlockMigration
  nodeName: cnv-executor-vatsal-node2.example.com
  phase: Running

YAML shown in Foreman:
---
parameters:
  hostgroup: kubevirt
  foreman_subnets:
  - name: example
    network: 172.16.0.1
    mask: 255.255.255.0
    gateway: ''
    dns_primary: ''
    dns_secondary: ''
    from: ''
    to: ''
    boot_mode: DHCP
    ipam: DHCP
    vlanid:
    mtu: 1500
    network_type: IPv4
    description: ''
  foreman_interfaces:
  - ip: 172.16.0.203
    ip6: ''
    mac:
    name: test-e2e-vm.example.com
    attrs: {}
    virtual: false
    link: true
    identifier: ''
    managed: true
    primary: true
    provision: true
    subnet:
      name: example
      network: 172.16.0.1
      mask: 255.255.255.0
      gateway: ''
      dns_primary: ''
      dns_secondary: ''
      from: ''
      to: ''
      boot_mode: DHCP
      ipam: DHCP
      vlanid:
      mtu: 1500
      network_type: IPv4
      description: ''
    subnet6:
    tag:
    attached_to:
    type: Interface
  location: Default Location
  location_title: Default Location
  organization: Default Organization
  organization_title: Default Organization
  domainname: example.com
  foreman_domain_description: example
  owner_name: Admin User
  owner_email: root.eng.rdu2.redhat.com
  ssh_authorized_keys: []
  foreman_users:
    admin:
      firstname: Admin
      lastname: User
      mail: root.eng.rdu2.redhat.com
      description: ''
      fullname: Admin User
      name: admin
      ssh_authorized_keys: []
  root_pw: "$5$QEnoSepZ62D6bPU2$mT8qktuMKVj6bZe/Uc0JKRg6CEb7hNcqQ42.kmHTO61"
  foreman_config_groups: []
  puppetmaster: ''
classes: []

Version-Release number of selected component (if applicable):
Plugin master + Foreman nightly

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:
A MAC address is available on the VMI object, but it is missing from the Host YAML.

Expected results:

Additional info:
I verified fog-kubevirt 1.2.3 and it seems to work correctly. Looking at the YAMLs in comment #1, I see that the interfaces in the VMI and Host objects have different names.
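For reference, these are the interface entries from the two objects in comment #1: the VMI's interface is named "default" and carries a MAC, while the Foreman host interface is named after the host FQDN and its mac field is empty.

VMI (status.interfaces):
  interfaces:
  - ipAddress: 10.130.0.180
    mac: 0a:58:0a:82:00:b4
    name: default

Foreman host (foreman_interfaces):
  - ip: 172.16.0.203
    mac:
    name: test-e2e-vm.example.com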
Piotr, the issue here is that Fog::Kubevirt::Compute::Server doesn't store the MAC address when it is only reported on the VmInstance. We should therefore add handling for that in the same places where we invoke `Server.parse server` (lib/fog/kubevirt/compute/requests/get_server.rb and lib/fog/kubevirt/compute/requests/list_servers.rb). We already rely on the vminstance to complete the Server data there, and mac_address should be taken care of as well; see the sketch below.
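A minimal sketch of that handling, assuming hash-shaped data. merge_vmi_mac, vm_data, and vmi_data are hypothetical names and not the actual fog-kubevirt API; the real change would live next to the `Server.parse server` calls in the two request files above.

# Hypothetical helper: fill in the MAC from the VMI's runtime status
# when the parsed Server data does not already carry one.
# vm_data  - hash produced from the VirtualMachine (Server.parse)
# vmi_data - the matching VirtualMachineInstance as a hash
def merge_vmi_mac(vm_data, vmi_data)
  mac = vm_data[:mac_address]
  return vm_data unless mac.nil? || mac.empty?

  # status.interfaces on the VMI holds the runtime NIC info
  # (ipAddress, mac, name -- see the manifest in comment #1).
  iface = (vmi_data.dig(:status, :interfaces) || []).first
  vm_data[:mac_address] = iface[:mac] if iface
  vm_data
end

# With the data from comment #1:
vm  = { :name => "test-e2e-vm", :mac_address => nil }
vmi = { :status => { :interfaces => [
          { :ipAddress => "10.130.0.180",
            :mac => "0a:58:0a:82:00:b4",
            :name => "default" } ] } }
merge_vmi_mac(vm, vmi)[:mac_address] # => "0a:58:0a:82:00:b4"

Taking the first entry of status.interfaces matches the single-NIC case in the manifests above; a multi-NIC VMI would need matching by interface name instead.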
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:3172