Bug 1700815 - [Kubevirt-Foreman] Mac address not shown even if VMI has mac address
Summary: [Kubevirt-Foreman] Mac address not shown even if VMI has mac address
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Compute Resources - CNV
Version: 6.6.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: 6.6.0
Assignee: Piotr Kliczewski
QA Contact: Vladimír Sedmík
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-04-17 12:15 UTC by Vatsal Parekh
Modified: 2019-10-22 19:49 UTC (History)
3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-22 19:49:48 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Github fog fog-kubevirt pull 122 0 None None None 2019-04-23 09:01:26 UTC

Description Vatsal Parekh 2019-04-17 12:15:10 UTC
Description of problem:
Even after importing a VM as a managed host from KubeVirt, the host doesn't show a MAC address, even though the VMI has one on the KubeVirt side.
Here are the manifests of the VMI and of the Host on the Foreman side.

[cloud-user@cnv-executor-vatsal-master1 ~]$ oc get vmi test-e2e-vm -o yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  creationTimestamp: 2019-04-17T07:21:58Z
  finalizers:
  - foregroundDeleteVirtualMachine
  generateName: test-e2e-vm
  generation: 1
  labels:
    kubevirt.io/nodeName: cnv-executor-vatsal-node2.example.com
    special: test-e2e-key
    vm.cnv.io/name: test-e2e-vm
  name: test-e2e-vm
  namespace: default
  ownerReferences:
  - apiVersion: kubevirt.io/v1alpha3
    blockOwnerDeletion: true
    controller: true
    kind: VirtualMachine
    name: test-e2e-vm
    uid: 520bce68-5f50-11e9-85fd-fa163e387aec
  resourceVersion: "6346993"
  selfLink: /apis/kubevirt.io/v1alpha3/namespaces/default/virtualmachineinstances/test-e2e-vm
  uid: 7cb6db54-60e1-11e9-85fd-fa163e387aec
spec:
  domain:
    cpu:
      cores: 1
    devices:
      disks:
      - disk:
          bus: virtio
        name: rootdisk
      interfaces:
      - bridge: {}
        name: default
    features:
      acpi:
        enabled: true
    firmware:
      uuid: 6b0c5a8b-2f12-5a83-b8e0-a153b2c46f44
    machine:
      type: q35
    resources:
      requests:
        memory: 64Mi
  networks:
  - name: default
    pod: {}
  volumes:
  - name: rootdisk
    persistentVolumeClaim:
      claimName: test-e2e-pvc
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: null
    message: cannot migrate VMI with non-shared PVCs
    reason: DisksNotLiveMigratable
    status: "False"
    type: LiveMigratable
  - lastProbeTime: null
    lastTransitionTime: 2019-04-17T07:22:26Z
    status: "True"
    type: Ready
  interfaces:
  - ipAddress: 10.130.0.180
    mac: 0a:58:0a:82:00:b4
    name: default
  migrationMethod: BlockMigration
  nodeName: cnv-executor-vatsal-node2.example.com
  phase: Running

YAML shown in Foreman:
---
parameters:
  hostgroup: kubevirt
  foreman_subnets:
  - name: example
    network: 172.16.0.1
    mask: 255.255.255.0
    gateway: ''
    dns_primary: ''
    dns_secondary: ''
    from: ''
    to: ''
    boot_mode: DHCP
    ipam: DHCP
    vlanid: 
    mtu: 1500
    network_type: IPv4
    description: ''
  foreman_interfaces:
  - ip: 172.16.0.203
    ip6: ''
    mac: 
    name: test-e2e-vm.example.com
    attrs: {}
    virtual: false
    link: true
    identifier: ''
    managed: true
    primary: true
    provision: true
    subnet:
      name: example
      network: 172.16.0.1
      mask: 255.255.255.0
      gateway: ''
      dns_primary: ''
      dns_secondary: ''
      from: ''
      to: ''
      boot_mode: DHCP
      ipam: DHCP
      vlanid: 
      mtu: 1500
      network_type: IPv4
      description: ''
    subnet6: 
    tag: 
    attached_to: 
    type: Interface
  location: Default Location
  location_title: Default Location
  organization: Default Organization
  organization_title: Default Organization
  domainname: example.com
  foreman_domain_description: example
  owner_name: Admin User
  owner_email: root.eng.rdu2.redhat.com
  ssh_authorized_keys: []
  foreman_users:
    admin:
      firstname: Admin
      lastname: User
      mail: root.eng.rdu2.redhat.com
      description: ''
      fullname: Admin User
      name: admin
      ssh_authorized_keys: []
  root_pw: "$5$QEnoSepZ62D6bPU2$mT8qktuMKVj6bZe/Uc0JKRg6CEb7hNcqQ42.kmHTO61"
  foreman_config_groups: []
  puppetmaster: ''
classes: []

Version-Release number of selected component (if applicable):
Plugin master + foreman nightly

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:
The MAC address is available on the VMI object, but it is missing from the Host YAML in Foreman.

Expected results:


Additional info:

Comment 3 Piotr Kliczewski 2019-04-17 13:21:36 UTC
I verified fog-kubevirt 1.2.3 and it seems to work correctly. Looking at the YAMLs in comment #1, I see that the interfaces in the VMI and Host objects have different names.

Comment 4 Moti Asayag 2019-04-22 15:17:24 UTC
Piotr, the issue here is that Fog::Kubevirt::Compute::Server doesn't store the MAC address if it is only reported on the VmInstance.
Therefore we should add handling for this in the same places where we invoke `Server.parse server` (lib/fog/kubevirt/compute/requests/get_server.rb and lib/fog/kubevirt/compute/requests/list_servers.rb). We already rely on the vminstance to complete the Server data, and mac_address should be taken care of there as well.
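The handling described above could look roughly like the sketch below. This is illustrative only, not the actual fog-kubevirt code (the real fix is in the fog-kubevirt pull request linked in this bug); the helper name `merge_vmi_mac` and the plain-hash shapes are assumptions standing in for the parsed Server attributes and the VMI's `status` section.

```ruby
# Hypothetical sketch: when the Server object carries no mac_address,
# fall back to the MAC reported on the VMI's status.interfaces
# (as seen in the `oc get vmi` output above).
def merge_vmi_mac(server_attrs, vmi_status)
  return server_attrs if server_attrs[:mac_address]

  # Take the first interface reported by the VMI, if any.
  iface = (vmi_status[:interfaces] || []).first
  server_attrs[:mac_address] = iface[:mac] if iface
  server_attrs
end

# Example using the values from this bug report:
server = { name: "test-e2e-vm", mac_address: nil }
vmi    = { interfaces: [{ ipAddress: "10.130.0.180", mac: "0a:58:0a:82:00:b4" }] }
merge_vmi_mac(server, vmi)
```

The same merge would be applied in both `get_server` and `list_servers`, so a host imported into Foreman picks up the MAC regardless of which request path populated it.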

Comment 8 Bryan Kearney 2019-10-22 19:49:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:3172

