Description of problem:
Deploying an environment with bare metal nodes simulated as VMs via the redfish interface with sushy-tools. After the master and worker nodes have been provisioned, the master nodes have UEFI firmware set while the worker nodes have BIOS firmware set.

There seems to be a config drift between the bootstrap Ironic and the baremetal-operator (BMO) Ironic. On the bootstrap Ironic we set "capabilities": "boot_mode:uefi", as introduced by https://github.com/openshift/installer/pull/2727/files, while in the BMO Ironic the nodes do not have this capability set:

(openstack-cli) [kni@provisionhost-0 ~]$ openstack baremetal node show openshift-worker-0 -f yaml
allocation_uuid: null
automated_clean: null
bios_interface: no-bios
boot_interface: ipxe
chassis_uuid: null
clean_step: {}
conductor: master-0.ocp-edge-cluster.qe.lab.redhat.com
conductor_group: ''
console_enabled: false
console_interface: no-console
created_at: '2020-02-10T20:51:10+00:00'
deploy_interface: direct
deploy_step: {}
description: null
driver: redfish
driver_info:
  deploy_kernel: http://172.22.0.3:6180/images/ironic-python-agent.kernel
  deploy_ramdisk: http://172.22.0.3:6180/images/ironic-python-agent.initramfs
  redfish_address: http://192.168.123.1:8000
  redfish_password: '******'
  redfish_system_id: /redfish/v1/Systems/9a7aabf3-0e44-4a25-8dc9-4ff8fcbe445d
  redfish_username: admin
driver_internal_info:
  agent_last_heartbeat: '2020-02-10T20:57:29.519171'
  agent_url: http://172.22.0.95:9999
  agent_version: 5.0.1.dev7
  deploy_boot_mode: bios
  deploy_steps: null
  is_whole_disk_image: true
  last_power_state_change: '2020-02-10T20:58:34.159989'
  root_uuid_or_disk_id: '0x00000000'
extra: {}
fault: null
inspect_interface: inspector
inspection_finished_at: null
inspection_started_at: '2020-02-10T20:51:12+00:00'
instance_info:
  configdrive: '******'
  image_checksum: http://172.22.0.3:6180/images/rhcos-44.81.202001241431.0-openstack.x86_64.qcow2/rhcos-44.81.202001241431.0-compressed.x86_64.qcow2.md5sum
  image_source: http://172.22.0.3:6180/images/rhcos-44.81.202001241431.0-openstack.x86_64.qcow2/rhcos-44.81.202001241431.0-compressed.x86_64.qcow2
  image_type: whole-disk-image
  image_url: '******'
  root_gb: 10
instance_uuid: 78285fd9-739c-4409-87a5-30310529e3bf
last_error: null
maintenance: false
maintenance_reason: null
management_interface: redfish
name: openshift-worker-0
network_interface: noop
owner: null
power_interface: redfish
power_state: power on
properties:
  capabilities: cpu_vt:true,cpu_aes:true,cpu_hugepages:true,cpu_hugepages_1g:true
  cpu_arch: x86_64
  cpus: '8'
  local_gb: 50
  memory_mb: '16384'
  root_device:
    name: /dev/sda
protected: false
protected_reason: null
provision_state: active
provision_updated_at: '2020-02-10T20:58:35+00:00'
raid_config: {}
raid_interface: no-raid
rescue_interface: no-rescue
reservation: null
resource_class: null
storage_interface: noop
target_power_state: null
target_provision_state: null
target_raid_config: {}
traits: []
updated_at: '2020-02-10T20:58:35+00:00'
uuid: 29817852-edcc-4c94-beca-a9d6101374f0
vendor_interface: no-vendor

Note that driver_internal_info shows deploy_boot_mode: bios and the node's properties.capabilities lacks boot_mode:uefi, which matches the observed worker firmware mode.

Version: 4.4.0-0.nightly-2020-02-10-143346
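For anyone hitting this before a fix lands, a possible manual workaround (a sketch only, not verified here; the node name and the existing capability string are taken from the output above) is to set the boot_mode capability explicitly on the BMO Ironic node, mirroring what the bootstrap Ironic does:

# Sketch of a manual workaround: add boot_mode:uefi to the node's existing
# capabilities so subsequent deploys use UEFI. The capability list must be
# passed as a single comma-separated string, so the existing capabilities
# from "openstack baremetal node show" are repeated here.
(openstack-cli) [kni@provisionhost-0 ~]$ openstack baremetal node set openshift-worker-0 \
    --property capabilities='boot_mode:uefi,cpu_vt:true,cpu_aes:true,cpu_hugepages:true,cpu_hugepages_1g:true'

This would only affect future deployments; a worker already provisioned in BIOS mode would need to be redeployed to pick up UEFI.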
Priority is still accurate; re-adding the upcomingsprint keyword to confirm we are aware of the bug and will endeavour to work on it in coming sprints.
*** Bug 1852617 has been marked as a duplicate of this bug. ***
This should be resolved by https://github.com/metal3-io/baremetal-operator/pull/586
Is it the same as https://bugzilla.redhat.com/show_bug.cgi?id=1862964?
Yes, this is a duplicate of bug 1862964.
*** This bug has been marked as a duplicate of bug 1862964 ***