Bug 1865883 - aarch64: migration failure: unable to find any master var store for loader
Summary: aarch64: migration failure: unable to find any master var store for loader
Keywords:
Status: CLOSED DUPLICATE of bug 1852910
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: libvirt
Version: 8.2
Hardware: aarch64
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 8.3
Assignee: Virtualization Maintenance
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: 1825378
 
Reported: 2020-08-04 12:48 UTC by Andrew Jones
Modified: 2020-09-09 07:29 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-09-09 07:29:16 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
domain xml (4.12 KB, text/plain), 2020-08-10 14:40 UTC, Andrew Jones

Description Andrew Jones 2020-08-04 12:48:35 UTC
When migrating a guest installed with a minimal virt-install command line, the migration fails with

error: operation failed: unable to find any master var store for loader: /usr/share/edk2/aarch64/QEMU_EFI-silent-pflash.raw

Changing the guest XML to specify the loader as /usr/share/AAVMF/AAVMF_CODE.fd, instead of /usr/share/edk2/aarch64/QEMU_EFI-silent-pflash.raw, works around the problem.
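For reference, a minimal sketch of what that workaround looks like in the
domain XML (the machine type and nvram path here are illustrative, not
copied from the failing guest):

  <os>
    <!-- machine type and nvram path below are illustrative -->
    <type arch='aarch64' machine='virt'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/AAVMF/AAVMF_CODE.fd</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/guest_VARS.fd</nvram>
  </os>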

Comment 1 Andrea Bolognani 2020-08-10 13:29:13 UTC
Please include the domain XML as well as version numbers
for all virtualization components on both sides of the
migration.

Comment 2 Andrew Jones 2020-08-10 14:40:03 UTC
Created attachment 1710966 [details]
domain xml


Both machines have the same packages installed:

libvirt-6.6.0-2.module+el8.3.0+7567+dc41c0a9.aarch64
qemu-kvm-5.0.0-2.module+el8.3.0+7379+0505d6ca.aarch64
edk2-aarch64-20200602gitca407c7246bf-2.el8.noarch
virt-install-2.2.1-3.el8.noarch

Installed the guest with:

virt-install -n guest -l /path/to/latest/rhel8 --vcpus 16 --disk size=6 --memory 4096 --network bridge=br0

Migrated with:

virsh migrate --live --verbose rhel8 qemu+ssh://$PEER/system


/var/lib/libvirt/images/ is an NFS share mounted on both hosts.
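The loader the install ended up with can be double-checked with something
like the following (domain name as used in the migrate command above):

  $ virsh dumpxml rhel8 | grep -A3 '<os'

which should show the /usr/share/edk2/aarch64/QEMU_EFI-silent-pflash.raw
loader that the error message complains about.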

Comment 6 Andrea Bolognani 2020-09-04 16:21:52 UTC
I managed to reproduce this today, with zero setup.

We should not hit that error message, since there is a firmware
definition file on the system that clearly defines the relationship
between the firmware image in use and the corresponding varstore
template:

  $ cat /usr/share/qemu/firmware/60-edk2-aarch64.json
  {
      "description": "UEFI firmware for ARM64 virtual machines",
      "interface-types": [
          "uefi"
      ],
      "mapping": {
          "device": "flash",
          "executable": {
              "filename": "/usr/share/edk2/aarch64/QEMU_EFI-silent-pflash.raw",
              "format": "raw"
          },
          "nvram-template": {
              "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
              "format": "raw"
          }
      },
      "targets": [
          {
              "architecture": "aarch64",
              "machines": [
                  "virt-*"
              ]
          }
      ],
      "features": [
  
      ],
      "tags": [
  
      ]
  }
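A quick way to tell whether libvirt actually consumed this descriptor
(just a suggestion, assuming the firmware autoselection code is what
populates the domain capabilities in 6.6) is to look for the image in
the domcapabilities output:

  $ # if the descriptor was parsed, the edk2 image should be listed
  $ virsh domcapabilities --arch aarch64 --machine virt | grep -A4 '<loader'

If parsing worked, QEMU_EFI-silent-pflash.raw should appear as a <value>
under <loader>.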

My current theory is that the file is not being parsed for some
reason, and this hardcoded list from src/qemu/qemu_conf.c ends up
being used instead:

  #ifndef DEFAULT_LOADER_NVRAM
  # define DEFAULT_LOADER_NVRAM \
      "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd:" \
      "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd:" \
      "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd:" \
      "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd"
  #endif
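If that theory is right, a possible stopgap (an untested sketch, using
the documented nvram option in /etc/libvirt/qemu.conf) would be to add
the edk2 image and its varstore template to the fallback list explicitly:

  # each entry pairs a firmware image with its varstore template
  nvram = [
    "/usr/share/edk2/aarch64/QEMU_EFI-silent-pflash.raw:/usr/share/edk2/aarch64/vars-template-pflash.raw"
  ]

libvirtd would need a restart to pick this up.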

This would explain why pointing to AAVMF_CODE.fd makes things work.
Note that

  $ ls -l /usr/share/AAVMF/AAVMF_CODE.fd
  lrwxrwxrwx. 1 root root 42 Aug 10 02:44 /usr/share/AAVMF/AAVMF_CODE.fd -> ../edk2/aarch64/QEMU_EFI-silent-pflash.raw

so it's literally the same file we're talking about.

I'll investigate further, and hopefully get to the bottom of it all,
next week.

Comment 7 Michal Privoznik 2020-09-08 17:26:55 UTC
I think this is a dup of bug 1852910. It has the same symptoms.

Comment 8 Andrew Jones 2020-09-08 18:08:27 UTC
(In reply to Michal Privoznik from comment #7)
> I think this is a dup of bug 1852910. It has the same symptoms.

Indeed. Sorry Andrea, I completely forgot that I had already reported this.

Comment 9 Andrea Bolognani 2020-09-09 07:29:16 UTC
Closing as duplicate.

*** This bug has been marked as a duplicate of bug 1852910 ***

