Bug 1231716 - aarch64 guest install, dracut needs to add virtio-pci
Summary: aarch64 guest install, dracut needs to add virtio-pci
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: systemd
Version: 7.2
Hardware: aarch64
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Assignee: systemd-maint
QA Contact: qe-baseos-daemons
URL:
Whiteboard:
Depends On: 1278165
Blocks: 1212027 1289485 1313485
 
Reported: 2015-06-15 09:44 UTC by Andrew Jones
Modified: 2018-03-12 13:07 UTC (History)
14 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-01-26 17:02:25 UTC
Target Upstream Version:
Embargoed:



Description Andrew Jones 2015-06-15 09:44:21 UTC
AArch64 guests can now use virtio-pci, and we're converting over to it from virtio-mmio. The RHELSA installation initrd in compose/.../pxeboot/ already has the virtio_pci module, so I can initiate a virtio-pci guest install and it succeeds. Strangely, even in that case, the installed guest ends up with an initrd that does not contain virtio-pci (I expected dracut to pick up all currently loaded modules, but apparently it doesn't). Also, not so strangely, if I install a guest using virtio-mmio and then convert it to use virtio-pci, the guest's initrd needs virtio-pci added. It would be better to always add the module to the initrd unconditionally, so that we can both reboot a virtio-pci guest after install and convert existing guests.
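One way to force this (a sketch only, not necessarily the fix that eventually shipped; the drop-in file name is illustrative) is a dracut configuration fragment that adds the driver to every generated initramfs regardless of what is loaded at build time:

```shell
# Hypothetical drop-in: /etc/dracut.conf.d/virtio-pci.conf
# add_drivers+= forces the named kernel module into every initramfs
# dracut builds, even if it is not loaded when dracut runs.
add_drivers+=" virtio_pci "
```

After placing the fragment, rebuilding with `dracut -f` and checking `lsinitrd | grep virtio_pci` should show the module without having to boot the guest with virtio-pci.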

Comment 6 Andrew Jones 2015-09-23 12:55:24 UTC
Hmm, I just noticed that this bug is supposedly already ON_QA. However, as you can see from comment 4, an install from the latest compose still has the problem. This bug has not been fixed.

Comment 10 Cole Robinson 2015-09-29 19:52:37 UTC
I'm poking a bit at virtio-pci + aarch64. I don't think putting virtio-pci in dracut is the whole story though.

The libvirt support for aarch64 + pci uses qemu's generic PCI express host hw. The qemu patches from January are here:

https://lists.gnu.org/archive/html/qemu-devel/2015-01/msg00332.html

In patch #3 Alex pointed out that linux kernel support for arm64 generic PCI still needed work:

https://lists.gnu.org/archive/html/qemu-devel/2015-01/msg00331.html

And it seems some of those changes he mentions only landed in kernel.git in August, and are queued for 4.3. For example:

https://github.com/torvalds/linux/commit/aa4a5c0d2d7e3c30f9df033ea0367f148bb369f6

That patch isn't in the RHELSA kernel, or in Fedora's, so I'm not sure libvirt + aarch64 + pci ever worked for any RH distro. Though, drjones, I see your demo qemu command line at bug 1231719#c2 doesn't use an explicit PCI bridge, so maybe it's taking a different code path, and that's why it works for the install case.

Also FWIW, virtio_pci is in the initrd of Fedora 23 install media, and in the post-install initrd, even if using virtio-mmio. But it isn't enough to make libvirt + aarch64 + pci work.

Comment 11 Andrew Jones 2015-09-30 09:15:33 UTC
(In reply to Cole Robinson from comment #10)
> I'm poking a bit at virtio-pci + aarch64. I don't think putting virtio-pci
> in dracut is the whole story though.
> 
> The libvirt support for aarch64 + pci uses qemu's generic PCI express host
> hw. The qemu patches from January are here:
> 
> https://lists.gnu.org/archive/html/qemu-devel/2015-01/msg00332.html
> 
> In patch #3 Alex pointed out that linux kernel support for arm64 generic PCI
> still needed work:
> 
> https://lists.gnu.org/archive/html/qemu-devel/2015-01/msg00331.html

We don't care about this, it's for DT boots.

> 
> And it seems some of those changes he mentions only landed in kernel.git in
> August, and are queued for 4.3. For example:
> 
> https://github.com/torvalds/linux/commit/aa4a5c0d2d7e3c30f9df033ea0367f148bb369f6
> 
> That patch isn't in rhelsa kernel, or fedora. So not sure if libvirt +
> aarch64 + pci ever worked for any RH distro. Though drjones I see your demo
> qemu command line at bug 1231719#c2 doesn't use an explicit PCI bridge on
> the command line, so maybe it's using some different code path, and that's
> why it works for the install case.

Yes, the ACPI code path. Works for install and normal boot/use case.

> 
> Also FWIW, virtio_pci is in the initrd of Fedora 23 install media, and in
> the post-install initrd, even if using virtio-mmio. But it isn't enough to
> make libvirt + aarch64 + pci work.

Fedora requires acpi=force added to the guest command line; RHELSA guests use ACPI by default. I'm pretty sure I've pointed this out in other places/BZs before; yes, see bug 1231727, for example. I didn't point it out here because this bug is specific to RHELSA dracut not putting the virtio-pci module into the guest's initrd, which doesn't require booting with virtio-pci to verify (lsinitrd is enough).

Comment 12 Harald Hoyer 2015-10-09 15:01:02 UTC
Please retry with dracut-033-358.el7

Comment 13 Andrew Jones 2015-10-18 17:45:45 UTC
(In reply to Harald Hoyer from comment #12)
> Please retry with dracut-033-358.el7

Hmm... has it been changed so that only the modules loaded during install end up in the resulting initramfs? As I stated in the description, we should unconditionally add virtio-pci, allowing us to install with virtio-mmio and then convert guests to virtio-pci later. Here are my test results:

# rpm -qa | grep dracut
dracut-network-033-358.el7.aarch64
dracut-config-rescue-033-358.el7.aarch64
dracut-033-358.el7.aarch64

# lsinitrd /boot/initramfs-4.2.0-0.21.el7.aarch64.img | grep virtio
-rw-r--r--   1 root     root        62720 Oct  6 22:57 usr/lib/modules/4.2.0-0.21.el7.aarch64/kernel/drivers/net/virtio_net.ko
-rw-r--r--   1 root     root        32840 Oct  6 22:56 usr/lib/modules/4.2.0-0.21.el7.aarch64/kernel/drivers/scsi/virtio_scsi.ko
drwxr-xr-x   2 root     root            0 Oct 16 16:22 usr/lib/modules/4.2.0-0.21.el7.aarch64/kernel/drivers/virtio
-rw-r--r--   1 root     root        21608 Oct  6 22:56 usr/lib/modules/4.2.0-0.21.el7.aarch64/kernel/drivers/virtio/virtio.ko
-rw-r--r--   1 root     root        21104 Oct  6 22:56 usr/lib/modules/4.2.0-0.21.el7.aarch64/kernel/drivers/virtio/virtio_mmio.ko
-rw-r--r--   1 root     root        28368 Oct  6 22:56 usr/lib/modules/4.2.0-0.21.el7.aarch64/kernel/drivers/virtio/virtio_ring.ko

Also, we should add [back] virtio-blk (I think it was there before). And, while we're at it, it would be nice to add all the other virtio drivers that may be needed early in boot: virtio-gpu, virtio-input, virtio-console, virtio-rng.

Here's a list of all the virtio modules we build:

# grep VIRTIO /boot/config-4.2.0-0.21.el7.aarch64 
CONFIG_VIRTIO_BLK=m
CONFIG_SCSI_VIRTIO=m
CONFIG_VIRTIO_NET=m
CONFIG_VIRTIO_CONSOLE=m
CONFIG_HW_RANDOM_VIRTIO=m
CONFIG_DRM_VIRTIO_GPU=m
CONFIG_VIRTIO=m
CONFIG_VIRTIO_PCI=m
CONFIG_VIRTIO_PCI_LEGACY=y
CONFIG_VIRTIO_BALLOON=m
CONFIG_VIRTIO_INPUT=m
CONFIG_VIRTIO_MMIO=m
# CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES is not set
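Assuming the usual mapping from those CONFIG_* options to module names (virtio_blk, virtio_scsi, virtio_net, virtio_console, virtio_rng, virtio_gpu, virtio_input, virtio_balloon; the drop-in file name below is hypothetical), a dracut fragment covering the whole set could look like:

```shell
# Hypothetical drop-in: /etc/dracut.conf.d/virtio-all.conf
# Module names are a best-effort mapping from the CONFIG_* list above.
add_drivers+=" virtio_pci virtio_mmio virtio_blk virtio_scsi virtio_net "
add_drivers+=" virtio_console virtio_rng virtio_gpu virtio_input virtio_balloon "
```

Rebuilding with `dracut -f` for the running kernel and re-running `lsinitrd | grep virtio` would then be enough to verify, without booting the guest under virtio-pci.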

Thanks,
drew

Comment 20 Lukáš Nykrýn 2015-11-09 13:15:54 UTC
Looks like a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1278165

Comment 23 Andrew Jones 2015-11-09 15:59:40 UTC
I've tested the systemd test package and it works. Once that package gets into a compose we can then try a guest install to make sure all the virtio drivers get added to the initrd.

Comment 24 Andrea Bolognani 2016-05-12 11:54:05 UTC
Seems to be working with the latest nightly compose:

# systemd-detect-virt 
kvm

# uname -a
Linux localhost.localdomain 4.5.0-0.35.el7.aarch64 #1 SMP Fri May 6 08:25:11 EDT 2016 aarch64 aarch64 aarch64 GNU/Linux

# lsinitrd | grep virtio_pci
-rw-r--r--   1 root     root        41350 May  6 15:18 usr/lib/modules/4.5.0-0.35.el7.aarch64/kernel/drivers/virtio/virtio_pci.ko

Comment 26 Lukáš Nykrýn 2016-11-01 14:04:30 UTC
This should be a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1278165, but I would like someone from QE to take a look at it.

Comment 27 John Feeney 2018-01-26 16:53:35 UTC
Just trying to clean up old AArch64 bzs and found this. Since comment #26 suggests this is a duplicate, can we close it since it has been waiting patiently for an answer for over a year? Thanks.

Comment 28 Andrew Jones 2018-01-26 16:56:11 UTC
(In reply to John Feeney from comment #27)
> Just trying to clean up old AArch64 bzs and found this. Since comment #26
> suggests this is a duplicate, can we close it since it has been waiting
> patiently for an answer for over a year? Thanks.

Should be closable. We switched to virtio-pci long ago and installations work just fine, so this must already be resolved.

Comment 29 John Feeney 2018-01-26 17:02:25 UTC
Thanks much.

