Bug 1237250
Summary: | aarch64: libguestfs should now prefer virtio-pci | |
---|---|---|---
Product: | Red Hat Enterprise Linux 7 | Reporter: | Andrew Jones <drjones>
Component: | libguestfs | Assignee: | Richard W.M. Jones <rjones>
Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs>
Severity: | medium | Docs Contact: |
Priority: | high | |
Version: | 7.2 | CC: | jcm, jfeeney, leiwang, linl, ptoscano, rjones, wshi, yoguo
Target Milestone: | rc | |
Target Release: | --- | |
Hardware: | aarch64 | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | libguestfs-1.36.1-1.el7 | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2017-08-01 22:08:55 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 1359086 | |
Bug Blocks: | 1212027, 1221569, 1288337, 1301891 | |
Attachments: | | |
Description
Andrew Jones
2015-06-30 15:52:49 UTC
I couldn't get this to work upstream, unfortunately. Any tips? I'm using latest qemu from git and kernel 4.1.0-0.rc7.git0.1.fc23.aarch64.

The qemu command line is:

```
/home/rjones/d/qemu/aarch64-softmmu/qemu-system-aarch64 \
    -global virtio-blk-pci.scsi=off \
    -nodefconfig \
    -enable-fips \
    -nodefaults \
    -display none \
    -M virt \
    -cpu host \
    -machine accel=kvm:tcg \
    -m 768 \
    -no-reboot \
    -rtc driftfix=slew \
    -global kvm-pit.lost_tick_policy=discard \
    -drive if=pflash,format=raw,file=/usr/share/edk2.git/aarch64/QEMU_EFI-pflash.raw,readonly \
    -drive if=pflash,format=raw,file=/home/rjones/d/libguestfs/tmp/libguestfsUATPeC/vars.fd.2 \
    -kernel /home/rjones/d/libguestfs/tmp/.guestfs-1000/appliance.d/kernel \
    -initrd /home/rjones/d/libguestfs/tmp/.guestfs-1000/appliance.d/initrd \
    -device virtio-scsi-pci,id=scsi \
    -drive file=/home/rjones/d/libguestfs/tmp/libguestfsUATPeC/scratch.1,cache=unsafe,format=raw,id=hd0,if=none \
    -device scsi-hd,drive=hd0 \
    -drive file=/home/rjones/d/libguestfs/tmp/.guestfs-1000/appliance.d/root,snapshot=on,id=appliance,cache=unsafe,if=none \
    -device scsi-hd,drive=appliance \
    -device virtio-serial-pci \
    -serial stdio \
    -chardev socket,path=/home/rjones/d/libguestfs/tmp/libguestfsUATPeC/guestfsd.sock,id=channel0 \
    -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
    -append 'panic=1 console=ttyAMA0 earlyprintk=pl011,0x9000000 ignore_loglevel efi-rtc=noprobe udevtimeout=6000 udev.event-timeout=6000 no_timer_check acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm-256color'
```

UEFI loads and runs, apparently normally. The kernel loads and starts running, apparently normally. However, the kernel cannot see the SCSI drives. The virtio_scsi.ko and virtio_pci.ko modules are present and are loaded by the guest kernel. I will attach the full log shortly.

Created attachment 1044732 [details]
log.txt
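(Editorial sketch, not part of the original log: one way to see from inside a guest whether the kernel actually discovered any virtio devices, and over which transport. The paths are the standard Linux sysfs layout; on the failing boot described above, this list would be empty even though virtio_pci.ko was loaded.)

```shell
# Check the virtio bus in sysfs. For each probed virtio device, the
# resolved device path shows its parent bus: a PCI address (e.g.
# .../0000:00:02.0/...) for virtio-pci, a platform/mmio node for
# virtio-mmio. No entries means no virtio devices were discovered.
report=""
if [ -d /sys/bus/virtio/devices ]; then
  for dev in /sys/bus/virtio/devices/*; do
    [ -e "$dev" ] || continue
    report="$report$dev -> $(readlink -f "$dev")
"
  done
  [ -n "$report" ] || report="virtio bus registered but no devices probed"
else
  report="no virtio bus on this kernel"
fi
printf '%s\n' "$report"
```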
No difference after removing acpi=off from the command line.

Created attachment 1044770 [details]
0001-arm-Prefer-virtio-pci-instead-of-virtio-mmio-RHBZ-12.patch
The patch would be something like this one, but as mentioned
above I cannot get it to work on Fedora.
There's an interesting thing: the patch in comment 3 *works* on armv7hl. Now it's not an exact comparison because I'm using quite a different (actually, somewhat older) kernel on 32-bit ARM: 4.0.6-300.fc22.armv7hl+lpae. So I don't exactly know what to make of that.

(In reply to Richard W.M. Jones from comment #1)
> I couldn't get this to work upstream, unfortunately. Any tips?
> I'm using latest qemu from git and kernel 4.1.0-0.rc7.git0.1.fc23.aarch64

I'm not sure the Fedora kernel has everything it needs for ACPI+PCI. I was able to boot a RHELSA guest with a nearly identical command line to the one in comment 1. I only changed the kernel and initrd to 4.1.0-0.rc7.10.el7.aarch64 (the initrd also needed virtio-pci added), changed the root disk image path to my guest image, and removed acpi=off from the command line. I also used latest qemu from git.

I'm not sure what to make of comment 5 either (I haven't looked at the status of the arm kernel and PCI support), but I guess it's possible that the PCIe host bridge support actually works with that guest kernel and devicetree.

In RHEL 7.3 virtio-mmio will be preferred. In 7.4 we can look at defaulting to virtio-pci.

Thanks - looking forward to 7.4 :)

Upstream commit is 4a9af91e3632047579c3a7e011c1484c97bd959a, so this is expected to be fixed in the rebase.

packages info:
libguestfs-1.36.3-1.el7.aarch64
qemu-kvm-rhev-2.6.0-28.el7_3.9.aarch64
kernel-4.5.0-15.el7.aarch64

I didn't understand this bug clearly. I just executed the libguestfs-test-tool command on a rhel7.3 machine with an aarch64 cpu. I don't know how to verify whether virtio-pci has been used instead of virtio-mmio in the new libguestfs version. Any other tips that you can give me about the reproduce steps? Thanks.

(In reply to YongkuiGuo from comment #12)
> packages info:
> libguestfs-1.36.3-1.el7.aarch64
> qemu-kvm-rhev-2.6.0-28.el7_3.9.aarch64
> kernel-4.5.0-15.el7.aarch64
>
> I didn't understand this bug clearly. I just executed the
> libguestfs-test-tool command on a rhel7.3 machine with an aarch64 cpu.
> I don't know how to verify whether virtio-pci has been used instead of
> virtio-mmio in the new libguestfs version. Any other tips that you can
> give me about the reproduce steps? Thanks.

libguestfs-test-tool is the correct approach. Can you attach the full log from that to this bug and I will show you what to look for.

(In reply to Richard W.M. Jones from comment #13)
> libguestfs-test-tool is the correct approach. Can you attach the
> full log from that to this bug and I will show you what to look
> for.

And I should say that you actually need to run libguestfs-test-tool both ways:

LIBGUESTFS_BACKEND=direct libguestfs-test-tool > test.direct 2>&1
libguestfs-test-tool > test.libvirt 2>&1

and for the libvirt case you'll also need to find the right file ~/.cache/libvirt/qemu/log/guestfs-XXXX.log and attach that.

Created attachment 1267845 [details]
the output with "LIBGUESTFS_BACKEND=direct libguestfs-test-tool"
Created attachment 1267846 [details]
the output with "libguestfs-test-tool"
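(Editorial sketch: once the two logs from comment 14 are saved, the transport in use can be read off from the virtio device names in the generated qemu command line. The helper name and the sample log text are hypothetical; the naming convention — `*-pci` for virtio-pci, `*-device` for virtio-mmio — is qemu's.)

```shell
# Extract the distinct virtio device names from a saved
# libguestfs-test-tool log. Seeing virtio-*-pci names means the
# appliance was started with virtio-pci; virtio-*-device names would
# mean virtio-mmio.
extract_virtio_devices() {
  grep -Eo 'virtio-[a-z]+-(pci|device)' "$1" | sort -u
}

# Hypothetical fragment standing in for a real test.direct log:
printf '%s\n' \
  '-device virtio-scsi-pci,id=scsi' \
  '-device virtio-serial-pci' > sample.log

extract_virtio_devices sample.log
```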
(In reply to YongkuiGuo from comment #15)
> Created attachment 1267845 [details]
> the output with "LIBGUESTFS_BACKEND=direct libguestfs-test-tool"

In this one (running qemu directly), notice the use of virtio-scsi-pci:

```
-device virtio-scsi-pci,id=scsi \
-drive file=/tmp/libguestfs9QSxvc/scratch.1,cache=unsafe,format=raw,id=hd0,if=none \
-device scsi-hd,drive=hd0 \
-drive file=/var/tmp/.guestfs-0/appliance.d/root,snapshot=on,id=appliance,cache=unsafe,if=none,format=raw \
-device scsi-hd,drive=appliance \
```

Previously this would have been using 'virtio-scsi-device', which is qemu's name for virtio-mmio.

(In reply to YongkuiGuo from comment #16)
> Created attachment 1267846 [details]
> the output with "libguestfs-test-tool"

You'll need to find /var/log/libvirt/qemu/guestfs-mn4kkegshfkaihjr.log to see the differences, since there is no visible difference in the libvirt XML that libguestfs generates, but there should be a difference in the qemu command line which libvirt generates.

Created attachment 1267852 [details]
../libvirt/qemu/log/guestfs-xxx.log for libvirt case
(In reply to YongkuiGuo from comment #18)
> Created attachment 1267852 [details]
> ../libvirt/qemu/log/guestfs-xxx.log for libvirt case

In this one it's the presence again of:

```
-device virtio-scsi-pci,id=scsi0,bus=pci.1,addr=0x0
```

which shows that this is using virtio-pci. If it said "virtio-scsi-device", it would still be using virtio-mmio. This bug can be marked as VERIFIED based on the evidence provided.

In the file guestfs-mn4kkegshfkaihjr.log, I found virtio-scsi-pci, virtio-serial-pci and virtio-rng-pci, which are the devices modified in 0001-arm-Prefer-virtio-pci-instead-of-virtio-mmio-RHBZ-12.patch. Now I understand. Thanks again.

Verified with packages info:
libguestfs-1.36.3-1.el7.aarch64

Steps:
1. Prepare a rhel7.3 machine with an aarch64 cpu
2. LIBGUESTFS_BACKEND=direct libguestfs-test-tool
```
... ...
-device virtio-scsi-pci,id=scsi \
-drive file=/tmp/libguestfs9QSxvc/scratch.1,cache=unsafe,format=raw,id=hd0,if=none \
```
3. libguestfs-test-tool
4. cat ./libvirt/qemu/log/guestfs-xxx.log
```
... ...
-device virtio-scsi-pci,id=scsi0,bus=pci.1,addr=0x0
-device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x0
-device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.3,addr=0x0
```

It is clearly using virtio-pci, rather than virtio-mmio. So verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2023
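(Editorial sketch: the verification check used throughout this thread reduces to string-matching the `-device` arguments in the qemu command line. The helper name is hypothetical; the device-name convention — `virtio-*-pci` for virtio-pci, `virtio-*-device` for virtio-mmio — is qemu's.)

```shell
# Classify one qemu -device argument by the virtio transport it implies.
virtio_transport() {
  case "$1" in
    *virtio-*-pci*)    echo "virtio-pci" ;;
    *virtio-*-device*) echo "virtio-mmio" ;;
    *)                 echo "unknown" ;;
  esac
}

# The line from the verified log above classifies as virtio-pci:
virtio_transport '-device virtio-scsi-pci,id=scsi0,bus=pci.1,addr=0x0'
```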