Bug 1607841 - "bus-type" is "unknown" in the result of guest-get-fsinfo cmd of guest agent
Summary: "bus-type" is "unknown" in the result of guest-get-fsinfo cmd of guest agent
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: virtio-win
Version: 8.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Basil Salman
QA Contact: xiagao
URL:
Whiteboard:
Depends On: 1682882
Blocks:
 
Reported: 2018-07-24 11:25 UTC by xiagao
Modified: 2020-06-02 09:26 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-06-02 09:26:03 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
IDE Device manager view by connections (145.16 KB, image/png), 2019-05-22 15:04 UTC, Bishara AbuHattoum
IDE ATA Disk (37.24 KB, image/png), 2019-05-22 15:05 UTC, Bishara AbuHattoum
IDE ATA Channel (38.63 KB, image/png), 2019-05-22 15:05 UTC, Bishara AbuHattoum
SCSI Device manager view by connection (19.81 KB, image/png), 2019-05-22 15:06 UTC, Bishara AbuHattoum

Description xiagao 2018-07-24 11:25:39 UTC
Description of problem:
As described in the summary.
And from https://bugzilla.redhat.com/show_bug.cgi?id=1565431#c32
" Technically a storage miniport doesn't deal with IOCTL_STORAGE_QUERY_PROPERTY IRP directly. It is a class driver job. But yes, class should return the relevant information collected from virtio-scsi miniport driver (and not only)  in response to such request. In case of virtio-scsi the bus type should be BusTypeSas (0x0A). "

Version-Release number of selected component (if applicable):
mingw-qemu-ga-win-7.6.1-2.el7ev

How reproducible:
100%

Steps to Reproduce:
1. Boot up a Windows guest with the virtio-serial driver and guest agent installed:
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pci.0,addr=0x5 \
    -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/win2008-sp2-64-virtio-scsi.qcow2 \
    -device scsi-hd,id=image1,drive=drive_image1 \
    -drive id=drive_image2,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/disk.qcow2 \
    -device scsi-hd,id=image2,drive=drive_image2 \

2. Issue the guest-get-fsinfo command from the host over the QGA socket (a minimal client sketch follows the output below):
# nc -U /var/tmp/qga.sock
{"execute":"guest-get-fsinfo"}
{"return": [{"name": "\\\\?\\Volume{7d9c10b7-8914-11e8-81e7-806e6f6e6963}\\", "mountpoint": "E:\\", "disk": [{"bus-type": "unknown", "bus": 0, "unit": 0, "pci-controller": {"bus": -1, "slot": -1, "domain": -1, "function": -1}, "target": 0}], "type": "CDFS"}, {"name": "\\\\?\\Volume{7d9c10b6-8914-11e8-81e7-806e6f6e6963}\\", "mountpoint": "D:\\", "disk": [{"bus-type": "unknown", "bus": 0, "unit": 0, "pci-controller": {"bus": -1, "slot": -1, "domain": -1, "function": -1}, "target": 0}], "type": "CDFS"}, {"name": "\\\\?\\Volume{3eb920f1-898f-11e8-ae9d-806e6f6e6963}\\", "mountpoint": "C:\\", "disk": [{"bus-type": "unknown", "bus": 0, "unit": 0, "pci-controller": {"bus": -1, "slot": -1, "domain": 0, "function": 2}, "target": 0}], "type": "NTFS"}, {"name": "\\\\?\\Volume{3eb920f0-898f-11e8-ae9d-806e6f6e6963}\\", "mountpoint": "S:\\", "disk": [{"bus-type": "unknown", "bus": 0, "unit": 0, "pci-controller": {"bus": -1, "slot": -1, "domain": 0, "function": 1}, "target": 0}], "type": "NTFS"}]}

Actual results:
bus-type is "unknown"

Expected results:
"disk": [{"bus-type": "scsi", "bus": 0, "unit": 0, "pci-controller": {"bus": 0, "slot": 5, "domain": 0, "function": 0}, "target": 0}]

Additional info:
Using the same qemu command line for a RHEL guest gives the following result.
{"execute":"guest-get-fsinfo"}
{"return": [{"name": "sda1", "mountpoint": "/boot", "disk": [{"bus-type": "scsi", "bus": 0, "unit": 0, "pci-controller": {"bus": 0, "slot": 5, "domain": 0, "function": 0}, "target": 0}], "type": "xfs"}, {"name": "dm-0", "mountpoint": "/", "disk": [{"bus-type": "scsi", "bus": 0, "unit": 0, "pci-controller": {"bus": 0, "slot": 5, "domain": 0, "function": 0}, "target": 0}], "type": "xfs"}]}

Comment 3 Yvugenfi@redhat.com 2019-04-02 14:52:58 UTC
I think the problem was solved upstream and we have already backported it to mingw-qemu-ga-win-7.7.0.1-3.el7ev.


Can you please test mingw-qemu-ga-win-7.7.0.1-3.el7ev?

Comment 4 xiagao 2019-04-03 07:10:18 UTC
Tested on the latest build:
mingw-qemu-ga-win-100.0.0.0-3.el7ev

Result:
{"execute":"guest-get-fsinfo"}
{"return": [{"name": "\\\\?\\Volume{10afcef8-0000-0000-0000-d01200000000}\\", "total-bytes": 31895580672, "mountpoint": "C:\\", "disk": [{"serial": "MYDISK-1", "bus-type": "scsi", "bus": 0, "unit": 0, "pci-controller": {"bus": -1, "slot": -1, "domain": -1, "function": -1}, "dev": "\\\\.\\PhysicalDrive0", "target": 0}], "used-bytes": 14914523136, "type": "NTFS"}, {"name": "\\\\?\\Volume{52ba8851-0000-0000-0000-100000000000}\\", "total-bytes": 5365559296, "mountpoint": "D:\\", "disk": [{"serial": "MYDISK-2", "bus-type": "sas", "bus": 0, "unit": 0, "pci-controller": {"bus": -1, "slot": -1, "domain": -1, "function": -1}, "dev": "\\\\.\\PhysicalDrive1", "target": 0}], "used-bytes": 29200384, "type": "NTFS"}, {"name": "\\\\?\\Volume{10afcef8-0000-0000-0000-100000000000}\\", "total-bytes": 314568704, "mountpoint": "S:\\", "disk": [{"serial": "MYDISK-1", "bus-type": "scsi", "bus": 0, "unit": 0, "pci-controller": {"bus": -1, "slot": -1, "domain": -1, "function": -1}, "dev": "\\\\.\\PhysicalDrive0", "target": 0}], "used-bytes": 35090432, "type": "NTFS"}]}

qemu command line (one virtio-blk disk and one virtio-scsi disk):
-device pcie-root-port,id=pcie-root-port-6,slot=6,chassis=6,bus=pcie.0  \
-blockdev driver=file,cache.direct=off,cache.no-flush=on,filename=win2019.qcow2,node-name=system_disk_file \
-blockdev driver=qcow2,node-name=system_disk,file=system_disk_file \
-device virtio-blk-pci,bus=pcie-root-port-6,drive=system_disk,id=disk1,werror=stop,rerror=stop,serial=MYDISK-1 \

-device pcie-root-port,id=pcie-root-port-5,slot=5,chassis=5,addr=0x5,bus=pcie.0  \
-device virtio-scsi-pci,id=scsi1,bus=pcie-root-port-5 -drive file=my-data-disk1.qcow2,if=none,id=drive-data-disk,format=raw,cache=none,aio=native,werror=stop,rerror=stop,discard=on -device scsi-hd,drive=drive-data-disk,bus=scsi1.0,id=data-disk,serial=MYDISK-2 \


The 'bus-type' field now has a value, but for the virtio-blk disk and the virtio-scsi disk, is "pci-controller": {"bus": -1, "slot": -1, "domain": -1, "function": -1} correct?
By the way, is there any documentation describing the fields in the result of "guest-get-fsinfo"?

Thanks,
xiaoling

Comment 5 Yvugenfi@redhat.com 2019-04-03 11:52:13 UTC
Double checking.

Comment 6 Bishara AbuHattoum 2019-04-16 14:54:04 UTC
After double-checking with Yan, it seems that the bug is still present.

guest-get-fsinfo should fetch each disk's PCI controller bus, slot, domain, and function. However, the Windows code tries to fetch the disk's own PCI slot number instead of its PCI controller's slot number, and a disk does not have a PCI slot number; only its PCI controller does.
The behavior of the code in that situation is that if fetching any one of bus, slot, domain, or function fails, the rest fail as well, so bus = -1, slot = -1, domain = -1, and function = -1.
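
A small sketch of that all-or-nothing fallback as just described; this is not the actual qemu-ga Windows code (which is C against the SetupAPI), and the lookup callback is hypothetical:

PCI_FIELDS = ("domain", "bus", "slot", "function")

def pci_controller_info(lookup):
    """Return all four PCI fields, or the -1 placeholder block if any lookup fails."""
    try:
        return {field: lookup(field) for field in PCI_FIELDS}
    except LookupError:
        # One failed lookup (e.g. the disk's non-existent PCI slot number)
        # takes the whole block down to -1, as seen in the output above.
        return {field: -1 for field in PCI_FIELDS}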

Currently working on finding the solution.

Comment 7 Bishara AbuHattoum 2019-05-07 08:23:51 UTC
It seems that the bug was indeed fixed upstream, but another bug is now present, which is why my earlier verdict was that the bug was still there.
For SCSI the bug is now fixed; tested with Windows 10 x64 on SCSI, guest-get-fsinfo returns:

{"return": [{"name": "\\\\?\\Volume{f3ee8117-0000-0000-0000-602200000000}\\", "total-bytes": 31633436672, "mountpoint": "C:\\", "disk": [{"bus-type": "sas", "bus": 0, "unit": 0, "pci-controller": {"bus": 2, "slot": 5, "domain": 0, "function": 0}, "dev": "\\\\.\\PhysicalDrive0", "target": 0}], "used-bytes": 14707494912, "type": "NTFS"}, {"name": "\\\\?\\Volume{366267d0-0000-0000-0000-100000000000}\\", "total-bytes": 10734268416, "mountpoint": "F:\\", "disk": [{"bus-type": "sas", "bus": 0, "unit": 0, "pci-controller": {"bus": 2, "slot": 5, "domain": 0, "function": 0}, "dev": "\\\\.\\PhysicalDrive1", "target": 1}], "used-bytes": 45527040, "type": "NTFS"}, {"name": "\\\\?\\Volume{f3ee8117-0000-0000-0000-100000000000}\\", "mountpoint": "System Reserved", "disk": [{"bus-type": "sas", "bus": 0, "unit": 0, "pci-controller": {"bus": 2, "slot": 5, "domain": 0, "function": 0}, "dev": "\\\\.\\PhysicalDrive0", "target": 0}], "type": "NTFS"}]}

But another bug is now present: when using the default SATA or IDE, guest-get-fsinfo returns:

{"return": [{"name": "\\\\?\\Volume{e8534a5f-565c-11e9-a063-806e6f6e6963}\\", "total-bytes": 316628992, "mountpoint": "D:\\", "disk": [{"bus-type": "sata", "bus": 0, "unit": 0, "pci-controller": {"bus": -1, "slot": -1, "domain": -1, "function": -1}, "dev": "\\\\?\\Volume{e8534a5f-565c-11e9-a063-806e6f6e6963}", "target": 0}], "used-bytes": 316628992, "type": "CDFS"}, {"name": "\\\\?\\Volume{fc4eba08-0000-0000-0000-602200000000}\\", "total-bytes": 536293142528, "mountpoint": "C:\\", "disk": [{"serial": "QM00013", "bus-type": "sata", "bus": 0, "unit": 0, "pci-controller": {"bus": -1, "slot": -1, "domain": -1, "function": -1}, "dev": "\\\\.\\PhysicalDrive0", "target": 0}], "used-bytes": 21838770176, "type": "NTFS"}, {"name": "\\\\?\\Volume{fc4eba08-0000-0000-0000-100000000000}\\", "mountpoint": "System Reserved", "disk": [{"serial": "QM00013", "bus-type": "sata", "bus": 0, "unit": 0, "pci-controller": {"bus": -1, "slot": -1, "domain": -1, "function": -1}, "dev": "\\\\.\\PhysicalDrive0", "target": 0}], "type": "NTFS"}]}

I think this behavior is due to the SATA and IDE controllers sitting on the PCI bus but not behind a PCI slot, which is why there is no PCI slot number; when fetching the PCI slot number fails, the other lookups fail too, and -1 is assigned to all of them.
Is that the desired behavior? If it is, then the code now works as expected upstream; if not, then this is a bug that needs to be addressed.
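
If the placeholder is kept, a consumer of guest-get-fsinfo could treat the all -1 block as "no PCI address available". A hypothetical check, based only on the JSON shown in this report:

def has_pci_address(disk):
    """True only when the pci-controller block carries a real PCI address."""
    pc = disk.get("pci-controller", {})
    return all(pc.get(key, -1) >= 0 for key in ("domain", "bus", "slot", "function"))

# Against the SATA/IDE output above this returns False; against the SCSI
# output in this comment it returns True.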

Comment 8 Bishara AbuHattoum 2019-05-22 15:04:40 UTC
Created attachment 1572046 [details]
IDE Device manager view by connections

Comment 9 Bishara AbuHattoum 2019-05-22 15:05:11 UTC
Created attachment 1572047 [details]
IDE ATA Disk

Comment 10 Bishara AbuHattoum 2019-05-22 15:05:40 UTC
Created attachment 1572048 [details]
IDE ATA Channel

Comment 11 Bishara AbuHattoum 2019-05-22 15:06:30 UTC
Created attachment 1572049 [details]
SCSI Device manager view by connection

Comment 12 Bishara AbuHattoum 2019-05-22 15:22:25 UTC
The topology of an IDE disk is different from that of a SCSI disk (see the attached device manager screenshots in view-by-connection mode).
A SCSI disk has a SCSI controller as its parent; the controller is a PCI device with PCI info (slot, bus, device, function), and guest-get-fsinfo retrieves this info to fill the pci-controller field.
But an IDE disk has an ATA channel as its parent (controller), and the ATA channel only has an identifier as its info, which is very different from the SCSI scenario.
So the (bus, slot, device, function) format is a PCI format and is not applicable to non-PCI devices.

So for a SCSI Disk the result would be something like:
"disk": [{"bus-type": "scsi", "bus": 0, "unit": 0, "pci-controller": {"bus": 0, "slot": 5, "domain": 0, "function": 0}, "target": 0}]

But for an IDE Disk it should be something like this:
"disk": [{"bus-type": "ide", "channel": 0, "unit": 0, "something-applicable-with-ide": {"channel": 0, "more-ide-applicable": 0}, "target": 0}]

Right now the code assumes that the parent of the disk is a PCI storage controller and tries to retrieve (bus, slot, device, function).

Comment 13 Bishara AbuHattoum 2019-06-11 10:30:32 UTC
After consulting with Vadim (vrozenfe), I think it is enough for now that we support the SAS and SCSI disk bus types, but in the future we should consider supporting all disk bus types just to be on the safe side.
I suggest we close the bug as WONTFIX.

Comment 14 xiagao 2019-06-27 07:29:43 UTC
Hi Bishara AbuHattoum,

As you said in comment 7, "It seems that the bug was really fixed on the upstream..", could you apply the fix internally?
I tested with the latest mingw-qemu-ga-win and still hit the same issue.

version: mingw-qemu-ga-win-100.0.0.0-3

 {"return": [{"name": "\\\\?\\Volume{c183de31-9387-11e9-9fd2-806e6f6e6963}\\", "total-bytes": 258742272, "mountpoint": "E:\\", "disk": [{"bus-type": "sas", "bus": 0, "unit": 0, "pci-controller": {"bus": -1, "slot": -1, "domain": -1, "function": -1}, "dev": "\\\\?\\Volume{c183de31-9387-11e9-9fd2-806e6f6e6963}", "target": 2}], "used-bytes": 258742272, "type": "CDFS"}, {"name": "\\\\?\\Volume{c183de30-9387-11e9-9fd2-806e6f6e6963}\\", "total-bytes": 2936563712, "mountpoint": "D:\\", "disk": [{"bus-type": "sas", "bus": 0, "unit": 0, "pci-controller": {"bus": -1, "slot": -1, "domain": -1, "function": -1}, "dev": "\\\\?\\Volume{c183de30-9387-11e9-9fd2-806e6f6e6963}", "target": 1}], "used-bytes": 2936563712, "type": "CDFS"}, {"name": "\\\\?\\Volume{a0f74882-0000-0000-0000-d01200000000}\\", "total-bytes": 31895580672, "mountpoint": "C:\\", "disk": [{"bus-type": "sas", "bus": 0, "unit": 0, "pci-controller": {"bus": -1, "slot": -1, "domain": -1, "function": -1}, "dev": "\\\\.\\PhysicalDrive0", "target": 0}], "used-bytes": 19398578176, "type": "NTFS"}, {"name": "\\\\?\\Volume{a0f74882-0000-0000-0000-100000000000}\\", "total-bytes": 314568704, "mountpoint": "S:\\", "disk": [{"bus-type": "sas", "bus": 0, "unit": 0, "pci-controller": {"bus": -1, "slot": -1, "domain": -1, "function": -1}, "dev": "\\\\.\\PhysicalDrive0", "target": 0}], "used-bytes": 35090432, "type": "NTFS"}]}

Comment 15 Bishara AbuHattoum 2019-07-02 08:17:40 UTC
Hi

Can you provide the QEMU command line you used?

Comment 16 xiagao 2019-07-02 08:58:39 UTC
{"execute":"guest-get-fsinfo"}
{"return": [{"name": "\\\\?\\Volume{0cca1c1a-9b5c-11e9-b622-806e6f6e6963}\\", "total-bytes": 4843268096, "mountpoint": "E:\\", "disk": [{"bus-type": "sata", "bus": 0, "unit": 0, "pci-controller": {"bus": -1, "slot": -1, "domain": -1, "function": -1}, "dev": "\\\\?\\Volume{0cca1c1a-9b5c-11e9-b622-806e6f6e6963}", "target": 0}], "used-bytes": 4843268096, "type": "UDF"}, {"name": "\\\\?\\Volume{0cca1c19-9b5c-11e9-b622-806e6f6e6963}\\", "total-bytes": 258000896, "mountpoint": "D:\\", "disk": [{"bus-type": "sata", "bus": 0, "unit": 0, "pci-controller": {"bus": -1, "slot": -1, "domain": -1, "function": -1}, "dev": "\\\\?\\Volume{0cca1c19-9b5c-11e9-b622-806e6f6e6963}", "target": 0}], "used-bytes": 258000896, "type": "CDFS"}, {"name": "\\\\?\\Volume{ef89e07e-0000-0000-0000-d01200000000}\\", "total-bytes": 31895580672, "mountpoint": "C:\\", "disk": [{"serial": "MYDISK-1", "bus-type": "scsi", "bus": 0, "unit": 0, "pci-controller": {"bus": -1, "slot": -1, "domain": -1, "function": -1}, "dev": "\\\\.\\PhysicalDrive0", "target": 0}], "used-bytes": 15312441344, "type": "NTFS"}, {"name": "\\\\?\\Volume{ef89e07e-0000-0000-0000-100000000000}\\", "total-bytes": 314568704, "mountpoint": "S:\\", "disk": [{"serial": "MYDISK-1", "bus-type": "scsi", "bus": 0, "unit": 0, "pci-controller": {"bus": -1, "slot": -1, "domain": -1, "function": -1}, "dev": "\\\\.\\PhysicalDrive0", "target": 0}], "used-bytes": 35090432, "type": "NTFS"}]}


qemu command line:
/usr/libexec/qemu-kvm -name vm1 -enable-kvm -m 3G -smp 24,maxcpus=24,cores=12,threads=1,sockets=2 -nodefaults -cpu Broadwell,+kvm_pv_unhalt,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,xsave -rtc base=localtime,driftfix=none -boot order=cd,menu=on -monitor stdio -M q35 -vga std -vnc :11 -qmp tcp:0:4444,server,nowait -device piix3-usb-uhci,id=usb -device usb-tablet,id=input0 \
-device pcie-root-port,id=pcie-root-port-6,slot=6,chassis=6,bus=pcie.0 \
-blockdev driver=file,cache.direct=off,cache.no-flush=on,filename=win2019-64-virtio.qcow2,node-name=system_disk_file \
-blockdev driver=qcow2,node-name=system_disk,file=system_disk_file \
-device virtio-blk-pci,bus=pcie-root-port-6,drive=system_disk,id=disk1,werror=stop,rerror=stop,serial=MYDISK-1,bootindex=0 \
-device pcie-root-port,id=pcie-root-port-7,slot=7,chassis=7,addr=0x7,bus=pcie.0 -netdev tap,script=/etc/qemu-ifup,downscript=no,id=hostnet0,vhost=on,queues=4 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:52:11:36:3f:00,bus=pcie-root-port-7,mq=on,vectors=10 \
-drive file=/home/kvm_autotest_root/iso/ISO/Win2019/en_windows_server_2019_x64_dvd_4cb967d8.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,drive=drive-ide0-1-0,id=ide0-1-0 -cdrom /home/kvm_autotest_root/iso/windows/virtio-win-1.9.7-3.el8.iso \
-device pcie-root-port,id=pcie-root-port-4,slot=4,chassis=4,bus=pcie.0 -device virtio-serial-pci,id=virtio-serial1,max_ports=31,bus=pcie-root-port-4 -chardev socket,id=channel2,path=/tmp/helloworld2,server,nowait -device virtserialport,bus=virtio-serial1.0,chardev=channel2,name=org.qemu.guest_agent.0

Comment 18 Basil Salman 2020-05-11 14:53:24 UTC
Hi,

I failed to reproduce with the latest mingw-qemu-ga-win (https://brewweb.engineering.redhat.com/brew/buildinfo?buildID=1125096).
Can you confirm that this bug is fixed?

Comment 19 xiagao 2020-05-12 09:26:33 UTC
Test mingw-qemu-ga-win-101.1.0-1.el7ev

As in the above comment, bus-type is now correct, but the pci-controller value is {"bus": -1, "slot": -1, "domain": -1, "function": -1}, which is not correct.

{"return": [{"name": "\\\\?\\Volume{6b527181-93cf-11ea-8854-806e6f6e6963}\\", "total-bytes": 3695179776, "mountpoint": "E:\\", "disk": [{"bus-type": "sata", "bus": 0, "unit": 0, "pci-controller": {"bus": -1, "slot": -1, "domain": -1, "function": -1}, "dev": "\\\\?\\Volume{6b527181-93cf-11ea-8854-806e6f6e6963}", "target": 0}], "used-bytes": 3695179776, "type": "UDF"}, {"name": "\\\\?\\Volume{6b527180-93cf-11ea-8854-806e6f6e6963}\\", "total-bytes": 592447488, "mountpoint": "D:\\", "disk": [{"bus-type": "sata", "bus": 0, "unit": 0, "pci-controller": {"bus": -1, "slot": -1, "domain": -1, "function": -1}, "dev": "\\\\?\\Volume{6b527180-93cf-11ea-8854-806e6f6e6963}", "target": 0}], "used-bytes": 592447488, "type": "CDFS"}, {"name": "\\\\?\\Volume{7a0d8df2-0000-0000-0000-d01200000000}\\", "total-bytes": 31895580672, "mountpoint": "C:\\", "disk": [{"bus-type": "scsi", "bus": 0, "unit": 0, "pci-controller": {"bus": -1, "slot": -1, "domain": -1, "function": -1}, "dev": "\\\\.\\PhysicalDrive1", "target": 0}], "used-bytes": 15401558016, "type": "NTFS"}, {"name": "\\\\?\\Volume{7a0d8df2-0000-0000-0000-100000000000}\\", "total-bytes": 314568704, "mountpoint": "S:\\", "disk": [{"bus-type": "scsi", "bus": 0, "unit": 0, "pci-controller": {"bus": -1, "slot": -1, "domain": -1, "function": -1}, "dev": "\\\\.\\PhysicalDrive1", "target": 0}], "used-bytes": 35090432, "type": "NTFS"}]}


qemu command line:
-blockdev driver=file,cache.direct=off,cache.no-flush=on,filename=$1,node-name=system_file \
-blockdev driver=qcow2,node-name=drive_system_disk,file=system_file \
-object iothread,id=thread0 -device virtio-blk-pci,iothread=thread0,drive=drive_system_disk,id=virtio-disk0,bootindex=0 \


-drive file=/home/kvm_autotest_root/iso/ISO/Win2012/en_windows_server_2012_x64_dvd_915478.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,drive=drive-ide0-1-0,id=ide0-1-0 \
-cdrom /home/kvm_autotest_root/iso/windows/virtio-win-prewhql-0.1-179.iso \

-blockdev driver=file,cache.direct=off,cache.no-flush=on,filename=/home/data-disk1.qcow2,node-name=data_file \
-blockdev driver=qcow2,node-name=drive_data_disk,file=data_file \
-device virtio-scsi-pci,id=scsi1,bus=pci.8 -device scsi-hd,drive=drive_data_disk,bus=scsi1.0,id=data_disk \


-device virtio-serial-pci,id=virtio-serial1,max_ports=31,bus=pci.6 \
-chardev socket,id=channel2,path=/tmp/helloworld2,server,nowait  -device virtserialport,bus=virtio-serial1.0,chardev=channel2,name=org.qemu.guest_agent.0 \

