Bug 1477852
| Field | Value |
|---|---|
| Summary | Can't dump physical pci device from vmware guest by virsh |
| Product | Red Hat Enterprise Linux Advanced Virtualization |
| Reporter | mxie <mxie> |
| Component | libvirt |
| Assignee | Virtualization Maintenance <virt-maint> |
| Status | CLOSED WONTFIX |
| QA Contact | mxie <mxie> |
| Severity | low |
| Priority | low |
| Version | 8.0 |
| CC | dyuan, jsuchane, juzhou, mxie, mzhan, ptoscano, tzheng, xiaodwan, xuzhang, yalzhang |
| Target Milestone | rc |
| Keywords | Triaged |
| Target Release | --- |
| Hardware | x86_64 |
| OS | Unspecified |
| Doc Type | If docs needed, set a value |
| Last Closed | 2021-01-15 07:40:38 UTC |
| Type | Bug |
| Bug Depends On | 1472719 |
Description

mxie@redhat.com 2017-08-03 05:52:45 UTC

Created attachment 1308545 [details]
vmware-guest-pci-device-2

Created attachment 1308546 [details]
vmware-guest-pci-device-1
Old bug... would it be possible to get the .vmx file of the guest?

Sorry for the late reply. Reproduced the bug with the latest versions:
libvirt-4.5.0-6.el7.x86_64
qemu-kvm-rhev-2.12.0-9.el7.x86_64

Steps:

1. Prepare a guest on ESXi 5.5 which has a physical PCI device; please refer to the screenshot "guest-esxi5.5-pci".

2. Use virsh to dump the guest's XML; there is no info about the physical PCI device:

```
# virsh -c vpx://root@10.73.75.182/data/10.73.3.19/?no_verify=1
Enter root's password for 10.73.75.182:
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # dumpxml esx5.5-rhel7.4-x86_64-passthru
<domain type='vmware' xmlns:vmware='http://libvirt.org/schemas/domain/vmware/1.0'>
  <name>esx5.5-rhel7.4-x86_64-passthru</name>
  <uuid>42377aab-a6cb-fada-6dd1-0df758ba3439</uuid>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <memtune>
    <min_guarantee unit='KiB'>2097152</min_guarantee>
  </memtune>
  <vcpu placement='static'>1</vcpu>
  <cputune>
    <shares>1000</shares>
  </cputune>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <disk type='file' device='disk'>
      <source file='[ESX5.5] esx5.5-rhel7.4-x86_64-passthru/esx5.5-rhel7.4-x86_64-passthru.vmdk'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='scsi' index='0' model='vmpvscsi'/>
    <interface type='bridge'>
      <mac address='00:50:56:b7:7e:22'/>
      <source bridge='VM Network'/>
      <model type='vmxnet3'/>
    </interface>
    <video>
      <model type='vmvga' vram='8192' primary='yes'/>
    </video>
  </devices>
  <vmware:datacenterpath>data</vmware:datacenterpath>
  <vmware:moref>vm-427</vmware:moref>
</domain>
```

3. Check the guest's .vmx file as below; note the pciPassthru0.* entries, which have no counterpart in the dumped XML:

```
# cat esx5.5-rhel7.4-x86_64-passthru.vmx
.encoding = "UTF-8"
config.version = "8"
virtualHW.version = "10"
vmci0.present = "TRUE"
displayName = "esx5.5-rhel7.4-x86_64-passthru"
extendedConfigFile = "esx5.5-rhel7.4-x86_64-passthru.vmxf"
svga.vramSize = "8388608"
memSize = "2048"
sched.cpu.units = "mhz"
sched.cpu.latencySensitivity = "normal"
tools.upgrade.policy = "manual"
scsi0.virtualDev = "pvscsi"
scsi0.present = "TRUE"
vmci.filter.enable = "TRUE"
ide1:0.startConnected = "FALSE"
ide1:0.deviceType = "atapi-cdrom"
ide1:0.clientDevice = "TRUE"
ide1:0.fileName = "emptyBackingString"
ide1:0.present = "TRUE"
scsi0:0.deviceType = "scsi-hardDisk"
scsi0:0.fileName = "esx5.5-rhel7.4-x86_64-passthru.vmdk"
sched.scsi0:0.shares = "normal"
sched.scsi0:0.throughputCap = "off"
scsi0:0.present = "TRUE"
floppy0.startConnected = "FALSE"
floppy0.clientDevice = "TRUE"
floppy0.fileName = "vmware-null-remote-floppy"
ethernet0.virtualDev = "vmxnet3"
ethernet0.networkName = "VM Network"
ethernet0.addressType = "vpx"
ethernet0.generatedAddress = "00:50:56:b7:7e:22"
ethernet0.present = "TRUE"
guestOS = "rhel7-64"
toolScripts.afterPowerOn = "TRUE"
toolScripts.afterResume = "TRUE"
toolScripts.beforeSuspend = "TRUE"
toolScripts.beforePowerOff = "TRUE"
tools.syncTime = "FALSE"
tools.guest.desktop.autolock = "FALSE"
uuid.bios = "42 37 7a ab a6 cb fa da-6d d1 0d f7 58 ba 34 39"
vc.uuid = "50 37 d7 06 fa 72 e8 21-84 be 9c dd 16 a9 5f 3b"
nvram = "esx5.5-rhel7.4-x86_64-passthru.nvram"
pciBridge0.present = "TRUE"
svga.present = "TRUE"
pciBridge4.present = "TRUE"
pciBridge4.virtualDev = "pcieRootPort"
pciBridge4.functions = "8"
pciBridge5.present = "TRUE"
pciBridge5.virtualDev = "pcieRootPort"
pciBridge5.functions = "8"
pciBridge6.present = "TRUE"
pciBridge6.virtualDev = "pcieRootPort"
pciBridge6.functions = "8"
pciBridge7.present = "TRUE"
pciBridge7.virtualDev = "pcieRootPort"
pciBridge7.functions = "8"
hpet0.present = "TRUE"
virtualHW.productCompatibility = "hosted"
ethernet0.pciSlotNumber = "192"
pciBridge0.pciSlotNumber = "17"
pciBridge4.pciSlotNumber = "21"
pciBridge5.pciSlotNumber = "22"
pciBridge6.pciSlotNumber = "23"
pciBridge7.pciSlotNumber = "24"
replay.supported = "FALSE"
scsi0.pciSlotNumber = "160"
scsi0.sasWWID = "50 05 05 6e 8d e9 2d 10"
softPowerOff = "TRUE"
vmci0.pciSlotNumber = "32"
vmotion.checkpointFBSize = "8388608"
pciPassthru0.deviceId = "0x1521"
pciPassthru0.id = "02:00.1"
pciPassthru0.systemId = "559ce51f-08f9-eeda-a53c-2c59e54291e4"
pciPassthru0.vendorId = "0x8086"
migrate.hostlog = "esx5.5-rhel7.4-x86_64-passthru-42d5d586.hlog"
sched.cpu.min = "0"
sched.cpu.shares = "normal"
sched.mem.min = "2048"
sched.mem.minSize = "2048"
sched.mem.shares = "normal"
pciPassthru0.present = "TRUE"
```

Created attachment 1474167 [details]
guest-esxi5.5-pci.png
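The pciPassthru0.* keys in the .vmx above carry the PCI address of the assigned device, yet nothing corresponding appears in the dumped domain XML. A minimal sketch of the mapping the VMware driver could perform, assuming the flat `key = "value"` .vmx format shown above; the `<hostdev>` rendering mirrors what libvirt uses for PCI passthrough on other drivers and is illustrative only, not output libvirt currently produces:

```python
import re

def parse_vmx(text):
    """Parse the flat key = "value" lines of a .vmx file into a dict."""
    cfg = {}
    for line in text.splitlines():
        m = re.match(r'\s*([\w.:]+)\s*=\s*"(.*)"\s*$', line)
        if m:
            cfg[m.group(1)] = m.group(2)
    return cfg

def passthru_to_hostdev(cfg, index=0):
    """Render a pciPassthruN entry as a libvirt-style <hostdev> element.

    Returns None when the device is absent. The pciPassthruN.id key holds
    the host PCI address as "bus:slot.function" (e.g. "02:00.1").
    """
    prefix = f"pciPassthru{index}"
    if cfg.get(f"{prefix}.present") != "TRUE":
        return None
    bus, rest = cfg[f"{prefix}.id"].split(":")
    slot, func = rest.split(".")
    return (
        "<hostdev mode='subsystem' type='pci'>\n"
        "  <source>\n"
        f"    <address domain='0x0000' bus='0x{bus}' "
        f"slot='0x{slot}' function='0x{func}'/>\n"
        "  </source>\n"
        "</hostdev>"
    )

# Excerpt of the .vmx shown above (hypothetical trimmed input for the demo).
vmx = '''
pciPassthru0.deviceId = "0x1521"
pciPassthru0.id = "02:00.1"
pciPassthru0.vendorId = "0x8086"
pciPassthru0.present = "TRUE"
'''

print(passthru_to_hostdev(parse_vmx(vmx)))
```

For the device in this bug (vendor 0x8086, device 0x1521, an Intel I350 NIC at 02:00.1) this would yield a hostdev source address of bus 0x02, slot 0x00, function 0x1.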
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.