Description of problem:

I want to set up PCI passthrough for one network interface on my compute node. I use the pci_passthrough_whitelist option in nova.conf to match the interface, but the interface is never selected according to nova.log. The same configuration was working on RHOP 8.0 (Liberty).

I tried several syntaxes:

pci_passthrough_whitelist = {"devname":"ens2f1","physical_network":"datacentre"}
pci_passthrough_whitelist = {"vendor_id":"8086","product_id":"10f8","address": "0000:87:00.1","physical_network":"datacentre"}

nova.compute.resource_tracker does list my PCI port:

[...] {"dev_id": "pci_0000_87_00_1", "product_id": "10f8", "dev_type": "type-PF", "numa_node": 1, "vendor_id": "8086", "label": "label_8086_10f8", "address": "0000:87:00.1"} [...]

but the result is always the same:

2017-02-01 15:41:58.720 8826 INFO nova.compute.resource_tracker [req-832b23d8-1883-4291-8035-e45b319bba4b - - - - -] Final resource view: name=g5-overcloud-compute-3.localdomain phys_ram=196482MB used_ram=2048MB phys_disk=558GB used_disk=0GB total_vcpus=20 used_vcpus=0 pci_stats=[]

Version-Release number of selected component (if applicable):
Red Hat OpenStack 9.0

How reproducible:
Easy

Steps to Reproduce:
1. Configure pci_passthrough_whitelist in nova.conf
2. Restart the nova compute service
3. Check nova.log for the result

Actual results:
nova.compute.resource_tracker [req-832b23d8-1883-4291-8035-e45b319bba4b - - - - -] Final resource view: name=g5-overcloud-compute-3.localdomain phys_ram=196482MB used_ram=2048MB phys_disk=558GB used_disk=0GB total_vcpus=20 used_vcpus=0 pci_stats=[]

Expected results:
nova.compute.resource_tracker [req-832b23d8-1883-4291-8035-e45b319bba4b - - - - -] Final resource view: name=g5-overcloud-compute-3.localdomain phys_ram=196482MB used_ram=2048MB phys_disk=558GB used_disk=0GB total_vcpus=20 used_vcpus=0 pci_stats=[PCI Information]

Additional info:
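As a quick sanity check (independent of any nova version-specific matching behaviour), both whitelist entries above are well-formed JSON, so this does not look like a syntax error in nova.conf. The snippet below is only a local illustration using oslo's jsonutils from a plain Python shell:

# Sanity check only: confirm the whitelist entries parse as JSON dicts.
# This says nothing about whether nova will actually match the device.
from oslo_serialization import jsonutils

entries = [
    '{"devname":"ens2f1","physical_network":"datacentre"}',
    '{"vendor_id":"8086","product_id":"10f8","address": "0000:87:00.1",'
    '"physical_network":"datacentre"}',
]

for entry in entries:
    print(jsonutils.loads(entry))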
A quick investigation of the PCI matching function in OSP 9.0 suggests that it may not be nova itself at fault here.

$ tox -e venv python
>>> import nova.pci.whitelist
>>> from oslo_serialization import jsonutils
>>> filter = nova.pci.whitelist.Whitelist(['[{"vendor_id":"8086","product_id":"10f8","address": "0000:87:00.1","physical_network":"datacentre"}]'])
>>> filter
<nova.pci.whitelist.Whitelist object at 0x7f5009c79c88>
>>> dev_dict = jsonutils.loads('{"dev_id": "pci_0000_87_00_1", "product_id": "10f8", "dev_type": "type-PF", "numa_node": 1, "vendor_id": "8086", "label": "label_8086_10f8", "address": "0000:87:00.1"}')
>>> dev_dict
{'product_id': '10f8', 'dev_id': 'pci_0000_87_00_1', 'numa_node': 1, 'label': 'label_8086_10f8', 'dev_type': 'type-PF', 'address': '0000:87:00.1', 'vendor_id': '8086'}
>>> filter.device_assignable(dev_dict)
True
>>> dev_dict = jsonutils.loads('{"dev_id": "pci_0000_87_00_1", "product_id": "10f8", "dev_type": "type-PF", "numa_node": 1, "vendor_id": "8087", "label": "label_8087_10f8", "address": "0000:87:00.1"}')
>>> filter.device_assignable(dev_dict)
False

Could you provide the version of libvirt used on this host?

Also, could you search for the following log messages and report their values?

Hypervisor: assignable PCI devices:
Hypervisor/Node resource view:
Could you also provide a little more context around this log line?

[...] {"dev_id": "pci_0000_87_00_1", "product_id": "10f8", "dev_type": "type-PF", "numa_node": 1, "vendor_id": "8086", "label": "label_8086_10f8", "address": "0000:87:00.1"} [...]

In fact, a copy of your log file would help significantly.
Finally, it has been suggested to me that you need to add the following to the filter, as the only device you have available is a physical function (PF):

"device_type": "type-PF"

e.g.

pci_passthrough_whitelist = {"devname":"ens2f1","physical_network":"datacentre", "device_type":"type-PF"}
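As an aside, you can double-check that 0000:87:00.1 really is a physical function by looking for the sriov_totalvfs attribute in sysfs, which only SR-IOV PFs expose. This is just a hypothetical illustration using the PCI address reported in this bug:

# Hypothetical check (address taken from this bug report): an SR-IOV PF
# exposes sriov_totalvfs in sysfs; a VF or non-SR-IOV device does not.
import os

path = "/sys/bus/pci/devices/0000:87:00.1/sriov_totalvfs"
if os.path.exists(path):
    with open(path) as f:
        print("PF, supports up to %s VFs" % f.read().strip())
else:
    print("sriov_totalvfs not present: not an SR-IOV PF")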
Created attachment 1249076 [details]
Bug-1419637-nova-compute.log
Thanks for your feedback Stephen.

[root@g5-overcloud-compute-3 nova]# libvirtd --version
libvirtd (libvirt) 2.0.0
[root@g5-overcloud-compute-3 nova]# virsh --version
2.0.0

I tried adding "device_type": "type-PF" but I get the same result.

I have just attached the nova-compute.log (debug) captured right after restarting openstack-nova-compute.
Let me know if you need anything else.
(In reply to Vincent Misson from comment #5)
> Thanks for your feedback Stephen.
>
> [root@g5-overcloud-compute-3 nova]# libvirtd --version
> libvirtd (libvirt) 2.0.0
> [root@g5-overcloud-compute-3 nova]# virsh --version
> 2.0.0
>
> I tried adding "device_type": "type-PF" but I get the same result.
>
> I have just attached the nova-compute.log (debug) captured right after
> restarting openstack-nova-compute.
> Let me know if you need anything else.

Hello Vincent,

I think you should try white-listing the device without the "devname" field, and without the "address" field either. You could try:

pci_passthrough_whitelist = {"vendor_id":"8086","product_id":"10f8", "physical_network":"datacentre", "device_type": "type-PF"}

The "devname" and "address" fields were not usable for whitelisting PFs. This was fixed recently by this patch[1], but I'm afraid the fix is not available in your version.

[1] https://review.openstack.org/#/c/363884/
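If it helps, you can test this spec locally the same way Stephen did in comment 1. Note this is only a rough sketch: the result depends on the nova tree you import, so what OSP 9's nova returns may differ from a newer checkout. It reuses the device dict reported by your resource tracker:

# Rough sketch, reusing the interactive approach from comment 1: check
# whether the suggested spec matches the PF reported by the resource
# tracker. The answer depends on the nova version being imported.
import nova.pci.whitelist
from oslo_serialization import jsonutils

spec = ('[{"vendor_id":"8086","product_id":"10f8",'
        '"physical_network":"datacentre","device_type":"type-PF"}]')
dev_dict = jsonutils.loads(
    '{"dev_id": "pci_0000_87_00_1", "product_id": "10f8", '
    '"dev_type": "type-PF", "numa_node": 1, "vendor_id": "8086", '
    '"label": "label_8086_10f8", "address": "0000:87:00.1"}')

wl = nova.pci.whitelist.Whitelist([spec])
print(wl.device_assignable(dev_dict))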
Hi Vladik, you found the problem :)

Using:

pci_passthrough_whitelist = {"vendor_id":"8086","product_id":"10f8", "physical_network":"datacentre", "device_type": "type-PF"}

I'm now able to see the port as a usable resource:

Final resource view: name=g5-overcloud-compute-3.localdomain phys_ram=196482MB used_ram=2048MB phys_disk=558GB used_disk=0GB total_vcpus=20 used_vcpus=0 pci_stats=[PciDevicePool(count=2,numa_node=1,product_id='10f8',tags={dev_type='type-PF',device_type='type-PF',physical_network='datacentre'},vendor_id='8086')]

Thanks for your help.