Bug 1419637 - pci_passthrough_whitelist does not filter the device correctly
Summary: pci_passthrough_whitelist does not filter the device correctly
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 9.0 (Mitaka)
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Eoghan Glynn
QA Contact: Prasanth Anbalagan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-02-06 15:49 UTC by Vincent Misson
Modified: 2019-09-09 15:29 UTC
CC List: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-02-14 15:20:48 UTC
Target Upstream Version:
Embargoed:


Attachments
Bug-1419637-nova-compute.log (292.82 KB, text/plain)
2017-02-10 16:05 UTC, Vincent Misson

Description Vincent Misson 2017-02-06 15:49:11 UTC
Description of problem: I want to set up PCI passthrough capability for one network interface on my compute node.
I use the pci_passthrough_whitelist option in nova.conf to match my interface, but I don't see the interface selected in nova.log. My configuration was working on RHOP 8.0 (Liberty).

I tried several syntaxes:
pci_passthrough_whitelist = {"devname":"ens2f1","physical_network":"datacentre"}

pci_passthrough_whitelist = {"vendor_id":"8086","product_id":"10f8","address": "0000:87:00.1","physical_network":"datacentre"}

nova.compute.resource_tracker does list my PCI port:
[...]
{"dev_id": "pci_0000_87_00_1", "product_id": "10f8", "dev_type": "type-PF", "numa_node": 1, "vendor_id": "8086", "label": "label_8086_10f8", "address": "0000:87:00.1"}
[...]

but with the same result: 2017-02-01 15:41:58.720 8826 INFO nova.compute.resource_tracker [req-832b23d8-1883-4291-8035-e45b319bba4b - - - - -] Final resource view: name=g5-overcloud-compute-3.localdomain phys_ram=196482MB used_ram=2048MB phys_disk=558GB used_disk=0GB total_vcpus=20 used_vcpus=0 pci_stats=[]


Version-Release number of selected component (if applicable): Red Hat OpenStack 9.0


How reproducible: Easy


Steps to Reproduce:
1. Configure pci_passthrough_whitelist in nova.conf
2. Restart nova compute service
3. Check nova.log for the result

Actual results:
nova.compute.resource_tracker [req-832b23d8-1883-4291-8035-e45b319bba4b - - - - -] Final resource view: name=g5-overcloud-compute-3.localdomain phys_ram=196482MB used_ram=2048MB phys_disk=558GB used_disk=0GB total_vcpus=20 used_vcpus=0 pci_stats=[]

Expected results:
nova.compute.resource_tracker [req-832b23d8-1883-4291-8035-e45b319bba4b - - - - -] Final resource view: name=g5-overcloud-compute-3.localdomain phys_ram=196482MB used_ram=2048MB phys_disk=558GB used_disk=0GB total_vcpus=20 used_vcpus=0 pci_stats=[PCI Information]

Additional info:

Comment 1 Stephen Finucane 2017-02-10 15:10:06 UTC
A quick investigation of the PCI matching function in OSP 9.0 suggests that it may not be nova itself that is at fault here.

    $ tox -e venv python
    >>> import nova.pci.whitelist
    >>> from oslo_serialization import jsonutils
    >>> filter = nova.pci.whitelist.Whitelist(['[{"vendor_id":"8086","product_id":"10f8","address": "0000:87:00.1","physical_network":"datacentre"}]'])
    >>> filter
    <nova.pci.whitelist.Whitelist object at 0x7f5009c79c88>
    >>> dev_dict = jsonutils.loads('{"dev_id": "pci_0000_87_00_1", "product_id": "10f8", "dev_type": "type-PF", "numa_node": 1, "vendor_id": "8086", "label": "label_8086_10f8", "address": "0000:87:00.1"}')
    >>> dev_dict
    {'product_id': '10f8', 'dev_id': 'pci_0000_87_00_1', 'numa_node': 1, 'label': 'label_8086_10f8', 'dev_type': 'type-PF', 'address': '0000:87:00.1', 'vendor_id': '8086'}
    >>> filter.device_assignable(dev_dict)
    True
    >>> dev_dict = jsonutils.loads('{"dev_id": "pci_0000_87_00_1", "product_id": "10f8", "dev_type": "type-PF", "numa_node": 1, "vendor_id": "8087", "label": "label_8087_10f8", "address": "0000:87:00.1"}')
    >>> filter.device_assignable(dev_dict)
    False
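
For anyone who wants to rerun this outside an interactive session, here is a minimal standalone sketch of the same check (my own example, not code from nova itself; it assumes a nova virtualenv, e.g. one started via "tox -e venv python", with the spec and device strings copied verbatim from the session above):

    # sketch: standalone version of the interactive check above
    from oslo_serialization import jsonutils

    import nova.pci.whitelist

    # whitelist spec and device dict copied from the session above
    SPEC = ('[{"vendor_id":"8086","product_id":"10f8",'
            '"address": "0000:87:00.1","physical_network":"datacentre"}]')
    DEV = ('{"dev_id": "pci_0000_87_00_1", "product_id": "10f8",'
           ' "dev_type": "type-PF", "numa_node": 1, "vendor_id": "8086",'
           ' "label": "label_8086_10f8", "address": "0000:87:00.1"}')

    wl = nova.pci.whitelist.Whitelist([SPEC])
    # should print True, matching the interactive result above
    print(wl.device_assignable(jsonutils.loads(DEV)))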

Could you provide the version of libvirt used on this host? Also, could you search for the following logs and report their values?

    Hypervisor: assignable PCI devices:

    Hypervisor/Node resource view:

Comment 2 Stephen Finucane 2017-02-10 15:14:45 UTC
Could you also provide a little more context to this log?

    [...]
    {"dev_id": "pci_0000_87_00_1", "product_id": "10f8", "dev_type": "type-PF", "numa_node": 1, "vendor_id": "8086", "label": "label_8086_10f8", "address": "0000:87:00.1"}
    [...]

In fact, a copy of your log file would help significantly.

Comment 3 Stephen Finucane 2017-02-10 15:31:06 UTC
Finally, it has been suggested to me that you need to add the following to the filter as the only device you have available is a physical function (PF):

    "device_type": "type-PF"

e.g.

    pci_passthrough_whitelist = {"devname":"ens2f1","physical_network":"datacentre", "device_type":"type-PF"}

Comment 4 Vincent Misson 2017-02-10 16:05:44 UTC
Created attachment 1249076 [details]
Bug-1419637-nova-compute.log

Comment 5 Vincent Misson 2017-02-10 16:06:35 UTC
Thanks for your feedback, Stephen.

[root@g5-overcloud-compute-3 nova]# libvirtd --version
libvirtd (libvirt) 2.0.0
[root@g5-overcloud-compute-3 nova]# virsh --version
2.0.0

I tried adding "device_type": "type-PF" but I get the same result.

I just added the debug nova-compute.log to the ticket, captured just after an openstack-nova-compute restart.
Let me know if you need something else.

Comment 7 Vladik Romanovsky 2017-02-14 14:09:36 UTC
(In reply to Vincent Misson from comment #5)
> Thanks for your feedback, Stephen.
> 
> [root@g5-overcloud-compute-3 nova]# libvirtd --version
> libvirtd (libvirt) 2.0.0
> [root@g5-overcloud-compute-3 nova]# virsh --version
> 2.0.0
> 
> I tried adding "device_type": "type-PF" but I get the same result.
> 
> I just added the debug nova-compute.log to the ticket, captured just after
> an openstack-nova-compute restart.
> Let me know if you need something else.

Hello Vincent,

I think you should try whitelisting the device without using the "devname" field; please do not use the "address" field either.
You could try:
pci_passthrough_whitelist = {"vendor_id":"8086","product_id":"10f8", "physical_network":"datacentre", "device_type": "type-PF"}

The use of the "devname" and "address" fields was not working for whitelisting PFs.
It has been fixed recently with this patch [1], but I'm afraid it is not available in your version.

[1] https://review.openstack.org/#/c/363884/
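
For what it's worth, the suggested spec can be sanity-checked with the same interactive approach from comment 1, swapping in the new entry (a sketch, assuming a nova virtualenv; the extra "device_type" key should be carried as a plain tag rather than blocking the match):

    >>> import nova.pci.whitelist
    >>> from oslo_serialization import jsonutils
    >>> filter = nova.pci.whitelist.Whitelist(['[{"vendor_id":"8086","product_id":"10f8","physical_network":"datacentre","device_type":"type-PF"}]'])
    >>> dev_dict = jsonutils.loads('{"dev_id": "pci_0000_87_00_1", "product_id": "10f8", "dev_type": "type-PF", "numa_node": 1, "vendor_id": "8086", "label": "label_8086_10f8", "address": "0000:87:00.1"}')
    >>> filter.device_assignable(dev_dict)
    True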

Comment 8 Vincent Misson 2017-02-14 15:20:48 UTC
Hi Vladik,

you found the problem :)

Using pci_passthrough_whitelist = {"vendor_id":"8086","product_id":"10f8", "physical_network":"datacentre", "device_type": "type-PF"} 
I'm now able to see the port as a usable resource:
Final resource view: name=g5-overcloud-compute-3.localdomain phys_ram=196482MB used_ram=2048MB phys_disk=558GB used_disk=0GB total_vcpus=20 used_vcpus=0 pci_stats=[PciDevicePool(count=2,numa_node=1,product_id='10f8',tags={dev_type='type-PF',device_type='type-PF',physical_network='datacentre'},vendor_id='8086')]

Thanks for your help.

