Bug 2088398
| Summary: | DDP package playbook support for Intel Ethernet 800 series adapter using director | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Vijayalakshmi Candappa <vcandapp> |
| Component: | tripleo-ansible | Assignee: | Vijayalakshmi Candappa <vcandapp> |
| Status: | CLOSED ERRATA | QA Contact: | Miguel Angel Nieto <mnietoji> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 17.0 (Wallaby) | CC: | astillma, hakhande, jamsmith, jschluet, kgilliga, markgilliard13, mgeary, oblaut, spower |
| Target Milestone: | ga | Keywords: | Bugfix, Triaged |
| Target Release: | 17.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | tripleo-ansible-3.3.1-0.20220720020863.fa5422f.el9ost | Doc Type: | Enhancement |
| Doc Text: |
In NFV deployments, you can no longer use heat templates to configure Dynamic Device Personalization (DDP). Instead, you can now configure DDP packages during RHOSP node provisioning with the custom Ansible playbook `/usr/share/ansible/tripleo-playbooks/cli-overcloud-kernel-ddp-pkg.yaml`. You can use the following `baremetal_deployment.yaml` file as an example:
----
- name: ComputeSriovOffload
  count: 1
  instances:
    - hostname: computehwoffload
      name: computea
  defaults:
    networks:
      - network: internal_api
        subnet: internal_api_subnet
      - network: tenant
        subnet: tenant_subnet
      - network: storage
        subnet: storage_subnet
    network_config:
      template: /home/stack/ospd-17.0/nic-configs/computesriov.yaml
  ansible_playbooks:
    - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-kernel-ddp-pkg.yaml
      extra_vars:
        ddp_package: 'ddp'
    - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-kernelargs.yaml
      extra_vars:
        kernel_args: 'default_hugepagesz=1GB hugepagesz=1G hugepages=64 iommu=pt intel_iommu=on tsx=off isolcpus=2-19,22-39'
        reboot_wait_timeout: 900
        tuned_profile: 'cpu-partitioning'
        tuned_isolated_cores: '2-19,22-39'
    - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-openvswitch-dpdk.yaml
      extra_vars:
        memory_channels: '4'
        lcore: '0,20,1,21'
        pmd: '2,3'
        socket_mem: '4096,1024'
        disable_emc: false
        enable_tso: false
        revalidator: ''
        handler: ''
        pmd_auto_lb: false
        pmd_load_threshold: ''
        pmd_improvement_threshold: ''
        pmd_rebal_interval: ''
        nova_postcopy: true
----
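The playbooks listed under `ansible_playbooks` run during node provisioning. As an illustrative sketch only (the stack name, output path, and option set below are assumptions for this example, not part of this enhancement), provisioning with this file typically looks like the following:
----
# Illustrative example; stack name, output path, and options are placeholders.
openstack overcloud node provision \
  --stack overcloud \
  --network-config \
  --output /home/stack/overcloud-baremetal-deployed.yaml \
  /home/stack/baremetal_deployment.yaml
----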
|
Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-09-21 12:21:34 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Vijayalakshmi Candappa
2022-05-19 11:28:03 UTC
Configured:
  ansible_playbooks:
    - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-kernel-ddp-pkg.yaml
      extra_vars:
        ddp_package: 'ddp-comms'
    - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-kernelargs.yaml
      extra_vars:
        .....
After provisioning completes, I connect to the compute node and run:
[root@computeovsdpdksriov-r740 heat-admin]# dmesg | grep -i ddp
[ 141.296418] ice 0000:3b:00.0: The DDP package was successfully loaded: ICE COMMS Package version 1.3.20.0
[ 145.039802] ice 0000:3b:00.1: DDP package already present on device: ICE COMMS Package version 1.3.20.0
I then reboot the compute node and run the same command again:
[root@computeovsdpdksriov-r740 heat-admin]# dmesg | grep -i ddp
[ 137.921852] ice 0000:3b:00.0: The DDP package was successfully loaded: ICE COMMS Package version 1.3.20.0
[ 141.916190] ice 0000:3b:00.1: DDP package already present on device: ICE COMMS Package version 1.3.20.0
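The dmesg output above shows that the COMMS DDP package stays loaded on both ports across a reboot. As an optional cross-check, which is an assumption on my part and not part of the original verification steps, the ice driver also reports the active DDP package name and version through devlink, using the PCI address from the log above:
# Optional cross-check; assumes the devlink utility (iproute2) is installed on the compute node.
devlink dev info pci/0000:3b:00.0 | grep -i 'fw.app'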
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Release of components for Red Hat OpenStack Platform 17.0 (Wallaby)), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2022:6543

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.