Bug 2044119

Summary: [RHEL9] _storage_test_pool_pvs get wrong data type in test-verify-pool-members.yml
Product: Red Hat Enterprise Linux 9
Reporter: guazhang <guazhang>
Component: rhel-system-roles
Assignee: Rich Megginson <rmeggins>
Status: CLOSED ERRATA
QA Contact: Jakub Haruda <jharuda>
Severity: unspecified
Docs Contact: Gabi Fialová <gfialova>
Priority: unspecified
Version: 9.0
CC: czhong, gfialova, jharuda, nhosoi, pkettman, rmeggins, spetrosi
Target Milestone: rc
Keywords: Triaged
Target Release: 9.1
Hardware: Unspecified
OS: Unspecified
Whiteboard: role:storage
Fixed In Version: rhel-system-roles-1.19.3-1.el9
Doc Type: No Doc Update
Last Closed: 2022-11-15 10:22:57 UTC
Type: Bug

Description guazhang@redhat.com 2022-01-24 02:13:28 UTC
Description of problem:
After installing the upstream Ansible version and running the storage role tests, the variable "_storage_test_pool_pvs" ends up with the wrong data type. Please help check.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. git clone https://github.com/ansible/ansible.git 
2. python3 setup.py install 
3. echo "localhost  ansible_connection=local" > host
4. ansible-playbook -vv -i host tests_create_lvm_cache_then_remove.yml

Actual results:
The playbook fails with the error shown in Additional info.

Expected results:
The playbook completes without errors.

Additional info:




TASK [Check the type of each PV] *******************************************************************************************************
task path: /usr/share/ansible/roles/rhel-system-roles.storage/tests/test-verify-pool-members.yml:46
fatal: [localhost]: FAILED! => {"msg": "Invalid data passed to 'loop', it requires a list, got this instead: [] + [ '/dev/nvme1n1p1' ] + [ '/dev/nvme4n1p1' ]. Hint: If you passed a list/dict of just one element, try adding wantlist=True to your lookup invocation or use q/query instead of lookup."}

PLAY RECAP *****************************************************************************************************************************
localhost                  : ok=52   changed=2    unreachable=0    failed=1    skipped=29   rescued=0    ignored=0   
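
For reference, one way to confirm what the loop actually receives is to print the variable and its type just before the failing task. A minimal diagnostic sketch (not part of the role), placed in test-verify-pool-members.yml ahead of the "Check the type of each PV" task:

- name: Show what the loop receives (diagnostic only)
  debug:
    msg: "value={{ _storage_test_pool_pvs }} type={{ _storage_test_pool_pvs | type_debug }}"

With the development ansible-core shown below, this is expected to report a string (the literal text "[] + [ '/dev/nvme1n1p1' ] + [ '/dev/nvme4n1p1' ]" from the error message) rather than a list.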

[root@storageqe-70 tests]# lsblk
NAME                           MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                              8:0    0   223G  0 disk 
├─sda1                           8:1    0   600M  0 part /boot/efi
├─sda2                           8:2    0     1G  0 part /boot
└─sda3                           8:3    0 221.4G  0 part 
  ├─rhel_storageqe--70-root    253:0    0    70G  0 lvm  /
  ├─rhel_storageqe--70-swap    253:1    0   7.6G  0 lvm  [SWAP]
  └─rhel_storageqe--70-home    253:2    0 143.8G  0 lvm  /home
nvme1n1                        259:0    0 894.3G  0 disk 
└─nvme1n1p1                    259:7    0 894.3G  0 part 
  └─foo-test_corig             253:6    0     5G  0 lvm  
    └─foo-test                 253:3    0     5G  0 lvm  
nvme4n1                        259:1    0 894.3G  0 disk 
└─nvme4n1p1                    259:6    0 894.3G  0 part 
  ├─foo-test_cache_cpool_cdata 253:4    0     4G  0 lvm  
  │ └─foo-test                 253:3    0     5G  0 lvm  
  └─foo-test_cache_cpool_cmeta 253:5    0     8M  0 lvm  
    └─foo-test                 253:3    0     5G  0 lvm  
nvme2n1                        259:2    0 894.3G  0 disk 
nvme3n1                        259:3    0 894.3G  0 disk 
nvme0n1                        259:4    0 894.3G  0 disk 
[root@storageqe-70 tests]# lsscsi 
[0:2:0:0]    disk    DELL     PERC H330 Mini   4.30  /dev/sda 
[N:0:1:1]    disk    Dell Express Flash CD5 960G SFF__1         /dev/nvme0n1
[N:1:4:1]    disk    Samsung SSD 983 DCT 960GB__1               /dev/nvme1n1
[N:2:4:1]    disk    Samsung SSD 983 DCT 960GB__1               /dev/nvme2n1
[N:3:1:1]    disk    Dell Express Flash CD5 960G SFF__1         /dev/nvme3n1
[N:4:4:1]    disk    Samsung SSD 983 DCT 960GB__1               /dev/nvme4n1


Upstream ansible version that hits the error:
# ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the
Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any
point.
ansible [core 2.13.0.dev0]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.9/site-packages/ansible_core-2.12.1-py3.9.egg/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible
  python version = 3.9.9 (main, Jan  8 2022, 00:00:00) [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)]
  jinja version = 3.0.3
  libyaml = True


The error is not hit with the released ansible-core 2.12.1:
# ansible --version
ansible [core 2.12.1]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible
  python version = 3.9.9 (main, Jan  8 2022, 00:00:00) [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)]
  jinja version = 2.11.3
  libyaml = True





The reason I ask is that this bug looks to have been introduced by this commit: https://github.com/linux-system-roles/storage/commit/1c4b709cd8aa0fc4fb19834260b1edafa3c58899

commit 1c4b709cd8aa0fc4fb19834260b1edafa3c58899
Author: David Lehman <dlehman>
Date:   Tue Jun 9 14:19:12 2020 -0400

    Add validation of pool members.
...

- set_fact:
    _storage_test_pool_pvs: "{{ _storage_test_pool_pvs }} + [ '{{ pv_paths.results[idx].device }}' ]"
  loop: "{{ _storage_test_pool_pvs_lvm }}"
  loop_control:
    index_var: idx
  when: storage_test_pool.type == 'lvm'

This Jinja code is not correct - it should be:

    _storage_test_pool_pvs: "{{ _storage_test_pool_pvs + [pv_paths.results[idx].device] }}"
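
In context, the corrected task would look roughly like this (a sketch of the fix, not necessarily the exact upstream patch):

- set_fact:
    _storage_test_pool_pvs: "{{ _storage_test_pool_pvs + [pv_paths.results[idx].device] }}"
  loop: "{{ _storage_test_pool_pvs_lvm }}"
  loop_control:
    index_var: idx
  when: storage_test_pool.type == 'lvm'

Doing the concatenation inside the Jinja expression keeps the value a native list. The original form renders the list into a string and appends "+ [ ... ]" as literal text, which only worked as long as Ansible re-evaluated such strings back into lists - something ansible-core 2.13 no longer does (2.12.1 still accepted it, per the version comparison above).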



https://bugzilla.redhat.com/show_bug.cgi?id=2016517

Comment 1 Rich Megginson 2022-01-27 22:55:27 UTC
we will need this fix for ansible-core 2.13 support

Comment 2 Rich Megginson 2022-06-23 15:38:19 UTC
*** Bug 2100368 has been marked as a duplicate of this bug. ***

Comment 6 Rich Megginson 2022-07-05 13:46:33 UTC
(In reply to guazhang from comment #5)
> Hi,
> 
> Hit a failure, please have a look
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=2103800
> https://beaker.engineering.redhat.com/recipes/12233981#task146958047
> https://beaker.engineering.redhat.com/jobs/6781166

The only failure I see is the one for the new BZ you created - https://bugzilla.redhat.com/show_bug.cgi?id=2103800

I don't see the failure related to "Invalid data passed" - so I think this BZ can be verified

Comment 13 errata-xmlrpc 2022-11-15 10:22:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (rhel-system-roles bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2022:8117