Bug 1894676
Summary: | storage: must list disks in order to identify an existing pool | | |
---|---|---|---|
Product: | Red Hat Enterprise Linux 8 | Reporter: | David Lehman <dlehman> |
Component: | rhel-system-roles | Assignee: | Pavel Cahyna <pcahyna> |
Status: | CLOSED ERRATA | QA Contact: | guazhang <guazhang> |
Severity: | unspecified | Docs Contact: | |
Priority: | unspecified | | |
Version: | 8.3 | CC: | cwei, djez, ovasik, pcahyna, rmeggins |
Target Milestone: | rc | Keywords: | Triaged |
Target Release: | 8.0 | Flags: | pm-rhel: mirror+ |
Hardware: | All | | |
OS: | Linux | | |
Whiteboard: | role:storage | | |
Fixed In Version: | rhel-system-roles-1.0.0-28.el8 | Doc Type: | Bug Fix |
Doc Text: | | Story Points: | --- |
Clone Of: | | Environment: | |
Last Closed: | 2021-05-18 16:02:34 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | --- |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | | |
Description David Lehman 2020-11-04 18:41:56 UTC
Hi, how do I test this bug? Could you provide the test playbook?

Hi, I have acked the bug and run the test case from upstream: https://github.com/linux-system-roles/storage/pull/59

cat tests_existing_lvm_pool.yml

```yaml
---
- hosts: all
  become: true
  vars:
    mount_location: '/opt/test1'
    volume_group_size: '6g'
    volume_size: '2g'
    pool_name: foo

  tasks:
    - include_role:
        name: linux-system-roles.storage

    - include_tasks: get_unused_disk.yml
      vars:
        min_size: "{{ volume_group_size }}"
        max_return: 1

    - name: Create one LVM logical volume under one volume group
      include_role:
        name: linux-system-roles.storage
      vars:
        storage_pools:
          - name: "{{ pool_name }}"
            disks: "{{ unused_disks }}"
            volumes:
              - name: test1
                size: "{{ volume_size }}"

    - include_tasks: verify-role-results.yml

    - name: Create another volume in the existing pool, identified only by name.
      include_role:
        name: linux-system-roles.storage
      vars:
        storage_pools:
          - name: "{{ pool_name }}"
            volumes:
              - name: newvol
                size: '2 GiB'
                fs_type: ext4
                fs_label: newvol

    - include_tasks: verify-role-results.yml

    - name: Clean up.
      include_role:
        name: linux-system-roles.storage
      vars:
        storage_pools:
          - name: "{{ pool_name }}"
            state: absent

    - include_tasks: verify-role-results.yml
```

Running this fails on the play that identifies the existing pool only by name:

```
TASK [linux-system-roles.storage : manage the pools and volumes to match the specified state] ********************************
task path: /usr/share/ansible/roles/rhel-system-roles.storage/tasks/main-blivet.yml:104
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'disks'
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1610790067.2291336-242204-36942466005033/AnsiballZ_blivet.py\", line 102, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1610790067.2291336-242204-36942466005033/AnsiballZ_blivet.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1610790067.2291336-242204-36942466005033/AnsiballZ_blivet.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.blivet', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_blivet_payload_yd6izuvk/ansible_blivet_payload.zip/ansible/modules/blivet.py\", line 1265, in <module>\n File \"/tmp/ansible_blivet_payload_yd6izuvk/ansible_blivet_payload.zip/ansible/modules/blivet.py\", line 1262, in main\n File \"/tmp/ansible_blivet_payload_yd6izuvk/ansible_blivet_payload.zip/ansible/modules/blivet.py\", line 1215, in run_module\n File \"/tmp/ansible_blivet_payload_yd6izuvk/ansible_blivet_payload.zip/ansible/modules/blivet.py\", line 947, in manage_pool\n File \"/tmp/ansible_blivet_payload_yd6izuvk/ansible_blivet_payload.zip/ansible/modules/blivet.py\", line 837, in manage\n File \"/tmp/ansible_blivet_payload_yd6izuvk/ansible_blivet_payload.zip/ansible/modules/blivet.py\", line 747, in _look_up_disks\nKeyError: 'disks'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
```

The failing package version is rhel-system-roles-1.0-23.el8.noarch.
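For context on the traceback: the module fails by indexing the optional `disks` key of the pool specification, which the second play intentionally omits. Below is a minimal, self-contained Python sketch of that failure mode and of a tolerant lookup; the dict literal mirrors the pool spec from the playbook above, but the function names are illustrative assumptions, not the actual code in blivet.py or in the eventual fix.

```python
# Hypothetical sketch only -- not the storage role's blivet.py code.
# Pool spec as produced by the second play above: the existing pool is
# identified by name and no 'disks' list is given.
pool_spec = {
    "name": "foo",
    "volumes": [{"name": "newvol", "size": "2 GiB", "fs_type": "ext4"}],
}

def look_up_disks_strict(pool):
    # Plain indexing assumes the key is always present; this is the kind of
    # access that raises KeyError: 'disks' in the traceback above.
    return pool["disks"]

def look_up_disks_tolerant(pool):
    # A tolerant lookup falls back to an empty list, so an existing pool can
    # be matched by name alone (the behavior this bug asks for).
    return pool.get("disks") or []

try:
    look_up_disks_strict(pool_spec)
except KeyError as exc:
    print(f"strict lookup fails as in the traceback: KeyError: {exc}")

print("tolerant lookup returns:", look_up_disks_tolerant(pool_spec))
```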
The test case you posted is good. I have pulled this change out of the referenced pull request for clarity. Once that pull request is final/merged, I will post a new pull request with only the changes related to this bug, which are minimal.

Hi, thanks for the update. Please post your final PR to this bug and build the fixed package, so I can test it in time.

Pull Request: https://github.com/linux-system-roles/storage/pull/201

Hi, the test passes with rhel-system-roles-1.0.0-28.el8.noarch:

ansible-playbook -vv -i host tests_existing_lvm_pool.yml

Moving to verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (rhel-system-roles bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2021:1909