Bug 1848254

Summary: storage: tests_raid_volume_options.yml, max_return: 3 -> disks_needed: 3
Product: Red Hat Enterprise Linux 8
Reporter: Zhang Yi <yizhan>
Component: rhel-system-roles
Assignee: Pavel Cahyna <pcahyna>
Status: CLOSED ERRATA
QA Contact: Zhang Yi <yizhan>
Severity: low
Priority: unspecified
Version: 8.3
CC: ffan, storage-qe
Target Milestone: rc
Keywords: Rebase
Target Release: 8.3
Hardware: Unspecified
OS: Unspecified
Whiteboard: role:storage
Fixed In Version: rhel-system-roles-1.0-18.el8
Last Closed: 2020-11-04 04:03:31 UTC
Type: Bug

Description Zhang Yi 2020-06-18 05:42:50 UTC
Description of problem:
In the storage role test tests_raid_volume_options.yml, change max_return: 3 to disks_needed: 3 when requesting unused disks.

Cloned from https://github.com/linux-system-roles/storage/issues/112

Version-Release number of selected component (if applicable):
rhel-system-roles-1.0-11.el8

How reproducible:


Steps to Reproduce:

TASK [storage : manage the pools and volumes to match the specified state] *******************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:104
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1592361950.787065-137572-241363501133969 && echo ansible-tmp-1592361950.787065-137572-241363501133969="` echo /root/.ansible/tmp/ansible-tmp-1592361950.787065-137572-241363501133969 `" ) && sleep 0'
Using module file /root/test/storage/library/blivet.py
<localhost> PUT /root/.ansible/tmp/ansible-local-1370915xok6as5/tmp3pryeh_0 TO /root/.ansible/tmp/ansible-tmp-1592361950.787065-137572-241363501133969/AnsiballZ_blivet.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1592361950.787065-137572-241363501133969/ /root/.ansible/tmp/ansible-tmp-1592361950.787065-137572-241363501133969/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1592361950.787065-137572-241363501133969/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1592361950.787065-137572-241363501133969/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
  File "/tmp/ansible_blivet_payload_h6gjcwoe/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 1150, in run_module
  File "/tmp/ansible_blivet_payload_h6gjcwoe/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 868, in manage_volume
  File "/tmp/ansible_blivet_payload_h6gjcwoe/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 353, in manage
  File "/tmp/ansible_blivet_payload_h6gjcwoe/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 527, in _create
  File "/tmp/ansible_blivet_payload_h6gjcwoe/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 482, in _process_device_numbers
fatal: [localhost]: FAILED! => {
    "actions": [],
    "changed": false,
    "crypts": [],
    "invocation": {
        "module_args": {
            "disklabel_type": null,
            "packages_only": false,
            "pools": [],
            "safe_mode": false,
            "use_partitions": true,
            "volumes": [
                {
                    "disks": [
                        "sdj",
                        "sdk"
                    ],
                    "encryption": false,
                    "encryption_cipher": null,
                    "encryption_key_file": null,
                    "encryption_key_size": null,
                    "encryption_luks_version": null,
                    "encryption_passphrase": null,
                    "fs_create_options": "",
                    "fs_label": "",
                    "fs_overwrite_existing": true,
                    "fs_type": "xfs",
                    "mount_check": 0,
                    "mount_device_identifier": "uuid",
                    "mount_options": "defaults",
                    "mount_passno": 0,
                    "mount_point": "/opt/test1",
                    "name": "test1",
                    "raid_chunk_size": null,
                    "raid_device_count": 2,
                    "raid_level": "raid1",
                    "raid_metadata_version": "1.0",
                    "raid_spare_count": 1,
                    "size": 0,
                    "state": "present",
                    "type": "raid"
                }
            ]
        }
    },
    "leaves": [],
    "mounts": [],
    "msg": "failed to set up volume 'test1': cannot create RAID with 2 members (2 active and 1 spare)",
    "packages": [],
    "pools": [],
    "volumes": []
}

PLAY RECAP ***********************************************************************************************************************************************************************************************************************************
localhost                  : ok=31   changed=0    unreachable=0    failed=1    skipped=19   rescued=0    ignored=0 

Actual results:

The "manage the pools and volumes to match the specified state" task fails with "failed to set up volume 'test1': cannot create RAID with 2 members (2 active and 1 spare)" because get_unused_disk returned only two disks (sdj, sdk).

Expected results:

The test requests the three disks that the RAID1-with-spare volume actually needs, and volume creation succeeds.

Additional info:

This test case needs 3 disks for the RAID testing (raid_device_count: 2 plus raid_spare_count: 1).
If get_unused_disk returns fewer than 3 disks, the next task fails with the error above.

So we'd better change the test to request disks_needed: 3, and we can add another case that covers the failure scenario of creating this RAID with only 2 disks; a sketch of both is below.
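
For reference, a minimal sketch of what the change and the extra negative case could look like in the test playbook. This is illustrative only: it assumes the tests' get_unused_disk.yml helper accepts a disks_needed variable and registers the chosen disks in unused_disks, and the exact variable, task, and role names may differ from the real test suite.

# Request all three disks the volume needs (2 active members + 1 spare)
- include_tasks: get_unused_disk.yml
  vars:
    disks_needed: 3    # was max_return: 3, which can return fewer disks than needed

# Suggested additional case: creating the RAID with only 2 disks must fail cleanly
- name: Try to create a RAID1 volume with a spare on only two disks (expect failure)
  block:
    - include_role:
        name: linux-system-roles.storage
      vars:
        storage_volumes:
          - name: test1
            type: raid
            raid_level: raid1
            raid_device_count: 2
            raid_spare_count: 1
            disks: "{{ unused_disks[:2] }}"   # deliberately one disk short
            mount_point: /opt/test1
    - name: Unreachable task - the role should have failed above
      fail:
        msg: creating the RAID volume with too few disks did not fail
  rescue:
    - name: Verify the expected error was reported
      assert:
        that: "'cannot create RAID' in ansible_failed_result.msg"

With max_return the helper can silently hand back only the two disks that happen to be free (sdj and sdk in the log above) and the failure then surfaces deep inside the role; requesting disks_needed: 3 makes the prerequisite explicit up front.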

Comment 1 Zhang Yi 2020-08-21 14:45:41 UTC
Upstream fix merged.

Comment 7 errata-xmlrpc 2020-11-04 04:03:31 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (rhel-system-roles bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:4809