Bug 1848254 - storage: tests_raid_volume_options.yml, max_return: 3 -> disks_needed: 3
Summary: storage: tests_raid_volume_options.yml, max_return: 3 -> disks_needed: 3
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: rhel-system-roles
Version: 8.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: rc
Target Release: 8.3
Assignee: Pavel Cahyna
QA Contact: Zhang Yi
URL:
Whiteboard: role:storage
Depends On:
Blocks:
 
Reported: 2020-06-18 05:42 UTC by Zhang Yi
Modified: 2020-11-04 04:03 UTC
CC List: 2 users

Fixed In Version: rhel-system-roles-1.0-18.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-11-04 04:03:31 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github linux-system-roles storage issues 112 0 None closed storage: tests_raid_volume_options.yml, max_return: 3 -> disks_needed: 3 2020-11-04 16:20:57 UTC
Github linux-system-roles storage pull 157 0 None closed add 'disks_needed: 3' to tests_raid_volume_options.yml 2020-11-04 16:20:58 UTC

Description Zhang Yi 2020-06-18 05:42:50 UTC
Description of problem:
storage: tests_raid_volume_options.yml, max_return: 3 -> disks_needed: 3

Cloned from https://github.com/linux-system-roles/storage/issues/112
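
For context, the disk selection in this test presumably goes through the shared get_unused_disk helper with only an upper bound on how many disks to return, roughly like the sketch below (illustrative only; the exact task and variable names in tests_raid_volume_options.yml may differ):

# Illustrative sketch, not the literal test content:
# max_return is only an upper bound, so the play continues even if
# fewer than 3 unused disks are found on the test machine.
- include_tasks: get_unused_disk.yml
  vars:
    min_size: "{{ volume_size }}"
    max_return: 3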

Version-Release number of selected component (if applicable):
rhel-system-roles-1.0-11.el8

How reproducible:


Steps to Reproduce:

TASK [storage : manage the pools and volumes to match the specified state] *******************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:104
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1592361950.787065-137572-241363501133969 && echo ansible-tmp-1592361950.787065-137572-241363501133969="` echo /root/.ansible/tmp/ansible-tmp-1592361950.787065-137572-241363501133969 `" ) && sleep 0'
Using module file /root/test/storage/library/blivet.py
<localhost> PUT /root/.ansible/tmp/ansible-local-1370915xok6as5/tmp3pryeh_0 TO /root/.ansible/tmp/ansible-tmp-1592361950.787065-137572-241363501133969/AnsiballZ_blivet.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1592361950.787065-137572-241363501133969/ /root/.ansible/tmp/ansible-tmp-1592361950.787065-137572-241363501133969/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1592361950.787065-137572-241363501133969/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1592361950.787065-137572-241363501133969/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
  File "/tmp/ansible_blivet_payload_h6gjcwoe/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 1150, in run_module
  File "/tmp/ansible_blivet_payload_h6gjcwoe/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 868, in manage_volume
  File "/tmp/ansible_blivet_payload_h6gjcwoe/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 353, in manage
  File "/tmp/ansible_blivet_payload_h6gjcwoe/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 527, in _create
  File "/tmp/ansible_blivet_payload_h6gjcwoe/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 482, in _process_device_numbers
fatal: [localhost]: FAILED! => {
    "actions": [],
    "changed": false,
    "crypts": [],
    "invocation": {
        "module_args": {
            "disklabel_type": null,
            "packages_only": false,
            "pools": [],
            "safe_mode": false,
            "use_partitions": true,
            "volumes": [
                {
                    "disks": [
                        "sdj",
                        "sdk"
                    ],
                    "encryption": false,
                    "encryption_cipher": null,
                    "encryption_key_file": null,
                    "encryption_key_size": null,
                    "encryption_luks_version": null,
                    "encryption_passphrase": null,
                    "fs_create_options": "",
                    "fs_label": "",
                    "fs_overwrite_existing": true,
                    "fs_type": "xfs",
                    "mount_check": 0,
                    "mount_device_identifier": "uuid",
                    "mount_options": "defaults",
                    "mount_passno": 0,
                    "mount_point": "/opt/test1",
                    "name": "test1",
                    "raid_chunk_size": null,
                    "raid_device_count": 2,
                    "raid_level": "raid1",
                    "raid_metadata_version": "1.0",
                    "raid_spare_count": 1,
                    "size": 0,
                    "state": "present",
                    "type": "raid"
                }
            ]
        }
    },
    "leaves": [],
    "mounts": [],
    "msg": "failed to set up volume 'test1': cannot create RAID with 2 members (2 active and 1 spare)",
    "packages": [],
    "pools": [],
    "volumes": []
}

PLAY RECAP ***********************************************************************************************************************************************************************************************************************************
localhost                  : ok=31   changed=0    unreachable=0    failed=1    skipped=19   rescued=0    ignored=0 
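
(From the module_args above: the volume asks for raid_device_count: 2 plus raid_spare_count: 1, i.e. 2 + 1 = 3 member disks, but only two disks, sdj and sdk, were passed in, hence the "cannot create RAID with 2 members (2 active and 1 spare)" failure.)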

Actual results:


Expected results:


Additional info:

This case needs 3 disks for RAID testing (raid_device_count: 2, raid_spare_count: 1).
If get_unused_disk returns fewer than 3 disks, the next task will fail.

So I think we'd better change to disks_needed: 3, and we can add another case that tests the failure scenario of creating a RAID with only 2 disks.
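
A minimal sketch of the proposed change (based on the issue and pull request titles; the exact surrounding task in tests_raid_volume_options.yml may differ):

# Request exactly the number of disks the RAID volume needs, so the test
# can bail out early with a clear message when fewer than 3 unused disks
# are available, instead of failing inside blivet.
- include_tasks: get_unused_disk.yml
  vars:
    min_size: "{{ volume_size }}"
    disks_needed: 3   # 2 active members + 1 spare

A separate test case could then deliberately pass only 2 disks and assert that the role reports the "cannot create RAID with 2 members" error.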

Comment 1 Zhang Yi 2020-08-21 14:45:41 UTC
Upstream fix merged.

Comment 7 errata-xmlrpc 2020-11-04 04:03:31 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (rhel-system-roles bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:4809

