Bug 1986630 - rhel-system-roles test case tests_raid_pool_options.yml failed
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: rhel-system-roles
Version: 8.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: beta
Target Release: ---
Assignee: Rich Megginson
QA Contact: CS System Management SST QE
URL:
Whiteboard: role:storage
Depends On: 1987170 1987176
Blocks:
 
Reported: 2021-07-27 23:04 UTC by Zhang Yi
Modified: 2021-08-13 02:32 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-13 02:17:12 UTC
Type: Bug
Target Upstream Version:
Embargoed:



Description Zhang Yi 2021-07-27 23:04:21 UTC
Description of problem:
rhel-system-roles test case tests_raid_pool_options.yml failed 

Version-Release number of selected component (if applicable):
rhel-system-roles-1.4.1-1.el8.noarch

How reproducible:
100%

Steps to Reproduce:
1. Run the rhel-system-roles.storage test case tests_raid_pool_options.yml with rhel-system-roles-1.4.1-1.el8.

Actual results:
The storage role fails to create the RAID1 array; mdadm exits with "mdadm: specifying chunk size is forbidden for this level".

Expected results:
The test case passes and the RAID1 pool is created.

Additional info:

[root@storageqe-62 rhel-system-roles.storage]# mdadm --create /dev/md/vg1-1 --run --level=raid1 --raid-devices=2 --spare-devices=1 --metadata=1.0 --bitmap=internal --chunk=512 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm: specifying chunk size is forbidden for this level
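For reference, a minimal Python sketch (a hypothetical helper, not mdadm's or blivet's actual code) of the behavior change behind this error: newer mdadm refuses --chunk for levels where chunk size is meaningless, rather than silently ignoring it as older versions did. The level set and function name are illustrative, and the real invocation above also passes --spare-devices and --bitmap:

```python
# Hedged sketch of mdadm's new validation: chunk size only applies to
# striped levels, so levels such as raid1 now reject --chunk outright.
LEVELS_FORBIDDING_CHUNK = {"raid1", "multipath", "container"}

def build_mdadm_create(name, level, devices, chunk_kib=None):
    """Build an mdadm --create argument list, mimicking the new check."""
    if chunk_kib is not None and level in LEVELS_FORBIDDING_CHUNK:
        # Older mdadm only warned ("chunk size ignored for this level");
        # patched mdadm errors out, which is what blivet hits here.
        raise ValueError("specifying chunk size is forbidden for this level")
    cmd = ["mdadm", "--create", f"/dev/md/{name}", "--run",
           f"--level={level}", f"--raid-devices={len(devices)}"]
    if chunk_kib is not None:
        cmd.append(f"--chunk={chunk_kib}")
    return cmd + list(devices)
```

With a striped level the chunk option passes through; with raid1 the call fails the same way the test run does.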



test log:
TASK [rhel-system-roles.storage : manage the pools and volumes to match the specified state] **************************************************
task path: /usr/share/ansible/roles/rhel-system-roles.storage/tasks/main-blivet.yml:57
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1627372925.8650928-152297-62523316850252 `" && echo ansible-tmp-1627372925.8650928-152297-62523316850252="` echo /root/.ansible/tmp/ansible-tmp-1627372925.8650928-152297-62523316850252 `" ) && sleep 0'
Using module file /usr/share/ansible/roles/rhel-system-roles.storage/library/blivet.py
<localhost> PUT /root/.ansible/tmp/ansible-local-151482v9t3q57j/tmpkwrjdrft TO /root/.ansible/tmp/ansible-tmp-1627372925.8650928-152297-62523316850252/AnsiballZ_blivet.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1627372925.8650928-152297-62523316850252/ /root/.ansible/tmp/ansible-tmp-1627372925.8650928-152297-62523316850252/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1627372925.8650928-152297-62523316850252/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1627372925.8650928-152297-62523316850252/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
  File "/tmp/ansible_blivet_payload_stckw2zc/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 1496, in run_module
  File "/usr/lib/python3.6/site-packages/blivet/actionlist.py", line 48, in wrapped_func
    return func(obj, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/actionlist.py", line 327, in process
    action.execute(callbacks)
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/deviceaction.py", line 335, in execute
    self.device.create()
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/devices/storage.py", line 467, in create
    self._create()
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/devices/md.py", line 598, in _create
    chunk_size=int(self.chunk_size))
  File "/usr/lib64/python3.6/site-packages/gi/overrides/BlockDev.py", line 1062, in wrapped
    raise transform[1](msg)
fatal: [localhost]: FAILED! => {
    "actions": [],
    "changed": false,
    "crypts": [],
    "invocation": {
        "module_args": {
            "disklabel_type": null,
            "diskvolume_mkfs_option_map": {},
            "packages_only": false,
            "pool_defaults": {
                "disks": [],
                "encryption": false,
                "encryption_cipher": null,
                "encryption_key": null,
                "encryption_key_size": null,
                "encryption_luks_version": null,
                "encryption_password": null,
                "raid_chunk_size": null,
                "raid_device_count": null,
                "raid_level": null,
                "raid_metadata_version": null,
                "raid_spare_count": null,
                "state": "present",
                "type": "lvm",
                "volumes": []
            },
            "pools": [
                {
                    "disks": [
                        "sdb",
                        "sdc",
                        "sdd"
                    ],
                    "encryption": false,
                    "encryption_cipher": null,
                    "encryption_key": null,
                    "encryption_key_size": null,
                    "encryption_luks_version": null,
                    "encryption_password": null,
                    "name": "vg1",
                    "raid_chunk_size": null,
                    "raid_device_count": 2,
                    "raid_level": "raid1",
                    "raid_metadata_version": "1.0",
                    "raid_spare_count": 1,
                    "state": "present",
                    "type": "lvm",
                    "volumes": [
                        {
                            "_device": "/dev/mapper/vg1-lv1",
                            "_mount_id": "/dev/mapper/vg1-lv1",
                            "_raw_device": "/dev/mapper/vg1-lv1",
                            "compression": null,
                            "deduplication": null,
                            "disks": [],
                            "encryption": false,
                            "encryption_cipher": null,
                            "encryption_key": null,
                            "encryption_key_size": null,
                            "encryption_luks_version": null,
                            "encryption_password": null,
                            "fs_create_options": "",
                            "fs_label": "",
                            "fs_overwrite_existing": true,
                            "fs_type": "xfs",
                            "mount_check": 0,
                            "mount_device_identifier": "uuid",
                            "mount_options": "defaults",
                            "mount_passno": 0,
                            "mount_point": "/opt/test1",
                            "name": "lv1",
                            "raid_chunk_size": null,
                            "raid_device_count": null,
                            "raid_level": null,
                            "raid_metadata_version": null,
                            "raid_spare_count": null,
                            "size": "2g",
                            "state": "present",
                            "type": "lvm",
                            "vdo_pool_size": null
                        },
                        {
                            "_device": "/dev/mapper/vg1-lv2",
                            "_mount_id": "/dev/mapper/vg1-lv2",
                            "_raw_device": "/dev/mapper/vg1-lv2",
                            "compression": null,
                            "deduplication": null,
                            "disks": [],
                            "encryption": false,
                            "encryption_cipher": null,
                            "encryption_key": null,
                            "encryption_key_size": null,
                            "encryption_luks_version": null,
                            "encryption_password": null,
                            "fs_create_options": "",
                            "fs_label": "",
                            "fs_overwrite_existing": true,
                            "fs_type": "xfs",
                            "mount_check": 0,
                            "mount_device_identifier": "uuid",
                            "mount_options": "defaults",
                            "mount_passno": 0,
                            "mount_point": "/opt/test2",
                            "name": "lv2",
                            "raid_chunk_size": null,
                            "raid_device_count": null,
                            "raid_level": null,
                            "raid_metadata_version": null,
                            "raid_spare_count": null,
                            "size": "3g",
                            "state": "present",
                            "type": "lvm",
                            "vdo_pool_size": null
                        },
                        {
                            "_device": "/dev/mapper/vg1-lv3",
                            "_mount_id": "/dev/mapper/vg1-lv3",
                            "_raw_device": "/dev/mapper/vg1-lv3",
                            "compression": null,
                            "deduplication": null,
                            "disks": [],
                            "encryption": false,
                            "encryption_cipher": null,
                            "encryption_key": null,
                            "encryption_key_size": null,
                            "encryption_luks_version": null,
                            "encryption_password": null,
                            "fs_create_options": "",
                            "fs_label": "",
                            "fs_overwrite_existing": true,
                            "fs_type": "xfs",
                            "mount_check": 0,
                            "mount_device_identifier": "uuid",
                            "mount_options": "defaults",
                            "mount_passno": 0,
                            "mount_point": "/opt/test3",
                            "name": "lv3",
                            "raid_chunk_size": null,
                            "raid_device_count": null,
                            "raid_level": null,
                            "raid_metadata_version": null,
                            "raid_spare_count": null,
                            "size": "3g",
                            "state": "present",
                            "type": "lvm",
                            "vdo_pool_size": null
                        }
                    ]
                }
            ],
            "safe_mode": false,
            "use_partitions": true,
            "volume_defaults": {
                "compression": null,
                "deduplication": null,
                "disks": [],
                "encryption": false,
                "encryption_cipher": null,
                "encryption_key": null,
                "encryption_key_size": null,
                "encryption_luks_version": null,
                "encryption_password": null,
                "fs_create_options": "",
                "fs_label": "",
                "fs_overwrite_existing": true,
                "fs_type": "xfs",
                "mount_check": 0,
                "mount_device_identifier": "uuid",
                "mount_options": "defaults",
                "mount_passno": 0,
                "mount_point": "",
                "raid_chunk_size": null,
                "raid_device_count": null,
                "raid_level": null,
                "raid_metadata_version": null,
                "raid_spare_count": null,
                "size": 0,
                "state": "present",
                "type": "lvm",
                "vdo_pool_size": null
            },
            "volumes": []
        }
    },
    "leaves": [],
    "mounts": [],
    "msg": "Failed to commit changes to disk: Process reported exit code 1: mdadm: specifying chunk size is forbidden for this level\n",
    "packages": [
        "mdadm",
        "lvm2",
        "dosfstools",
        "e2fsprogs",
        "xfsprogs"
    ],
    "pools": [],
    "volumes": []
}

TASK [rhel-system-roles.storage : failed message] *********************************************************************************************
task path: /usr/share/ansible/roles/rhel-system-roles.storage/tasks/main-blivet.yml:71
fatal: [localhost]: FAILED! => {
    "changed": false,
    "msg": {
        "actions": [],
        "changed": false,
        "crypts": [],
        "exception": "  File \"/tmp/ansible_blivet_payload_stckw2zc/ansible_blivet_payload.zip/ansible/modules/blivet.py\", line 1496, in run_module\n  File \"/usr/lib/python3.6/site-packages/blivet/actionlist.py\", line 48, in wrapped_func\n    return func(obj, *args, **kwargs)\n  File \"/usr/lib/python3.6/site-packages/blivet/actionlist.py\", line 327, in process\n    action.execute(callbacks)\n  File \"/usr/lib/python3.6/site-packages/blivet/threads.py\", line 53, in run_with_lock\n    return m(*args, **kwargs)\n  File \"/usr/lib/python3.6/site-packages/blivet/deviceaction.py\", line 335, in execute\n    self.device.create()\n  File \"/usr/lib/python3.6/site-packages/blivet/threads.py\", line 53, in run_with_lock\n    return m(*args, **kwargs)\n  File \"/usr/lib/python3.6/site-packages/blivet/devices/storage.py\", line 467, in create\n    self._create()\n  File \"/usr/lib/python3.6/site-packages/blivet/threads.py\", line 53, in run_with_lock\n    return m(*args, **kwargs)\n  File \"/usr/lib/python3.6/site-packages/blivet/devices/md.py\", line 598, in _create\n    chunk_size=int(self.chunk_size))\n  File \"/usr/lib64/python3.6/site-packages/gi/overrides/BlockDev.py\", line 1062, in wrapped\n    raise transform[1](msg)\n",
        "failed": true,
        "invocation": {
            "module_args": {
                "disklabel_type": null,
                "diskvolume_mkfs_option_map": {},
                "packages_only": false,
                "pool_defaults": {
                    "disks": [],
                    "encryption": false,
                    "encryption_cipher": null,
                    "encryption_key": null,
                    "encryption_key_size": null,
                    "encryption_luks_version": null,
                    "encryption_password": null,
                    "raid_chunk_size": null,
                    "raid_device_count": null,
                    "raid_level": null,
                    "raid_metadata_version": null,
                    "raid_spare_count": null,
                    "state": "present",
                    "type": "lvm",
                    "volumes": []
                },
                "pools": [
                    {
                        "disks": [
                            "sdb",
                            "sdc",
                            "sdd"
                        ],
                        "encryption": false,
                        "encryption_cipher": null,
                        "encryption_key": null,
                        "encryption_key_size": null,
                        "encryption_luks_version": null,
                        "encryption_password": null,
                        "name": "vg1",
                        "raid_chunk_size": null,
                        "raid_device_count": 2,
                        "raid_level": "raid1",
                        "raid_metadata_version": "1.0",
                        "raid_spare_count": 1,
                        "state": "present",
                        "type": "lvm",
                        "volumes": [
                            {
                                "_device": "/dev/mapper/vg1-lv1",
                                "_mount_id": "/dev/mapper/vg1-lv1",
                                "_raw_device": "/dev/mapper/vg1-lv1",
                                "compression": null,
                                "deduplication": null,
                                "disks": [],
                                "encryption": false,
                                "encryption_cipher": null,
                                "encryption_key": null,
                                "encryption_key_size": null,
                                "encryption_luks_version": null,
                                "encryption_password": null,
                                "fs_create_options": "",
                                "fs_label": "",
                                "fs_overwrite_existing": true,
                                "fs_type": "xfs",
                                "mount_check": 0,
                                "mount_device_identifier": "uuid",
                                "mount_options": "defaults",
                                "mount_passno": 0,
                                "mount_point": "/opt/test1",
                                "name": "lv1",
                                "raid_chunk_size": null,
                                "raid_device_count": null,
                                "raid_level": null,
                                "raid_metadata_version": null,
                                "raid_spare_count": null,
                                "size": "2g",
                                "state": "present",
                                "type": "lvm",
                                "vdo_pool_size": null
                            },
                            {
                                "_device": "/dev/mapper/vg1-lv2",
                                "_mount_id": "/dev/mapper/vg1-lv2",
                                "_raw_device": "/dev/mapper/vg1-lv2",
                                "compression": null,
                                "deduplication": null,
                                "disks": [],
                                "encryption": false,
                                "encryption_cipher": null,
                                "encryption_key": null,
                                "encryption_key_size": null,
                                "encryption_luks_version": null,
                                "encryption_password": null,
                                "fs_create_options": "",
                                "fs_label": "",
                                "fs_overwrite_existing": true,
                                "fs_type": "xfs",
                                "mount_check": 0,
                                "mount_device_identifier": "uuid",
                                "mount_options": "defaults",
                                "mount_passno": 0,
                                "mount_point": "/opt/test2",
                                "name": "lv2",
                                "raid_chunk_size": null,
                                "raid_device_count": null,
                                "raid_level": null,
                                "raid_metadata_version": null,
                                "raid_spare_count": null,
                                "size": "3g",
                                "state": "present",
                                "type": "lvm",
                                "vdo_pool_size": null
                            },
                            {
                                "_device": "/dev/mapper/vg1-lv3",
                                "_mount_id": "/dev/mapper/vg1-lv3",
                                "_raw_device": "/dev/mapper/vg1-lv3",
                                "compression": null,
                                "deduplication": null,
                                "disks": [],
                                "encryption": false,
                                "encryption_cipher": null,
                                "encryption_key": null,
                                "encryption_key_size": null,
                                "encryption_luks_version": null,
                                "encryption_password": null,
                                "fs_create_options": "",
                                "fs_label": "",
                                "fs_overwrite_existing": true,
                                "fs_type": "xfs",
                                "mount_check": 0,
                                "mount_device_identifier": "uuid",
                                "mount_options": "defaults",
                                "mount_passno": 0,
                                "mount_point": "/opt/test3",
                                "name": "lv3",
                                "raid_chunk_size": null,
                                "raid_device_count": null,
                                "raid_level": null,
                                "raid_metadata_version": null,
                                "raid_spare_count": null,
                                "size": "3g",
                                "state": "present",
                                "type": "lvm",
                                "vdo_pool_size": null
                            }
                        ]
                    }
                ],
                "safe_mode": false,
                "use_partitions": true,
                "volume_defaults": {
                    "compression": null,
                    "deduplication": null,
                    "disks": [],
                    "encryption": false,
                    "encryption_cipher": null,
                    "encryption_key": null,
                    "encryption_key_size": null,
                    "encryption_luks_version": null,
                    "encryption_password": null,
                    "fs_create_options": "",
                    "fs_label": "",
                    "fs_overwrite_existing": true,
                    "fs_type": "xfs",
                    "mount_check": 0,
                    "mount_device_identifier": "uuid",
                    "mount_options": "defaults",
                    "mount_passno": 0,
                    "mount_point": "",
                    "raid_chunk_size": null,
                    "raid_device_count": null,
                    "raid_level": null,
                    "raid_metadata_version": null,
                    "raid_spare_count": null,
                    "size": 0,
                    "state": "present",
                    "type": "lvm",
                    "vdo_pool_size": null
                },
                "volumes": []
            }
        },
        "leaves": [],
        "mounts": [],
        "msg": "Failed to commit changes to disk: Process reported exit code 1: mdadm: specifying chunk size is forbidden for this level\n",
        "packages": [
            "mdadm",
            "lvm2",
            "dosfstools",
            "e2fsprogs",
            "xfsprogs"
        ],
        "pools": [],
        "volumes": []
    }
}

TASK [rhel-system-roles.storage : Unmask the systemd cryptsetup services] *********************************************************************
task path: /usr/share/ansible/roles/rhel-system-roles.storage/tasks/main-blivet.yml:75

PLAY RECAP ************************************************************************************************************************************
localhost                  : ok=34   changed=0    unreachable=0    failed=1    skipped=18   rescued=1    ignored=0   

2021-07-27 04:02:17,027 INFO blivet/MainThread: executing action: [212] create device mdarray vg1-1 (id 208)
2021-07-27 04:02:17,032 DEBUG blivet/MainThread:                MDRaidArrayDevice.create: vg1-1 ; status: False ;
2021-07-27 04:02:17,036 DEBUG blivet/MainThread:                    MDRaidArrayDevice.setup_parents: name: vg1-1 ; orig: False ;
2021-07-27 04:02:17,040 DEBUG blivet/MainThread:                      PartitionDevice.setup: sdb1 ; orig: False ; status: True ; controllable: True ;
2021-07-27 04:02:17,043 DEBUG blivet/MainThread:                      MDRaidMember.setup: device: /dev/sdb1 ; type: mdmember ; status: False ;
2021-07-27 04:02:17,047 DEBUG blivet/MainThread:                      PartitionDevice.setup: sdc1 ; orig: False ; status: True ; controllable: True ;
2021-07-27 04:02:17,051 DEBUG blivet/MainThread:                      MDRaidMember.setup: device: /dev/sdc1 ; type: mdmember ; status: False ;
2021-07-27 04:02:17,055 DEBUG blivet/MainThread:                      PartitionDevice.setup: sdd1 ; orig: False ; status: True ; controllable: True ;
2021-07-27 04:02:17,058 DEBUG blivet/MainThread:                      MDRaidMember.setup: device: /dev/sdd1 ; type: mdmember ; status: False ;
2021-07-27 04:02:17,062 DEBUG blivet/MainThread:                  MDRaidArrayDevice._create: vg1-1 ; status: False ;
2021-07-27 04:02:17,063 DEBUG blivet/MainThread: non-existent RAID raid1 size == 111.79 GiB
2021-07-27 04:02:17,064 INFO program/MainThread: Running [15] mdadm --create /dev/md/vg1-1 --run --level=raid1 --raid-devices=2 --spare-devices=1 --metadata=1.0 --bitmap=internal --chunk=512 /dev/sdb1 /dev/sdc1 /dev/sdd1 ...
2021-07-27 04:02:17,070 INFO program/MainThread: stdout[15]: 
2021-07-27 04:02:17,071 INFO program/MainThread: stderr[15]: mdadm: specifying chunk size is forbidden for this level

Comment 2 Rich Megginson 2021-07-28 15:15:45 UTC

*** This bug has been marked as a duplicate of bug 1917308 ***

Comment 3 Rich Megginson 2021-07-28 19:02:22 UTC
Looks like the fix in https://bugzilla.redhat.com/show_bug.cgi?id=1966712 broke mdadm in another place?

Comment 4 XiaoNi 2021-07-29 01:07:17 UTC
Hi all

RAID1 never uses a chunk size, so mdadm now rejects --chunk for RAID1 instead of ignoring it. The test case code needs to be updated accordingly.
The patch from upstream is:

commit 5b30a34aa4b5ea7a8202314c1d737ec4a481c127
Author: Mateusz Grzonka <mateusz.grzonka>
Date:   Thu Jul 15 12:25:23 2021 +0200

    Add error handling for chunk size in RAID1
    
    Print error if chunk size is set as it is not supported.
    
    Signed-off-by: Mateusz Grzonka <mateusz.grzonka>
    Signed-off-by: Jes Sorensen <jsorensen>

diff --git a/Create.c b/Create.c
index 18b5e64..f5d57f8 100644
--- a/Create.c
+++ b/Create.c
@@ -254,9 +254,8 @@ int Create(struct supertype *st, char *mddev,
        case LEVEL_MULTIPATH:
        case LEVEL_CONTAINER:
                if (s->chunk) {
-                       s->chunk = 0;
-                       if (c->verbose > 0)
-                               pr_err("chunk size ignored for this level\n");
+                       pr_err("specifying chunk size is forbidden for this level\n");
+                       return 1;
                }
                break;
        default:

Thanks
Xiao
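Given that change, the test case has to stop requesting a chunk size for RAID1 pools. A hypothetical sketch of what the pool definition in tests_raid_pool_options.yml might look like after such a fix — the variable names are taken from the module_args logged above, and the `unused_disks` variable is an assumption, not the actual committed change:

```yaml
# Hypothetical fragment: for raid_level "raid1", raid_chunk_size must
# stay unset, since mdadm now rejects --chunk for that level.
storage_pools:
  - name: vg1
    type: lvm
    disks: "{{ unused_disks }}"   # assumed test variable
    raid_level: raid1
    raid_device_count: 2
    raid_spare_count: 1
    raid_metadata_version: "1.0"
    # raid_chunk_size deliberately omitted for raid1
```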

