Description of problem:

Creating an LVM RAID volume with the storage system role doesn't offer an option to set a custom stripe size; the role uses a hard-coded default of 64K.

After a discussion with Brian Smith, this should be considered an RFE for the storage system role to support configuring the stripe size for LVM RAID volumes.

Version-Release number of selected component (if applicable):

Tested on:
- redhat.rhel_system_roles 1.16.2
- fedora.linux_system_roles 1.30.0

How reproducible:

Always, when running the role with the options below for an LVM RAID setup.

Steps to Reproduce:

Run the following playbook:

---
- name: PoC RAID0 HANA storage with storage system role
  hosts: <managed node>
  become: yes
  vars:
    unused_disks:
      - <disk1>
      - <disk2>
  tasks:
    - name: Run storage role
      vars:
        storage_safe_mode: false
        storage_use_partitions: true
        storage_pools:
          - name: hanadata_vg
            type: lvm
            disks: "{{ unused_disks }}"
            state: present
            volumes:
              - name: hanadata_lv
                size: 100%
                mount_point: /hana/data
                fs_type: xfs
                raid_disks: "{{ [unused_disks[0], unused_disks[1]] }}"
                raid_level: raid0
      ansible.builtin.include_role:
        name: fedora.linux_system_roles.storage
        #name: redhat.rhel_system_roles.storage
...

Actual results:

The stripe size is set to 64K by default:

[root@ansible3 ~]# lvs -ao +lv_full_name,devices,stripe_size | awk '/hanadata/'
  hanadata_lv            hanadata_vg rwi-aor---  199.99g hanadata_vg/hanadata_lv          hanadata_lv_rimage_0(0),hanadata_lv_rimage_1(0) 64.00k
  [hanadata_lv_rimage_0] hanadata_vg iwi-aor--- <100.00g hanadata_vg/hanadata_lv_rimage_0 /dev/sdf1(0)                                         0
  [hanadata_lv_rimage_1] hanadata_vg iwi-aor--- <100.00g hanadata_vg/hanadata_lv_rimage_1 /dev/sdg1(0)                                         0
[root@ansible3 ~]#

Expected results:

Similarly to raid_chunk_size, there should be an option to set a custom stripe size when needed.

Additional info:

The Azure documentation asks for specific stripe sizes for SAP HANA volumes here:
https://learn.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#stripe-sizes-when-using-logical-volume-managers
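As a sketch of the requested feature, the volume definition from the playbook above could accept a stripe-size key next to the existing raid_level and raid_disks keys. The option name raid_stripe_size below is an assumption chosen by analogy with the existing raid_chunk_size option; it is not currently implemented by the role:

```yaml
# Hypothetical volume definition; "raid_stripe_size" does not exist in the
# role today and is named here only by analogy with "raid_chunk_size".
volumes:
  - name: hanadata_lv
    size: 100%
    mount_point: /hana/data
    fs_type: xfs
    raid_disks: "{{ [unused_disks[0], unused_disks[1]] }}"
    raid_level: raid0
    raid_stripe_size: "256 KiB"  # example value; consult the Azure guidance linked above
```

Internally this would presumably map to the stripe size passed to lvcreate (its -I/--stripesize option) instead of the hard-coded 64K default.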