Bug 2142550

Summary: RFE for the storage system role to support configuring the stripe size for RAID LVM volumes
Product: Red Hat Enterprise Linux 8
Component: python-blivet
Version: 8.6
Status: CLOSED ERRATA
Severity: medium
Priority: medium
Reporter: Vojtech Trefny <vtrefny>
Assignee: Vojtech Trefny <vtrefny>
QA Contact: Fine Fan <ffan>
CC: briasmit, ffan
Keywords: FutureFeature, Triaged
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Fixed In Version: python-blivet-3.6.0-5.el8
Doc Type: If docs needed, set a value
Clones: 2168476 (view as bug list)
Last Closed: 2023-11-14 15:34:40 UTC
Type: Bug
Bug Blocks: 2141961

Description Vojtech Trefny 2022-11-14 11:56:19 UTC
This bug was initially created as a copy of Bug #2141961

I am copying this bug because blivet currently doesn't support setting the stripe size for RAID LVs.



Description of problem:

Creating an LVM RAID volume with the Storage System Role offers no option to set a custom stripe size; the value is hard-coded to the 64K default.

After a discussion with Brian Smith, this should be considered an RFE for the storage system role to support configuring the stripe size for RAID LVM volumes.

Version-Release number of selected component (if applicable):

Tested on: 

redhat.rhel_system_roles 1.16.2
fedora.linux_system_roles 1.30.0

How reproducible:

Always reproducible by running the role with the LVM RAID options below.

Steps to Reproduce:

Run the following playbook:

---
- name: PoC RAID0 HANA storage with storage system role
  hosts: <managed node>
  become: yes
  vars:
    unused_disks: 
      - <disk1>
      - <disk2>
  tasks:
    - name: Run storage role
      vars:
        storage_safe_mode: false
        storage_use_partitions: true
        storage_pools:
          - name: hanadata_vg
            type: lvm
            disks: "{{ unused_disks }}"
            state: present
            volumes:
              - name: hanadata_lv
                size: 100%
                mount_point: /hana/data
                fs_type: xfs
                raid_disks: "{{ [unused_disks[0], unused_disks[1]] }}"
                raid_level: raid0
      ansible.builtin.include_role:
        name: fedora.linux_system_roles.storage
        #name: redhat.rhel_system_roles.storage
...


Actual results:

- The stripe size is set to the 64K default and cannot be changed:

[root@ansible3 ~]# lvs -ao +lv_full_name,devices,stripe_size | awk '/hanadata/'
  hanadata_lv            hanadata_vg rwi-aor---  199.99g                                                     hanadata_vg/hanadata_lv          hanadata_lv_rimage_0(0),hanadata_lv_rimage_1(0) 64.00k
  [hanadata_lv_rimage_0] hanadata_vg iwi-aor--- <100.00g                                                     hanadata_vg/hanadata_lv_rimage_0 /dev/sdf1(0)                                        0 
  [hanadata_lv_rimage_1] hanadata_vg iwi-aor--- <100.00g                                                     hanadata_vg/hanadata_lv_rimage_1 /dev/sdg1(0)                                        0 
[root@ansible3 ~]# 


Expected results:

Similarly to raid_chunk_size, there should be an option to set a custom stripe size value, if needed.
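
As a sketch of what the requested interface could look like, the volume definition from the reproduction playbook might gain a stripe-size key alongside raid_level. The parameter name raid_stripe_size is an assumption here, modeled on the existing raid_chunk_size option, not a confirmed part of the role:

```yaml
# Hypothetical volume definition for the storage role.
# raid_stripe_size is an assumed parameter name mirroring raid_chunk_size.
storage_pools:
  - name: hanadata_vg
    type: lvm
    disks: "{{ unused_disks }}"
    state: present
    volumes:
      - name: hanadata_lv
        size: 100%
        mount_point: /hana/data
        fs_type: xfs
        raid_disks: "{{ [unused_disks[0], unused_disks[1]] }}"
        raid_level: raid0
        raid_stripe_size: "256 KiB"  # e.g. the 256K stripe size Azure recommends for HANA data
```

Under the hood this would correspond to LVM's own stripe-size knob (lvcreate's --stripesize option), which blivet would need to expose for RAID LVs.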

Additional info:

The Azure documentation asks for specific stripe sizes here: https://learn.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#stripe-sizes-when-using-logical-volume-managers

Comment 6 errata-xmlrpc 2023-11-14 15:34:40 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (python-blivet bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:7004