Bug 1894647

Summary:          storage: pool metadata usage must be accounted for by the user
Product:          Red Hat Enterprise Linux 8
Component:        rhel-system-roles
Version:          8.3
Hardware:         All
OS:               Linux
Status:           CLOSED ERRATA
Severity:         unspecified
Priority:         unspecified
Reporter:         David Lehman <dlehman>
Assignee:         David Lehman <dlehman>
QA Contact:       ChanghuiZhong <czhong>
CC:               cwei, czhong, djez, guazhang, ovasik, pcahyna, rmeggins
Target Milestone: rc
Target Release:   8.0
Keywords:         Triaged
Flags:            pm-rhel: mirror+
Whiteboard:       role:storage
Fixed In Version: rhel-system-roles-1.0.0-28.el8
Doc Type:         Bug Fix
Type:             Bug
Last Closed:      2021-05-18 16:02:34 UTC

Description David Lehman 2020-11-04 17:07:32 UTC
Description of problem:
When defining volumes within pools, it is left to the user to adjust volume sizes to account for the pool's own metadata usage. For example, LVM reserves space for physical volume metadata and rounds the volume group down to whole physical extents, so a volume group built on a 10 GiB disk offers slightly less than 10 GiB to logical volumes. Expecting users to do this arithmetic themselves is a poor experience; the role should handle it.

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. create a pool
2. create volumes with cumulative size equal to the size of the disk(s) backing the pool

Actual results:
Failure due to insufficient space since the pool will use some of the disk space for metadata.

Expected results:
No failure. The size of the last volume should be trimmed as needed to fit the pool's available space, possibly subject to a cap on the maximum amount trimmed.
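
For illustration, a minimal sketch of the requested trimming behavior (a hypothetical helper, not the role's actual code; it assumes sizes are already normalized to bytes and that pool_free reflects the VG's usable space after metadata overhead):

def fit_volumes(volume_sizes, pool_free, max_trim):
    """Shrink the last volume so the requested set fits in pool_free.

    volume_sizes -- requested sizes in bytes, in definition order
    pool_free    -- usable bytes in the pool, after metadata overhead
    max_trim     -- largest acceptable reduction of the last volume, in bytes
    """
    excess = sum(volume_sizes) - pool_free
    if excess <= 0:
        return volume_sizes               # everything already fits
    if excess > max_trim:
        raise ValueError("volumes exceed pool free space by %d bytes, "
                         "more than the %d-byte trim limit" % (excess, max_trim))
    sizes = list(volume_sizes)
    sizes[-1] -= excess                   # absorb the overhead in the last volume
    return sizes

# e.g. one 10 GiB volume in a pool with 4 MiB of metadata overhead:
# fit_volumes([10 * 1024**3], 10 * 1024**3 - 4 * 1024**2, max_trim=16 * 1024**2)
# -> [10733223936]  (10 GiB minus 4 MiB)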

Additional info:

Comment 1 ChanghuiZhong 2021-01-21 09:31:11 UTC
$ cat tests_create_lv_size_equal_to_vg.yml
---
- hosts: all
  become: true
  vars:
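    # disable safe mode so the role may remove/reformat existing devices on the test disks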
    storage_safe_mode: false
    mount_location: '/opt/test1'
    volume_group_size: '10g'
    lv_size: '10g'
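    # derive the first unused disk's total size in bytes from the Ansible device facts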
    unused_disk_subfact: '{{ ansible_devices[unused_disks[0]] }}'
    disk_size: '{{ unused_disk_subfact.sectors|int *
                   unused_disk_subfact.sectorsize|int }}'

  tasks:
    - include_role:
        name: linux-system-roles.storage

    - include_tasks: get_unused_disk.yml
      vars:
        min_size: "{{ volume_group_size }}"
        max_return: 1

    - name: Create one LV whose size equals the VG size
      include_role:
        name: linux-system-roles.storage
      vars:
          storage_pools:
            - name: foo
              disks: "{{ unused_disks }}"
              volumes:
                - name: test1
                  size: "{{ lv_size }}"
                  mount_point: "{{ mount_location }}"

    - include_tasks: verify-role-results.yml

    - name: Clean up
      include_role:
        name: linux-system-roles.storage
      vars:
          storage_pools:
            - name: foo
              disks: "{{ unused_disks }}"
              state: "absent"
              volumes:
                - name: test1
                  mount_point: "{{ mount_location }}"

    - include_tasks: verify-role-results.yml



output:
TASK [linux-system-roles.storage : debug] ********************************************
task path: /home/system-role/storage/tasks/main-blivet.yml:84
ok: [192.168.122.6] => {
    "_storage_pools": [
        {
            "disks": [
                "vdb"
            ],
            "encryption": false,
            "encryption_cipher": null,
            "encryption_key": null,
            "encryption_key_size": null,
            "encryption_luks_version": null,
            "encryption_password": null,
            "name": "foo",
            "raid_chunk_size": null,
            "raid_device_count": null,
            "raid_level": null,
            "raid_metadata_version": null,
            "raid_spare_count": null,
            "state": "present",
            "type": "lvm",
            "volumes": [
                {
                    "encryption": false,
                    "encryption_cipher": null,
                    "encryption_key": null,
                    "encryption_key_size": null,
                    "encryption_luks_version": null,
                    "encryption_password": null,
                    "fs_create_options": "",
                    "fs_label": "",
                    "fs_overwrite_existing": true,
                    "fs_type": "xfs",
                    "mount_check": 0,
                    "mount_device_identifier": "uuid",
                    "mount_options": "defaults",
                    "mount_passno": 0,
                    "mount_point": "/opt/test1",
                    "name": "test1",
                    "pool": "foo",
                    "raid_chunk_size": null,
                    "raid_device_count": null,
                    "raid_level": null,
                    "raid_metadata_version": null,
                    "raid_spare_count": null,
                    "size": "10g",
                    "state": "present",
                    "type": "lvm"
                }
            ]
        }
    ]
}

TASK [linux-system-roles.storage : debug] ********************************************
task path: /home/system-role/storage/tasks/main-blivet.yml:87
ok: [192.168.122.6] => {
    "_storage_volumes": []
}

TASK [linux-system-roles.storage : get required packages] ****************************
task path: /home/system-role/storage/tasks/main-blivet.yml:90
ok: [192.168.122.6] => {"actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "packages": ["lvm2", "xfsprogs"], "pools": [], "volumes": []}

TASK [linux-system-roles.storage : make sure required packages are installed] ********
task path: /home/system-role/storage/tasks/main-blivet.yml:99
ok: [192.168.122.6] => {"changed": false, "msg": "Nothing to do", "rc": 0, "results": []}

TASK [linux-system-roles.storage : manage the pools and volumes to match the specified state] ***
task path: /home/system-role/storage/tasks/main-blivet.yml:104
fatal: [192.168.122.6]: FAILED! => {"actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "msg": "specified size for volume '10 GiB' exceeds available space in pool 'foo' (10 GiB)", "packages": [], "pools": [], "volumes": []}

PLAY RECAP ***************************************************************************
192.168.122.6              : ok=35   changed=0    unreachable=0    failed=1    skipped=15   rescued=0    ignored=0
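
The failure is expected with the unfixed role: the requested 10 GiB LV cannot fit because LVM consumes part of the disk for metadata (the error message rounds both sizes to '10 GiB'). A rough back-of-the-envelope check, assuming LVM's default 1 MiB metadata/alignment overhead and 4 MiB physical extents:

GiB = 1024**3
MiB = 1024**2

disk = 10 * GiB                        # size of the backing disk, vdb
pv_overhead = 1 * MiB                  # default PV metadata area / data alignment
extent = 4 * MiB                       # default VG physical extent size

# usable VG space: the PV data area rounded down to whole extents
vg_usable = ((disk - pv_overhead) // extent) * extent

print(vg_usable < disk)                # True: a full 10 GiB LV cannot fit
print((disk - vg_usable) // MiB)       # 4 (MiB short)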

Comment 2 ChanghuiZhong 2021-01-21 09:34:08 UTC
I wrote a new test case for this bz:
https://github.com/linux-system-roles/storage/pull/189

Comment 6 David Lehman 2021-02-16 18:04:44 UTC
Upstream Pull Request: https://github.com/linux-system-roles/storage/pull/199

Comment 12 ChanghuiZhong 2021-02-18 04:46:52 UTC
Test passed with rhel-system-roles-1.0.0-28.el8.

Moving to VERIFIED.

Comment 14 errata-xmlrpc 2021-05-18 16:02:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (rhel-system-roles bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2021:1909