Bug 1894647 - storage: pool metadata usage must be accounted for by the user
Summary: storage: pool metadata usage must be accounted for by the user
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: rhel-system-roles
Version: 8.3
Hardware: All
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 8.0
Assignee: David Lehman
QA Contact: ChanghuiZhong
URL:
Whiteboard: role:storage
Depends On:
Blocks:
 
Reported: 2020-11-04 17:07 UTC by David Lehman
Modified: 2022-08-02 18:10 UTC
CC: 7 users

Fixed In Version: rhel-system-roles-1.0.0-28.el8
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-05-18 16:02:34 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github linux-system-roles storage issues 13 0 None open Insufficient space when lv size matches vg 2021-02-16 17:56:19 UTC
Github linux-system-roles storage pull 199 0 None open Trim volume size as needed to fit in pool free space 2021-02-16 18:04:44 UTC

Description David Lehman 2020-11-04 17:07:32 UTC
Description of problem:
When defining volumes within pools, it is on the user to adjust volume sizes to account for metadata space usage within the pool. This is a poor user experience and should be handled by the role.
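The arithmetic behind the failure can be illustrated with a short sketch. The 4 MiB physical extent size and the 1 MiB per-PV metadata reserve below are typical LVM defaults, used here as assumptions; the exact figures vary with configuration:

```python
# Sketch: why requesting volumes whose total equals the raw disk size fails.
# LVM reserves space on each PV for metadata, and the VG rounds capacity
# down to whole physical extents, so usable space < raw disk size.
# The extent size and metadata reserve are assumed defaults, not measured values.

MIB = 1024 * 1024
GIB = 1024 * MIB

disk_size = 10 * GIB     # raw size of the backing disk
pv_metadata = 1 * MIB    # space LVM reserves at the start of the PV (assumed)
extent_size = 4 * MIB    # default VG physical extent size (assumed)

# Round the post-metadata capacity down to whole extents.
usable = (disk_size - pv_metadata) // extent_size * extent_size
requested = 10 * GIB     # user asks for a volume equal to the disk size

print(f"usable: {usable / GIB:.3f} GiB, requested: {requested / GIB:.3f} GiB")
print("fits" if requested <= usable else "insufficient space")
```

Under these assumptions the VG ends up a few extents short of 10 GiB, so a request for the full 10 GiB fails exactly as reported below.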

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. create a pool
2. create volumes with cumulative size equal to the size of the disk(s) backing the pool

Actual results:
Failure due to insufficient space since the pool will use some of the disk space for metadata.

Expected results:
No failure. The size of the last volume should be trimmed as needed to fit, possibly subject to some maximum trim amount.

Additional info:

Comment 1 ChanghuiZhong 2021-01-21 09:31:11 UTC
$ cat tests_create_lv_size_equal_to_vg.yml
---
- hosts: all
  become: true
  vars:
    storage_safe_mode: false
    mount_location: '/opt/test1'
    volume_group_size: '10g'
    lv_size: '10g'
    unused_disk_subfact: '{{ ansible_devices[unused_disks[0]] }}'
    disk_size: '{{ unused_disk_subfact.sectors|int *
                   unused_disk_subfact.sectorsize|int }}'

  tasks:
    - include_role:
        name: linux-system-roles.storage

    - include_tasks: get_unused_disk.yml
      vars:
        min_size: "{{ volume_group_size }}"
        max_return: 1

    - name: Create one LV whose size is equal to the VG size
      include_role:
        name: linux-system-roles.storage
      vars:
        storage_pools:
          - name: foo
            disks: "{{ unused_disks }}"
            volumes:
              - name: test1
                size: "{{ lv_size }}"
                mount_point: "{{ mount_location }}"

    - include_tasks: verify-role-results.yml

    - name: Clean up
      include_role:
        name: linux-system-roles.storage
      vars:
        storage_pools:
          - name: foo
            disks: "{{ unused_disks }}"
            state: "absent"
            volumes:
              - name: test1
                mount_point: "{{ mount_location }}"

    - include_tasks: verify-role-results.yml



output:
TASK [linux-system-roles.storage : debug] ********************************************
task path: /home/system-role/storage/tasks/main-blivet.yml:84
ok: [192.168.122.6] => {
    "_storage_pools": [
        {
            "disks": [
                "vdb"
            ],
            "encryption": false,
            "encryption_cipher": null,
            "encryption_key": null,
            "encryption_key_size": null,
            "encryption_luks_version": null,
            "encryption_password": null,
            "name": "foo",
            "raid_chunk_size": null,
            "raid_device_count": null,
            "raid_level": null,
            "raid_metadata_version": null,
            "raid_spare_count": null,
            "state": "present",
            "type": "lvm",
            "volumes": [
                {
                    "encryption": false,
                    "encryption_cipher": null,
                    "encryption_key": null,
                    "encryption_key_size": null,
                    "encryption_luks_version": null,
                    "encryption_password": null,
                    "fs_create_options": "",
                    "fs_label": "",
                    "fs_overwrite_existing": true,
                    "fs_type": "xfs",
                    "mount_check": 0,
                    "mount_device_identifier": "uuid",
                    "mount_options": "defaults",
                    "mount_passno": 0,
                    "mount_point": "/opt/test1",
                    "name": "test1",
                    "pool": "foo",
                    "raid_chunk_size": null,
                    "raid_device_count": null,
                    "raid_level": null,
                    "raid_metadata_version": null,
                    "raid_spare_count": null,
                    "size": "10g",
                    "state": "present",
                    "type": "lvm"
                }
            ]
        }
    ]
}

TASK [linux-system-roles.storage : debug] ********************************************
task path: /home/system-role/storage/tasks/main-blivet.yml:87
ok: [192.168.122.6] => {
    "_storage_volumes": []
}

TASK [linux-system-roles.storage : get required packages] ****************************
task path: /home/system-role/storage/tasks/main-blivet.yml:90
ok: [192.168.122.6] => {"actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "packages": ["lvm2", "xfsprogs"], "pools": [], "volumes": []}

TASK [linux-system-roles.storage : make sure required packages are installed] ********
task path: /home/system-role/storage/tasks/main-blivet.yml:99
ok: [192.168.122.6] => {"changed": false, "msg": "Nothing to do", "rc": 0, "results": []}

TASK [linux-system-roles.storage : manage the pools and volumes to match the specified state] ***
task path: /home/system-role/storage/tasks/main-blivet.yml:104
fatal: [192.168.122.6]: FAILED! => {"actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "msg": "specified size for volume '10 GiB' exceeds available space in pool 'foo' (10 GiB)", "packages": [], "pools": [], "volumes": []}

PLAY RECAP ***************************************************************************
192.168.122.6              : ok=35   changed=0    unreachable=0    failed=1    skipped=15   rescued=0    ignored=0

Comment 2 ChanghuiZhong 2021-01-21 09:34:08 UTC
I wrote a new test case for this BZ:
https://github.com/linux-system-roles/storage/pull/189

Comment 6 David Lehman 2021-02-16 18:04:44 UTC
Upstream Pull Request: https://github.com/linux-system-roles/storage/pull/199

Comment 12 ChanghuiZhong 2021-02-18 04:46:52 UTC
Test passes with rhel-system-roles-1.0.0-28.el8.

Moving to VERIFIED.

Comment 14 errata-xmlrpc 2021-05-18 16:02:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (rhel-system-roles bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2021:1909

