`tests_luks.yml` no longer causes the partition cases to fail with NVMe disks
Previously, NVMe disks used a different partition naming convention than `virtio`/`scsi` disks, and the storage role did not account for it. As a consequence, running the storage role with NVMe disks resulted in a crash. With this fix, the storage RHEL System Role now obtains the partition name from the `blivet` module instead of predicting it.
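As a rough illustration of the fixed approach, here is a minimal sketch that asks `blivet` for the partition names instead of constructing them from the disk name. This is not the role's actual code; it only assumes a Linux host with the `blivet` Python package installed and root privileges, and the disk name `nvme0n1` is just an example.

```python
import blivet

# Scan the current system's storage configuration (requires root).
b = blivet.Blivet()
b.reset()

# Read the partitions' names from blivet rather than predicting them:
# naive prediction such as disk + "1" breaks on NVMe, where nvme0n1's
# first partition is nvme0n1p1, not nvme0n11.
disk = b.devicetree.get_device_by_name("nvme0n1")  # example disk name
for part in disk.children:
    print(part.name, part.path)
```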
It is not related to LUKS, correct? By the way, is there a way to reproduce the problem without using an actual nvme device, like in the VMs used in the CI?
(In reply to Pavel Cahyna from comment #1)
> It is not related to LUKS, correct? By the way, is there a way to reproduce
> the problem without using an actual nvme device, like in the VMs used in the
> CI?
Yes, it can also be reproduced with the below playbook; currently I have no idea how to reproduce it without an actual NVMe device.
I saw David already has a fix for it; maybe he has more hints about the root cause and ideas on how to reproduce it in VMs.
If not, maybe we can consider enabling NVMe disks in the VMs.
```
---
- hosts: all
  become: true
  vars:
    storage_safe_mode: false
    mount_location: '/opt/test1'
    volume_size: '5g'
  tasks:
    - include_role:
        name: storage
    - include_tasks: get_unused_disk.yml
      vars:
        min_size: "{{ volume_size }}"
        max_return: 1
    ##
    ## Partition
    ##
    - name: Create an encrypted partition volume w/ default fs
      include_role:
        name: storage
      vars:
        storage_pools:
          - name: foo
            type: partition
            disks: "{{ unused_disks }}"
            volumes:
              - name: test1
                type: partition
                mount_point: "{{ mount_location }}"
                # size: 4g
    - include_tasks: verify-role-results.yml
    - name: Remove the encryption layer
      include_role:
        name: storage
      vars:
        storage_pools:
          - name: foo
            type: partition
            disks: "{{ unused_disks }}"
            volumes:
              - name: test1
                type: partition
                mount_point: "{{ mount_location }}"
                # size: 4g
    - include_tasks: verify-role-results.yml
    - name: Clean up
      include_role:
        name: storage
      vars:
        storage_pools:
          - name: foo
            type: partition
            disks: "{{ unused_disks }}"
    - include_tasks: verify-role-results.yml
```
The problem is one of predicting the names of partitions on NVMe drives, which use a different formula than scsi/virtio drives. So any test case that uses partitions will likely hit this.
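For illustration, here is a minimal sketch of the two formulas (the `partition_name` helper below is hypothetical, not code from the role or from `blivet`): the kernel inserts a `p` separator before the partition number whenever the parent device's name ends in a digit, as NVMe namespace names such as `nvme0n1` do.

```python
def partition_name(disk: str, number: int) -> str:
    """Predict a partition's kernel name from its parent disk's name.

    Disks whose names end in a digit (nvme0n1, mmcblk0, ...) get a 'p'
    separator before the partition number; others (sda, vda, ...) get
    the number appended directly.
    """
    sep = "p" if disk[-1].isdigit() else ""
    return f"{disk}{sep}{number}"


assert partition_name("sda", 1) == "sda1"           # scsi
assert partition_name("vda", 1) == "vda1"           # virtio
assert partition_name("nvme0n1", 1) == "nvme0n1p1"  # nvme
```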
Do we actually document partition pools as supported?
> The problem is one of predicting the names of partitions on nvme drives, where they use a different formula than on scsi/virtio
And what about multipath, will it also use a different formula than scsi/virtio/(ata)?
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (rhel-system-roles bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2021:1909
Description of problem:
storage: tests_luks.yml failed with NVMe disk

Version-Release number of selected component (if applicable):
rhel-system-roles-1.0-12.el8.noarch

How reproducible:

Steps to Reproduce:
1. Run the below playbook with an NVMe disk
2.
3.

Actual results:

Expected results:

Additional info:
It failed on the Partition cases and passed on the DISK/LVM cases.

# cat tests_luks.yml
```
---
- hosts: all
  become: true
  vars:
    storage_safe_mode: false
    mount_location: '/opt/test1'
    volume_size: '5g'
  tasks:
    - include_role:
        name: storage
    - include_tasks: get_unused_disk.yml
      vars:
        min_size: "{{ volume_size }}"
        max_return: 1
    ##
    ## Partition
    ##
    - name: Create an encrypted partition volume w/ default fs
      include_role:
        name: storage
      vars:
        storage_pools:
          - name: foo
            type: partition
            disks: "{{ unused_disks }}"
            volumes:
              - name: test1
                type: partition
                mount_point: "{{ mount_location }}"
                # size: 4g
                encryption: true
                encryption_passphrase: 'yabbadabbadoo'
    - include_tasks: verify-role-results.yml
    - name: Remove the encryption layer
      include_role:
        name: storage
      vars:
        storage_pools:
          - name: foo
            type: partition
            disks: "{{ unused_disks }}"
            volumes:
              - name: test1
                type: partition
                mount_point: "{{ mount_location }}"
                # size: 4g
                encryption: false
                encryption_passphrase: 'yabbadabbadoo'
    - include_tasks: verify-role-results.yml
```