Bug 2103800 - [RHEL9.1] tests_lvm_pool_members_scsi_generated.yml failed to add disk 'sdc' to pool 'foo'
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: python-blivet
Version: 9.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Vojtech Trefny
QA Contact: Release Test Team
Docs Contact: Sagar Dubewar
URL:
Whiteboard: role:storage
Duplicates: 2153437 (view as bug list)
Depends On:
Blocks: 2160465
 
Reported: 2022-07-05 00:13 UTC by guazhang@redhat.com
Modified: 2023-05-09 08:37 UTC
CC List: 8 users

Fixed In Version: python-blivet-3.6.0-5.el9
Doc Type: Bug Fix
Doc Text:
.The installer now creates LUKSv2 devices with a 512-byte sector size
Previously, the RHEL installer created LUKSv2 devices with a 4096-byte sector size if the disk had 4096-byte physical sectors. With this update, the installer creates LUKSv2 devices with a sector size of 512 bytes, which allows LVM physical volumes on disks with different physical sector sizes to be combined in one volume group even when the physical volumes are encrypted.
Clone Of:
Clones: 2160465 (view as bug list)
Environment:
Last Closed: 2023-05-09 07:36:35 UTC
Type: Bug
Target Upstream Version:
Embargoed:
sdubewar: needinfo-
pm-rhel: mirror+


Attachments


Links
System                   ID               Private  Priority  Status  Summary  Last Updated
Red Hat Issue Tracker    RHELPLAN-126947  0        None      None    None     2022-07-05 00:21:04 UTC
Red Hat Issue Tracker    RTT-5116         0        None      None    None     2023-01-18 18:44:05 UTC
Red Hat Issue Tracker    RTT-5117         0        None      None    None     2023-01-18 18:44:09 UTC
Red Hat Product Errata   RHBA-2023:2230   0        None      None    None     2023-05-09 07:36:45 UTC

Internal Links: 2153437

Description guazhang@redhat.com 2022-07-05 00:13:47 UTC
Description of problem:
The storage role test tests_lvm_pool_members_scsi_generated.yml failed with the following error:

TASK [rhel-system-roles.storage : manage the pools and volumes to match the specified state] ***
task path: /usr/share/ansible/roles/rhel-system-roles.storage/tasks/main-blivet.yml:77
fatal: [localhost]: FAILED! => {"actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "msg": "failed to add disk 'sdc' to pool 'foo': Disk luks-sdc1 cannot be added to this volume group. LVM doesn't allow using physical volumes with inconsistent (logical) sector sizes.", "packages": [], "pools": [], "volumes": []}


Version-Release number of selected component (if applicable):
 ansible-core-2.13.1-2.el9.x86_64
rhel-system-roles-1.19.3-1.el9.noarch

How reproducible:


Steps to Reproduce:
1. ansible-playbook -vv -i host tests_lvm_pool_members_scsi_generated.yml

Actual results:


Expected results:


Additional info:


TASK [rhel-system-roles.storage : manage the pools and volumes to match the specified state] ***
task path: /usr/share/ansible/roles/rhel-system-roles.storage/tasks/main-blivet.yml:77
fatal: [localhost]: FAILED! => {"actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "msg": "failed to add disk 'sdc' to pool 'foo': Disk luks-sdc1 cannot be added to this volume group. LVM doesn't allow using physical volumes with inconsistent (logical) sector sizes.", "packages": [], "pools": [], "volumes": []}

TASK [rhel-system-roles.storage : failed message] ******************************
task path: /usr/share/ansible/roles/rhel-system-roles.storage/tasks/main-blivet.yml:99
fatal: [localhost]: FAILED! => {"changed": false, "msg": {"actions": [], "changed": false, "crypts": [], "failed": true, "invocation": {"module_args": {"disklabel_type": null, "diskvolume_mkfs_option_map": {}, "packages_only": false, "pool_defaults": {"disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "state": "present", "type": "lvm", "volumes": []}, "pools": [{"disks": ["sdb", "sdc", "sdd"], "encryption": true, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": 0, "encryption_luks_version": "luks2", "encryption_password": "yabbadabbadoo", "name": "foo", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "state": "present", "type": "lvm", "volumes": []}], "safe_mode": false, "use_partitions": true, "volume_defaults": {"cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "size": 0, "state": "present", "thin": null, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null}, "volumes": []}}, "leaves": [], "mounts": [], "msg": "failed to add disk 'sdc' to pool 'foo': Disk luks-sdc1 cannot be added to this volume group. LVM doesn't allow using physical volumes with inconsistent (logical) sector sizes.", "packages": [], "pools": [], "volumes": []}}

TASK [rhel-system-roles.storage : Unmask the systemd cryptsetup services] ******
task path: /usr/share/ansible/roles/rhel-system-roles.storage/tasks/main-blivet.yml:103

PLAY RECAP *********************************************************************
localhost                  : ok=382  changed=16   unreachable=0    failed=1    skipped=243  rescued=1    ignored=0   
STDERR:[WARNING]: Unable to query 'service' tool (1):
[WARNING]: TASK: Verify the volumes listed in storage_pools were correctly
managed: The loop variable 'storage_test_pool' is already in use. You should
set the `loop_var` value in the `loop_control` option for the task to something
else to avoid variable collisions and unexpected behavior.
[the warning above is repeated four more times]
RETURN:2

NAME                                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda                                             8:0    1 447.1G  0 disk  
├─sda1                                          8:1    1     1G  0 part  /boot
└─sda2                                          8:2    1 446.1G  0 part  
  ├─rhel_storageqe--104-root                  253:0    0    70G  0 lvm   /
  ├─rhel_storageqe--104-swap                  253:1    0   7.7G  0 lvm   [SWAP]
  └─rhel_storageqe--104-home                  253:2    0 368.4G  0 lvm   /home
sdb                                             8:16   1 447.1G  0 disk  
└─sdb1                                          8:17   1 447.1G  0 part  
  └─luks-7af2d787-cbe8-4dd9-8dfb-186b4f0a9943 253:5    0 447.1G  0 crypt 
sdc                                             8:32   1 447.1G  0 disk  
sdd                                             8:48   1 447.1G  0 disk  


http://lab-04.rhts.eng.pek2.redhat.com/beaker/logs/tasks/146967+/146967330/taskout.log

https://beaker.engineering.redhat.com/recipes/12235077#task146967326,task146967328

Comment 1 Rich Megginson 2022-08-04 00:50:18 UTC
@vtrefny Any ideas?

Comment 2 Vojtech Trefny 2022-08-24 12:26:30 UTC
The error comes from blivet; for some reason it thinks the disks have different sector sizes, even though from the logs it looks like it knows that all three have 512-byte sectors. I need to dig deeper, but the issue definitely isn't in the role, which just surfaces the error raised by blivet.
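
For illustration, the logical and physical sector sizes blivet evaluates can be compared with standard tools; the device and mapping names below are taken from the lsblk output later in this report:

# logical (LOG-SEC) and physical (PHY-SEC) sector sizes of the pool disks
lsblk -o NAME,LOG-SEC,PHY-SEC /dev/sdb /dev/sdc /dev/sdd

# the same values as reported by the kernel block layer
blockdev --getss --getpbsz /dev/sdc

# sector size of the dm-crypt mapping already created on top of sdb1
cryptsetup status luks-7af2d787-cbe8-4dd9-8dfb-186b4f0a9943 | grep -i 'sector size'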

Comment 3 Vojtech Trefny 2023-01-12 13:47:51 UTC
upstream PR: https://github.com/storaged-project/blivet/pull/1096

The issue is caused by cryptsetup creating a 4096-byte-sector dm-crypt device on top of disks with a 4096-byte physical block size and a 512-byte logical block size. Normally, disks with a 4096-byte physical sector size and a 512-byte logical sector size can be combined in one LVM volume group, but with encryption this is not possible because of cryptsetup's optimal sector size autodetection. The workaround for now is to force cryptsetup to use a 512-byte encryption sector size.
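
For illustration, forcing the sector size at the cryptsetup level looks roughly like this (the partition name is only an example; the actual fix in the PR above applies the equivalent setting inside blivet):

# create the LUKS2 layer with a fixed 512-byte sector size instead of the
# autodetected optimum (4096 on disks with 4096-byte physical sectors)
cryptsetup luksFormat --type luks2 --sector-size 512 /dev/sdc1

# confirm the sector size recorded in the LUKS2 data segment
cryptsetup luksDump /dev/sdc1 | grep -i sector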

This can be tested in a VM; at least two disks are needed: one (or more) with the default 512-byte logical block size and one with a 4096-byte physical and 512-byte logical block size. In libvirt, such a disk can be created by adding

<blockio logical_block_size="512" physical_block_size="4096"/>

to the XML disk definition (see the sketch below). It's not necessary to run the storage role test case from the report; a simple autopart installation with encryption is also affected by this.
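
A sketch of such a disk being attached to a test domain; the image path, target device, size, and domain name (rhel9-test) are only placeholders:

# back the disk with a scratch image
qemu-img create -f qcow2 /var/lib/libvirt/images/disk-4kphys.qcow2 10G

# disk with 512-byte logical / 4096-byte physical sectors
cat > disk-4kphys.xml <<'EOF'
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/disk-4kphys.qcow2'/>
  <target dev='vdb' bus='virtio'/>
  <blockio logical_block_size='512' physical_block_size='4096'/>
</disk>
EOF

virsh attach-device rhel9-test disk-4kphys.xml --persistent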

Comment 4 Vojtech Trefny 2023-01-12 13:52:57 UTC
*** Bug 2153437 has been marked as a duplicate of this bug. ***

Comment 12 Jan Stodola 2023-02-15 15:04:30 UTC
Checked that python-blivet-3.6.0-5.el9 is in nightly compose RHEL-9.2.0-20230214.15

Moving to VERIFIED

Comment 16 errata-xmlrpc 2023-05-09 07:36:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (python-blivet bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:2230

