+++ This bug was initially created as a clone of Bug #2103800 +++
Description of problem:
The storage role test tests_lvm_pool_members_scsi_generated.yml failed with:
TASK [rhel-system-roles.storage : manage the pools and volumes to match the specified state] ***
task path: /usr/share/ansible/roles/rhel-system-roles.storage/tasks/main-blivet.yml:77
fatal: [localhost]: FAILED! => {"actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "msg": "failed to add disk 'sdc' to pool 'foo': Disk luks-sdc1 cannot be added to this volume group. LVM doesn't allow using physical volumes with inconsistent (logical) sector sizes.", "packages": [], "pools": [], "volumes": []}
Version-Release number of selected component (if applicable):
ansible-core-2.13.1-2.el9.x86_64
rhel-system-roles-1.19.3-1.el9.noarch
How reproducible:
Steps to Reproduce:
1. ansible-playbook -vv -i host tests_lvm_pool_members_scsi_generated.yml
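For reference, a minimal sketch of how the reproduction command can be wired up. The inventory file name "host" comes from the command above; using a local connection to localhost is an assumption based on the "fatal: [localhost]" lines in the output:

# hypothetical minimal inventory matching the localhost host seen in the failure output
cat > host <<'EOF'
localhost ansible_connection=local
EOF
ansible-playbook -vv -i host tests_lvm_pool_members_scsi_generated.yml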
Actual results:
Expected results:
Additional info:
TASK [rhel-system-roles.storage : manage the pools and volumes to match the specified state] ***
task path: /usr/share/ansible/roles/rhel-system-roles.storage/tasks/main-blivet.yml:77
fatal: [localhost]: FAILED! => {"actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "msg": "failed to add disk 'sdc' to pool 'foo': Disk luks-sdc1 cannot be added to this volume group. LVM doesn't allow using physical volumes with inconsistent (logical) sector sizes.", "packages": [], "pools": [], "volumes": []}
TASK [rhel-system-roles.storage : failed message] ******************************
task path: /usr/share/ansible/roles/rhel-system-roles.storage/tasks/main-blivet.yml:99
fatal: [localhost]: FAILED! => {"changed": false, "msg": {"actions": [], "changed": false, "crypts": [], "failed": true, "invocation": {"module_args": {"disklabel_type": null, "diskvolume_mkfs_option_map": {}, "packages_only": false, "pool_defaults": {"disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "state": "present", "type": "lvm", "volumes": []}, "pools": [{"disks": ["sdb", "sdc", "sdd"], "encryption": true, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": 0, "encryption_luks_version": "luks2", "encryption_password": "yabbadabbadoo", "name": "foo", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "state": "present", "type": "lvm", "volumes": []}], "safe_mode": false, "use_partitions": true, "volume_defaults": {"cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "size": 0, "state": "present", "thin": null, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null}, "volumes": []}}, "leaves": [], "mounts": [], "msg": "failed to add disk 'sdc' to pool 'foo': Disk luks-sdc1 cannot be added to this volume group. LVM doesn't allow using physical volumes with inconsistent (logical) sector sizes.", "packages": [], "pools": [], "volumes": []}}
TASK [rhel-system-roles.storage : Unmask the systemd cryptsetup services] ******
task path: /usr/share/ansible/roles/rhel-system-roles.storage/tasks/main-blivet.yml:103
PLAY RECAP *********************************************************************
localhost : ok=382 changed=16 unreachable=0 failed=1 skipped=243 rescued=1 ignored=0
STDERR:[WARNING]: Unable to query 'service' tool (1):
[WARNING]: TASK: Verify the volumes listed in storage_pools were correctly
managed: The loop variable 'storage_test_pool' is already in use. You should
set the `loop_var` value in the `loop_control` option for the task to something
else to avoid variable collisions and unexpected behavior.
(the warning above is repeated four more times in the output)
RETURN:2
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 1 447.1G 0 disk
├─sda1 8:1 1 1G 0 part /boot
└─sda2 8:2 1 446.1G 0 part
├─rhel_storageqe--104-root 253:0 0 70G 0 lvm /
├─rhel_storageqe--104-swap 253:1 0 7.7G 0 lvm [SWAP]
└─rhel_storageqe--104-home 253:2 0 368.4G 0 lvm /home
sdb 8:16 1 447.1G 0 disk
└─sdb1 8:17 1 447.1G 0 part
└─luks-7af2d787-cbe8-4dd9-8dfb-186b4f0a9943 253:5 0 447.1G 0 crypt
sdc 8:32 1 447.1G 0 disk
sdd 8:48 1 447.1G 0 disk
http://lab-04.rhts.eng.pek2.redhat.com/beaker/logs/tasks/146967+/146967330/taskout.log
https://beaker.engineering.redhat.com/recipes/12235077#task146967326,task146967328
--- Additional comment from Rich Megginson on 2022-08-04 00:50:18 UTC ---
@vtrefny Any ideas?
--- Additional comment from Vojtech Trefny on 2022-08-24 12:26:30 UTC ---
The error comes from blivet; for some reason it thinks the disks have different sector sizes, even though from the logs it looks like it knows that all three have 512-byte sectors. I need to dig deeper, but the issue definitely isn't in the role; the role just re-raises the error thrown by blivet.
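For anyone triaging this, a quick way to compare what the kernel reports for the disks and for the dm-crypt device is a sketch like the one below; device names follow the lsblk listing above, and the exact output format may differ:

# logical vs. physical sector sizes of the disks used by the pool
lsblk -o NAME,TYPE,LOG-SEC,PHY-SEC /dev/sdb /dev/sdc /dev/sdd
# the same values straight from the block layer, for one disk
blockdev --getss --getpbsz /dev/sdc
# for an existing LUKS2 device, the sector size chosen by cryptsetup
# is listed in the "Data segments" section of the dump
cryptsetup luksDump /dev/sdb1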
--- Additional comment from Vojtech Trefny on 2023-01-12 13:47:51 UTC ---
upstream PR: https://github.com/storaged-project/blivet/pull/1096
The issue is caused by cryptsetup creating a 4096-byte-sector dm-crypt device on top of disks with a 4096-byte physical block size and a 512-byte logical block size. Normally, disks with a 4096-byte physical sector size and a 512-byte logical sector size can be combined in one LVM volume group, but with encryption this is not possible because of cryptsetup's optimal sector size autodetection. The workaround for now is to force cryptsetup to use a 512-byte encryption sector size.
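Outside the role, the workaround can be exercised manually when formatting the LUKS2 device. A sketch, using the device and pool names from the failure above (this wipes /dev/sdc1):

# force a 512-byte encryption sector size instead of letting cryptsetup
# auto-detect the optimal (4096-byte) sector size; WARNING: destroys data on /dev/sdc1
cryptsetup luksFormat --type luks2 --sector-size 512 /dev/sdc1
cryptsetup open /dev/sdc1 luks-sdc1
# the resulting dm-crypt device now reports a 512-byte logical sector size,
# so it can join the same volume group as the other encrypted PVs
pvcreate /dev/mapper/luks-sdc1
vgextend foo /dev/mapper/luks-sdc1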
This can be tested in a VM; at least two disks are needed: one (or more) with the default 512-byte logical block size and one with a 4096-byte physical and a 512-byte logical block size. In libvirt such a disk can be created by adding
<blockio logical_block_size="512" physical_block_size="4096"/>
to the disk's XML definition. It's not necessary to run the storage role test case from the report; a simple autopart installation with encryption is also affected by this.
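A rough sketch of provisioning such a test disk for a libvirt guest (image path, size, and domain are placeholders):

# create a backing image for the 4096/512 test disk; path and size are arbitrary
qemu-img create -f qcow2 /var/lib/libvirt/images/test-4k.qcow2 10G
# attach it to the guest, then run "virsh edit <domain>" and add
#   <blockio logical_block_size="512" physical_block_size="4096"/>
# inside that disk's <disk> element; after a guest reboot the disk should
# report a 512-byte logical and 4096-byte physical sector size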
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory (python-blivet bug fix and enhancement update), and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2023:2790