Bug 2103800
| Summary: | [RHEL9.1] tests_lvm_pool_members_scsi_generated.yml failed to add disk 'sdc' to pool 'foo' | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 9 | Reporter: | guazhang <guazhang> |
| Component: | python-blivet | Assignee: | Vojtech Trefny <vtrefny> |
| Status: | CLOSED ERRATA | QA Contact: | Release Test Team <release-test-team-automation> |
| Severity: | unspecified | Docs Contact: | Sagar Dubewar <sdubewar> |
| Priority: | unspecified | | |
| Version: | 9.1 | CC: | gfialova, jikortus, jstodola, rmeggins, sdubewar, spetrosi, vtrefny, yizhan |
| Target Milestone: | rc | Keywords: | Triaged |
| Target Release: | --- | Flags: | sdubewar: needinfo-, pm-rhel: mirror+ |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | role:storage | | |
| Fixed In Version: | python-blivet-3.6.0-5.el9 | Doc Type: | Bug Fix |
| Doc Text: | Installer creating LUKSv2 devices with a sector size of 512 bytes. Previously, the RHEL installer created LUKSv2 devices with 4096-byte sectors if the disk had 4096-byte physical sectors. With this update, the installer creates LUKSv2 devices with a sector size of 512 bytes, which offers better compatibility when disks with different physical sector sizes are used together in one LVM volume group, even when the LVM physical volumes are encrypted. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| : | 2160465 (view as bug list) | | |
| Last Closed: | 2023-05-09 07:36:35 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 2160465 | | |
@vtrefny Any ideas? The error comes from blivet; for some reason it thinks the disks have different sector sizes, but from the logs it looks like it knows that all three have 512-byte sectors. I need to dig deeper, but the issue definitely isn't in the role, which just raises the error thrown from blivet.

Upstream PR: https://github.com/storaged-project/blivet/pull/1096

The issue is caused by cryptsetup creating a 4096-byte-sector dm-crypt device on top of disks with a 4096-byte physical block size and a 512-byte logical block size. Normally, disks with a 4096-byte physical sector size and a 512-byte logical sector size can be combined in one LVM volume group, but with encryption this is not possible because of cryptsetup's optimal sector size autodetection. The workaround for now is to force cryptsetup to use a 512-byte encryption sector size (see the sketch after these comments).

This can be tested in a VM; at least two disks are needed: one (or more) with the default 512-byte logical block size, and one with a 4096-byte physical size and a 512-byte logical size. In libvirt, such a disk can be created by adding <blockio logical_block_size="512" physical_block_size="4096"/> to the disk's XML definition (an example setup follows below). It is not necessary to run the storage role test case from the report; a simple autopart installation with encryption is also affected by this.

*** Bug 2153437 has been marked as a duplicate of this bug. ***

Checked that python-blivet-3.6.0-5.el9 is in nightly compose RHEL-9.2.0-20230214.15. Moving to VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (python-blivet bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:2230
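Not part of the original comments: a minimal sketch of the workaround described above, i.e. forcing cryptsetup to use a 512-byte encryption sector size instead of the autodetected optimal one. The device names (/dev/vdb, /dev/vdb1) and the mapping name are hypothetical, and the --sector-size option applies to LUKS2 only:

```
# Check the sector sizes of the underlying disk; the problem case is
# a 4096-byte physical and a 512-byte logical block size.
blockdev --getpbsz --getss /dev/vdb

# Force a 512-byte encryption sector size so the resulting dm-crypt
# device keeps 512-byte logical sectors.
cryptsetup luksFormat --type luks2 --sector-size 512 /dev/vdb1
cryptsetup open /dev/vdb1 luks-vdb1

# The opened device should now report 512-byte logical sectors,
# so LVM can combine it with PVs living on plain 512-byte disks.
blockdev --getss /dev/mapper/luks-vdb1
```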
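Also not from the original report: one possible libvirt setup for the VM reproduction described above, built around the <blockio> element quoted in the comment. The VM name, image path, and target device are illustrative:

```
# Create a backing image for the extra test disk (path is illustrative).
qemu-img create -f qcow2 /var/lib/libvirt/images/disk-512e.qcow2 10G

# Disk definition advertising a 4096-byte physical and a 512-byte
# logical block size to the guest.
cat > /tmp/disk-512e.xml <<'EOF'
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/disk-512e.qcow2'/>
  <target dev='vdb' bus='virtio'/>
  <blockio logical_block_size='512' physical_block_size='4096'/>
</disk>
EOF

# Attach the disk to the test VM (VM name is hypothetical).
virsh attach-device testvm /tmp/disk-512e.xml --persistent
```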
Description of problem:

The storage role test tests_lvm_pool_members_scsi_generated.yml failed in the following task:

```
TASK [rhel-system-roles.storage : manage the pools and volumes to match the specified state] ***
task path: /usr/share/ansible/roles/rhel-system-roles.storage/tasks/main-blivet.yml:77
fatal: [localhost]: FAILED! => {"actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "msg": "failed to add disk 'sdc' to pool 'foo': Disk luks-sdc1 cannot be added to this volume group. LVM doesn't allow using physical volumes with inconsistent (logical) sector sizes.", "packages": [], "pools": [], "volumes": []}
```

Version-Release number of selected component (if applicable):
ansible-core-2.13.1-2.el9.x86_64
rhel-system-roles-1.19.3-1.el9.noarch

How reproducible:

Steps to Reproduce:
1. ansible-playbook -vv -i host tests_lvm_pool_members_scsi_generated.yml

Actual results:

Expected results:

Additional info:

```
TASK [rhel-system-roles.storage : manage the pools and volumes to match the specified state] ***
task path: /usr/share/ansible/roles/rhel-system-roles.storage/tasks/main-blivet.yml:77
fatal: [localhost]: FAILED! => {"actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "msg": "failed to add disk 'sdc' to pool 'foo': Disk luks-sdc1 cannot be added to this volume group. LVM doesn't allow using physical volumes with inconsistent (logical) sector sizes.", "packages": [], "pools": [], "volumes": []}

TASK [rhel-system-roles.storage : failed message] ******************************
task path: /usr/share/ansible/roles/rhel-system-roles.storage/tasks/main-blivet.yml:99
fatal: [localhost]: FAILED! => {"changed": false, "msg": {"actions": [], "changed": false, "crypts": [], "failed": true, "invocation": {"module_args": {"disklabel_type": null, "diskvolume_mkfs_option_map": {}, "packages_only": false, "pool_defaults": {"disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "state": "present", "type": "lvm", "volumes": []}, "pools": [{"disks": ["sdb", "sdc", "sdd"], "encryption": true, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": 0, "encryption_luks_version": "luks2", "encryption_password": "yabbadabbadoo", "name": "foo", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "state": "present", "type": "lvm", "volumes": []}], "safe_mode": false, "use_partitions": true, "volume_defaults": {"cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "size": 0, "state": "present", "thin": null, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null}, "volumes": []}}, "leaves": [], "mounts": [], "msg": "failed to add disk 'sdc' to pool 'foo': Disk luks-sdc1 cannot be added to this volume group. LVM doesn't allow using physical volumes with inconsistent (logical) sector sizes.", "packages": [], "pools": [], "volumes": []}}

TASK [rhel-system-roles.storage : Unmask the systemd cryptsetup services] ******
task path: /usr/share/ansible/roles/rhel-system-roles.storage/tasks/main-blivet.yml:103

PLAY RECAP *********************************************************************
localhost : ok=382 changed=16 unreachable=0 failed=1 skipped=243 rescued=1 ignored=0

STDERR:
[WARNING]: Unable to query 'service' tool (1):
[WARNING]: TASK: Verify the volumes listed in storage_pools were correctly managed: The loop variable 'storage_test_pool' is already in use. You should set the `loop_var` value in the `loop_control` option for the task to something else to avoid variable collisions and unexpected behavior.
[... the loop variable warning above appears five times in total ...]

RETURN: 2

NAME                                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda                                             8:0    1 447.1G  0 disk
├─sda1                                          8:1    1     1G  0 part  /boot
└─sda2                                          8:2    1 446.1G  0 part
  ├─rhel_storageqe--104-root                  253:0    0    70G  0 lvm   /
  ├─rhel_storageqe--104-swap                  253:1    0   7.7G  0 lvm   [SWAP]
  └─rhel_storageqe--104-home                  253:2    0 368.4G  0 lvm   /home
sdb                                             8:16   1 447.1G  0 disk
└─sdb1                                          8:17   1 447.1G  0 part
  └─luks-7af2d787-cbe8-4dd9-8dfb-186b4f0a9943 253:5    0 447.1G  0 crypt
sdc                                             8:32   1 447.1G  0 disk
sdd                                             8:48   1 447.1G  0 disk
```

Beaker logs:
http://lab-04.rhts.eng.pek2.redhat.com/beaker/logs/tasks/146967+/146967330/taskout.log
https://beaker.engineering.redhat.com/recipes/12235077#task146967326,task146967328
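A hedged diagnostic sketch, not part of the original report: using the device names from the lsblk output above, the sector size mismatch behind the error message can be confirmed with blockdev, where --getss prints the logical and --getpbsz the physical sector size:

```
# The raw pool disks all report 512-byte logical sectors, so LVM
# would accept them together when unencrypted.
for d in sdb sdc sdd; do
    echo "/dev/$d: logical $(blockdev --getss /dev/$d), physical $(blockdev --getpbsz /dev/$d)"
done

# The dm-crypt device created on a 4096-byte-physical disk is the one
# that ends up with 4096-byte logical sectors (mapping name from lsblk).
blockdev --getss /dev/mapper/luks-7af2d787-cbe8-4dd9-8dfb-186b4f0a9943
```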