Bug 1750680
Summary: | LUKS2 reencryption ignores option disabling VK upload in kernel keyring | ||
---|---|---|---|
Product: | Red Hat Enterprise Linux 8 | Reporter: | Ondrej Kozina <okozina> |
Component: | cryptsetup | Assignee: | Ondrej Kozina <okozina> |
Status: | CLOSED ERRATA | QA Contact: | guazhang <guazhang> |
Severity: | medium | Docs Contact: | |
Priority: | unspecified | ||
Version: | 8.1 | CC: | agk, cmarthal, guazhang, jbrassow, mbroz, okozina, prajnoha, storage-qe |
Target Milestone: | rc | ||
Target Release: | 8.2 | ||
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | cryptsetup-2.2.2-1.el8 | Doc Type: | If docs needed, set a value |
Doc Text: | Story Points: | --- | |
Clone Of: | 1659579 | Environment: | |
Last Closed: | 2020-04-28 16:54:37 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 1757783 | ||
Bug Blocks: |
Description
Ondrej Kozina
2019-09-10 08:50:20 UTC
Hello

[root@storageqe-24 ~]# vgcreate snapper_thinp /dev/sdc
[root@storageqe-24 ~]# lvcreate --thinpool POOL --zero n -L 1G snapper_thinp
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  Logical volume "POOL" created.
[root@storageqe-24 ~]# lvcreate --virtualsize 1G -T snapper_thinp/POOL -n origin
  Logical volume "origin" created.
[root@storageqe-24 ~]# echo Str0ngP455w0rd### | cryptsetup luksFormat /dev/snapper_thinp/origin
[root@storageqe-24 ~]# echo Str0ngP455w0rd### | cryptsetup luksOpen --disable-keyring /dev/snapper_thinp/origin luks_origin
[root@storageqe-24 ~]# mkfs.ext4 /dev/mapper/luks_origin
mke2fs 1.44.6 (5-Mar-2019)
Creating filesystem with 258048 4k blocks and 64512 inodes
Filesystem UUID: e8d48b20-555d-4403-b6d1-35ab0effda53
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

[root@storageqe-24 ~]# lvextend -L +500M -r /dev/snapper_thinp/origin
fsck from util-linux 2.32.1
/dev/mapper/luks_origin: clean, 11/64512 files, 8785/258048 blocks
  WARNING: Sum of all thin volume sizes (<1.49 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Size of logical volume snapper_thinp/origin changed from 1.00 GiB (256 extents) to <1.49 GiB (381 extents).
  Logical volume snapper_thinp/origin successfully resized.
resize2fs 1.44.6 (5-Mar-2019)
Resizing the filesystem on /dev/mapper/luks_origin to 386048 (4k) blocks.
The filesystem on /dev/mapper/luks_origin is now 386048 (4k) blocks long.
[root@storageqe-24 ~]# mkdir -p /mnt/origin
[root@storageqe-24 ~]# mount /dev/mapper/luks_origin /mnt/origin/
[root@storageqe-24 ~]# df -h
Filesystem                           Size  Used Avail Use% Mounted on
devtmpfs                              32G   10M   32G   1% /dev
tmpfs                                 32G     0   32G   0% /dev/shm
tmpfs                                 32G   26M   32G   1% /run
tmpfs                                 32G     0   32G   0% /sys/fs/cgroup
/dev/mapper/rhel_storageqe--24-root   50G  3.3G   47G   7% /
/dev/sda2                           1014M  165M  850M  17% /boot
/dev/mapper/rhel_storageqe--24-home  5.4T   39G  5.4T   1% /home
tmpfs                                6.3G     0  6.3G   0% /run/user/0
/dev/mapper/luks_origin              1.5G  3.0M  1.4G   1% /mnt/origin

[root@storageqe-24 ~]# lvextend -L +500M -r /dev/snapper_thinp/origin
  WARNING: Sum of all thin volume sizes (<1.98 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Size of logical volume snapper_thinp/origin changed from <1.49 GiB (381 extents) to <1.98 GiB (506 extents).
  Logical volume snapper_thinp/origin successfully resized.
resize2fs 1.44.6 (5-Mar-2019)
Filesystem at /dev/mapper/luks_origin is mounted on /mnt/origin; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/mapper/luks_origin is now 514048 (4k) blocks long.

[root@storageqe-24 ~]# echo Str0ngP455w0rd### | cryptsetup reencrypt --resilience none --active-name luks_origin
Progress: 44.8%, ETA 00:07, 900 MiB written, speed 141.3 MiB/s
Failed to write hotzone area starting at 996147200.
Fatal error while reencrypting chunk starting at 1978368, 102400 sectors long.
Reencryption was run in online mode.

[root@storageqe-24 ~]# lvextend -L +500M -r /dev/snapper_thinp/origin
fsadm: Can not find active LUKS device. Unlock "/dev/mapper/snapper_thinp-origin" volume first.
  Filesystem check failed.

Hi, could you please take a look at this error and confirm whether it is expected?
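The session above mixes the thin-pool setup with the actual point of this bug (whether --disable-keyring is honored across reencryption). As a sketch only, the check could be condensed into a standalone function that uses a file-backed loop device instead of thin LVs; it must be run as root, and all paths and names below are illustrative, not part of the original report:

```shell
# Sketch of a standalone check (run as root): does a device opened with
# --disable-keyring keep its volume key in dm-crypt across an online
# reencryption? The backing file path and mapping name are illustrative.
reproduce() {
  set -e
  truncate -s 256M /tmp/luks.img                  # scratch backing file
  dev=$(losetup --find --show /tmp/luks.img)      # attach as a loop device
  echo Str0ngP455w0rd### | cryptsetup luksFormat "$dev"
  echo Str0ngP455w0rd### | cryptsetup open --disable-keyring "$dev" crypt_test
  cryptsetup status crypt_test | grep 'key location'   # expect: dm-crypt
  echo Str0ngP455w0rd### | cryptsetup reencrypt --active-name crypt_test
  cryptsetup status crypt_test | grep 'key location'   # affected builds flip this to: keyring
  cryptsetup close crypt_test
  losetup -d "$dev"
  rm -f /tmp/luks.img
}
```

On a fixed build (cryptsetup-2.2.2-1.el8 per the metadata above), both `grep 'key location'` calls should report dm-crypt.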
The LV 'origin' is twice the size of the thin pool. During reencryption the whole device gets written, so it cannot fit in the thin pool. That part works as expected.

Perhaps you can simplify the test for this bug. The bug is that online reencryption completely ignored the user's preference for the --disable-keyring option during device activation. So you can just activate the device with "cryptsetup open /your/device crypt_mapping --disable-keyring". Afterwards, "cryptsetup status crypt_mapping" should show the following output:

(....)
  key location: dm-crypt <====== here
(...)
  sector size:  512
  offset:  4096 sectors
  size:    293597184 sectors
  mode:    read/write
  flags:   discards

If you reencrypt the device, you just need to verify that the key remained in dm-crypt after the operation finished. It should stay in dm-crypt for the whole duration of the reencryption operation, by the way.

You can check the same with "dmsetup table crypt_mapping --showkeys". A key loaded via the kernel keyring will look like ":64:logon:cryptsetup:b85...." whereas a key loaded directly into dm-crypt will appear as a plain hex representation of the binary volume key.

thanks Ondrej for the details, move to verified

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:1848
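The "dmsetup table --showkeys" check described above can be scripted. This is a hypothetical helper (the name classify_key and the use of awk's fifth field are assumptions, based on the key being the fifth column of a dm-crypt table line), not part of the original report:

```shell
# Hypothetical helper: classify the key field printed by
# "dmsetup table <name> --showkeys" for a dm-crypt target.
# A keyring-backed key looks like ":64:logon:cryptsetup:<description>",
# while a key loaded directly into dm-crypt is plain hex bytes.
classify_key() {
  case "$1" in
    :*:logon:*) echo keyring ;;   # volume key lives in the kernel keyring
    *)          echo dm-crypt ;;  # volume key loaded directly into dm-crypt
  esac
}
```

After reencryption of a device opened with --disable-keyring, something like `classify_key "$(dmsetup table crypt_mapping --showkeys | awk '{print $5}')"` should still print dm-crypt on a fixed build.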