Bug 1733391 - attempting to decrypt a snapshot of an encrypted lvm origin volume fails: " Failed to activate overlay device luks_fs_snap2-overlay with actual origin table"
Summary: attempting to decrypt a snapshot of an encrypted lvm origin volume fails: " Failed to activate overlay device luks_fs_snap2-overlay with actual origin table"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: cryptsetup
Version: 8.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 8.0
Assignee: Ondrej Kozina
QA Contact: Corey Marthaler
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-07-25 23:20 UTC by Corey Marthaler
Modified: 2021-09-06 15:22 UTC
CC List: 5 users

Fixed In Version: cryptsetup-2.2.0-1.el8
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-11-05 22:17:14 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments: none


Links:
Red Hat Product Errata RHBA-2019:3569 (last updated 2019-11-05 22:17:26 UTC)

Description Corey Marthaler 2019-07-25 23:20:57 UTC
Description of problem:
I'm not sure this is an entirely supported configuration, but it worked fine until I introduced snapshots of cache origin volumes. I'm still trying to boil this down to fewer steps and less complexity needed to reproduce it.


# Create LVM cache origin
[root@hayes-02 ~]# lvcreate --wipesignatures y  -L 4G -n corigin cache_sanity /dev/sdc1 
  Logical volume "corigin" created.
[root@hayes-02 ~]# lvcreate  -L 2G -n fs_A_pool cache_sanity /dev/sdd1
  Logical volume "fs_A_pool" created.
[root@hayes-02 ~]# lvcreate  -L 12M -n fs_A_pool_meta cache_sanity /dev/sdd1
  Logical volume "fs_A_pool_meta" created.
[root@hayes-02 ~]# lvs -a -o +devices
  LV             VG           Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices       
  corigin        cache_sanity -wi-a-----  4.00g                                                     /dev/sdc1(0)  
  fs_A_pool      cache_sanity -wi-a-----  2.00g                                                     /dev/sdd1(0)  
  fs_A_pool_meta cache_sanity -wi-a----- 12.00m                                                     /dev/sdd1(512)
[root@hayes-02 ~]# lvconvert --yes --type cache-pool --cachepolicy mq --cachemode writethrough -c 32 --poolmetadata cache_sanity/fs_A_pool_meta cache_sanity/fs_A_pool
  WARNING: Converting cache_sanity/fs_A_pool and cache_sanity/fs_A_pool_meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted cache_sanity/fs_A_pool and cache_sanity/fs_A_pool_meta to cache pool.
[root@hayes-02 ~]# lvconvert --yes --type cache --cachemetadataformat 2 --cachepool cache_sanity/fs_A_pool cache_sanity/corigin
  Logical volume cache_sanity/corigin is now cached.
[root@hayes-02 ~]# lvs -a -o +devices
  LV                VG           Attr       LSize  Pool        Origin          Data%  Meta%  Move Log Cpy%Sync Convert Devices           
  corigin           cache_sanity Cwi-a-C---  4.00g [fs_A_pool] [corigin_corig] 0.00   4.62            0.00             corigin_corig(0)  
  [corigin_corig]   cache_sanity owi-aoC---  4.00g                                                                     /dev/sdc1(0)      
  [fs_A_pool]       cache_sanity Cwi---C---  2.00g                             0.00   4.62            0.00             fs_A_pool_cdata(0)
  [fs_A_pool_cdata] cache_sanity Cwi-ao----  2.00g                                                                     /dev/sdd1(0)      
  [fs_A_pool_cmeta] cache_sanity ewi-ao---- 12.00m                                                                     /dev/sdd1(512)    
  [lvol0_pmspare]   cache_sanity ewi------- 12.00m                                                                     /dev/sdb1(0)      

# Encrypt LVM cache origin
[root@hayes-02 ~]# echo foobarglarch | cryptsetup reencrypt --encrypt --init-only --type luks2 /dev/cache_sanity/corigin --header /tmp/cache_luks_header.1234567890
[root@hayes-02 ~]# echo foobarglarch | cryptsetup luksOpen /dev/cache_sanity/corigin luks_corigin --header /tmp/cache_luks_header.1234567890

# Create a COW snapshot of the cache origin and open it for use
[root@hayes-02 ~]# lvcreate  -s /dev/cache_sanity/corigin -c 64 -n fs_snap1 -L 4100.00m
  Logical volume "fs_snap1" created.
[root@hayes-02 ~]# echo foobarglarch | cryptsetup luksOpen /dev/cache_sanity/fs_snap1 luks_fs_snap1 --header /tmp/cache_luks_header.1234567890

# Online re-encrypt the cache origin
[root@hayes-02 ~]# echo foobarglarch | cryptsetup reencrypt --active-name luks_corigin --header /tmp/cache_luks_header.1234567890
Finished, time 03:17.215, 4096 MiB written, speed  20.8 MiB/s   

# The snap COW is now nearly full due to the entirety of the origin being re-encrypted
[root@hayes-02 ~]# lvs -a -o +devices
  LV                VG           Attr       LSize  Pool        Origin          Data%  Meta%  Move Log Cpy%Sync Convert Devices           
  corigin           cache_sanity owi-aoC---  4.00g [fs_A_pool] [corigin_corig] 99.07  6.84            0.00             corigin_corig(0)  
  [corigin_corig]   cache_sanity owi-aoC---  4.00g                                                                     /dev/sdc1(0)      
  [fs_A_pool]       cache_sanity Cwi---C---  2.00g                             99.07  6.84            0.00             fs_A_pool_cdata(0)
  [fs_A_pool_cdata] cache_sanity Cwi-ao----  2.00g                                                                     /dev/sdd1(0)      
  [fs_A_pool_cmeta] cache_sanity ewi-ao---- 12.00m                                                                     /dev/sdd1(512)    
  fs_snap1          cache_sanity swi-aos---  4.00g             corigin         99.93                                   /dev/sdb1(3)      
  [lvol0_pmspare]   cache_sanity ewi------- 12.00m                                                                     /dev/sdb1(0)      
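
# A rough check of those Data% figures (not part of the original output, just hypothetical
# arithmetic): re-encrypting the whole 4 GiB origin rewrites every block once, so the
# 4100 MiB snapshot COW area ends up almost entirely used.
echo "scale=2; 4096 * 100 / 4100" | bc    # ~99.90, in line with the ~99.93% Data% lvs reports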

# Create an additional COW snapshot and open it for use
[root@hayes-02 ~]# lvcreate  -s /dev/cache_sanity/corigin -c 128 -n fs_snap2 -L 4100.00m
  Logical volume "fs_snap2" created.
[root@hayes-02 ~]# echo foobarglarch | cryptsetup luksOpen /dev/cache_sanity/fs_snap2 luks_fs_snap2 --header /tmp/cache_luks_header.1234567890

# Online re-encrypt the cache origin again
[root@hayes-02 ~]# echo foobarglarch | cryptsetup reencrypt --active-name luks_corigin --header /tmp/cache_luks_header.1234567890
Finished, time 03:39.092, 4096 MiB written, speed  18.7 MiB/s   
# The newest COW snap is also almost full
[root@hayes-02 ~]# lvs -a -o +devices
  LV                VG           Attr       LSize  Pool        Origin          Data%  Meta%  Move Log Cpy%Sync Convert Devices           
  corigin           cache_sanity owi-aoC---  4.00g [fs_A_pool] [corigin_corig] 99.57  6.71            0.00             corigin_corig(0)  
  [corigin_corig]   cache_sanity owi-aoC---  4.00g                                                                     /dev/sdc1(0)      
  [fs_A_pool]       cache_sanity Cwi---C---  2.00g                             99.57  6.71            0.00             fs_A_pool_cdata(0)
  [fs_A_pool_cdata] cache_sanity Cwi-ao----  2.00g                                                                     /dev/sdd1(0)      
  [fs_A_pool_cmeta] cache_sanity ewi-ao---- 12.00m                                                                     /dev/sdd1(512)    
  fs_snap1          cache_sanity swi-aos---  4.00g             corigin         99.93                                   /dev/sdb1(3)      
  fs_snap2          cache_sanity swi-aos---  4.00g             corigin         99.92                                   /dev/sdb1(1028)   
  [lvol0_pmspare]   cache_sanity ewi------- 12.00m                                                                     /dev/sdb1(0)      

# Attempt to decrypt the latest snapshot
[root@hayes-02 ~]#  echo foobarglarch | cryptsetup reencrypt --decrypt --active-name luks_fs_snap2 /dev/cache_sanity/fs_snap2 --header /tmp/cache_luks_header.1234567890
device-mapper: reload ioctl on   failed: Required key not available
Failed to activate overlay device luks_fs_snap2-overlay with actual origin table.
device-mapper: remove ioctl on luks_fs_snap2-overlay  failed: No such device or address
Failed to initalize reencryption device stack.


Jul 25 17:54:25 hayes-02 kernel: device-mapper: crypt: xts(aes) using implementation "xts-aes-aesni"
Jul 25 17:54:25 hayes-02 kernel: device-mapper: table: 253:13: crypt: Error decoding and setting key
Jul 25 17:54:25 hayes-02 kernel: device-mapper: ioctl: error adding target to table


Version-Release number of selected component (if applicable):
kernel-4.18.0-121.el8    BUILT: Tue Jul 23 09:49:25 CDT 2019
lvm2-2.03.05-2.el8    BUILT: Wed Jul 24 08:05:11 CDT 2019
lvm2-libs-2.03.05-2.el8    BUILT: Wed Jul 24 08:05:11 CDT 2019
lvm2-dbusd-2.03.05-2.el8    BUILT: Wed Jul 24 08:07:38 CDT 2019
lvm2-lockd-2.03.05-2.el8    BUILT: Wed Jul 24 08:05:11 CDT 2019
cryptsetup-2.2.0-0.2.el8    BUILT: Mon Jun 17 04:08:11 CDT 2019
cryptsetup-libs-2.2.0-0.2.el8    BUILT: Mon Jun 17 04:08:11 CDT 2019
cryptsetup-reencrypt-2.2.0-0.2.el8    BUILT: Mon Jun 17 04:08:11 CDT 2019


How reproducible:
Every time
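
For convenience, the reproduction steps above collected into one sketch. The VG/LV names, passphrase, and detached-header path are the ones used in this report; it assumes a VG named cache_sanity with enough free space, and (as comment 3 below shows) the cache layer is not actually required, so a plain linear origin is used here.

# Condensed reproducer sketch (commands taken from this report).
lvcreate --wipesignatures y -L 4G -n corigin cache_sanity
echo foobarglarch | cryptsetup reencrypt --encrypt --init-only --type luks2 /dev/cache_sanity/corigin --header /tmp/cache_luks_header.1234567890
echo foobarglarch | cryptsetup luksOpen /dev/cache_sanity/corigin luks_corigin --header /tmp/cache_luks_header.1234567890
lvcreate -s /dev/cache_sanity/corigin -c 64 -n fs_snap1 -L 4100.00m
echo foobarglarch | cryptsetup luksOpen /dev/cache_sanity/fs_snap1 luks_fs_snap1 --header /tmp/cache_luks_header.1234567890
echo foobarglarch | cryptsetup reencrypt --active-name luks_corigin --header /tmp/cache_luks_header.1234567890
lvcreate -s /dev/cache_sanity/corigin -c 128 -n fs_snap2 -L 4100.00m
echo foobarglarch | cryptsetup luksOpen /dev/cache_sanity/fs_snap2 luks_fs_snap2 --header /tmp/cache_luks_header.1234567890
echo foobarglarch | cryptsetup reencrypt --active-name luks_corigin --header /tmp/cache_luks_header.1234567890
# The final step fails with "Failed to activate overlay device luks_fs_snap2-overlay with actual origin table."
echo foobarglarch | cryptsetup reencrypt --decrypt --active-name luks_fs_snap2 /dev/cache_sanity/fs_snap2 --header /tmp/cache_luks_header.1234567890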

Comment 1 Corey Marthaler 2019-07-25 23:25:38 UTC
# Attempting to decrypt an open snapshot volume (with an open origin volume) works fine.
[root@hayes-02 ~]# lvcreate --wipesignatures y  -L 4G -n orig test
  Logical volume "orig" created.
[root@hayes-02 ~]# echo foobarglarch | cryptsetup reencrypt --encrypt --init-only --type luks2 /dev/test/orig --header /tmp/luks_header.19988
[root@hayes-02 ~]# echo foobarglarch | cryptsetup luksOpen /dev/test/orig luks_origin --header /tmp/luks_header.19988
[root@hayes-02 ~]# lvcreate  -s /dev/test/orig -c 64 -n snap1 -L 4100.00m # larger than origin so it won't fill if origin is reencrypted
  Logical volume "snap1" created.
[root@hayes-02 ~]# echo foobarglarch | cryptsetup luksOpen /dev/test/snap1 luks_snap1 --header /tmp/luks_header.19988
[root@hayes-02 ~]# echo foobarglarch | cryptsetup reencrypt --decrypt --active-name luks_snap1 /dev/test/snap1 --header /tmp/luks_header.19988
Finished, time 02:03.714, 4096 MiB written, speed  33.1 MiB/s   






# Attempting to decrypt an open snapshot volume whose origin volume is also open and has been online re-encrypted also works fine.
[root@hayes-02 ~]# lvcreate --wipesignatures y  -L 4G -n orig test
  Logical volume "orig" created.
[root@hayes-02 ~]# echo foobarglarch | cryptsetup reencrypt --encrypt --init-only --type luks2 /dev/test/orig --header /tmp/luks_header.20000
[root@hayes-02 ~]# echo foobarglarch | cryptsetup luksOpen /dev/test/orig luks_origin --header /tmp/luks_header.20000
[root@hayes-02 ~]# lvcreate  -s /dev/test/snap1 -c 64 -n fs_snap1 -L 4100.00m # larger than origin so it won't fill if origin is reencrypted
  Snapshot origin LV snap1 not found in Volume group test.
[root@hayes-02 ~]# lvcreate  -s /dev/test/orig -c 64 -n snap1 -L 4100.00m # larger than origin so it won't fill if origin is reencrypted
  Logical volume "snap1" created.
[root@hayes-02 ~]# echo foobarglarch | cryptsetup luksOpen /dev/test/snap1 luks_snap1 --header /tmp/luks_header.20000
[root@hayes-02 ~]# echo foobarglarch | cryptsetup reencrypt --active-name luks_origin --header /tmp/luks_header.20000
Finished, time 03:26.017, 4096 MiB written, speed  19.9 MiB/s   
[root@hayes-02 ~]# echo foobarglarch | cryptsetup reencrypt --decrypt --active-name luks_snap1 /dev/test/snap1 --header /tmp/luks_header.20000
Finished, time 02:23.606, 4096 MiB written, speed  28.5 MiB/s

Comment 2 Corey Marthaler 2019-07-25 23:39:22 UTC
# Attempting to decrypt an open snapshot volume whose origin volume is cached, open, and has been online re-encrypted also works fine.

[root@hayes-02 ~]# lvcreate --wipesignatures y  -L 4G -n corigin cache_sanity /dev/sdc1 
  Logical volume "corigin" created.
[root@hayes-02 ~]# lvcreate  -L 2G -n fs_A_pool cache_sanity /dev/sdd1
  Logical volume "fs_A_pool" created.
[root@hayes-02 ~]# lvcreate  -L 12M -n fs_A_pool_meta cache_sanity /dev/sdd1
  Logical volume "fs_A_pool_meta" created.
[root@hayes-02 ~]# lvconvert --yes --type cache-pool --cachepolicy mq --cachemode writethrough -c 32 --poolmetadata cache_sanity/fs_A_pool_meta cache_sanity/fs_A_pool
  WARNING: Converting cache_sanity/fs_A_pool and cache_sanity/fs_A_pool_meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted cache_sanity/fs_A_pool and cache_sanity/fs_A_pool_meta to cache pool.
[root@hayes-02 ~]# lvconvert --yes --type cache --cachemetadataformat 2 --cachepool cache_sanity/fs_A_pool cache_sanity/corigin
  Logical volume cache_sanity/corigin is now cached.
[root@hayes-02 ~]# lvs -a -o +devices
  LV                VG           Attr       LSize  Pool        Origin          Data%  Meta%  Move Log Cpy%Sync Convert Devices           
  corigin           cache_sanity Cwi-a-C---  4.00g [fs_A_pool] [corigin_corig] 0.00   4.62            0.00             corigin_corig(0)  
  [corigin_corig]   cache_sanity owi-aoC---  4.00g                                                                     /dev/sdc1(0)      
  [fs_A_pool]       cache_sanity Cwi---C---  2.00g                             0.00   4.62            0.00             fs_A_pool_cdata(0)
  [fs_A_pool_cdata] cache_sanity Cwi-ao----  2.00g                                                                     /dev/sdd1(0)      
  [fs_A_pool_cmeta] cache_sanity ewi-ao---- 12.00m                                                                     /dev/sdd1(512)    
  [lvol0_pmspare]   cache_sanity ewi------- 12.00m                                                                     /dev/sdb1(0)      
[root@hayes-02 ~]# echo foobarglarch | cryptsetup reencrypt --encrypt --init-only --type luks2 /dev/cache_sanity/corigin --header /tmp/cache_luks_header.0987654321
[root@hayes-02 ~]# echo foobarglarch | cryptsetup luksOpen /dev/cache_sanity/corigin luks_corigin --header /tmp/cache_luks_header.0987654321
[root@hayes-02 ~]# lvcreate  -s /dev/cache_sanity/corigin -c 64 -n fs_snap1 -L 4100.00m
  Logical volume "fs_snap1" created.
[root@hayes-02 ~]# echo foobarglarch | cryptsetup luksOpen /dev/cache_sanity/fs_snap1 luks_fs_snap1 --header /tmp/cache_luks_header.0987654321
[root@hayes-02 ~]# echo foobarglarch | cryptsetup reencrypt --active-name luks_corigin --header /tmp/cache_luks_header.0987654321
Finished, time 03:15.839, 4096 MiB written, speed  20.9 MiB/s   
[root@hayes-02 ~]# echo foobarglarch | cryptsetup reencrypt --decrypt --active-name luks_fs_snap1 /dev/cache_sanity/fs_snap1 --header /tmp/cache_luks_header.0987654321
Finished, time 02:26.792, 4096 MiB written, speed  27.9 MiB/s

Comment 3 Corey Marthaler 2019-07-26 22:51:19 UTC
A cached origin is not required. 

[root@hayes-02 ~]# lvcreate --wipesignatures y  -L 4G -n corigin cache_sanity
  Logical volume "corigin" created.
[root@hayes-02 ~]# echo foobarglarch | cryptsetup reencrypt --encrypt --init-only --type luks2 /dev/cache_sanity/corigin --header /tmp/cache_luks_header.1234567890
[root@hayes-02 ~]# echo foobarglarch | cryptsetup luksOpen /dev/cache_sanity/corigin luks_corigin --header /tmp/cache_luks_header.1234567890
[root@hayes-02 ~]# lvcreate  -s /dev/cache_sanity/corigin -c 64 -n fs_snap1 -L 4100.00m
  Logical volume "fs_snap1" created.
[root@hayes-02 ~]# echo foobarglarch | cryptsetup luksOpen /dev/cache_sanity/fs_snap1 luks_fs_snap1 --header /tmp/cache_luks_header.1234567890
[root@hayes-02 ~]# echo foobarglarch | cryptsetup reencrypt --active-name luks_corigin --header /tmp/cache_luks_header.1234567890
Finished, time 03:27.876, 4096 MiB written, speed  19.7 MiB/s   
[root@hayes-02 ~]# lvs -a -o +devices
  LV       VG           Attr       LSize Pool Origin  Data%  Meta%  Move Log Cpy%Sync Convert Devices        
  corigin  cache_sanity owi-aos--- 4.00g                                                      /dev/sde1(0)   
  fs_snap1 cache_sanity swi-aos--- 4.00g      corigin 99.93                                   /dev/sde1(1024)
[root@hayes-02 ~]# lvcreate  -s /dev/cache_sanity/corigin -c 128 -n fs_snap2 -L 4100.00m
  Logical volume "fs_snap2" created.
[root@hayes-02 ~]# echo foobarglarch | cryptsetup luksOpen /dev/cache_sanity/fs_snap2 luks_fs_snap2 --header /tmp/cache_luks_header.1234567890
[root@hayes-02 ~]# echo foobarglarch | cryptsetup reencrypt --active-name luks_corigin --header /tmp/cache_luks_header.1234567890
Finished, time 03:33.398, 4096 MiB written, speed  19.2 MiB/s   
[root@hayes-02 ~]# lvs -a -o +devices
  LV       VG           Attr       LSize Pool Origin  Data%  Meta%  Move Log Cpy%Sync Convert Devices        
  corigin  cache_sanity owi-aos--- 4.00g                                                      /dev/sde1(0)   
  fs_snap1 cache_sanity swi-aos--- 4.00g      corigin 99.93                                   /dev/sde1(1024)
  fs_snap2 cache_sanity swi-aos--- 4.00g      corigin 99.92                                   /dev/sde1(2049)
[root@hayes-02 ~]# echo foobarglarch | cryptsetup reencrypt --decrypt --active-name luks_fs_snap2 /dev/cache_sanity/fs_snap2 --header /tmp/cache_luks_header.1234567890
device-mapper: reload ioctl on   failed: Required key not available
Failed to activate overlay device luks_fs_snap2-overlay with actual origin table.
device-mapper: remove ioctl on luks_fs_snap2-overlay  failed: No such device or address
Failed to initalize reencryption device stack.

Comment 4 Ondrej Kozina 2019-07-29 14:14:31 UTC
(In reply to Corey Marthaler from comment #3)

> [root@hayes-02 ~]# echo foobarglarch | cryptsetup luksOpen /dev/cache_sanity/fs_snap2 luks_fs_snap2 --header /tmp/cache_luks_header.1234567890
> [root@hayes-02 ~]# echo foobarglarch | cryptsetup reencrypt --active-name luks_corigin --header /tmp/cache_luks_header.1234567890
> Finished, time 03:33.398, 4096 MiB written, speed  19.2 MiB/s   
> [root@hayes-02 ~]# lvs -a -o +devices
>   LV       VG           Attr       LSize Pool Origin  Data%  Meta%  Move Log Cpy%Sync Convert Devices        
>   corigin  cache_sanity owi-aos--- 4.00g                                                      /dev/sde1(0)   
>   fs_snap1 cache_sanity swi-aos--- 4.00g      corigin 99.93                                   /dev/sde1(1024)
>   fs_snap2 cache_sanity swi-aos--- 4.00g      corigin 99.92                                   /dev/sde1(2049)
> [root@hayes-02 ~]# echo foobarglarch | cryptsetup reencrypt --decrypt --active-name luks_fs_snap2 /dev/cache_sanity/fs_snap2 --header /tmp/cache_luks_header.1234567890
> device-mapper: reload ioctl on   failed: Required key not available
> Failed to activate overlay device luks_fs_snap2-overlay with actual origin table.
> device-mapper: remove ioctl on luks_fs_snap2-overlay  failed: No such device or address
> Failed to initalize reencryption device stack.

This is expected to fail, but for a different reason. You have actually found a gap in the pre-reencryption initialization checks. I'll explain step by step:

> [root@hayes-02 ~]# echo foobarglarch | cryptsetup luksOpen /dev/cache_sanity/fs_snap2 luks_fs_snap2 --header /tmp/cache_luks_header.1234567890

1) This opens luks_fs_snap2 using the detached header; at this point the volume key in that header is 'X'.

> [root@hayes-02 ~]# echo foobarglarch | cryptsetup reencrypt --active-name luks_corigin --header /tmp/cache_luks_header.1234567890

2) This re-encrypts (changes the volume key of) device luks_corigin. The volume key in your detached header changes from 'X' to 'Y'.

> [root@hayes-02 ~]# echo foobarglarch | cryptsetup reencrypt --decrypt --active-name luks_fs_snap2 /dev/cache_sanity/fs_snap2 --header /tmp/cache_luks_header.1234567890

3) Now it tries to decrypt device luks_fs_snap2, which was activated using the header back when its volume key was 'X' (step 1). Unfortunately, thanks to step 2), the key in the header is now already 'Y'.
This must never be allowed to pass: it would have destroyed the data in luks_fs_snap2. cryptsetup should have reported that the active device luks_fs_snap2 apparently uses a different key (a different dm table in general) than expected.

Corey, I'll fix that on our side; it should have failed much sooner than this late (almost too late :)). On the other hand, this test is probably doing something unexpected. Basically, when you create a volume snapshot, could you also keep a separate copy of the header from the same point in time?
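
A sketch of that last suggestion (not from the original report; the copied-header path is hypothetical, the other commands mirror the ones used above): keep a copy of the detached header from the moment the snapshot is taken, and use that copy for any later operation on the snapshot, so the snapshot's data and its header state stay in sync even after the origin is re-encrypted again.

# Hypothetical workaround sketch: pair each snapshot with a header copy taken at the same time.
lvcreate -s /dev/cache_sanity/corigin -c 128 -n fs_snap2 -L 4100.00m
cp /tmp/cache_luks_header.1234567890 /tmp/cache_luks_header.fs_snap2    # snapshot-time header copy (volume key 'X')
echo foobarglarch | cryptsetup luksOpen /dev/cache_sanity/fs_snap2 luks_fs_snap2 --header /tmp/cache_luks_header.fs_snap2
# The origin can still be re-encrypted with the main header (its volume key becomes 'Y') ...
echo foobarglarch | cryptsetup reencrypt --active-name luks_corigin --header /tmp/cache_luks_header.1234567890
# ... while the snapshot is later decrypted with the header copy that matches its content (key 'X').
echo foobarglarch | cryptsetup reencrypt --decrypt --active-name luks_fs_snap2 /dev/cache_sanity/fs_snap2 --header /tmp/cache_luks_header.fs_snap2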

Comment 5 Ondrej Kozina 2019-08-01 10:16:20 UTC
Fixed upstream with commits:

https://gitlab.com/cryptsetup/cryptsetup/commit/98e0c8d6091c20bd25d3910e77e2ded238ebfd10
https://gitlab.com/cryptsetup/cryptsetup/commit/3bea349f9ee6083d48c001c03f4ce8fe44d27ea0

The expected error message for step 3) above is:

"Mismatching parameters on device luks_fs_snap2."

Comment 7 Corey Marthaler 2019-08-19 22:55:49 UTC
Fix verified in the latest rpms.

cryptsetup-2.2.0-1.el8    BUILT: Fri Aug 16 01:22:41 CDT 2019
cryptsetup-libs-2.2.0-1.el8    BUILT: Fri Aug 16 01:22:41 CDT 2019
cryptsetup-reencrypt-2.2.0-1.el8    BUILT: Fri Aug 16 01:22:41 CDT 2019


[root@hayes-01 ~]# lvcreate --wipesignatures y  -L 4G -n corigin cache_sanity
  Logical volume "corigin" created.
[root@hayes-01 ~]# echo foobarglarch | cryptsetup reencrypt --encrypt --init-only --type luks2 /dev/cache_sanity/corigin --header /tmp/cache_luks_header.1234567890
[root@hayes-01 ~]# echo foobarglarch | cryptsetup luksOpen /dev/cache_sanity/corigin luks_corigin --header /tmp/cache_luks_header.1234567890
[root@hayes-01 ~]# lvcreate  -s /dev/cache_sanity/corigin -c 64 -n fs_snap1 -L 4100.00m
  Logical volume "fs_snap1" created.
[root@hayes-01 ~]# echo foobarglarch | cryptsetup luksOpen /dev/cache_sanity/fs_snap1 luks_fs_snap1 --header /tmp/cache_luks_header.1234567890
[root@hayes-01 ~]# echo foobarglarch | cryptsetup reencrypt --active-name luks_corigin --header /tmp/cache_luks_header.1234567890
Finished, time 03:13.153, 4096 MiB written, speed  21.2 MiB/s
[root@hayes-01 ~]# echo foobarglarch | cryptsetup reencrypt --decrypt --active-name luks_fs_snap1 /dev/cache_sanity/fs_snap1 --header /tmp/cache_luks_header.1234567890
Mismatching parameters on device luks_fs_snap1.
Failed to initialize LUKS2 reencryption in metadata.
[root@hayes-01 ~]# echo $?
1

Comment 8 Ondrej Kozina 2019-09-02 09:34:38 UTC
The bug was introduced and fixed during the 8.1 development phase; no doc text is needed.

Comment 10 errata-xmlrpc 2019-11-05 22:17:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3569

