Bug 1961291 - Enable forward error correction (FEC) feature for dm-verity
Summary: Enable forward error correction (FEC) feature for dm-verity
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: cryptsetup
Version: 9.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: beta
Target Release: 9.0 Beta
Assignee: Ondrej Kozina
QA Contact: guazhang@redhat.com
URL:
Whiteboard:
Depends On: 1990465
Blocks:
 
Reported: 2021-05-17 16:15 UTC by Ondrej Kozina
Modified: 2021-12-07 21:38 UTC
CC List: 5 users

Fixed In Version: cryptsetup-2.3.6-1.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-12-07 21:35:16 UTC
Type: Bug
Target Upstream Version:
Embargoed:



Comment 2 guazhang@redhat.com 2021-05-18 00:13:50 UTC
Hi

Could you please share some test steps here, or explain how to test it?

Comment 7 guazhang@redhat.com 2021-06-22 01:46:09 UTC
Hi

veritysetup format   --data-blocks '65536'  --hash-offset '268435456'  --data-block-size '4096'  --fec-device '/dev/mapper/vg01-lv01'  --fec-offset '272629760'   /dev/mapper/vg01-lv01 /dev/mapper/vg01-lv01
VERITY header information for /dev/mapper/vg01-lv01
UUID:            	9565647c-e119-4048-902b-b3550ce58886
Hash type:       	1
Data blocks:     	65536
Data block size: 	4096
Hash block size: 	4096
Hash algorithm:  	sha256
Salt:            	5e33c22366f941a6503f91cd0879368995a6b78f9093e34c2093745c70ba08f2
Root hash:      	6e928ee53db049d6c4ea324447518fe4e1c6fb42978a9e70be53adea430cb9df

[root@storageqe-69 ~]# dd if=/dev/mapper/vg01-lv01  bs=4096 count=65536 | sha256sum 
65536+0 records in
65536+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 0.565716 s, 475 MB/s
d67df5ac6b6ad138883fc74b2bf3527de27137d821c514136caf104b146df6b5  -
[root@storageqe-69 ~]# 
[root@storageqe-69 ~]# dd if=/dev/mapper/vg01-lv01  skip=65536 bs=4096 count=1024 | sha256sum
1024+0 records in
1024+0 records out
4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00953271 s, 440 MB/s
739ffd1fa2e61ef0f23b789c7e37ec00b1058af61b2383c755cd2d3d44a4e358  -
[root@storageqe-69 ~]# 
[root@storageqe-69 ~]# dd if=/dev/mapper/verity_name | sha256sum
524288+0 records in
524288+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 0.74616 s, 360 MB/s
a6d72ac7690f53be6ae46ba88506bd97302a093f7108472bd9efc3cefda06484  -
[root@storageqe-69 ~]# 
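
For reference, the offsets above place the data, hash, and FEC areas back to back on the same LV. A quick sanity check of the arithmetic (a sketch added for clarity, not part of the original test run):

# data area:  65536 blocks * 4096 bytes = 268435456 bytes (256 MiB)
# hash area:  starts at --hash-offset 268435456, i.e. immediately after the data area
# FEC area:   starts at --fec-offset 272629760 = 268435456 + 4194304, i.e. 4 MiB is reserved for the hash area
echo $((65536 * 4096))             # 268435456
echo $((272629760 - 268435456))    # 4194304 (4 MiB), exactly what the second dd reads (skip=65536 bs=4096 count=1024)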



From the result, data_digest != verity_digest, and I cannot find a log entry like "device-mapper: verity-fec: 7:0: FEC 630784: corrected 1526 errors" in /var/log/messages.
Please have a look at the test steps.
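
A simple way to look for that message in the kernel ring buffer (a suggested check, assuming journald captures kernel messages on this host; not one of the original test commands):

dmesg | grep -i 'verity-fec'
# or, via journald:
journalctl -k | grep -i 'verity-fec'
# note: as comment 8 explains, dm-verity only logs FEC corrections once corrupted blocks are actually read through the mapping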




INFO: [2021-06-21 21:26:46] Running: 'veritysetup format   --data-blocks '65536'  --hash-offset '268435456'  --data-block-size '4096'  --fec-device '/dev/mapper/vg01-lv01'  --fec-offset '272629760'   /dev/mapper/vg01-lv01 /dev/mapper/vg01-lv01'...
VERITY header information for /dev/mapper/vg01-lv01
UUID:            	98878ced-f09b-41bc-8328-a80546764750
Hash type:       	1
Data blocks:     	65536
Data block size: 	4096
Hash block size: 	4096
Hash algorithm:  	sha256
Salt:            	e78bacfefdd810ae46721e6a45c184744752cb001c89dbfc89ede20bf8273f3b
Root hash:      	dbb65e8aeb29cf440ff25bf7b0b85f132830f5c171f6ea96a7ddd2e06182de68
{'UUID': '98878ced-f09b-41bc-8328-a80546764750', 'Hash_type': '1', 'Data_blocks': '65536', 'Data_block_size': '4096', 'Hash_block_size': '4096', 'Hash_algorithm': 'sha256', 'Salt': 'e78bacfefdd810ae46721e6a45c184744752cb001c89dbfc89ede20bf8273f3b', 'Root_hash': 'dbb65e8aeb29cf440ff25bf7b0b85f132830f5c171f6ea96a7ddd2e06182de68', 'data_disk': '/dev/mapper/vg01-lv01', 'hash_disk': '/dev/mapper/vg01-lv01', 'spce': (), 'data_blocks': 65536, 'hash_offset': '268435456', 'data_block_size': 4096, 'fec_device': '/dev/mapper/vg01-lv01', 'fec_offset': 272629760}
data_digest is a6d72ac7690f53be6ae46ba88506bd97302a093f7108472bd9efc3cefda06484
hash_digest is f470efbc06ab6e211799d6099911220e06e486e6815d2268f0a40ca66764f0fe
INFO: [2021-06-21 21:26:49] Running: 'veritysetup open   --data-blocks '65536'  --hash-offset '268435456'  --data-block-size '4096'  --fec-device '/dev/mapper/vg01-lv01'  --fec-offset '272629760'   /dev/mapper/vg01-lv01 verity_name /dev/mapper/vg01-lv01 dbb65e8aeb29cf440ff25bf7b0b85f132830f5c171f6ea96a7ddd2e06182de68'...

INFO: [2021-06-21 21:26:49] Running: 'lsblk'...
NAME            MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
loop0             7:0    0   15G  0 loop  
loop1             7:1    0   15G  0 loop  
loop2             7:2    0   15G  0 loop  
loop3             7:3    0   15G  0 loop  
loop4             7:4    0   15G  0 loop  
vg01-lv01     253:0    0   10G  0 lvm   
  verity_name 253:1    0  256M  1 crypt 
loop5             7:5    0   15G  0 loop  
sda               8:0    0  1.8T  0 disk  
sda1            8:1    0    1G  0 part  /boot
sda2            8:2    0  7.8G  0 part  [SWAP]
sda3            8:3    0  1.8T  0 part  /
sdb               8:16   0  1.8T  0 disk  
sdc               8:32   0  1.8T  0 disk  
sr0              11:0    1 1024M  0 rom
verity_digest is e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
FAIL: data_digest != verity_digest
INFO: [2021-06-21 21:26:49] Running: 'dd if=/dev/urandom of=/dev/mapper/vg01-lv01 seek=1233 count=3 bs=512 conv=notrunc'...
3+0 records in
3+0 records out
1536 bytes (1.5 kB, 1.5 KiB) copied, 0.00014392 s, 10.7 MB/s
INFO: corrupted_data_digest is fed7d608d0964f6d7335e8c2822befbbf270c710c7e3b62ab10a570828b33265
INFO: the corrupted_data_digest != data_digest
INFO: [2021-06-21 21:26:50] Running: 'veritysetup verify   --data-blocks '65536'  --hash-offset '268435456'  --data-block-size '4096'  --fec-device '/dev/mapper/vg01-lv01'  --fec-offset '272629760'   /dev/mapper/vg01-lv01 /dev/mapper/vg01-lv01 dbb65e8aeb29cf440ff25bf7b0b85f132830f5c171f6ea96a7ddd2e06182de68'...
Verification failed at position 630784.
Verification of data area failed.
Found 1531 repairable errors with FEC device.
INFO: [2021-06-21 21:26:52] Running: 'dd if=/dev/zero of=/dev/mapper/vg01-lv01 bs=512 count=10'...
10+0 records in
10+0 records out
5120 bytes (5.1 kB, 5.0 KiB) copied, 4.7528e-05 s, 108 MB/s
INFO: [2021-06-21 21:26:52] Running: 'veritysetup verify   --data-blocks '65536'  --hash-offset '268435456'  --data-block-size '4096'  --fec-device '/dev/mapper/vg01-lv01'  --fec-offset '272629760'   /dev/mapper/vg01-lv01 /dev/mapper/vg01-lv01 dbb65e8aeb29cf440ff25bf7b0b85f132830f5c171f6ea96a7ddd2e06182de68'...
Verification failed at position 630784.
Verification of data area failed.
Found 1531 repairable errors with FEC device.
INFO: [2021-06-21 21:26:53] Running: 'dd if=/dev/zero of=/dev/mapper/vg01-lv01 bs=512 count=1'...
1+0 records in
1+0 records out
512 bytes copied, 3.4755e-05 s, 14.7 MB/s
INFO: [2021-06-21 21:26:53] Running: 'veritysetup verify   --data-blocks '65536'  --hash-offset '268435456'  --data-block-size '4096'  --fec-device '/dev/mapper/vg01-lv01'  --fec-offset '272629760'   /dev/mapper/vg01-lv01 /dev/mapper/vg01-lv01 dbb65e8aeb29cf440ff25bf7b0b85f132830f5c171f6ea96a7ddd2e06182de68'...
Verification failed at position 630784.
Verification of data area failed.
Found 1531 repairable errors with FEC device.
INFO: [2021-06-21 21:26:55] Running: 'veritysetup status   verity_name '...
/dev/mapper/verity_name is active.
  type:        VERITY
  status:      verified
  hash type:   1
  data block:  4096
  hash block:  4096
  hash name:   sha256
  salt:        e78bacfefdd810ae46721e6a45c184744752cb001c89dbfc89ede20bf8273f3b
  data device: /dev/mapper/vg01-lv01
  size:        524288 sectors
  mode:        readonly
  hash device: /dev/mapper/vg01-lv01
  hash offset: 524296 sectors
  FEC device:  /dev/mapper/vg01-lv01
  FEC offset:  532480 sectors
  FEC roots:   2
  root hash:   dbb65e8aeb29cf440ff25bf7b0b85f132830f5c171f6ea96a7ddd2e06182de68
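
The sector values in the status output line up with the byte offsets passed to veritysetup (assuming 512-byte sectors; the extra 8 sectors on the hash offset presumably account for the 4 KiB verity superblock written at --hash-offset):

echo $((65536 * 4096 / 512))    # 524288 -> "size: 524288 sectors"
echo $((272629760 / 512))       # 532480 -> "FEC offset: 532480 sectors"
echo $((268435456 / 512))       # 524288 -> hash area start; status reports 524296 = 524288 + 8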

Comment 8 Ondrej Kozina 2021-06-22 10:19:32 UTC
(In reply to guazhang from comment #7)

Not sure what went wrong with the digests (perhaps try the example script below), but you will definitely not see any errors/corrections from FEC in the kernel log, because there is no corruption in the data/hash area yet.
First corrupt the data area or the hash area, then read the dm-verity device, and you should see notifications about FEC being triggered.

Comment 9 Ondrej Kozina 2021-06-22 10:20:29 UTC
I've added the 'direct' flag and run this script in a loop. It worked as expected for me.


#!/bin/bash

set -xe

# format device
veritysetup format /dev/loop0 /dev/loop0 --fec-device /dev/loop0 --data-block-size 4096 --data-blocks 65536 --hash-offset 268435456 --fec-offset 272629760 | tee /tmp/format
ROOT=$(grep -e "Root hash:" /tmp/format | cut -f 2)
echo "root: $ROOT"

dd if=/dev/loop0 bs=4096 count=65536 iflag=direct | sha256sum | cut -d " " -f 1 | tee /tmp/data_digest
dd if=/dev/loop0 bs=4096 skip=65536 count=1024 iflag=direct | sha256sum | cut -d " " -f 1 | tee /tmp/hash_digest

# open dm-verity device
veritysetup open /dev/loop0 dmv1 /dev/loop0 $ROOT --fec-device /dev/loop0 --hash-offset 268435456 --fec-offset 272629760

dd if=/dev/mapper/dmv1 bs=4096 iflag=direct | sha256sum | cut -d " " -f 1 | tee /tmp/verity_digest

# must match
diff /tmp/data_digest /tmp/verity_digest

# corrupt data area
dd if=/dev/urandom of=/dev/loop0 bs=512 seek=1233 count=3 oflag=direct
dd if=/dev/loop0 bs=4096 count=65536 iflag=direct | sha256sum | cut -d " " -f 1 | tee /tmp/corrupted_data_digest

# must not match
! diff /tmp/data_digest /tmp/corrupted_data_digest

# read dm-verity
dd if=/dev/mapper/dmv1 bs=4096 iflag=direct | sha256sum | cut -d " " -f 1 | tee /tmp/verity_new_digest

# must match
diff /tmp/verity_new_digest /tmp/data_digest

# corrupt hash area
dd if=/dev/urandom of=/dev/loop0 bs=512 seek=524301 count=7 oflag=direct
dd if=/dev/loop0 bs=4096 skip=65536 count=1024 iflag=direct | sha256sum | cut -d " " -f 1 | tee /tmp/corrupted_hash_digest

# must not match
! diff /tmp/hash_digest /tmp/corrupted_hash_digest

# read dm-verity
dd if=/dev/mapper/dmv1 bs=4096 iflag=direct | sha256sum | cut -d " " -f 1 | tee /tmp/verity_new_digest

# must match
diff /tmp/verity_new_digest /tmp/data_digest

veritysetup close dmv1
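
The script assumes /dev/loop0 already exists and is backed by an image large enough for the 256 MiB data area plus the hash and FEC areas (the FEC data alone starts at the 260 MiB mark). A minimal setup/teardown around it, with a hypothetical image path:

truncate -s 300M /var/tmp/verity-fec.img     # hypothetical path; anything comfortably above ~270 MiB is enough
losetup /dev/loop0 /var/tmp/verity-fec.img
# ... run the script above ...
veritysetup close dmv1 2>/dev/null || true   # in case 'set -e' aborted the script with dmv1 still open
losetup -d /dev/loop0

After the reads over the corrupted areas, the "device-mapper: verity-fec: ... corrected ... errors" messages mentioned in comment 7 should show up in dmesg.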

Comment 10 guazhang@redhat.com 2021-06-23 05:34:42 UTC
Thanks for the detailed testing.

The test passes with the script, so I'm moving this to VERIFIED.

