
Bug 1630192

Summary: Possible infinite hang during rebuild if physical volume greater than 16T. [rhel-7.5.z]
Product: Red Hat Enterprise Linux 7
Reporter: Oneata Mircea Teodor <toneata>
Component: kmod-kvdo
Assignee: vdo-internal <vdo-internal>
Status: CLOSED ERRATA
QA Contact: Jakub Krysl <jkrysl>
Severity: unspecified
Docs Contact: Marie Hornickova <mdolezel>
Priority: high
Version: 7.7
CC: awalsh, bubrown, dkeefe, jkrysl, jmagrini, jpittman, knappch, lmiksik, msakai, rhandlin, ryan.p.norwood, sweettea, vdo-qe
Target Milestone: rc
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: 6.1.0.187
Doc Type: Bug Fix
Doc Text:
If a block map page of a VDO device was located beyond 16 TB on the physical volume, its 64-bit block number was truncated to 32 bits. Consequently, both normal recovery and read-only rebuild became unresponsive. With this update, the underlying source code has been fixed, and physical volumes larger than 16 TB no longer hang during recovery or read-only rebuild.
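The 16 TB threshold follows from the arithmetic of 32-bit block numbers: VDO addresses storage in 4 KiB blocks, and 2**32 such blocks span exactly 16 TiB. A minimal Python sketch (illustrative only, not the actual kvdo source; the helper name and field width are assumptions) of the truncation described above:

```python
VDO_BLOCK_SIZE = 4096  # VDO addresses storage in 4 KiB blocks

# A 32-bit block number can address at most 2**32 blocks,
# i.e. exactly 16 TiB -- hence the 16T threshold in this bug.
MAX_ADDRESSABLE_BYTES = (2**32) * VDO_BLOCK_SIZE
print(MAX_ADDRESSABLE_BYTES // 2**40, "TiB")  # -> 16 TiB

def truncate_to_32_bits(block_number: int) -> int:
    """Mimic storing a 64-bit block number in a 32-bit field (hypothetical)."""
    return block_number & 0xFFFFFFFF

# A block map page just past the 16 TiB boundary wraps around to a low
# block number, so recovery/rebuild reads the wrong page:
page = 2**32 + 5
print(truncate_to_32_bits(page))  # -> 5
```

The actual fix shipped in kmod-kvdo 6.1.0.187 (the Fixed In Version above).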
Story Points: ---
Clone Of: 1628318
Environment:
Last Closed: 2018-11-06 16:16:04 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1628318    
Bug Blocks:    

Description Oneata Mircea Teodor 2018-09-18 07:40:56 UTC
This bug has been copied from bug #1628318 and has been proposed to be backported to 7.5 z-stream (EUS).

Comment 4 Jakub Krysl 2018-10-16 15:48:10 UTC
Tested with:
kmod-kvdo-6.1.0.187-17.el7_5.x86_64
vdo-6.1.0.185-18.x86_64
3.10.0-862.14.4.el7.x86_64

Crashed the server; these are the /var/log/messages entries from running 'vdo start --name vdoX':
[ 4380.689234] kvdo0:dmsetup: starting device 'vdo0' device instantiation 0 write policy auto
[ 4380.698524] kvdo0:dmsetup: underlying device, REQ_FLUSH: not supported, REQ_FUA: not supported
[ 4380.708238] kvdo0:dmsetup: Using mode sync automatically
[ ...] kvdo0:dmsetup: zones: 1 logical, 1 physical, 1 hash; base threads: 5
[ 4382.691082] kvdo0:logQ0: Finished reading recovery journal
[ ...] kvdo0:logQ0: Highest-numbered recovery journal block has sequence number 9915, and the highest-numbered usable block is 9915
[ 4382.750573] kvdo0:logQ0: Replaying entries into slab journals
[ ...] kvdo0:logQ0: Replayed 748985 journal entries into slab journals
[ 4382.880865] kvdo0:logQ0: Recreating missing journal entries 
[ 4382.887125] kvdo0:journalQ: Synthesized 1 missing journal entries 
[ 4382.899001] kvdo0:journalQ: Saving recovery progress 
[ 4383.103462] kvdo0:logQ0: Replaying 1542693 recovery entries into block map 
[ 4385.498639] kvdo0:logQ0: Flushing block map changes 
[ 4386.542008] kvdo0:journalQ: Entering recovery mode 
[ 4386.602942] kvdo0:dmsetup: uds: kvdo0:dedupeQ: loading or rebuilding index: dev=/dev/mapper/360a980003246694a412b456733445170 offset=4096 size=2781704192 
uds: kvdo0:dedupeQ: Using 16 indexing zones for concurrency. 
 
[ 4386.624409] Setting UDS index target state to online 
[ 4386.630185] kvdo0:dmsetup: device 'vdo0' started 
[ 4386.640402] kvdo0:dmsetup: resuming device 'vdo0' 
[ 4386.645733] kvdo0:dmsetup: device 'vdo0' resumed 
[ 4386.729186] kvdo0:packerQ: compression is enabled 
[ 4386.856548] kvdo0:journalQ: Exiting recovery mode 
[ 4387.083561] uds: kvdo0:dedupeQ: index_0: index could not be loaded: UDS Error: Index not saved cleanly (1069) 
[ 4387.800828] uds: kvdo0:dedupeQ: index_0: Replaying volume from chapter 0 through chapter 23
[ 4389.994849] uds: kvdo0:dedupeQ: replay changed index page map update from 0 to 22 
[ 4473.484307] kvdo1:dmsetup: starting device 'vdo1' device instantiation 1 write policy auto 
[ ...] kvdo1:dmsetup: zones: 1 logical, 1 physical, 1 hash; base threads: 5
[ ...] kvdo1: ...was dirty, rebuilding reference counts
[ ...] kvdo1:logQ0: Finished reading recovery journal
[ ...] kvdo1:logQ0: Highest-numbered recovery journal block has sequence number ..., and the highest-numbered usable block is 9566
[ ...] kvdo1:logQ0: Replaying entries into slab journals
[ ...] kvdo1:logQ0: Replayed 3757 journal entries into slab journals
[ ...] kvdo1:logQ0: Recreating missing journal entries
[ 4476.657565] kvdo1:journalQ: Synthesized 1172 missing journal entries 
[ 4476.672507] kvdo1:journalQ: Saving recovery progress 
[ ...] kvdo1:logQ0: Replaying 1494606 recovery entries into block map
[ ...] uds: kvdo1:dedupeQ: loading or rebuilding index: dev=/dev/mapper/vdo0 offset=4096 size=17366034788352
 
[ 4482.503273] Setting UDS index target state to online 
[ 4482.509143] kvdo1:dmsetup: device 'vdo1' started 
[ ...] kvdo1:dmsetup: device 'vdo1' resumed
[ 4482.649668] uds: kvdo1:dedupeQ: Using 16 indexing zones for concurrency. 
[ 4482.850845] kvdo1:packerQ: compression is enabled 
[-- MARK -- Tue Oct 16 14:15:00 2018] 
[-- MARK -- Tue Oct 16 14:20:00 2018] 
[ 4905.471356] uds: kvdo1:dedupeQ: index_0: index could not be loaded: UDS Error: Index not saved cleanly (1069)

There is no infinite hang; the dedupe index of the top VDO device just took a bit longer to load. But since the index does not have to be available, the command exited right away, and the VDO device was usable, setting to VERIFIED.

Comment 12 errata-xmlrpc 2018-11-06 16:16:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3509