Bug 1881131 - dm-cache logic in writethrough mode
Summary: dm-cache logic in writethrough mode
Keywords:
Status: NEW
Alias: None
Product: LVM and device-mapper
Classification: Community
Component: lvm2
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: LVM Team
QA Contact: cluster-qe
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-09-21 15:15 UTC by Zdenek Kabelac
Modified: 2023-08-10 15:41 UTC
CC: 8 users

Fixed In Version:
Doc Type: ---
Doc Text:
Clone Of:
Environment:
Last Closed:
Embargoed:
pm-rhel: lvm-technical-solution?
pm-rhel: lvm-test-coverage?



Description Zdenek Kabelac 2020-09-21 15:15:38 UTC
From bug 1881056 it looks like we can end up with a 'dirty' writethrough cache: if there is a failure on the origin disk, or even on the 'cache data' disk, we can get into trouble 'flushing/clearing' such a cache.

So we need to decide what happens when there is a 'write' error on the origin.

From the lvm2 POV it would be best if a 'writethrough' cache could never get dirty (as that is the 'naive' understanding of this logic).

ATM it appears that even if a 'write' fails on the origin, some data may still get stored in the cache (dirtying it), and then we may endlessly try flushing it to the origin.

Comment 1 Zdenek Kabelac 2020-09-21 15:19:15 UTC
A side comment to the description: when a cached block 'from' the origin is held, and the user replaces the 'drive' with the 'dd_rescue' tool, there can be a difference between the origin and the cached content.

So my idea here would be to drop a cached chunk if it holds a 'write-erroring' area, so a read of
such a block will also give the user a read error (matching the 'writethrough' experience).
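
A hedged sketch of how the 'write-erroring' area could be simulated with the dm 'error' target, to test the proposed behaviour (offsets and the device name are made up; stacking the writethrough cache on top of this device is omitted for brevity):

# wrap the origin so one 8-sector region returns I/O errors
dmsetup create bad-origin <<'EOF'
0       1000000 linear /dev/slow 0
1000000       8 error
1000008 1000000 linear /dev/slow 1000008
EOF

# a write into the erroring region fails on the 'origin'
dd if=/dev/zero of=/dev/mapper/bad-origin bs=512 seek=1000000 count=8 oflag=direct

# with the proposed behaviour, the cached chunk covering this region would be
# dropped, so a read returns an error too, instead of stale cached content
dd if=/dev/mapper/bad-origin of=/dev/null bs=512 skip=1000000 count=8 iflag=direct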

