Bug 1881131

Summary: dm-cache logic in writethrough mode
Product: [Community] LVM and device-mapper
Component: lvm2
lvm2 sub component: Cache Logical Volumes
Reporter: Zdenek Kabelac <zkabelac>
Assignee: LVM Team <lvm-team>
QA Contact: cluster-qe <cluster-qe>
Status: NEW
Severity: unspecified
Priority: unspecified
CC: agk, heinzm, jbrassow, msnitzer, prajnoha, roy, thornber, zkabelac
Version: unspecified
Flags: pm-rhel: lvm-technical-solution?, pm-rhel: lvm-test-coverage?
Type: Bug

Description Zdenek Kabelac 2020-09-21 15:15:38 UTC
From bug 1881056 it looks like we can have a 'dirty' writethrough cache: if there is a failure on the origin disk, or even on the 'cache data' disk, we can get into trouble 'flushing/clearing' such a cache.

So we need to decide what should happen when there is a 'write' error on the origin.

From the lvm2 POV it would be best if a 'writethrough' cache could never get dirty (as that is the 'naive' understanding of this logic).

ATM it appears that even if a 'write' fails on the origin, we may still end up with some data stored in the cache (dirtying it) and may then try endlessly to flush it to the origin.

Comment 1 Zdenek Kabelac 2020-09-21 15:19:15 UTC
A side note to the description: when a cached block from the origin is held, and the user replaces the drive using a tool like 'dd_rescue', the origin and the cached content can differ.

So my idea here would be to drop a cached chunk if it holds a 'write-erroring' area, so that a read of such a block also gives the user a read error (matching the 'writethrough' experience).