Bug 1859149 - Cache in writethrough mode should not mark blocks as dirty after crash

Product: Fedora
Component: lvm2
Version: 33
Status: CLOSED EOL
Hardware: Unspecified
OS: Unspecified
Reporter: Zdenek Kabelac <zkabelac>
Assignee: LVM and device-mapper development team <lvm-team>
QA Contact: Fedora Extras Quality Assurance <extras-qa>
CC: agk, anprice, bmarzins, bmr, cfeist, djuran, heinzm, jonathan, kzak, lvm-team, lzap, mcsontos, msnitzer, pcfe, prajnoha, prockai, zkabelac
Type: Bug
Last Closed: 2021-11-30 16:21:12 UTC

Description Zdenek Kabelac 2020-07-21 10:45:47 UTC
It has been noticed that a cache in writethrough mode may still mark all blocks in the cache as dirty (likely after a crash), even though they were never dirty in the first place.

Kernel: 5.7.

Such a cache then wants to 'flush' the cache before detaching, which may not be possible once the caching device has gone missing. However, we want to support detaching a writethrough cache without this kind of operation.
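For reference, a hedged sketch of how the dirty-block count can be inspected, assuming a cached LV vg_home/lv_home as created in comment 1 (the lvs column names are standard reporting fields):

# lvm2 per-LV cache counters
lvs -a -o name,cache_dirty_blocks,cache_used_blocks,cache_total_blocks vg_home

# raw dm-cache status line, which also includes the dirty-block count
dmsetup status vg_home-lv_home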

Comment 1 Lukas Zapletal 2020-07-21 14:14:45 UTC
Hello,

for the record, this is how I created my volumes:

PV_SLOW=/dev/sda1
PV_FAST=/dev/nvme0n1p5
VG=vg_home
LV_SLOW=lv_home
LV_FAST=lv_home_cache
pvcreate $PV_SLOW
pvcreate $PV_FAST
vgcreate $VG $PV_SLOW $PV_FAST
lvcreate -l 100%PVS -n $LV_SLOW $VG $PV_SLOW
lvcreate --type cache-pool -l 100%PVS -n $LV_FAST $VG $PV_FAST
lvconvert --type cache --cachepool $LV_FAST $VG/$LV_SLOW
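A quick sanity check of the resulting stack, using standard lvs reporting fields:

lvs -a -o name,segtype,pool_lv,origin,devices $VG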

This was done on Fedora 30, then the system was upgraded to F32, then the NVMe SSD crashed, the cache device was lost, and I ran:

lvconvert --uncache -y --force $LV_FAST
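For comparison, the lvconvert(8) form of this operation takes the cached LV (not the cache pool) as its argument; a hedged sketch with the names used above, where --force and --yes may be needed once the pool device is already missing:

lvconvert --uncache --force --yes $VG/$LV_SLOW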

Here is the LVM config backup file if that makes any difference:

# Generated by LVM2 version 2.02.183(2) (2018-12-07): Mon May 20 11:32:35 2019

contents = "Text Format Volume Group"
version = 1

description = "Created *after* executing 'lvconvert --type cache --cachepool lv_home_fast vg_home/lv_home_slow'"

creation_host = "box.home.lan"	# Linux box.home.lan 5.0.16-300.fc30.x86_64 #1 SMP Tue May 14 19:33:09 UTC 2019 x86_64
creation_time = 1558344755	# Mon May 20 11:32:35 2019

vg_home {
	id = "PYy3Yq-v5G1-1804-6oCw-luPX-KeHt-HqVEIy"
	seqno = 6
	format = "lvm2"			# informational
	status = ["RESIZEABLE", "READ", "WRITE"]
	flags = []
	extent_size = 8192		# 4 Megabytes
	max_lv = 0
	max_pv = 0
	metadata_copies = 0

	physical_volumes {

		pv0 {
			id = "adA45i-bc1s-r9WR-JOzi-BoYQ-hM5B-pjtUE0"
			device = "/dev/sda1"	# Hint only

			status = ["ALLOCATABLE"]
			flags = []
			dev_size = 2147483648	# 1024 Gigabytes
			pe_start = 2048
			pe_count = 262143	# 1024 Gigabytes
		}

		pv1 {
			id = "oLpSLQ-3Qwd-bIgN-TNna-hFnW-Rc0k-EQ1zPK"
			device = "/dev/nvme0n1p5"	# Hint only

			status = ["ALLOCATABLE"]
			flags = []
			dev_size = 313921871	# 149,69 Gigabytes
			pe_start = 2048
			pe_count = 38320	# 149,688 Gigabytes
		}
	}

	logical_volumes {

		lv_home_slow {
			id = "QKmcLS-1WQy-JTfc-2R9T-Uu56-e4R4-QYgzve"
			status = ["READ", "WRITE", "VISIBLE"]
			flags = []
			creation_time = 1558344369	# 2019-05-20 11:26:09 +0200
			creation_host = "box.home.lan"
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 262143	# 1024 Gigabytes

				type = "cache"
				cache_pool = "lv_home_fast"
				origin = "lv_home_slow_corig"
			}
		}

		lv_home_fast {
			id = "N2NNiN-vdlj-BLvW-NTy2-9yNl-iycT-PwCepA"
			status = ["READ", "WRITE"]
			flags = []
			creation_time = 1558344635	# 2019-05-20 11:30:35 +0200
			creation_host = "box.home.lan"
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 38296	# 149,594 Gigabytes

				type = "cache-pool+METADATA_FORMAT"
				data = "lv_home_fast_cdata"
				metadata = "lv_home_fast_cmeta"
				chunk_size = 320
				metadata_format = 2
				cache_mode = "writethrough"
				policy = "smq"
			}
		}

		lvol0_pmspare {
			id = "fitnGe-SgWp-YeJ8-4K1S-sVa2-krxU-u1byZ4"
			status = ["READ", "WRITE"]
			flags = []
			creation_time = 1558344635	# 2019-05-20 11:30:35 +0200
			creation_host = "box.home.lan"
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 12	# 48 Megabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv1", 0
				]
			}
		}

		lv_home_fast_cmeta {
			id = "U2e8zg-0AKm-dFd4-oaDt-Bdx8-ZQCV-W8QinN"
			status = ["READ", "WRITE"]
			flags = []
			creation_time = 1558344635	# 2019-05-20 11:30:35 +0200
			creation_host = "box.home.lan"
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 12	# 48 Megabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv1", 12
				]
			}
		}

		lv_home_fast_cdata {
			id = "or7p60-sFiC-XEn2-QYsS-FRFZ-y87H-QBd11x"
			status = ["READ", "WRITE"]
			flags = []
			creation_time = 1558344635	# 2019-05-20 11:30:35 +0200
			creation_host = "box.home.lan"
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 38296	# 149,594 Gigabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv1", 24
				]
			}
		}

		lv_home_slow_corig {
			id = "KDnbZX-qBNv-cRB1-mWgW-80cd-3biB-1WCXpb"
			status = ["READ", "WRITE"]
			flags = []
			creation_time = 1558344755	# 2019-05-20 11:32:35 +0200
			creation_host = "box.home.lan"
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 262143	# 1024 Gigabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv0", 0
				]
			}
		}
	}

}

Comment 2 Zdenek Kabelac 2020-07-21 15:48:37 UTC
From the provided lvm2 metadata it's clear the cache chunk size was quite small (chunk_size = 320 sectors, i.e. 160 KiB).

So most likely the cache was marked 'dirty' as a result of the unclean shutdown.
Such a case is currently handled 'silently' in the background, but when the caching
device becomes unusable the tool is unable to proceed.


Access to the cache origin can be 'recovered' by removing the references to the caching devices
in the metadata backup, i.e. erasing these entries:

lv_home_slow, lv_home_fast, lvol0_pmspare, lv_home_fast_cmeta, lv_home_fast_cdata

and renaming lv_home_slow_corig -> lv_home_slow

and using id = "QKmcLS-1WQy-JTfc-2R9T-Uu56-e4R4-QYgzve" with this LV
(the id restored from the removed lv_home_slow entry).


If pv1 is no longer available, it can be dropped from the VG metadata as well.

Then just run vgcfgrestore -f fixedmetadata vg_home, and lv_home_slow should be accessible again without any caching.

Since the cache was in 'writethrough' mode, the data should be correct.
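A minimal sketch of that procedure, assuming the metadata backup shown above is copied to a file named fixedmetadata (the backup path and file name are illustrative):

# work on a copy of the VG metadata backup
# (/etc/lvm/backup/vg_home is the usual location, or use the file attached above)
cp /etc/lvm/backup/vg_home fixedmetadata

# edit fixedmetadata by hand:
#  - delete the lv_home_slow, lv_home_fast, lvol0_pmspare,
#    lv_home_fast_cmeta and lv_home_fast_cdata entries
#  - rename lv_home_slow_corig to lv_home_slow and set its
#    id = "QKmcLS-1WQy-JTfc-2R9T-Uu56-e4R4-QYgzve"
#  - drop pv1 from physical_volumes if the NVMe device is gone for good

# restore the fixed metadata and activate the now uncached LV
vgcfgrestore -f fixedmetadata vg_home
lvchange -ay vg_home/lv_home_slow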

Comment 3 Lukas Zapletal 2020-07-23 07:29:40 UTC
Can you provide the commands which do that, so I can put them in my blog, please?

Comment 4 Ben Cotton 2020-08-11 13:48:21 UTC
This bug appears to have been reported against 'rawhide' during the Fedora 33 development cycle.
Changing version to 33.

Comment 5 Ben Cotton 2021-11-04 17:34:36 UTC
This message is a reminder that Fedora 33 is nearing its end of life.
Fedora will stop maintaining and issuing updates for Fedora 33 on 2021-11-30.
It is Fedora's policy to close all bug reports from releases that are no longer
maintained. At that time this bug will be closed as EOL if it remains open with a
Fedora 'version' of '33'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version.

Thank you for reporting this issue, and we are sorry that we were not
able to fix it before Fedora 33 reached end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 6 Ben Cotton 2021-11-30 16:21:12 UTC
Fedora 33 changed to end-of-life (EOL) status on 2021-11-30. Fedora 33 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.