Bug 2078452

Summary: System crash attempting to remove a large writecache from a VG
Product: Red Hat Enterprise Linux 8
Component: lvm2
lvm2 sub component: Cache Logical Volumes
Version: 8.6
Type: Bug
Status: NEW
Severity: medium
Priority: medium
Keywords: Triaged
Target Milestone: rc
Hardware: Unspecified
OS: Unspecified
Reporter: Carlos Maiolino <cmaiolin>
Assignee: David Teigland <teigland>
QA Contact: cluster-qe <cluster-qe>
CC: agk, heinzm, jbrassow, mpatocka, msnitzer, prajnoha, zkabelac

Description Carlos Maiolino 2022-04-25 11:26:30 UTC
Description of problem:

While attempting to remove a large (1TB) writecache LV from a VG, the system crashed
due to an Out-Of-Memory condition.

My system has 32GiB of RAM, and I was experimenting with a 1TB writecache.

Version-Release number of selected component (if applicable):

RHEL 8.6

How reproducible:

Always

Steps to Reproduce:
- Add a large device as a writecache to a VG (a full example sequence is sketched below):
   # lvconvert --type writecache --cachevol fast vg/main

- Attempt to remove it from the VG with lvconvert --uncache (or --splitcache)
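
For reference, a complete sequence along these lines reproduces the setup; the device path, VG/LV names, and size below are illustrative assumptions, not values taken from this report:

   # vgextend vg /dev/nvme0n1                        # add the fast device to the VG
   # lvcreate -n fast -L 1T vg /dev/nvme0n1          # create the LV to be used as the cachevol
   # lvconvert --type writecache --cachevol fast vg/main
   # lvconvert --uncache vg/main                     # this removal step triggers the OOM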


Actual results:
The system crashes and enters an unrecoverable state even if the volume isn't mounted automatically. During boot, LVM spends a long time trying to scan the device with the large cache, and the system fails to boot because it starts throwing OOM errors before the boot finishes.

Expected results:

- LVM should prevent a large device from being added as a writecache if the system does not have enough memory to handle it.

- The system shouldn't crash while removing a cache from the VG.
If a system has been using a cache device but its total RAM is later reduced, removing the cache device from the volume group shouldn't cause a crash.

Additional info:



I managed to boot in recovery mode, but any attempt to remove the writecache device from the VG failed. The system was unable to remove it and crashed.

I managed to manually remove the writecache device from the VG by hand-editing the current VG metadata backup, removing the writecache from it, and restoring the configuration without the writecache device using vgcfgrestore.
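
For anyone hitting the same situation, that recovery can be sketched roughly as follows; the VG name and backup file path are illustrative assumptions, and hand-editing VG metadata is risky and should only be attempted as a last resort with the affected LVs inactive:

   # vgcfgbackup -f /root/vg-meta.txt vg             # dump the current VG metadata to a file
   (edit /root/vg-meta.txt: point the cached LV back at its plain linear/striped
    segment and drop the writecache and cachevol references)
   # vgcfgrestore -f /root/vg-meta.txt vg            # write the edited metadata back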

Comment 1 David Teigland 2022-04-25 14:27:05 UTC
I think this is a duplicate of bug 2059644, where the fix was to print a warning when the writecache requires >50% of system memory, and to require confirmation if it requires >90% of memory.  It doesn't currently prevent creating a large writecache, but we could extend the solution to actually fail the creation above some memory percentage.
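
As a rough illustration of why the reported 1TB writecache on a 32GiB machine is fatal: dm-writecache keeps its per-block metadata in memory, so with a 4KiB cache block size a 1TiB cachevol has 1TiB / 4KiB = 268,435,456 blocks. Assuming on the order of 100 bytes of in-kernel metadata per block (an assumed figure for illustration, not taken from the lvm2 or kernel sources), that works out to roughly 25GiB of RAM, i.e. most of the reporter's memory.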

Comment 2 Carlos Maiolino 2022-04-25 15:05:14 UTC
(In reply to David Teigland from comment #1)
> I think this is a duplicate of bug 2059644, where the fix was to print a
> warning when the writecache requires >50% of system memory, and to require
> confirmation if it requires >90% of memory.  It doesn't currently prevent
> creating a large writecache, but we could extend the solution to actually
> fail the creation above some memory percentage.

Hi David.

Yeah, that seems like a reasonable solution to avoid people creating caches that will
likely crash their systems, but I don't think this will do anything if somebody
reduces their system's memory after creating the cache.

But anyway, this is just an idea based on the fact that my biggest problem when I
hit this was actually recovering the system, given that I couldn't boot it because
of the big cache, and I was unable to remove the cache either once I realized my mistake.

Cheers

Comment 3 David Teigland 2022-04-25 17:12:24 UTC
> likely crash their systems, but I don't think this will do anything if
> somebody reduces their system's memory after creating the cache.

Yes, we could apply similar checking in the activation path and fail to activate an LV if we think the writecache would use too much memory.  Or dm-writecache could do some similar checking in the kernel.  Mikulas, do you think either of those options makes sense?

> But anyway, this is just an idea based on the fact that my biggest problem when
> I hit this was actually recovering the system, given that I couldn't boot it
> because of the big cache, and I was unable to remove the cache either once I
> realized my mistake.

If the system is autoactivating the problematic LV then it's difficult to intervene and fix the problem, so I do think we should have a better way to handle this.

Once you've started the system successfully and the problematic writecache is inactive, lvconvert --splitcache --force LV will forcibly detach the writecache without attempting to activate it and write back the data, with potential data loss.
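
A minimal sketch of that recovery path, assuming the cached LV is vg/main and the detached cachevol comes back as vg/fast (names are illustrative, and the dirty cache contents are discarded):

   # lvchange -an vg/main                            # make sure the cached LV is inactive
   # lvconvert --splitcache --force vg/main          # detach the writecache without writing it back
   # lvremove vg/fast                                # optionally remove the detached cachevol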