Bug 1098130 - LVM Cache Logical Volumes
Summary: LVM Cache Logical Volumes
Keywords:
Status: CLOSED EOL
Alias: None
Product: Fedora
Classification: Fedora
Component: Changes Tracking
Version: 25
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Jaroslav Reznik
QA Contact:
URL:
Whiteboard: ChangeAcceptedF21 SelfContainedChange
Depends On: 1099541 1099552
Blocks:
 
Reported: 2014-05-15 11:12 UTC by Jaroslav Reznik
Modified: 2019-08-19 07:37 UTC
CC: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-12-12 10:18:07 UTC
Type: ---
Embargoed:
pbokoc: fedora_requires_release_note-



Description Jaroslav Reznik 2014-05-15 11:12:54 UTC
This is a tracking bug for Change: LVM Cache Logical Volumes
For more details, see: https://fedoraproject.org/wiki/Changes/Cache_Logical_Volumes

LVM can now use fast block devices (e.g. SSDs and PCIe Flash) to improve the performance of larger but slower block devices.  These hierarchical or layered logical volumes are called "Cache Logical Volumes" in LVM.

Comment 1 Jaroslav Reznik 2014-07-04 10:43:43 UTC
This message is a reminder that Fedora 21 Accepted Changes Freeze Deadline is on 2014-07-08 [1].

At this point, all accepted Changes should be substantially complete, and testable. Additionally, if a change is to be enabled by default, it must be so enabled at Change Freeze.

This bug should be set to the MODIFIED state to indicate that it has been completed. Status will be provided to FESCo right after the deadline. If, for any reason, your Change is not in the required state, let me know and we will try to find a solution. For Changes you decide to cancel or move to the next release, please use the NEW status and set needinfo on me, and it will be acted upon.

In case of any questions, don't hesitate to ask Wrangler (jreznik). Thank you.

[1] https://fedoraproject.org/wiki/Releases/21/Schedule

Comment 2 Jaroslav Reznik 2014-10-07 12:23:52 UTC
This message is a reminder that Fedora 21 Change Checkpoint: 100% Code Complete Deadline (Former Accepted Changes 100% Complete) is on 2014-10-14 [1].

All Accepted Changes have to be code complete and ready to be validated in the Beta release (optionally by Fedora QA). The required bug state at this point is ON_QA.

For several System Wide Changes, the Beta Change Deadline is the contingency plan point. All incomplete Changes will be reported to FESCo at the 2014-10-15 meeting. In case of any questions, don't hesitate to ask the Wrangler (jreznik).

[1] https://fedoraproject.org/wiki/Releases/21/Schedule

Comment 3 Vratislav Podzimek 2014-10-14 18:21:31 UTC
The "The Anaconda team must develop a UI for configuring cache LVs during installation. If Anaconda support is not provided, users will have to configure cache LVs after installation or by dropping into a command line. Also, Anaconda could fail if installing a new OS onto an existing cache LV if support is not provided." part of the change is not complete for Fedora 21 as there is no UI for configuring cache LVs during installation in the anaconda installer. This part should be postponed to Fedora 22.

Comment 4 Jaroslav Reznik 2014-10-15 12:06:07 UTC
Thanks, moving to F22.

Comment 5 Petr Bokoc 2014-10-20 15:15:50 UTC
Setting relnotes flag to - for F21.

Comment 6 Jaroslav Reznik 2015-03-03 15:48:45 UTC
This bug appears to have been reported against 'rawhide' during the Fedora 22 development cycle.
Changing version to '22'.

More information and reason for this action is here:
https://fedoraproject.org/wiki/Fedora_Program_Management/HouseKeeping/Fedora22

Comment 7 Vratislav Podzimek 2015-08-07 11:08:47 UTC
Just to let you guys know, starting with python-blivet-1.12-1 and anaconda-23.19-1 it will be possible to create a cached LV and install a system to it via kickstart. See https://github.com/rhinstaller/pykickstart/blob/master/docs/kickstart-docs.rst#logvol for more details.
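A minimal kickstart sketch of what this enables (the device names, sizes and layout are illustrative only; see the pykickstart documentation linked above for the authoritative syntax):

part /boot --fstype=ext4 --size=500
part pv.01 --size=80000                  # PV on the slow HDD
part pv.02 --size=20000 --ondisk=sdb     # PV on the fast SSD, used for the cache
volgroup fedora pv.01 pv.02
# --cachepvs, --cachesize and --cachemode create the LV with a cache attached
logvol / --name=root --vgname=fedora --size=60000 --fstype=xfs --cachepvs=pv.02 --cachesize=10000 --cachemode=writethrough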

Comment 8 Jean-Christophe Berthon 2016-03-28 20:33:24 UTC
Dear All,
I just wanted to add a bit of information after trying the feature.

I have Fedora 23 64-bit installed on my old desktop PC (from 2008). When I installed it, I had set up the two HDDs (160 GB each) in a RAID-1 using mdadm. I partitioned the RAID-1 volume into a /boot (ext4), an encrypted swap and another LUKS "container" created with cryptsetup. The latter contains an LVM stack with one VG and one thin pool in the VG. This thin pool is used to create two thin LVs, one for / and one for /home. Both are XFS file systems.

Last week I bought my first SSD. It's an EVO 750 from Samsung with 256 GB, so bigger than my RAID-1 array. I wanted to test the caching capabilities of Linux, so I decided to use it as a cache disk.

I quickly created the PV, extended the VG and created the cache pool and cache pool metadata. I followed the lvmcache(7) man page instructions for that.
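Roughly, the sequence from lvmcache(7) (the device and LV names here are illustrative, not my exact commands):

# pvcreate /dev/sdb                                              # the SSD
# vgextend fedora /dev/sdb
# lvcreate --type cache-pool -L 50G -n ssd_cache fedora /dev/sdb
# lvconvert --type cache --cachepool fedora/ssd_cache fedora/home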

My first problem was that I was not able to cache only the LV for /home, because it is a thin LV. I don't recall the error message, but I understood that I needed to cache the thin pool instead of just the thin volume. I did that, and it worked brilliantly until the next reboot.
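That is, caching the whole pool rather than the thin LV (names again illustrative):

# lvconvert --type cache --cachepool fedora/ssd_cache fedora/thinpool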

At the next reboot, after entering the pass-phrase to unlock LUKS, nothing happened (the boot showed that the LUKS partition was unlocked and that the basic target was reached, but nothing after that). The problem was that / (the root) was on the same thin pool as /home, so it was cached too. However, the Fedora kernel (true up to 4.4.6-300) is compiled with DM_CACHE as a module rather than built in.

I fixed my problem easily: after booting Fedora 23 from a USB key, I unlocked the LUKS partition, removed the cache LV and rebooted, and I could boot successfully again. I then recompiled kernel 4.4.6-300 with the DM_CACHE* options built into the kernel instead of as modules, installed my custom kernel and recreated the cache volume. After the reboot, Fedora was able to start successfully.


**Conclusion from my experience**
Either provide some documentation explaining that Fedora does not support caching / (the root filesystem). Maybe the user-space tools (and potentially Anaconda) should verify that for the user, but that is a change which might be difficult to push upstream.
Or provide a kernel with the DM_CACHE options built in instead of as modules.
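For reference, the options I mean are roughly these (from a 4.4-era config; the exact set of DM_CACHE_* policy options varies between kernel versions):

CONFIG_DM_CACHE=y
CONFIG_DM_CACHE_MQ=y
CONFIG_DM_CACHE_SMQ=y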



Side note: the acceleration from the cache is not always evident in everyday use, but the system was stable and I had no crashes. I have the feeling that I hear less HDD noise when reading files, but it might be psychological ;-) When experimenting with writeback and writethrough, I did not feel any difference between the two cache modes. I had the feeling that sometimes, when writing a file in writeback mode, the HDDs were not being used; maybe the delay before the data finally hits the HDDs is just very long. After a reboot in writeback mode, the kernel seemed to make sure the cache was "flushed" to the HDDs: I heard the HDDs loud and long, and iostat reported a lot of reading from the SSD and a lot of writing to the HDDs. Writethrough did not show this behaviour, or at least not visibly. However, in all cases, when I ran disk benchmarks (gzipping or gunzipping the kernel, iozone, dbench, etc.) with and without the cache, and with the two cache modes, I saw improvements with caching, especially for writes when using writeback. So the benchmarks confirmed the theory, but caching does not really transform the everyday experience.
Note that my motherboard is quite old and only supports SATA-2, so the SSD is not used to its full capability. I still have an Intel Core 2 CPU, so it is quite old, and the RAM is just as slow, even without the SSD. So I guess that the RAM caching/buffering is good enough to hide the feel of the storage back-end acceleration, and that my computer has other "bottlenecks" which keep SSD caching from being a transforming experience ;-)
Final remark: although my HDDs are encrypted (LUKS), the SSD cache is not. I haven't tried encrypting it with LUKS yet.

Comment 9 Vratislav Podzimek 2016-04-05 09:25:16 UTC
First of all, big THANKS for the feedback!

(In reply to Jean-Christophe Berthon from comment #8)
> Dear All,
> I just wanted to add a bit of information after trying the feature.
> 
> I have Fedora 23 64-bit installed on my old desktop PC (from 2008). When I
> installed it, I had set up the two HDDs (160 GB each) in a RAID-1 using mdadm.
> I partitioned the RAID-1 volume into a /boot (ext4), an encrypted swap and
> another LUKS "container" created with cryptsetup. The latter contains an LVM
> stack with one VG and one thin pool in the VG. This thin pool is used to
> create two thin LVs, one for / and one for /home. Both are XFS file systems.
> 
> Last week I bought my first SSD. It's an EVO 750 from Samsung with 256 GB, so
> bigger than my RAID-1 array. I wanted to test the caching capabilities of
> Linux, so I decided to use it as a cache disk.
> 
> I quickly created the PV, extended the VG and created the cache pool and
> cache pool metadata. I followed the lvmcache(7) man page instructions for
> that.
> 
> My first problem was that I was not able to cache only the LV for /home,
> because it is a thin LV. I don't recall the error message, but I understood
> that I needed to cache the thin pool instead of just the thin volume. I did
> that, and it worked brilliantly until the next reboot.
> 
> At the next reboot, after entering the pass-phrase to unlock LUKS, nothing
> happened (the boot showed that the LUKS partition was unlocked and that the
> basic target was reached, but nothing after that). The problem was that /
> (the root) was on the same thin pool as /home, so it was cached too. However,
> the Fedora kernel (true up to 4.4.6-300) is compiled with DM_CACHE as a
> module rather than built in.
> 
> I fixed my problem easily: after booting Fedora 23 from a USB key, I unlocked
> the LUKS partition, removed the cache LV and rebooted, and I could boot
> successfully again. I then recompiled kernel 4.4.6-300 with the DM_CACHE*
> options built into the kernel instead of as modules, installed my custom
> kernel and recreated the cache volume. After the reboot, Fedora was able to
> start successfully.
> 
> 
> **Conclusion from my experience**
> Either provide some documentation explaining that Fedora does not support
> caching / (the root filesystem). Maybe the user-space tools (and potentially
> Anaconda) should verify that for the user, but that is a change which might
> be difficult to push upstream.
> Or provide a kernel with the DM_CACHE options built in instead of as modules.
Did you run 'dracut -f' after the cache was attached to the thin pool? That should be all that's needed: dracut has to pull the DM cache bits into the initrd.img, and then the system should boot just fine (blame "fastest boot ever" for this not being included by default). I agree this should be mentioned somewhere in the documentation.
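A quick way to check and fix this (using the usual initramfs path for the running kernel):

# lsinitrd /boot/initramfs-$(uname -r).img | grep -i dm-cache   # is the module in the initramfs?
# dracut -f                                                     # regenerate it for the running kernel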


> 
> Side note: the acceleration from the cache is not always evident in everyday
> use, but the system was stable and I had no crashes. I have the feeling that
> I hear less HDD noise when reading files, but it might be psychological ;-)
> When experimenting with writeback and writethrough, I did not feel any
> difference between the two cache modes. I had the feeling that sometimes,
> when writing a file in writeback mode, the HDDs were not being used; maybe
> the delay before the data finally hits the HDDs is just very long. After a
> reboot in writeback mode, the kernel seemed to make sure the cache was
> "flushed" to the HDDs: I heard the HDDs loud and long, and iostat reported a
> lot of reading from the SSD and a lot of writing to the HDDs. Writethrough
> did not show this behaviour, or at least not visibly. However, in all cases,
> when I ran disk benchmarks (gzipping or gunzipping the kernel, iozone,
> dbench, etc.) with and without the cache, and with the two cache modes, I
> saw improvements with caching, especially for writes when using writeback.
> So the benchmarks confirmed the theory, but caching does not really
> transform the everyday experience.
How are you switching the cache modes? From my experience, one needs to recreate the cache pool and attach it again to do the change. Doing 'lvconvert --cachemode writethrough|writeback VG/CacheLV' as lvmcache(7) suggests doesn't change anything on my system.
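The workaround that works for me, roughly (VG/LV names illustrative):

# lvconvert --splitcache fedora/home          # detach the cache pool (flushes it, keeps the pool)
# lvconvert --type cache --cachepool fedora/ssd_cache --cachemode writeback fedora/home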


> Note that my motherboard is quite old and only supports SATA-2, so the SSD
> is not used to its full capability. I still have an Intel Core 2 CPU, so it
> is quite old, and the RAM is just as slow, even without the SSD. So I guess
> that the RAM caching/buffering is good enough to hide the feel of the
> storage back-end acceleration, and that my computer has other "bottlenecks"
> which keep SSD caching from being a transforming experience ;-)
> Final remark: although my HDDs are encrypted (LUKS), the SSD cache is not. I
> haven't tried encrypting it with LUKS yet.
That's a bit unfortunate. :) The easiest way to overcome this issue is to make the PV encrypted - i.e. partition->LUKS->PV (LVM). Also don't forget that putting a writeback cache on a single SSD on top of a RAID basically eliminates any recovery advantages of the RAID.
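A rough sketch of that stacking (device names illustrative):

# cryptsetup luksFormat /dev/sdb1
# cryptsetup open /dev/sdb1 ssd_crypt
# pvcreate /dev/mapper/ssd_crypt
# vgextend fedora /dev/mapper/ssd_crypt       # then build the cache pool on this PV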

Comment 10 GuL 2016-05-20 15:20:48 UTC
Dear all,

I just tested the LVM cache feature and I am very happy with its speed increase. I have a similar computer to the user above, dating from 2008: SATA-2, a Core 2 Quad, a 500 GB HDD and a Samsung SSD 850 EVO 250 GB. I have written a tutorial (in French) on migrating a Windows/Fedora dual boot from HDD to SSD with LVM cache: http://forums.fedora-fr.org/viewtopic.php?id=65381 .

However, I have trouble clearing the cache, either after a filesystem corruption (a bad hdparm benchmark option) or after dd'ing into a partition with an active cache.

I need to uncache the partition in order to resize it, but for now that is impossible.

# lvconvert --splitcache --force fedora/home
  7 blocks must still be flushed.
  7 blocks must still be flushed.
  7 blocks must still be flushed.
[...]

Here are some system details:

# lvs
  LV   VG     Attr       LSize   Pool        Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home fedora Cwi-aoC--- 100,00g [ssd_cache]        1,77   23,67           0,04            
  root fedora -wi-ao----  79,49g                                                           
  swap fedora -wi-ao----   7,45g

Please note the persistent 0.04% in the Cpy%Sync column.

# dmsetup table fedora-home
0 209715200 cache 253:3 253:2 253:4 128 1 writethrough cleaner 0

I am using Fedora 24 Beta with LVM 2.02.150(2). According to https://bugzilla.redhat.com/show_bug.cgi?id=1276722 , the problem should be solved starting from lvm2 2.02.130-3, but apparently it is not.

Any help would be appreciated. Many thanks

Comment 11 Vratislav Podzimek 2016-05-27 08:31:11 UTC
Hi, great to hear you like this feature! I had to deal with a similar issue to the one you are probably facing - I had a cache on an SSD that started failing. The only solution that worked for me was to export the LVM metadata with 'vgcfgbackup', manually edit it in a text editor to remove the cache, and then apply the change with 'vgcfgrestore'. The filesystem on the cached LV was damaged (due to the unflushed blocks), but it recovered quite nicely with just a few files missing.

Please don't hesitate to contact me if you need any help with this process (I'm not an LVM expert, though). If you want to get a better idea of what needs to be changed, create an LV, export the metadata, convert the LV to a cached LV and compare the exported metadata before and after. You should be able to see what has changed.
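Roughly, the procedure looks like this (the VG name and file path are illustrative; keep a safe copy of the metadata first):

# vgcfgbackup -f /tmp/fedora.vg fedora        # export the current metadata
# (edit /tmp/fedora.vg in a text editor, removing the cache pool and pointing the LV back at its origin)
# vgcfgrestore -f /tmp/fedora.vg fedora       # apply the edited metadata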

Comment 12 GuL 2016-05-27 08:53:38 UTC
Hi Vratislav,
Thank you for your help; I will try your solution next time. Another solution I used was to dd the LV to another partition, remove the LV, create it again without the cache, dd it back, grow the partition, and only after that start the cache again in writeback mode.
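In rough commands (devices, sizes and names illustrative):

# dd if=/dev/fedora/home of=/dev/sdXN bs=4M   # copy the LV contents away
# lvremove fedora/home                        # removes the LV along with its stuck cache
# lvcreate -L 100G -n home fedora             # recreate it, uncached
# dd if=/dev/sdXN of=/dev/fedora/home bs=4M   # copy the contents back
# lvcreate --type cache-pool -L 50G -n ssd_cache fedora /dev/sdb
# lvconvert --type cache --cachepool fedora/ssd_cache --cachemode writeback fedora/home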
Cheers

Comment 13 Jan Kurik 2016-07-26 04:26:57 UTC
This bug appears to have been reported against 'rawhide' during the Fedora 25 development cycle.
Changing version to '25'.

Comment 14 Fedora End Of Life 2017-11-16 19:44:43 UTC
This message is a reminder that Fedora 25 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 25. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora 'version'
of '25'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version'
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not
able to fix it before Fedora 25 reached end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's
lifetime, sometimes those efforts are overtaken by events. Often a
more recent Fedora release includes newer upstream software that fixes
bugs or makes them obsolete.

Comment 15 Fedora End Of Life 2017-12-12 10:18:07 UTC
Fedora 25 changed to end-of-life (EOL) status on 2017-12-12. Fedora 25 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.

