Bug 1135639 - lvconvert ignores '--cachemode writethrough' for cache pools
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: Fedora
Classification: Fedora
Component: lvm2
Version: 20
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Jonathan Earl Brassow
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks: 1119326 1148592 1171634 1171637
 
Reported: 2014-08-29 20:32 UTC by Chris Siebenmann
Modified: 2015-02-05 07:38 UTC
CC: 13 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1148592
Environment:
Last Closed: 2015-02-05 07:38:14 UTC
Type: Bug
Embargoed:



Description Chris Siebenmann 2014-08-29 20:32:47 UTC
Description of problem:

Despite claiming to support it, 'lvconvert --type cache-pool --cachemode
writethrough' silently creates a cache pool (and thus eventually a
cache) that is actually operating in writeback mode, thus exposing the
user to a completely unexpected possibility of data loss.

(I rate this as a high-impact issue because of the potential for
unexpected data loss, in fact data loss that the user thought they
were explicitly ensuring against by specifying writethrough instead of
writeback caching.)

I expect that this is an upstream bug.

Version-Release number of selected component (if applicable):

lvm2-2.02.106-1.fc20.x86_64

Steps to Reproduce:

Assume that you have a volume group testing and a logical volume
testing/test in it that you wish to add a writethrough cache to. Then:

 # lvcreate -L 5G -n cache_data testing
 # lvcreate -L 500M -n cache_meta testing
 # lvconvert --type cache-pool --cachemode writethrough --poolmetadata testing/cache_meta testing/cache_data
 # lvconvert --type cache --cachepool testing/cache_data testing/test
 testing/test is now cached.

At this point the dm-cache device is created and set up. It should be
in writethrough mode, because that is what we set when we created the
cache-pool LV. However:

 # dmsetup status testing-test
 0 10485760 cache 8 174/128000 128 3/81920 37 79 0 0 0 3 0 1 writeback 2 migration_threshold 2048 mq 10 random_threshold 4 sequential_threshold 512 discard_promote_adjustment 1 read_promote_adjustment 4 write_promote_adjustment 8

The actual dm-cache device is in writeback mode, not writethrough (the
'1 writeback' in the status line). The dm-cache device is capable of
being in writethrough mode if we fiddle with it:

 # dmsetup table testing-test
 0 10485760 cache 253:5 253:4 253:6 128 0 default 0
 # dmsetup reload testing-test --table '0 10485760 cache 253:5 253:4 253:6 128 1 writethrough default 0'
 # dmsetup suspend testing-test; dmsetup resume testing-test
 # dmsetup status testing-test
 0 10485760 cache 8 174/128000 128 3/81920 49 67 0 0 0 0 0 1 writethrough 2 migration_threshold 2048 mq 10 random_threshold 4 sequential_threshold 512 discard_promote_adjustment 1 read_promote_adjustment 4 write_promote_adjustment 8

It now reports '1 writethrough'.
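The manual reload above is just a text edit of the table line: the
feature-count field before the policy name changes from '0' to
'1 writethrough'. A small sketch of that transform (hypothetical helper
name; assumes the 'default' policy and no other features set, as in the
table shown above):

```shell
# Hypothetical helper: given a dm-cache table line with no feature flags
# set (feature count 0), rewrite it to request writethrough (feature
# count 1, feature name "writethrough") before the policy name. The
# result would then be passed to `dmsetup reload`.
add_writethrough() {
    # replace the " 0 " feature-count field that precedes the policy
    printf '%s\n' "$1" | sed 's/ 0 default / 1 writethrough default /'
}

add_writethrough '0 10485760 cache 253:5 253:4 253:6 128 0 default 0'
# prints: 0 10485760 cache 253:5 253:4 253:6 128 1 writethrough default 0
```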

However, if we reboot the machine, this dm-cache device will go right
back to writeback mode.
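The check used throughout this report can be scripted: the cache mode
appears in the `dmsetup status` feature list as '1 writethrough' or
'1 writeback'. A minimal sketch (hypothetical helper name, matching on
those feature tokens):

```shell
# Hypothetical helper: report the cache mode embedded in a dm-cache
# `dmsetup status` line. The mode shows up in the feature list as the
# token "writethrough" or "writeback" after the feature count.
cache_mode() {
    case " $1 " in
        *" writethrough "*) echo writethrough ;;
        *" writeback "*)    echo writeback ;;
        *)                  echo unknown ;;
    esac
}

# Status line captured above, reporting writeback despite the
# writethrough request:
status='0 10485760 cache 8 174/128000 128 3/81920 37 79 0 0 0 3 0 1 writeback 2 migration_threshold 2048 mq 10 random_threshold 4 sequential_threshold 512 discard_promote_adjustment 1 read_promote_adjustment 4 write_promote_adjustment 8'
cache_mode "$status"    # prints: writeback
```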

Comment 1 Jonathan Earl Brassow 2014-09-17 03:21:18 UTC
Fix committed upstream:
commit 9d57aa9a0fe00322cb188ad1f3103d57392546e7
Author: Jonathan Brassow <jbrassow>
Date:   Tue Sep 16 22:19:53 2014 -0500

    cache-pool:  Fix specification of cachemode when converting to cache-pool
    
    Failure to copy the 'feature_flags' lvconvert_param to the matching
    lv_segment field meant that when a user specified the cachemode argument,
    the request was not honored.

Comment 2 Chris Siebenmann 2015-01-30 19:34:45 UTC
For what it's worth, this seems to be fixed in the current Fedora 21 version
of lvm2, lvm2-2.02.115-2.fc21.x86_64. I no longer have convenient Fedora 20
machines to test on so I don't know its status there.

Comment 3 Peter Rajnoha 2015-02-05 07:38:14 UTC
I don't expect any new updates for F20, marking this bz with CLOSED/NEXTRELEASE.

