Bug 1171634 - lvconvert ignores '--cachemode writethrough' for cache pools
Summary: lvconvert ignores '--cachemode writethrough' for cache pools
Keywords:
Status: CLOSED DUPLICATE of bug 1171637
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.6
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: LVM and device-mapper development team
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On: 1135639 1148592 1171637
Blocks: 1119326
 
Reported: 2014-12-08 09:35 UTC by Deepak P Joshi
Modified: 2015-01-08 22:41 UTC
CC List: 19 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 1148592
Environment:
Last Closed: 2015-01-08 22:41:27 UTC
Target Upstream Version:
Embargoed:



Description Deepak P Joshi 2014-12-08 09:35:23 UTC
+++ This bug was initially created as a clone of Bug #1148592 +++

+++ This bug was initially created as a clone of Bug #1135639 +++

Description of problem:

Despite claiming to support it, 'lvconvert --type cache-pool --cachemode
writethrough' silently creates a cache pool (and thus eventually a
cache) that is actually operating in writeback mode, thus exposing the
user to a completely unexpected possibility of data loss.

(I rate this as a high impact issue because of the potential for
unexpected data loss, in fact data loss that the user thought they
were explicitly insuring against by specifying writethrough instead of
writeback caching.)

I expect that this is an upstream bug.

Version-Release number of selected component (if applicable):

lvm2-2.02.106-1.fc20.x86_64

Steps to Reproduce:

Assume that you have an initial volume group testing and a logical
volume in it testing/test that you wish to add a writethrough cache
to. Then:

 # lvcreate -L 5G -n cache_data testing
 # lvcreate -L 500M -n cache_meta testing
 # lvconvert --type cache-pool --cachemode writethrough --poolmetadata testing/cache_meta testing/cache_data
 # lvconvert --type cache --cachepool testing/cache_data testing/test
 testing/test is now cached.

At this point the dm-cache device is created and set up. It should be
in writethrough mode, because that is what we set when we created the
cache-pool LV. However:

 # dmsetup status testing-test
 0 10485760 cache 8 174/128000 128 3/81920 37 79 0 0 0 3 0 1 writeback 2 migration_threshold 2048 mq 10 random_threshold 4 sequential_threshold 512 discard_promote_adjustment 1 read_promote_adjustment 4 write_promote_adjustment 8

The actual dm-cache device is in writeback mode, not writethrough (the
'1 writeback' in the status line). The dm-cache device is capable of
being in writethrough mode if we fiddle with it:

 # dmsetup table testing-test
 0 10485760 cache 253:5 253:4 253:6 128 0 default 0
 # dmsetup reload testing-test --table '0 10485760 cache 253:5 253:4 253:6 128 1 writethrough default 0'
 # dmsetup suspend testing-test; dmsetup resume testing-test
 # dmsetup status testing-test
 0 10485760 cache 8 174/128000 128 3/81920 49 67 0 0 0 0 0 1 writethrough 2 migration_threshold 2048 mq 10 random_threshold 4 sequential_threshold 512 discard_promote_adjustment 1 read_promote_adjustment 4 write_promote_adjustment 8

It now reports '1 writethrough'.

However, if we reboot the machine, this dm-cache device will go right
back to being in writeback mode.
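
Until a fixed lvm2 build is installed, the manual reload shown above can
be wrapped in a small script. This is a minimal sketch based only on the
dmsetup commands in this report; the device name testing-test is an
assumption carried over from the example, and since the change does not
survive a reboot, the script would have to be re-run after each boot
(for example from a local init script):

 #!/bin/bash
 # Sketch: force an existing dm-cache device into writethrough mode,
 # using the dmsetup reload/suspend/resume sequence shown above.
 set -e
 DEV=testing-test   # dm device name (VG-LV); assumed, adjust as needed

 # dm-cache table layout: start len cache <metadata dev> <cache dev>
 # <origin dev> <block size> <#feature args> [features] <policy> ...
 TABLE=$(dmsetup table "$DEV")

 case "$TABLE" in
   *writethrough*) echo "$DEV is already in writethrough mode"; exit 0 ;;
 esac

 # Turn the "0" feature-args count into "1 writethrough", i.e. rewrite
 # "... 128 0 default 0" into "... 128 1 writethrough default 0".
 NEW=$(echo "$TABLE" | sed 's/ 0 default / 1 writethrough default /')

 dmsetup reload "$DEV" --table "$NEW"
 dmsetup suspend "$DEV"
 dmsetup resume "$DEV"
 dmsetup status "$DEV"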

--- Additional comment from Jonathan Earl Brassow on 2014-09-16 23:21:18 EDT ---

Fix committed upstream:
commit 9d57aa9a0fe00322cb188ad1f3103d57392546e7
Author: Jonathan Brassow <jbrassow>
Date:   Tue Sep 16 22:19:53 2014 -0500

    cache-pool:  Fix specification of cachemode when converting to cache-pool
    
    Failure to copy the 'feature_flags' lvconvert_param to the matching
    lv_segment field meant that when a user specified the cachemode argument,
    the request was not honored.


This problem is also observed on RHEL 6.6.

Version-Release number of selected component (if applicable):
2.6.32-504.el6.x86_64

lvm2-2.02.111-2.el6.x86_64
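
A quick way to tell whether a given lvm2 build honors '--cachemode' is
to repeat the reproduction steps above and inspect the live dm table.
A sketch, assuming the same VG/LV names as in this report:

 # An affected build leaves the feature-args count at 0 (so the cache
 # defaults to writeback); a fixed build emits "1 writethrough" instead.
 if dmsetup table testing-test | grep -q writethrough; then
     echo "cachemode honored: writethrough"
 else
     echo "affected: cache is silently running in writeback mode"
 fi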

Comment 2 Jonathan Earl Brassow 2015-01-08 22:41:27 UTC

*** This bug has been marked as a duplicate of bug 1171637 ***

