Bug 1109974 - need a more graceful error when attempting to cache an origin that is already cached
Summary: need a more graceful error when attempting to cache an origin that is already cached
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks: 1114078 1119326
 
Reported: 2014-06-16 19:09 UTC by Corey Marthaler
Modified: 2021-09-03 12:49 UTC
CC List: 6 users

Fixed In Version: lvm2-2.02.112-1.el7
Doc Type: Bug Fix
Doc Text:
LVM2 supports caching only certain LV types. A missing check allowed an unsupported volume type to be passed into internal code, which then threw an internal error message. The check for supported types is now correctly performed at the command level.
Clone Of:
Cloned To: 1114078
Environment:
Last Closed: 2015-03-05 13:09:01 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHBA-2015:0513 (normal, SHIPPED_LIVE): lvm2 bug fix and enhancement update. Last updated: 2015-03-05 16:14:41 UTC

Description Corey Marthaler 2014-06-16 19:09:43 UTC
Description of problem:
Attempt to cache an origin volume that has already been cached

Create origin (slow) volume
lvcreate -L 4G -n already_cached cache_sanity /dev/sde1

Create first cache data and cache metadata (fast) volumes
lvcreate -L 2G -n pool_1 cache_sanity /dev/sdc1
lvcreate -L 8M -n pool_1_meta cache_sanity /dev/sdc1
Create cache pool volume by combining the cache data and cache metadata (fast) volumes
lvconvert --type cache-pool --cachemode writethrough --poolmetadata cache_sanity/pool_1_meta cache_sanity/pool_1

Create cached volume by combining the cache pool (fast) and origin (slow) volumes
lvconvert --type cache --cachepool cache_sanity/pool_1 cache_sanity/already_cached
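
Optionally verify the origin now reports the cache segment type (a quick check, assuming the VG/LV names used above)
lvs --noheadings -o lv_name,segtype cache_sanity/already_cached
# expected output once pool_1 is attached: already_cached cache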

Create second cache data and cache metadata (fast) volumes
lvcreate -L 2G -n pool_2 cache_sanity /dev/sdc1
lvcreate -L 8M -n pool_2_meta cache_sanity /dev/sdc1
Create cache pool volume by combining the cache data and cache metadata (fast) volumes
lvconvert --type cache-pool --cachemode writethrough --poolmetadata cache_sanity/pool_2_meta cache_sanity/pool_2


Attempt to create another cached volume by combining the second cache pool (fast) and the already cached origin (slow) volume.

[root@host-001 ~]# lvs -a -o +devices
  LV                     Attr       LSize   Pool   Origin                  Devices                
  already_cached         Cwi-a-C---   4.00g pool_1 [already_cached_corig]  already_cached_corig(0)
  [already_cached_corig] -wi-ao----   4.00g                                /dev/sde1(0)
  [lvol0_pmspare]        ewi-------   8.00m                                /dev/sdf1(0)
  pool_1                 Cwi-a-C---   2.00g                                pool_1_cdata(0)
  [pool_1_cdata]         Cwi-aoC---   2.00g                                /dev/sdc1(0)
  [pool_1_cmeta]         ewi-aoC---   8.00m                                /dev/sdc1(512)
  pool_2                 Cwi-a-C---   2.00g                                pool_2_cdata(0)
  [pool_2_cdata]         Cwi-a-C---   2.00g                                /dev/sdc1(514)
  [pool_2_cmeta]         ewi-a-C---   8.00m                                /dev/sdc1(1026)

[root@host-001 ~]# lvconvert --type cache --cachepool cache_sanity/pool_2 cache_sanity/already_cached
  Internal error: The origin, already_cached, cannot be of cache type
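
A script driving these steps could guard against this case by testing the segment type before calling lvconvert. A minimal sketch (not from the original report), reusing the names above:

# Only attach a pool if the origin is not already a cache LV
segtype=$(lvs --noheadings -o segtype cache_sanity/already_cached | tr -d ' ')
if [ "$segtype" = "cache" ]; then
    echo "already_cached already has a cache pool attached; skipping lvconvert" >&2
else
    lvconvert --type cache --cachepool cache_sanity/pool_2 cache_sanity/already_cached
fi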




Version-Release number of selected component (if applicable):
3.10.0-123.el7.x86_64
lvm2-2.02.105-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
lvm2-libs-2.02.105-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
lvm2-cluster-2.02.105-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-1.02.84-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-libs-1.02.84-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-event-1.02.84-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-event-libs-1.02.84-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-persistent-data-0.3.2-1.el7    BUILT: Thu Apr  3 09:58:51 CDT 2014
cmirror-2.02.105-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014

Comment 2 Jonathan Earl Brassow 2014-07-23 03:18:06 UTC
This should actually be allowed, so that cache can be stacked on cache; see bug 727072. It would be two layers of cache in that case. That said, we may not have to do this in 7.1.

Comment 3 Zdenek Kabelac 2014-11-05 14:40:13 UTC
https://www.redhat.com/archives/lvm-devel/2014-November/msg00040.html

As of now, this is explicitly disabled by the lvm2 code.

We do not support snapshotting of cached volumes until we resolve what this actually means: libdm currently requires a snapshot to be a top-level LV, so it is not going to work well in some stacking orders.
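
Until stacking works, swapping pools means detaching the existing one first. A sketch, assuming an lvm2 release that provides lvconvert --splitcache:

# Detach pool_1 (dirty blocks are flushed back to the origin),
# then attach pool_2 in its place
lvconvert --splitcache cache_sanity/already_cached
lvconvert --type cache --cachepool cache_sanity/pool_2 cache_sanity/already_cached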

Comment 5 Corey Marthaler 2014-11-20 20:48:16 UTC
Verified the "Internal error" no longer shows up. However I'd argue the new warning message is actually more confusing than what the old internal error warning said, "The origin, $origin, cannot be of cache type"

[root@host-115 ~]# lvs -a -o +devices
 LV                     Attr       LSize Pool     Origin                 Data%  Meta%  Cpy%Sync Devices
 already_cached         Cwi-a-C--- 4.00g [pool_1] [already_cached_corig] 0.02   3.47   0.00     already_cached_corig(0)
 [already_cached_corig] owi-aoC--- 4.00g                                                        /dev/sdb1(0)
 [lvol0_pmspare]        ewi------- 8.00m                                                        /dev/sda1(0)
 [pool_1]               Cwi---C--- 2.00g                                 0.02   3.47   0.00     pool_1_cdata(0)
 [pool_1_cdata]         Cwi-ao---- 2.00g                                                        /dev/sdd1(0)
 [pool_1_cmeta]         ewi-ao---- 8.00m                                                        /dev/sdd1(512)
 pool_2                 Cwi---C--- 2.00g                                                        pool_2_cdata(0)
 [pool_2_cdata]         Cwi------- 2.00g                                                        /dev/sdd1(514)
 [pool_2_cmeta]         ewi------- 8.00m                                                        /dev/sdd1(1026)

[root@host-115 ~]# lvconvert --yes --type cache --cachepool cache_sanity/pool_2 cache_sanity/already_cached
  Cache is not supported with cache segment type of the original logical volume cache_sanity/already_cached.
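
For scripted regression tests, the refusal can be asserted on the exit status rather than the exact message text. A sketch, assuming the setup above:

# lvconvert should now fail cleanly instead of raising an internal error
if lvconvert --yes --type cache --cachepool cache_sanity/pool_2 cache_sanity/already_cached; then
    echo "unexpected success" >&2
else
    echo "conversion refused as expected"
fi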

Comment 7 errata-xmlrpc 2015-03-05 13:09:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0513.html

