Bug 1202916 - LVM Cache: Full Support
Summary: LVM Cache: Full Support
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.7
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Jonathan Earl Brassow
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-03-17 17:00 UTC by Jonathan Earl Brassow
Modified: 2015-07-22 07:38 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
LVM Cache
As of Red Hat Enterprise Linux 6.7, LVM cache is fully supported. This feature allows users to create logical volumes (LVs) with a small, fast device acting as a cache for larger, slower devices. Refer to the lvmcache(7) manual page for information on creating cache logical volumes.

Note the following restrictions on the use of cache LVs:
- The cache LV must be a top-level device. It cannot be used as a thin-pool LV, an image of a RAID LV, or any other sub-LV type.
- The cache LV sub-LVs (the origin LV, metadata LV, and data LV) can only be of linear, stripe, or RAID type.
- The properties of the cache LV cannot be changed after creation. To change cache properties, remove the cache as described in lvmcache(7) and recreate it with the desired properties.
Clone Of:
Environment:
Last Closed: 2015-07-22 07:38:26 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2015:1411 0 normal SHIPPED_LIVE lvm2 bug fix and enhancement update 2015-07-20 18:06:52 UTC

Description Jonathan Earl Brassow 2015-03-17 17:00:17 UTC
LVM cache will be supported in RHEL 6.7. The feature is already in place, but we need release notes stating this.

Comment 2 Corey Marthaler 2015-06-12 21:48:48 UTC
QA has verified the following items required for "full support" in RHEL 6.7.

1. The ability to create, remove, report on cache LVs created on top of linear, stripe or RAID LVs.
2. The cache LV components (data and metadata sub-LVs) can be linear, stripe or RAID LVs.
3. The user will be able to select the cache mode (either writeback or writethrough).
   (This does not mean they will be able to switch the cache mode while the cache is in use.)
4. The user will be able to select the granularity of cache chunks (i.e. chunk size).
5. Cache size, mode, and policy cannot be changed after creation.
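The verified operations above follow the workflow in lvmcache(7). As a rough sketch (the volume group "vg" and the devices /dev/sdb (slow) and /dev/sdc (fast) are hypothetical, and exact option availability may vary by lvm2 version):

```shell
# Origin LV on the slow device.
lvcreate -n origin -L 100G vg /dev/sdb

# Cache data and metadata LVs on the fast device (item 2: these
# sub-LVs may also be stripe or RAID LVs).
lvcreate -n cache0 -L 10G vg /dev/sdc
lvcreate -n cache0meta -L 12M vg /dev/sdc

# Combine data + metadata into a cache pool, selecting the cache mode
# and chunk size at creation time (items 3 and 4).
lvconvert --type cache-pool --cachemode writethrough --chunksize 256K \
          --poolmetadata vg/cache0meta vg/cache0

# Attach the cache pool to the origin, making vg/origin a cache LV (item 1).
lvconvert --type cache --cachepool vg/cache0 vg/origin

# Report on the cache LV (item 1).
lvs -a -o name,size,pool_lv,origin vg

# Per item 5, properties cannot be changed in place: removing the cache
# pool writes back dirty blocks and leaves vg/origin intact, after which
# the pool can be recreated with different settings.
lvremove vg/cache0
```

These commands require root privileges and real block devices, so they are illustrative rather than directly runnable here.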


Currently open 6.7 bugs from QA cache test cases:
1216214 - "attempt to access beyond end of device" when attempting a raid cache pool on 1k extent VG
1114113 - OOM issues when caching many volumes
1113770 - need a more graceful way to fail when attempting to cache RO devices


2.6.32-563.el6.x86_64
lvm2-2.02.118-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
lvm2-libs-2.02.118-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
lvm2-cluster-2.02.118-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
udev-147-2.62.el6    BUILT: Thu Apr 23 05:44:37 CDT 2015
device-mapper-1.02.95-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
device-mapper-libs-1.02.95-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
device-mapper-event-1.02.95-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
device-mapper-event-libs-1.02.95-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
device-mapper-persistent-data-0.3.2-1.el6    BUILT: Fri Apr  4 08:43:06 CDT 2014
cmirror-2.02.118-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015

Comment 3 errata-xmlrpc 2015-07-22 07:38:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1411.html

