Bug 1255184 - cache pool conversion issues with 'cachemode'
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-08-19 21:23 UTC by Corey Marthaler
Modified: 2023-03-08 07:27 UTC
CC List: 7 users

Fixed In Version: lvm2-2.02.129-1.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-11-19 12:47:29 UTC
Target Upstream Version:
Embargoed:


Attachments: none

Links:
Red Hat Product Errata RHBA-2015:2147 (normal priority, SHIPPED_LIVE): lvm2 bug fix and enhancement update, last updated 2015-11-19 11:11:07 UTC

Description Corey Marthaler 2015-08-19 21:23:53 UTC
Description of problem:
At first I thought this had to do with specifying the cachemode when converting to a cache pool, but I was able to hit it even without specifying the cachemode at conversion time. However, as soon as I ran an lvs that included 'cachemode' in the output, the "Internal error" message appeared, and regardless of what cachemode was supplied, whatever is in lvm.conf is what gets used.




lvm2-2.02.127-1.el7.x86_64

Create origin (slow) volume
lvcreate -L 4G -n display_cache cache_sanity /dev/sdb1

Create cache data and cache metadata (fast) volumes
lvcreate -L 4G -n pool cache_sanity /dev/sde1
lvcreate -L 12M -n pool_meta cache_sanity /dev/sde1

Create cache pool volume by combining the cache data and cache metadata (fast) volumes
lvconvert --yes --type cache-pool --cachemode writeback -c 64 --poolmetadata cache_sanity/pool_meta cache_sanity/pool
  WARNING: Converting logical volume cache_sanity/pool and cache_sanity/pool_meta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)

[root@host-115 ~]# lvs -a -o +devices,cachemode
  LV              Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices         Cachemode
  display_cache   -wi-a-----   4.00g                                                     /dev/sdb1(0)             
  [lvol0_pmspare] ewi-------  12.00m                                                     /dev/sdc1(0)             
  pool            Cwi---C---   4.00g                                                     pool_cdata(0)   writeback
  [pool_cdata]    Cwi-------   4.00g                                                     /dev/sde1(0)             
  [pool_cmeta]    ewi-------  12.00m                                                     /dev/sde1(1024)          

[root@host-115 ~]# lvs --noheadings -o lv_name --select 'cachemode=writeback'
  pool




lvm2-2.02.128-1.el7.x86_64

Create origin (slow) volume
lvcreate -L 4G -n display_cache cache_sanity /dev/sde1

Create cache data and cache metadata (fast) volumes
lvcreate -L 4G -n pool cache_sanity /dev/sdd1
lvcreate -L 12M -n pool_meta cache_sanity /dev/sdd1

Create cache pool volume by combining the cache data and cache metadata (fast) volumes
lvconvert --yes --type cache-pool --cachemode writeback -c 32 --poolmetadata cache_sanity/pool_meta cache_sanity/pool
  WARNING: Converting logical volume cache_sanity/pool and cache_sanity/pool_meta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)

# Where did the pool volume go?
[root@host-109 ~]# lvs -a -o +devices,cachemode
  Internal error: LV cache_sanity/pool has uknown feature flags 0.
  _do_report_object: report function failed for field cachemode
  LV              Attr       LSize   Pool Origin Data%  Meta%   Cpy%Sync Devices         Cachemode
  display_cache   -wi-a-----   4.00g                                     /dev/sde1(0)             
  [lvol0_pmspare] ewi-------  12.00m                                     /dev/sdc1(0)             
  [pool_cdata]    Cwi-------   4.00g                                     /dev/sdd1(0)             
  [pool_cmeta]    ewi-------  12.00m                                     /dev/sdd1(1024)          

[root@host-109 ~]# lvs --noheadings -o lv_name --select 'cachemode=writeback'
  Internal error: LV cache_sanity/pool has uknown feature flags 0.
  _do_report_object: report function failed for field cachemode

[root@host-109 ~]#  lvconvert --yes --type cache --cachepool cache_sanity/pool cache_sanity/display_cache
  Logical volume cache_sanity/display_cache is now cached.

[root@host-109 ~]# lvs --noheadings -o lv_name --select 'cachemode=writeback'
[root@host-109 ~]# dmsetup status
cache_sanity-display_cache: 0 8388608 cache 8 135/3072 128 0/65536 0 61 0 0 0 0 0 1 writethrough 2 migration_threshold 2048 smq 0 rw - 

[root@host-109 ~]# lvs -a -o +devices,cachemode
  LV                    Attr       LSize   Pool   Origin                Data%  Meta%   Cpy%Sync Devices                Cachemode   
  display_cache         Cwi-a-C---   4.00g [pool] [display_cache_corig] 0.00   4.39    100.00   display_cache_corig(0) writethrough
  [display_cache_corig] owi-aoC---   4.00g                                                      /dev/sdd1(0)                       
  [lvol0_pmspare]       ewi-------  12.00m                                                      /dev/sdc1(0)                       
  [pool]                Cwi---C---   4.00g                              0.00   4.39    100.00   pool_cdata(0)          writethrough
  [pool_cdata]          Cwi-ao----   4.00g                                                      /dev/sde1(0)                       
  [pool_cmeta]          ewi-ao----  12.00m                                                      /dev/sde1(1024)

Comment 1 Zdenek Kabelac 2015-08-20 09:26:09 UTC
All cache attributes are now primarily entered when the LV is cached - a cache-pool should be seen as a 'wrapper' for the data & metadata LVs (a building brick).

However, for backward compatibility we also fully support a cache-pool with all caching attributes set - so of course this bug needs a couple of fixes.

It is also now valid for an unused cache-pool to have no cache mode set - in that case the setting is taken from the configuration at the moment you attach the cache-pool to an LV.

Currently the only setting fixed when a cache-pool is created is its chunk size, which is needed to estimate the size of the cache-pool metadata volume.
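
For illustration, a minimal sketch of that flow (the VG and LV names here are hypothetical, and it assumes lvconvert accepts --cachemode at caching time, which is what the fix referenced below implements):

# Create an unused cache-pool without setting a cache mode;
# only the chunk size is fixed at this point.
lvconvert --yes --type cache-pool --poolmetadata vg/pool_meta vg/pool

# The cache mode is applied only when the pool is attached to an LV:
# taken from lvm.conf (cache_pool_cachemode) if unspecified, or given
# explicitly at attach time.
lvconvert --yes --type cache --cachemode writeback --cachepool vg/pool vg/origin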

Comment 3 Zdenek Kabelac 2015-08-26 11:46:33 UTC
Addressed with upstream commit:

https://www.redhat.com/archives/lvm-devel/2015-August/msg00185.html

Comment 6 Corey Marthaler 2015-09-02 21:00:12 UTC
Fix verified in the latest rpms.

3.10.0-313.el7.x86_64
lvm2-2.02.129-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
lvm2-libs-2.02.129-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
lvm2-cluster-2.02.129-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
device-mapper-1.02.106-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
device-mapper-libs-1.02.106-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
device-mapper-event-1.02.106-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
device-mapper-event-libs-1.02.106-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
device-mapper-persistent-data-0.5.5-1.el7    BUILT: Thu Aug 13 09:58:10 CDT 2015
cmirror-2.02.129-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
sanlock-3.2.4-1.el7    BUILT: Fri Jun 19 12:48:49 CDT 2015
sanlock-lib-3.2.4-1.el7    BUILT: Fri Jun 19 12:48:49 CDT 2015
lvm2-lockd-2.02.129-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015



[root@host-109 ~]# grep writethrough /etc/lvm/lvm.conf
        # Possible options are: writethrough, writeback.
        # writethrough - Data blocks are immediately written from
        cache_pool_cachemode = "writethrough"


[root@host-109 ~]# vgcreate cache_sanity /dev/sd[abcdefgh]1
  Volume group "cache_sanity" successfully created
[root@host-109 ~]# lvcreate -L 4G -n display_cache cache_sanity /dev/sde1
  Logical volume "display_cache" created.
[root@host-109 ~]# lvcreate -L 4G -n pool cache_sanity /dev/sdd1
  Logical volume "pool" created.
[root@host-109 ~]# lvcreate -L 12M -n pool_meta cache_sanity /dev/sdd1
  Logical volume "pool_meta" created.
[root@host-109 ~]# lvconvert --yes --type cache-pool --cachemode writeback -c 32 --poolmetadata cache_sanity/pool_meta cache_sanity/pool
  WARNING: Converting logical volume cache_sanity/pool and cache_sanity/pool_meta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted cache_sanity/pool to cache pool.
[root@host-109 ~]# lvs -a -o +devices,cachemode
  LV              VG            Attr       LSize   Pool Origin Data%  Meta% Cpy%Sync Devices         Cachemode
  display_cache   cache_sanity  -wi-a-----   4.00g                                   /dev/sde1(0)
  [lvol0_pmspare] cache_sanity  ewi-------  12.00m                                   /dev/sda1(0)
  pool            cache_sanity  Cwi---C---   4.00g                                   pool_cdata(0)   writeback
  [pool_cdata]    cache_sanity  Cwi-------   4.00g                                   /dev/sdd1(0)
  [pool_cmeta]    cache_sanity  ewi-------  12.00m                                   /dev/sdd1(1024)

[root@host-109 ~]# lvs --noheadings -o lv_name --select 'cachemode=writeback'
  pool
[root@host-109 ~]# lvconvert --yes --type cache --cachepool cache_sanity/pool cache_sanity/display_cache
  Logical volume cache_sanity/display_cache is now cached.
[root@host-109 ~]# lvs --noheadings -o lv_name --select 'cachemode=writeback'
  display_cache

[root@host-109 ~]# dmsetup status
cache_sanity-display_cache: 0 8388608 cache 8 266/3072 64 3/131072 6 64 0 0 0 3 0 1 writeback 2 migration_threshold 2048 smq 0 rw - 

[root@host-109 ~]# lvs -a -o +devices,cachemode
  LV                    VG            Attr       LSize Pool   Origin                Data%  Meta% Cpy%Sync Devices                Cachemode
  display_cache         cache_sanity  Cwi-a-C--- 4.00g [pool] [display_cache_corig] 0.00   8.66  0.00     display_cache_corig(0) writeback
  [display_cache_corig] cache_sanity  owi-aoC--- 4.00g                                                    /dev/sde1(0)
  [lvol0_pmspare]       cache_sanity  ewi------- 12.00m                                                   /dev/sda1(0)
  [pool]                cache_sanity  Cwi---C--- 4.00g                              0.00   8.66  0.00     pool_cdata(0)          writeback
  [pool_cdata]          cache_sanity  Cwi-ao---- 4.00g                                                    /dev/sdd1(0)
  [pool_cmeta]          cache_sanity  ewi-ao---- 12.00m                                                   /dev/sdd1(1024)

Comment 7 errata-xmlrpc 2015-11-19 12:47:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2147.html

