
Bug 1592490

Summary: cache metadata format no longer honored
Product: Red Hat Enterprise Linux 7
Component: lvm2
lvm2 sub component: Cache Logical Volumes
Version: 7.6
Status: CLOSED ERRATA
Severity: medium
Priority: unspecified
Keywords: Regression, Reopened
Target Milestone: rc
Hardware: x86_64
OS: Linux
Reporter: Corey Marthaler <cmarthal>
Assignee: Zdenek Kabelac <zkabelac>
QA Contact: cluster-qe <cluster-qe>
CC: agk, heinzm, jbrassow, msnitzer, prajnoha, rhandlin, zkabelac
Fixed In Version: lvm2-2.02.180-4.el7
Doc Type: If docs needed, set a value
Type: Bug
Last Closed: 2018-10-30 11:03:47 UTC

Description Corey Marthaler 2018-06-18 16:13:25 UTC
Description of problem:

[root@host-086 ~]# pvs -a -o +pv_tags
  PV          VG            Fmt  Attr PSize   PFree   PV Tags
  /dev/sda1                      ---       0       0         
  /dev/sdb1   cache_sanity  lvm2 a--  <24.99g <24.99g        
  /dev/sdc1   cache_sanity  lvm2 a--  <24.99g <24.99g fast   
  /dev/sdd1                      ---       0       0         
  /dev/sde1                      ---       0       0         
  /dev/sdf1   cache_sanity  lvm2 a--  <24.99g <24.99g slow   
  /dev/sdg1                      ---       0       0         
  /dev/sdh1   cache_sanity  lvm2 a--  <24.99g <24.99g        
  /dev/sdi1                      ---       0       0         

[root@host-086 ~]# lvcreate --wipesignatures y  -L 4G -n corigin cache_sanity @slow
  Logical volume "corigin" created.
[root@host-086 ~]# lvcreate  -L 2G -n cache_then_deactivate_pool cache_sanity @fast
  Logical volume "cache_then_deactivate_pool" created.
[root@host-086 ~]# lvcreate  -L 12M -n cache_then_deactivate_pool_meta cache_sanity @fast
  Logical volume "cache_then_deactivate_pool_meta" created.

[root@host-086 ~]# lvconvert --yes --type cache-pool --cachepolicy mq --cachemode writeback -c 64 --poolmetadata cache_sanity/cache_then_deactivate_pool_meta cache_sanity/cache_then_deactivate_pool
  WARNING: Converting cache_sanity/cache_then_deactivate_pool and cache_sanity/cache_then_deactivate_pool_meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted cache_sanity/cache_then_deactivate_pool and cache_sanity/cache_then_deactivate_pool_meta to cache pool.
[root@host-086 ~]# lvconvert --yes --type cache --cachemetadataformat 2 --cachepool cache_sanity/cache_then_deactivate_pool cache_sanity/corigin
  Logical volume cache_sanity/corigin is now cached.

# The lvm cache metadata format reported below (CMFmt 1) doesn't match what was specified (2)

[root@host-086 ~]# lvs -a -o +cachemetadataformat
  LV                                 VG            Attr       LSize   Pool                         Origin          Data%  Meta%  Move Log Cpy%Sync Convert CMFmt
  [cache_then_deactivate_pool]       cache_sanity  Cwi---C---   2.00g                                              0.03   2.38            0.00                 1
  [cache_then_deactivate_pool_cdata] cache_sanity  Cwi-ao----   2.00g                                                                                           
  [cache_then_deactivate_pool_cmeta] cache_sanity  ewi-ao----  12.00m                                                                                           
  corigin                            cache_sanity  Cwi-a-C---   4.00g [cache_then_deactivate_pool] [corigin_corig] 0.03   2.38            0.00                 1
  [corigin_corig]                    cache_sanity  owi-aoC---   4.00g                                                                                           
  [lvol0_pmspare]                    cache_sanity  ewi-------  12.00m                                                                                                                                                                      
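
The CMFmt column above comes from the lvm2 metadata. The kernel's own view can be cross-checked with dmsetup: a format 2 cache lists a "metadata2" feature token in its status line (it is present in the RHEL 7.5 dmsetup output further down in this bug and missing from the 7.6 output). A minimal sketch, assuming the usual vg-lv device-mapper naming for the cached LV created above:

# Prints "metadata2" when the kernel is using cache metadata format 2 for this device
dmsetup status cache_sanity-corigin | grep -o metadata2 || echo "no metadata2 token (format 1)"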


Version-Release number of selected component (if applicable):
3.10.0-906.el7.x86_64

lvm2-2.02.179-1.el7    BUILT: Mon Jun 18 01:12:41 CDT 2018
lvm2-libs-2.02.179-1.el7    BUILT: Mon Jun 18 01:12:41 CDT 2018
lvm2-cluster-2.02.179-1.el7    BUILT: Mon Jun 18 01:12:41 CDT 2018
lvm2-lockd-2.02.179-1.el7    BUILT: Mon Jun 18 01:12:41 CDT 2018
lvm2-python-boom-0.8.5-6.el7    BUILT: Mon Jun 18 01:16:13 CDT 2018
cmirror-2.02.179-1.el7    BUILT: Mon Jun 18 01:12:41 CDT 2018
device-mapper-1.02.148-1.el7    BUILT: Mon Jun 18 01:12:41 CDT 2018
device-mapper-libs-1.02.148-1.el7    BUILT: Mon Jun 18 01:12:41 CDT 2018
device-mapper-event-1.02.148-1.el7    BUILT: Mon Jun 18 01:12:41 CDT 2018
device-mapper-event-libs-1.02.148-1.el7    BUILT: Mon Jun 18 01:12:41 CDT 2018
device-mapper-persistent-data-0.7.3-3.el7    BUILT: Tue Nov 14 05:07:18 CST 2017


How reproducible:
Every time

Comment 3 Zdenek Kabelac 2018-06-22 13:16:05 UTC
The MQ policy is the old one and has not been upgraded to format 2.

Users are expected to use the SMQ policy.

The kernel handles MQ as an alias for SMQ, but in lvm2 terminology we keep the historical name associated with format 1 (the kernel also ignores any MQ-related parameters, so the alias is not an exact match, since SMQ works slightly differently).

So at the moment this works as designed, unless I'm convinced we want to support creating caches with the MQ policy and format 2.
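
Based on the above, requesting format 2 together with the SMQ policy should be honored; a minimal sketch reusing the VG and tag layout from the description (the volume names here are made up for illustration):

# Origin (slow) plus cache pool data and metadata (fast) volumes
lvcreate --wipesignatures y -L 4G -n smq_origin cache_sanity @slow
lvcreate -L 2G -n smq_pool cache_sanity @fast
lvcreate -L 12M -n smq_pool_meta cache_sanity @fast
# SMQ is the policy that format 2 is associated with
lvconvert --yes --type cache-pool --cachepolicy smq --cachemode writeback -c 64 --poolmetadata cache_sanity/smq_pool_meta cache_sanity/smq_pool
lvconvert --yes --type cache --cachemetadataformat 2 --cachepool cache_sanity/smq_pool cache_sanity/smq_origin
lvs -a -o +cachepolicy,cachemetadataformat cache_sanity   # CMFmt expected to show 2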

Comment 4 Corey Marthaler 2018-06-22 19:13:23 UTC
Why the sudden change in behavior, and should it be documented? This was allowed in all prior RHEL 7 releases and now changes in 7.6.

### RHEL7.5

# Create origin (slow) volume
lvcreate --wipesignatures y --activate ey -L 4G -n display_cache cache_sanity @slow

# Create cache data and cache metadata (fast) volumes
lvcreate --activate ey -L 4G -n pool cache_sanity @fast
lvcreate --activate ey -L 12M -n pool_meta cache_sanity @fast

# Create cache pool volume by combining the cache data and cache metadata (fast) volumes with policy: mq  mode: writethrough
lvconvert --yes --type cache-pool --cachepolicy mq --cachemode writethrough -c 32 --poolmetadata cache_sanity/pool_meta cache_sanity/pool
  WARNING: Converting cache_sanity/pool and cache_sanity/pool_meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)

# Create cached volume by combining the cache pool (fast) and origin (slow) volumes
lvconvert --yes --type cache --cachemetadataformat 2 --cachepool cache_sanity/pool cache_sanity/display_cache

[root@harding-02 ~]# lvs -a -o +devices,cachepolicy,cachemetadataformat
 LV                    VG            Attr       LSize    Pool   Origin                Data%  Meta% Cpy%Sync Devices                   CachePolicy CMFmt
 display_cache         cache_sanity  Cwi-a-C---    4.00g [pool] [display_cache_corig] 0.00   8.95  0.00     display_cache_corig(0)    mq              2
 [display_cache_corig] cache_sanity  owi-aoC---    4.00g                                                    /dev/mapper/mpathe1(0)
 [lvol0_pmspare]       cache_sanity  ewi-------   12.00m                                                    /dev/mapper/mpatha1(1027)
 [pool]                cache_sanity  Cwi---C---    4.00g                              0.00   8.95  0.00     pool_cdata(0)             mq              2
 [pool_cdata]          cache_sanity  Cwi-ao----    4.00g                                                    /dev/mapper/mpatha1(0)
 [pool_cmeta]          cache_sanity  ewi-ao----   12.00m                                                    /dev/mapper/mpatha1(1024)

[root@harding-02 ~]# dmsetup status | grep cache_sanity-display_cache:
cache_sanity-display_cache: 0 8388608 cache 8 272/3072 64 0/131072 0 64 0 0 0 0 0 2 metadata2 writethrough 2 migration_threshold 2048 mq 10 random_threshold 0 sequential_threshold 0 discard_promote_adjustment 0 read_promote_adjustment 0 write_promote_adjustment 0 rw - 




### RHEL7.6

# Create origin (slow) volume
lvcreate --wipesignatures y  -L 4G -n display_cache cache_sanity @slow

# Create cache data and cache metadata (fast) volumes
lvcreate  -L 4G -n pool cache_sanity @fast
lvcreate  -L 12M -n pool_meta cache_sanity @fast

# Create cache pool volume by combining the cache data and cache metadata (fast) volumes with policy: mq  mode: writethrough
lvconvert --yes --type cache-pool --cachepolicy mq --cachemode writethrough -c 64 --poolmetadata cache_sanity/pool_meta cache_sanity/pool
  WARNING: Converting cache_sanity/pool and cache_sanity/pool_meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)

# Create cached volume by combining the cache pool (fast) and origin (slow) volumes
lvconvert --yes --type cache --cachemetadataformat 2 --cachepool cache_sanity/pool cache_sanity/display_cache

[root@host-086 ~]#  lvs -a -o +devices,cachepolicy,cachemetadataformat
 LV                    VG            Attr       LSize   Pool   Origin                Data%  Meta% Cpy%Sync Devices                CachePolicy CMFmt
 display_cache         cache_sanity  Cwi-a-C---   4.00g [pool] [display_cache_corig] 0.00   4.46  0.00     display_cache_corig(0) mq              1
 [display_cache_corig] cache_sanity  owi-aoC---   4.00g                                                    /dev/sdc1(0)
 [lvol0_pmspare]       cache_sanity  ewi-------  12.00m                                                    /dev/sdb1(1027)
 [pool]                cache_sanity  Cwi---C---   4.00g                              0.00   4.46  0.00     pool_cdata(0)          mq              1
 [pool_cdata]          cache_sanity  Cwi-ao----   4.00g                                                    /dev/sdb1(0)
 [pool_cmeta]          cache_sanity  ewi-ao----  12.00m                                                    /dev/sdb1(1024)

[root@host-086 ~]# dmsetup status | grep cache_sanity-display_cache:
cache_sanity-display_cache: 0 8388608 cache 8 135/3072 128 0/65536 0 49 0 0 0 0 0 1 writethrough 2 migration_threshold 2048 mq 10 random_threshold 0 sequential_threshold 0 discard_promote_adjustment 0 read_promote_adjustment 0 write_promote_adjustment 0 rw -

Comment 5 Zdenek Kabelac 2018-06-22 19:35:46 UTC
Hmm, so this behavior had already leaked into 7.5, where format2 support was still being added and not all of the validation was in place.

But in that case we may need to preserve this logic; it's just one extra if case, so not a big issue really.

I'll take a closer look at the impact; we could consider 'format2' as taking precedence over the older 'mq' policy.

Comment 6 Zdenek Kabelac 2018-08-07 22:41:22 UTC
Dropped the connection between policy and format, so they can be set independently.

https://www.redhat.com/archives/lvm-devel/2018-August/msg00013.html
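
With the connection dropped, an explicit format request alongside the mq policy should now be honored; a minimal usage sketch with the same flags as the original reproducer (the verification in the next comment exercises exactly this path):

lvconvert --yes --type cache-pool --cachepolicy mq --cachemode writeback -c 64 --poolmetadata cache_sanity/cache_then_deactivate_pool_meta cache_sanity/cache_then_deactivate_pool
lvconvert --yes --type cache --cachemetadataformat 2 --cachepool cache_sanity/cache_then_deactivate_pool cache_sanity/corigin
lvs -a -o +cachemetadataformat cache_sanity   # CMFmt expected to show 2 after the fix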

Comment 8 Corey Marthaler 2018-08-24 21:04:54 UTC
Fix verified in the latest rpms.

3.10.0-937.el7.x86_64
lvm2-2.02.180-5.el7    BUILT: Tue Aug 21 11:29:37 CDT 2018
lvm2-libs-2.02.180-5.el7    BUILT: Tue Aug 21 11:29:37 CDT 2018
lvm2-cluster-2.02.180-5.el7    BUILT: Tue Aug 21 11:29:37 CDT 2018
lvm2-lockd-2.02.180-5.el7    BUILT: Tue Aug 21 11:29:37 CDT 2018
lvm2-python-boom-0.9-8.el7    BUILT: Tue Aug 21 11:28:32 CDT 2018
cmirror-2.02.180-5.el7    BUILT: Tue Aug 21 11:29:37 CDT 2018
device-mapper-1.02.149-5.el7    BUILT: Tue Aug 21 11:29:37 CDT 2018
device-mapper-libs-1.02.149-5.el7    BUILT: Tue Aug 21 11:29:37 CDT 2018
device-mapper-event-1.02.149-5.el7    BUILT: Tue Aug 21 11:29:37 CDT 2018
device-mapper-event-libs-1.02.149-5.el7    BUILT: Tue Aug 21 11:29:37 CDT 2018
device-mapper-persistent-data-0.7.3-3.el7    BUILT: Tue Nov 14 05:07:18 CST 2017



[root@hayes-01 ~]# lvcreate --wipesignatures y  -L 4G -n corigin cache_sanity /dev/sdb1
  Logical volume "corigin" created.
[root@hayes-01 ~]# lvcreate  -L 2G -n cache_then_deactivate_pool cache_sanity /dev/sdc1
  Logical volume "cache_then_deactivate_pool" created.
[root@hayes-01 ~]# lvcreate  -L 12M -n cache_then_deactivate_pool_meta cache_sanity /dev/sdc1
  Logical volume "cache_then_deactivate_pool_meta" created.

[root@hayes-01 ~]# lvconvert --yes --type cache-pool --cachepolicy mq --cachemode writeback -c 64 --poolmetadata cache_sanity/cache_then_deactivate_pool_meta cache_sanity/cache_then_deactivate_pool
  WARNING: Converting cache_sanity/cache_then_deactivate_pool and cache_sanity/cache_then_deactivate_pool_meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted cache_sanity/cache_then_deactivate_pool and cache_sanity/cache_then_deactivate_pool_meta to cache pool.
[root@hayes-01 ~]# lvconvert --yes --type cache --cachemetadataformat 2 --cachepool cache_sanity/cache_then_deactivate_pool cache_sanity/corigin
  Logical volume cache_sanity/corigin is now cached.

[root@hayes-01 ~]# lvs -a -o +cachemetadataformat
  LV                                 Attr       LSize  Pool                         Origin          Data% Meta%   Cpy%Sync CMFmt
  [cache_then_deactivate_pool]       Cwi---C---  2.00g                                              0.03  2.51    0.00         2
  [cache_then_deactivate_pool_cdata] Cwi-ao----  2.00g
  [cache_then_deactivate_pool_cmeta] ewi-ao---- 12.00m
  corigin                            Cwi-a-C---  4.00g [cache_then_deactivate_pool] [corigin_corig] 0.03  2.51    0.00         2
  [corigin_corig]                    owi-aoC---  4.00g
  [lvol0_pmspare]                    ewi------- 12.00m

Comment 10 errata-xmlrpc 2018-10-30 11:03:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3193