Bug 1789582 - should writecache volumes have data and copy percents like regular cache volumes
Summary: should writecache volumes have data and copy percents like regular cache volumes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: lvm2
Version: 8.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: rc
Target Release: 8.0
Assignee: LVM and device-mapper development team
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-01-09 20:18 UTC by Corey Marthaler
Modified: 2021-09-07 11:55 UTC
CC List: 9 users

Fixed In Version: lvm2-2.03.08-1.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-04-28 16:58:59 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+


Attachments: (none)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-32889 0 None None None 2021-09-07 11:53:35 UTC
Red Hat Product Errata RHEA-2020:1881 0 None None None 2020-04-28 16:59:15 UTC

Description Corey Marthaler 2020-01-09 20:18:10 UTC
Description of problem:
I'm going through the cache test matrix to see which scenarios apply to writecache, and I noticed there are no data or copy percent attributes. Should there be? Presumably the writecache pool volume can fill up like a normal cache pool depending on the load.
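For reference, data and copy percent are ordinary lvs reporting fields, so they can be queried directly; e.g., against the VG used below (the field list is just an illustration):

lvs -o lv_name,segtype,data_percent,copy_percent,metadata_percent VG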


# WRITECACHE
[root@hayes-02 ~]# lvcreate --wipesignatures y  -L 4G -n display_writecache writecache_sanity @slow
  Volume group "writecache_sanity" not found
  Cannot process volume group writecache_sanity
[root@hayes-02 ~]# lvcreate --wipesignatures y  -L 4G -n display_writecache VG @slow
  Logical volume "display_writecache" created.
[root@hayes-02 ~]# lvcreate  -L 4G -n pool VG @fast
  Logical volume "pool" created.
[root@hayes-02 ~]# lvchange -an VG
[root@hayes-02 ~]# lvconvert --yes --type writecache --cachevol VG/pool VG/display_writecache
  Logical volume VG/display_writecache now has write cache.
[root@hayes-02 ~]# lvchange -ay VG
[root@hayes-02 ~]# lvs -a -o +devices
  LV                          VG Attr       LSize Pool        Origin                      Data%  Meta%  Move Log Cpy%Sync Devices
  display_writecache          VG Cwi-a-C--- 4.00g [pool_cvol] [display_writecache_wcorig]                                 display_writecache_wcorig(0)
  [display_writecache_wcorig] VG owi-aoC--- 4.00g                                                                         /dev/sdd1(0)
  [pool_cvol]                 VG Cwi-aoC--- 4.00g                                                                         /dev/sde1(0)
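To make the gap explicit, a direct query of the new writecache LV comes back empty on this version (same VG/LV names as above):

lvs --noheadings -o data_percent VG/display_writecache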




# CACHE
[root@hayes-02 ~]# lvcreate -L 4G -n origin VG
  Logical volume "origin" created.
[root@hayes-02 ~]# lvcreate -L 4G -n pool VG
  Logical volume "pool" created.
[root@hayes-02 ~]# lvcreate -L 12M -n pool_meta VG
  Logical volume "pool_meta" created.
[root@hayes-02 ~]# lvconvert --yes --type cache-pool --cachepolicy smq --cachemode writethrough -c 32 --poolmetadata VG/pool_meta VG/pool
  WARNING: Converting VG/pool and VG/pool_meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted VG/pool and VG/pool_meta to cache pool.
[root@hayes-02 ~]# lvconvert --yes --type cache --cachepool VG/pool VG/origin
  Logical volume VG/origin is now cached.
[root@hayes-02 ~]# lvs -a -o +devices
  LV                          VG Attr       LSize  Pool         Origin                      Data%  Meta%  Move Log Cpy%Sync Devices
  display_writecache          VG Cwi-a-C---  4.00g [pool_cvol]  [display_writecache_wcorig]                                 display_writecache_wcorig(0)
  [display_writecache_wcorig] VG owi-aoC---  4.00g                                                                          /dev/sdd1(0)
  [lvol0_pmspare]             VG ewi------- 12.00m                                                                          /dev/sdb1(2051)
  origin                      VG Cwi-a-C---  4.00g [pool_cpool] [origin_corig]              0.00   8.95            0.00     origin_corig(0)
  [origin_corig]              VG owi-aoC---  4.00g                                                                          /dev/sdb1(0)
  [pool_cpool]                VG Cwi---C---  4.00g                                          0.00   8.95            0.00     pool_cpool_cdata(0)
  [pool_cpool_cdata]          VG Cwi-ao----  4.00g                                                                          /dev/sdb1(1024)
  [pool_cpool_cmeta]          VG ewi-ao---- 12.00m                                                                          /dev/sdb1(2048)
  [pool_cvol]                 VG Cwi-aoC---  4.00g                                                                          /dev/sde1(0)


Version-Release number of selected component (if applicable):
kernel-4.18.0-167.el8    BUILT: Sat Dec 14 19:43:52 CST 2019
lvm2-2.03.07-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
lvm2-libs-2.03.07-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
lvm2-dbusd-2.03.07-1.el8    BUILT: Mon Dec  2 00:12:23 CST 2019
device-mapper-1.02.167-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
device-mapper-libs-1.02.167-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
device-mapper-event-1.02.167-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
device-mapper-event-libs-1.02.167-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019

Comment 1 David Teigland 2020-02-03 17:59:54 UTC
data% for writecache indicates how full the cache is; the change is in master here:
https://sourceware.org/git/?p=lvm2.git;a=commit;h=bddbbcb98ca135b91aa688c04c1c8be7d76a2bd1

# lvs -a foo
  LV            VG  Attr       LSize   Pool        Origin        Data%  Meta%  Move Log Cpy%Sync Convert
  [fast_cvol]   foo Cwi-aoC--- 900.00m                                                                  
  main          foo Cwi-aoC--- 500.00g [fast_cvol] [main_wcorig] 0.22                                   
  [main_wcorig] foo owi-aoC--- 500.00g                                                                  

# dd if=/dev/zero of=/mnt/9M bs=1M count=9

# sync

# lvs foo
  LV   VG  Attr       LSize   Pool        Origin        Data%  Meta%  Move Log Cpy%Sync Convert
  main foo Cwi-aoC--- 500.00g [fast_cvol] [main_wcorig] 1.23
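A scripted check of the new field could be as simple as the following, reusing the VG/LV names from the example above; it prints only the percentage column:

lvs --noheadings -o data_percent foo/main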

Comment 4 Corey Marthaler 2020-02-24 23:01:23 UTC
Fix verified in the latest rpms. Data percent is now available and increases as more data is written to the volume (a quick way to watch this is sketched below, after the package list).

kernel-4.18.0-179.el8    BUILT: Fri Feb 14 17:03:01 CST 2020
lvm2-2.03.08-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
lvm2-libs-2.03.08-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
device-mapper-1.02.169-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
device-mapper-libs-1.02.169-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
device-mapper-event-1.02.169-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
device-mapper-event-libs-1.02.169-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
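
As a rough illustration of the verification, a loop like the one below can be used to watch the percentage climb during a write; the LV and mount point are taken from the scenario that follows, and the write size is arbitrary:

# write some data in the background and poll the writecache fill percentage
dd if=/dev/zero of=/mnt/rename_orig_A/fill bs=1M count=256 oflag=direct &
while kill -0 $! 2>/dev/null; do
    lvs --noheadings -o data_percent writecache_sanity/rename_orig_A
    sleep 1
done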



SCENARIO - [cache_origin_rename_in_between_luks_encryption_operations]
Create a writecache volume with filesystem data, encrypt the origin, and then rename the cache origin volume (pool rename is not supported) in between re-encryption stack operations

*** Writecache info for this scenario ***
*  origin (slow):  /dev/sdi1
*  pool (fast):    /dev/sde1
************************************

Adding "slow" and "fast" tags to corresponding pvs
Create origin (slow) volume
lvcreate --wipesignatures y  -L 4G -n rename_orig_A writecache_sanity @slow

Create writecache cvol (fast) volumes
lvcreate  -L 4G -n rename_pool_A writecache_sanity @fast

Deactivate both fast and slow volumes before conversion to writecache
Create writecached volume by combining the cache pool (fast) and origin (slow) volumes
lvconvert --yes --type writecache --cachesettings ' low_watermark=42 high_watermark=56 writeback_jobs=2389 autocommit_blocks=2802 autocommit_time=2548' --cachevol writecache_sanity/rename_pool_A writecache_sanity/rename_orig_A
Activating volume: rename_orig_A

Encrypting rename_orig_A volume
cryptsetup luksFormat /dev/writecache_sanity/rename_orig_A
cryptsetup luksOpen /dev/writecache_sanity/rename_orig_A luks_rename_orig_A 
Placing an xfs filesystem on origin volume
Mounting origin volume
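(The harness does not echo these two steps; presumably they amount to something like the following, using the LUKS mapping opened above:)
mkfs.xfs /dev/mapper/luks_rename_orig_A
mkdir -p /mnt/rename_orig_A
mount /dev/mapper/luks_rename_orig_A /mnt/rename_orig_A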

Writing files to /mnt/rename_orig_A
Checking files on /mnt/rename_orig_A

syncing before snap creation...
data percent: 9

[root@hayes-02 ~]# lvs -a -o +devices
  LV                     VG                Attr       LSize Pool                 Origin                 Data%  Meta%  Move Log Cpy%Sync Convert Devices                
  rename_orig_A          writecache_sanity Cwi-aoC--- 4.00g [rename_pool_A_cvol] [rename_orig_A_wcorig] 9.34                                    rename_orig_A_wcorig(0)
  [rename_orig_A_wcorig] writecache_sanity owi-aoC--- 4.00g                                                                                     /dev/sdi1(0)           
  [rename_pool_A_cvol]   writecache_sanity Cwi-aoC--- 4.00g                                                                                     /dev/sde1(0)           
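The 9.34% above is derived from the block usage reported by the dm-writecache target; it can be cross-checked against the raw target status, assuming the usual VG-LV device-mapper naming:

dmsetup status writecache_sanity-rename_orig_A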

[root@hayes-02 ~]# df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/luks_rename_orig_A  4.0G  416M  3.6G  11% /mnt/rename_orig_A

Comment 6 errata-xmlrpc 2020-04-28 16:58:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:1881

