Bug 1789582

Summary: should writecache volumes have data and copy percents like regular cache volumes
Product: Red Hat Enterprise Linux 8
Reporter: Corey Marthaler <cmarthal>
Component: lvm2
lvm2 sub component: Cache Logical Volumes
Assignee: LVM and device-mapper development team <lvm-team>
QA Contact: cluster-qe <cluster-qe>
Status: CLOSED ERRATA
Severity: low
Priority: unspecified
CC: agk, heinzm, jbrassow, mcsontos, msnitzer, pasik, prajnoha, teigland, zkabelac
Version: 8.2
Flags: pm-rhel: mirror+
Target Milestone: rc
Target Release: 8.0
Hardware: x86_64
OS: Linux
Fixed In Version: lvm2-2.03.08-1.el8
Last Closed: 2020-04-28 16:58:59 UTC
Type: Bug

Description Corey Marthaler 2020-01-09 20:18:10 UTC
Description of problem:
I'm going through the cache test matrix to see which scenarios apply to writecache, and I noticed there are no data or copy percent attributes for writecache volumes. Should there be? Presumably the writecache pool volume can fill up like a normal cache pool, depending on the load?


# WRITECACHE
[root@hayes-02 ~]# lvcreate --wipesignatures y  -L 4G -n display_writecache writecache_sanity @slow
  Volume group "writecache_sanity" not found
  Cannot process volume group writecache_sanity
[root@hayes-02 ~]# lvcreate --wipesignatures y  -L 4G -n display_writecache VG @slow
  Logical volume "display_writecache" created.
[root@hayes-02 ~]# lvcreate  -L 4G -n pool VG @fast
  Logical volume "pool" created.
[root@hayes-02 ~]# lvchange -an VG
[root@hayes-02 ~]# lvconvert --yes --type writecache --cachevol VG/pool VG/display_writecache
  Logical volume VG/display_writecache now has write cache.
[root@hayes-02 ~]# lvchange -ay VG
[root@hayes-02 ~]# lvs -a -o +devices
  LV                          VG Attr       LSize Pool        Origin                      Data%  Meta%  Move Log Cpy%Sync Devices
  display_writecache          VG Cwi-a-C--- 4.00g [pool_cvol] [display_writecache_wcorig]                                 display_writecache_wcorig(0)
  [display_writecache_wcorig] VG owi-aoC--- 4.00g                                                                         /dev/sdd1(0)
  [pool_cvol]                 VG Cwi-aoC--- 4.00g                                                                         /dev/sde1(0)
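(While lvs shows no data percent here, the cache fill can still be read from the device-mapper target itself. A minimal sketch, assuming the usual VG-LV device-mapper name and the status field order from the kernel dm-writecache documentation, i.e. error indicator, total cache blocks, free blocks and blocks under writeback, so fill is roughly (total - free) / total:)

# device name assumed from the usual VG-LV dm naming; status fields per the
# kernel dm-writecache docs: error indicator, total blocks, free blocks,
# blocks under writeback
dmsetup status VG-display_writecache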




# CACHE
[root@hayes-02 ~]# lvcreate -L 4G -n origin VG
  Logical volume "origin" created.
[root@hayes-02 ~]# lvcreate -L 4G -n pool VG
  Logical volume "pool" created.
[root@hayes-02 ~]# lvcreate -L 12M -n pool_meta VG
  Logical volume "pool_meta" created.
[root@hayes-02 ~]# lvconvert --yes --type cache-pool --cachepolicy smq --cachemode writethrough -c 32 --poolmetadata VG/pool_meta VG/pool
  WARNING: Converting VG/pool and VG/pool_meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted VG/pool and VG/pool_meta to cache pool.
[root@hayes-02 ~]# lvconvert --yes --type cache --cachepool VG/pool VG/origin
  Logical volume VG/origin is now cached.
[root@hayes-02 ~]# lvs -a -o +devices
  LV                          VG Attr       LSize  Pool         Origin                      Data%  Meta%  Move Log Cpy%Sync Devices
  display_writecache          VG Cwi-a-C---  4.00g [pool_cvol]  [display_writecache_wcorig]                                 display_writecache_wcorig(0)
  [display_writecache_wcorig] VG owi-aoC---  4.00g                                                                          /dev/sdd1(0)
  [lvol0_pmspare]             VG ewi------- 12.00m                                                                          /dev/sdb1(2051)
  origin                      VG Cwi-a-C---  4.00g [pool_cpool] [origin_corig]              0.00   8.95            0.00     origin_corig(0)
  [origin_corig]              VG owi-aoC---  4.00g                                                                          /dev/sdb1(0)
  [pool_cpool]                VG Cwi---C---  4.00g                                          0.00   8.95            0.00     pool_cpool_cdata(0)
  [pool_cpool_cdata]          VG Cwi-ao----  4.00g                                                                          /dev/sdb1(1024)
  [pool_cpool_cmeta]          VG ewi-ao---- 12.00m                                                                          /dev/sdb1(2048)
  [pool_cvol]                 VG Cwi-aoC---  4.00g                                                                          /dev/sde1(0)
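(If only the percent columns are of interest, they can be requested explicitly with the standard lvs reporting fields; a minimal sketch against the cached LV created above, using the stock field names, nothing writecache-specific assumed:)

# report only the fields relevant to this bug for the cached LV
lvs -o lv_name,segtype,data_percent,metadata_percent,copy_percent VG/origin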


Version-Release number of selected component (if applicable):
kernel-4.18.0-167.el8    BUILT: Sat Dec 14 19:43:52 CST 2019
lvm2-2.03.07-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
lvm2-libs-2.03.07-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
lvm2-dbusd-2.03.07-1.el8    BUILT: Mon Dec  2 00:12:23 CST 2019
device-mapper-1.02.167-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
device-mapper-libs-1.02.167-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
device-mapper-event-1.02.167-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
device-mapper-event-libs-1.02.167-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019

Comment 1 David Teigland 2020-02-03 17:59:54 UTC
data% for writecache indicates how full the cache is; implemented in master here:
https://sourceware.org/git/?p=lvm2.git;a=commit;h=bddbbcb98ca135b91aa688c04c1c8be7d76a2bd1

# lvs -a foo
  LV            VG  Attr       LSize   Pool        Origin        Data%  Meta%  Move Log Cpy%Sync Convert
  [fast_cvol]   foo Cwi-aoC--- 900.00m                                                                  
  main          foo Cwi-aoC--- 500.00g [fast_cvol] [main_wcorig] 0.22                                   
  [main_wcorig] foo owi-aoC--- 500.00g                                                                  

# dd if=/dev/zero of=/mnt/9M bs=1M count=9

# sync

# lvs foo
  LV   VG  Attr       LSize   Pool        Origin        Data%  Meta%  Move Log Cpy%Sync Convert
  main foo Cwi-aoC--- 500.00g [fast_cvol] [main_wcorig] 1.23

Comment 4 Corey Marthaler 2020-02-24 23:01:23 UTC
Fix verified in the latest rpms. Data percent is now available and increases as more data is written to the volume.

kernel-4.18.0-179.el8    BUILT: Fri Feb 14 17:03:01 CST 2020
lvm2-2.03.08-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
lvm2-libs-2.03.08-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
device-mapper-1.02.169-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
device-mapper-libs-1.02.169-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
device-mapper-event-1.02.169-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
device-mapper-event-libs-1.02.169-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020



SCENARIO - [cache_origin_rename_in_between_luks_encryption_operations]
Create a writecache volume with fs data, encrypt origin, and then rename the cache origin volume (pool rename not supported) in between re-encryption stack operations

*** Writecache info for this scenario ***
*  origin (slow):  /dev/sdi1
*  pool (fast):    /dev/sde1
************************************

Adding "slow" and "fast" tags to corresponding pvs
Create origin (slow) volume
lvcreate --wipesignatures y  -L 4G -n rename_orig_A writecache_sanity @slow

Create writecache cvol (fast) volumes
lvcreate  -L 4G -n rename_pool_A writecache_sanity @fast

Deactivate both fast and slow volumes before conversion to writecache
Create writecached volume by combining the cache pool (fast) and origin (slow) volumes
lvconvert --yes --type writecache --cachesettings ' low_watermark=42 high_watermark=56 writeback_jobs=2389 autocommit_blocks=2802 autocommit_time=2548' --cachevol writecache_sanity/rename_pool_A writecache_sanity/rename_orig_A
Activating volume: rename_orig_A

Encrypting rename_orig_A volume
cryptsetup luksFormat /dev/writecache_sanity/rename_orig_A
cryptsetup luksOpen /dev/writecache_sanity/rename_orig_A luks_rename_orig_A 
Placing an xfs filesystem on origin volume
Mounting origin volume

Writing files to /mnt/rename_orig_A
Checking files on /mnt/rename_orig_A

syncing before snap creation...
data percent: 9

[root@hayes-02 ~]# lvs -a -o +devices
  LV                     VG                Attr       LSize Pool                 Origin                 Data%  Meta%  Move Log Cpy%Sync Convert Devices                
  rename_orig_A          writecache_sanity Cwi-aoC--- 4.00g [rename_pool_A_cvol] [rename_orig_A_wcorig] 9.34                                    rename_orig_A_wcorig(0)
  [rename_orig_A_wcorig] writecache_sanity owi-aoC--- 4.00g                                                                                     /dev/sdi1(0)           
  [rename_pool_A_cvol]   writecache_sanity Cwi-aoC--- 4.00g                                                                                     /dev/sde1(0)           

[root@hayes-02 ~]# df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/luks_rename_orig_A  4.0G  416M  3.6G  11% /mnt/rename_orig_A
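(The "data percent: 9" line above is presumably just the integer part of the lvs data_percent value; a sketch of how a harness could pull it, assuming the scenario's VG/LV names:)

# read data_percent for the writecached origin and log the integer part
pct=$(lvs --noheadings -o data_percent writecache_sanity/rename_orig_A | tr -d ' ')
echo "data percent: ${pct%%.*}"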

Comment 6 errata-xmlrpc 2020-04-28 16:58:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:1881