Bug 1328245

Summary: lvchange --zero fails to update the state of active thin pools
Product: Red Hat Enterprise Linux 6
Reporter: Ben Turner <bturner>
Component: lvm2
Assignee: Zdenek Kabelac <zkabelac>
lvm2 sub component: Changing Logical Volumes (RHEL6)
QA Contact: cluster-qe <cluster-qe>
Status: CLOSED ERRATA
Docs Contact: Milan Navratil <mnavrati>
Severity: medium
Priority: high
CC: agk, asoman, bturner, heinzm, jbrassow, msnitzer, pprakash, prajnoha, prockai, rbednar, rcyriac, rnachimu, tlavigne, zkabelac
Version: 6.8
Target Milestone: rc
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: lvm2-2.02.143-9.el6
Doc Type: Bug Fix
Doc Text:
Change now takes effect immediately after using "lvchange --zero n" against an active thin pool. Previously, when the "lvchange --zero n" command was used against an active thin pool, the change did not take effect until the next time the pool was deactivated. With this update, the change takes effect immediately.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-03-21 12:02:37 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Ben Turner 2016-04-18 20:10:33 UTC
Description of problem:

The behavior of lvchange --zero changed in 6.8: it no longer updates the state of active thin pools.

Version-Release number of selected component (if applicable):

lvm2-2.02.143-7.el6.x86_64

How reproducible:

Every time.

Steps to Reproduce:
1.  Run lvchange --zero n on an active thin pool (one with an active thin volume).
2.  Check dmsetup table; the zeroing-disabled state (skip_block_zeroing) does not show up until the thin pool is deactivated and reactivated (see the reproducer sketch below).
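
A minimal reproducer sketch, assuming a spare disk (/dev/sdX) and made-up VG/LV names (vg_test, pool, thin_lv), none of which come from this report:

#PV and VG on the spare disk
pvcreate /dev/sdX
vgcreate vg_test /dev/sdX
#Thin pool plus one thin volume - the active thin volume is what triggers the bug
lvcreate --thinpool vg_test/pool --size 4G
lvcreate --thin --virtualsize 2G --name thin_lv vg_test/pool
#Disable zeroing on the live pool and check the kernel table
lvchange --zero n vg_test/pool
dmsetup table | grep thin-pool
#Expected: the thin-pool line shows skip_block_zeroing; on the affected build it does not
#change until the pool is deactivated and reactivated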

Actual results:

The zeroing change is not applied to the live table of the active thin pool; skip_block_zeroing only appears after the pool is deactivated and reactivated.

Expected results:

The change takes effect on the active pool immediately. From what I understand this change was intended; if so, it just needs to be documented.

Additional info:

See https://bugzilla.redhat.com/show_bug.cgi?id=1324236 for further info on this issue.

Comment 2 Alasdair Kergon 2016-04-18 20:13:04 UTC
No, it wasn't an intended change.

Comment 8 Ramesh N 2016-05-05 10:35:01 UTC
I am trying, but I am not able to reproduce the bug in a VM environment. I see the 'skip_block_zeroing' flag after lvchange. I am not sure what is different in my VM setup.

[root@dhcp35-4 ~]# dmsetup table 
vg--brick1-pool--brick1-tpool: 0 41732096 thin-pool 253:2 253:3 512 0 1 skip_block_zeroing 
vg--brick1-pool--brick1_tdata: 0 41732096 linear 252:16 512
vg--brick1-pool--brick1_tmeta: 0 208896 linear 252:16 41732608
vg_dhcp354-lv_swap: 0 4194304 linear 252:2 36718592
vg_dhcp354-lv_root: 0 36716544 linear 252:2 2048
vg--brick1-pool--brick1: 0 41732096 linear 253:4 0
vg--brick1-brick1: 0 41940992 thin 253:4 1
[root@dhcp35-4 ~]#

[root@dhcp35-4 ~]# rpm -qa|grep lvm
mesa-private-llvm-3.6.2-1.el6.x86_64
lvm2-libs-2.02.143-7.el6.x86_64
lvm2-2.02.143-7.el6.x86_64
[root@dhcp35-4 ~]# 


Ben, are you able to reproduce this issue consistently?

Comment 9 Ramesh N 2016-05-31 07:33:17 UTC
Today I again tried to reproduce this bug with the latest 6.8 VM, but I am not able to. I can see the 'skip_block_zeroing' flag immediately after lvchange.

#Disks in the VM
[root@dhcp42-241 ~]# lsblk
NAME                            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                              11:0    1 1024M  0 rom  
vda                             252:0    0   15G  0 disk 
├─vda1                          252:1    0  500M  0 part /boot
└─vda2                          252:2    0 14.5G  0 part 
  ├─vg_dhcp42241-lv_root (dm-0) 253:0    0   13G  0 lvm  /
  └─vg_dhcp42241-lv_swap (dm-1) 253:1    0  1.5G  0 lvm  [SWAP]
vdb                             252:16   0   10G  0 disk 
vdc                             252:32   0   10G  0 disk 

#PV Create
[root@dhcp42-241 ~]# pvcreate /dev/vdc
  Physical volume "/dev/vdc" successfully created

#VG Create
[root@dhcp42-241 ~]# vgcreate rhsvg1 /dev/vdc
  Volume group "rhsvg1" successfully created

#LV Create
[root@dhcp42-241 ~]# lvcreate --thinpool rhsvg1/tp1 --size 4G --chunksize 256K
  Logical volume "tp1" created.

#dmsetup before lvchange
[root@dhcp42-241 ~]# dmsetup table
rhsvg1-tp1_tdata: 0 8388608 linear 252:32 10240
vg_dhcp42241-lv_swap: 0 3145728 linear 252:2 27281408
vg_dhcp42241-lv_root: 0 27279360 linear 252:2 2048
rhsvg1-tp1_tmeta: 0 8192 linear 252:32 8398848
rhsvg1-tp1: 0 8388608 thin-pool 253:2 253:3 512 0 0 

#lvchange
[root@dhcp42-241 ~]# lvchange --zero n rhsvg1/tp1
  Logical volume "tp1" changed.

#dmsetup after lvchange
[root@dhcp42-241 ~]# dmsetup table
rhsvg1-tp1-tpool: 0 8388608 thin-pool 253:2 253:3 512 0 1 skip_block_zeroing 
rhsvg1-tp1_tdata: 0 8388608 linear 252:32 10240
vg_dhcp42241-lv_swap: 0 3145728 linear 252:2 27281408
vg_dhcp42241-lv_root: 0 27279360 linear 252:2 2048
rhsvg1-tp1_tmeta: 0 8192 linear 252:32 8398848
[root@dhcp42-241 ~]#


#Package Versions
[root@dhcp42-241 ~]# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 6.8 (Santiago)
[root@dhcp42-241 ~]# rpm -qa|grep lvm
mesa-private-llvm-3.6.2-1.el6.x86_64
lvm2-2.02.143-7.el6.x86_64
lvm2-libs-2.02.143-7.el6.x86_64
[root@dhcp42-241 ~]#

Zdenek, please let me know if you can reproduce this bug.

Comment 10 Zdenek Kabelac 2016-05-31 08:37:04 UTC
Hi

The mandatory part is that some thin volume must be active.

As long as only the thin pool itself is active, manipulating the 'skip' flag works normally, which is also why the problem escaped internal lvm2 testing.

Comment 11 Ramesh N 2016-05-31 08:49:31 UTC
(In reply to Zdenek Kabelac from comment #10)
> Hi
> 
> The mandatory part is that some thin volume must be active.
> 
> As long as only the thin pool itself is active, manipulating the 'skip' flag
> works normally, which is also why the problem escaped internal lvm2 testing.

Thanks Zdenek, now I understand the reason why we are not hitting this issue from VDSM. We do the following for gluster brick provisioning in VDSM (a rough shell equivalent is sketched after the list):

1. Create PV.
2. Create VG.
3. Create the thin pool and change --zero using the lvchange command.
4. Create thin LVs using the blivet API.
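
A rough shell equivalent of steps 1-3 (the disk /dev/sdX, the names vg_brick/brickpool, and the sizes are assumptions; step 4 goes through the blivet API and is not shown):

#Pool is created first and zeroing is changed before any thin LV exists
pvcreate /dev/sdX
vgcreate vg_brick /dev/sdX
lvcreate --thinpool vg_brick/brickpool --size 10G --chunksize 256K
#No thin volume is active yet, so the flag lands in the live table correctly
lvchange --zero n vg_brick/brickpool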

Zdenek, I hope we won't hit this regression in the above scenario. Please confirm, and then we can close bz#1332166.

Comment 12 Zdenek Kabelac 2016-05-31 08:52:20 UTC
Yep, for the sequence in comment 11 it should be safe and the table should have the proper 'skip' flag set.

A side note: you do not need to use an extra lvchange command - you can specify the thin-pool zeroing option directly during thin-pool creation...
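
For instance (a sketch with made-up names and size, not taken from this report):

#Zeroing disabled directly at creation time, no separate lvchange needed
lvcreate --thinpool vg/pool --size 4G --zero n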

Comment 13 Ramesh N 2016-05-31 09:02:33 UTC
(In reply to Zdenek Kabelac from comment #12)
> Yep, for the sequence in comment 11 it should be safe and the table should
> have the proper 'skip' flag set.
> 

Thanks for confirming.

> A side note: you do not need to use an extra lvchange command - you can
> specify the thin-pool zeroing option directly during thin-pool creation...

Yes. But this is something we can change in future releases. It is not a bug/blocker for the current release.

Comment 18 Zdenek Kabelac 2016-09-19 12:59:32 UTC
Two fixing patches upstream:

https://www.redhat.com/archives/lvm-devel/2016-September/msg00047.html
https://www.redhat.com/archives/lvm-devel/2016-September/msg00049.html


They should make the --discards and --zero options usable again on live and unused thin pools.
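
A quick way to exercise the --discards half once the fixed packages are installed (names follow comment 21; this assumes the passdown-to-nopassdown transition, which should be allowed while the pool is active):

lvchange --discards nopassdown vg/POOL
dmsetup table vg-POOL-tpool
#The thin-pool line should now list the no_discard_passdown feature flag immediately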

Comment 21 Roman Bednář 2016-11-14 16:33:15 UTC
Marking verified using the latest rpms. lvchange now updates the device status properly.

# lsblk
NAME                          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
...
sdb                             8:16   0   40G  0 disk 
sdc                             8:32   0   40G  0 disk 
sda                             8:0    0   40G  0 disk 
├─vg-POOL_tmeta (dm-2)        253:2    0   12M  0 lvm  
│ └─vg-POOL-tpool (dm-4)      253:4    0   10G  0 lvm  
│   ├─vg-POOL (dm-5)          253:5    0   10G  0 lvm  
│   └─vg-thin_lv (dm-6)       253:6    0    5G  0 lvm  
└─vg-POOL_tdata (dm-3)        253:3    0   10G  0 lvm  
  └─vg-POOL-tpool (dm-4)      253:4    0   10G  0 lvm  
    ├─vg-POOL (dm-5)          253:5    0   10G  0 lvm  
    └─vg-thin_lv (dm-6)       253:6    0    5G  0 lvm  
... 

# vgs
  VG         #PV #LV #SN Attr   VSize  VFree 
  vg           1   2   0 wz--nc 40.00g 29.97g
  ...

# lvs
  LV      VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  POOL    vg         twi-aotz--  10.00g             0.00   0.65                            
  thin_lv vg         Vwi-a-tz--   5.00g POOL        0.00                                   
  ...     
                                            
# dmsetup table
vg-POOL: 0 20971520 linear 253:4 0
vg-POOL-tpool: 0 20971520 thin-pool 253:2 253:3 128 0 0 
vg-POOL_tdata: 0 20971520 linear 8:0 26624
vg-POOL_tmeta: 0 24576 linear 8:0 20998144
vg-thin_lv: 0 10485760 thin 253:4 1
...
===================================================================
Before fix:

lvm2-2.02.143-7.el6

# lvchange --zero n vg/POOL
  Logical volume "POOL" changed.

# dmsetup table
vg-POOL: 0 20971520 linear 253:4 0
vg-POOL-tpool: 0 20971520 thin-pool 253:2 253:3 128 0 0 
vg-POOL_tdata: 0 20971520 linear 8:0 26624
vg-POOL_tmeta: 0 24576 linear 8:0 20998144
vg-thin_lv: 0 10485760 thin 253:4 1
...

==================================================================
After fix:

# lvchange --zero n vg/POOL
  Logical volume "POOL" changed.

# dmsetup table
vg-POOL: 0 8388608 linear 253:4 0
vg-POOL-tpool: 0 8388608 thin-pool 253:2 253:3 128 0 1 skip_block_zeroing 
vg-POOL_tdata: 0 8388608 linear 8:0 10240
vg-POOL_tmeta: 0 8192 linear 8:0 8398848
vg-thin_lv: 0 10485760 thin 253:4 1
...


Tested with:

2.6.32-663.el6.x86_64

lvm2-2.02.143-9.el6    BUILT: Thu Nov 10 10:21:10 CET 2016
lvm2-libs-2.02.143-9.el6    BUILT: Thu Nov 10 10:21:10 CET 2016
lvm2-cluster-2.02.143-9.el6    BUILT: Thu Nov 10 10:21:10 CET 2016
udev-147-2.73.el6_8.2    BUILT: Tue Aug 30 15:17:19 CEST 2016
device-mapper-1.02.117-9.el6    BUILT: Thu Nov 10 10:21:10 CET 2016
device-mapper-libs-1.02.117-9.el6    BUILT: Thu Nov 10 10:21:10 CET 2016
device-mapper-event-1.02.117-9.el6    BUILT: Thu Nov 10 10:21:10 CET 2016
device-mapper-event-libs-1.02.117-9.el6    BUILT: Thu Nov 10 10:21:10 CET 2016
device-mapper-persistent-data-0.6.2-0.1.rc7.el6    BUILT: Tue Mar 22 14:58:09 CET 2016
cmirror-2.02.143-9.el6    BUILT: Thu Nov 10 10:21:10 CET 2016

Comment 25 errata-xmlrpc 2017-03-21 12:02:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2017-0798.html