Bug 1328245 - lvchange --zero fails to update the state of active thin pools
Summary: lvchange --zero fails to update the state of active thin pools
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.8
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
Docs Contact: Milan Navratil
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-04-18 20:10 UTC by Ben Turner
Modified: 2017-03-21 12:02 UTC
CC List: 14 users

Fixed In Version: lvm2-2.02.143-9.el6
Doc Type: Bug Fix
Doc Text:
Change now takes effect immediately after using "lvchange --zero n" against an active thin pool. Previously, when the "lvchange --zero n" command was used against an active thin pool, the change did not take effect until the next time the pool was deactivated. With this update, the change takes effect immediately.
Clone Of:
Environment:
Last Closed: 2017-03-21 12:02:37 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1332166 0 high CLOSED Update brick configuration steps on a RHEL 6.8 setup to deal with RHEL 6.7 -> RHEL 6.8 lvchange regression 2021-02-22 00:41:40 UTC
Red Hat Product Errata RHBA-2017:0798 0 normal SHIPPED_LIVE lvm2 bug fix update 2017-03-21 12:51:51 UTC

Internal Links: 1332166

Description Ben Turner 2016-04-18 20:10:33 UTC
Description of problem:

The behavior of lvchange --zero has changed in 6.8: it no longer updates the state of active thin pools.

Version-Release number of selected component (if applicable):

lvm2-2.02.143-7.el6.x86_64

How reproducible:

Every time.

Steps to Reproduce:
1. Run lvchange --zero n on an active thin pool.
2. Check dmsetup table; the disabled zeroing does not show up until the thin pool is deactivated and reactivated (see the sketch below).
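
For illustration, a minimal shell sketch of these steps, assuming a volume group "vg" with an active thin pool "vg/pool" (the names are placeholders):

#Disable zeroing on the active pool
# lvchange --zero n vg/pool

#On an affected build the live table still lacks "skip_block_zeroing";
#the flag only shows up once the pool is deactivated and reactivated
# dmsetup table | grep thin-pool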

Actual results:

From what I understand, this change was intended; if so, it just needs to be documented.

Expected results:

From what I understand, this change was intended; if so, it just needs to be documented.

Additional info:

See https://bugzilla.redhat.com/show_bug.cgi?id=1324236 for further info on this issue.

Comment 2 Alasdair Kergon 2016-04-18 20:13:04 UTC
No, it wasn't an intended change.

Comment 8 Ramesh N 2016-05-05 10:35:01 UTC
I am trying, but I am not able to reproduce the bug in a VM environment. I see the 'skip_block_zeroing' flag after lvchange. I am not sure what is different in my VM setup.

[root@dhcp35-4 ~]# dmsetup table 
vg--brick1-pool--brick1-tpool: 0 41732096 thin-pool 253:2 253:3 512 0 1 skip_block_zeroing 
vg--brick1-pool--brick1_tdata: 0 41732096 linear 252:16 512
vg--brick1-pool--brick1_tmeta: 0 208896 linear 252:16 41732608
vg_dhcp354-lv_swap: 0 4194304 linear 252:2 36718592
vg_dhcp354-lv_root: 0 36716544 linear 252:2 2048
vg--brick1-pool--brick1: 0 41732096 linear 253:4 0
vg--brick1-brick1: 0 41940992 thin 253:4 1
[root@dhcp35-4 ~]#

[root@dhcp35-4 ~]# rpm -qa|grep lvm
mesa-private-llvm-3.6.2-1.el6.x86_64
lvm2-libs-2.02.143-7.el6.x86_64
lvm2-2.02.143-7.el6.x86_64
[root@dhcp35-4 ~]# 


Ben, are you able to reproduce this issue consistently?

Comment 9 Ramesh N 2016-05-31 07:33:17 UTC
Today I again tried to reproduce this bug with the latest 6.8 VM, but I am not able to. I can see the 'skip_block_zeroing' flag immediately after lvchange.

#Disks in the VM
[root@dhcp42-241 ~]# lsblk
NAME                            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                              11:0    1 1024M  0 rom  
vda                             252:0    0   15G  0 disk 
├─vda1                          252:1    0  500M  0 part /boot
└─vda2                          252:2    0 14.5G  0 part 
  ├─vg_dhcp42241-lv_root (dm-0) 253:0    0   13G  0 lvm  /
  └─vg_dhcp42241-lv_swap (dm-1) 253:1    0  1.5G  0 lvm  [SWAP]
vdb                             252:16   0   10G  0 disk 
vdc                             252:32   0   10G  0 disk 

#PV Create
[root@dhcp42-241 ~]# pvcreate /dev/vdc
  Physical volume "/dev/vdc" successfully created

#VG Create
[root@dhcp42-241 ~]# vgcreate rhsvg1 /dev/vdc
  Volume group "rhsvg1" successfully created

#LV Create
[root@dhcp42-241 ~]# lvcreate --thinpool rhsvg1/tp1 --size 4G --chunksize 256K
  Logical volume "tp1" created.

#dmsetup before lvchange
[root@dhcp42-241 ~]# dmsetup table
rhsvg1-tp1_tdata: 0 8388608 linear 252:32 10240
vg_dhcp42241-lv_swap: 0 3145728 linear 252:2 27281408
vg_dhcp42241-lv_root: 0 27279360 linear 252:2 2048
rhsvg1-tp1_tmeta: 0 8192 linear 252:32 8398848
rhsvg1-tp1: 0 8388608 thin-pool 253:2 253:3 512 0 0 

#lvchange
[root@dhcp42-241 ~]# lvchange --zero n rhsvg1/tp1
  Logical volume "tp1" changed.

#dmsetup after lvchange
[root@dhcp42-241 ~]# dmsetup table
rhsvg1-tp1-tpool: 0 8388608 thin-pool 253:2 253:3 512 0 1 skip_block_zeroing 
rhsvg1-tp1_tdata: 0 8388608 linear 252:32 10240
vg_dhcp42241-lv_swap: 0 3145728 linear 252:2 27281408
vg_dhcp42241-lv_root: 0 27279360 linear 252:2 2048
rhsvg1-tp1_tmeta: 0 8192 linear 252:32 8398848
[root@dhcp42-241 ~]#


#Package Versions
[root@dhcp42-241 ~]# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 6.8 (Santiago)
[root@dhcp42-241 ~]# rpm -qa|grep lvm
mesa-private-llvm-3.6.2-1.el6.x86_64
lvm2-2.02.143-7.el6.x86_64
lvm2-libs-2.02.143-7.el6.x86_64
[root@dhcp42-241 ~]#

Zdenek, please let me know if you can reproduce this bug.

Comment 10 Zdenek Kabelac 2016-05-31 08:37:04 UTC
Hi

The mandatory part is that you need to have some thin volume active.

As long as you only have the 'thin-pool', manipulation of the 'skip' flag works normally, which is also the reason why it 'escaped' internal lvm2 testing.
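
For illustration, a short sketch of the difference; "vg/pool" and "thin_lv" are placeholder names:

#With only the thin pool active, the flag propagates to the live table as expected
# lvchange --zero n vg/pool
# dmsetup table | grep thin-pool

#Once a thin volume exists (and is active) on the pool, affected builds
#no longer update the live table on lvchange --zero
# lvcreate -V 1G -T vg/pool -n thin_lv
# lvchange --zero y vg/pool
# dmsetup table | grep thin-pool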

Comment 11 Ramesh N 2016-05-31 08:49:31 UTC
(In reply to Zdenek Kabelac from comment #10)
> Hi
> 
> The mandatory part is that you need to have some thin volume active.
> 
> As long as you only have the 'thin-pool', manipulation of the 'skip' flag
> works normally, which is also the reason why it 'escaped' internal
> lvm2 testing.

Thanks Zdenek, I think I now understand the reason why we are not hitting this issue from VDSM. We do the following for gluster brick provisioning in VDSM (sketched below):

1. Create PV.
2. Create VG
3. Create the thin pool and change --zero using the lvchange command.
4. Create ThinLVs using blivet API.
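
For reference, a plain CLI sketch of that sequence (the thin LV name "brick1" is a placeholder; VDSM performs steps like the last one through the blivet API rather than the command line):

#1-2: physical volume and volume group
# pvcreate /dev/vdc
# vgcreate rhsvg1 /dev/vdc

#3: thin pool, then disable zeroing while no thin LV exists yet
# lvcreate --thinpool rhsvg1/tp1 --size 4G --chunksize 256K
# lvchange --zero n rhsvg1/tp1

#4: thin LVs are created only afterwards
# lvcreate -V 1G -T rhsvg1/tp1 -n brick1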

Zdenek, I hope we won't hit this regression in the above scenario. Please confirm, and then we can close bz#1332166.

Comment 12 Zdenek Kabelac 2016-05-31 08:52:20 UTC
Yep, for the sequence in comment 11 it should be safe and the table should have the proper 'skip' flag set.

A side note: you do not need to use an extra lvchange command; you can specify the thin-pool zeroing option directly during thin-pool creation...
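
For example, something along these lines (reusing the pool name and sizes from comment 9 for illustration):

#Zeroing can be disabled at creation time, so no separate lvchange is needed
# lvcreate --thinpool rhsvg1/tp1 --size 4G --chunksize 256K --zero n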

Comment 13 Ramesh N 2016-05-31 09:02:33 UTC
(In reply to Zdenek Kabelac from comment #12)
> Yep, for the sequence in comment 11 it should be safe and the table should
> have the proper 'skip' flag set.
> 

Thanks for confirming.

> A side note: you do not need to use an extra lvchange command; you can
> specify the thin-pool zeroing option directly during thin-pool creation...

Yep. But this is something we can change in future releases. It is not a bug/blocker for the current release.

Comment 18 Zdenek Kabelac 2016-09-19 12:59:32 UTC
Two fixing patches upstream:

https://www.redhat.com/archives/lvm-devel/2016-September/msg00047.html
https://www.redhat.com/archives/lvm-devel/2016-September/msg00049.html


should make the --discards & --zero options usable again on live & unused thin-pools.
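
A quick sketch of what this restores, with placeholder names; both changes should now be reflected in the live table without reactivating the pool:

# lvchange --zero n vg/pool
# lvchange --discards nopassdown vg/pool
# dmsetup table | grep thin-pool
(the "skip_block_zeroing" and "no_discard_passdown" flags appear immediately)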

Comment 21 Roman Bednář 2016-11-14 16:33:15 UTC
Marking verified using the latest rpms. lvchange now updates the device status properly.

# lsblk
NAME                          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
...
sdb                             8:16   0   40G  0 disk 
sdc                             8:32   0   40G  0 disk 
sda                             8:0    0   40G  0 disk 
├─vg-POOL_tmeta (dm-2)        253:2    0   12M  0 lvm  
│ └─vg-POOL-tpool (dm-4)      253:4    0   10G  0 lvm  
│   ├─vg-POOL (dm-5)          253:5    0   10G  0 lvm  
│   └─vg-thin_lv (dm-6)       253:6    0    5G  0 lvm  
└─vg-POOL_tdata (dm-3)        253:3    0   10G  0 lvm  
  └─vg-POOL-tpool (dm-4)      253:4    0   10G  0 lvm  
    ├─vg-POOL (dm-5)          253:5    0   10G  0 lvm  
    └─vg-thin_lv (dm-6)       253:6    0    5G  0 lvm  
... 

# vgs
  VG         #PV #LV #SN Attr   VSize  VFree 
  vg           1   2   0 wz--nc 40.00g 29.97g
  ...

# lvs
  LV      VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  POOL    vg         twi-aotz--  10.00g             0.00   0.65                            
  thin_lv vg         Vwi-a-tz--   5.00g POOL        0.00                                   
  ...     
                                            
# dmsetup table
vg-POOL: 0 20971520 linear 253:4 0
vg-POOL-tpool: 0 20971520 thin-pool 253:2 253:3 128 0 0 
vg-POOL_tdata: 0 20971520 linear 8:0 26624
vg-POOL_tmeta: 0 24576 linear 8:0 20998144
vg-thin_lv: 0 10485760 thin 253:4 1
...
===================================================================
Before fix:

lvm2-2.02.143-7.el6

# lvchange --zero n vg/POOL
  Logical volume "POOL" changed.

# dmsetup table
vg-POOL: 0 20971520 linear 253:4 0
vg-POOL-tpool: 0 20971520 thin-pool 253:2 253:3 128 0 0 
vg-POOL_tdata: 0 20971520 linear 8:0 26624
vg-POOL_tmeta: 0 24576 linear 8:0 20998144
vg-thin_lv: 0 10485760 thin 253:4 1
...

==================================================================
After fix:

# lvchange --zero n vg/POOL
  Logical volume "POOL" changed.

# dmsetup table
vg-POOL: 0 8388608 linear 253:4 0
vg-POOL-tpool: 0 8388608 thin-pool 253:2 253:3 128 0 1 skip_block_zeroing 
vg-POOL_tdata: 0 8388608 linear 8:0 10240
vg-POOL_tmeta: 0 8192 linear 8:0 8398848
vg-thin_lv: 0 10485760 thin 253:4 1
...


Tested with:

2.6.32-663.el6.x86_64

lvm2-2.02.143-9.el6    BUILT: Thu Nov 10 10:21:10 CET 2016
lvm2-libs-2.02.143-9.el6    BUILT: Thu Nov 10 10:21:10 CET 2016
lvm2-cluster-2.02.143-9.el6    BUILT: Thu Nov 10 10:21:10 CET 2016
udev-147-2.73.el6_8.2    BUILT: Tue Aug 30 15:17:19 CEST 2016
device-mapper-1.02.117-9.el6    BUILT: Thu Nov 10 10:21:10 CET 2016
device-mapper-libs-1.02.117-9.el6    BUILT: Thu Nov 10 10:21:10 CET 2016
device-mapper-event-1.02.117-9.el6    BUILT: Thu Nov 10 10:21:10 CET 2016
device-mapper-event-libs-1.02.117-9.el6    BUILT: Thu Nov 10 10:21:10 CET 2016
device-mapper-persistent-data-0.6.2-0.1.rc7.el6    BUILT: Tue Mar 22 14:58:09 CET 2016
cmirror-2.02.143-9.el6    BUILT: Thu Nov 10 10:21:10 CET 2016

Comment 25 errata-xmlrpc 2017-03-21 12:02:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2017-0798.html

