Bug 1697823 - lvm allows an erroneous conversion of thinlv to thinpool and even allows mounting this pool
Summary: lvm allows an erroneous conversion of thinlv to thinpool and even allows moun...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1711360
 
Reported: 2019-04-09 07:54 UTC by nikhil kshirsagar
Modified: 2021-09-03 12:55 UTC
CC List: 13 users

Fixed In Version: lvm2-2.02.186-2.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-03-31 20:04:48 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2020:1129 0 None None None 2020-03-31 20:05:27 UTC

Description nikhil kshirsagar 2019-04-09 07:54:54 UTC
Description of problem:
Sometimes customers mistakenly swap repaired metadata back into a thin LV instead of the thin pool.

The older LVM version, lvm2-2.02.166-1.el7.x86_64, did not allow this:

# lvs -a
  LV                VG    Attr       LSize   Pool      Origin Data%  Meta%  Move Log Cpy%Sync Convert
  homevol           myvg  -wi-ao---- 500.00m                                                        
  rootvol           myvg  -wi-ao----  13.34g                                                        
  [lvol0_pmspare]   wrong ewi-------   4.00m                                                        
  wrong_tmpthinlv   wrong -wi-------  20.00m                                                        
  wrongpool         wrong twi---tz-- 500.00m                                                        
  [wrongpool_tdata] wrong Twi------- 500.00m                                                        
  [wrongpool_tmeta] wrong ewi-------   4.00m                                                        
  wrongthinlv       wrong Vwi---tz--   5.00g wrongpool                                              
[root@rhel7u3-1 /]# lvconvert --thinpool wrong/wrongthinlv --poolmetadata wrong/wrong_tmpthinlv
  Operation not permitted on thin LV wrong/wrongthinlv. <----
  Operations permitted on a thin LV are:
  --merge
 
[root@rhel7u3-1 /]#

Version-Release number of selected component (if applicable):

However, the latest LVM version on 7.6, lvm2-2.02.180-10.el7_6.3.x86_64, allows it, thereby destroying data:


[root@vm255-41 nkshirsa]# lvs
  LV       VG            Attr       LSize   Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root     rhel_vm253-73 -wi-ao---- <13.87g                                                        
  swap     rhel_vm253-73 -wi-ao----   1.60g                                                        
  testpool thin_vg       twi-aotz-- 200.00m                 0.00   10.94                           
  thinlv   thin_vg       Vwi-a-tz-- 500.00m testpool        0.00                                   
  tmplv    thin_vg       -wi-a----- 100.00m                                                        
[root@vm255-41 nkshirsa]# lvconvert --thinpool thin_vg/testpool --poolmetadata thin_vg/tmplv
  Cannot convert pool thin_vg/testpool with active volumes.
[root@vm255-41 nkshirsa]# vgchange -an thin_vg
  0 logical volume(s) in volume group "thin_vg" now active
[root@vm255-41 nkshirsa]# lvconvert --thinpool thin_vg/testpool --poolmetadata thin_vg/tmplv
Do you want to swap metadata of thin_vg/testpool pool with metadata volume thin_vg/tmplv? [y/n]: y
[root@vm255-41 nkshirsa]# lvconvert --thinpool thin_vg/thinlv --poolmetadata thin_vg/tmplv
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  WARNING: Converting thin_vg/thinlv and thin_vg/tmplv to thin pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert thin_vg/thinlv and thin_vg/tmplv? [y/n]: y
  Converted thin_vg/thinlv and thin_vg/tmplv to thin pool.
[root@vm255-41 nkshirsa]# lvs -a
  LV               VG            Attr       LSize   Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root             rhel_vm253-73 -wi-ao---- <13.87g                                                        
  swap             rhel_vm253-73 -wi-ao----   1.60g                                                        
  [lvol0_pmspare]  thin_vg       ewi-------   4.00m                                                        
  testpool         thin_vg       twi---tz-- 200.00m                                                        
  [testpool_tdata] thin_vg       Twi------- 200.00m                                                        
  [testpool_tmeta] thin_vg       ewi------- 100.00m                                                        
  thinlv           thin_vg       twi---tz-- 500.00m                                                        
  [thinlv_tdata]   thin_vg       Vwi---tz-- 500.00m testpool                                               
  [thinlv_tmeta]   thin_vg       ewi-------   4.00m                                                        
[root@vm255-41 nkshirsa]# 
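For clarity, a minimal sketch of the distinction at issue (hypothetical names vg, pool, thinlv and fixed_meta): the metadata swap is meant to target the pool LV, never a thin LV served by that pool.

# Hedged sketch, not a full repair procedure:
vgchange -an vg                                             # pool must be inactive before swapping metadata
lvconvert --thinpool vg/pool --poolmetadata vg/fixed_meta   # intended form: swap metadata at the pool level
# The erroneous form reported here names a thin LV instead of the pool; the
# affected lvm2 builds accept it and wipe the thin LV's content:
# lvconvert --thinpool vg/thinlv --poolmetadata vg/fixed_meta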



Additional info:

The other puzzling question we have is: if we do something like this (swapping some empty metadata into a thin LV of the pool), not only does LVM allow it, but we can also mount the "now pool" (the "earlier thinlv") and find the data intact!



                                                      
[root@vm255-41 nkshirsa]# lvs -a
  LV               VG            Attr       LSize   Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root             rhel_vm253-73 -wi-ao---- <13.87g                                                        
  swap             rhel_vm253-73 -wi-ao----   1.60g                                                        
  [lvol0_pmspare]  thin_vg       ewi-------   4.00m                                                        
  testpool         thin_vg       twi-aotz-- 200.00m                 0.00   10.94                           
  [testpool_tdata] thin_vg       Twi-ao---- 200.00m                                                        
  [testpool_tmeta] thin_vg       ewi-ao----   4.00m                                                        
  thinlv           thin_vg       Vwi-a-tz-- 500.00m testpool        0.00                                   
  tmplv            thin_vg       -wi-a----- 100.00m                                                        
[root@vm255-41 nkshirsa]# mkfs.xfs /dev/thin_vg/thinlv 
meta-data=/dev/thin_vg/thinlv    isize=512    agcount=8, agsize=16000 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=128000, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=768, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@vm255-41 nkshirsa]# mount /dev/mapper/thin_vg-thinlv /home/nkshirsa/mountpt/
[root@vm255-41 nkshirsa]# dd if=/dev/urandom of=/home/nkshirsa/mountpt/test bs=1M count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.0591727 s, 177 MB/s
[root@vm255-41 nkshirsa]# umount /home/nkshirsa/mountpt/


[root@vm255-41 nkshirsa]# vgs
  VG            #PV #LV #SN Attr   VSize   VFree  
  rhel_vm253-73   1   2   0 wz--n- <15.51g  40.00m
  thin_vg         1   3   0 wz--n- <30.00g <29.70g

[root@vm255-41 nkshirsa]# lvconvert --thinpool thin_vg/thinlv --poolmetadata thin_vg/tmplv
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  WARNING: Converting thin_vg/thinlv and thin_vg/tmplv to thin pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert thin_vg/thinlv and thin_vg/tmplv? [y/n]: y
  Converted thin_vg/thinlv and thin_vg/tmplv to thin pool.


[root@vm255-41 nkshirsa]# lvs -a
  LV               VG            Attr       LSize   Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root             rhel_vm253-73 -wi-ao---- <13.87g                                                        
  swap             rhel_vm253-73 -wi-ao----   1.60g                                                        
  [lvol0_pmspare]  thin_vg       ewi------- 100.00m                                                        
  testpool         thin_vg       twi-aotz-- 200.00m                 6.84   10.94                           
  [testpool_tdata] thin_vg       Twi-ao---- 200.00m                                                        
  [testpool_tmeta] thin_vg       ewi-ao----   4.00m                                                        
  thinlv           thin_vg       twi-a-tz-- 500.00m                 0.00   10.04                           
  [thinlv_tdata]   thin_vg       Vwi-aotz-- 500.00m testpool        2.74                                   
  [thinlv_tmeta]   thin_vg       ewi-ao---- 100.00m   

                                                     
[root@vm255-41 nkshirsa]# vgs
  VG            #PV #LV #SN Attr   VSize   VFree 
  rhel_vm253-73   1   2   0 wz--n- <15.51g 40.00m
  thin_vg         1   2   0 wz--n- <30.00g 29.60g <-- 100 mb used,  tmplv has disappeared



[root@vm255-41 nkshirsa]# mount /dev/mapper/thin_vg-thinlv /home/nkshirsa/mountpt/
[root@vm255-41 nkshirsa]# ls /home/nkshirsa/mountpt/
test
[root@vm255-41 nkshirsa]# ls /home/nkshirsa/mountpt/ -l
total 10240
-rw-r--r--. 1 root root 10485760 Apr  9 13:03 test

Can you please explain this behaviour? Why is the data intact when "metadata wiping" was supposed to have taken place?

Comment 2 Zdenek Kabelac 2019-04-09 13:20:32 UTC
I believe mounting of a thin-pool would be better prohibited by enhancing the mount command to disable (by default) mounting of any lvm2 private devices.

blkid already knows which devices are lvm2 private devices.

So mount could require at least some force option if the user is going to mount a thin-pool, a raid leg, or any other device they are not supposed to use directly.

I do not see many options for how to ensure (within lvm2) that a thin-pool is not going to be mounted (at least not in any backward-compatible way).
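As an illustration of the mount-side idea only (this is not an existing mount or lvm2 feature, and all names are hypothetical), a wrapper could refuse device-mapper nodes whose names look like lvm2 private layers unless the user forces it:

#!/bin/sh
# Hypothetical guard: refuse to mount lvm2 private devices by default.
dev=$1; mnt=$2
name=$(dmsetup info -c --noheadings -o name "$dev" 2>/dev/null)
case "$name" in
  *-tpool|*_tmeta|*_tdata|*_pmspare|*_rimage_*|*_rmeta_*)
    echo "refusing to mount lvm2 private device $name (use a force option)" >&2
    exit 1 ;;
esac
exec mount "$dev" "$mnt"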

One idea on the lvm2 side would be to use the 1st thin-pool chunk for a header - depending on how big the thin-pool chunk is.
So i.e. lvm2 could allocate an internal thinLV, provision 1 chunk, write some 'thin-pool-alike' header, and keep this chunk allocated forever.

But since valid chunk sizes go up to 2GiB, that amount of storage would be effectively lost - so I am not sure this is the sort of hack we want to implement.

Any opinions?

Comment 3 nikhil kshirsagar 2019-04-10 04:34:18 UTC
Why not revert to older lvm behavior?

[root@rhel7u3-1 /]# lvconvert --thinpool wrong/wrongthinlv --poolmetadata wrong/wrong_tmpthinlv
  Operation not permitted on thin LV wrong/wrongthinlv. <----
  Operations permitted on a thin LV are:
  --merge

Comment 4 Marian Csontos 2019-04-16 14:12:45 UTC
(In reply to Zdenek Kabelac from comment #2)

> But since valid chunk sizes goes up-to 2GiB - that amount of storage would
> be effectively lost - so not sure if this sorts of hacking we want to
> implement.

Would not such large chunks be used mostly for thin pools petabytes in size, where 2GB is a rounding error?

Comment 8 Corey Marthaler 2019-07-18 16:28:47 UTC
Adding QA ack for 7.8.

We can add a regression scenario for checking conversion of thin LVs to thin pool volumes like so:

# Create a random physical LV to be used as metadata for attempting to convert an existing thin volume to a thin pool volume
[root@hayes-03 ~]# lvcreate -L 20M -n randomlv snapper_thinp
  WARNING: Sum of all thin volume sizes (11.00 GiB) exceeds the size of thin pools (1.00 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "randomlv" created.

[root@hayes-03 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize  Pool Origin Data%  Meta%   Devices
  POOL            snapper_thinp twi-aot---  1.00g             1.04   11.62   POOL_tdata(0)
  [POOL_tdata]    snapper_thinp Twi-ao----  1.00g                            /dev/sde1(1)
  [POOL_tmeta]    snapper_thinp ewi-ao----  4.00m                            /dev/sdj1(0)
  [lvol0_pmspare] snapper_thinp ewi-------  4.00m                            /dev/sde1(0)
  mkfs_origin_1   snapper_thinp Vwi-a-t---  1.00g POOL origin 0.00
  mkfs_origin_2   snapper_thinp Vwi-a-t---  1.00g POOL origin 0.00
  mkfs_origin_3   snapper_thinp Vwi-a-t---  1.00g POOL origin 0.00
  mkfs_origin_4   snapper_thinp Vwi-a-t---  1.00g POOL origin 0.00
  mkfs_origin_5   snapper_thinp Vwi-a-t---  1.00g POOL origin 0.00
  origin          snapper_thinp Vwi-aot---  1.00g POOL        1.04
  other1          snapper_thinp Vwi-a-t---  1.00g POOL        0.00
  other2          snapper_thinp Vwi-a-t---  1.00g POOL        0.00
  other3          snapper_thinp Vwi-a-t---  1.00g POOL        0.00
  other4          snapper_thinp Vwi-a-t---  1.00g POOL        0.00
  other5          snapper_thinp Vwi-a-t---  1.00g POOL        0.00
  randomlv        snapper_thinp -wi-a----- 20.00m                            /dev/sde1(257)

[root@hayes-03 ~]# lvconvert --thinpool snapper_thinp/other1 --poolmetadata snapper_thinp/randomlv
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  WARNING: Converting snapper_thinp/other1 and snapper_thinp/randomlv to thin pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert snapper_thinp/other1 and snapper_thinp/randomlv? [y/n]: y
  Converted snapper_thinp/other1 and snapper_thinp/randomlv to thin pool.

[root@hayes-03 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize  Pool Origin Data%  Meta%   Devices
  POOL            snapper_thinp twi-aot---  1.00g             1.04   11.62   POOL_tdata(0)
  [POOL_tdata]    snapper_thinp Twi-ao----  1.00g                            /dev/sde1(1)
  [POOL_tmeta]    snapper_thinp ewi-ao----  4.00m                            /dev/sdj1(0)
  [lvol0_pmspare] snapper_thinp ewi------- 20.00m                            /dev/sde1(0)
  [lvol0_pmspare] snapper_thinp ewi------- 20.00m                            /dev/sde1(262)
  mkfs_origin_1   snapper_thinp Vwi-a-t---  1.00g POOL origin 0.00
  mkfs_origin_2   snapper_thinp Vwi-a-t---  1.00g POOL origin 0.00
  mkfs_origin_3   snapper_thinp Vwi-a-t---  1.00g POOL origin 0.00
  mkfs_origin_4   snapper_thinp Vwi-a-t---  1.00g POOL origin 0.00
  mkfs_origin_5   snapper_thinp Vwi-a-t---  1.00g POOL origin 0.00
  origin          snapper_thinp Vwi-aot---  1.00g POOL        1.04
  other1          snapper_thinp twi-a-tz--  1.00g             0.00   10.20   other1_tdata(0)
  [other1_tdata]  snapper_thinp Vwi-aot---  1.00g POOL        0.00
  [other1_tmeta]  snapper_thinp ewi-ao---- 20.00m                            /dev/sde1(257)
  other2          snapper_thinp Vwi-a-t---  1.00g POOL        0.00
  other3          snapper_thinp Vwi-a-t---  1.00g POOL        0.00
  other4          snapper_thinp Vwi-a-t---  1.00g POOL        0.00
  other5          snapper_thinp Vwi-a-t---  1.00g POOL        0.00

Comment 9 Zdenek Kabelac 2019-09-17 13:18:06 UTC
This patch enhances validation of the volumes that can be accepted as data devices for pools (thin or cache):

https://www.redhat.com/archives/lvm-devel/2019-September/msg00040.html

Comment 10 Zdenek Kabelac 2019-09-18 11:16:10 UTC
This patch also enhances protection against incorrect usage of a thin-pool, e.g. an unwanted mkfs:

https://www.redhat.com/archives/lvm-devel/2019-September/msg00055.html

Comment 12 Corey Marthaler 2019-11-11 22:06:57 UTC
Marking verified in the latest rpms.

The conversion of a thin LV to a thin pool using a random linear LV for metadata is no longer allowed. That said, there doesn't appear to be anything stopping the mkfs'ing of an LVM thin pool volume as mentioned in comment #10; that *IS* still allowed, but testing the mkfs of a pool was not part of the QA ack process.


## conversion of a thinlv to a thinpool using a random linear for metadata
[root@hayes-01 ~]# lvcreate -L 20M -n randomlv snapper_thinp
  WARNING: Sum of all thin volume sizes (11.00 GiB) exceeds the size of thin pools (1.00 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "randomlv" created.

[root@hayes-01 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices       
  POOL            snapper_thinp twi-aotz--  1.00g             1.04   11.62                            POOL_tdata(0) 
  [POOL_tdata]    snapper_thinp Twi-ao----  1.00g                                                     /dev/sde1(1)  
  [POOL_tmeta]    snapper_thinp ewi-ao----  4.00m                                                     /dev/sdj1(0)  
  [lvol0_pmspare] snapper_thinp ewi-------  4.00m                                                     /dev/sde1(0)  
  origin          snapper_thinp Vwi-a-tz--  1.00g POOL        1.04                                                  
  other1          snapper_thinp Vwi-a-tz--  1.00g POOL        0.00                                                  
  other2          snapper_thinp Vwi-a-tz--  1.00g POOL        0.00                                                  
  other3          snapper_thinp Vwi-a-tz--  1.00g POOL        0.00                                                  
  other4          snapper_thinp Vwi-a-tz--  1.00g POOL        0.00                                                  
  other5          snapper_thinp Vwi-a-tz--  1.00g POOL        0.00                                                  
  randomlv        snapper_thinp -wi-a----- 20.00m                                                     /dev/sde1(257)

[root@hayes-01 ~]# lvconvert --thinpool snapper_thinp/other1 --poolmetadata snapper_thinp/randomlv
  LV snapper_thinp/other1 with type thin cannot be used as a thin pool LV.
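A hedged sketch of how this could be asserted in an automated regression run (relying only on lvconvert returning a non-zero exit status when the conversion is rejected; names taken from the scenario above):

# Expect the conversion of a thin LV into a thin pool to be refused:
if lvconvert --yes --thinpool snapper_thinp/other1 --poolmetadata snapper_thinp/randomlv; then
    echo "FAIL: thin LV was accepted as a thin pool" >&2
    exit 1
else
    echo "PASS: conversion of a thin LV to a thin pool is rejected"
fi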



## Attempt to mkfs and mount thin pool volume (*** STILL ALLOWED ***)
[root@hayes-01 ~]# lvcreate  --thinpool POOL -L 1G  --zero y --poolmetadatasize 4M snapper_thinp
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  Logical volume "POOL" created.

[root@hayes-01 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices      
  POOL            snapper_thinp twi-a-tz-- 1.00g             0.00   10.94                            POOL_tdata(0)
  [POOL_tdata]    snapper_thinp Twi-ao---- 1.00g                                                     /dev/sde1(1) 
  [POOL_tmeta]    snapper_thinp ewi-ao---- 4.00m                                                     /dev/sdj1(0) 
  [lvol0_pmspare] snapper_thinp ewi------- 4.00m                                                     /dev/sde1(0) 

[root@hayes-01 ~]# ls /dev/snapper_thinp/*
/dev/snapper_thinp/POOL

[root@hayes-01 ~]# mkfs /dev/snapper_thinp/POOL
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
[...]
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376

Allocating group tables: done                            
Writing inode tables: done                            
Writing superblocks and filesystem accounting information: done

[root@hayes-01 ~]# mount /dev/snapper_thinp/POOL /mnt/



3.10.0-1109.el7.x86_64

lvm2-2.02.186-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
lvm2-libs-2.02.186-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
lvm2-cluster-2.02.186-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
lvm2-lockd-2.02.186-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
lvm2-python-boom-0.9-20.el7    BUILT: Tue Sep 24 06:18:20 CDT 2019
cmirror-2.02.186-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
device-mapper-1.02.164-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
device-mapper-libs-1.02.164-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
device-mapper-event-1.02.164-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
device-mapper-event-libs-1.02.164-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
device-mapper-persistent-data-0.8.5-1.el7    BUILT: Mon Jun 10 03:58:20 CDT 2019

Comment 13 Zdenek Kabelac 2019-11-12 08:47:46 UTC
An unused thin-pool (a pool without any lvm2 thin LVs) is rather a generic public device (so e.g. docker can use it for whatever it wants) (and yes - it's a very complex part of the logic).

Once the user starts to use the thin-pool for thinLVs (lvcreate -V), the protection against mkfs should work.

Comment 14 Corey Marthaler 2019-11-12 21:01:48 UTC
Right, but as soon as you make a thin volume, the devfs entry for the pool volume disappears, so really a mkfs would be impossible anyway. That's been the behavior for all of rhel7, I believe. I just checked on 7.8 and 7.3 and both had the same behavior. This applies whether or not the pool is stacked as well (like on raid1).

# 7.8: once there's a virtual LV, the /dev/ device is gone.
[root@hayes-01 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices      
  POOL            snapper_thinp twi-aot--- 1.00g             1.04   11.62                            POOL_tdata(0)
  [POOL_tdata]    snapper_thinp Twi-ao---- 1.00g                                                     /dev/sde1(1) 
  [POOL_tmeta]    snapper_thinp ewi-ao---- 4.00m                                                     /dev/sdj1(0) 
  [lvol0_pmspare] snapper_thinp ewi------- 4.00m                                                     /dev/sde1(0) 
  mkfs_origin_1   snapper_thinp Vwi-a-t--- 1.00g POOL origin 0.00                                                 
[root@hayes-01 ~]# ls /dev/snapper_thinp/POOL
ls: cannot access /dev/snapper_thinp/POOL: No such file or directory
[root@hayes-01 ~]# mkfs /dev/snapper_thinp/POOL
mke2fs 1.42.9 (28-Dec-2013)
Could not stat /dev/snapper_thinp/POOL --- No such file or directory


# Exact same thing in rhel7.3 (lvm2-2.02.171-8.el7.x86_64)
[root@hayes-01 ~]# lvs /dev/snapper_thinp/POOL 
  Configuration setting "devices/scan_lvs" unknown.
  LV   VG            Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  POOL snapper_thinp twi-a-t--- 1.00g             0.00   10.94                           
[root@hayes-01 ~]# ls /dev/snapper_thinp/POOL
/dev/snapper_thinp/POOL
[root@hayes-01 ~]# lvcreate  -V 1G -T snapper_thinp/POOL -n other1
  Configuration setting "devices/scan_lvs" unknown.
  Using default stripesize 64.00 KiB.
  Logical volume "other1" created.
# Now it's gone, so a mkfs is impossible.
[root@hayes-01 ~]#  ls /dev/snapper_thinp/POOL
ls: cannot access /dev/snapper_thinp/POOL: No such file or directory


If there's another check or feature you'd like tested, please open a bug to track it. This probably isn't the best place to do it now that it's been marked verified.

Comment 15 Zdenek Kabelac 2019-11-13 09:38:00 UTC
We are now getting into very low-level technical detail here - but while the symlink '/dev/vgname/lvname' disappeared, unfortunately many users still access our LVs via other dev paths - i.e. /dev/mapper/vgname-lvname.

And now starts the tricky part - older lvm2 had not put a 'read-only' linear LV on top of the hidden pool-tpool LV, so a 'clever' user could have run 'mkfs /dev/mapper/snapper_thinp-POOL' and it would have overwritten the pool's data chunks.

We even had users that were actually using & mounting a thin-pool this way (fortunately they were not creating thins in this pool).

Current lvm2 puts a read-only linear mapping in place, ensuring there is no way to 'mkfs' the pool this way anymore.
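A hedged way to observe this protection on a current system (pool name thin_vg/testpool is hypothetical; exact output may differ):

dmsetup table thin_vg-testpool                  # public node is expected to show a plain 'linear' target
blockdev --getro /dev/mapper/thin_vg-testpool   # expected to report 1 (read-only), so mkfs/writes fail here
dmsetup table thin_vg-testpool-tpool            # the actual 'thin-pool' target lives on the hidden -tpool node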

Comment 17 errata-xmlrpc 2020-03-31 20:04:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:1129

