Bug 1287116 - Thin pool repair: Do not give pvmove warning if same pv is being used for new metadata volume on VG containing *one* PV
Summary: Thin pool repair: Do not give pvmove warning if same pv is being used for new metadata volume on VG containing *one* PV
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Alasdair Kergon
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1469559
Reported: 2015-12-01 14:24 UTC by Vivek Goyal
Modified: 2021-09-03 10:56 UTC
CC List: 12 users

Fixed In Version: lvm2-2.02.175-2.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-10 15:18:32 UTC
Target Upstream Version:
Embargoed:


Attachments:


Links:
  Red Hat Product Errata RHEA-2018:0853 (last updated 2018-04-10 15:19:55 UTC)

Description Vivek Goyal 2015-12-01 14:24:34 UTC
Description of problem:

I tried to repair a thin pool as follows.

[root@vm4-f23 ~]# lvconvert --repair test-vg/docker-pool
  WARNING: If everything works, remove "test-vg/docker-pool_meta0".
  WARNING: Use pvmove command to move "test-vg/docker-pool_tmeta" on the best fitting PV.

I got a warning message telling me to use pvmove to move the metadata volume to the best fitting PV. But I have only one PV in my volume group test-vg, so there is no other PV to move the metadata volume to.

This is confusing. If the new metadata volume is on the same PV as the old metadata volume, this warning should not be printed.
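
A minimal way to reproduce (a sketch; /dev/sdX is a placeholder for any spare disk, and the pool must be inactive before repair):

[root@vm4-f23 ~]# pvcreate /dev/sdX
[root@vm4-f23 ~]# vgcreate test-vg /dev/sdX
[root@vm4-f23 ~]# lvcreate -L 1G --type thin-pool -n docker-pool test-vg
[root@vm4-f23 ~]# lvchange -an test-vg
[root@vm4-f23 ~]# lvconvert --repair test-vg/docker-pool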


Comment 1 Jonathan Earl Brassow 2017-07-27 20:34:34 UTC
Those warning lines are pretty confusing for users.  I wouldn't even print "if everything works..." - if things don't work, then give the user clear instructions on the next steps.  Needs cleanup...

Comment 4 Alasdair Kergon 2017-10-09 17:51:41 UTC
Something more like this, I think:

  WARNING: Sum of all thin volume sizes (200.00 MiB) exceeds the size of thin pools (20.00 MiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  WARNING: Set activation/thin_pool_autoextend_percent above 0 to specify by how much to extend thin pools reaching the threshold.
  WARNING: LV vg1/lvol1_meta9 holds a backup of the unrepaired metadata. Use lvremove when no longer required.
  WARNING: New metadata LV vg1/lvol1_tmeta might use different PVs.  Move it with pvmove if required.
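
For reference, the autoextend settings named in those warnings live in the activation section of /etc/lvm/lvm.conf. A sketch with illustrative values (not recommendations):

  activation {
      # Start auto-extending a thin pool once it reaches 80% full...
      thin_pool_autoextend_threshold = 80
      # ...growing it by 20% of its current size each time.
      thin_pool_autoextend_percent = 20
  }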

Comment 12 Alasdair Kergon 2017-10-16 15:43:34 UTC
It is correct to always leave a *pmspare volume available.

Each repair uses up one pmspare volume, and if there is enough space, it makes a new one ready for any future repair to use.

I've no idea why your 'before' didn't do this, but it was wrong.
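
A quick way to check that the spare is in place (hidden LVs only show up with -a; VG name taken from the transcripts in comment 13):

[root@host-079 ~]# lvs -a -o lv_name,lv_size,devices snapper_thinp

The [lvolN_pmspare] entry is the spare; as the transcripts in comment 13 show, N increments each time a repair consumes the old spare and a new one is allocated.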

Comment 13 Corey Marthaler 2017-10-17 16:30:17 UTC
One question I have... Is this fix to:

A. "not give pvmove warning if same pv is being used for new metadata volume"?
or 
B. "not give pvmove warning if the VG in which the pool resides only has one PV"?

The subject states A, but the fix appears to be for B. If this is for B, then I think we need to change the subject of this bug to reflect that.


# One PV in VG (No pvmove warning)

[root@host-079 ~]# pvscan
  PV /dev/sdb1   VG snapper_thinp   lvm2 [24.98 GiB / 24.98 GiB free]

[root@host-079 ~]# lvconvert --yes --repair snapper_thinp/POOL
  WARNING: Disabling lvmetad cache for repair command.
  WARNING: Not using lvmetad because of repair.
  WARNING: Sum of all thin volume sizes (7.00 GiB) exceeds the size of thin pools (5.00 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  WARNING: LV snapper_thinp/POOL_meta0 holds a backup of the unrepaired metadata. Use lvremove when no longer required.




# Multiple PVs in VG (always a pvmove warning, regardless of whether or not the new meta device was placed on a new PV)

# In this case the meta volume *was* actually placed on a new PV, so the warning is expected
[root@host-079 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize    Devices       
  POOL            snapper_thinp twi---tz--   5.00g  POOL_tdata(0) 
  [POOL_tdata]    snapper_thinp Twi-------   5.00g  /dev/sdb1(1)  
  [POOL_tmeta]    snapper_thinp ewi-------   4.00m  /dev/sdc1(0)  
  [lvol0_pmspare] snapper_thinp ewi-------   4.00m  /dev/sdb1(0)  

[root@host-079 ~]# lvconvert --yes --repair snapper_thinp/POOL
  WARNING: Disabling lvmetad cache for repair command.
  WARNING: Not using lvmetad because of repair.
  WARNING: Sum of all thin volume sizes (7.00 GiB) exceeds the size of thin pools (5.00 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  WARNING: LV snapper_thinp/POOL_meta0 holds a backup of the unrepaired metadata. Use lvremove when no longer required.
  WARNING: New metadata LV snapper_thinp/POOL_tmeta might use different PVs.  Move it with pvmove if required.

[root@host-079 ~]# lvs -a -o +devices
  WARNING: Not using lvmetad because a repair command was run.
  LV              VG            Attr       LSize    Devices        
  POOL            snapper_thinp twi---tz--   5.00g  POOL_tdata(0)  
  POOL_meta0      snapper_thinp -wi-------   4.00m  /dev/sdc1(0)   
  [POOL_tdata]    snapper_thinp Twi-------   5.00g  /dev/sdb1(1)   
  [POOL_tmeta]    snapper_thinp ewi-------   4.00m  /dev/sdb1(0)   
  [lvol1_pmspare] snapper_thinp ewi-------   4.00m  /dev/sdb1(1281)
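
# If the metadata should go back to its original PV, the move that the warning
# hints at would look roughly like this (not run here; device names from the
# listing above, -n restricts pvmove to the named LV):
#   pvmove -n POOL_tmeta /dev/sdb1 /dev/sdc1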

# In this case the meta volume was *not* placed on a new PV, so the warning is unexpected?
[root@host-079 ~]# lvconvert --yes --repair snapper_thinp/POOL
  WARNING: Disabling lvmetad cache for repair command.
  WARNING: Not using lvmetad because of repair.
  WARNING: Sum of all thin volume sizes (7.00 GiB) exceeds the size of thin pools (5.00 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  WARNING: LV snapper_thinp/POOL_meta1 holds a backup of the unrepaired metadata. Use lvremove when no longer required.
  WARNING: New metadata LV snapper_thinp/POOL_tmeta might use different PVs.  Move it with pvmove if required.

[root@host-079 ~]# lvs -a -o +devices
  WARNING: Not using lvmetad because a repair command was run.
  LV              VG            Attr       LSize    Devices        
  POOL            snapper_thinp twi---tz--   5.00g  POOL_tdata(0)  
  POOL_meta0      snapper_thinp -wi-------   4.00m  /dev/sdc1(0)   
  POOL_meta1      snapper_thinp -wi-------   4.00m  /dev/sdb1(0)   
  [POOL_tdata]    snapper_thinp Twi-------   5.00g  /dev/sdb1(1)   
  [POOL_tmeta]    snapper_thinp ewi-------   4.00m  /dev/sdb1(1281)
  [lvol2_pmspare] snapper_thinp ewi-------   4.00m  /dev/sdb1(1282)

[root@host-079 ~]# lvconvert --yes --repair snapper_thinp/POOL
  WARNING: Disabling lvmetad cache for repair command.
  WARNING: Not using lvmetad because of repair.
  WARNING: Sum of all thin volume sizes (7.00 GiB) exceeds the size of thin pools (5.00 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  WARNING: LV snapper_thinp/POOL_meta2 holds a backup of the unrepaired metadata. Use lvremove when no longer required.
  WARNING: New metadata LV snapper_thinp/POOL_tmeta might use different PVs.  Move it with pvmove if required.

# Again, in this case the meta volume was *not* placed on a new PV, so the warning is unexpected?
[root@host-079 ~]# lvs -a -o +devices
  WARNING: Not using lvmetad because a repair command was run.
  LV              VG            Attr       LSize    Devices        
  POOL            snapper_thinp twi---tz--   5.00g  POOL_tdata(0)  
  POOL_meta0      snapper_thinp -wi-------   4.00m  /dev/sdc1(0)   
  POOL_meta1      snapper_thinp -wi-------   4.00m  /dev/sdb1(0)   
  POOL_meta2      snapper_thinp -wi-------   4.00m  /dev/sdb1(1281)
  [POOL_tdata]    snapper_thinp Twi-------   5.00g  /dev/sdb1(1)   
  [POOL_tmeta]    snapper_thinp ewi-------   4.00m  /dev/sdb1(1282)
  [lvol3_pmspare] snapper_thinp ewi-------   4.00m  /dev/sdb1(1283)
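
# Each repair above leaves another POOL_metaN backup LV behind. Once a repaired
# pool has been checked out, they can be removed as the warnings suggest, e.g.
# (not run here):
#   lvremove snapper_thinp/POOL_meta0 snapper_thinp/POOL_meta1 snapper_thinp/POOL_meta2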

Comment 15 Corey Marthaler 2018-01-16 18:12:05 UTC
After talking with agk: this fix is for the "only one PV in VG" scenario, where data and metadata are necessarily on the same device. Verified that no pvmove message is printed in that case. Any other scenario, with multiple PVs in the VG, will still produce a pvmove warning regardless of whether or not the meta device actually moved.

[root@host-082 ~]# vgcreate snapper_thinp /dev/sda
  Volume group "snapper_thinp" successfully created
[root@host-082 ~]# lvcreate -L 1G --type thin-pool -n POOL snapper_thinp
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  Logical volume "POOL" created.
[root@host-082 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize Pool Origin Data%  Meta%  Devices
  POOL            snapper_thinp twi-a-tz-- 1.00g             0.00   0.98   POOL_tdata(0)
  [POOL_tdata]    snapper_thinp Twi-ao---- 1.00g                           /dev/sda(1)
  [POOL_tmeta]    snapper_thinp ewi-ao---- 4.00m                           /dev/sda(257)
  [lvol0_pmspare] snapper_thinp ewi------- 4.00m                           /dev/sda(0)

[root@host-082 ~]# lvchange -an snapper_thinp
[root@host-082 ~]# lvconvert --yes --repair snapper_thinp/POOL
  WARNING: Disabling lvmetad cache for repair command.
  WARNING: LV snapper_thinp/POOL_meta0 holds a backup of the unrepaired metadata. Use lvremove when no longer required.

[root@host-082 ~]# lvs -a -o +devices
  WARNING: Not using lvmetad because a repair command was run.
  LV              VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices       
  root            rhel_host-082 -wi-ao----  <6.20g                                                     /dev/vda2(205)
  swap            rhel_host-082 -wi-ao---- 820.00m                                                     /dev/vda2(0)  
  POOL            snapper_thinp twi---tz--   1.00g                                                     POOL_tdata(0) 
  POOL_meta0      snapper_thinp -wi-------   4.00m                                                     /dev/sda(257) 
  [POOL_tdata]    snapper_thinp Twi-------   1.00g                                                     /dev/sda(1)   
  [POOL_tmeta]    snapper_thinp ewi-------   4.00m                                                     /dev/sda(0)   
  [lvol1_pmspare] snapper_thinp ewi-------   4.00m                                                     /dev/sda(258) 


3.10.0-830.el7.x86_64

lvm2-2.02.176-5.el7    BUILT: Wed Dec  6 11:13:07 CET 2017
lvm2-libs-2.02.176-5.el7    BUILT: Wed Dec  6 11:13:07 CET 2017
lvm2-cluster-2.02.176-5.el7    BUILT: Wed Dec  6 11:13:07 CET 2017
lvm2-lockd-2.02.176-5.el7    BUILT: Wed Dec  6 11:13:07 CET 2017
lvm2-python-boom-0.8.1-5.el7    BUILT: Wed Dec  6 11:15:40 CET 2017
cmirror-2.02.176-5.el7    BUILT: Wed Dec  6 11:13:07 CET 2017
device-mapper-1.02.145-5.el7    BUILT: Wed Dec  6 11:13:07 CET 2017
device-mapper-libs-1.02.145-5.el7    BUILT: Wed Dec  6 11:13:07 CET 2017
device-mapper-event-1.02.145-5.el7    BUILT: Wed Dec  6 11:13:07 CET 2017
device-mapper-event-libs-1.02.145-5.el7    BUILT: Wed Dec  6 11:13:07 CET 2017
device-mapper-persistent-data-0.7.0-0.1.rc6.el7    BUILT: Mon Mar 27 17:15:46 CEST 2017

Comment 18 errata-xmlrpc 2018-04-10 15:18:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0853

