Bug 1637121 - RFE: Increase default size of pool metadata area
Summary: RFE: Increase default size of pool metadata area
Keywords:
Status: CLOSED DUPLICATE of bug 1610260
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.7-Alt
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1577173
 
Reported: 2018-10-08 16:26 UTC by Jonathan Earl Brassow
Modified: 2021-02-12 11:38 UTC
CC: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-04-03 14:39:12 UTC
Target Upstream Version:
Embargoed:



Description Jonathan Earl Brassow 2018-10-08 16:26:00 UTC
We've seen a number of reports of people running out of metadata space.  It doesn't hurt us to be a little more generous in our metadata area allocation.

I realize that we have functions to grow the metadata space on the fly, but not everyone heeds the advice to keep some VG space in reserve for such situations.  We are not here to spite them - let's do them a favor.
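
For context, the on-the-fly growth mentioned above refers to lvextend's --poolmetadatasize option. A hedged sketch (the VG and pool names are illustrative, not from this report):

  # Manually grow the thin pool's metadata LV by 1 GiB out of free VG space
  lvextend --poolmetadatasize +1G vg/thinpool

This only works while the VG still has unallocated extents, which is exactly the reserve that users tend not to keep.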

Comment 6 Zdenek Kabelac 2019-04-03 14:39:12 UTC
Will be resolved via Bug 1610260.

Metadata will be extended automatically when the user extends the thin-pool.

*** This bug has been marked as a duplicate of bug 1610260 ***
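
As a rough illustration of the behavior described in this comment (a sketch; names and numbers are illustrative):

  # After bug 1610260, extending the pool's data also grows its metadata as needed
  lvextend -L +50G vg/thinpool
  # dmeventd can likewise autoextend both data and metadata once usage crosses
  # a threshold, controlled by these lvm.conf settings:
  #   activation/thin_pool_autoextend_threshold = 70
  #   activation/thin_pool_autoextend_percent = 20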

Comment 7 Corey Marthaler 2019-06-04 18:17:20 UTC
For the record, even though this bug was marked a duplicate of bug 1610260, none of the four scenarios listed above (to be tested in rhel7.7) changed at all. If this is expected, this bug should have simply been closed WONTFIX.

3.10.0-1048.el7.x86_64

lvm2-2.02.185-1.el7    BUILT: Mon May 13 04:36:30 CDT 2019
lvm2-libs-2.02.185-1.el7    BUILT: Mon May 13 04:36:30 CDT 2019
device-mapper-1.02.158-1.el7    BUILT: Mon May 13 04:36:30 CDT 2019
device-mapper-libs-1.02.158-1.el7    BUILT: Mon May 13 04:36:30 CDT 2019
device-mapper-event-1.02.158-1.el7    BUILT: Mon May 13 04:36:30 CDT 2019
device-mapper-event-libs-1.02.158-1.el7    BUILT: Mon May 13 04:36:30 CDT 2019
device-mapper-persistent-data-0.8.1-1.el7    BUILT: Sat May  4 14:53:53 CDT 2019


# Case 1 (100G Converted to pool LV)
[root@hayes-02 ~]# lvcreate -n ThinDataLV -L 100G test
  Logical volume "ThinDataLV" created.
[root@hayes-02 ~]# lvs -a -o +devices
  LV         VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices     
  ThinDataLV test -wi-a----- 100.00g                                                     /dev/sdb1(0)
[root@hayes-02 ~]# lvconvert --type thin-pool test/ThinDataLV
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  WARNING: Converting test/ThinDataLV to thin pool's data volume with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert test/ThinDataLV? [y/n]: y
  Converted test/ThinDataLV to thin pool.
[root@hayes-02 ~]# lvs -a -o +devices
  LV                 VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices            
  ThinDataLV         test twi-a-tz-- 100.00g             0.00   10.43                            ThinDataLV_tdata(0)
  [ThinDataLV_tdata] test Twi-ao---- 100.00g                                                     /dev/sdb1(0)       
  [ThinDataLV_tmeta] test ewi-ao---- 100.00m                                                     /dev/sdb1(25600)   
  [lvol0_pmspare]    test ewi------- 100.00m                                                     /dev/sdb1(25625)   


# Case 2 (1G Converted to pool LV)
[root@hayes-02 ~]#  lvcreate -n ThinDataLV -L 1G test
  Logical volume "ThinDataLV" created.
[root@hayes-02 ~]# lvconvert --type thin-pool test/ThinDataLV
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  WARNING: Converting test/ThinDataLV to thin pool's data volume with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert test/ThinDataLV? [y/n]: y
  Converted test/ThinDataLV to thin pool.
[root@hayes-02 ~]# lvs -a -o +devices
  LV                 VG   Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices            
  ThinDataLV         test twi-a-tz-- 1.00g             0.00   10.94                            ThinDataLV_tdata(0)
  [ThinDataLV_tdata] test Twi-ao---- 1.00g                                                     /dev/sdb1(0)       
  [ThinDataLV_tmeta] test ewi-ao---- 4.00m                                                     /dev/sdb1(256)     
  [lvol0_pmspare]    test ewi------- 4.00m                                                     /dev/sdb1(257)     


# Case 3 (100G single lvcreate pool LV)
[root@hayes-02 ~]# lvcreate --thinpool ThinPool -L 100G test
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  Logical volume "ThinPool" created.
[root@hayes-02 ~]# lvs -a -o +devices
  LV               VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices          
  ThinPool         test twi-a-tz-- 100.00g             0.00   10.43                            ThinPool_tdata(0)
  [ThinPool_tdata] test Twi-ao---- 100.00g                                                     /dev/sdb1(25)    
  [ThinPool_tmeta] test ewi-ao---- 100.00m                                                     /dev/sde1(0)     
  [lvol0_pmspare]  test ewi------- 100.00m                                                     /dev/sdb1(0)     


# Case 4 (1G single cmdline pool LV)
[root@hayes-02 ~]# lvcreate --thinpool ThinPool -L 1G test
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  Logical volume "ThinPool" created.
[root@hayes-02 ~]# lvs -a -o +devices
  LV               VG   Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices          
  ThinPool         test twi-a-tz-- 1.00g             0.00   10.94                            ThinPool_tdata(0)
  [ThinPool_tdata] test Twi-ao---- 1.00g                                                     /dev/sdb1(1)     
  [ThinPool_tmeta] test ewi-ao---- 4.00m                                                     /dev/sde1(0)     
  [lvol0_pmspare]  test ewi------- 4.00m                                                     /dev/sdb1(0)
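
The observed defaults are consistent with a back-of-envelope estimate of roughly 64 bytes of metadata per chunk (an approximation, not the exact lvm2 sizing formula):

  # 100 GiB pool / 64 KiB chunks = 1,638,400 chunks * ~64 B => ~100 MiB tmeta (Cases 1 and 3)
  # 1 GiB pool / 64 KiB chunks   = 16,384 chunks    * ~64 B => ~1 MiB, rounded up to the
  #                                                  4 MiB extent/minimum size (Cases 2 and 4)
  # thin_metadata_size, from device-mapper-persistent-data, gives a comparable estimate:
  thin_metadata_size -b 64k -s 100g -m 1000 -u m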

