Bug 1434893 - Wrong metadata chunk size when creating thinpool on top of MD RAID10
Summary: Wrong metadata chunk size when creating thinpool on top of MD RAID10
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1546181
 
Reported: 2017-03-22 14:42 UTC by g.danti
Modified: 2021-09-03 12:51 UTC
CC List: 11 users

Fixed In Version: lvm2-2.02.178-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-30 11:02:16 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2018:3193 0 None None None 2018-10-30 11:03:23 UTC

Description g.danti 2017-03-22 14:42:30 UTC
Description of problem:
By default, when creating a thin pool, the thin chunk size is calculated targeting a 128 MB metadata volume, and the chunk size is then adjusted accordingly. However, when creating a thin pool on top of an MD RAID10 device, the thin chunk size is (by default) set equal to the MD chunk size. With a small RAID chunk size (or a large array), the 128 MB metadata volume cannot store all the references for such small thin data chunks. The end result is that, even with no snapshots and a single thin data volume, it is possible to exhaust the metadata space well before the data volume is 100% full.
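
As a rough sanity check (my own back-of-envelope estimate, assuming on the order of 64 bytes of pool metadata per mapped data chunk, which is roughly what lvm2's sizing heuristic uses), the arithmetic below shows why a 128 MB metadata LV cannot track a 500 GB pool at a 64k chunk size, while 256k chunks fit:

# Estimated metadata in MiB, assuming ~64 bytes per mapped chunk (assumption, not from this report)
echo $(( 500 * 1024 * 1024 / 64  * 64 / 1024 / 1024 ))   # 64k chunks  -> ~500 MiB needed, far more than the 128 MiB tmeta LV
echo $(( 500 * 1024 * 1024 / 256 * 64 / 1024 / 1024 ))   # 256k chunks -> ~125 MiB needed, fits in 128 MiB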


Version-Release number of selected component (if applicable):
LVM version:     2.02.166(2)-RHEL7 (2016-11-16)
Library version: 1.02.135-RHEL7 (2016-11-16)
Driver version:  4.34.0


How reproducible:
Create an MD RAID10 device with a small chunk size but a relatively large capacity. Then create a thin pool on top and start filling the data volume (a possible fill-and-watch continuation is sketched after the steps below).


Steps to Reproduce:
# Create RAID10
truncate --size=500G disk0.img
truncate --size=500G disk1.img
truncate --size=500G disk2.img
truncate --size=500G disk3.img
losetup /dev/loop0 disk0.img
losetup /dev/loop1 disk1.img
losetup /dev/loop2 disk2.img
losetup /dev/loop3 disk3.img
mdadm --create /dev/md126 --level=10 --assume-clean --chunk=64 --raid-devices=4 /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3

# Create pv,vg and thinpool
pvcreate /dev/md126
vgcreate vg_kvm /dev/md126
lvcreate --thin vg_kvm --name thinpool -L 500G
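
A possible continuation of the steps above (my sketch, not part of the original report; the thin LV name "thinvol", the 400G virtual size and the mount point are placeholders) to actually drive the pool toward metadata exhaustion and watch Data% vs Meta%:

# Create a thin volume in the pool, put a filesystem on it and start writing
lvcreate --thin -V 400G --name thinvol vg_kvm/thinpool
mkfs.xfs /dev/vg_kvm/thinvol
mount /dev/vg_kvm/thinvol /mnt
dd if=/dev/zero of=/mnt/fill bs=1M oflag=direct

# With the 64k chunk size, Meta% climbs much faster than Data%
lvs -o lv_name,data_percent,metadata_percent vg_kvm/thinpool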


Actual results:
lvs -a -o +chunk_size
  LV               VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Chunk 
  [lvol0_pmspare]  vg_kvm    ewi------- 128.00m                                                         0 
  thinpool         vg_kvm    twi-a-tz-- 500.00g             0.00   1.58                             64.00k
  [thinpool_tdata] vg_kvm    Twi-ao---- 500.00g                                                         0 
  [thinpool_tmeta] vg_kvm    ewi-ao---- 128.00m                                                         0 
  root             vg_system -wi-ao----  50.00g                                                         0 
  swap             vg_system -wi-ao----   7.62g                                                         0

Expected results:
lvs -a -o +chunk_size
  LV               VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Chunk  
  [lvol0_pmspare]  vg_kvm    ewi------- 128.00m                                                          0 
  thinpool         vg_kvm    twi-a-tz-- 500.00g             0.00   0.42                             256.00k
  [thinpool_tdata] vg_kvm    Twi-ao---- 500.00g                                                          0 
  [thinpool_tmeta] vg_kvm    ewi-ao---- 128.00m                                                          0 
  root             vg_system -wi-ao----  50.00g                                                          0 
  swap             vg_system -wi-ao----   7.62g                                                          0


Additional info:
The correct lvs -a result above was obtained using the latest lvm2 code, as shown below:
lvs --version
  LVM version:     2.02.169(2)-git (2016-11-30)
  Library version: 1.02.138-git (2016-11-30)
  Driver version:  4.34.0
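
For systems still running the affected lvm2 build, a possible workaround (my suggestion, not taken from this report) is to override the defaults when creating the pool, either by forcing a larger chunk size or by explicitly sizing the metadata LV:

# Force a 256k thin chunk size instead of inheriting the 64k MD chunk size
lvcreate --thin vg_kvm --name thinpool -L 500G --chunksize 256k
# ...or keep the default chunking but allocate more metadata space up front
lvcreate --thin vg_kvm --name thinpool -L 500G --poolmetadatasize 512m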

Comment 2 Zdenek Kabelac 2018-04-04 13:44:50 UTC
I believe this one is already fixed in the upstream 2.02.177 build.

Comment 7 Roman Bednář 2018-06-25 12:03:58 UTC
Verified with the latest rpms. The chunk size for the thin pool is now 256k even when created on top of a 64k-chunk MD RAID.


# mdadm --detail /dev/md127 | grep -E "Chunk Size|Raid Level"
        Raid Level : raid10
        Chunk Size : 64K

# lvs -a -o lv_name,chunk_size,devices
  LV               Chunk   Devices           
  root                  0  /dev/vda2(205)    
  swap                  0  /dev/vda2(0)      
  [lvol0_pmspare]       0  /dev/md127(0)     
  thinpool         256.00k thinpool_tdata(0) 
  [thinpool_tdata]      0  /dev/md127(32)    
  [thinpool_tmeta]      0  /dev/md127(128032)



lvm2-2.02.179-2.el7

Comment 9 errata-xmlrpc 2018-10-30 11:02:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3193

