Bug 1259957

Summary: Attempting to convert a striped origin to cache with a large non-power-of-two VG extent size fails
Product: Red Hat Enterprise Linux 7
Reporter: Corey Marthaler <cmarthal>
Component: lvm2
lvm2 sub component: Cache Logical Volumes
Assignee: Zdenek Kabelac <zkabelac>
QA Contact: cluster-qe <cluster-qe>
Status: CLOSED WONTFIX
Severity: unspecified
Priority: unspecified
CC: agk, heinzm, jbrassow, msnitzer, prajnoha, zkabelac
Version: 7.2
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Last Closed: 2020-12-15 07:36:34 UTC
Type: Bug
Regression: ---

Attachments:
  verbose lvconvert attempt (flags: none)

Description Corey Marthaler 2015-09-03 22:55:25 UTC
Description of problem:
[root@host-110 ~]# pvcreate --dataalignment 87040k /dev/sdb1 /dev/sdb2 /dev/sdd1 /dev/sdd2 /dev/sde1 /dev/sde2 /dev/sdf2 /dev/sdh1 /dev/sdh2
  Physical volume "/dev/sdb1" successfully created
  Physical volume "/dev/sdb2" successfully created
  Physical volume "/dev/sdd1" successfully created
  Physical volume "/dev/sdd2" successfully created
  Physical volume "/dev/sde1" successfully created
  Physical volume "/dev/sde2" successfully created
  Physical volume "/dev/sdf2" successfully created
  Physical volume "/dev/sdh1" successfully created
  Physical volume "/dev/sdh2" successfully created
[root@host-110 ~]# vgcreate --physicalextentsize 21760k cache_sanity /dev/sdb1 /dev/sdb2 /dev/sdd1 /dev/sdd2 /dev/sde1 /dev/sde2 /dev/sdf2 /dev/sdh1 /dev/sdh2
  Volume group "cache_sanity" successfully created
[root@host-110 ~]# lvcreate -i 3 -L 4G -n corigin cache_sanity /dev/sdb2 /dev/sde2 /dev/sdh2
  Using default stripesize 64.00 KiB.
  Rounding up size to full physical extent 4.01 GiB
  Rounding size (193 extents) up to stripe boundary size (195 extents).
  Logical volume "corigin" created.
[root@host-110 ~]# lvcreate -i 3 -L 2G -n 21760 cache_sanity /dev/sdf2 /dev/sdd2 /dev/sdb1
  Using default stripesize 64.00 KiB.
  Rounding up size to full physical extent 2.01 GiB
  Rounding size (97 extents) up to stripe boundary size (99 extents).
  Logical volume "21760" created.
[root@host-110 ~]# lvcreate -i 3 -L 12M -n 21760_meta cache_sanity /dev/sdf2 /dev/sdd2 /dev/sdb1
  Using default stripesize 64.00 KiB.
  Rounding up size to full physical extent 21.25 MiB
  Rounding size (1 extents) up to stripe boundary size (3 extents).
  Logical volume "21760_meta" created.
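For reference, the extent counts lvm2 reports above follow from simple ceiling arithmetic (a sketch of the rounding, not lvm2's actual code): each requested size is rounded up to whole 21760 KiB physical extents, then up to a multiple of the 3-way stripe count.

```python
import math

EXTENT_KIB = 21760  # vgcreate --physicalextentsize 21760k
STRIPES = 3         # lvcreate -i 3

def extents(size_kib, stripes=STRIPES, extent_kib=EXTENT_KIB):
    """Round a size up to whole extents, then up to the stripe boundary."""
    n = math.ceil(size_kib / extent_kib)     # full physical extents
    return math.ceil(n / stripes) * stripes  # multiple of stripe count

# -L 4G corigin:       193 extents -> 195 (4.05 GiB in lvs below)
print(extents(4 * 1024 * 1024))  # 195
# -L 2G pool data:      97 extents -> 99 (2.05 GiB)
print(extents(2 * 1024 * 1024))  # 99
# -L 12M pool metadata:  1 extent  -> 3  (63.75 MiB)
print(extents(12 * 1024))        # 3
```

Note that 195 extents of 21760 KiB is 4,243,200 KiB, i.e. the 4.05 GiB shown for corigin in the lvs output below.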
[root@host-110 ~]# lvconvert --yes --type cache-pool --cachepolicy smq --cachemode writethrough -c 64 --poolmetadata cache_sanity/21760_meta cache_sanity/21760
  WARNING: Converting logical volume cache_sanity/21760 and cache_sanity/21760_meta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted cache_sanity/21760 to cache pool.
[root@host-110 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize   Pool Origin Data%  Meta%  Devices
  21760           cache_sanity  Cwi---C---   2.05g                           21760_cdata(0)
  [21760_cdata]   cache_sanity  Cwi-------   2.05g                           /dev/sdf2(0),/dev/sdd2(0),/dev/sdb1(0)
  [21760_cmeta]   cache_sanity  ewi-------  63.75m                           /dev/sdf2(33),/dev/sdd2(33),/dev/sdb1(33)
  corigin         cache_sanity  -wi-a-----   4.05g                           /dev/sdb2(0),/dev/sde2(0),/dev/sdh2(0)
  [lvol0_pmspare] cache_sanity  ewi-------  63.75m                           /dev/sdb1(34)
[root@host-110 ~]# vgs
  VG            #PV #LV #SN Attr   VSize   VFree  
  cache_sanity    9   2   0 wz--n- 111.69g 105.46g
[root@host-110 ~]# pvscan
  PV /dev/sdb1   VG cache_sanity    lvm2 [12.41 GiB / 11.64 GiB free]
  PV /dev/sdb2   VG cache_sanity    lvm2 [12.41 GiB / 11.06 GiB free]
  PV /dev/sdd1   VG cache_sanity    lvm2 [12.41 GiB / 12.41 GiB free]
  PV /dev/sdd2   VG cache_sanity    lvm2 [12.41 GiB / 11.70 GiB free]
  PV /dev/sde1   VG cache_sanity    lvm2 [12.41 GiB / 12.41 GiB free]
  PV /dev/sde2   VG cache_sanity    lvm2 [12.41 GiB / 11.06 GiB free]
  PV /dev/sdf2   VG cache_sanity    lvm2 [12.41 GiB / 11.70 GiB free]
  PV /dev/sdh1   VG cache_sanity    lvm2 [12.41 GiB / 12.41 GiB free]
  PV /dev/sdh2   VG cache_sanity    lvm2 [12.41 GiB / 11.06 GiB free]
[root@host-110 ~]# lvconvert --yes --type cache --cachepool cache_sanity/21760 cache_sanity/corigin
  device-mapper: reload ioctl on (253:2) failed: No space left on device
  Failed to lock logical volume cache_sanity/corigin.

[10940.796105] device-mapper: space map metadata: unable to allocate new metadata block
[10940.800345] device-mapper: table: 253:2: cache: Error creating metadata object
[10940.801314] device-mapper: ioctl: error adding target to table
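For scale, the failing table load does not look like a genuinely undersized metadata volume. Using the sizing guide from the kernel dm-cache documentation (roughly 4 MiB plus 16 bytes per cache chunk; treat these constants as approximations), the 2.05 GiB pool with 64 KiB chunks needs only a few MiB of metadata, far below the 63.75 MiB cmeta LV, which suggests the ENOSPC comes from the non-power-of-two extent/chunk interaction rather than real space exhaustion.

```python
# Rough dm-cache metadata sizing check (heuristic from the kernel
# dm-cache docs: ~4 MiB + 16 bytes per cache chunk; approximate).
DATA_KIB = 99 * 21760    # 21760_cdata: 99 extents of 21760 KiB
CHUNK_KIB = 64           # lvconvert -c 64
META_KIB = 63.75 * 1024  # 21760_cmeta size reported by lvs

chunks = DATA_KIB // CHUNK_KIB
needed_kib = 4 * 1024 + chunks * 16 / 1024
print(f"{chunks} chunks, ~{needed_kib:.1f} KiB metadata needed, "
      f"{META_KIB:.0f} KiB provided")
```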


Version-Release number of selected component (if applicable):
3.10.0-313.el7.x86_64
lvm2-2.02.129-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
lvm2-libs-2.02.129-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
lvm2-cluster-2.02.129-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
device-mapper-1.02.106-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
device-mapper-libs-1.02.106-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
device-mapper-event-1.02.106-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
device-mapper-event-libs-1.02.106-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
device-mapper-persistent-data-0.5.5-1.el7    BUILT: Thu Aug 13 09:58:10 CDT 2015
cmirror-2.02.129-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
sanlock-3.2.4-1.el7    BUILT: Fri Jun 19 12:48:49 CDT 2015
sanlock-lib-3.2.4-1.el7    BUILT: Fri Jun 19 12:48:49 CDT 2015


How reproducible:
Every time

Comment 1 Corey Marthaler 2015-09-03 22:59:17 UTC
Created attachment 1070154 [details]
verbose lvconvert attempt

Comment 5 RHEL Program Management 2020-12-15 07:36:34 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.