Bug 2086557

Summary: Thin pool in lvm operator doesn't use all disks
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Component: lvm-operator
Version: 4.11
Target Release: ODF 4.11.0
Hardware: Unspecified
OS: Unspecified
Reporter: Shay Rozen <srozen>
Assignee: Santosh Pillai <sapillai>
QA Contact: Shay Rozen <srozen>
CC: jolmomar, lgangava, mmuench, muagarwa, nibalach, ocs-bugs, odf-bz-bot, sapillai
Status: CLOSED ERRATA
Severity: high
Priority: unspecified
Type: Bug
Doc Type: No Doc Update
Last Closed: 2022-08-24 13:53:35 UTC

Description Shay Rozen 2022-05-16 12:34:13 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
When creating an LVM cluster with three 0.5T disks (1.5T total), the thin pool is only 0.75T, which leaves the remaining 0.75T (minus metadata) unusable, as thick provisioning is not supported.
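
As a quick check on the node, the thin pool size can be compared against the VG totals with standard LVM tooling (a minimal sketch; the vg1/thin-pool-1 names are taken from the logs in Additional info):

$ sudo vgs vg1 -o vg_name,vg_size,vg_free
$ sudo lvs vg1/thin-pool-1 -o lv_name,lv_size,data_percent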


Version of all relevant components (if applicable):
All components: lvm-operator (lvmo) 4.11

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
Yes, I cannot use all of the capacity that I have.

Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?

1
Is this issue reproducible?
1

Can this issue be reproduced from the UI?
Yes

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Install an LVM cluster with disks totaling size X.
2. Check the thin pool size on the node (e.g. sudo lvs).




Actual results:
With the default installation from the UI, the thin pool size is X/2, so only half of the total capacity is usable. With three 0.5T disks, the thin pool spans only two of the disks: 250G of one disk and the full 500G of the third disk cannot be used.


Expected results:
The thin pool should use all of the available capacity.


Additional info:
[core@control-plane-0 ~]$ sudo lvs -a
  LV                                   VG  Attr       LSize   Pool        Origin                               Data%  Meta%  Move Log Cpy%Sync Convert
  00c1aeee-a1d7-423c-bb8a-b5bfd8d704ad vg1 Vwi-aotz-k 100.00g thin-pool-1 ffe64027-f3ef-4783-9782-af15943bf5af 10.06                                  
  122eb968-ed1e-4db9-806a-837c1493e730 vg1 Vri---tz-k 100.00g thin-pool-1 f8cd398a-0a99-4b95-a067-47be11b81f9f                                        
  12c3aed6-ef43-4d99-9354-a4b992959ad9 vg1 Vwi-aotz-k 100.00g thin-pool-1 ee2cd56d-9126-46b7-bff0-0f928c6583aa 100.00                                 
  b691be9a-428e-4b52-9f88-e932ac9ee907 vg1 Vwi-aotz-- 100.00g thin-pool-1                                      100.00                                 
  ee2cd56d-9126-46b7-bff0-0f928c6583aa vg1 Vri---tz-k 100.00g thin-pool-1 b691be9a-428e-4b52-9f88-e932ac9ee907                                        
  f8cd398a-0a99-4b95-a067-47be11b81f9f vg1 Vwi-aotz-- 100.00g thin-pool-1                                      10.06                                  
  ffe64027-f3ef-4783-9782-af15943bf5af vg1 Vwi-aotz-k 100.00g thin-pool-1 122eb968-ed1e-4db9-806a-837c1493e730 10.06                                  
  fffdbbdd-a0ce-4eac-b104-ef42f036f9cb vg1 Vwi-aotz-k 100.00g thin-pool-1 12c3aed6-ef43-4d99-9354-a4b992959ad9 100.00                                 
  [lvol0_pmspare]                      vg1 ewi-------  96.00m                                                                                         
  thin-pool-1                          vg1 twi-aotz-- 749.80g                                                  14.68  17.63                           
  [thin-pool-1_tdata]                  vg1 Twi-ao---- 749.80g                                                                                         
  [thin-pool-1_tmeta]                  vg1 ewi-ao----  96.00m  

[core@control-plane-0 ~]$  lsblk
NAME                                               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0                                                7:0    0   100G  0 loop 
loop1                                                7:1    0   100G  0 loop 
loop2                                                7:2    0   100G  0 loop 
sda                                                  8:0    0   120G  0 disk 
├─sda1                                               8:1    0     1M  0 part 
├─sda2                                               8:2    0   127M  0 part 
├─sda3                                               8:3    0   384M  0 part /boot
└─sda4                                               8:4    0 119.5G  0 part /sysroot
sdb                                                  8:16   0   500G  0 disk 
├─vg1-thin--pool--1_tmeta                          253:0    0    96M  0 lvm  
│ └─vg1-thin--pool--1-tpool                        253:2    0 749.8G  0 lvm  
│   ├─vg1-thin--pool--1                            253:3    0 749.8G  1 lvm  
│   ├─vg1-f8cd398a--0a99--4b95--a067--47be11b81f9f 253:4    0   100G  0 lvm  /var/lib/kubelet/pods/74baf36e-60a5-4cb5-a674-19c89c2acd14/volumes/kubernetes.io~csi/pvc-6133606f-aabd-4133-9a34-cdb691819878/mount
│   ├─vg1-ffe64027--f3ef--4783--9782--af15943bf5af 253:5    0   100G  0 lvm  /var/lib/kubelet/pods/04f088f0-a3a5-4ee2-9029-d7f8606abc6f/volumes/kubernetes.io~csi/pvc-76285a3d-6cd3-4284-b3cc-2cf6a7ba434a/mount
│   ├─vg1-00c1aeee--a1d7--423c--bb8a--b5bfd8d704ad 253:6    0   100G  0 lvm  /var/lib/kubelet/pods/1ac9ea5f-25e0-4cf3-b5bc-d22503d72f5a/volumes/kubernetes.io~csi/pvc-78ffe877-587f-44c3-81ee-2674b59a1f2d/mount
│   ├─vg1-b691be9a--428e--4b52--9f88--e932ac9ee907 253:7    0   100G  0 lvm  
│   ├─vg1-12c3aed6--ef43--4d99--9354--a4b992959ad9 253:8    0   100G  0 lvm  
│   └─vg1-fffdbbdd--a0ce--4eac--b104--ef42f036f9cb 253:9    0   100G  0 lvm  
└─vg1-thin--pool--1_tdata                          253:1    0 749.8G  0 lvm  
  └─vg1-thin--pool--1-tpool                        253:2    0 749.8G  0 lvm  
    ├─vg1-thin--pool--1                            253:3    0 749.8G  1 lvm  
    ├─vg1-f8cd398a--0a99--4b95--a067--47be11b81f9f 253:4    0   100G  0 lvm  /var/lib/kubelet/pods/74baf36e-60a5-4cb5-a674-19c89c2acd14/volumes/kubernetes.io~csi/pvc-6133606f-aabd-4133-9a34-cdb691819878/mount
    ├─vg1-ffe64027--f3ef--4783--9782--af15943bf5af 253:5    0   100G  0 lvm  /var/lib/kubelet/pods/04f088f0-a3a5-4ee2-9029-d7f8606abc6f/volumes/kubernetes.io~csi/pvc-76285a3d-6cd3-4284-b3cc-2cf6a7ba434a/mount
    ├─vg1-00c1aeee--a1d7--423c--bb8a--b5bfd8d704ad 253:6    0   100G  0 lvm  /var/lib/kubelet/pods/1ac9ea5f-25e0-4cf3-b5bc-d22503d72f5a/volumes/kubernetes.io~csi/pvc-78ffe877-587f-44c3-81ee-2674b59a1f2d/mount
    ├─vg1-b691be9a--428e--4b52--9f88--e932ac9ee907 253:7    0   100G  0 lvm  
    ├─vg1-12c3aed6--ef43--4d99--9354--a4b992959ad9 253:8    0   100G  0 lvm  
    └─vg1-fffdbbdd--a0ce--4eac--b104--ef42f036f9cb 253:9    0   100G  0 lvm  
sdc                                                  8:32   0   500G  0 disk 
└─vg1-thin--pool--1_tdata                          253:1    0 749.8G  0 lvm  
  └─vg1-thin--pool--1-tpool                        253:2    0 749.8G  0 lvm  
    ├─vg1-thin--pool--1                            253:3    0 749.8G  1 lvm  
    ├─vg1-f8cd398a--0a99--4b95--a067--47be11b81f9f 253:4    0   100G  0 lvm  /var/lib/kubelet/pods/74baf36e-60a5-4cb5-a674-19c89c2acd14/volumes/kubernetes.io~csi/pvc-6133606f-aabd-4133-9a34-cdb691819878/mount
    ├─vg1-ffe64027--f3ef--4783--9782--af15943bf5af 253:5    0   100G  0 lvm  /var/lib/kubelet/pods/04f088f0-a3a5-4ee2-9029-d7f8606abc6f/volumes/kubernetes.io~csi/pvc-76285a3d-6cd3-4284-b3cc-2cf6a7ba434a/mount
    ├─vg1-00c1aeee--a1d7--423c--bb8a--b5bfd8d704ad 253:6    0   100G  0 lvm  /var/lib/kubelet/pods/1ac9ea5f-25e0-4cf3-b5bc-d22503d72f5a/volumes/kubernetes.io~csi/pvc-78ffe877-587f-44c3-81ee-2674b59a1f2d/mount
    ├─vg1-b691be9a--428e--4b52--9f88--e932ac9ee907 253:7    0   100G  0 lvm  
    ├─vg1-12c3aed6--ef43--4d99--9354--a4b992959ad9 253:8    0   100G  0 lvm  
    └─vg1-fffdbbdd--a0ce--4eac--b104--ef42f036f9cb 253:9    0   100G  0 lvm  
sdd                                                  8:48   0   500G  0 disk 
sr0                                                 11:0    1 101.8M  0 rom  

Note that sdd does not hold any part of the thin pool.
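
A per-PV view shows the same thing (a sketch, assuming sdd was added to vg1 as a PV): pvs would be expected to report sdb fully allocated, sdc partially allocated, and sdd entirely free.

$ sudo pvs -o pv_name,vg_name,pv_size,pv_free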

Comment 3 Shay Rozen 2022-05-16 14:07:48 UTC
When installing via the CLI without specifying sizePercent, the thin pool is created with 75% of the VG.
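
For reference, this is roughly how the thin pool size is expressed in the LVMCluster CR (a minimal sketch assuming the lvm.topolvm.io/v1alpha1 schema; the name, namespace, and the value 90 are illustrative, not the shipped defaults):

cat <<EOF | oc apply -f -
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
      - name: vg1
        thinPoolConfig:
          name: thin-pool-1
          sizePercent: 90       # percentage of the VG allocated to the thin pool
          overprovisionRatio: 10
EOF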

Comment 5 Mudit Agarwal 2022-05-27 13:28:27 UTC
Should we document it?

Comment 6 Shay Rozen 2022-06-13 09:58:50 UTC
Talked to Nithya and we should document it. The decision is to raise the default value to 90%, which was fixed in another BZ. The customer should know that 10% of their storage will not be used.

Comment 7 N Balachandran 2022-06-17 16:25:44 UTC
The following PRs will address this BZ:

1. Set the default value of the thinpool size to 90% of the VG
https://github.com/red-hat-storage/lvm-operator/pull/213

2. Send VG alerts only if the thinpool usage itself crosses a certain level.
https://github.com/red-hat-storage/lvm-operator/pull/205

Comment 13 errata-xmlrpc 2022-08-24 13:53:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:6156