Bug 1897050 - [IBM Power] LocalVolumeSet provisions boot partition as PV.
Summary: [IBM Power] LocalVolumeSet provisions boot partition as PV.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 4.6
Hardware: ppc64le
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.7.0
Assignee: Santosh Pillai
QA Contact: Chao Yang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-11-12 07:57 UTC by Aaruni Aggarwal
Modified: 2021-02-24 15:33 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-02-24 15:32:41 UTC
Target Upstream Version:
Embargoed:


Attachments
- All the pods, the output of oc describe for the rook-ceph operator and the ocs-operator, and the log of one of the osd-prepare pods (197.98 KB, text/plain) - 2020-11-12 07:57 UTC, Aaruni Aggarwal
- crd for local volumeset (16.90 KB, text/plain) - 2020-11-12 12:16 UTC, Aaruni Aggarwal
- Logs for local-storage operator (57.85 KB, text/plain) - 2020-11-12 12:19 UTC, Aaruni Aggarwal
- logs for diskmaker-manager (6.65 MB, text/plain) - 2020-11-12 12:37 UTC, Aaruni Aggarwal
- logs for localvolumeset-local-provisioner (69.75 KB, text/plain) - 2020-11-12 12:38 UTC, Aaruni Aggarwal


Links
GitHub openshift/local-storage-operator pull 198 (closed): Bug 1897050: localVolumeSet: keep default MinSize to 1Gi and ignore devices with `boot` substring in the partLabel - 2021-02-18 08:43:11 UTC
Red Hat Product Errata RHSA-2020:5633 - 2021-02-24 15:33:13 UTC

Description Aaruni Aggarwal 2020-11-12 07:57:33 UTC
Created attachment 1728641 [details]
All the pods, the output of oc describe for the rook-ceph operator and the ocs-operator, and the log of one of the osd-prepare pods

Description of problem (please be as detailed as possible and provide log snippets):

After creating the OCS StorageCluster from the OCP UI, the OSD pods are not coming up.

Version of all relevant components (if applicable):
OCS version - 4.6
ceph version 14.2.8-111.el8cp (2e6029d57bc594eceba4751373da6505028c2650) nautilus (stable)

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)? Yes


Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible? Yes


Can this issue be reproduced from the UI? Yes, it is seen only via the UI.


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Create the OCS operator in the openshift-storage namespace.
2. Create the Local Storage Operator in the openshift-local-storage namespace, then create a LocalVolumeSet from it (the disks appear as HDD in our case).
3. Create a StorageCluster from the OCS operator in internal-attached devices mode.


Actual results: OSD pods are not getting created.


Expected results: there should be 3 OSD pods, one for each worker node.


Additional info:

Comment 2 Santosh Pillai 2020-11-12 08:44:58 UTC
Can you please confirm whether the PVs were provisioned by the LocalVolumeSet?

Comment 3 Aaruni Aggarwal 2020-11-12 11:36:11 UTC
Yes, PVs were created:

[root@nx121-ahv ~]# oc get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                                       STORAGECLASS   REASON   AGE
local-pv-1e31f771   256Gi      RWO            Delete           Available                                                               localblock              39h
local-pv-2ac8dd7d   4Mi        RWO            Delete           Bound       openshift-storage/ocs-deviceset-localblock-1-data-0-7ttnl   localblock              39h
local-pv-494ea7c2   4Mi        RWO            Delete           Bound       openshift-storage/ocs-deviceset-localblock-0-data-0-565bg   localblock              39h
local-pv-8137c873   256Gi      RWO            Delete           Available                                                               localblock              39h
local-pv-97511c2c   4Mi        RWO            Delete           Bound       openshift-storage/ocs-deviceset-localblock-2-data-0-5kt9t   localblock              39h
local-pv-ec7f2b80   256Gi      RWO            Delete           Available                                                               localblock              39h

[root@nx121-ahv ~]# oc get pvc
NAME                                      STATUS    VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
db-noobaa-db-0                            Pending                                                 ocs-storagecluster-ceph-rbd   39h
ocs-deviceset-localblock-0-data-0-565bg   Bound     local-pv-494ea7c2   4Mi        RWO            localblock                    39h
ocs-deviceset-localblock-1-data-0-7ttnl   Bound     local-pv-2ac8dd7d   4Mi        RWO            localblock                    39h
ocs-deviceset-localblock-2-data-0-5kt9t   Bound     local-pv-97511c2c   4Mi        RWO            localblock                    39h

[root@nx121-ahv ~]# 

[root@nx121-ahv ~]# oc get sc
NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
localblock                    kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  39h
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   39h
ocs-storagecluster-ceph-rgw   openshift-storage.ceph.rook.io/bucket   Delete          Immediate              false                  39h
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   39h

[root@nx121-ahv ~]#

Comment 4 Santosh Pillai 2020-11-12 11:47:48 UTC
Something is wrong with the PVs: the bound ones all have a capacity of only 4Mi.

Comment 5 Santosh Pillai 2020-11-12 12:06:11 UTC
Please share:
1. The LocalVolumeSet CRD
2. Logs from the localvolumeset daemon
3. Local Storage Operator logs

Comment 6 Aaruni Aggarwal 2020-11-12 12:16:41 UTC
Created attachment 1728744 [details]
crd for local volumeset

Comment 7 Aaruni Aggarwal 2020-11-12 12:19:29 UTC
Created attachment 1728745 [details]
Logs for local-storage operator

Comment 8 Santosh Pillai 2020-11-12 12:33:51 UTC
(In reply to Aaruni from comment #6)
> Created attachment 1728744 [details]
> crd for local volumeset

Sorry, I meant the CR.

Comment 9 Aaruni Aggarwal 2020-11-12 12:37:27 UTC
Created attachment 1728746 [details]
logs for diskmaker-manager

Comment 10 Aaruni Aggarwal 2020-11-12 12:38:32 UTC
Created attachment 1728747 [details]
logs for localvolumeset-local-provisioner

Comment 11 Santosh Pillai 2020-11-12 13:12:50 UTC
@Rohan

Looks like an issue with the LocalVolumeSet: PVs are being created from the boot device. lsblk output from one of the nodes:

`[core@worker-0 ~]$ lsblk  --bytes  --pairs  --output  "NAME,ROTA,TYPE,SIZE,MODEL,VENDOR,RO,RM,STATE,FSTYPE,SERIAL,KNAME,PARTLABEL"
NAME="vda" ROTA="1" TYPE="disk" SIZE="42949672960" MODEL="" VENDOR="0x1af4" RO="0" RM="0" STATE="" FSTYPE="" SERIAL="" KNAME="vda" PARTLABEL=""
NAME="vda1" ROTA="1" TYPE="part" SIZE="4194304" MODEL="" VENDOR="" RO="0" RM="0" STATE="" FSTYPE="" SERIAL="" KNAME="vda1" PARTLABEL="PowerPC-PReP-boot"
NAME="vda2" ROTA="1" TYPE="part" SIZE="402653184" MODEL="" VENDOR="" RO="0" RM="0" STATE="" FSTYPE="ext4" SERIAL="" KNAME="vda2" PARTLABEL="boot"
NAME="vda4" ROTA="1" TYPE="part" SIZE="42541760000" MODEL="" VENDOR="" RO="0" RM="0" STATE="" FSTYPE="crypto_LUKS" SERIAL="" KNAME="vda4" PARTLABEL="luks_root"
NAME="vdb" ROTA="1" TYPE="disk" SIZE="512" MODEL="" VENDOR="0x1af4" RO="1" RM="0" STATE="" FSTYPE="" SERIAL="" KNAME="vdb" PARTLABEL=""
NAME="vdc" ROTA="1" TYPE="disk" SIZE="274877906944" MODEL="" VENDOR="0x1af4" RO="0" RM="0" STATE="" FSTYPE="" SERIAL="" KNAME="vdc" PARTLABEL=""
NAME="coreos-luks-root-nocrypt" ROTA="1" TYPE="dm" SIZE="42524982784" MODEL="" VENDOR="" RO="0" RM="0" STATE="running" FSTYPE="xfs" SERIAL="" KNAME="dm-0" PARTLABEL=""
` 

Notice device `vda1`: its PARTLABEL is `PowerPC-PReP-boot`, which slips past this check -
https://github.com/openshift/local-storage-operator/blob/fe65d2700416ba62e6a9fde947f1ee24c9e5b2bb/pkg/diskmaker/controllers/lvset/matcher.go#L54


Snippet from diskmaker logs:

`{"level":"info","ts":1605038528.3909867,"logger":"localvolumeset-symlink-controller","msg":"symlinking","Request.Namespace":"openshift-local-storage","Request.Name":"localblock","Device.Name":"vda1"}
{"level":"info","ts":1605038528.393325,"logger":"localvolumeset-symlink-controller","msg":"symlinking","Request.Namespace":"openshift-local-storage","Request.Name":"localblock","Device.Name":"vda1","sourcePath":"/dev/vda1","targetPath":"/mnt/local-storage/localblock/vda1"}`

Comment 12 Rohan CJ 2020-11-18 09:15:27 UTC
Will look into a fix. The workaround is to set .spec.deviceInclusionSpec.minSize to something like 100Gi.

Comment 13 Rohan CJ 2020-11-18 09:25:04 UTC
Also looking at your device layout, you could set:

  localvolumeset.spec.deviceInclusionSpec.deviceTypes = ['disk']

instead of 
   localvolumeset.spec.deviceInclusionSpec.deviceTypes = ['disk','part']
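For reference, a minimal sketch of what the two suggested workarounds could look like in the LocalVolumeSet CR (named `localblock` in the `openshift-local-storage` namespace, as seen in the diskmaker logs). The apiVersion, storageClassName and volumeMode values here are illustrative assumptions; only the deviceInclusionSpec fields are taken from this bug, and the exact minSize value depends on the environment:

  apiVersion: local.storage.openshift.io/v1alpha1
  kind: LocalVolumeSet
  metadata:
    name: localblock
    namespace: openshift-local-storage
  spec:
    storageClassName: localblock
    volumeMode: Block
    deviceInclusionSpec:
      # Only consider whole disks, not partitions (comment 13).
      deviceTypes:
        - disk
      # Skip small devices such as the 4Mi PReP boot partition (comment 12).
      minSize: 100Gi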

Comment 14 Aaruni Aggarwal 2020-11-18 11:42:52 UTC
(In reply to Rohan CJ from comment #12)
> Will look into a fix. The workaround is to set
> .spec.deviceInclusionSpec.minSize to something like 100Gi.

Now I am setting minSize equal to the size of my additional attached disk, which in my case is 256Gi. 100Gi didn't work for me.

Comment 15 Rohan CJ 2020-11-18 13:59:07 UTC
@Hemant, would it make sense to check for the PReP boot partlabel like we do for the BIOS partlabel?

Also, is this a blocker?

The partition type explained: https://tldp.org/HOWTO/IBM7248-HOWTO/faq.html#134-The-PReP-boot-partition

Comment 16 Pratik Surve 2020-11-19 11:45:05 UTC
Observed similar behavior on AWS UPI RHEL i3, where the worker nodes use a RHEL 7 image.


for i in $(oc get nodes |grep worker |awk '{print$1}'); do oc debug node/$i -- chroot /host lsblk  --bytes  --pairs  --output  "NAME,ROTA,TYPE,SIZE,MODEL,VENDOR,RO,RM,STATE,FSTYPE,SERIAL,KNAME,PARTLABEL" ; done
Creating debug namespace/openshift-debug-node-chzvr ...
Starting pod/ip-10-0-59-96us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
NAME="nvme0n1" ROTA="0" TYPE="disk" SIZE="128849018880" MODEL="Amazon Elastic Block Store              " VENDOR="" RO="0" RM="0" STATE="live" FSTYPE="" SERIAL="vol040999c2486aa0412" KNAME="nvme0n1" PARTLABEL=""
NAME="nvme0n1p1" ROTA="0" TYPE="part" SIZE="1048576" MODEL="" VENDOR="" RO="0" RM="0" STATE="" FSTYPE="" SERIAL="" KNAME="nvme0n1p1" PARTLABEL=""
NAME="nvme0n1p2" ROTA="0" TYPE="part" SIZE="128846904320" MODEL="" VENDOR="" RO="0" RM="0" STATE="" FSTYPE="xfs" SERIAL="" KNAME="nvme0n1p2" PARTLABEL=""
NAME="nvme1n1" ROTA="0" TYPE="disk" SIZE="2500000000000" MODEL="Amazon EC2 NVMe Instance Storage        " VENDOR="" RO="0" RM="0" STATE="live" FSTYPE="" SERIAL="AWSB6958A5581996F3DB" KNAME="nvme1n1" PARTLABEL=""
NAME="nvme2n1" ROTA="0" TYPE="disk" SIZE="2500000000000" MODEL="Amazon EC2 NVMe Instance Storage        " VENDOR="" RO="0" RM="0" STATE="live" FSTYPE="" SERIAL="AWSBE89B4C66EAC4F7D2" KNAME="nvme2n1" PARTLABEL=""

Removing debug pod ...
Removing debug namespace/openshift-debug-node-chzvr ...
Creating debug namespace/openshift-debug-node-b6bmd ...
Starting pod/ip-10-0-74-22us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
NAME="nvme0n1" ROTA="0" TYPE="disk" SIZE="128849018880" MODEL="Amazon Elastic Block Store              " VENDOR="" RO="0" RM="0" STATE="live" FSTYPE="" SERIAL="vol04735699405048a19" KNAME="nvme0n1" PARTLABEL=""
NAME="nvme0n1p1" ROTA="0" TYPE="part" SIZE="1048576" MODEL="" VENDOR="" RO="0" RM="0" STATE="" FSTYPE="" SERIAL="" KNAME="nvme0n1p1" PARTLABEL=""
NAME="nvme0n1p2" ROTA="0" TYPE="part" SIZE="128846904320" MODEL="" VENDOR="" RO="0" RM="0" STATE="" FSTYPE="xfs" SERIAL="" KNAME="nvme0n1p2" PARTLABEL=""
NAME="nvme1n1" ROTA="0" TYPE="disk" SIZE="2500000000000" MODEL="Amazon EC2 NVMe Instance Storage        " VENDOR="" RO="0" RM="0" STATE="live" FSTYPE="" SERIAL="AWS222216B00E36EEAF0" KNAME="nvme1n1" PARTLABEL=""
NAME="nvme2n1" ROTA="0" TYPE="disk" SIZE="2500000000000" MODEL="Amazon EC2 NVMe Instance Storage        " VENDOR="" RO="0" RM="0" STATE="live" FSTYPE="" SERIAL="AWS6A250182A01DBEA19" KNAME="nvme2n1" PARTLABEL=""

Removing debug pod ...
Removing debug namespace/openshift-debug-node-b6bmd ...
Creating debug namespace/openshift-debug-node-wgz46 ...
Starting pod/ip-10-0-81-9us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
NAME="nvme0n1" ROTA="0" TYPE="disk" SIZE="128849018880" MODEL="Amazon Elastic Block Store              " VENDOR="" RO="0" RM="0" STATE="live" FSTYPE="" SERIAL="vol0fe7c4eef64d04b9d" KNAME="nvme0n1" PARTLABEL=""
NAME="nvme0n1p1" ROTA="0" TYPE="part" SIZE="1048576" MODEL="" VENDOR="" RO="0" RM="0" STATE="" FSTYPE="" SERIAL="" KNAME="nvme0n1p1" PARTLABEL=""
NAME="nvme0n1p2" ROTA="0" TYPE="part" SIZE="128846904320" MODEL="" VENDOR="" RO="0" RM="0" STATE="" FSTYPE="xfs" SERIAL="" KNAME="nvme0n1p2" PARTLABEL=""
NAME="nvme1n1" ROTA="0" TYPE="disk" SIZE="2500000000000" MODEL="Amazon EC2 NVMe Instance Storage        " VENDOR="" RO="0" RM="0" STATE="live" FSTYPE="" SERIAL="AWS63A2B04037BCC167C" KNAME="nvme1n1" PARTLABEL=""
NAME="nvme2n1" ROTA="0" TYPE="disk" SIZE="2500000000000" MODEL="Amazon EC2 NVMe Instance Storage        " VENDOR="" RO="0" RM="0" STATE="live" FSTYPE="" SERIAL="AWS2709233678AE3C953" KNAME="nvme2n1" PARTLABEL=""

Removing debug pod ...
Removing debug namespace/openshift-debug-node-wgz46 ...


for i in $(oc get nodes |grep worker |awk '{print$1}'); do oc debug node/$i -- chroot /host lsblk ; done                                                                                   
Creating debug namespace/openshift-debug-node-rdwl4 ...
Starting pod/ip-10-0-59-96us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme0n1     259:2    0  120G  0 disk 
|-nvme0n1p1 259:3    0    1M  0 part 
`-nvme0n1p2 259:4    0  120G  0 part /
nvme1n1     259:0    0  2.3T  0 disk 
nvme2n1     259:1    0  2.3T  0 disk 

Removing debug pod ...
Removing debug namespace/openshift-debug-node-rdwl4 ...
Creating debug namespace/openshift-debug-node-d72nv ...
Starting pod/ip-10-0-74-22us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme0n1     259:2    0  120G  0 disk 
|-nvme0n1p1 259:3    0    1M  0 part 
`-nvme0n1p2 259:4    0  120G  0 part /
nvme1n1     259:0    0  2.3T  0 disk 
nvme2n1     259:1    0  2.3T  0 disk 

Removing debug pod ...
Removing debug namespace/openshift-debug-node-d72nv ...
Creating debug namespace/openshift-debug-node-vf5c7 ...
Starting pod/ip-10-0-81-9us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme0n1     259:2    0  120G  0 disk 
|-nvme0n1p1 259:3    0    1M  0 part 
`-nvme0n1p2 259:4    0  120G  0 part /
nvme1n1     259:0    0  2.3T  0 disk 
nvme2n1     259:1    0  2.3T  0 disk 

Removing debug pod ...
Removing debug namespace/openshift-debug-node-vf5c7 ...

Comment 17 Hemant Kumar 2020-11-19 12:16:05 UTC
So basically this is going to be whack-a-mole: whatever rules we put in the code will not be sufficient, and disks that shouldn't be claimed will be claimed. The first thing we need to do is add a big warning to the docs about using this feature - https://docs.openshift.com/container-platform/4.6/storage/persistent_storage/persistent-storage-local.html#local-storage-discovery_persistent-storage-local . I filed a doc bug about warning users - https://bugzilla.redhat.com/show_bug.cgi?id=1899491


@Pratik, what similar behaviour are you pointing to? Can you please clarify?


> Now I am setting minSize equal to the size of my additional attached disk, which in my case is 256Gi. 100Gi didn't work for me.

Do we know why 100Gi did not work? We should debug that and find the root cause.

> @Hemant, would it make sense to check for the PReP boot partlabel like we do for the BIOS partlabel?

@Rohan, I think things like this are bound to happen when using the LocalVolumeSet feature. I think we should document this properly instead, and if there are any bugs in the minSize filter we should fix them. I am not sure adding more rules to LSO disk filtering will help; stricter default filtering rules can also make it impossible for someone to use a disk that was filtered out. If filtering is less strict, users can always add their own filters (such as minSize in this case).

Comment 18 Aaruni Aggarwal 2020-11-19 13:27:54 UTC
(In reply to Hemant Kumar from comment #17)
> > Now I am setting minSize equal to the size of my additional attached disk, which in my case is 256Gi. 100Gi didn't work for me.
> 
> Do we know why 100Gi did not work? We should debug that and find the root cause.

The PVs were getting created from the boot device; that's why there were no OSD pods. The situation was exactly the same.

Comment 19 Aaruni Aggarwal 2020-11-19 13:32:00 UTC
(In reply to Aaruni from comment #18)
> The PVs were getting created from the boot device; that's why there were no OSD pods. The situation was exactly the same.

After adding minSize=100Gi, the PVs that ended up in the Bound state had a capacity of 4Mi.

Comment 20 Pratik Surve 2020-11-19 13:42:06 UTC
> @Pratik, what similar behaviour are you pointing to? Can you please clarify?

Here, without a minSize filter, a PV was getting created from the boot disk partition, which is only 1Mi in size. I just wanted to note on this BZ that it also affects the AWS RHEL i3 deployment. After applying the workaround of minSize=100Gi it worked.

Comment 21 Santosh Pillai 2020-11-19 13:45:50 UTC
(In reply to Aaruni from comment #19)
> After adding minSize=100Gi, the PVs that ended up in the Bound state had a capacity of 4Mi.

It could be that the previous PV was not deleted, or that the symlink for the 4Mi disk was not deleted.

Comment 22 Rohan CJ 2020-11-20 12:15:15 UTC
Removing `blocker?` since we're planning to fix this in docs, not code.

Comment 24 Jan Safranek 2020-11-20 15:57:39 UTC
Would it make sense to have minSize default to 1GiB, so that anyone who wants smaller volumes has to explicitly override it?

Comment 26 Santosh Pillai 2021-01-15 06:32:22 UTC
Notes on testing:
- minSize in LocalVolumeSet now has a default value of 1Gi. If the user doesn't provide minSize, any device below 1Gi is ignored when provisioning PVs. The user can still override minSize at any time (see the sketch below).
- PVs won't be provisioned on devices whose partLabel contains the substring `boot`, `BOOT`, `bios`, or `BIOS`.
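For completeness, a minimal sketch of an explicit minSize override for users who do want PVs on devices smaller than the new 1Gi default; the field layout follows the deviceInclusionSpec snippet in the next comment, and the 100Mi value is only an example:

  spec:
    deviceInclusionSpec:
      deviceTypes:
        - disk
      # Explicit override of the new 1Gi default (example value).
      minSize: 100Mi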

Comment 27 Chao Yang 2021-01-19 06:52:00 UTC
lsblk  --bytes  --pairs  --output  "NAME,ROTA,TYPE,SIZE,MODEL,VENDOR,RO,RM,STATE,FSTYPE,SERIAL,KNAME,PARTLABEL"
NAME="nvme0n1" ROTA="0" TYPE="disk" SIZE="128849018880" MODEL="Amazon Elastic Block Store              " VENDOR="" RO="0" RM="0" STATE="live" FSTYPE="" SERIAL="vol05eadde8fd667f4b2" KNAME="nvme0n1" PARTLABEL=""
NAME="nvme0n1p1" ROTA="0" TYPE="part" SIZE="1048576" MODEL="" VENDOR="" RO="0" RM="0" STATE="" FSTYPE="" SERIAL="" KNAME="nvme0n1p1" PARTLABEL="BIOS-BOOT"
NAME="nvme0n1p2" ROTA="0" TYPE="part" SIZE="133169152" MODEL="" VENDOR="" RO="0" RM="0" STATE="" FSTYPE="vfat" SERIAL="" KNAME="nvme0n1p2" PARTLABEL="EFI-SYSTEM"
NAME="nvme0n1p3" ROTA="0" TYPE="part" SIZE="402653184" MODEL="" VENDOR="" RO="0" RM="0" STATE="" FSTYPE="ext4" SERIAL="" KNAME="nvme0n1p3" PARTLABEL="boot"
NAME="nvme0n1p4" ROTA="0" TYPE="part" SIZE="128311082496" MODEL="" VENDOR="" RO="0" RM="0" STATE="" FSTYPE="xfs" SERIAL="" KNAME="nvme0n1p4" PARTLABEL="root"
NAME="nvme1n1" ROTA="0" TYPE="disk" SIZE="1073741824" MODEL="Amazon Elastic Block Store              " VENDOR="" RO="0" RM="0" STATE="live" FSTYPE="" SERIAL="vol00c040c72b1fc0517" KNAME="nvme1n1" PARTLABEL="

Localvolumeset minSize=0
    "spec": {
      "deviceInclusionSpec": {
        "deviceTypes": [
          "disk"
        ],
        "minSize": "0Ti"
      },

PV is not provisioned for nvme0n1p3

Comment 30 errata-xmlrpc 2021-02-24 15:32:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633

