Bug 2040313 - [RFE] Allow users to control the disks to be consumed during installation and scaling of the ODF cluster
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: management-console
Version: 4.9
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Pranshu Srivastava
QA Contact: Elad
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2022-01-13 12:53 UTC by Bipin Kunal
Modified: 2023-08-09 16:46 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-02-01 07:01:40 UTC
Embargoed:
afrahman: needinfo? (rojoseph)



Description Bipin Kunal 2022-01-13 12:53:22 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

As of now, users and admins have no control over the number of disks consumed by ODF at installation or scaling time.

During installation:

  - With Internal Mode: the console only allows choosing the OSD size and automatically deploys 3 OSDs of that size; there is no way to start with more than 3 disks/OSDs. Scaling likewise does not allow adding more than 3 disks at a time, so with an OSD size of 2 TiB I can only grow in steps of 6 TiB (2 TiB * 3) of raw capacity. There is no way to simply increase capacity to a value of the admin's choice, even though scaling could still be restricted to multiples of (OSD size * 3). See the sketch after this list.

  - With Internal-Attached Mode: at deployment and scaling time it consumes every disk on the storage nodes. Assuming that all disks are meant for ODF is quite a wrong assumption, so users should be allowed to control which disks are consumed and which are not. In the UI we could discover all disks and give users a choice of selection; depending on whether the cluster uses replica 3 or replica 1 (flexible scaling), constraints can be placed on the number of disks chosen.
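
For reference, a minimal sketch of how Internal Mode capacity is modeled today, based on the ocs-operator's StorageCluster CR (the storage class name and size below are illustrative, and the exact spec may vary by version). Each storageDeviceSets entry produces count * replica OSDs, which is why "Add Capacity" always grows the cluster in steps of 3 OSDs:

  apiVersion: ocs.openshift.io/v1
  kind: StorageCluster
  metadata:
    name: ocs-storagecluster
    namespace: openshift-storage
  spec:
    storageDeviceSets:
    - name: ocs-deviceset
      count: 1            # "Add Capacity" bumps this by 1; each increment adds "replica" OSDs
      replica: 3          # fixed at 3, so capacity always grows by (OSD size * 3)
      dataPVCTemplate:
        spec:
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 2Ti       # the OSD size chosen at install time (2 TiB, as in the example above)
          storageClassName: gp2  # illustrative; any block-mode storage class
          volumeMode: Block

Letting the admin pick an arbitrary count at install or scale time would address the Internal Mode half of this RFE while keeping the replica-3 constraint intact.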

Version of all relevant components (if applicable):
4.9


Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?

I can still work with the existing mechanism, but the lack of this flexibility adds unnecessary constraints.

Keep in mind that every time we add new disks, data rebalancing happens. This adds a burden in Internal Mode if I need to run "Add Capacity" multiple times to grow the cluster to the desired size: with 2 TiB OSDs, for example, adding 24 TiB of raw capacity takes four separate operations, each triggering its own rebalance.

Consuming all the disks in Internal-Attached Mode is, in my opinion, an even more unpleasant experience. It might work in the majority of cases, but assuming every disk is meant for ODF is not a robust experience.

Is there any workaround available to the best of your knowledge?
I guess the workaround here is to install and expand via the CLI; a sketch follows.
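
As a rough sketch of that CLI workaround for Internal-Attached Mode, assuming the Local Storage Operator is available: a LocalVolume CR takes an explicit list of device paths instead of discovering everything, so only the listed disks become PVs (the device paths and storage class name below are made up for illustration):

  apiVersion: local.storage.openshift.io/v1
  kind: LocalVolume
  metadata:
    name: local-block
    namespace: openshift-local-storage
  spec:
    nodeSelector:
      nodeSelectorTerms:
      - matchExpressions:
        - key: cluster.ocs.openshift.io/openshift-storage
          operator: In
          values:
          - ""
    storageClassDevices:
    - storageClassName: localblock   # illustrative name
      volumeMode: Block
      devicePaths:                   # only these disks are consumed; all others are left alone
      - /dev/sdb
      - /dev/sdc

The resulting localblock PVs can then back the StorageCluster's dataPVCTemplate, so ODF only ever touches the disks listed here; this RFE asks for the same selectivity in the management console.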

