Description of problem:
In a hosted engine environment, the hosted_storage storage domain must be imported before the hosted engine itself can be imported. However, once this is done, the hosted_storage storage domain appears as a target whenever you create vdisks.
Since the goal is to keep hosted_storage separate from user VMs, this storage domain should be masked in the UI to prevent users from allocating vdisks to the 'management' storage domain.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Import the hosted_storage storage domain.
2. Create a VM and add a vdisk.
3. Note that the list of targets for vdisk allocation includes the hosted_storage domain.
Actual results:
The user can allocate vdisks to the hosted_storage domain.
Expected results:
hosted_storage should not be available as a storage domain to select when creating (or moving) vdisks (in the VM tab or the Disks tab, for example).
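The expected filtering could be sketched as follows. This is a minimal illustration only; the `StorageDomain` class, the `selectable_domains` helper, and the `HOSTED_STORAGE` constant are hypothetical names for this sketch and do not correspond to actual oVirt engine code:

```python
# Hypothetical sketch: mask the hosted engine storage domain from the
# list of vdisk-allocation targets shown in the UI. All names here are
# illustrative, not real oVirt identifiers.
from dataclasses import dataclass

HOSTED_STORAGE = "hosted_storage"  # name of the hosted engine SD

@dataclass
class StorageDomain:
    name: str
    status: str  # e.g. "active", "maintenance"

def selectable_domains(domains):
    """Return the domains a user may pick as a vdisk target:
    active domains, excluding the hosted engine domain."""
    return [d for d in domains
            if d.status == "active" and d.name != HOSTED_STORAGE]

domains = [
    StorageDomain("hosted_storage", "active"),
    StorageDomain("data_sd1", "active"),
    StorageDomain("iso_sd", "maintenance"),
]
print([d.name for d in selectable_domains(domains)])  # → ['data_sd1']
```

With this kind of filter, hosted_storage never appears in the target list even while it remains a fully imported, active domain.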
*** Bug 1309151 has been marked as a duplicate of this bug. ***
AFAIK, this isn't the correct behavior moving forwards - we'd like to use this domain as any other domain.
(In reply to Allon Mureinik from comment #2)
> AFAIK, this isn't the correct behavior moving forwards - we'd like to use
> this domain as any other domain.
I guess I'm looking at this from a reliability/availability standpoint. By isolating the engine SD, we can strip out some of the operational 'interference' that may otherwise affect the availability or responsiveness of the engine.
Here are some of the benefits I was thinking of, either generic or specific to hyperconverged setups:
- you insulate the management platform from user IOPS and disk contention, potentially improving responsiveness
- having a separate storage domain would allow the admin to use different caching/tiering/QoS/data protection schemes from their storage array specifically for the engine
- SDs that use thin provisioning by default can be overcommitted, leading to ENOSPC errors. With the engine running in its own SD, even if this is allowed to happen, the management plane will always be available to the admin for diagnostics and troubleshooting.
- with a smaller dedicated SD for hosted engine, recovery actions are quicker, ensuring the management plane is as stable and reliable as possible
- implementing geo-replication features becomes easier since the hosted engine SD could be ignored.
- in the case of a glusterfs environment, you can also increase reliability by using thick provisioning for the 'bricks' that support the engine - guaranteeing space.
- a separate SD allows different lvmcache configurations and policies to be applied. For example, you could use SSDs with the engine SD in writethrough mode, and SSDs with the other SDs in writeback. This gives the engine the read benefit, but if an SSD goes 'bang' it *doesn't* present the opportunity for data loss, again increasing the reliability of the engine.
I understand the desire to treat the hosted engine SD as any other, but for me the reliability of the engine should take precedence.
A simple solution would be based on permissions.
This is something we already have today. If your user has no permissions on the HE SD, he won't be able to see it.
The permission solution still allows superusers to override that behaviour. Do we want that? In setups where the admins are superusers, this solution will not prevent abuse of the HE SD.
Another option is to use a config item for the allowed operations on HE SD.
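Such a config item might gate operations roughly as follows. This is a hypothetical sketch only; the `HE_SD_ALLOWED_OPERATIONS` key and the operation names are invented for illustration and are not real oVirt engine configuration:

```python
# Hypothetical sketch: a config item whitelisting the operations
# permitted on the hosted engine SD. The key name and operation names
# are illustrative, not real engine-config values.
HE_SD_ALLOWED_OPERATIONS = {"view", "extend"}  # assumed config value

def operation_allowed(domain_name, operation, he_domain="hosted_storage"):
    """Allow any operation on ordinary SDs; restrict the HE SD to the
    configured whitelist."""
    if domain_name != he_domain:
        return True
    return operation in HE_SD_ALLOWED_OPERATIONS

print(operation_allowed("data_sd1", "create_disk"))        # → True
print(operation_allowed("hosted_storage", "create_disk"))  # → False
print(operation_allowed("hosted_storage", "extend"))       # → True
```

Unlike the permissions approach, a check like this would apply uniformly, including to superusers, while still letting the whitelist be relaxed centrally if the domain is ever meant to be used as any other.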
Another option was suggested in Bug 1354200: create a new data type.
*** Bug 1336200 has been marked as a duplicate of this bug. ***
*** This bug has been marked as a duplicate of bug 1354200 ***