1. Proposed title of this feature request

Select between Eager Zeroed or Lazy Zeroed disk format for thick provisioning in VMware

3. What is the nature and description of the request?

In the CloudForms provisioning dialogs and via the Automate model, the following disk format options appear: 1) Unchanged/Default, 2) Thin, 3) Thick. However, VMware has two thick provisioning options: Lazy Zeroed and Eager Zeroed. I have some experimental results on which one is used, but which of these is guaranteed to be selected when I pick "thick"? I do not want this to change in the future without knowing. And given the answer to that question: how do I provision a VM with the other thick disk format?

CFME currently provisions "thick" as Lazy Zeroed; selecting Eager Zeroed is not in the code base (the API does this by setting the 'eagerlyScrub' flag to true).

4. Why does the customer need this? (List the business requirements here)

So they can select between the two different thick formats.

5. How would the customer like to achieve this? (List the functional requirements here)

Add check boxes to the provision dialog.

6. For each functional requirement listed, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.

7. Is there already an existing RFE upstream or in Red Hat Bugzilla?

No

8. Does the customer have any specific time-line dependencies and which release would they like to target (i.e. RHEL5, RHEL6)?

No

9. Is the sales team involved in this request and do they have any additional input?

No

10. List any affected packages or components.

None known

11. Would the customer be able to assist in testing this functionality if implemented?

Yes
Hi Greg, Item 5 above lists adding check boxes in the provision dialog. What changes are necessary for this enhancement? Thanks, Tina
Changes required:
1) Add the new field name and properties to the VMware dialogs.
2) Add logic to the workflow to show/hide the field based on the selection of the Disk Format option.
3) Update the UI to display the new checkbox field.
4) Update the VMware provisioning internal state machine to process the new field when building the VMware ConfigSpec.
Created attachment 1550746 [details] Provision with lazy and eager

Will updating the dialog like this work?
The attached screenshot above is a cleaner solution than what I proposed in comment #3. (Note that the option is "Eager", not "easy" as the image shows.) The dialog field option should be updated so that the existing "thick" option remains, but its display value is changed to read "Thick - Lazy Zero". This preserves backward compatibility. Then a new "thick_eager" option should be added. There are multiple VMware dialogs that should be updated, as follows:

:disk_format:
  :values:
    thick: Thick - Lazy Zero
    thick_eager: Thick - Eager Zero
    thin: Thin
    unchanged: Default

Doing it this way replaces steps 1-3 from comment #3. Step 4 is still required.

See https://pubs.vmware.com/vsphere-5-5/index.jsp?topic=%2Fcom.vmware.wssdk.apiref.doc%2Fvim.vm.RelocateSpec.DiskLocator.html for property "diskBackingInfo".

The field is processed in the VMware repo here: https://github.com/ManageIQ/manageiq-providers-vmware/blob/c150def3a8a08685b8c6457db716050f7305872e/app/models/manageiq/providers/vmware/infra_manager/provision/cloning.rb#L187-L192
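For step 4, here is a minimal sketch of how the dialog's disk_format value could map onto the VMware disk backing flags. The method name and plain-hash representation are illustrative assumptions, not the actual cloning.rb code:

```ruby
# Illustrative mapping from the dialog's disk_format value to VMware
# disk backing options. Method name is hypothetical; the real code
# lives in manageiq-providers-vmware's provision/cloning.rb.
def backing_options_for(disk_format)
  case disk_format
  when "thin"
    { :thinProvisioned => true }
  when "thick"
    # Thick - Lazy Zero: not thin, not eagerly scrubbed
    { :thinProvisioned => false, :eagerlyScrub => false }
  when "thick_eager"
    # Thick - Eager Zero: eagerlyScrub true, per the vSphere API
    { :thinProvisioned => false, :eagerlyScrub => true }
  else
    {} # "unchanged"/Default: leave the template's backing as-is
  end
end

puts backing_options_for("thick_eager").inspect
```

The key point is that both thick variants share :thinProvisioned => false; only the :eagerlyScrub flag distinguishes Lazy from Eager Zeroed.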
Setup necessary to start this work (all merged):
https://github.com/ManageIQ/manageiq-providers-vmware/pull/384
https://github.com/ManageIQ/manageiq/pull/18614
https://github.com/ManageIQ/manageiq/pull/18617

WIP: https://github.com/ManageIQ/manageiq-providers-vmware/pull/385

Status: from the docs,

  disk* VirtualMachineRelocateSpecDiskLocator[]
  An optional list that allows specifying the datastore location for each virtual disk.

this is a per-disk setting, and thus possibly requires buy-in from GM and Adam as to how to do this properly. I believe it will require refactoring.
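To illustrate why the per-disk nature matters: the RelocateSpec carries one DiskLocator entry per virtual disk, each with its own backing info. The sketch below uses plain hashes and a hypothetical helper name, not the actual RbVmomi/VimTypes calls:

```ruby
# Sketch (plain hashes, hypothetical helper) of building one
# VirtualMachineRelocateSpecDiskLocator entry per virtual disk,
# which is why eagerlyScrub has to be applied disk-by-disk.
def build_disk_locators(disk_keys, datastore_ref, eager_zero)
  disk_keys.map do |key|
    {
      :diskId    => key,            # the virtual disk's device key
      :datastore => datastore_ref,  # target datastore reference
      :diskBackingInfo => {
        :thinProvisioned => false,
        :eagerlyScrub    => eager_zero # true => Thick - Eager Zero
      }
    }
  end
end

# A VM with two disks, both relocated as eager zeroed:
locators = build_disk_locators([2000, 2001], "datastore-123", true)
```

If the two disks ever need different formats, the per-entry structure above is where that choice would have to live, which is the refactoring concern.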
Greg, Adam, How do we proceed with this work? Thanks, Tina
Yo everyone, I'm still working on this.
https://github.com/ManageIQ/vmware_web_service/pull/63
Since provisioning with the eagerlyScrub property set to true takes a nontrivial amount of time, we may want to document that anyone using lifecycle provisioning from the Automate engine will have to raise max_retries in the engine to accommodate the extra time. In my tests it took just over five hundred retries, well beyond the out-of-the-box maximum of 100. (For testing purposes only, I disabled the enforce_max_retries method to let the provisioning complete, but that's probably not the right solution.)
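A back-of-envelope sketch of the retry budget problem (the retry interval here is an assumed illustrative value, not a documented default):

```ruby
# Why the default Automate max_retries of 100 can be too small for
# eager-zeroed disks. Numbers besides max_retries are illustrative.
retry_interval_s    = 60   # assumed wait between state retries
default_max_retries = 100  # OOTB engine maximum
observed_retries    = 500  # roughly what testing showed

budget_minutes = (default_max_retries * retry_interval_s) / 60
needed_minutes = (observed_retries * retry_interval_s) / 60
puts "budget: #{budget_minutes} min, needed: ~#{needed_minutes} min"
```

With those assumptions the engine gives up after about 100 minutes of waiting, while zeroing a large disk can need several times that, hence the suggestion to document raising max_retries.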
New commits detected on ManageIQ/manageiq-providers-vmware/ivanchuk:

https://github.com/ManageIQ/manageiq-providers-vmware/commit/80d2c301c8f0e222476eb73b88fb72e27150e2e8

commit 80d2c301c8f0e222476eb73b88fb72e27150e2e8
Author:     Adam Grare <agrare>
AuthorDate: Thu Aug 1 10:01:50 2019 -0400
Commit:     Adam Grare <agrare>
CommitDate: Thu Aug 1 10:01:50 2019 -0400

    Merge pull request #385 from d-m-u/adding_prov_option_disks

    Add thick_eager disk_format option

    (cherry picked from commit 88a141130e7f7217b5c676ba93ffd8d28fa92a30)

    https://bugzilla.redhat.com/show_bug.cgi?id=1633867

 content/miq_dialogs/miq_provision_vmware_cluster_dialogs_template.yaml  | 3 +-
 content/miq_dialogs/miq_provision_vmware_dialogs_clone_to_template.yaml | 3 +-
 content/miq_dialogs/miq_provision_vmware_dialogs_clone_to_vm.yaml       | 3 +-
 content/miq_dialogs/miq_provision_vmware_dialogs_template.yaml          | 3 +-
 content/miq_dialogs/miq_provision_vmware_folder_dialogs_template.yaml   | 3 +-
 5 files changed, 10 insertions(+), 5 deletions(-)

https://github.com/ManageIQ/manageiq-providers-vmware/commit/204173bb100052ea26e3ac346bf9979e25b2ac88

commit 204173bb100052ea26e3ac346bf9979e25b2ac88
Author:     Adam Grare <agrare>
AuthorDate: Thu Aug 1 09:57:36 2019 -0400
Commit:     Adam Grare <agrare>
CommitDate: Thu Aug 1 09:57:36 2019 -0400

    Merge pull request #413 from d-m-u/adding_disk_relocate_spec

    Add disk relocate spec for eagerlyScrub backing option

    (cherry picked from commit fc2e1cfc98f7a95565852acd4ae6ac860bc005eb)

    https://bugzilla.redhat.com/show_bug.cgi?id=1633867

 app/models/manageiq/providers/vmware/infra_manager/provision/cloning.rb            | 13 +-
 app/models/manageiq/providers/vmware/infra_manager/provision/configuration/disk.rb | 45 +-
 spec/models/manageiq/providers/vmware/infra_manager/provision_spec.rb              | 45 +-
 3 files changed, 77 insertions(+), 26 deletions(-)
Technically this also includes https://github.com/ManageIQ/manageiq-providers-vmware/pull/427, sorry.
New commit detected on ManageIQ/manageiq-providers-vmware/ivanchuk:

https://github.com/ManageIQ/manageiq-providers-vmware/commit/792f59b5d6b81852fb10ed72e071d5928a232534

commit 792f59b5d6b81852fb10ed72e071d5928a232534
Author:     d-m-u <drewuhlmann>
AuthorDate: Thu Aug 1 14:48:41 2019 -0400
Commit:     d-m-u <drewuhlmann>
CommitDate: Thu Aug 1 14:48:41 2019 -0400

    Add note about thick eager zero taking a while to complete

    (cherry picked from commit 2b9772f3dd49e11a5e353bb71800d777ef5ebcfa)

    https://bugzilla.redhat.com/show_bug.cgi?id=1633867

 content/miq_dialogs/miq_provision_vmware_cluster_dialogs_template.yaml  | 2 +
 content/miq_dialogs/miq_provision_vmware_dialogs_clone_to_template.yaml | 2 +
 content/miq_dialogs/miq_provision_vmware_dialogs_clone_to_vm.yaml       | 2 +
 content/miq_dialogs/miq_provision_vmware_dialogs_template.yaml          | 2 +
 content/miq_dialogs/miq_provision_vmware_folder_dialogs_template.yaml   | 2 +
 5 files changed, 10 insertions(+)
I tried all the options, and on the VMware side each disk was provisioned with the format I picked on the CFME side.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:4199