Description of problem:
Build: Satellite 6.3.0 snap 33

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Provision a VM via image-based provisioning
2. Add more disks to the VM and click Submit

Actual results:
Unable to save
Failed to create a compute vmware (VMware) instance toby-hassian.capqe.lab.eng.rdu2.redhat.com: InvalidRequest: Error parsing string 1 as enum type VirtualDeviceConfigSpecOperation
while parsing serialized value of type vim.vm.device.VirtualDeviceSpec.Operation at line 1, column 2794
while parsing property "operation" of static type VirtualDeviceConfigSpecOperation
while parsing serialized DataObject of type vim.vm.device.VirtualDeviceSpec at line 1, column 2745
while parsing property "deviceChange" of static type ArrayOfVirtualDeviceConfigSpec
while parsing serialized DataObject of type vim.vm.ConfigSpec at line 1, column 592
while parsing property "config" of static type VirtualMachineConfigSpec
while parsing serialized DataObject of type vim.vm.CloneSpec at line 1, column 346
while parsing call information for method CloneVM_Task at line 1, column 177
while parsing SOAP body at line 1, column 167
while parsing SOAP envelope at line 1, column 0
while parsing HTTP request for method clone on object of type vim.VirtualMachine at line 1, column 0

Expected results:
The VM should be provisioned.

Additional info:
I see a relevant upstream issue: https://projects.theforeman.org/issues/18181
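The vCenter error above says it could not parse the string "1" as the enum type VirtualDeviceConfigSpecOperation: fog-vsphere serialized a numeric value into the "operation" property of the disk's VirtualDeviceSpec, where the vSphere API only accepts the enum strings "add", "edit", or "remove". A minimal Ruby sketch of that server-side check (a hypothetical helper for illustration, not fog-vsphere's actual code):

```ruby
# Valid values for vim.vm.device.VirtualDeviceSpec.Operation per the
# vSphere API. The failing CloneVM_Task request serialized the integer 1
# in this property instead of one of these strings.
VALID_OPERATIONS = %w[add edit remove].freeze

# Hypothetical helper mimicking the validation vCenter performs
# when it deserializes the SOAP request.
def valid_device_operation?(op)
  VALID_OPERATIONS.include?(op)
end

puts valid_device_operation?('add') # what a correct deviceChange entry sends
puts valid_device_operation?(1)     # what the broken request sent
```

Any fix in fog-vsphere would need to emit one of those three strings when building the deviceChange array for the added disk.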
A workaround might be to create the template in VMware with the additional disks already attached, so the number of disks does not have to be changed at provisioning time. Provisioning with a single disk worked fine for me. I'd say Priority/Severity should be lowered; VMware provisioning in general works.
6.2:
tfm-rubygem-fog-vsphere-0.6.3-1.el7sat.noarch
tfm-rubygem-fog-core-1.36.0-1.el7sat.noarch
foreman-1.11.0.86-1.el7sat.noarch
tfm-rubygem-rbvmomi-1.8.2-3.el7sat.noarch

6.3:
tfm-rubygem-fog-vsphere-1.7.0-1.el7sat.noarch
tfm-rubygem-fog-core-1.42.0-1.el7sat.noarch
foreman-1.15.6.36-1.el7sat.noarch
tfm-rubygem-rbvmomi-1.10.0-1.el7sat.noarch
I isolated it to fog-vsphere: I took a 6.3 install, removed fog-vsphere 1.7, installed the 6.2 fog-vsphere 0.6, and it works fine. I will start looking for the commit that broke it; at least we know it's not Foreman.
2018-05-09 15:19:04 42912ab0 [app] [I] Parameters: {"utf8"=>"✓", "authenticity_token"=>"Z+lVW6VhCDp329BdMiO7B042bjZM7F7pxEUZcigJSf0CoYwLKMut72jR01jNG6fC5m5a1Gz1KL35QhGrTNvaSw==", "host"=>{"name"=>"janis-zellous", "organization_id"=>"1", "location_id"=>"2", "hostgroup_id"=>"1", "compute_resource_id"=>"1", "content_facet_attributes"=>{"lifecycle_environment_id"=>"1", "content_view_id"=>"2", "content_source_id"=>"1"}, "puppetclass_ids"=>[""], "managed"=>"true", "progress_report_id"=>"[FILTERED]", "type"=>"Host::Managed", "interfaces_attributes"=>{"0"=>{"_destroy"=>"0", "type"=>"Nic::Managed", "mac"=>"", "identifier"=>"", "name"=>"janis-zellous", "domain_id"=>"1", "subnet_id"=>"1", "ip"=>"10.8.105.190", "ip6"=>"", "managed"=>"1", "primary"=>"1", "provision"=>"1", "execution"=>"1", "virtual"=>"0", "tag"=>"", "attached_to"=>"", "compute_attributes"=>{"type"=>"VirtualVmxnet3", "network"=>"network-106"}}}, "compute_attributes"=>{"cpus"=>"1", "corespersocket"=>"1", "memory_mb"=>"2048", "firmware"=>"bios", "cluster"=>"Satellite_Engineering", "resource_pool"=>"Resources", "path"=>"/Datacenters/RH_Engineering/vm/Toledo", "guest_id"=>"rhel7_64Guest", "scsi_controller_type"=>"VirtualLsiLogicController", "hardware_version"=>"Default", "memoryHotAddEnabled"=>"0", "cpuHotAddEnabled"=>"0", "add_cdrom"=>"0", "start"=>"1", "annotation"=>"", "volumes_attributes"=>{"0"=>{"_delete"=>"", "storage_pod"=>"", "datastore"=>"Local-Ironforge", "name"=>"Hard disk", "size_gb"=>"50", "thin"=>"true", "eager_zero"=>"false", "mode"=>"persistent"}, "1525893523219"=>{"_delete"=>"", "storage_pod"=>"", "datastore"=>"Local-Ironforge", "name"=>"Hard disk", "size_gb"=>"20", "thin"=>"true", "eager_zero"=>"false", "mode"=>"persistent"}}, "image_id"=>"Templates/RHEL7_ENG-Template"}, "architecture_id"=>"1", "operatingsystem_id"=>"4", "provision_method"=>"image", "build"=>"1", "medium_id"=>"11", "ptable_id"=>"61", "pxe_loader"=>"PXELinux BIOS", "disk"=>"", "root_pass"=>"[FILTERED]", "is_owned_by"=>"3-Users", 
"enabled"=>"1", "model_id"=>"", "comment"=>"", "overwrite"=>"false"}, "capabilities"=>"build image new_volume bootdisk", "provider"=>"Vmware", "media_selector"=>"install_media", "bare_metal_capabilities"=>"build"}
Download the patch and scp it to the Red Hat Satellite Server. To apply the patch, do the following:

# katello-service stop
# tar -xvf fog-vsphere-1.7.0.tar
# mv /opt/theforeman/tfm/root/usr/share/gems/gems/fog-vsphere-1.7.0 /tmp
# mv fog-vsphere-1.7.0 /opt/theforeman/tfm/root/usr/share/gems/gems/
# chown root:root /opt/theforeman/tfm/root/usr/share/gems/gems/fog-vsphere-1.7.0
# restorecon -R /opt/theforeman/tfm/root/usr/share/gems/gems/fog-vsphere-1.7.0 (if using SELinux)
# katello-service start

This is not an official patch! It is just a patch that, in testing, appears to have resolved the issue.
Created attachment 1436805 [details] patch
Created attachment 1437737 [details] hotfix

Download the hotfix rpm to /root:

# katello-service stop
# yum localupdate tfm-rubygem-fog-vsphere-1.7.0-2.HOTFIXRBBZ1538597.el7sat.noarch.rpm
# katello-service start
This bug is blocked by the VMware image-based provisioning bug 1602289.
Build: Satellite 6.4.0

I was able to add a new disk during image-based provisioning under a single SCSI controller. If a new SCSI controller is added, we get an error; that is tracked separately in https://bugzilla.redhat.com/show_bug.cgi?id=1615733. Verifying this, as the original issue is resolved.
Created attachment 1475758 [details] image add disks
Created attachment 1475759 [details] Disks added to Image

The image has only one disk; I added another, and provisioning was successful.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2018:2927