Bug 1538597
Summary: Cannot add new disk to VM when using image-based provisioning
Product: Red Hat Satellite
Component: Compute Resources - VMWare
Status: CLOSED ERRATA
Severity: urgent
Priority: urgent
Version: 6.3.0
Target Milestone: 6.4.0
Target Release: Unused
Hardware: x86_64
OS: Linux
URL: https://projects.theforeman.org/issues/22315
Reporter: Sanket Jagtap <sjagtap>
Assignee: Chris Roberts <chrobert>
QA Contact: Sanket Jagtap <sjagtap>
CC: aflierl, bkearney, b.prins, chrobert, ehelms, fcami, jyejare, ktordeur, mhulan, mlinden, mmccune, pcreech, satellite6-bugs, sjagtap, suarora, vijsingh
Keywords: Regression, Triaged, UserExperience
Doc Type: Known Issue
Doc Text: When using image-based provisioning against VMWare, attempting to add additional storage to the new host returns an error.
Type: Bug
Last Closed: 2018-10-16 19:14:09 UTC
Bug Depends On: 1602289
Description (Sanket Jagtap, 2018-01-25 12:08:53 UTC)
A workaround might be to create the template in VMware with the additional disks already attached, so the number of disks does not have to be changed. Provisioning with a single disk worked fine for me. I'd say Priority/Severity should be lowered; VMware provisioning in general works.

6.2:
tfm-rubygem-fog-vsphere-0.6.3-1.el7sat.noarch
tfm-rubygem-fog-core-1.36.0-1.el7sat.noarch
foreman-1.11.0.86-1.el7sat.noarch
tfm-rubygem-rbvmomi-1.8.2-3.el7sat.noarch

6.3:
tfm-rubygem-fog-vsphere-1.7.0-1.el7sat.noarch
tfm-rubygem-fog-core-1.42.0-1.el7sat.noarch
foreman-1.15.6.36-1.el7sat.noarch
tfm-rubygem-rbvmomi-1.10.0-1.el7sat.noarch

So I isolated it to fog-vsphere: I took a 6.3 system, removed fog-vsphere 1.7 and installed the 6.2 fog-vsphere 0.6, and it works fine. I will start looking for the commit that broke it; at least we know it is not Foreman itself.

2018-05-09 15:19:04 42912ab0 [app] [I] Parameters: {"utf8"=>"✓", "authenticity_token"=>"Z+lVW6VhCDp329BdMiO7B042bjZM7F7pxEUZcigJSf0CoYwLKMut72jR01jNG6fC5m5a1Gz1KL35QhGrTNvaSw==", "host"=>{"name"=>"janis-zellous", "organization_id"=>"1", "location_id"=>"2", "hostgroup_id"=>"1", "compute_resource_id"=>"1", "content_facet_attributes"=>{"lifecycle_environment_id"=>"1", "content_view_id"=>"2", "content_source_id"=>"1"}, "puppetclass_ids"=>[""], "managed"=>"true", "progress_report_id"=>"[FILTERED]", "type"=>"Host::Managed", "interfaces_attributes"=>{"0"=>{"_destroy"=>"0", "type"=>"Nic::Managed", "mac"=>"", "identifier"=>"", "name"=>"janis-zellous", "domain_id"=>"1", "subnet_id"=>"1", "ip"=>"10.8.105.190", "ip6"=>"", "managed"=>"1", "primary"=>"1", "provision"=>"1", "execution"=>"1", "virtual"=>"0", "tag"=>"", "attached_to"=>"", "compute_attributes"=>{"type"=>"VirtualVmxnet3", "network"=>"network-106"}}}, "compute_attributes"=>{"cpus"=>"1", "corespersocket"=>"1", "memory_mb"=>"2048", "firmware"=>"bios", "cluster"=>"Satellite_Engineering", "resource_pool"=>"Resources", "path"=>"/Datacenters/RH_Engineering/vm/Toledo", "guest_id"=>"rhel7_64Guest", "scsi_controller_type"=>"VirtualLsiLogicController", "hardware_version"=>"Default", "memoryHotAddEnabled"=>"0", "cpuHotAddEnabled"=>"0", "add_cdrom"=>"0", "start"=>"1", "annotation"=>"", "volumes_attributes"=>{"0"=>{"_delete"=>"", "storage_pod"=>"", "datastore"=>"Local-Ironforge", "name"=>"Hard disk", "size_gb"=>"50", "thin"=>"true", "eager_zero"=>"false", "mode"=>"persistent"}, "1525893523219"=>{"_delete"=>"", "storage_pod"=>"", "datastore"=>"Local-Ironforge", "name"=>"Hard disk", "size_gb"=>"20", "thin"=>"true", "eager_zero"=>"false", "mode"=>"persistent"}}, "image_id"=>"Templates/RHEL7_ENG-Template"}, "architecture_id"=>"1", "operatingsystem_id"=>"4", "provision_method"=>"image", "build"=>"1", "medium_id"=>"11", "ptable_id"=>"61", "pxe_loader"=>"PXELinux BIOS", "disk"=>"", "root_pass"=>"[FILTERED]", "is_owned_by"=>"3-Users", "enabled"=>"1", "model_id"=>"", "comment"=>"", "overwrite"=>"false"}, "capabilities"=>"build image new_volume bootdisk", "provider"=>"Vmware", "media_selector"=>"install_media", "bare_metal_capabilities"=>"build"}

Download the patch and scp it to the Red Hat Satellite server. To apply the patch, do the following:

# katello-service stop
# tar -xvf fog-vsphere-1.7.0.tar
# mv /opt/theforeman/tfm/root/usr/share/gems/gems/fog-vsphere-1.7.0 /tmp
# mv fog-vsphere-1.7.0 /opt/theforeman/tfm/root/usr/share/gems/gems/
# chown root:root /opt/theforeman/tfm/root/usr/share/gems/gems/fog-vsphere-1.7.0
# restorecon -R /opt/theforeman/tfm/root/usr/share/gems/gems/fog-vsphere-1.7.0   (if using SELinux)
# katello-service start

This is not an official patch! It is just a patch, with testing, that appears to have resolved the issue.

Created attachment 1436805 [details]
patch
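The manual gem-swap above can be wrapped in a small helper. This is a hypothetical sketch, not part of the attached patch: the `swap_gem_dir` function and its parameter names are my own, only the gem paths come from the comment above.

```shell
#!/bin/sh
# Hypothetical helper: back up the installed gem directory and drop a
# patched copy in its place, parametrized so it can be exercised outside
# a Satellite box.
swap_gem_dir() {
    gems_dir=$1    # e.g. /opt/theforeman/tfm/root/usr/share/gems/gems
    new_dir=$2     # extracted patched gem tree, e.g. ./fog-vsphere-1.7.0
    backup_dir=$3  # where the old copy goes, e.g. /tmp
    name=$(basename "$new_dir")
    # Move the currently installed copy aside, if present
    if [ -d "$gems_dir/$name" ]; then
        mv "$gems_dir/$name" "$backup_dir/"
    fi
    # Install the patched copy
    mv "$new_dir" "$gems_dir/"
}
```

On a live system you would still run `katello-service stop` first, then `chown -R root:root` and `restorecon -R` on the new directory, then `katello-service start`, exactly as in the steps above.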
Created attachment 1437737 [details]
hotfix
Download the hotfix rpm to /root
# katello-service stop
# yum localupdate tfm-rubygem-fog-vsphere-1.7.0-2.HOTFIXRBBZ1538597.el7sat.noarch.rpm
# katello-service start
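After applying the hotfix rpm, it may be worth confirming that the installed package really is the hotfix build. This is a hypothetical check of my own, not part of the hotfix; it simply matches the `HOTFIXRBBZ1538597` tag in the package NVR.

```shell
#!/bin/sh
# Hypothetical check: succeed only if the given package NVR carries the
# BZ 1538597 hotfix tag.
is_hotfix_nvr() {
    case "$1" in
        *HOTFIXRBBZ1538597*) return 0 ;;
        *) return 1 ;;
    esac
}

# On a live Satellite (assumes rpm is available):
#   is_hotfix_nvr "$(rpm -q tfm-rubygem-fog-vsphere)" && echo "hotfix installed"
```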
This bug is blocked on the VMware image-based provisioning bug 1602289.

Build: Satellite 6.4.0

I was able to add a new disk while doing image-based provisioning under a single SCSI controller. If a new SCSI controller is added, we get an error; that is tracked separately in https://bugzilla.redhat.com/show_bug.cgi?id=1615733. Verifying this, as the original issue is resolved.

Created attachment 1475758 [details]
image add disks
Created attachment 1475759 [details]
Disks added to Image
The image has only one disk; I added another and provisioning was successful.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2018:2927