Bug 1597208 - Partition table not set for host when using hammer cli, provisioning method bootdisk and host group
Summary: Partition table not set for host when using hammer cli, provisioning method bootdisk and host group
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Host Group
Version: 6.4
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: 6.5.0
Assignee: Lukas Zapletal
QA Contact: Mirek Długosz
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-07-02 09:00 UTC by Ron van der Wees
Modified: 2019-11-05 22:30 UTC
CC List: 10 users

Fixed In Version: foreman-1.20.1.22-1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-05-14 12:37:33 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Foreman Issue Tracker 22684 0 Normal Closed Bootdisk method does not inherit disk layout from hostgroup 2020-03-11 13:10:19 UTC
Red Hat Knowledge Base (Solution) 3947841 0 Configure None Unable to provision a host using bootdisk method through hammer on Red Hat Satellite 6.4. 2019-02-28 11:55:18 UTC
Red Hat Product Errata RHSA-2019:1222 0 None None None 2019-05-14 12:37:41 UTC

Description Ron van der Wees 2018-07-02 09:00:22 UTC
Description of problem:
In bz#1019214 a feature to auto-attach a boot disk for a VMware Compute
Resource was implemented. In bz#1544498 the hammer cli was fixed so that
'bootdisk' can be selected with '--provision-method'.
Since that hammer cli fix, the kickstart can no longer be rendered because
the partition table remains empty for the host.


Version-Release number of selected component (if applicable):
tfm-rubygem-foreman_bootdisk-10.0.2.2-1.fm1_15.el7sat.noarch
satellite-6.3.2-1.el7sat.noarch

How reproducible:
Always

Steps to Reproduce:
1. Create a host group which has a partition table defined
2. Deploy a new host using:

# hammer host create --name "hostname" \
  --organization "MyOrg" \
  --location "Location" \
  --hostgroup "hostgroup" \
  --compute-resource "compute_resource" \
  --provision-method bootdisk \
  --build true \
  --enabled true \
  --managed true \
  --interface "managed=true,primary=true,provision=true,compute_type=VirtualVmxnet3,compute_network=Somenetwork,ip=192.168.72.70" \
  --compute-attributes="cpus=2,corespersocket=2,memory_mb=2048,cluster='Cluster',path='Datacenters/xyz/vm',start=1,guest_id=rhel7_64Guest,scsi_controller_type=ParaVirtualSCSIController,add_cdrom=1,firmware=bios,hardware_version=Default" \
  --volume="size_gb=100G,datastore=Datastore,name=myharddisk,thin=true"


Actual results:
Partition table field remains empty for the host


Expected results:
Correctly rendered kickstart file including the partition table.


Additional info:
The host group uses the 'Kickstart default' partition table. When specifying the
partition table explicitly on the command line (--partition-table "Kickstart default"),
it is set correctly!
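
For reference, the working invocation is simply the reproducer command above with the partition table passed explicitly (all values are the same illustrative ones):

# hammer host create --name "hostname" \
  --organization "MyOrg" \
  --location "Location" \
  --hostgroup "hostgroup" \
  --compute-resource "compute_resource" \
  --provision-method bootdisk \
  --partition-table "Kickstart default" \
  --build true \
  --enabled true \
  --managed true \
  --interface "managed=true,primary=true,provision=true,compute_type=VirtualVmxnet3,compute_network=Somenetwork,ip=192.168.72.70" \
  --compute-attributes="cpus=2,corespersocket=2,memory_mb=2048,cluster='Cluster',path='Datacenters/xyz/vm',start=1,guest_id=rhel7_64Guest,scsi_controller_type=ParaVirtualSCSIController,add_cdrom=1,firmware=bios,hardware_version=Default" \
  --volume="size_gb=100G,datastore=Datastore,name=myharddisk,thin=true"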

In production.log:
2018-06-29 09:13:17 7e2203a0 [app] [I]   Parameters: {"host"=>{"name"=>"hostname", "location_id"=>2, "organization_id"=>3, "compute_resource_id"=>1, "hostgroup_id"=>26, "build"=>true, "enabled"=>true, "provision_method"=>"bootdisk", "managed"=>true, "compute_attributes"=>{"cpus"=>"2", "corespersocket"=>"2", "memory_mb"=>"2048", "cluster"=>"Cluster", "path"=>"Datacenters/xyz/vm", "start"=>"1", "guest_id"=>"rhel7_64Guest", "scsi_controller_type"=>"ParaVirtualSCSIController", "add_cdrom"=>"1", "firmware"=>"bios", "hardware_version"=>"Default", "volumes_attributes"=>{"0"=>{"size_gb"=>"100G", "datastore"=>"Datastore", "name"=>"myharddisk", "thin"=>"true"}}}, "content_facet_attributes"=>{}, "subscription_facet_attributes"=>{}, "overwrite"=>true, "host_parameters_attributes"=>[], "interfaces_attributes"=>[{"managed"=>"true", "primary"=>"true", "provision"=>"true", "ip"=>"192.168.72.70", "compute_attributes"=>{"type"=>"VirtualVmxnet3", "network"=>"Somenetwork"}}]}, "apiv"=>"v2"}
...
2018-06-29 09:13:45 4597fe3c [templates] [I] Rendering template 'MyOrg - Satellite Kickstart Default'
2018-06-29 09:13:45 4597fe3c [app] [W] DEPRECATION WARNING: you are using deprecated @host.params in a template, it will be removed in 1.17. Use host_param instead.
2018-06-29 09:13:45 4597fe3c [app] [I]   Rendered inline template (67.7ms)
2018-06-29 09:13:45 4597fe3c [app] [W] There was an error rendering the MyOrg - Satellite Kickstart Default template: 
 | ActionView::Template::Error: undefined method `layout' for nil:NilClass
 | /usr/share/foreman/app/models/host/managed.rb:315:in `diskLayout'
 | /opt/theforeman/tfm/root/usr/share/gems/gems/safemode-1.3.2/lib/safemode/jail.rb:31:in `method_missing'
 | NxDI - Satellite Kickstart Default:121:in `bind'
 | /opt/theforeman/tfm/root/usr/share/gems/gems/safemode-1.3.2/lib/safemode.rb:51:in `eval'
 | /opt/theforeman/tfm/root/usr/share/gems/gems/safemode-1.3.2/lib/safemode.rb:51:in `eval'
 | /usr/share/foreman/lib/foreman/renderer.rb:56:in `render_safe'
 | /usr/share/foreman/lib/foreman/renderer.rb:175:in `unattended_render'
 | inline template:1:in `_99669a5c6640f65f26f4db9ef2d2b2b0'
 | /opt/rh/rh-ror42/root/usr/share/gems/gems/actionview-4.2.6/lib/action_view/template.rb:145:in `block in render'
 | /opt/rh/rh-ror42/root/usr/share/gems/gems/activesupport-4.2.6/lib/active_support/notifications.rb:166:in `instrument'
 | /opt/rh/rh-ror42/root/usr/share/gems/gems/actionview-4.2.6/lib/action_view/template.rb:333:in `instrument'
 | /opt/rh/rh-ror42/root/usr/share/gems/gems/actionview-4.2.6/lib/action_view/template.rb:143:in `render'
 | /opt/theforeman/tfm/root/usr/share/gems/gems/deface-1.2.0/lib/deface/action_view_extensions.rb:41:in `render'
 | /opt/rh/rh-ror42/root/usr/share/gems/gems/actionview-4.2.6/lib/action_view/renderer/template_renderer.rb:54:in `block (2 levels) in render_template'
 | /opt/rh/rh-ror42/root/usr/share/gems/gems/actionview-4.2.6/lib/action_view/renderer/abstract_renderer.rb:39:in `block in instrument'
 | /opt/rh/rh-ror42/root/usr/share/gems/gems/activesupport-4.2.6/lib/active_support/notifications.rb:164:in `block in instrument'
 | /opt/rh/rh-ror42/root/usr/share/gems/gems/activesupport-4.2.6/lib/active_support/notifications/instrumenter.rb:20:in `instrument'
 | /opt/rh/rh-ror42/root/usr/share/gems/gems/activesupport-4.2.6/lib/active_support/notifications.rb:164:in `instrument'
...
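
For context, diskLayout in managed.rb fails because it ends up calling .layout on a nil partition table. A minimal standalone Ruby sketch of the same failure mode (illustrative only, not the actual Foreman code):

# Illustrates the NoMethodError seen above: when a host has no partition table
# (neither set directly nor inherited from the host group), calling .layout on
# nil aborts template rendering.
class FakeHost
  attr_accessor :ptable          # would be a Ptable object when set correctly
  def disk_layout
    ptable.layout                # raises "undefined method `layout' for nil:NilClass" when ptable is nil
  end
end

host = FakeHost.new              # ptable never assigned, so it is nil
begin
  host.disk_layout
rescue NoMethodError => e
  puts e.message
end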

Comment 3 Lukas Zapletal 2018-07-02 10:04:54 UTC
Hello, your partition table does not appear to be in the same taxonomy (Organization/Location) as your host/template. Check the Organization/Location assignment and then it will work. Please confirm so we can close the BZ.
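
For example, a rough check from foreman-rake console (this assumes the Ptable model exposes the usual organizations/locations associations, like other taxonomy-aware objects):

# foreman-rake console
irb> ptable = Ptable.find_by_name("Kickstart default")
irb> ptable.organizations.pluck(:name)   # should include the host's Organization
irb> ptable.locations.pluck(:name)       # should include the host's Location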

Comment 5 Lukas Zapletal 2018-07-03 09:09:26 UTC
Unable to reproduce here. Please do the following:

[root@next ~]# foreman-rake console

irb(main):001:0> h = Host.find_by_name("test1.nat.lan")
=> #<Host::Managed id: 5, name: "test1.nat.lan", last_compile: nil, last_report: nil, updated_at: "2018-07-03 08:58:24", created_at: "2018-07-03 08:58:24", root_pass: "$5$hAjanIOwTu2HtLa1$UQb3XObfOztLoydPCt1LPpxMXfV0jN...", architecture_id: 1, operatingsystem_id: 1, environment_id: 1, ptable_id: 94, medium_id: 10, build: true, comment: nil, disk: nil, installed_at: nil, model_id: nil, hostgroup_id: 11, owner_id: 4, owner_type: "User", enabled: true, puppet_ca_proxy_id: 1, managed: true, use_image: nil, image_file: nil, uuid: nil, compute_resource_id: nil, puppet_proxy_id: 1, certname: nil, image_id: nil, organization_id: 1, location_id: 2, type: "Host::Managed", otp: nil, realm_id: nil, compute_profile_id: nil, provision_method: "build", grub_pass: "$5$hAjanIOwTu2HtLa1$UQb3XObfOztLoydPCt1LPpxMXfV0jN...", discovery_rule_id: nil, content_view_id: nil, lifecycle_environment_id: nil, global_status: 0, lookup_value_matcher: "fqdn=test1.nat.lan", pxe_loader: nil, openscap_proxy_id: nil>

irb(main):002:0> h.ptable
=> #<Ptable id: 94, name: "Kickstart default", template: "<%#\nkind: ptable\nname: Kickstart default\nmodel: Pt...", snippet: false, template_kind_id: nil, created_at: "2018-06-28 10:23:17", updated_at: "2018-06-28 10:23:17", locked: false, default: true, vendor: nil, type: "Ptable", os_family: "Redhat", job_category: "Miscellaneous", provider_type: nil, description_format: nil, execution_timeout_interval: nil>

irb(main):003:0> h.hostgroup.ptable
=> #<Ptable id: 94, name: "Kickstart default", template: "<%#\nkind: ptable\nname: Kickstart default\nmodel: Pt...", snippet: false, template_kind_id: nil, created_at: "2018-06-28 10:23:17", updated_at: "2018-06-28 10:23:17", locked: false, default: true, vendor: nil, type: "Ptable", os_family: "Redhat", job_category: "Miscellaneous", provider_type: nil, description_format: nil, execution_timeout_interval: nil>
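
If both of those return the partition table, rendering the disk layout from the same session should also work; when the ptable is missing, this is exactly where the NoMethodError from the report appears (hypothetical next step, output not shown):

irb(main):004:0> h.diskLayout
# expected: the rendered partition table template, or
# "undefined method `layout' for nil:NilClass" when h.ptable is nil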

Comment 15 Lukas Zapletal 2019-01-14 10:05:14 UTC
I again failed to reproduce; it works fine for me. A couple of items for you:

1) Can you check the partition table's Operating System family, and also its Location and Organization? Is this all set correctly?

2) Can you help isolate bootdisk and VMware: create a new host with the same details, but skip the bootdisk method and the compute resource, give it a dummy MAC address, and then try to render the kickstart again (a rough example command follows below).

3) Do you see the error during template preview, or when the system actually fetches the kickstart?
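
A rough example of the isolation step from item 2, reusing the values from your reproducer; the MAC address below is just a placeholder:

# hammer host create --name "test-isolate" \
  --organization "MyOrg" \
  --location "Location" \
  --hostgroup "hostgroup" \
  --provision-method build \
  --build true \
  --enabled true \
  --managed true \
  --interface "managed=true,primary=true,provision=true,mac=52:54:00:12:34:56,ip=192.168.72.70"

Then check whether the partition table is set on the new host and whether the kickstart renders.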

Thanks

Comment 19 Lukas Zapletal 2019-02-20 16:12:07 UTC
For the record, there was a bug I hit while reproducing this; a colleague reported it as https://bugzilla.redhat.com/show_bug.cgi?id=1679225

Comment 24 Lukas Zapletal 2019-02-27 09:08:03 UTC
Observation: there is a slight change in the JSON coming out of hammer. With the bootdisk method:

2019-02-27T13:48:34 [I|app|926e0]   Parameters: {"location_id"=>2, "organization_id"=>1, "host"=>{"name"=>"lzap-test-1", "location_id"=>2, "organization_id"=>1, "ip"=>"192.168.20.71", "puppetclass_ids"=>[], "medium_id"=>10, "compute_resource_id"=>1, "hostgroup_id"=>3, "build"=>true, "enabled"=>true, "provision_method"=>"bootdisk", "managed"=>true, "compute_attributes"=>{"volumes_attributes"=>{}}, "content_facet_attributes"=>{}, "subscription_facet_attributes"=>{}, "overwrite"=>true, "interfaces_attributes"=>[]}, "apiv"=>"v2"}

Without:

2019-02-27T14:28:32 [I|app|ea103] Processing by Api::V2::HostsController#create as JSON
2019-02-27T14:28:32 [I|app|ea103]   Parameters: {"location_id"=>2, "organization_id"=>1, "host"=>{"name"=>"lzap-test-2", "location_id"=>2, "organization_id"=>1, "ip"=>"192.168.20.71", "puppetclass_ids"=>[], "medium_id"=>10, "compute_resource_id"=>1, "hostgroup_id"=>3, "build"=>true, "enabled"=>true, "managed"=>true, "compute_attributes"=>{"volumes_attributes"=>{}}, "content_facet_attributes"=>{}, "subscription_facet_attributes"=>{}, "overwrite"=>true, "interfaces_attributes"=>[]}, "apiv"=>"v2"}

The difference is: "volumes_attributes"=>{}

Comment 25 Lukas Zapletal 2019-02-27 09:34:57 UTC
Found the root cause, thanks for the reproducer.

Comment 28 Bryan Kearney 2019-02-27 11:05:25 UTC
Upstream bug assigned to lzap

Comment 30 Bryan Kearney 2019-03-06 13:05:30 UTC
Moving this bug to POST for triage into Satellite 6 since the upstream issue https://projects.theforeman.org/issues/22684 has been resolved.

Comment 31 Lukas Zapletal 2019-03-08 12:51:17 UTC
MR ready

Comment 36 errata-xmlrpc 2019-05-14 12:37:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:1222

