Bug 1597208
| Summary: | Partition table not set for host when using hammer cli, provisioning method bootdisk and host group | | |
|---|---|---|---|
| Product: | Red Hat Satellite | Reporter: | Ron van der Wees <rvdwees> |
| Component: | Host Group | Assignee: | Lukas Zapletal <lzap> |
| Status: | CLOSED ERRATA | QA Contact: | Mirek Długosz <mzalewsk> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 6.4 | CC: | aagrawal, c.mcgregor, dchaudha, ehelms, inecas, kgaikwad, lzap, mhulan, roarora, rvdwees |
| Target Milestone: | 6.5.0 | Keywords: | Regression, Triaged |
| Target Release: | Unused | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | foreman-1.20.1.22-1 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-05-14 12:37:33 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
|
Description
Ron van der Wees
2018-07-02 09:00:22 UTC
Hello, your partition table does not appear to be in the same taxonomy as your host/template. Check the Organization/Location assignment and then it will work. Please confirm so we can close the BZ.

Unable to reproduce here. Please do the following:
[root@next ~]# foreman-rake console
irb(main):001:0> h = Host.find_by_name("test1.nat.lan")
=> #<Host::Managed id: 5, name: "test1.nat.lan", last_compile: nil, last_report: nil, updated_at: "2018-07-03 08:58:24", created_at: "2018-07-03 08:58:24", root_pass: "$5$hAjanIOwTu2HtLa1$UQb3XObfOztLoydPCt1LPpxMXfV0jN...", architecture_id: 1, operatingsystem_id: 1, environment_id: 1, ptable_id: 94, medium_id: 10, build: true, comment: nil, disk: nil, installed_at: nil, model_id: nil, hostgroup_id: 11, owner_id: 4, owner_type: "User", enabled: true, puppet_ca_proxy_id: 1, managed: true, use_image: nil, image_file: nil, uuid: nil, compute_resource_id: nil, puppet_proxy_id: 1, certname: nil, image_id: nil, organization_id: 1, location_id: 2, type: "Host::Managed", otp: nil, realm_id: nil, compute_profile_id: nil, provision_method: "build", grub_pass: "$5$hAjanIOwTu2HtLa1$UQb3XObfOztLoydPCt1LPpxMXfV0jN...", discovery_rule_id: nil, content_view_id: nil, lifecycle_environment_id: nil, global_status: 0, lookup_value_matcher: "fqdn=test1.nat.lan", pxe_loader: nil, openscap_proxy_id: nil>
irb(main):002:0> h.ptable
=> #<Ptable id: 94, name: "Kickstart default", template: "<%#\nkind: ptable\nname: Kickstart default\nmodel: Pt...", snippet: false, template_kind_id: nil, created_at: "2018-06-28 10:23:17", updated_at: "2018-06-28 10:23:17", locked: false, default: true, vendor: nil, type: "Ptable", os_family: "Redhat", job_category: "Miscellaneous", provider_type: nil, description_format: nil, execution_timeout_interval: nil>
irb(main):003:0> h.hostgroup.ptable
=> #<Ptable id: 94, name: "Kickstart default", template: "<%#\nkind: ptable\nname: Kickstart default\nmodel: Pt...", snippet: false, template_kind_id: nil, created_at: "2018-06-28 10:23:17", updated_at: "2018-06-28 10:23:17", locked: false, default: true, vendor: nil, type: "Ptable", os_family: "Redhat", job_category: "Miscellaneous", provider_type: nil, description_format: nil, execution_timeout_interval: nil>
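The taxonomy hint from the first reply can be illustrated with a self-contained sketch. Note the `HostStub`/`PtableStub` Structs and the `visible_to?` method below are plain-Ruby stand-ins for illustration, not the actual Foreman model API:

```ruby
# Stand-in models: a template is resolvable for a host only when it is
# assigned to both the host's organization and the host's location.
HostStub   = Struct.new(:name, :organization, :location)
PtableStub = Struct.new(:name, :organizations, :locations) do
  def visible_to?(host)
    organizations.include?(host.organization) &&
      locations.include?(host.location)
  end
end

host = HostStub.new("test1.nat.lan", "Default Organization", "Default Location")
pt   = PtableStub.new("Kickstart default",
                      ["Default Organization"], ["Default Location"])

puts pt.visible_to?(host)   # true only when org and location both match
```

If the equivalent check in a real `foreman-rake console` came back false, the "partition table not in taxonomy" advice above would apply.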
I again failed to reproduce; it works fine for me. A couple of items for you:

1) Can you check the Partition Template's Operating System Family, and also its Location and Organization? Is this all set correctly?
2) Can you help isolate bootdisk and VMware: create a new host with the same details but skip the bootdisk method and compute resource, give it a dummy MAC address, and then try to render the kickstart again.
3) Do you see the error during template preview, or when the system actually fetches the kickstart?

Thanks

For the record, there was a bug I hit while reproducing this; a colleague reported it as https://bugzilla.redhat.com/show_bug.cgi?id=1679225

Observation: there is a slight change in the JSON coming out of hammer. With the bootdisk method:
2019-02-27T13:48:34 [I|app|926e0] Parameters: {"location_id"=>2, "organization_id"=>1, "host"=>{"name"=>"lzap-test-1", "location_id"=>2, "organization_id"=>1, "ip"=>"192.168.20.71", "puppetclass_ids"=>[], "medium_id"=>10, "compute_resource_id"=>1, "hostgroup_id"=>3, "build"=>true, "enabled"=>true, "provision_method"=>"bootdisk", "managed"=>true, "compute_attributes"=>{"volumes_attributes"=>{}}, "content_facet_attributes"=>{}, "subscription_facet_attributes"=>{}, "overwrite"=>true, "interfaces_attributes"=>[]}, "apiv"=>"v2"}
Without:
2019-02-27T14:28:32 [I|app|ea103] Processing by Api::V2::HostsController#create as JSON
2019-02-27T14:28:32 [I|app|ea103] Parameters: {"location_id"=>2, "organization_id"=>1, "host"=>{"name"=>"lzap-test-2", "location_id"=>2, "organization_id"=>1, "ip"=>"192.168.20.71", "puppetclass_ids"=>[], "medium_id"=>10, "compute_resource_id"=>1, "hostgroup_id"=>3, "build"=>true, "enabled"=>true, "managed"=>true, "compute_attributes"=>{"volumes_attributes"=>{}}, "content_facet_attributes"=>{}, "subscription_facet_attributes"=>{}, "overwrite"=>true, "interfaces_attributes"=>[]}, "apiv"=>"v2"}
The difference is: "volumes_attributes"=>{}
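To confirm which top-level keys actually differ between the two logged parameter hashes, a plain-Ruby diff can be run (a standalone sketch; the two hashes below are trimmed down from the log lines above to the relevant keys):

```ruby
# Trimmed "host" parameter hashes from the two log entries above.
with_bootdisk = {
  "name"               => "lzap-test-1",
  "provision_method"   => "bootdisk",
  "compute_attributes" => { "volumes_attributes" => {} },
}
without_bootdisk = {
  "name"               => "lzap-test-2",
  "compute_attributes" => { "volumes_attributes" => {} },
}

# Keys whose values differ, or that exist in only one of the hashes.
def hash_diff(a, b)
  (a.keys | b.keys).reject { |k| a[k] == b[k] }
end

puts hash_diff(with_bootdisk, without_bootdisk).inspect
# => ["name", "provision_method"]
```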
Found the root cause, thanks for the reproducer.

Upstream bug assigned to lzap

Moving this bug to POST for triage into Satellite 6 since the upstream issue https://projects.theforeman.org/issues/22684 has been resolved.

MR ready

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:1222