Red Hat Satellite engineering is moving the tracking of its product development work on Satellite to Red Hat Jira (issues.redhat.com). If you are a Red Hat customer, please continue to file support cases via the Red Hat customer portal. Otherwise, please head to the "Satellite project" in Red Hat Jira and file new tickets there. Individual Bugzilla bugs will be migrated starting at the end of May. If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user-management inquiry; the e-mail creates a ServiceNow ticket with Red Hat. Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and tagged with "MigratedToJIRA" in "Keywords". The link to the successor Jira issue will be found under "Links", will have a small "two-footprint" icon next to it, and will direct you to the "Satellite project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/SAT-XXXX", where each "X" is a digit). The same link will be available in a blue banner at the top of the page informing you that the bug has been migrated.
Bug 1263514 - Deleting a compute resource/profile does not cascade delete ids from hosts table
Summary: Deleting a compute resource/profile does not cascade delete ids from hosts table
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Infrastructure
Version: Nightly
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: Unspecified
Assignee: satellite6-bugs
QA Contact:
URL: http://projects.theforeman.org/issues...
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-09-16 03:38 UTC by Chris Roberts
Modified: 2019-09-26 18:42 UTC (History)
6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-09-12 20:49:44 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Foreman Issue Tracker 12982 0 Normal New Deleting a compute profile does not cascade delete ids from hosts table 2020-10-22 03:26:44 UTC

Description Chris Roberts 2015-09-16 03:38:57 UTC
Description of problem:
When a host is dissociated from a compute resource and the compute profile/resource is then deleted, the compute_resource_id and compute_profile_id values are left behind in the hosts table. This causes a foreign-key (FK) error in PostgreSQL when adding or editing entries on the compute resource and compute profile pages.

Version-Release number of selected component (if applicable):
6.0/6.1

How reproducible:


Steps to Reproduce:
1. Install Satellite 6.0/6.1 and set up a compute resource and compute profile.
2. Build a host with the compute profile/resource.
3. Delete the compute profile/resource.
4. Try to add or modify a compute resource/profile.

Actual results:

PGError
ERROR: update or delete on table "compute_profiles" violates foreign key constraint "hosts_compute_profile_id_fk" on table "hosts" DETAIL: Key (id)=(9) is still referenced from table "hosts". 
app/controllers/compute_profiles_controller.rb:37:in `destroy'
app/models/concerns/foreman/thread_session.rb:33:in `clear_thread'
lib/middleware/catch_json_parse_errors.rb:9:in `call'

PGError
ERROR: update or delete on table "compute_resources" violates foreign key constraint "hosts_compute_resource_id_fk" on table "hosts" DETAIL: Key (id)=(3) is still referenced from table "hosts". 
app/controllers/compute_resources_controller.rb:68:in `destroy'
app/models/concerns/foreman/thread_session.rb:33:in `clear_thread'
lib/middleware/catch_json_parse_errors.rb:9:in `call'
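The failure mode behind both tracebacks can be illustrated with a small in-memory sketch (plain Ruby, no Rails; all names here are hypothetical stand-ins): the database refuses to delete a compute resource row while a host row still references its id, which is exactly what the PostgreSQL constraint above enforces.

```ruby
# Minimal in-memory analogue of the foreign-key check (illustrative only).
Host = Struct.new(:id, :compute_resource_id)

class ForeignKeyError < StandardError; end

# Mimics PostgreSQL rejecting the DELETE while hosts still reference the row.
def delete_compute_resource(resource_id, hosts)
  if hosts.any? { |h| h.compute_resource_id == resource_id }
    raise ForeignKeyError,
          %(Key (id)=(#{resource_id}) is still referenced from table "hosts")
  end
  :deleted
end

hosts = [Host.new(44, 3)]
begin
  delete_compute_resource(3, hosts)
rescue ForeignKeyError => e
  puts e.message  # the same situation the PGError above reports
end
```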


Expected results:
A cascade delete that sets the compute_resource_id and compute_profile_id fields in the hosts table to nil.
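At the schema level this corresponds to an ON DELETE SET NULL foreign key (in Rails migration terms, roughly `add_foreign_key :hosts, :compute_resources, on_delete: :nullify`, assuming a Rails version that supports the option). A plain-Ruby sketch of the desired semantics, with hypothetical names and no Rails required:

```ruby
# In-memory illustration of cascade-nullify: deleting a compute resource
# clears the id on every host that referenced it, instead of erroring.
Host = Struct.new(:id, :compute_resource_id, :compute_profile_id)

def delete_compute_resource(resource_id, hosts)
  hosts.each do |h|
    h.compute_resource_id = nil if h.compute_resource_id == resource_id
  end
  :deleted
end

hosts = [Host.new(44, 3, 9), Host.new(45, 2, nil)]
delete_compute_resource(3, hosts)
hosts.map(&:compute_resource_id)  # => [nil, 2]
```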

Additional info:

Steps I used to do a work-around:

First, set the current user for authorization:
irb(main):001:0> User.current = User.first
2015-08-29 17:15:30 [D]   User Load (0.3ms)  SELECT "users".* FROM "users" ORDER BY firstname LIMIT 1
=> #<User id: 3, login: "hbcdigital", firstname: "Admin", lastname: "User", mail: "hbcdigitalunix", admin: true, last_login_on: "2015-08-26 19:54:31", auth_source_id: 1, created_at: "2015-04-28 20:31:32", updated_at: "2015-08-26 19:54:31", password_hash: "966ce2a137f20279ac685b3ac66e156c2d57a341", password_salt: "ebe67ddf59d3fd59f6c0be17828f6b7e12db7bb6", locale: nil, helptips_enabled: true, page_size: 25, disabled: false, preferences: #<struct ActiveRecord::AttributeMethods::Serialization::Attribute coder=#<ActiveRecord::Coders::YAMLColumn:0x000000093d1bd0 @object_class=Hash>, value="---\n:dashboard:\n  errata:\n    :page_size: 30\n", state=:serialized>, remote_id: "hbcdigital-1a03a276", avatar_hash: nil, default_organization_id: 1, default_location_id: 2, lower_login: "hbcdigital", mail_enabled: true>

Next, look up the host by the id returned from the PostgreSQL query and assign it to a variable:

irb(main):002:0> host = Host.find(44)
2015-08-29 17:15:42 [D]   Host::Managed Load (1.4ms)  SELECT "hosts".* FROM "hosts" WHERE "hosts"."type" IN ('Host::Managed') AND "hosts"."id" = $1 LIMIT 1  [["id", 44]]
=> #<Host::Managed id: 44, name: "hd3qoms08lx.digital.hbc.com", ip: "10.130.216.78", last_compile: nil, last_freshcheck: nil, last_report: nil, updated_at: "2015-07-31 15:34:35", source_file_id: nil, created_at: "2015-06-02 14:10:20", mac: "00:50:56:9f:4d:b7", root_pass: "$1$FzxLojwn$AFFJeSM4iKaRq5DA4QuaU1", serial: nil, puppet_status: 0, domain_id: 1, architecture_id: 1, operatingsystem_id: 4, environment_id: 1, subnet_id: 2, ptable_id: 14, medium_id: 7, build: false, comment: "", disk: "", installed_at: "2015-06-02 14:23:11", model_id: nil, hostgroup_id: nil, owner_id: 3, owner_type: "User", enabled: true, puppet_ca_proxy_id: 1, managed: true, use_image: nil, image_file: nil, uuid: "501fa5c1-f9f1-e50f-ee04-7b93a99107d2", compute_resource_id: 3, puppet_proxy_id: 1, certname: nil, image_id: nil, organization_id: 1, location_id: 3, type: "Host::Managed", otp: nil, realm_id: nil, compute_profile_id: nil, provision_method: "build", content_source_id: 1, primary_interface: nil, grub_pass: "", discovery_rule_id: nil, content_view_id: nil, lifecycle_environment_id: nil>
Set the compute resource id to nil (note: assign the actual nil, not the string "nil" or an empty string):

irb(main):003:0> host.compute_resource_id = nil

Now to save my changes:

irb(main):011:0> host.save

Comment 4 Brad Buckingham 2016-01-04 15:48:12 UTC
Created redmine issue http://projects.theforeman.org/issues/12982 from this bug

Comment 5 Bryan Kearney 2016-01-04 17:01:18 UTC
Upstream bug component is Provisioning

Comment 6 Shimon Shtein 2016-03-20 18:01:10 UTC
What behavior is desired?
I can prevent the action from actually removing the compute profile/resource until all hosts are disassociated from it.
If I try to disassociate hosts automatically, we run into a problem when the user is authorized to delete a resource but is not authorized to see all the hosts that use it (for example, hosts that belong to a different org/loc). In that case I would either be changing objects (hosts) that the user is not authorized to change, or I would still be unable to complete the action. The same issue exists for any resource connected to a host, such as a host group, Puppet environment, etc.

IMHO, the best action here would be to validate and reject the delete while hosts are still connected to the resource.
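In Rails this guard is typically a `before_destroy` callback or a `has_many :hosts, dependent: :restrict_with_error` association. The logic can be sketched in plain Ruby without ActiveRecord (class and message names are hypothetical):

```ruby
# Plain-Ruby sketch of the "validate and reject while hosts are attached"
# approach. The check runs against all hosts, not just those the current
# user can see, which sidesteps the authorization problem described above.
class ComputeProfile
  attr_reader :id, :errors

  def initialize(id, hosts)
    @id = id
    @hosts = hosts   # every host in the system, visible to the user or not
    @errors = []
  end

  # Refuses destruction while any host still references this profile.
  def destroy
    if @hosts.any? { |h| h[:compute_profile_id] == @id }
      @errors << "compute profile is in use by one or more hosts"
      return false
    end
    true
  end
end

hosts = [{ id: 44, compute_profile_id: 9 }]
profile = ComputeProfile.new(9, hosts)
profile.destroy        # => false
profile.errors.first   # => "compute profile is in use by one or more hosts"
```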

Comment 8 Chris Roberts 2016-04-12 14:41:56 UTC
Shimon,

I agree with this action:

IMHO, the best action here will be validation and reject while there are hosts connected to it.

- Chris

Comment 9 Shimon Shtein 2016-04-24 13:03:47 UTC
Looks like https://bugzilla.redhat.com/show_bug.cgi?id=1207972 should fix this issue too. I can see that in upstream it's already fixed.

Comment 12 Bryan Kearney 2016-08-10 19:09:29 UTC
Upstream bug component is Infrastructure

Comment 13 Bryan Kearney 2016-08-10 20:10:29 UTC
Upstream bug component is Provisioning

Comment 14 Bryan Kearney 2016-08-10 22:10:17 UTC
Upstream bug component is Infrastructure

Comment 15 Bryan Kearney 2016-08-18 16:09:00 UTC
Upstream bug component is Infrastructure

