Bug 1402119 - Satellite 6.1.10 Candlepin: Bulk delete content hosts error
Summary: Satellite 6.1.10 Candlepin: Bulk delete content hosts error
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Candlepin
Classification: Community
Component: candlepin
Version: 2.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: ---
Target Release: 2.0
Assignee: candlepin-bugs
QA Contact: Katello QA List
URL:
Whiteboard:
Depends On:
Blocks: 1402122
 
Reported: 2016-12-06 20:23 UTC by Barnaby Court
Modified: 2020-02-14 18:15 UTC
CC: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1402117
Environment:
Last Closed: 2017-02-09 15:01:20 UTC



Description Barnaby Court 2016-12-06 20:23:35 UTC
+++ This bug was initially created as a clone of Bug #1402117 +++

+++ This bug was initially created as a clone of Bug #1390373 +++

Description of problem:

Bulk delete of content hosts, whether selected individually or via bulk selection, leads to the following error.

Detail:

> org.postgresql.util.PSQLException: ERROR: update or delete on table "cp_consumer" violates foreign key constraint "fk_sourcestack_consumer" on table "cp_pool_source_stack"
>   Detail: Key (id)=(148682475755cb28015774e6f18532f8) is still referenced from table "cp_pool_source_stack". at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse:2,157

Version-Release number of selected component (if applicable):

Candlepin packages installed:

[root@cci01-sat6 foreman]# rpm -qa |grep candlepin
candlepin-0.9.49.16-1.el7.noarch
candlepin-scl-rhino-1.7R3-3.el7.noarch
candlepin-scl-1-5.el7.noarch
candlepin-scl-runtime-1-5.el7.noarch
candlepin-scl-quartz-2.1.5-6.el7.noarch
candlepin-tomcat-0.9.49.16-1.el7.noarch
candlepin-common-1.0.22-1.el7.noarch
candlepin-selinux-0.9.49.16-1.el7.noarch
candlepin-guice-3.0-2_redhat_1.el7.noarch

How reproducible:


Steps to Reproduce:
1. Select individual or bulk hosts (here an overcloud HA deployment)
2. Click Bulk Actions in the GUI
3. Click unregister

Actual results:

Unregistration fails.

Expected results:

All selected hosts are unregistered. 

Additional info:

Foreman-debug attached.

--- Additional comment from Francisco Javier Lopez Y Grueber on 2016-10-31 16:56:28 EDT ---

Full trace:
> 
> 2016-10-31 13:57:17,851 [req=9ef736cf-5a29-47ba-81d3-4fcf09e9e665, org=] INFO  org.candlepin.common.filter.LoggingFilter - Request: verb=DELETE, uri=/candlepin/consumers/868dfe31-5768-4060-ac38-d5074ba658a8
> 2016-10-31 13:57:17,943 [req=9ef736cf-5a29-47ba-81d3-4fcf09e9e665, org=CommerzbankAG] INFO  org.candlepin.controller.CandlepinPoolManager - Batch revoking entitlements: 3
> 2016-10-31 13:57:17,958 [req=9ef736cf-5a29-47ba-81d3-4fcf09e9e665, org=CommerzbankAG] INFO  org.candlepin.controller.CandlepinPoolManager - Starting batch delete of pools and entitlements
> 2016-10-31 13:57:17,994 [req=9ef736cf-5a29-47ba-81d3-4fcf09e9e665, org=CommerzbankAG] INFO  org.candlepin.controller.CandlepinPoolManager - All deletes flushed successfully
> 2016-10-31 13:57:18,031 [req=9ef736cf-5a29-47ba-81d3-4fcf09e9e665, org=CommerzbankAG] INFO  org.candlepin.controller.CandlepinPoolManager - Modifier entitlements done.
> 2016-10-31 13:57:18,031 [req=9ef736cf-5a29-47ba-81d3-4fcf09e9e665, org=CommerzbankAG] INFO  org.candlepin.controller.CandlepinPoolManager - Recomputing status for 1 consumers.
> 2016-10-31 13:57:18,078 [req=9ef736cf-5a29-47ba-81d3-4fcf09e9e665, org=CommerzbankAG] INFO  org.candlepin.controller.CandlepinPoolManager - All statuses recomputed.
> 2016-10-31 13:57:18,180 [req=9ef736cf-5a29-47ba-81d3-4fcf09e9e665, org=CommerzbankAG] ERROR org.candlepin.common.exceptions.mappers.CandlepinExceptionMapper - Runtime Error ERROR: update or delete on table "cp_consumer" violates foreign key constraint "fk_sourcestack_consumer" on table "cp_pool_source_stack"
>   Detail: Key (id)=(148682475755cb28015774e6f18532f8) is still referenced from table "cp_pool_source_stack". at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse:2,157
> org.postgresql.util.PSQLException: ERROR: update or delete on table "cp_consumer" violates foreign key constraint "fk_sourcestack_consumer" on table "cp_pool_source_stack"
>   Detail: Key (id)=(148682475755cb28015774e6f18532f8) is still referenced from table "cp_pool_source_stack".
>         at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2157) ~[postgresql-jdbc.jar:na]
>         at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1886) ~[postgresql-jdbc.jar:na]
>         at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255) ~[postgresql-jdbc.jar:na]
>         at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:555) ~[postgresql-jdbc.jar:na]
>         at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:417) ~[postgresql-jdbc.jar:na]
>         at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:363) ~[postgresql-jdbc.jar:na]
>         at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeUpdate(NewProxyPreparedStatement.java:447) ~[c3p0-0.9.1.2.jar:0.9.1.2]
>         at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:133) ~[hibernate-core-4.2.7.SP2-redhat-1.jar:4.2.7.SP2-redhat-1]
>         at org.hibernate.engine.jdbc.batch.internal.NonBatchingBatch.addToBatch(NonBatchingBatch.java:58) ~[hibernate-core-4.2.7.SP2-redhat-1.jar:4.2.7.SP2-redhat-1]
>         at org.hibernate.persister.entity.AbstractEntityPersister.delete(AbstractEntityPersister.java:3357) ~[hibernate-core-4.2.7.SP2-redhat-1.jar:4.2.7.SP2-redhat-1]
>         at org.hibernate.persister.entity.AbstractEntityPersister.delete(AbstractEntityPersister.java:3560) ~[hibernate-core-4.2.7.SP2-redhat-1.jar:4.2.7.SP2-redhat-1]
>         at org.hibernate.action.internal.EntityDeleteAction.execute(EntityDeleteAction.java:102) ~[hibernate-core-4.2.7.SP2-redhat-1.jar:4.2.7.SP2-redhat-1]
>         at org.hibernate.engine.spi.ActionQueue.execute(ActionQueue.java:393) ~[hibernate-core-4.2.7.SP2-redhat-1.jar:4.2.7.SP2-redhat-1]
>         at org.hibernate.engine.spi.ActionQueue.executeActions(ActionQueue.java:385) ~[hibernate-core-4.2.7.SP2-redhat-1.jar:4.2.7.SP2-redhat-1]
>         at org.hibernate.engine.spi.ActionQueue.executeActions(ActionQueue.java:308) ~[hibernate-core-4.2.7.SP2-redhat-1.jar:4.2.7.SP2-redhat-1]
>         at org.hibernate.event.internal.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:339) ~[hibernate-core-4.2.7.SP2-redhat-1.jar:4.2.7.SP2-redhat-1]
>         at org.hibernate.event.internal.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:52) ~[hibernate-core-4.2.7.SP2-redhat-1.jar:4.2.7.SP2-redhat-1]
>         at org.hibernate.internal.SessionImpl.flush(SessionImpl.java:1240) ~[hibernate-core-4.2.7.SP2-redhat-1.jar:4.2.7.SP2-redhat-1]
>         at org.hibernate.ejb.AbstractEntityManagerImpl.flush(AbstractEntityManagerImpl.java:996) ~[hibernate-entitymanager-4.2.7.SP2-redhat-1.jar:4.2.7.SP2-redhat-1]
>         at org.candlepin.model.AbstractHibernateCurator.flush(AbstractHibernateCurator.java:364) ~[AbstractHibernateCurator.class:na]
>         at org.candlepin.model.AbstractHibernateCurator.save(AbstractHibernateCurator.java:359) ~[AbstractHibernateCurator.class:na]
>         at org.candlepin.model.AbstractHibernateCurator.create(AbstractHibernateCurator.java:112) ~[AbstractHibernateCurator.class:na]
>         at com.google.inject.persist.jpa.JpaLocalTxnInterceptor.invoke(JpaLocalTxnInterceptor.java:58) ~[guice-persist-3.0-redhat-1.jar:3.0-redhat-1]
>         at org.candlepin.model.ConsumerCurator.delete(ConsumerCurator.java:109) ~[ConsumerCurator.class:na]
>         at com.google.inject.persist.jpa.JpaLocalTxnInterceptor.invoke(JpaLocalTxnInterceptor.java:58) ~[guice-persist-3.0-redhat-1.jar:3.0-redhat-1]
>         at org.candlepin.resource.ConsumerResource.deleteConsumer(ConsumerResource.java:1230) ~[ConsumerResource.class:na]
> ...
>         at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:314) [tomcat-coyote.jar:7.0.54]
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_111]
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_111]
>         at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-coyote.jar:7.0.54]
>         at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
> 2016-10-31 13:57:18,184 [req=9ef736cf-5a29-47ba-81d3-4fcf09e9e665, org=CommerzbankAG] INFO  org.candlepin.common.filter.LoggingFilter - Response: status=500, content-type="application/json", time=333
>

--- Additional comment from RHEL Product and Program Management on 2016-11-01 15:57:21 EDT ---

Since this issue was entered in Red Hat Bugzilla, the pm_ack has been
set to + automatically for the next planned release.

--- Additional comment from Marcel Gazdík on 2016-11-10 11:03:37 EST ---

This affects 6.2 as well. The problem is that rows in cp_pool_source_stack still point to the cp_consumer row, and they are deleted neither in cascade nor in advance by the script. From the cp_pool_source_stack definition:


candlepin=# \d cp_pool_source_stack;
           Table "public.cp_pool_source_stack"
      Column       |           Type           | Modifiers 
-------------------+--------------------------+-----------
 id                | character varying(32)    | not null
 sourceconsumer_id | character varying(32)    | not null
 sourcestackid     | character varying(255)   | not null
 derivedpool_id    | character varying(32)    | not null
 created           | timestamp with time zone | 
 updated           | timestamp with time zone | 
Indexes:
    "cp_pool_source_stack_pkey" PRIMARY KEY, btree (id)
    "cp_pool_source_stack_pool_ukey" UNIQUE CONSTRAINT, btree (derivedpool_id)
    "cp_pool_source_stack_ukey" UNIQUE CONSTRAINT, btree (sourceconsumer_id, sourcestackid)
    "idx_sourcestack_pool_fk" btree (derivedpool_id)
Foreign-key constraints:
    "fk_sourcestack_consumer" FOREIGN KEY (sourceconsumer_id) REFERENCES cp_consumer(id)
    "fk_sourcestack_pool" FOREIGN KEY (derivedpool_id) REFERENCES cp_pool(id) ON DELETE CASCADE

The foreign key:

"fk_sourcestack_consumer" FOREIGN KEY (sourceconsumer_id) REFERENCES cp_consumer(id)

should be changed to: 

"fk_sourcestack_consumer" FOREIGN KEY (sourceconsumer_id) REFERENCES cp_consumer(id) ON DELETE CASCADE

Workaround:
The workaround is to temporarily enable cascading deletes for the pool source stacks that point to the cp_consumer row being deleted:

  katello-service stop
  service postgresql start
  ALTER TABLE cp_pool_source_stack DROP CONSTRAINT fk_sourcestack_consumer;
  ALTER TABLE cp_pool_source_stack ADD CONSTRAINT fk_sourcestack_consumer FOREIGN KEY (sourceconsumer_id) REFERENCES cp_consumer(id) ON DELETE CASCADE;
  katello-service start

Then re-execute the action. Even though this table is not referenced by any other table, I would recommend restoring the database schema to the same state it was in before, once the delete operation has completed. This can be achieved by executing the following steps:

  katello-service stop
  service postgresql start
  ALTER TABLE cp_pool_source_stack DROP CONSTRAINT fk_sourcestack_consumer;
  ALTER TABLE cp_pool_source_stack ADD CONSTRAINT fk_sourcestack_consumer FOREIGN KEY (sourceconsumer_id) REFERENCES cp_consumer(id);
  katello-service start
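The constraint behavior described above can be illustrated with a small self-contained sketch. This uses SQLite rather than PostgreSQL, and only a simplified two-column subset of the cp_pool_source_stack schema, but the foreign-key semantics are the same: without ON DELETE CASCADE, deleting the consumer fails exactly as in the trace; with it, the dependent source-stack row is removed automatically.

```python
# Illustration (SQLite, simplified schema) of the fk_sourcestack_consumer
# behavior: a plain FK blocks the consumer delete; ON DELETE CASCADE allows it.
import sqlite3

def build_db(cascade: bool) -> sqlite3.Connection:
    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")  # SQLite disables FK checks by default
    conn.execute("CREATE TABLE cp_consumer (id TEXT PRIMARY KEY)")
    fk = "REFERENCES cp_consumer(id)" + (" ON DELETE CASCADE" if cascade else "")
    conn.execute(f"""CREATE TABLE cp_pool_source_stack (
        id TEXT PRIMARY KEY,
        sourceconsumer_id TEXT NOT NULL {fk})""")
    conn.execute("INSERT INTO cp_consumer VALUES ('c1')")
    conn.execute("INSERT INTO cp_pool_source_stack VALUES ('s1', 'c1')")
    return conn

# Without cascade: deleting the consumer raises a FK violation, as in the bug.
try:
    build_db(cascade=False).execute("DELETE FROM cp_consumer WHERE id = 'c1'")
except sqlite3.IntegrityError as e:
    print("without cascade:", e)

# With cascade: the delete succeeds and the source-stack row goes with it.
conn = build_db(cascade=True)
conn.execute("DELETE FROM cp_consumer WHERE id = 'c1'")
print("rows left:", conn.execute(
    "SELECT COUNT(*) FROM cp_pool_source_stack").fetchone()[0])
```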

--- Additional comment from Barnaby Court on 2016-11-14 08:23:35 EST ---

Is the original action a bulk delete of hypervisors that are providing host based entitlements and guests that are consuming those entitlements at the same time?

--- Additional comment from Francisco Javier Lopez Y Grueber on 2016-11-22 07:09:17 EST ---

Yes, the selected overcloud hosts had been redeployed. During the stack delete they were not properly deleted.

Comment 1 Chris "Ceiu" Rog 2017-02-09 15:03:15 UTC
Unfortunately I was unable to reproduce this issue in our environment. If this comes up again, please reopen this BZ (or open a new one) and provide a dump of the database while the issue persists. Once we have a reproducer, we can investigate this further.

