Bug 1524417 - "apipie:cache:index" task is getting executed two times while updating packages from 6.1 to 6.2 using yum update.
Summary: "apipie:cache:index" task is getting executed two times while updating packag...
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Upgrades
Version: 6.2.12
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: Unspecified
Assignee: satellite6-bugs
QA Contact: Nikhil Kathole
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-12-11 12:56 UTC by sandeep mutkule
Modified: 2019-02-28 19:35 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-02-28 19:35:16 UTC
Target Upstream Version:
Embargoed:


Attachments
Image attached (140.53 KB, image/png)
2017-12-11 13:28 UTC, sandeep mutkule


Links
System ID: Foreman Issue Tracker 21965
Priority: High | Status: Resolved
Summary: rake tasks are getting executed multiple times on yum update with plugins
Last Updated: 2020-07-04 00:55:29 UTC

Description sandeep mutkule 2017-12-11 12:56:19 UTC
Description of problem:

While upgrading Satellite to its latest version, the "apipie:cache:index" task runs twice, which adds significant time to the upgrade and should be reduced.


Version-Release number of selected component (if applicable): satellite 6.2


How reproducible:
1. Run the command below to upgrade Satellite:

satellite-installer --scenario satellite --upgrade


Actual results:
The upgrade takes too much time.


Expected results:
Upgrading Satellite should take less time than it currently does.


Additional info:

Comment 1 sandeep mutkule 2017-12-11 13:26:11 UTC
While running yum update to install/upgrade packages from 6.1 to 6.2, yum creates bash scripts in the /tmp/ directory, such as:

/tmp/sclPliDkB 
/tmp/tmp.BGgtiKxsvJ 
/tmp/rpm-tmp.1U8aWH

These scripts contain rake tasks such as:

db:migrate
db:seed
apipie:cache:index 

For customers with a large Satellite setup these tasks take a long time to complete. While they are running in the background there is no user-friendly message on the console, which makes users think the "yum update" command is stuck or not responding.

Since we already run all of the above tasks during "satellite-installer --upgrade", can we skip them while updating packages?

If not, there should be a user-friendly message on the console so admins know to wait until it completes.
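As an interim way for an admin to tell that "yum update" is still working rather than hung, one could watch for recent writes to the logs those scriptlets append to. This is only a workaround sketch, not an official procedure; the log directory below is the standard Foreman location and the check itself is an assumption.

```shell
#!/bin/sh
# Sketch of an "is it stuck?" check (assumed log location, not an
# official workaround). The RPM scriptlets append rake output to logs
# under /var/log/foreman, so recent writes there mean work is happening.
LOG_DIR=${LOG_DIR:-/var/log/foreman}

rake_logs_active() {
    # Succeed if any log directly under $LOG_DIR changed in the last 2 minutes.
    find "$LOG_DIR" -maxdepth 1 -name '*.log' -mmin -2 2>/dev/null | grep -q .
}

if rake_logs_active; then
    echo "rake tasks are still writing logs; yum is not stuck"
else
    echo "no recent rake log activity under $LOG_DIR"
fi
```

Running this in a second terminal (or under watch) during the upgrade gives at least some visibility while the scriptlets stay silent on the console.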

Comment 2 sandeep mutkule 2017-12-11 13:26:49 UTC
Attaching image from a test system for reference:

Comment 3 sandeep mutkule 2017-12-11 13:28:26 UTC
Created attachment 1366001 [details]
Image attached

Comment 6 Ashish Humbe 2017-12-13 10:07:21 UTC
Hi Brad,

Yes, this issue is seen during the 6.2 to 6.3 upgrade too.

Here are some details from one of my recent upgrades, 6.2.7 to 6.2.12.

This is the script that gets created under /var/tmp/:

# cat /var/tmp/rpm-tmp.dM27yT 
# We need to run the db:migrate after the install transaction
# always attempt to reencrypt after update in case new fields can be encrypted
/usr/sbin/foreman-rake db:migrate db:encrypt_all >> /var/log/foreman/db_migrate.log 2>&1 || :
/usr/sbin/foreman-rake db:seed >> /var/log/foreman/db_seed.log 2>&1 || :
/usr/sbin/foreman-rake apipie:cache:index >> /var/log/foreman/apipie_cache.log 2>&1 || :
(/sbin/service foreman status && /sbin/service foreman restart) >/dev/null 2>&1



# cat /tmp/tmp.q1hd3u46nC 
rake db:migrate db:encrypt_all

# date ; cat /tmp/tmp.1YxDGNuRI2 
Wed Dec 13 05:22:36 IST 2017
rake apipie:cache:index

# date ; cat /tmp/tmp.0WDLlgAm0y 
Wed Dec 13 05:24:14 IST 2017
rake db:migrate

# date ; cat /tmp/tmp.uugUYwHdqX  
Wed Dec 13 05:24:34 IST 2017
rake db:seed

# date ; cat /tmp/tmp.DQagGFOFML 
Wed Dec 13 05:24:58 IST 2017
rake apipie:cache:index

# date ; cat /tmp/tmp.SdPGWupI6p 
Wed Dec 13 05:26:37 IST 2017
rake db:migrate

# date ; cat /tmp/tmp.T7YSLIW9by 
Wed Dec 13 05:26:58 IST 2017
rake db:seed

# date ; cat /tmp/tmp.iNIq65b0zu 
Wed Dec 13 05:27:23 IST 2017
rake apipie:cache:index

# date ; cat /tmp/tmp.h3oe3OYPx4 
Wed Dec 13 05:29:37 IST 2017
rake db:seed

# date ; cat /tmp/tmp.LN5v3Ok0AO 
Wed Dec 13 05:30:06 IST 2017
rake ldap:refresh_usergroups

# date ; cat /tmp/tmp.tE3NdUdZ6R 
Wed Dec 13 05:30:11 IST 2017
rake apipie:cache:index

# date ; cat /tmp/tmp.WeDUj2hb4N 
Wed Dec 13 05:30:15 IST 2017
rake trends:counter

# date ; cat /tmp/tmp.tE3NdUdZ6R 
Wed Dec 13 05:30:22 IST 2017
rake apipie:cache:index

# date ; cat /tmp/tmp.UKaU3YA3Xm 
Wed Dec 13 05:32:20 IST 2017
rake db:migrate

# date ; cat /tmp/tmp.uwTR0BKM4M 
Wed Dec 13 05:32:51 IST 2017
rake db:seed

# date ; cat /tmp/tmp.OHqyik9BUh 
Wed Dec 13 05:33:11 IST 2017
rake apipie:cache

# date ; cat /tmp/tmp.OHqyik9BUh 
Wed Dec 13 05:38:19 IST 2017
rake apipie:cache

Here we can see that the yum update command produced no output on the console for 16+ minutes.

I will try to get similar details for the 6.2 to 6.3 upgrade and update this BZ.

Comment 7 Ivan Necas 2017-12-13 18:44:14 UTC
I'm suggesting this for foreman-maintain: the idea would be to set something (such as an env variable, or maybe a file present on the filesystem) to skip these steps, since we run them via the upgrade process later anyway.

This task would consist of two steps:

1. Update the packaging spec to allow skipping these parts (such as here: https://github.com/theforeman/foreman-packaging/blob/rpm/1.16/foreman/foreman.spec#L862); the scriptlets need to be aware of the skip mechanism.

2. Teach foreman-maintain how to use the mechanism introduced in step 1 to disable these steps.
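A minimal sketch of what step 1 could look like in the scriptlet, under assumptions: the flag-file path, the function name, and the overridable RAKE variable are all hypothetical, not the shipped implementation.

```shell
#!/bin/sh
# Hypothetical sketch of the skip mechanism (flag path and names are
# assumptions, not from the real spec). RAKE is overridable so the
# sketch can be exercised without a real foreman-rake.
SKIP_FLAG=${SKIP_FLAG:-/var/lib/foreman/skip-rpm-rake-tasks}
RAKE=${RAKE:-/usr/sbin/foreman-rake}

maybe_run_rake() {
    # If foreman-maintain created the flag file, do nothing here: the
    # tasks will run once, via "satellite-installer --upgrade", instead.
    if [ -f "$SKIP_FLAG" ]; then
        return 0
    fi
    "$RAKE" "$@" || :
}
```

foreman-maintain would touch the flag before driving yum and remove it afterwards, so a plain manual "yum update" keeps today's behavior.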

Comment 8 Ivan Necas 2017-12-13 18:59:12 UTC
Actually, thinking more about this: while the foreman-maintain addition would be useful, since it would prevent running all the rake scripts in the rpm scriptlets, I think the solution might simply be to teach the foreman macros to check whether a particular rake task was already run during the transaction; that would remove the need to run it twice.

The problem is that every plugin adds the db:migrate and apipie:cache steps to its own post-trans phase, which results in these tasks running multiple times. The solution might be:

1. In %posttrans, every affected rake run would create a mark that it was run. Subsequent rake runs would notice this and skip the call.

2. In %pretrans, we would clear the marks from the previous run so that the posttrans tasks run again in a new update session.

The mark could be a simple file containing all the rake calls that were already run during the update.
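The marker idea above could look roughly like this. It is only a sketch: the marker path, helper names, and the overridable RAKE variable are hypothetical, not the actual foreman macros.

```shell
#!/bin/sh
# Sketch of the run-once marker mechanism from the comment above.
# Marker path and helper names are hypothetical, not the real macros.
MARKER=${MARKER:-/var/lib/foreman/rake-tasks-run}
RAKE=${RAKE:-/usr/sbin/foreman-rake}

# %pretrans counterpart: clear markers so a new update session reruns tasks.
clear_rake_markers() {
    rm -f "$MARKER"
}

# %posttrans helper: run a given rake task only once per update transaction.
run_rake_once() {
    task="$1"
    # Skip if this exact task is already recorded in the marker file.
    if [ -f "$MARKER" ] && grep -qxF "$task" "$MARKER"; then
        return 0
    fi
    "$RAKE" "$task" || :
    echo "$task" >> "$MARKER"
}
```

With this, each plugin's %posttrans would call run_rake_once instead of foreman-rake directly, so only the first scriptlet in the transaction actually pays for db:migrate or apipie:cache:index.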

Comment 9 Ivan Necas 2017-12-13 19:13:00 UTC
Created redmine issue http://projects.theforeman.org/issues/21965 from this bug

Comment 10 Bryan Kearney 2019-02-07 12:09:38 UTC
The Satellite Team is attempting to provide an accurate backlog of bugzilla requests which we feel will be resolved in the next few releases. We do not believe this bugzilla will meet that criteria, and have plans to close it out in 1 month. This is not a reflection on the validity of the request, but a reflection of the many priorities for the product. If you have any concerns about this, feel free to contact Red Hat Technical Support or your account team. If we do not hear from you, we will close this bug out. Thank you.

Comment 11 Bryan Kearney 2019-02-28 19:35:16 UTC
Thank you for your interest in Satellite 6. We have evaluated this request, and while we recognize that it is a valid request, we do not expect this to be implemented in the product in the foreseeable future. This is due to other priorities for the product, and not a reflection on the request itself. We are therefore closing this out as WONTFIX. If you have any concerns about this, please do not reopen. Instead, feel free to contact Red Hat Technical Support. Thank you.

