Bug 2153082
| Summary: | [RFE] Option to change the recurring logics cron line. | ||
|---|---|---|---|
| Product: | Red Hat Satellite | Reporter: | Waldirio M Pinheiro <wpinheir> |
| Component: | Tasks Plugin | Assignee: | satellite6-bugs <satellite6-bugs> |
| Status: | CLOSED DUPLICATE | QA Contact: | Peter Ondrejka <pondrejk> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 6.11.0 | CC: | aruzicka |
| Target Milestone: | Unspecified | Keywords: | FutureFeature |
| Target Release: | Unused | ||
| Hardware: | All | ||
| OS: | All | ||
| Whiteboard: | |||
| Fixed In Version: | Doc Type: | If docs needed, set a value | |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2023-01-03 11:57:05 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
|
Description
Waldirio M Pinheiro
2022-12-13 23:33:32 UTC
Let's check this one over here.

The recurring logics from my env:

```
hammer recurring-logic list
---|-----------|-----------|----------|-------
ID | CRON LINE | ITERATION | END TIME | STATE
---|-----------|-----------|----------|-------
9  | 0 0 * * * | 95        |          | active
1  | 0 0 * * * | 95        |          | active
7  | 0 0 * * * | 95        |          | active
8  | 0 0 * * * | 95        |          | active
---|-----------|-----------|----------|-------
```

I'm aware that the Inventory sync is id 7:

```
hammer recurring-logic show --id 7
ID:              7
Cron line:       0 0 * * *
Action:          Inventory scheduled sync
Last occurrence: 2022-12-13 00:00:05 UTC
Next occurrence: 2022-12-14 00:00:00 UTC
Iteration:       95
Iteration limit: Unlimited
Repeat until:    Unlimited
State:           active
```

Checking on the DB:

```
[root@wallsat611-rhel7 ~]# echo "select * from foreman_tasks_recurring_logics" | su - postgres -c "psql foreman"
 id | cron_line | end_time | max_iteration | iteration | task_group_id | state  | triggering_id | purpose
----+-----------+----------+---------------+-----------+---------------+--------+---------------+---------
  9 | 0 0 * * * |          |               |        95 |             9 | active |               |
  1 | 0 0 * * * |          |               |        95 |             1 | active |               |
  7 | 0 0 * * * |          |               |        95 |             7 | active |               |
  8 | 0 0 * * * |          |               |        95 |             8 | active |               |
(4 rows)
```

Updating to 4 AM:

```
# echo "update foreman_tasks_recurring_logics set cron_line = '0 4 * * *' where id=7" | su - postgres -c "psql foreman"
```

Checking once again:

```
[root@wallsat611-rhel7 ~]# echo "select * from foreman_tasks_recurring_logics" | su - postgres -c "psql foreman"
 id | cron_line | end_time | max_iteration | iteration | task_group_id | state  | triggering_id | purpose
----+-----------+----------+---------------+-----------+---------------+--------+---------------+---------
  9 | 0 0 * * * |          |               |        95 |             9 | active |               |
  1 | 0 0 * * * |          |               |        95 |             1 | active |               |
  8 | 0 0 * * * |          |               |        95 |             8 | active |               |
  7 | 0 4 * * * |          |               |        95 |             7 | active |               |
(4 rows)
```

I'm not sure whether a restart is necessary, but for the sake of sanity, let's do it.
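As a side note, a cron line like `0 4 * * *` follows the standard five-field format (minute, hour, day of month, month, day of week), and a typo here would silently break the schedule. A minimal sanity check before writing a value to the database could be sketched as follows (a hypothetical helper, not part of Satellite or hammer; it only handles plain numbers and `*`, not ranges, lists, or steps):

```python
# Minimal five-field cron line validator (hypothetical helper, not part of Satellite).
# Accepts '*' in any field; otherwise the field must be a number in range.

FIELD_RANGES = [
    ("minute", 0, 59),
    ("hour", 0, 23),
    ("day of month", 1, 31),
    ("month", 1, 12),
    ("day of week", 0, 7),  # both 0 and 7 mean Sunday
]

def valid_cron_line(line: str) -> bool:
    fields = line.split()
    if len(fields) != 5:
        return False
    for value, (name, lo, hi) in zip(fields, FIELD_RANGES):
        if value == "*":
            continue
        if not value.isdigit() or not lo <= int(value) <= hi:
            return False
    return True

print(valid_cron_line("0 0 * * *"))   # True  (midnight, the original value)
print(valid_cron_line("0 4 * * *"))   # True  (4 AM, the updated value)
print(valid_cron_line("0 25 * * *"))  # False (hour out of range)
```

Running the check before the `update` statement would catch a malformed line before dynflow ever tries to schedule from it.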
```
# foreman-maintain service status -b
# hammer ping
# foreman-maintain service restart
<< wait ~30 seconds >>
# foreman-maintain service status -b
# hammer ping
```

At this moment, you should also be able to confirm via the web UI that the cron job was updated.

Comment 2

Please don't just change things in the database. This BZ talks about two things: having an option to change the cron lines of recurring logics, and the Insights sync failing when cloud is too busy. The first one was already reported some time ago in https://bugzilla.redhat.com/show_bug.cgi?id=2135792 ; the latter should be addressed by https://bugzilla.redhat.com/show_bug.cgi?id=2127180 . With that said, this BZ doesn't really bring anything new to the table, and I'd suggest closing it as a duplicate of either of the two BZs mentioned above.

Comment 3

Hi Waldirio, do you have any thoughts or feedback on comment 2? Can/should this be closed as mentioned? Thanks!

Comment 4

Hello all, yes, definitely we can close the current one. Thank you! Waldirio

*** This bug has been marked as a duplicate of bug 2135792 ***