Bug 1554307
Summary: | [RFE] Spacewalk needs a way for the Spacewalk Administrator to manage Snapshots | |
---|---|---|---|
Product: | Red Hat Satellite 5 | Reporter: | Tomáš Kašpárek <tkasparek> |
Component: | Server | Assignee: | Grant Gainey <ggainey> |
Status: | CLOSED ERRATA | QA Contact: | Radovan Drazny <rdrazny> |
Severity: | unspecified | Docs Contact: | |
Priority: | unspecified | |
Version: | 580 | CC: | cshereme, ggainey, jhutar, ktordeur, rdrazny, satqe-list, tkasparek, tlestach |
Target Milestone: | --- | |
Target Release: | --- | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | spacewalk-utils-2.5.1-31-sat | Doc Type: | Enhancement |
Doc Text: |
Feature:
Add tooling to give the Satellite Administrator more control over Snapshots
Reason:
If enable_snapshots is set for a server, entries are made in the RHNSNAPSHOT* tables with every change to every server. Over time, these tables grow without bound, with increasing impact on the space and performance of the Satellite instance (a query sketch illustrating this growth appears after the metadata table below).
The current tool for managing snapshots, sw-system-snapshot, is designed for a system administrator to manage the snapshots associated with their own systems. It has no way to 'see' ALL the snapshots, its use of the public API means it must be invoked with a login/password, and it accepts only absolute timestamps; together, these make it less than useful as a tool for the Satellite Administrator.
Result:
This RFE introduces a new tool, spacewalk-manage-snapshots, that addresses these concerns. See 'man spacewalk-manage-snapshots' for details.
|
Story Points: | --- | |
Clone Of: | 1537766 | Environment: | |
Last Closed: | 2018-05-15 21:46:38 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | |
Bug Depends On: | 1537766 | |
Bug Blocks: | 1450111 | |
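As a companion to the Doc Text above, the growth it describes can be watched directly in the database. A minimal sketch, assuming a PostgreSQL backend and the rhnsnapshot* tables named in the report output in comment 3:

```sql
-- Row counts for the largest snapshot tables (names as reported by
-- 'spacewalk-manage-snapshots -r'); re-run periodically to watch growth.
SELECT 'rhnsnapshot'        AS table_name, count(*) AS row_count FROM rhnsnapshot
UNION ALL
SELECT 'rhnsnapshotpackage', count(*) FROM rhnsnapshotpackage
UNION ALL
SELECT 'rhnsnapshotchannel', count(*) FROM rhnsnapshotchannel;
```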
Comment 3
Radovan Drazny
2018-04-24 13:42:34 UTC
spacewalk.github: 439bbadb0ef583a6ff917c897997002750b2355d

When running a report and then a delete right after it, there is quite a discrepancy between the number of snapshots older than 1 day in the report and, immediately afterwards, the number of snapshots older than 1 day to delete. Shouldn't these two numbers be the same? See the two following examples:

[root@host-8-248-201 rpms]# spacewalk-manage-snapshots -r -i 1
Table name                : rows
rhnsnapshot               : 95750
rhnsnapshotchannel        : 191500
rhnsnapshotconfigchannel  : 95750
rhnsnapshotconfigrevision : 95750
rhnsnapshotinvalidreason  : 6
rhnsnapshotpackage        : 27623850
rhnsnapshotservergroup    : 191500
rhnsnapshottag            : 0
: Snapshot info, 1-day interval :
: age(days) : systems : snapshots :
:       1-1 :     100 :     84657 :
:       2-2 :     100 :     11093 :

[root@host-8-248-201 rpms]# spacewalk-manage-snapshots -d 1 -b 1000
Deleting snapshots older than 1 days
95750 snapshots currently
12353 snapshots to be deleted, 1000 per commit
... 12353 snapshots left to purge
... 11353 snapshots left to purge
... 10353 snapshots left to purge
... 9353 snapshots left to purge
... 8353 snapshots left to purge
... 7353 snapshots left to purge
... 6353 snapshots left to purge
... 5353 snapshots left to purge
... 4353 snapshots left to purge
... 3353 snapshots left to purge
... 2353 snapshots left to purge
... 1353 snapshots left to purge
... 353 snapshots left to purge
83397 snapshots remain

-----------------------------------------------------------------------

[root@host-8-248-201 rpms]# spacewalk-manage-snapshots -r -i 1
Table name                : rows
rhnsnapshot               : 98651
rhnsnapshotchannel        : 197302
rhnsnapshotconfigchannel  : 98651
rhnsnapshotconfigrevision : 98651
rhnsnapshotinvalidreason  : 6
rhnsnapshotpackage        : 28460788
rhnsnapshotservergroup    : 197302
rhnsnapshottag            : 0
: Snapshot info, 1-day interval :
: age(days) : systems : snapshots :
:       1-1 :     100 :     96600 :
:       2-2 :     100 :      2051 :

[root@host-8-248-201 rpms]# spacewalk-manage-snapshots -d 1 -b 100
Deleting snapshots older than 1 days
98651 snapshots currently
2901 snapshots to be deleted, 100 per commit
... 2901 snapshots left to purge
... 2801 snapshots left to purge
... 2701 snapshots left to purge
... 2601 snapshots left to purge
... 2501 snapshots left to purge
... 2401 snapshots left to purge
... 2301 snapshots left to purge
... 2201 snapshots left to purge
... 2101 snapshots left to purge
... 2001 snapshots left to purge
... 1901 snapshots left to purge
... 1801 snapshots left to purge
... 1701 snapshots left to purge
... 1601 snapshots left to purge
... 1501 snapshots left to purge
... 1401 snapshots left to purge
... 1301 snapshots left to purge
... 1201 snapshots left to purge
... 1101 snapshots left to purge
... 1001 snapshots left to purge
... 901 snapshots left to purge
... 801 snapshots left to purge
... 701 snapshots left to purge
... 601 snapshots left to purge
... 501 snapshots left to purge
... 401 snapshots left to purge
... 301 snapshots left to purge
... 201 snapshots left to purge
... 101 snapshots left to purge
... 1 snapshots left to purge
95750 snapshots remain

The difference is because you were creating a lot of snapshots very quickly, and 'in the last day' is governed by "within the last 24 hours' worth of milliseconds" - so the "purge older than" window slides forward between "-r -i 1" and "-d 1 -b 100", and several hundred more snapshots become eligible. On the reproducing system, you can see this if you just keep re-running the "how many should I delete, boss?" query against the DB:

rhnschema=# select count(ss.id) from rhnsnapshot ss where ss.created < (current_timestamp - numtodsinterval(1, 'day'));
 count
-------
 19646
(1 row)

rhnschema=# select count(ss.id) from rhnsnapshot ss where ss.created < (current_timestamp - numtodsinterval(1, 'day'));
 count
-------
 19669
(1 row)

rhnschema=# select count(ss.id) from rhnsnapshot ss where ss.created < (current_timestamp - numtodsinterval(1, 'day'));
 count
-------
 19677
(1 row)

rhnschema=# select count(ss.id) from rhnsnapshot ss where ss.created < (current_timestamp - numtodsinterval(1, 'day'));
 count
-------
 19697
(1 row)

So just hitting up-arrow/enter as fast as I could made the count change from 19646 to 19697. Working as intended, I believe.
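To make the sliding window concrete, the moving cutoff can be contrasted with one captured once and reused. A minimal psql sketch, assuming the same rhnschema session as above and psql 9.3+ for \gset; the fixed-cutoff variant is an illustration only, not something the tool itself does:

```sql
-- Moving cutoff: recomputed at every statement, so the eligible count
-- keeps growing while new snapshots arrive (the effect shown above).
SELECT count(ss.id)
  FROM rhnsnapshot ss
 WHERE ss.created < (current_timestamp - numtodsinterval(1, 'day'));

-- Fixed cutoff: capture the boundary once into a psql variable, then
-- reuse the literal value; repeated counts against it stay stable.
SELECT current_timestamp - numtodsinterval(1, 'day') AS cutoff \gset
SELECT count(ss.id)
  FROM rhnsnapshot ss
 WHERE ss.created < :'cutoff';
```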
Ok then. Verified on spacewalk-utils-2.5.1-31.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:1565