Bug 2008252
| Summary: | Update the Hypervisor last_checkin when the same hypervisor is still around and reporting | ||
|---|---|---|---|
| Product: | Red Hat Satellite | Reporter: | Waldirio M Pinheiro <wpinheir> |
| Component: | Hosts - Content | Assignee: | Chris Roberts <chrobert> |
| Status: | CLOSED MIGRATED | QA Contact: | Satellite QE Team <sat-qe-bz-list> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 6.9.0 | CC: | chrobert, pcreech, rlavi |
| Target Milestone: | Unspecified | Keywords: | MigratedToJIRA, Triaged |
| Target Release: | Unused | ||
| Hardware: | All | ||
| OS: | All | ||
| Whiteboard: | |||
| Fixed In Version: | Doc Type: | If docs needed, set a value | |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2024-06-06 01:03:55 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
|
Description
Waldirio M Pinheiro
2021-09-27 17:33:47 UTC
Hello,

Normally, we recommend that customers remove old entries from Satellite. The sysadmin can get this list with the filter below:

--- last_checkin < "30 days ago" ---

The point is, if a hypervisor is still around but has had no changes during the last 30 days, it will be part of this list and could be removed as well. The worst-case scenario is:

1. The hypervisor is valid and has a lot of guests on top of it
2. The rh_cloud plugin is pushing the information
3. Subscriptions is removing all the VMs and adding only the hypervisor

Once we remove the hypervisor, we see:

1. No hypervisor on Satellite until the next virt-who cycle
2. The rh_cloud plugin pushes the new information
3. All the VMs previously not in Subscriptions are added, because the host-guest mapping was removed
4. Subscriptions updates the graph with some *crazy values* that make no sense

The next day, the environment returns to its original state.

Note, we can easily fix this behavior by keeping last_checkin updated while the hypervisor is still around. For example:

1. virt-who pushes 5 hypervisors (h1, h2, h3, h4 and h5)
2. virt-who keeps pushing the information; no change on the hypervisors, but all of them are still around (last_checkin gets updated)
3. Hypervisor h3 gets retired; now the list is (h1, h2, h4 and h5)
4. All the hypervisors get their last_checkin updated, except for h3

In this scenario we can guarantee that if a hypervisor's last_checkin is greater than 30 days ago, it is because that server is no longer around.

Please let me know if you have additional questions or concerns.

Upon review of our valid but aging backlog, the Satellite Team has concluded that this Bugzilla does not meet the criteria for resolution in the near term, and is planning to close it in a month. This message may be a repeat of a previous update, and the bug is again being considered for closure. If you have any concerns about this, please contact your Red Hat Account team. Thank you.
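The proposed fix amounts to touching last_checkin for every hypervisor present in each virt-who report, even when its guest list is unchanged, so that a stale last_checkin reliably means the host is gone. A minimal sketch of that bookkeeping (hypothetical names and an in-memory dict standing in for the database column; this is not the actual Satellite/Candlepin code):

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=30)

# last_checkin per hypervisor (hypothetical stand-in for the DB column)
last_checkin: dict[str, datetime] = {}

def record_virt_who_report(hypervisors: list[str], now: datetime) -> None:
    """Touch last_checkin for every hypervisor in the report,
    even if nothing about it changed since the last cycle."""
    for h in hypervisors:
        last_checkin[h] = now

def stale_hypervisors(now: datetime) -> list[str]:
    """Hosts matching last_checkin < '30 days ago' -- safe to remove,
    because anything still reporting was just touched."""
    return [h for h, t in last_checkin.items() if now - t > STALE_AFTER]

# Walk the example from the description: h1..h5 report, then h3 retires.
t0 = datetime(2021, 9, 1, tzinfo=timezone.utc)
record_virt_who_report(["h1", "h2", "h3", "h4", "h5"], t0)
record_virt_who_report(["h1", "h2", "h4", "h5"], t0 + timedelta(days=40))
print(stale_hypervisors(t0 + timedelta(days=40)))  # only h3 is stale
```

With this behavior, the `last_checkin < "30 days ago"` cleanup filter only ever matches retired hypervisors, so the host-guest mapping for live hypervisors is never torn down by routine cleanup.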
Adding to sprint 127

New virt-who bug: https://issues.redhat.com/browse/RHEL-25413

Ron, I made a PR to make sure we note the checkin time, can this get 6.16?

This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there. Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it and begin with "SAT-" followed by an integer. You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like: "Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.