Bug 1749017 - Spending a long time to validate the hostname when proceeding with the restore
Summary: Spending a long time to validate the hostname when proceeding with the restore
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Satellite Maintain
Version: 6.5.0
Hardware: All
OS: All
Priority: unspecified
Severity: medium
Target Milestone: 6.9.0
Assignee: Waldirio M Pinheiro
QA Contact: Lucie Vrtelova
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-09-04 17:15 UTC by Waldirio M Pinheiro
Modified: 2021-04-21 14:48 UTC
CC List: 10 users

Fixed In Version: rubygem-foreman_maintain-0.7.2-1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-04-21 14:48:22 UTC
Target Upstream Version:
Embargoed:




Links
- Foreman Issue Tracker 30809 (Normal, Closed): Spending a long time to validate the hostname when proceeding with the restore (last updated 2021-02-16 09:16:23 UTC)
- Red Hat Product Errata RHBA-2021:1312 (last updated 2021-04-21 14:48:34 UTC)

Description Waldirio M Pinheiro 2019-09-04 17:15:04 UTC
Description of problem:
When executing "foreman-maintain restore <path to the backup files>", we can see that validating the hostname takes a long time.

Below is the step in question and the process running behind it:
---
Validate hostname is the same as backup: 

28185 root      20   0  124740   2528   1072 S  10.8  0.0   0:15.40 tar zxf /backup/satellite-backup-2019-08-05-09-41-02/config_files.tar.gz etc/httpd/conf/httpd.conf --to-stdout
---
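
For orientation, here is a minimal Ruby sketch of what that validation step amounts to (method names are illustrative, not the actual foreman_maintain code): the entire httpd.conf is streamed out of config_files.tar.gz just to read a single ServerName directive.
---
# Hypothetical sketch; names are illustrative, not the real foreman_maintain code.
require 'open3'

# Read the ServerName directive out of the backed-up httpd.conf.
def hostname_from_backup(backup_dir)
  tarball = File.join(backup_dir, 'config_files.tar.gz')
  stdout, _stderr, status = Open3.capture3(
    'tar', 'zxf', tarball, 'etc/httpd/conf/httpd.conf', '--to-stdout'
  )
  return nil unless status.success?
  line = stdout.lines.find { |l| l.start_with?('ServerName') }
  line&.split(/\s+/, 2)&.last&.delete('"')&.strip
end

# Compare the backup's hostname with the current system's FQDN.
def restore_hostname_matches?(backup_dir)
  hostname_from_backup(backup_dir) == `hostname -f`.strip
end
---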

Version-Release number of selected component (if applicable):
6.5.z

How reproducible:
100%

Steps to Reproduce:
1. Create a backup
2. Restore the same backup

Actual results:
When the config archive is larger, the time to validate the hostname increases significantly.

Expected results:
Be faster (maybe an external file containing just the hostname would be enough)
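
A minimal sketch of that idea, assuming a hypothetical plain-text hostname file written next to the tarballs at backup time (not part of the actual backup format):
---
# Hypothetical sidecar-file approach; the file name and helpers are illustrative.
def record_backup_hostname(backup_dir)
  # Written once at backup time.
  File.write(File.join(backup_dir, 'hostname'), `hostname -f`.strip)
end

def backup_hostname_matches_system?(backup_dir)
  # Read at restore time: no tar extraction needed at all.
  File.read(File.join(backup_dir, 'hostname')).strip == `hostname -f`.strip
end
---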

Additional info:

Comment 3 Waldirio M Pinheiro 2019-09-04 17:17:02 UTC
Hi all

To complement, the foreman-maintain logs
---
D, [2019-09-02 21:01:22-0400 #23867] DEBUG -- : Running command tar zxf /backup/satellite-backup-2019-08-05-09-41-02/config_files.tar.gz etc/httpd/conf/httpd.conf --to-stdout | grep ServerName | awk {'print $2'} | tr -d '"' with stdin nil
---

Comment 4 Anurag Patel 2020-09-09 11:06:48 UTC
From the BZ description it looks like extracting etc/httpd/conf/httpd.conf from the backup tarball is taking a long time. Could you give us an idea about:

1. Where is the backup archive stored? (type of storage)
2. How big is the config_files.tar.gz file?
3. Roughly, how long does this task take?

Comment 5 Waldirio M Pinheiro 2020-09-09 20:40:41 UTC
Hello,

Thanks for your response.

> 1. Where is the backup archive stored? (type of storage)
A: It could be a regular filesystem or NFS. In my test, local storage was used.


> 2. How big is the config_files.tar.gz file?
A: This will vary according to the customer environment, but here is an example:

---
# ll
total 4396656
-rw-r--r--. 1 root     root     28920002265 Sep  9 16:21 config_files.tar.gz
...

# du -hs config_files.tar.gz 
33G	config_files.tar.gz
---

Then let's get the information from this file.
---
# time tar zxf config_files.tar.gz etc/httpd/conf/httpd.conf --to-stdout | grep ^ServerName
ServerName "wallsat67.usersys.redhat.com"

<<< Long time here, probably all of the 7 minutes >>>

real	7m0.350s
user	5m30.999s
sys	1m14.056s
---

Note: we got the result on the screen very quickly; however, since the command keeps scanning the whole archive, the process spends a lot of additional time before it finishes.


> 3. Roughly, how long does this task take?
A: See above.

33G is a huge archive; this one is from my lab machine that I use every day. Sometimes we get scenarios like this, sometimes 10G, 15G, 5G ... it will really depend on the customer environment.



Below we can see the solution. Using "--occurrence=1", once tar finds the first matching member, the process exits and moves on.
---
# time tar zxf config_files.tar.gz etc/httpd/conf/httpd.conf --to-stdout --occurrence=1 | grep ^ServerName
ServerName "wallsat67.usersys.redhat.com"

real	0m0.257s
user	0m0.029s
sys	0m0.046s
---
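
Applied to the validation step, the fix is essentially one extra argument to the tar invocation; a hypothetical Ruby sketch (the actual change merged upstream in Foreman issue 30809 may be structured differently):
---
# Hypothetical sketch of the fixed check; only the added --occurrence=1
# differs from the slow variant, so tar exits as soon as the first matching
# member has been written to stdout.
require 'open3'

def hostname_from_backup(backup_dir)
  stdout, = Open3.capture3(
    'tar', 'zxf', File.join(backup_dir, 'config_files.tar.gz'),
    'etc/httpd/conf/httpd.conf', '--to-stdout', '--occurrence=1'
  )
  line = stdout.lines.find { |l| l.start_with?('ServerName') }
  line&.split(/\s+/, 2)&.last&.delete('"')&.strip
end
---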


Thank you!
Waldirio

Comment 6 Bryan Kearney 2020-09-10 08:03:39 UTC
Upstream bug assigned to wpinheir

Comment 7 Bryan Kearney 2020-12-04 16:04:12 UTC
Moving this bug to POST for triage into Satellite since the upstream issue https://projects.theforeman.org/issues/30809 has been resolved.

Comment 11 errata-xmlrpc 2021-04-21 14:48:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Satellite 6.9 Satellite Maintenance Release), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:1312

