Description of problem:

When executing "foreman-maintain restore <path to the files>", validating the hostname takes a long time. Below is the process used to do it:

---
Validate hostname is the same as backup:
28185 root 20 0 124740 2528 1072 S 10.8 0.0 0:15.40 tar zxf /backup/satellite-backup-2019-08-05-09-41-02/config_files.tar.gz etc/httpd/conf/httpd.conf --to-stdout
---

Version-Release number of selected component (if applicable):
6.5.z

How reproducible:
100%

Steps to Reproduce:
1. Create a backup
2. Restore the same backup
3.

Actual results:
The bigger the config archive, the longer the hostname validation takes.

Expected results:
It should be faster (maybe an external file containing just the hostname would be enough).

Additional info:
Hi all,

To complement, the foreman-maintain logs:

---
D, [2019-09-02 21:01:22-0400 #23867] DEBUG -- : Running command tar zxf /backup/satellite-backup-2019-08-05-09-41-02/config_files.tar.gz etc/httpd/conf/httpd.conf --to-stdout | grep ServerName | awk {'print $2'} | tr -d '"' with stdin nil
---
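The pipeline in the log above can be sketched in isolation. This is a minimal demo of the parsing steps only (grep | awk | tr); the sample line is made up to mimic the relevant httpd.conf directive, since the real one is extracted from the tarball:

```shell
# Made-up sample line standing in for the extracted etc/httpd/conf/httpd.conf
line='ServerName "wallsat67.usersys.redhat.com"'

# Same post-extraction parsing foreman-maintain logs: keep the ServerName
# line, take the second field, then strip the surrounding double quotes
hostname=$(printf '%s\n' "$line" | grep ServerName | awk '{print $2}' | tr -d '"')
echo "$hostname"   # -> wallsat67.usersys.redhat.com
```

The parsing itself is instantaneous; the cost reported in this bug is entirely in the tar extraction feeding it.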
From the BZ description it looks like extracting the etc/httpd/conf/httpd.conf file from the backup tarball is taking a lot of time. Can we have an idea about:

1. Where is the backup archive stored? (type of storage)
2. How big is the config_files.tar.gz file?
3. Roughly how long does this task take?
Hello,

Thanks for your response.

> 1. Where is the backup archive stored? (type of storage)

A: It could be a regular filesystem or NFS. In my test, local storage was used.

> 2. How big is the config_files.tar.gz file?

A: This will vary according to the customer environment, but here is an example:

---
# ll
total 4396656
-rw-r--r--. 1 root root 28920002265 Sep  9 16:21 config_files.tar.gz
...
# du -hs config_files.tar.gz
33G config_files.tar.gz
---

Then let's get the information from this file:

---
# time tar zxf config_files.tar.gz etc/httpd/conf/httpd.conf --to-stdout | grep ^ServerName
ServerName "wallsat67.usersys.redhat.com"   << Long time here, probably all of the 7 minutes >>

real    7m0.350s
user    5m30.999s
sys     1m14.056s
---

Note: the result appears on the screen quickly, but since the command keeps scanning the whole archive, the process still spends the additional time.

> 3. A rough estimate on how long does this task take?

A: See above. 33G is a huge archive; this one is from the lab machine I use every day. Sometimes we see scenarios like this, sometimes 10G, 15G, 5G... it really depends on the customer environment.

Below we can see the solution. With "--occurrence=1", once tar finds the first match, it stops reading the archive and moves on:

---
# time tar zxf config_files.tar.gz etc/httpd/conf/httpd.conf --to-stdout --occurrence=1 | grep ^ServerName
ServerName "wallsat67.usersys.redhat.com"

real    0m0.257s
user    0m0.029s
sys     0m0.046s
---

Thank you!
Waldirio
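The fix above can be reproduced end to end on throwaway data. This is a minimal sketch (all paths come from mktemp and are made up for the demo): it builds a small config_files.tar.gz with httpd.conf archived first, then a filler member standing in for the rest of the config files, and shows that GNU tar's --occurrence=1 extracts the hostname line without needing to scan past the first match:

```shell
# Build a demo tarball: httpd.conf first, then filler (names are hypothetical)
tmp=$(mktemp -d)
mkdir -p "$tmp/etc/httpd/conf"
printf 'ServerName "demo.example.com"\n' > "$tmp/etc/httpd/conf/httpd.conf"
# 1 MiB filler member archived after httpd.conf, standing in for the
# gigabytes of other config files in a real backup
dd if=/dev/zero of="$tmp/filler.bin" bs=1024 count=1024 2>/dev/null
tar -C "$tmp" -czf "$tmp/config_files.tar.gz" etc/httpd/conf/httpd.conf filler.bin

# --occurrence=1 is valid with --extract plus an explicit member name (GNU tar);
# tar stops reading the archive after the first matching member
server_line=$(tar zxf "$tmp/config_files.tar.gz" etc/httpd/conf/httpd.conf \
  --to-stdout --occurrence=1 | grep ^ServerName)
echo "$server_line"
rm -rf "$tmp"
```

On a real 33G archive the member order is whatever the backup wrote, but since etc/httpd/conf/httpd.conf sits near the front, stopping at the first occurrence is what turns the 7-minute scan into a sub-second one.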
Upstream bug assigned to wpinheir
Moving this bug to POST for triage into Satellite since the upstream issue https://projects.theforeman.org/issues/30809 has been resolved.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Satellite 6.9 Satellite Maintenance Release), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:1312