Bug 737531 - RHUA repo sync can run /tmp/ out of space.
Summary: RHUA repo sync can run /tmp/ out of space.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Update Infrastructure for Cloud Providers
Classification: Red Hat
Component: RHUA
Version: 2.0.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Jeff Ortel
QA Contact: mkovacik
URL:
Whiteboard:
Depends On:
Blocks: tracker-rhui-2.0.1
 
Reported: 2011-09-12 12:53 UTC by James Slagle
Modified: 2017-03-01 22:05 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-03-01 22:05:48 UTC
Target Upstream Version:
Embargoed:


Attachments
httpd error log (139.60 KB, application/octet-stream)
2011-09-12 12:58 UTC, James Slagle


Links
System ID: Red Hat Product Errata RHBA-2017:0367
Private: no
Priority: normal
Status: SHIPPED_LIVE
Summary: Red Hat Update Infrastructure 3.0 Release
Last Updated: 2017-03-02 03:05:22 UTC

Description James Slagle 2011-09-12 12:53:29 UTC
Repo syncs are staged in /tmp/ before being moved over to /var/lib/pulp.  If there are multiple large syncs going on at once, this can easily run /tmp/ out of space, since /tmp is often mounted as tmpfs or the / partition on a system is small.

The installation guide says to make /var/lib/pulp big enough to hold all of your repositories, but says nothing about /tmp/.
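
For illustration, one generic way to avoid depending on /tmp at all is to stage downloads on the same filesystem as their destination.  A minimal sketch (illustrative only, not the actual grinder/pulp code):

import os
import tempfile

def stage_and_publish(data, dest_path):
    # staging next to the destination means a large sync can't fill a
    # small (or tmpfs-backed) /tmp, and the final rename is an atomic
    # same-device operation instead of a cross-device copy
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(dest_path))
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.rename(tmp_path, dest_path)
    except:
        os.unlink(tmp_path)
        raise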

Comment 1 James Slagle 2011-09-12 12:58:31 UTC
Created attachment 522696 [details]
httpd error log

The log shows /tmp filling up and running out of space.

Comment 2 James Slagle 2011-09-12 19:10:06 UTC
After further investigation, I noticed that df and du on the system were reporting very different output.

I googled around for why and found a helpful link:
http://www.cyberciti.biz/tips/freebsd-why-command-df-and-du-reports-different-output.html

grinder caches the repo metadata in /tmp, and it's possible it's keeping the file descriptors open even after the repo sync is completed.
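
If so, that would explain the df/du mismatch: the space held by a deleted-but-still-open file is still counted by df, but the file is invisible to du.  A standalone sketch of the effect (illustration only, not RHUA code):

import os
import tempfile

# create a file in /tmp, write some data, then delete it while
# keeping the descriptor open
fd, path = tempfile.mkstemp(dir="/tmp")
os.write(fd, b"x" * (10 * 1024 * 1024))  # 10 MB
os.unlink(path)
# df still counts the 10 MB as used; du and ls can no longer see the
# file, and lsof shows it as "(deleted)"
os.close(fd)  # only now is the space actually freed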

Comment 3 James Slagle 2011-09-12 19:24:44 UTC
Here's output from a system:

[root@rhui2 /]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvde1            5.7G  3.5G  2.2G  62% /
none                  3.5G     0  3.5G   0% /dev/shm
/dev/xvdl1            296G   99G  182G  36% /var/lib/pulp
/dev/xvdm1             20G  1.2G   18G   7% /var/lib/mongodb
/root/deploy/RHEL-6.1-RHUI-2.0-20110727.2-Server-x86_64-DVD1.iso
                       62M   62M     0 100% /mnt/rhuiso
[root@rhui2 /]# du -h --max-depth 1 --exclude ./var/lib/pulp --exclude ./var/lib/mongodb
24K	./srv
0	./sys
4.0K	./home
0	./misc
27M	./etc
14M	./sbin
63M	./root
31M	./lib64
du: cannot access `./proc/20412/task/20412/fd/4': No such file or directory
du: cannot access `./proc/20412/task/20412/fdinfo/4': No such file or directory
du: cannot access `./proc/20412/fd/4': No such file or directory
du: cannot access `./proc/20412/fdinfo/4': No such file or directory
0	./proc
0	./selinux
0	./net
4.0K	./media
109M	./lib
61M	./mnt
8.9M	./bin
98M	./tmp
136K	./dev
1.6G	./usr
4.0K	./opt
365M	./var
16K	./lost+found
20M	./boot
2.4G	.


Notice df reports 3.5 GB used on /, but du only finds 2.4 GB used.

Here's just a snippet of lsof output from the main wsgi process (pid 12548):
httpd   12548 apache  166ur  REG             202,65  8749056   35995 /tmp/tmpmPbbR2/f35bc58a174945d03a0fe9564cf378e7004381f3-primary.xml.gz.sqlite (deleted)
httpd   12548 apache  170ur  REG             202,65  8749056   35999 /tmp/tmp4K4fsI/f35bc58a174945d03a0fe9564cf378e7004381f3-primary.xml.gz.sqlite (deleted)
httpd   12548 apache  171ur  REG             202,65  6194176   36010 /tmp/tmpqZI0rT/98c3c5d7c133589e180d37bd847bd23756470a0c-primary.xml.gz.sqlite (deleted)
httpd   12548 apache  172ur  REG             202,65  1917952   36012 /tmp/tmp_9DsOx/3449a6b4a4c05bb6443eefbd0ef497feb4a156e5-primary.xml.gz.sqlite (deleted)
httpd   12548 apache  173ur  REG             202,65  1917952   36009 /tmp/temp_pulp_repov0CJHy/3449a6b4a4c05bb6443eefbd0ef497feb4a156e5-primary.xml.gz.sqlite (deleted)
httpd   12548 apache  174ur  REG             202,65  3040256   35984 /tmp/tmpMQJDMo/4a8801e3c4906b60a0e9493d38ea53600a493afa-primary.xml.gz.sqlite (deleted)


Notice the files are marked (deleted), and indeed the first one is no longer on the filesystem:
[root@rhui2 /]# ls /tmp/tmpmPbbR2/f35bc58a174945d03a0fe9564cf378e7004381f3-primary.xml.gz.sqlite
ls: cannot access /tmp/tmpmPbbR2/f35bc58a174945d03a0fe9564cf378e7004381f3-primary.xml.gz.sqlite: No such file or directory

However, the process still has them open, and we can copy the full file out of /proc.

cp /proc/12548/fd/166 /tmp/primary.xml.gz.sqlite

It's the full sqlite db:
[root@rhui2 /]# sqlite3 /tmp/primary.xml.gz.sqlite
SQLite version 3.6.20
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> select count(*) from files;
3506
sqlite>

Comment 4 Jeff Ortel 2011-09-12 22:44:35 UTC
Looks like (in grinder) RepoFetch creates a YumRepository object, which contains a YumSqlitePackageSack object.  Then RepoFetch.getPackageList() is called, which calls YumSqlitePackageSack.populate().  This opens the sqlite connection(s).  The problem is that YumFetchGrinder calls RepoFetch.setupRepo() but never calls YumRepository.close().  I traced through YumRepository.close() in the debugger and verified that it closes the sqlite DB connections.

So in class YumFetchGrinder:

def fetchYumRepo(...):
    try:
        self.yumFetch.setupRepo()
    except:
        self.fetchPkgs.stop()
        raise

Needs to be:

def fetchYumRepo(...):
    try:
        self.yumFetch.setupRepo()
    finally:
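        # the finally block runs whether setupRepo() succeeded or raised;
        # an exception still propagates after the cleanup below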
        self.fetchPkgs.stop()
        self.yumFetch.closeRepo()


In class RepoFetch, added:

def closeRepo(self):
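    # closes the YumRepository, which in turn closes the sqlite
    # package sack connections opened by populate()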
    self.repo.close()

Testing the fix now.

Comment 5 James Slagle 2011-09-13 13:16:54 UTC
I tagged and rebuilt grinder this morning as grinder-0.0.113-1.el6.noarch.  Installed that and restarted pulp-server.

I'm still seeing the same issue: both disk space and memory usage continue to climb as it churns through the repo syncs.  It's only the primary.xml.gz.sqlite files that are being kept open.

The main process has 40 primary.xml.gz.sqlite files open, which corresponds exactly to how many repos it has processed.  It's finished 36 so far and is working on the next 4:
[root@ec2-107-20-207-64 cron.d]# lsof -p 5379 | grep primary.xml.gz.sqlite | wc -l
40
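
For reference, the same check can be scripted against /proc instead of lsof (a hypothetical helper, not part of grinder; run as root to read apache's fd table):

import os

def open_deleted_sqlite(pid):
    # return the fds of `pid` that still point at deleted
    # primary.xml.gz.sqlite files
    fd_dir = "/proc/%d/fd" % pid
    hits = []
    for fd in os.listdir(fd_dir):
        try:
            target = os.readlink(os.path.join(fd_dir, fd))
        except OSError:
            continue  # the fd went away while we were scanning
        if "primary.xml.gz.sqlite" in target and target.endswith("(deleted)"):
            hits.append((fd, target))
    return hits

print(len(open_deleted_sqlite(5379)))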

Comment 6 James Slagle 2011-09-13 13:17:32 UTC
Full lsof output:

[root@ec2-107-20-207-64 cron.d]# lsof -p 5379 | grep primary.xml.gz.sqlite
httpd   5379 apache   21ur  REG             202,65    20480   35624 /tmp/temp_pulp_repoHqdtM4/primary.xml.gz.sqlite (deleted)
httpd   5379 apache   27ur  REG             202,65    20480   35643 /tmp/temp_pulp_repoU7a8ne/primary.xml.gz.sqlite (deleted)
httpd   5379 apache   28ur  REG             202,65    20480   35634 /tmp/temp_pulp_repodknXrh/primary.xml.gz.sqlite (deleted)
httpd   5379 apache   31ur  REG             202,65    20480   35607 /tmp/temp_pulp_repoqxAx7x/primary.xml.gz.sqlite (deleted)
httpd   5379 apache   32ur  REG             202,65    20480   35628 /tmp/temp_pulp_repoGfiWoU/primary.xml.gz.sqlite (deleted)
httpd   5379 apache   33ur  REG             202,65    20480   35625 /tmp/temp_pulp_repooGFqsB/primary.xml.gz.sqlite (deleted)
httpd   5379 apache   34ur  REG             202,65    44032   35686 /tmp/temp_pulp_repoRycazB/7dbd924b47c052ae4d83cb0a7cf7abb48779abc3-primary.xml.gz.sqlite (deleted)
httpd   5379 apache   35ur  REG             202,65    20480   35637 /tmp/temp_pulp_repowOCKrt/primary.xml.gz.sqlite (deleted)
httpd   5379 apache   36ur  REG             202,65    20480   35659 /tmp/temp_pulp_repouo3OZ6/primary.xml.gz.sqlite (deleted)
httpd   5379 apache   37ur  REG             202,65    20480   35644 /tmp/temp_pulp_repoWbfwiC/primary.xml.gz.sqlite (deleted)
httpd   5379 apache   38ur  REG             202,65   148480   35632 /tmp/temp_pulp_repoJK7qCt/c03ad509a5080a22213586d564cf0bc8418f12cd-primary.xml.gz.sqlite (deleted)
httpd   5379 apache   39ur  REG             202,65    20480   35641 /tmp/temp_pulp_repozzEjdO/primary.xml.gz.sqlite (deleted)
httpd   5379 apache   40ur  REG             202,65    47104   35649 /tmp/temp_pulp_repoeS0Xok/b1ff1e50ecba24a62c1ca57c6af8a73dfb881eb9-primary.xml.gz.sqlite (deleted)
httpd   5379 apache   41ur  REG             202,65   156672   35684 /tmp/temp_pulp_repo7B8WmF/b49877f6db9a96e55e8e3dad9cb90c06523a7562-primary.xml.gz.sqlite (deleted)
httpd   5379 apache   42ur  REG             202,65   148480   35658 /tmp/temp_pulp_repoR8cbuq/c03ad509a5080a22213586d564cf0bc8418f12cd-primary.xml.gz.sqlite (deleted)
httpd   5379 apache   43ur  REG             202,65   149504   35671 /tmp/temp_pulp_repogKF2Ia/26fb48415bb177260b2d48b53796648d3d1b251e-primary.xml.gz.sqlite (deleted)
httpd   5379 apache   44ur  REG             202,65    47104   35663 /tmp/temp_pulp_repo4cDtIX/854fa8bcd3c607836333117ddb5d7ede3967409e-primary.xml.gz.sqlite (deleted)
httpd   5379 apache   45ur  REG             202,65    20480   35670 /tmp/temp_pulp_repoKZdacp/primary.xml.gz.sqlite (deleted)
httpd   5379 apache   46ur  REG             202,65   149504   35655 /tmp/temp_pulp_repor0gyXQ/26fb48415bb177260b2d48b53796648d3d1b251e-primary.xml.gz.sqlite (deleted)
httpd   5379 apache   47ur  REG             202,65    20480   35653 /tmp/temp_pulp_repoKDBfor/primary.xml.gz.sqlite (deleted)
httpd   5379 apache   48ur  REG             202,65    20480   35646 /tmp/temp_pulp_repouo2AiZ/primary.xml.gz.sqlite (deleted)
httpd   5379 apache   49ur  REG             202,65    44032   35665 /tmp/temp_pulp_repoR1xmAy/7dbd924b47c052ae4d83cb0a7cf7abb48779abc3-primary.xml.gz.sqlite (deleted)
httpd   5379 apache   50ur  REG             202,65    44032   35662 /tmp/temp_pulp_repoqpP98k/75bc6ca6a3cee71c10e1c65ac2bd58d22b27c492-primary.xml.gz.sqlite (deleted)
httpd   5379 apache   51ur  REG             202,65    20480   35680 /tmp/temp_pulp_repougfMGq/primary.xml.gz.sqlite (deleted)
httpd   5379 apache   52ur  REG             202,65    20480   35681 /tmp/temp_pulp_repoK7mrOS/primary.xml.gz.sqlite (deleted)
httpd   5379 apache   53ur  REG             202,65   142336   35673 /tmp/temp_pulp_repojV95ov/637c9aa63a3903aac249cbf6bc8045ed2c592bc4-primary.xml.gz.sqlite (deleted)
httpd   5379 apache   54ur  REG             202,65    20480   35666 /tmp/temp_pulp_reponCzo5D/primary.xml.gz.sqlite (deleted)
httpd   5379 apache   55ur  REG             202,65   156672   35675 /tmp/temp_pulp_repoYR7WEw/b49877f6db9a96e55e8e3dad9cb90c06523a7562-primary.xml.gz.sqlite (deleted)
httpd   5379 apache   56ur  REG             202,65   142336   35695 /tmp/temp_pulp_repopWrMTH/637c9aa63a3903aac249cbf6bc8045ed2c592bc4-primary.xml.gz.sqlite (deleted)
httpd   5379 apache   58ur  REG             202,65    47104   35689 /tmp/temp_pulp_repoY8umBL/d61a3a33e19386a07ba85945f5df2eb91d707c14-primary.xml.gz.sqlite (deleted)
httpd   5379 apache   60ur  REG             202,65    48128   35688 /tmp/temp_pulp_repoThpyAp/3ba1ac9cef36ada33ea459a50e8200d1cbb3abfa-primary.xml.gz.sqlite (deleted)
httpd   5379 apache   62ur  REG             202,65    47104   35708 /tmp/temp_pulp_repo2NDboc/d61a3a33e19386a07ba85945f5df2eb91d707c14-primary.xml.gz.sqlite (deleted)
httpd   5379 apache   63ur  REG             202,65    48128   35701 /tmp/temp_pulp_repoYpcCrc/3ba1ac9cef36ada33ea459a50e8200d1cbb3abfa-primary.xml.gz.sqlite (deleted)
httpd   5379 apache   75ur  REG             202,65    47104   35651 /tmp/temp_pulp_repoQx9JT0/b1ff1e50ecba24a62c1ca57c6af8a73dfb881eb9-primary.xml.gz.sqlite (deleted)
httpd   5379 apache   76ur  REG             202,65    47104   35657 /tmp/temp_pulp_repoLhmPXH/854fa8bcd3c607836333117ddb5d7ede3967409e-primary.xml.gz.sqlite (deleted)
httpd   5379 apache   80ur  REG             202,65 32225280   35736 /tmp/tmpZZmMEs/4bf97c5be84e894005c81772ed963aba43d9cb5f-primary.xml.gz.sqlite
httpd   5379 apache   81ur  REG             202,65    44032   35706 /tmp/temp_pulp_repoXqOotZ/75bc6ca6a3cee71c10e1c65ac2bd58d22b27c492-primary.xml.gz.sqlite (deleted)
httpd   5379 apache   85ur  REG             202,65 31955968   35737 /tmp/tmp3IA_z_/d4ff61bef3ebbdbd27922056ab06e17aaf072993c7e85326839221450ee58389-primary.xml.gz.sqlite
httpd   5379 apache   95ur  REG             202,65 40440832   35738 /tmp/tmp9bZQz9/cd9e660b33f1bf845292e9e19e01ec21a9428ab5-primary.xml.gz.sqlite
httpd   5379 apache  104ur  REG             202,65 40366080   35739 /tmp/tmpTd9Rbc/eef9c6d19499820a6e01fd66d8645988ed316649fcab4af53375c7411f495314-primary.xml.gz.sqlite

Comment 7 Jeff Ortel 2011-09-13 18:25:27 UTC
pulp/server/util.py's get_repo_packages() is also not closing the YumRepository, which leaks package sack objects.  This fix is slightly more complicated because get_repo_packages() was returning actual yum package objects, which hold a reference to the repository.  The fix is to have it return data objects instead and have get_repo_packages() close the repo after using it.
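
A rough sketch of that pattern (illustrative names and fields, not the actual pulp change; assumes a repo object exposing getPackageList() and close() as described in comment 4):

class PackageData(object):
    # plain value object: copies the fields callers need and keeps no
    # reference back to the YumRepository or its package sack
    def __init__(self, pkg):
        self.name = pkg.name
        self.version = pkg.version
        self.release = pkg.release
        self.arch = pkg.arch

def get_repo_packages(repo):
    # copy the data out, then close the repo so its sqlite
    # connections are released immediately
    try:
        return [PackageData(p) for p in repo.getPackageList()]
    finally:
        repo.close()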

Testing the fix now.


Note: it looks like server/api/depsolver.py is also not closing the YumRepository, though that does not affect the sync.  This should also be fixed.

Comment 8 Jeff Ortel 2011-09-13 18:28:21 UTC
pulp fix (hash): ebf754347617ec5a87125a596c81719e48302ec9

Comment 9 James Slagle 2011-09-13 20:57:34 UTC
I've cherry-picked that commit into the rhui branch, and from there pulled it into the pulp-ec2 repository on axiom.  Rebuilding pulp-ec2 in brew now; I will start a full sync again to test the fix on the RHUA.

Comment 10 James Slagle 2011-09-14 13:55:13 UTC
Applied this update to my test RHUA and let the full sync run overnight.  

At 3:12 am, all the syncs died.  It looks like apache was restarted with a SIGHUP at that time (probably by logrotate).

Comment 11 James Slagle 2011-09-14 14:06:12 UTC
2011-09-14 09:33:55,439 15048:140719716620032: grinder.ParallelFetch:ERROR: ParallelFetch:337 Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/grinder/ParallelFetch.py", line 332, in run
    result = self.fetcher.fetchItem(itemInfo)
  File "/usr/lib/python2.6/site-packages/grinder/activeobject.py", line 91, in __call__
    return self.object(self, *args, **kwargs)
  File "/usr/lib/python2.6/site-packages/grinder/activeobject.py", line 284, in __call__
    return self.__call(method, args, kwargs)
  File "/usr/lib/python2.6/site-packages/grinder/activeobject.py", line 260, in __call
    return self.__rmi(method.name, args, kwargs)
  File "/usr/lib/python2.6/site-packages/grinder/activeobject.py", line 138, in __rmi
    packet = pickle.load(p.stdout)
ValueError: I/O operation on closed file

Comment 12 wes hayutin 2011-10-17 20:01:59 UTC
Set tracker bug 746803.

Comment 14 Sachin Ghai 2011-10-22 07:29:11 UTC
Verified this defect with the following RHUI ISO:

RHEL-6.1-RHUI-2.0.1-20111017.0-Server-x86_64-DVD1.iso

I started a repo sync on the RHUA for the following repos:

Red Hat Repositories
  Red Hat Update Infrastructure 2.0 (RPMs) (6.0-i386)
  Red Hat Update Infrastructure 2.0 (RPMs) (6.1-i386)
  Red Hat Update Infrastructure 2.0 (RPMs) (6Server-i386)
  Red Hat Update Infrastructure 2.0 (RPMs) (6.0-x86_64)
  Red Hat Update Infrastructure 2.0 (RPMs) (6Server-x86_64)
  Red Hat Update Infrastructure 2.0 (RPMs) (6.1-x86_64)
  Red Hat Enterprise Linux Server 6 (RPMs) (6.0-i386)
  Red Hat Enterprise Linux Server 6 (RPMs) (6.0-x86_64)
  Red Hat Enterprise Linux Server 6 (RPMs) (6Server-x86_64)
  Red Hat Enterprise Linux Server 6 (RPMs) (6.1-x86_64)
  Red Hat Enterprise Linux Server 6 (RPMs) (6Server-i386)
  Red Hat Enterprise Linux Server 6 (RPMs) (6.1-i386)
  Red Hat Enterprise Linux Server 5 (RPMs) (5.6-i386)
  Red Hat Enterprise Linux Server 5 (RPMs) (5.7-i386)
  Red Hat Enterprise Linux Server 5 (RPMs) (5.6-x86_64)
  Red Hat Enterprise Linux Server 5 (RPMs) (5.7-x86_64)
  Red Hat Enterprise Linux Server 6 Optional (RPMs) (6.0-i386)
  Red Hat Enterprise Linux Server 6 Optional (RPMs) (6.0-x86_64)
  Red Hat Enterprise Linux Server 6 Optional (RPMs) (6Server-i386)
  Red Hat Enterprise Linux Server 6 Optional (RPMs) (6.1-x86_64)
  Red Hat Enterprise Linux Server 6 Optional (RPMs) (6Server-x86_64)
  Red Hat Enterprise Linux Server 6 Optional (RPMs) (6.1-i386)
  Red Hat Enterprise Linux Server 5 (RPMs) (5Server-i386)
  Red Hat Enterprise Linux Server 5 (RPMs) (5Server-x86_64)


tmp usage during sync:
========================

[root@dhcp201-152 /]# du -h --max-depth=1
112M ./lib
1.3G ./usr
0 ./net
16K ./lost+found
5.9G ./var
14M ./sbin
211M ./tmp
4.0K ./home
4.0K ./opt
0 ./selinux
du: cannot access `./proc/23354/task/23354/fd/4': No such file or directory
du: cannot access `./proc/23354/task/23354/fdinfo/4': No such file or directory
du: cannot access `./proc/23354/fd/4': No such file or directory
du: cannot access `./proc/23354/fdinfo/4': No such file or directory
0 ./proc
26M ./lib64
7.7M ./bin
24M ./etc
0 ./misc
4.0K ./cgroup
192K ./dev
46M ./root
0 ./sys
41M ./mnt
4.0K ./media
21M ./boot
20K ./srv
7.7G .



Later snapshot (sghai, 2011-10-19 18:31:27):

[root@dhcp201-152 /]# du -h --max-depth=1
112M ./lib
1.3G ./usr
0 ./net
16K ./lost+found
8.9G ./var
14M ./sbin
211M ./tmp
4.0K ./home
4.0K ./opt
0 ./selinux
du: cannot access `./proc/32667/task/32667/fd/4': No such file or directory
du: cannot access `./proc/32667/task/32667/fdinfo/4': No such file or directory
du: cannot access `./proc/32667/fd/4': No such file or directory
du: cannot access `./proc/32667/fdinfo/4': No such file or directory
0 ./proc
26M ./lib64
7.7M ./bin
24M ./etc
0 ./misc
4.0K ./cgroup
192K ./dev
46M ./root
0 ./sys
41M ./mnt
4.0K ./media
21M ./boot
20K ./srv
11G .
[root@dhcp201-152 /]#


All repos synced successfully, and no errors so far.

-= Repository Synchronization Status =-

Last Refreshed: 10:19:29
(updated every 5 seconds, ctrl+c to exit)

Next Sync                    Last Sync                    Last Result         
------------------------------------------------------------------------------
Red Hat Enterprise Linux Server 5 (RPMs) (5.6-i386)
10-22-2011 11:15             10-22-2011 00:27             Success    

Red Hat Enterprise Linux Server 5 (RPMs) (5.6-x86_64)
10-22-2011 11:15             10-22-2011 00:05             Success    

Red Hat Enterprise Linux Server 5 (RPMs) (5.7-i386)
10-22-2011 11:15             10-22-2011 00:40             Success    

Red Hat Enterprise Linux Server 5 (RPMs) (5.7-x86_64)
10-22-2011 11:15             10-22-2011 01:09             Success    

Red Hat Enterprise Linux Server 5 (RPMs) (5Server-i386)
10-22-2011 11:15             10-21-2011 23:47             Success    

Red Hat Enterprise Linux Server 5 (RPMs) (5Server-x86_64)
10-22-2011 11:15             10-22-2011 09:04             Success    

Red Hat Enterprise Linux Server 6 (RPMs) (6.0-i386)
10-22-2011 10:53             10-22-2011 05:15             Success    

Red Hat Enterprise Linux Server 6 (RPMs) (6.0-x86_64)
10-22-2011 10:53             10-22-2011 05:06             Success    

Red Hat Enterprise Linux Server 6 (RPMs) (6.1-i386)
10-22-2011 10:53             10-22-2011 06:49             Success    

Red Hat Enterprise Linux Server 6 (RPMs) (6.1-x86_64)
10-22-2011 10:53             10-22-2011 08:33             Success    

Red Hat Enterprise Linux Server 6 (RPMs) (6Server-i386)
10-22-2011 10:53             10-22-2011 08:44             Success    

Red Hat Enterprise Linux Server 6 (RPMs) (6Server-x86_64)
10-22-2011 10:53             10-22-2011 05:06             Success    

Red Hat Enterprise Linux Server 6 Optional (RPMs) (6.0-i386)
10-22-2011 11:15             10-22-2011 00:58             Success    

Red Hat Enterprise Linux Server 6 Optional (RPMs) (6.0-x86_64)
10-22-2011 11:15             10-22-2011 01:08             Success    

Red Hat Enterprise Linux Server 6 Optional (RPMs) (6.1-i386)
10-22-2011 11:15             10-22-2011 00:53             Success    

Red Hat Enterprise Linux Server 6 Optional (RPMs) (6.1-x86_64)
10-22-2011 11:15             10-22-2011 01:09             Success    

Red Hat Enterprise Linux Server 6 Optional (RPMs) (6Server-i386)
10-22-2011 11:15             10-22-2011 01:02             Success    

Red Hat Enterprise Linux Server 6 Optional (RPMs) (6Server-x86_64)
10-22-2011 11:15             10-22-2011 00:56             Success    

Red Hat Update Infrastructure 2.0 (RPMs) (6.0-i386)
10-22-2011 15:38             10-22-2011 09:40             Success    

Red Hat Update Infrastructure 2.0 (RPMs) (6.0-x86_64)
10-22-2011 15:38             10-22-2011 09:43             Success    

Red Hat Update Infrastructure 2.0 (RPMs) (6.1-i386)
10-22-2011 15:38             10-22-2011 09:40             Success    

Red Hat Update Infrastructure 2.0 (RPMs) (6.1-x86_64)
10-22-2011 15:38             10-22-2011 09:45             Success    

Red Hat Update Infrastructure 2.0 (RPMs) (6Server-i386)
10-22-2011 15:38             10-22-2011 09:42             Success    

Red Hat Update Infrastructure 2.0 (RPMs) (6Server-x86_64)
10-22-2011 15:38             10-22-2011 09:45             Success    


                                  Connected: dhcp201-152.englab.pnq.redhat.com
------------------------------------------------------------------------------


[root@dhcp201-152 ~]# lsof | grep tmp
httpd      5766    apache  DEL       REG              252,3               394253 /tmp/ffikVZK9U
httpd      5766    apache   22u      REG              252,3      4096     394253 /tmp/ffikVZK9U (deleted)
mongod    28121   mongodb    6u     unix 0xffff88003be63080       0t0   11537743 /tmp/mongodb-27017.sock
mongod    28121   mongodb    8u     unix 0xffff8800378be380       0t0   11537747 /tmp/mongodb-28017.sock
[root@dhcp201-152 ~]# 

/tmp space after sync completion:

[root@dhcp201-152 /]# du -h --max-depth=1
112M	./lib
1.3G	./usr
0	./net
16K	./lost+found
63G	./var
14M	./sbin
244M	./tmp
4.0K	./home
4.0K	./opt
0	./selinux
du: cannot access `./proc/6756/task/6756/fd/4': No such file or directory
du: cannot access `./proc/6756/task/6756/fdinfo/4': No such file or directory
du: cannot access `./proc/6756/fd/4': No such file or directory
du: cannot access `./proc/6756/fdinfo/4': No such file or directory
0	./proc
26M	./lib64
7.7M	./bin
24M	./etc
0	./misc
4.0K	./cgroup
192K	./dev
46M	./root
0	./sys
41M	./mnt
4.0K	./media
21M	./boot
20K	./srv
65G	.
[root@dhcp201-152 /]# 

^Crhui (sync) => 




------------------------------------------------------------------------------
             -= Red Hat Update Infrastructure Management Tool =-


-= CDS Synchronization Status =-

Last Refreshed: 12:54:50
(updated every 5 seconds, ctrl+c to exit)


cds171 ...................................................... [  UP  ]
cds207 ...................................................... [  UP  ]


Next Sync                    Last Sync                    Last Result         
------------------------------------------------------------------------------
cds171
10-22-2011 17:13             10-22-2011 12:27             Success    

cds207
10-22-2011 17:11             10-22-2011 12:20             Success    


                                  Connected: dhcp201-152.englab.pnq.redhat.com
------------------------------------------------------------------------------

All repos synced successfully.  I monitored the /tmp usage and all log files during and after the repo sync; the reported error is not reproducible.

Both CDSes synced as well, and I didn't see any errors in gofer.log related to /tmp, so I'm moving this to VERIFIED.

Comment 18 errata-xmlrpc 2017-03-01 22:05:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:0367

