Bug 1412027

Summary: /tmp fills up easily when downloading builds for a large advisory
Product: [Community] rpm-test-trigger
Reporter: Dan Callaghan <dcallagh>
Component: general
Assignee: beaker-dev-list
Status: MODIFIED
Severity: unspecified
Priority: unspecified
Version: unreleased
CC: jhutar, jorris
Type: Bug

Description Dan Callaghan 2017-01-11 02:22:07 UTC
On Fedora /tmp is a tmpfs, which means it is stored in memory and limited to half of the available physical memory. In our current deployment that means /tmp is limited to 4GB. Testing a single advisory can easily fill that up, for example when testing advisory 26123:

2017-01-11 02:13:21,542 rpmtesttrigger.trigger ERROR [Errno 28] No space left on device
Traceback (most recent call last):
  File "/usr/lib/python3.5/site-packages/rpmtesttrigger/build.py", line 263, in fetch_builds_lazy
    f.write(chunk)
OSError: [Errno 28] No space left on device
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/lib/python3.5/site-packages/rpmtesttrigger/trigger.py", line 209, in trigger_builds
    for e_id in errata_ids])
  File "/usr/lib64/python3.5/asyncio/futures.py", line 361, in __iter__
    yield self  # This tells Task to wait for completion.
  File "/usr/lib64/python3.5/asyncio/tasks.py", line 296, in _wakeup
    future.result()
  File "/usr/lib64/python3.5/asyncio/futures.py", line 274, in result
    raise self._exception
  File "/usr/lib64/python3.5/asyncio/tasks.py", line 241, in _step
    result = coro.throw(exc)
  File "/usr/lib/python3.5/site-packages/rpmtesttrigger/trigger.py", line 168, in get_builds_for_errata
    config, aiohttp_session, rpm_download_info, directory)
  File "/usr/lib/python3.5/site-packages/rpmtesttrigger/build.py", line 243, in download_rpms_from_koji
    await asyncio.gather(*downloads)
  File "/usr/lib64/python3.5/asyncio/futures.py", line 361, in __iter__
    yield self  # This tells Task to wait for completion.
  File "/usr/lib64/python3.5/asyncio/tasks.py", line 296, in _wakeup
    future.result()
  File "/usr/lib64/python3.5/asyncio/futures.py", line 274, in result
    raise self._exception
  File "/usr/lib64/python3.5/asyncio/tasks.py", line 239, in _step
    result = coro.send(None)
  File "/usr/lib/python3.5/site-packages/rpmtesttrigger/build.py", line 263, in fetch_builds_lazy
    f.write(chunk)
OSError: [Errno 28] No space left on device

rpm-test-trigger needs to use /var/tmp for downloading the builds to be checked.
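
For context, the tmpfs limit described above can be confirmed at runtime. A minimal sketch (illustrative only, not part of rpm-test-trigger):

import shutil

# Illustrative check: on a tmpfs-backed /tmp, total capacity defaults to
# half of physical memory, so an 8GB host reports roughly 4GB total here.
total, used, free = shutil.disk_usage('/tmp')
print('/tmp total: %.1f GiB, free: %.1f GiB' % (total / 2**30, free / 2**30))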

Comment 1 Tyrone Abdy 2017-01-12 04:19:23 UTC
So I think, as we discussed, this would be related to rpmdeplint being an older version.

Do you think this is still valid? (Maybe we could find some way to limit disk usage in /tmp?)
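
For illustration, one way to bound disk usage would be to cap how many downloads run concurrently with a semaphore. A sketch under that assumption (fetch_one is a hypothetical stand-in for the real download coroutine in rpmtesttrigger.build, not its actual API):

import asyncio

# Illustrative sketch only: bounding concurrent downloads also bounds peak
# scratch usage in /tmp, since at most N large builds are on disk at once.
semaphore = asyncio.Semaphore(2)  # at most 2 large builds in flight

async def fetch_one(url):
    await asyncio.sleep(0)  # placeholder for the actual aiohttp download

async def bounded_fetch(url):
    async with semaphore:
        await fetch_one(url)

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(
    *[bounded_fetch('rpm-%d' % i) for i in range(10)]))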

Comment 2 Dan Callaghan 2017-01-12 04:59:40 UTC
rpmdeplint is already using /var/tmp, so that is unrelated. The issue here is just that the builds on the advisory may themselves be quite large (like kernel, which is the one I saw), and a few running concurrently will easily exceed 4GB.

Comment 3 Dan Callaghan 2017-01-12 05:00:36 UTC
BTW, I have currently hacked this in on our staging server. I have the patch locally but have held off posting it to avoid rebase hell with all the other patches we currently have in flight.

Comment 4 Jan Hutař 2017-08-07 10:23:14 UTC
Shouldn't this be handled by monitoring of each machine running rpm-test-trigger?

Dan, do you still have that patch?

Comment 5 Dan Callaghan 2017-08-08 04:35:27 UTC
Yes: https://gerrit.beaker-project.org/5779

It's not a question of monitoring. We simply need to use /var/tmp instead of /tmp. /tmp is not suitable for large files.
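
A minimal sketch of that approach (the real change is in the Gerrit patch above; the prefix name here is illustrative):

import tempfile

# Illustrative sketch: create scratch directories under disk-backed
# /var/tmp instead of the tmpfs-backed default /tmp.
with tempfile.TemporaryDirectory(prefix='rpm-test-trigger-',
                                 dir='/var/tmp') as directory:
    pass  # download RPMs into `directory` here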