Bug 476021 - Permissions issue when retrieving data from S3 in EC2E
Product: Red Hat Enterprise MRG
Classification: Red Hat
Component: grid
Hardware: All
OS: Linux
Priority: low
Severity: medium
Version: 1.1
Target Milestone: ---
Assigned To: Robert Rati
QA Contact: Jeff Needle
Depends On:
Reported: 2008-12-11 10:57 EST by Robert Rati
Modified: 2009-02-04 11:06 EST (History)
1 user

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2009-02-04 11:06:27 EST
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments

External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2009:0036 normal SHIPPED_LIVE Red Hat Enterprise MRG Grid 1.1 Release 2009-02-04 11:03:49 EST

Description Robert Rati 2008-12-11 10:57:58 EST
Description of problem:
An EC2E job completes, but the finalize hook is unable to place the tarball of results in the routed job's spool directory because the spool directory is no longer owned by the job owner; it is owned by user condor.

Version-Release number of selected component (if applicable):

How reproducible:
Run an EC2E job and have the AMI exit before condor can read the results from SQS.

Steps to Reproduce:
Actual results:
12/11 10:18:31 (pid:13672) Job 3273.0 is finished
12/11 10:18:31 (pid:13672) Job cleanup for 3273.0 will not block, calling jobIsFinished() directly
12/11 10:18:31 (pid:13672) jobIsFinished() completed, calling DestroyProc(3273.0)

12/11 10:19:00 JobRouter (src=3266.7,dest=3273.0,route=Amazon Small): updated job status
12/11 10:19:00 JobRouter (src=3266.7,dest=3273.0,route=Amazon Small): found target job finished

Expected results:

Additional info:
The routed job is completing BEFORE the source job.  That means the EC2 job is completing and shutting down before the status message makes its way back.
When the EC2 job finishes, condor chowns its spool back to condor.condor.
The status that the routed job has completed has not yet been read, and the JobRouter just fires off the cleanup hook, as it should, when the routed job completes.
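The race above can be sketched in miniature: once condor has chowned the spool back to condor.condor, the hook (running as the job owner) can no longer write there and must fall back to the source job's IWD. The following Python sketch is illustrative only; `choose_output_dir` and its arguments are hypothetical names, not the actual hook code.

```python
import os


def choose_output_dir(spool_dir, iwd, job_owner_uid):
    """Pick where a finalize hook could safely unpack results.

    If the routed job's spool directory is still owned by the job
    owner, writing there will succeed.  If condor has already
    chowned the spool back to condor.condor (or the spool is gone),
    fall back to the source job's initial working directory (IWD).
    Illustrative sketch only, not the real MRG Grid hook.
    """
    try:
        if os.stat(spool_dir).st_uid == job_owner_uid:
            return spool_dir
    except FileNotFoundError:
        # Spool already cleaned up; the IWD is the only option left.
        pass
    return iwd
```

This mirrors the eventual fix (Comment 1): rather than race the chown, write results to a directory the job owner is guaranteed to own.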
Comment 1 Robert Rati 2008-12-17 10:14:24 EST
The finalize hook no longer attempts to write to the routed job's spool directory, and instead writes to the source job's IWD.

Fixed in:
Comment 4 errata-xmlrpc 2009-02-04 11:06:27 EST
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

