Bug 1394791
Summary: | cronjob to update bugs.cloud.gluster.org gets killed | ||
---|---|---|---|
Product: | [Community] GlusterFS | Reporter: | Niels de Vos <ndevos> |
Component: | project-infrastructure | Assignee: | bugs <bugs> |
Status: | CLOSED CURRENTRELEASE | QA Contact: | |
Severity: | high | Docs Contact: | |
Priority: | medium | ||
Version: | mainline | CC: | bugs, gluster-infra, misc, nigelb |
Target Milestone: | --- | Keywords: | Triaged |
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | Doc Type: | If docs needed, set a value | |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2016-11-15 12:17:31 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Niels de Vos
2016-11-14 13:14:10 UTC
This also prevents getting the weekly "Bugs with incorrect status" email. That content is also generated by a cronjob and sent to me for forwarding.

Could you make run-report.sh use a PID file, so that duplicate instances of the report exit immediately? It looks like there were plenty of instances of the report running on the machine, which is why it got killed.

How do you mean "PID file"? One of the scripts runs nightly, the other weekly. Both are expected to finish well before the next execution starts. Do you know why the scripts were not finishing? Maybe we can address that somehow.

I mean literally that: write the PID of the current script into a file at the start, and clean it up at the end. Before writing the PID file, check whether a stale one exists; if it does, check whether that process is still running, and if it is, don't run a second time.

I don't know why the scripts were not finishing. The only useful thing to do is make sure it doesn't happen again.

The machine has been restarted and you should be able to run the cron jobs now. There is no swap; I guess this is likely an issue.
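The PID-file guard described above can be sketched roughly as follows. This is a minimal illustration, not the actual run-report.sh: the `PIDFILE` path and the report step are assumptions.

```shell
#!/bin/sh
# Sketch of a PID-file guard. PIDFILE path is an assumption for
# illustration; a real deployment would pick a stable location.
PIDFILE="${PIDFILE:-/tmp/run-report.pid}"

if [ -f "$PIDFILE" ]; then
    oldpid=$(cat "$PIDFILE")
    # kill -0 sends no signal; it only checks whether the PID exists.
    if kill -0 "$oldpid" 2>/dev/null; then
        echo "run-report.sh already running (PID $oldpid); exiting." >&2
        exit 1
    fi
    # The recorded process is gone, so the PID file is stale;
    # fall through and overwrite it.
fi

echo "$$" > "$PIDFILE"
# Remove the PID file on any exit, including errors.
trap 'rm -f "$PIDFILE"' EXIT

# ... generate and send the report here ...
```

An alternative that avoids the stale-file check entirely is to wrap the cron entry with util-linux's `flock`, e.g. `flock -n /tmp/run-report.lock run-report.sh`, which skips the run if the lock is already held.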