15:55:48 < pil> This is not consistent. The previous restart action took some time, but completed. This time it failed and I got an error: "Timed out : did not complete after 27017ms (the timeout period was [0] ms)"
15:56:02 < pil> Ah, now it is marked as success
15:56:08 < pil> even worse ..
15:58:05 < mazz> if a timeout occurs, but the plugin actually succeeded, I believe the agent will come in and tell the server and the server will change that timeout to success
15:58:50 < mazz> because if the plugin really did complete successfully, even after a timeout occurred, we should indicate that (to let the user know it really did complete)
16:00:31 < ghinkle> if the op times out
16:00:39 < ghinkle> there will be nothing to check if it succeeded
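The flip-flop mazz describes (a provisional TIMED OUT status being replaced by SUCCESS once the agent reports in) can be sketched as a simple reconciliation rule. This is a minimal, hypothetical illustration, not RHQ's actual server code; the names `OpStatus` and `HistoryReconciler` are invented for the example.

```java
// Hypothetical sketch of server-side status reconciliation: the agent has
// final say, so a terminal result arriving late replaces any provisional
// status the server already recorded. Names are illustrative, not RHQ's API.
enum OpStatus { IN_PROGRESS, SUCCESS, FAILURE, TIMED_OUT, CANCELED }

final class HistoryReconciler {
    /**
     * Returns the status that should end up in the operation history.
     * TIMED_OUT and CANCELED are treated as provisional server-side marks;
     * a terminal result (SUCCESS/FAILURE) reported by the agent overrides
     * them, which is why the UI can appear to flip-flop.
     */
    static OpStatus reconcile(OpStatus serverRecorded, OpStatus agentReported) {
        boolean provisional = serverRecorded == OpStatus.TIMED_OUT
                || serverRecorded == OpStatus.CANCELED
                || serverRecorded == OpStatus.IN_PROGRESS;
        boolean terminal = agentReported == OpStatus.SUCCESS
                || agentReported == OpStatus.FAILURE;
        return (provisional && terminal) ? agentReported : serverRecorded;
    }
}
```

Under this rule, ghinkle's point still holds: if the operation truly times out and the agent never produces a terminal result, nothing ever arrives to override TIMED_OUT.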
Heiko, I think this is working as intended. There are certain rare circumstances under which the agent side and the server side are not completely in sync.

If the user pressed "cancel" for some operation, the operation history will be marked as canceled and the cancel request will make its way down to the agent. However, the agent-side operation has to be written in a way that responds to this request; if it isn't, the operation will proceed to some terminal state (success or failure), and that result will be sent up to the server and override the cancel request. There is also the possibility that the cancel request went out but the agent-side operation has already completed (and the result is en route to the server).

In all of these cases, you might see the operation history flip-flop. But, again, this is rare. Quite rare. And it shouldn't be looked at as a bad thing: the server tries to mark the status correctly, but in the end the agent has the final say (if there is a disagreement).

If you have a specific, reproducible use case that you feel doesn't match the above description, let me know. For now, I am putting this into NEED INFO state. Heiko, please close or re-open as you see appropriate according to my feedback.
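The requirement that "the agent-side operation has to be written in a way that responds to this request" amounts to cooperative cancellation. Below is a minimal sketch, assuming a hypothetical operation class (the names `CancelAwareOperation`, `requestCancel`, and `run` are invented for illustration and are not RHQ's plugin API): the operation checks a cancel flag between units of work, so a cancel request is only honored if it arrives before the work finishes.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of a cancel-aware agent-side operation. If the cancel
// request arrives after the last check, the operation still reaches a
// terminal state ("SUCCESS"), which would then override the server's
// provisional CANCELED status -- the flip-flop described above.
final class CancelAwareOperation {
    private final AtomicBoolean cancelRequested = new AtomicBoolean(false);

    /** Called when the server's cancel request reaches the agent. */
    void requestCancel() {
        cancelRequested.set(true);
    }

    /** Runs the operation as a series of work units, checking for
     *  cancellation between each one (cooperative cancellation point). */
    String run(int steps) {
        for (int i = 0; i < steps; i++) {
            if (cancelRequested.get()) {
                return "CANCELED";
            }
            // ... perform one unit of work here ...
        }
        return "SUCCESS";
    }
}
```

An operation that never checks the flag simply runs to completion, which is exactly the case where the terminal result overrides the server's cancel mark.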
I don't really think this applies here, but as I can't remember any details from March and have not seen the issue when running operations lately, I think we can just close it.
This bug was previously known as http://jira.rhq-project.org/browse/RHQ-100
This bug is related to RHQ-628