Red Hat Bugzilla – Bug 619135
manually adding a resource with bad plugin config hoses inventory for that resource
Last modified: 2013-09-02 03:16:10 EDT
Import just the platform. I then started EAP 5.1 in the "all" config (it was not auto-discovered for me for some reason - I bound it to 127.0.0.99 via run.sh -b 127.0.0.99 -c all).
Try to manually add it, but give a bogus JNP URL (I used "http://192.168.0.99:1099"). You get an error saying the JNP URL is bad. Then try to manually add it with the good JNP URL: it still fails with the same error and the same bad JNP URL.
From here on out, you can't manually add that server again. It isn't in the DB, but it's in the agent's inventory with an inventory status of COMMITTED, a resource ID of 0, and a sync status of NEW.
The only way to fix this is to start the agent with --purgedata to get rid of the persisted inventory.
I've recreated this. The issue seems to occur when the manually added resource component can't be started. The new resource itself passes the manual discovery, meaning the plugin config seems valid (if not correct). The new resource is committed to agent inventory at this point.
Then we make an attempt to start the resource component. If this throws an exception - due to an incorrect plugin config, an inability to create a connection (the underlying resource is down, for example), or any other failure - we exit the manual-add code before syncing the new resource with the server.
This leaves the ghost entry in the agent inventory without any server side knowledge of the resource.
I think the fix would be to allow a failed component start and just proceed. The manually added resource should show up on the server, in a DOWN state. The user can then fix the connection properties or uninventory the poorly defined resource.
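The proposed fix can be sketched as follows. This is a minimal, self-contained illustration of the control-flow change, not the actual RHQ code: the names (Resource, startComponent, manualAdd, and the syncedWithServer flag) are hypothetical stand-ins. The key point is that a failed component start is caught and tolerated, so the sync with the server still happens and the resource shows up as DOWN.

```java
public class ManualAddSketch {
    enum Availability { UP, DOWN }

    static class Resource {
        Availability availability = Availability.DOWN;
        boolean syncedWithServer = false;
    }

    /** Simulates starting the resource component; throws if the plugin config is unusable. */
    static void startComponent(Resource r, boolean configValid) {
        if (!configValid) {
            throw new RuntimeException("cannot connect with the given plugin config");
        }
        r.availability = Availability.UP;
    }

    /** Manual-add flow after the fix: a failed component start no longer aborts the sync. */
    static Resource manualAdd(boolean configValid) {
        Resource r = new Resource();
        try {
            startComponent(r, configValid);
        } catch (RuntimeException e) {
            // Before the patch, the exception exited the manual-add code here,
            // leaving a committed agent-side entry the server never learned about.
            r.availability = Availability.DOWN;
        }
        r.syncedWithServer = true; // always sync, even when the component is DOWN
        return r;
    }

    public static void main(String[] args) {
        Resource bad = manualAdd(false);
        System.out.println(bad.availability + " synced=" + bad.syncedWithServer);
    }
}
```

With this shape, a bad plugin config yields a DOWN resource that is visible on the server, so the user can fix the connection properties or uninventory it rather than being stuck with a ghost entry in the agent inventory.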
Looking at generating a patch for this now...
Created attachment 435122 [details]
proposed patch for this issue
Created attachment 435126 [details]
Same patch with a typo fixed
jshaughn -- has this patch been applied? If so, to which branches?
It is applied to Master.
It is not applied to release-3.0.0.
It looks like this was actually fixed in master, so setting to on_qa.
Verified 10/4/2011 by following the reproduction steps.
Bulk closing of issues that were VERIFIED, had no target release, and whose status changed more than a year ago.