Bug 535618 (RHQ-2295) - Uninventory, then manually add of the same server resource leaves it in inconsistent state
Summary: Uninventory, then manually add of the same server resource leaves it in inconsistent state
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: RHQ-2295
Product: RHQ Project
Classification: Other
Component: Inventory
Version: 1.3pre
Hardware: All
OS: All
Priority: high
Severity: medium
Target Milestone: ---
Assignee: Charles Crouch
QA Contact: Jeff Weiss
URL: http://jira.rhq-project.org/browse/RH...
Whiteboard:
Depends On:
Blocks:
 
Reported: 2009-08-05 19:07 UTC by Jeff Weiss
Modified: 2023-09-14 01:18 UTC
CC List: 4 users

Fixed In Version: 1.4
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
rev4650
Last Closed: 2014-05-09 15:32:44 UTC
Embargoed:



Description Jeff Weiss 2009-08-05 19:07:00 UTC
How to repeat:

Find a server resource that can be manually added to a platform and that your server has already discovered, like Apache, Postgres, etc. Write down the connection properties for that resource, then uninventory it. Now try to manually add it back with the connection properties you recorded earlier. Be sure to do the manual add right away (when I did it, it was within 1 or 2 minutes of uninventorying).

The message shown in yellow is: "A Apache HTTP Server with the specified connection properties was already in inventory"

You are left on an inventory page for the resource, but the resource isn't shown anywhere else (resource browser, left nav, etc.).
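
The timing window can be illustrated with a small, self-contained Java toy (purely illustrative; none of this is RHQ code): an agent-side rediscovery re-adds the resource key right after the uninventory, so a prompt manual add collides with it.

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class RaceDemo {
    static final Set<String> inventory = ConcurrentHashMap.newKeySet();

    public static void main(String[] args) throws InterruptedException {
        inventory.add("apache /etc/httpd");    // resource discovered earlier
        inventory.remove("apache /etc/httpd"); // user uninventories it

        // Agent-side rediscovery re-reports the resource almost immediately.
        Thread rediscovery = new Thread(() -> inventory.add("apache /etc/httpd"));
        rediscovery.start();
        rediscovery.join();

        // Manual add within a minute or two: the key is already back.
        boolean added = inventory.add("apache /etc/httpd");
        System.out.println(added ? "manually added"
                                 : "error: resource already in inventory");
    }
}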

Comment 1 Greg Hinkle 2009-08-06 16:12:09 UTC
Joe to make sure resources do get properly deleted.

Comment 2 Corey Welton 2009-08-26 18:55:27 UTC
Pushed to 1.4

Comment 3 Red Hat Bugzilla 2009-11-10 21:01:40 UTC
This bug was previously known as http://jira.rhq-project.org/browse/RHQ-2295


Comment 4 Charles Crouch 2010-05-17 21:38:44 UTC
I'm setting this to a higher priority for future investigation, since I think it indicates we're not checking the resource status properly, which would be bad. But this is not a common execution path, so it will not be targeted for 2.4.

Comment 5 Joseph Marques 2010-05-17 22:03:18 UTC
Hmm... this is a long-standing bug. My apologies for not seeing it originally, Jeff / Greg.

-----

Jeff...

The issue here is that uninventorying a server will communicate with the agent responsible for managing that resource.  It will remove that resource from the agent-side inventory...which then (I believe) triggers another auto-discovery scan...which re-discovers and re-reports the resource.

In this case, the error message (depending on timing) might be completely accurate.  If you go back to the auto-discovery portlet after seeing the error message, you'll notice that the resource IS in fact there (in this case, it's in the NEW state and needs to be re-imported).
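
A minimal sketch of the kind of duplicate check that would behave this way, assuming a simple in-memory inventory (the class and method names here are hypothetical, not RHQ's actual manual-add code):

import java.util.List;

enum InventoryStatus { NEW, COMMITTED, UNINVENTORIED }

class Resource {
    final String resourceKey;
    final InventoryStatus status;

    Resource(String resourceKey, InventoryStatus status) {
        this.resourceKey = resourceKey;
        this.status = status;
    }
}

class ManualAddCheck {
    // Rejects a manual add if ANY resource with the same key exists,
    // regardless of its InventoryStatus. If the rediscovery scan has
    // already re-reported the resource as NEW, this check fires even
    // though the user just uninventoried it, hence the message above.
    static boolean isDuplicate(List<Resource> inventory, String key) {
        return inventory.stream().anyMatch(r -> r.resourceKey.equals(key));
    }
}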

Greg...

I don't believe this is an issue with async uninventory (first available in the RHQ 1.3.0 release, shortly before this bug was filed). Part of the in-band work for uninventory is to "destroy" the just-uninventoried resources by doing the following (a code sketch follows the list):

1) Marking the InventoryStatus as UNINVENTORIED (async job processes these)
2) Setting the agent reference to NULL (simulate agent-side deletion)
3) Setting the parent resource to NULL (async delete can occur in any order)
4) Setting the resourceKey to 'deleted' (prevents dup resourceKey during rediscovery)
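
Here is that in-band "destroy" logic as a minimal sketch, written as plain field updates on a hypothetical entity class (illustrative only; the real code lives in the server-side inventory manager):

// Same InventoryStatus values as in the earlier sketch.
enum InventoryStatus { NEW, COMMITTED, UNINVENTORIED }

class Agent { }

class ResourceEntity {
    InventoryStatus inventoryStatus;
    Agent agent;
    ResourceEntity parentResource;
    String resourceKey;

    // Applies the four in-band "destroy" steps listed above.
    void markUninventoried() {
        // 1) Flag the row so the async uninventory job picks it up.
        this.inventoryStatus = InventoryStatus.UNINVENTORIED;
        // 2) Drop the agent reference to simulate agent-side deletion.
        this.agent = null;
        // 3) Drop the parent reference so async deletes can run in any order.
        this.parentResource = null;
        // 4) Overwrite the key so a rediscovered resource with the same real
        //    key does not collide during rediscovery.
        this.resourceKey = "deleted";
    }
}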

-----

So I think the next steps are to kick this back to QA, re-run the reproduction steps, and determine whether the above analysis (the resource is automatically rediscovered and can be found in the AD portlet) is correct. If it is, then the very least we can do is adjust the error message to be more accurate, and point (or redirect) the user to the correct place depending on which InventoryStatus the resource is in.
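
For example, the message adjustment could look something like this sketch (hypothetical method; the enum mirrors only the InventoryStatus values relevant here):

enum InventoryStatus { NEW, COMMITTED, UNINVENTORIED }

class ManualAddMessages {
    // Chooses a message matching the existing resource's actual state,
    // rather than always claiming it "was already in inventory".
    static String messageFor(InventoryStatus status, String typeName) {
        switch (status) {
            case NEW:
                return "A " + typeName + " with the specified connection "
                        + "properties was rediscovered and is awaiting "
                        + "import in the Auto-Discovery portlet.";
            case UNINVENTORIED:
                return "A " + typeName + " with the specified connection "
                        + "properties is still being uninventoried; "
                        + "please retry in a few minutes.";
            default: // COMMITTED
                return "A " + typeName + " with the specified connection "
                        + "properties is already in inventory.";
        }
    }
}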

Comment 6 Corey Welton 2010-08-30 17:39:04 UTC
Joseph/Charles:  fix or doco this?

Comment 10 Red Hat Bugzilla 2023-09-14 01:18:46 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

