Bug 1070326 (JON3-42) - RFE: Allow AS7 deployments to provide version in the artifact name
Summary: RFE: Allow AS7 deployments to provide version in the artifact name
Alias: JON3-42
Product: JBoss Operations Network
Classification: JBoss
Component: Plugin -- Tomcat, Provisioning, Plugin -- Other, Plugin -- JBoss EAP 6
Version: JON 3.3.0
Hardware: Unspecified
OS: Unspecified
Target Milestone: ER04
: JON 3.3.0
Assignee: Jay Shaughnessy
QA Contact: Armine Hovsepyan
: 1117738
Depends On: 1117505 1117738 1117755 1119781 1123834 1123916 1125343 1136488
Blocks: 1093822
Reported: 2014-02-26 15:42 UTC by Heiko W. Rupp
Modified: 2018-12-05 17:27 UTC (History)
9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Last Closed: 2014-12-11 14:04:12 UTC
Type: Enhancement

Attachments

System ID Priority Status Summary Last Updated
Red Hat Bugzilla 970784 None None None Never
Red Hat Bugzilla 1050014 None None None Never
Red Hat Bugzilla 1112744 None None None Never
Red Hat Knowledge Base (Solution) 395443 None None None Never
Red Hat One Jira Issue Tracker JON3-42 Major Verified Do not individually track applications with the same WEB Context [PRODMGT-469] 2018-06-04 09:16:00 UTC

Internal Links: 970784 1050014 1112744

Description Heiko W. Rupp 2014-02-26 15:42:59 UTC
When a new version of an application is deployed to EAP through JON bundle deployments, it is deployed as, e.g., myapp-1.0.1.war where the previous version was myapp-1.0.0.war. The problem is that JON still has the previous version inventoried, but it shows as down or unavailable since it has been removed. The expectation is that this older version is removed from JON too, and that only the active, most recent version of the deployed WAR is shown.

See also Bug 970784, which deals with the need to explicitly run a discovery after bundle deployment, and Bug 1050014 (bundle deploy in domain mode).

While those are specifically targeted at EAP6, we should try to find a more general mechanism.

Comment 1 Jay Shaughnessy 2014-03-24 21:30:25 UTC
The bundle system itself has no idea that it's replacing an inventoried resource, so I don't really see any way for the bundle deploy to perform this action.

Moreover, agent-side code does not have authority to uninventory resources. So even if we detected the dead resource I'm not sure what we would do about it.

We've had this same situation in different manifestations. It's basically the same as when the RHQ server itself gets upgraded: there is no way to automatically delete the old RHQ Server resource (assuming it had been imported).

In general, we let users decide when to uninventory resources.  Uninventorying a resource is a decision to release all of the collected data for that resource.  That's not a decision we would typically want to make automatically.

Although I can understand the annoyance in this particular use case, I don't think automatic removal is a good idea in general; more significantly, I don't immediately see a way of doing it anyway.

We're talking about being able to identify a DOWN resource as a resource that should be automatically uninventoried.  This seems more like a job for some sort of custom reaper script, run specifically for the issue at hand.

Suggestions welcome.

Comment 2 Heiko W. Rupp 2014-03-25 08:07:37 UTC
Actually, I would rather have the new resource, even with a different key, merged into the existing one, so that new = old based on the context.
We should put some marker in that timeline thingy so that the user knows about it (and if we have the audit subsystem in the future, also add it to audit).

And if we do that for EAP6 only, then we could unify the bundle deployment for standalone to follow the API-based one used for domain mode, and this way perhaps identify what gets deployed and react accordingly.

A solution could also be a server-side plugin that takes a "pattern" (resource of a given type plus some other matching criterion, like the context root trait) and periodically goes through the inventory to check whether there are matching resource pairs, acting accordingly.
If we had the back-channel, we could hook into those "resource added" events and perhaps even intercept them, but that is more a long term effort.

Larry will also investigate the more exact semantics here. On the other hand, we may even offer both semantics if we have a list of mappings (type, match expression, decision), with decision being none (default), delete_old, or merge.

Comment 3 Jay Shaughnessy 2014-04-03 22:05:34 UTC
Lately I feel like I'm against everything, but I don't really like this merge-resource idea. The "dead" resource would need to somehow be recursively applied to the "live" resource. This feels like a very heavy, difficult solution to a problem that is fairly rare and mainly a nuisance. It would possibly need to be applied manually, which would not make it any easier for the user than doing an uninventory.

The best way to keep the legacy data is to ensure that the resource is discovered the same way each time, and therefore is the same resource for different versions of the same app.

I think the real problem here is that the user does not like the way we do discovery for web apps in AS7.  They would like to be able to deploy different versions of the same logical app, with a version identifier in the name, and have discovery realize it's the same resource.  That means we need to come up with the same resource key, independent of version.  That makes sense to me, and it's why we have a version field on resources.  But looking at the AS7 representation of the apps, or maybe it's our representation of AS7 deployments, I can see why this may be an issue.  The Deployment or Subdeployment already has the versioned name, I think, before we create the Web Runtime resource that actually represents the app.

I'll research more and try to find out if a different reskey/discovery mechanism is a viable approach here, but I'm pessimistic.

Otherwise, I'd recommend some sort of server-side reaper plugin for users that find it too tedious to uninventory the resources manually.  If they care about metrics gathered for older versions, then they should just keep those resources around.

Comment 4 Jay Shaughnessy 2014-04-28 16:25:50 UTC
I feel there may always be combinations where the plugin's discovery mechanism, paired with the user's environment, will result in issues like the one described.  In this instance it's the fact that EAP represents web deployments via name in its DMR.  So, if the user insists on incorporating the version into the WAR name, the plugin's discovery mechanism has no real ability to "know" that they are logically the same app, because the names are different, as opposed to the name being the same and the version being different. (Note: a separate issue, I think, is that version is currently set to null in the plugin discovery.)

We could try to enhance the discovery mechanism to strip out versions and so forth, such that the resulting resource keys remained the same.  But it could be brittle; we'd always be chasing the problem after it occurs, and a change in the user's approach or Wildfly's representation could again cause a problem.  Furthermore, knowing they were logically the same doesn't immediately solve the problem; we'd still need to add new features to handle discovery changing resource names, etc.  Moreover, it may end up not even being a good thing to do: the app is in fact different from the prior version and probably should be tracked as a separate/new resource.

So, the problem seems to boil down to an annoyance: for some resource types, some users may not want to see "dead" resources, despite the history they offer, and would instead like to see them automatically uninventoried.

There are maybe a couple of ways to do this.

1) Via some sort of availability trigger

Today it's not possible to distinguish a DOWN resource from one that is physically missing, although perhaps for certain resource types support like this could be useful.  For example, for an EAP deployment, it may be possible to determine the difference between "missing" and "down" (this is conjecture; I'm not sure whether it is, but it seems possible). One idea on how this could work:
- Add a new AvailabilityType like DEAD. DEAD would be reported instead of DOWN if the resource did not exist agent-side.  Server-side, we'd support a system property like "rhq.server.resource-types.purge-dead-resources" that sets the types for which dead resources would be automatically uninventoried.  If enabled for the type we'd uninventory it; otherwise we'd convert to DOWN and store the availability.  Or, we could do the reverse and have a property which prevented the uninventory; in that case uninventory would happen by default and no action would be needed by the user unless they wanted to keep dead resources (for the history, perhaps for comparison).

For example, when we upgrade an RHQ Server the old one is logically DEAD.  If we found that the install directory was gone we could report it as DEAD as opposed to DOWN, and wipe it from inventory.
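As a rough illustration of option 1, the server-side decision might look something like the sketch below. The DEAD value, the property name semantics, and both class/method names are assumptions drawn from this comment, not existing RHQ API:

```java
// Hypothetical sketch of the proposed DEAD-availability handling.
// AvailabilityType here is a simplified stand-in; DEAD and the system
// property are the proposal from this comment, not shipped behavior.
public class DeadAvailabilityPolicy {
    enum AvailabilityType { UP, DOWN, DEAD }

    // Comma-separated list of type names whose DEAD resources get purged,
    // e.g. -Drhq.server.resource-types.purge-dead-resources=JBossAS7:Deployment
    static boolean shouldPurge(String resourceTypeName) {
        String prop = System.getProperty("rhq.server.resource-types.purge-dead-resources", "");
        for (String t : prop.split(",")) {
            if (t.trim().equals(resourceTypeName)) return true;
        }
        return false;
    }

    // Server-side handling of a reported availability: keep DEAD (and purge)
    // if the type is enabled, otherwise downgrade DEAD to DOWN and store it.
    static AvailabilityType resolve(AvailabilityType reported, String typeName) {
        if (reported == AvailabilityType.DEAD && !shouldPurge(typeName)) {
            return AvailabilityType.DOWN;
        }
        return reported;
    }
}
```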

2) Via some sort of server-side reaper

We'd still need to support an environment variable, or some way of defining what needs to be purged. In this case "rhq.server.resource-types.purge-dead-resources" would have to match a type name with an expression, for example "JBossAS7:Deployment:my-app.*".  For those types we'd have to query resources of the same parent whose names matched the regex pattern, then uninventory the DOWN resources, assuming there was one in the set that was UP.

This seems horribly complex and needs a lot from the user. But basically, if you knew you were deploying my-app-1.0 and then my-app-1.1 *and* removing my-app-1.0 agent-side, then this would detect that my-app-1.0 should go away.  It would probably happen as part of data purge.
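To make option 2 concrete, here is a minimal sketch of the matching/selection step. The Resource and Avail types are simplified stand-ins for the real RHQ classes, and the method name is invented for illustration:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.regex.Pattern;

// Hypothetical reaper selection logic: among sibling resources of one
// parent, find the DOWN resources matching the configured pattern,
// but only if at least one matching sibling is UP.
public class DeadResourceReaper {
    enum Avail { UP, DOWN }

    static class Resource {
        final String name;
        final Avail avail;
        Resource(String name, Avail avail) { this.name = name; this.avail = avail; }
    }

    // regex is the user-supplied part of the purge property, e.g. "my-app.*"
    static List<Resource> findDeadSiblings(List<Resource> siblings, String regex) {
        Pattern p = Pattern.compile(regex);
        List<Resource> matching = new ArrayList<>();
        for (Resource r : siblings) {
            if (p.matcher(r.name).matches()) matching.add(r);
        }
        // Require one live matching sibling before purging anything.
        boolean anyUp = matching.stream().anyMatch(r -> r.avail == Avail.UP);
        if (!anyUp) return Collections.emptyList();
        List<Resource> dead = new ArrayList<>();
        for (Resource r : matching) {
            if (r.avail == Avail.DOWN) dead.add(r);
        }
        return dead;
    }
}
```

In the my-app-1.0 / my-app-1.1 scenario above, only my-app-1.0 would be selected, since my-app-1.1 is UP and both match the pattern.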

Although I think this feature is pretty silly, and users should just manually uninventory what they don't want, if we have to do it I'd certainly lean towards option 1, or something better if someone can come up with it.

Comment 5 John Mazzitelli 2014-04-28 17:01:48 UTC
Having a new avail type of DEAD sounds interesting. I would pursue that idea a bit more, flesh out more details, and see how hard it would be to implement and whether it really solves the problem.

Comment 6 Jay Shaughnessy 2014-04-28 18:39:15 UTC
In Comment 4 I wrote, "We could try and enhance the discovery mechanism to strip out versions and so forth, such that the resulting resource keys remained the same.  But it could be brittle, we'd always be chasing the problem after it occurs, and a change in the user's approach or Wildfly's representation could again cause a problem".  But maybe this shouldn't be dismissed so quickly.

Although this could be considered a general issue, the main complaint is about EAP Deployments, particularly those being [bundle] provisioned automatically to new versions and generated with standard maven version extensions.  It may be that we should provide more of a point solution in our AS7 (maybe also AS5?) plugin discovery code that:
  1) trims versions from the resource key and the resource name
  2) uses that version to set the resource version

So, the resource name and, more importantly, the resource key, would not include the version.  There are some questions/complications:
  1) We'd still need to be able to assemble the proper address to make DMR requests.
  2) We'd likely need to handle resource upgrade to change existing names/keys for existing Deployments.

I'm going to look into this further before looking at the more complicated feature described in Comment 4.  That type of enhancement may not be necessary if this "point" change is sufficient.
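A minimal sketch of the trimming idea described above; the regex, class name, and method are illustrative assumptions, not the plugin's actual implementation:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical version-trimming sketch: strip a standard maven version
// suffix from the deployment name so the resource key stays stable across
// versions, and report the stripped portion as the resource version.
public class VersionTrimmer {
    // Matches e.g. "my-app-1.0.1.war" -> base "my-app", version "1.0.1", ext ".war".
    // Also tolerates qualifiers like "-SNAPSHOT".
    private static final Pattern VERSIONED =
        Pattern.compile("^(.*?)-(\\d+(?:\\.\\d+)*(?:[.-][A-Za-z0-9]+)*)(\\.[a-z]+)$");

    /** Returns {resourceKey, version}; version is null if no version suffix found. */
    static String[] split(String deploymentName) {
        Matcher m = VERSIONED.matcher(deploymentName);
        if (m.matches()) {
            // Rejoin base name and extension, e.g. "my-app" + ".war"
            return new String[] { m.group(1) + m.group(3), m.group(2) };
        }
        return new String[] { deploymentName, null };
    }
}
```

With this, myapp-1.0.0.war and myapp-1.0.1.war would both resolve to the key myapp.war, with 1.0.0 and 1.0.1 carried in the version field.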

Comment 7 Jay Shaughnessy 2014-04-30 19:42:26 UTC
I've created Pull Request 26 for review:


On further thought, I'm not sure any resource upgrade work is warranted here.  The next versioned deployment would get the normalized resource key and after that it would work as desired.

Comment 8 Jay Shaughnessy 2014-05-02 18:40:23 UTC
A new RFE for option 1 in Comment 4 has been created as Bug 1093822.

Comment 11 Jay Shaughnessy 2014-06-10 18:17:59 UTC

The PR (https://github.com/rhq-project/rhq/pull/26) has been merged into master.  Not yet setting MODIFIED pending demo and review...

Comment 12 Jay Shaughnessy 2014-06-12 17:42:34 UTC
The work here has been completed.  Further issues should likely be brought up as new BZs.

Comment 13 JBoss JIRA Server 2014-06-12 19:10:46 UTC
Heiko Rupp <hrupp@redhat.com> updated the status of jira JON3-42 to Resolved

Comment 14 Jay Shaughnessy 2014-06-27 20:19:53 UTC
master commit dec8bae46446d4cde46fe13ed76585c2cfc164b8
Author: Jay Shaughnessy <jshaughn@redhat.com>
Date:   Fri Jun 27 16:18:35 2014 -0400

    Adding one more thing to this feature, prevent discovery of siblings
    resolving to the same resource key.  In the somewhat unlikely case that
    two distinct sibling deployments resolve down to the same logical
    deployment, don't let it get past discovery.  For example, if the user
    has app-1.0.war and app-2.0.war and these are *really* different apps (and
    they would probably have to be since EAP would stop deployment if they
    had the same context).  In this case both would be seen as app.war, and
    that is a problem on the RHQ side.  In this situation generate an
    agent log warning that hopefully helps a user resolve the issue.

    Note that resource upgrade already prevents an upgrade of siblings with
    the same key, so this is an analogous change.

Comment 15 Jay Shaughnessy 2014-06-30 19:24:43 UTC
Actually, the commit in Comment 14 should have been for Bug 1112744.

Comment 16 Jay Shaughnessy 2014-07-11 14:09:39 UTC
master commit 697a1313eaed37b8a4c16f192680c3461795306b
Author: Jay Shaughnessy <jshaughn@redhat.com>
Date:   Fri Jul 11 10:07:43 2014 -0400

    I'm an idiot.  Fixing stupid regression.

Comment 17 Jay Shaughnessy 2014-07-22 14:42:47 UTC
*** Bug 1117738 has been marked as a duplicate of this bug. ***

Comment 18 Simeon Pinder 2014-07-31 15:51:53 UTC
Moving to ON_QA as available to test with brew build of DR01: https://brewweb.devel.redhat.com//buildinfo?buildID=373993

Comment 19 Jay Shaughnessy 2014-09-09 20:24:34 UTC
Moving back to ASSIGNED for ER03 due to Bug 1136488.

Comment 21 Mike Foley 2014-09-19 19:15:58 UTC
QE work is documented here.


There are no dependent BZs ... this feature is "TEST COMPLETE"
