In HibernateDetachUtility.nullOutFieldsByFieldAccess, we do this:

    Class tmpClass = object.getClass();
    List<Field> fieldsToClean = new ArrayList<Field>();
    while (tmpClass != null && tmpClass != Object.class) {
        Collections.addAll(fieldsToClean, tmpClass.getDeclaredFields());
        tmpClass = tmpClass.getSuperclass();
    }

We should be more selective about what goes into fieldsToClean. We should check each field with this:

    if (java.lang.reflect.Modifier.isFinal(field.getModifiers())
            && java.lang.reflect.Modifier.isStatic(field.getModifiers())) {
        // this field is static final - no need to cleanse it
    }

It's quite possible we should not cleanse pure "static" variables either, whether or not they are final. Need to think about that some more.
I think that's probably true; we're not passing back static fields defined in the class. Also, this may be a decent area for a Map of Class to Fields: why inspect the class each time, when we're always getting back the same set of valid Fields?
We should also skip "transient" fields.
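Putting the suggestions so far together, a minimal sketch of the selective field collection might look like the following. The class and method names here are hypothetical illustrations, not the actual RHQ code:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

public class FieldFilterSketch {
    // Hypothetical helper combining the filters discussed above: walk the
    // class hierarchy as before, but skip static final and transient fields.
    static List<Field> collectFieldsToClean(Object object) {
        List<Field> fieldsToClean = new ArrayList<Field>();
        Class<?> tmpClass = object.getClass();
        while (tmpClass != null && tmpClass != Object.class) {
            for (Field field : tmpClass.getDeclaredFields()) {
                int mods = field.getModifiers();
                if (Modifier.isStatic(mods) && Modifier.isFinal(mods)) {
                    continue; // static final - no need to cleanse
                }
                if (Modifier.isTransient(mods)) {
                    continue; // transient - excluded from serialization anyway
                }
                fieldsToClean.add(field);
            }
            tmpClass = tmpClass.getSuperclass();
        }
        return fieldsToClean;
    }
}
```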
(In reply to comment #1)
> Also, this may be a decent area for a Map of Class to Fields, why inspect the
> class each time, we're always getting the same set of valid Fields, I think.

AFAIK, the JRE already does its own caching of this data. Earlier versions of the JRE did a poor job of this, but later versions have a highly efficient reflection API.
(In reply to comment #3)
> (In reply to comment #1)
> > Also, this may be a decent area for a Map of Class to Fields, why inspect the
> > class each time, we're always getting the same set of valid Fields, I think.
>
> AFAIK, the JRE already does its own caching of this data. Earlier versions of
> the JRE did a poor job of this, but later versions have a highly efficient
> reflection API.

That said, since we are stripping out fields we don't want to process, we could cache only the fields we do want to process. For now, I won't do that, but it could be a place for further optimization.
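If we did pursue that optimization later, a rough sketch of caching the per-class field list could look like this. The class and method names are hypothetical, and the filtering step is elided for brevity:

```java
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class FieldCacheSketch {
    // Hypothetical cache keyed by Class; the field list for a given class
    // never changes at runtime, so it is computed at most once per class.
    private static final Map<Class<?>, List<Field>> CACHE =
            new ConcurrentHashMap<Class<?>, List<Field>>();

    static List<Field> fieldsFor(Class<?> clazz) {
        List<Field> cached = CACHE.get(clazz);
        if (cached != null) {
            return cached;
        }
        List<Field> fields = new ArrayList<Field>();
        Class<?> tmp = clazz;
        while (tmp != null && tmp != Object.class) {
            // The real code would filter out static final / transient fields here.
            Collections.addAll(fields, tmp.getDeclaredFields());
            tmp = tmp.getSuperclass();
        }
        CACHE.put(clazz, fields);
        return fields;
    }
}
```

Note that under concurrent first access two threads could each compute the list, with a later put winning; since both lists are identical that is harmless for a read-mostly cache like this.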
commit - 3e3e382

This is a code change/optimization. Nothing for QA to test.

Note that I changed the warning message that is spit out if the util recurses too deep:

    LOG.warn("Recursed too deep [" + depth + " > " + depthAllowed
        + "], will not attempt to detach object of type ["
        + ((value != null) ? value.getClass().getName() : "N/A")
        + "]. This may cause serialization errors later. If so, "
        + "you can try to work around this by setting the system property ["
        + DEPTH_ALLOWED_SYSPROP + "] to a value higher than [" + depth + "].");

Also note that we can now configure how deep we will allow this utility to go. This is in case we hit another bug like bug #702390 - we'll at least have a workaround. To configure the depth allowed, put this in rhq-server.properties:

    rhq.server.hibernate-detach-utility.depth-allowed=100

where 100 can be any depth you want. The default is 50.
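Reading that property could be as simple as the sketch below. The property name and the default of 50 come from the comment above; the wrapper class and method names are hypothetical:

```java
public class DepthConfigSketch {
    // Property name as described in the commit comment above.
    static final String DEPTH_ALLOWED_SYSPROP =
            "rhq.server.hibernate-detach-utility.depth-allowed";

    // Hypothetical helper: read the allowed recursion depth, defaulting to 50.
    // Integer.getInteger returns the default if the property is missing or
    // is not a parseable integer.
    static int depthAllowed() {
        return Integer.getInteger(DEPTH_ALLOWED_SYSPROP, 50);
    }
}
```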
Added for triage for RHQ 4.0.1
cherry picked 3e3e382 to release-4.0.0 branch, commit 610b23bf0b88470c76727b81a6c9bc032ab3f147
internal refactoring. expressly untestable in itself, as documented above.
Bulk closing of issues that were VERIFIED, had no target release and where the status changed more than a year ago.