| Summary: | oracle IN clause problem when deleting members from groups | | |
|---|---|---|---|
| Product: | [Other] RHQ Project | Reporter: | John Mazzitelli <mazz> |
| Component: | Core Server | Assignee: | RHQ Project Maintainer <rhq-maint> |
| Status: | CLOSED NOTABUG | QA Contact: | Mike Foley <mfoley> |
| Severity: | low | Docs Contact: | |
| Priority: | low | | |
| Version: | 4.1 | CC: | hrupp |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2011-10-03 15:55:50 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Ahh, never mind - it looks like the only place we end up using these queries is from this chunking code:
while (i < toBeDeletedResourceIds.size()) {
    // process at most 1000 IDs per pass so the IN list stays under Oracle's limit
    int j = i + 1000;
    if (j > toBeDeletedResourceIds.size()) {
        j = toBeDeletedResourceIds.size();
    }
    List<Integer> idsToDelete = toBeDeletedResourceIds.subList(i, j);
    log.debug("== Bounds " + i + ", " + j);
    boolean hasErrors = uninventoryResourcesBulkDelete(overlord, idsToDelete);
    if (hasErrors) {
        throw new IllegalArgumentException("Could not remove resources from their containing groups");
    }
    i = j;
}
We should at least javadoc the queries and all the locations that use them, warning callers not to use them as-is with more than 1000 IDs - the ID lists need to be chunked.
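As a sketch of what that javadoc warning could look like (ResourceGroupQueries is a hypothetical stand-in for the ResourceGroup entity, and the cross-reference in the comment is illustrative, not a guarantee of where callers live):

```java
public class ResourceGroupQueries {
    /**
     * Deletes explicit group membership rows for the given resource IDs.
     * <p>
     * WARNING: do not bind more than 1000 IDs to :resourceIds in a single
     * execution - Oracle limits an IN ( ... ) list to 1000 expressions
     * (ORA-01795). Callers must chunk the ID list before executing this.
     */
    public static final String QUERY_DELETE_EXPLICIT_BY_RESOURCE_IDS =
        "DELETE FROM RHQ_RESOURCE_GROUP_RES_EXP_MAP WHERE RESOURCE_ID IN ( :resourceIds )";
}
```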
|
I see this in the ResourceGroup entity:

public static final String QUERY_DELETE_EXPLICIT_BY_RESOURCE_IDS = "DELETE FROM RHQ_RESOURCE_GROUP_RES_EXP_MAP WHERE RESOURCE_ID IN ( :resourceIds )";
public static final String QUERY_DELETE_IMPLICIT_BY_RESOURCE_IDS = "DELETE FROM RHQ_RESOURCE_GROUP_RES_IMP_MAP WHERE RESOURCE_ID IN ( :resourceIds )";

Anyone using these with a list of more than 1000 resource IDs will hit Oracle's 1000-expression limit on IN lists (ORA-01795). We need to refactor these queries out or at least chunk their usage.
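One way to keep the 1000-ID cap in a single place would be to funnel all callers through a chunking helper. A sketch - ChunkedDelete, deleteChunked, and the functional bulkDelete parameter are hypothetical names, not RHQ code; in practice bulkDelete would wrap the actual query execution against one of the constants above:

```java
import java.util.Collections;
import java.util.List;
import java.util.function.ToIntFunction;

public class ChunkedDelete {
    // Oracle rejects IN lists with more than 1000 expressions (ORA-01795)
    static final int ORACLE_IN_LIST_LIMIT = 1000;

    // invoke the given bulk-delete operation once per chunk of at most
    // 1000 IDs and return the total number of rows reported deleted
    static int deleteChunked(List<Integer> ids, ToIntFunction<List<Integer>> bulkDelete) {
        int deleted = 0;
        int i = 0;
        while (i < ids.size()) {
            int j = Math.min(i + ORACLE_IN_LIST_LIMIT, ids.size());
            deleted += bulkDelete.applyAsInt(ids.subList(i, j));
            i = j;
        }
        return deleted;
    }

    public static void main(String[] args) {
        // 2500 fake IDs -> three chunks of 1000, 1000, and 500
        List<Integer> ids = Collections.nCopies(2500, 42);
        // pretend each ID deletes exactly one row
        int total = deleteChunked(ids, chunk -> chunk.size());
        System.out.println(total); // prints 2500
    }
}
```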