+++ This bug is an upstream to downstream clone. The original bug is: +++
+++   bug 1408652 +++
======================================================================

Description of problem:

Currently we don't set the Java heap size explicitly. This means that the
Java virtual machine will use the default size, which may be as large as
1/4 of the total memory of the machine.

Steps to Reproduce:
1. Check the amount of virtual memory before and after the change in a
   scale environment.

Actual results:

Expected results:
Reduce the amount of virtual memory used by DWH.

Additional info:

(Originally by Shirly Radco)
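For context on the "1/4 of total memory" figure: most modern JVMs size their default maximum heap at roughly a quarter of physical RAM. A minimal sketch (Linux-only, an approximation computed from /proc/meminfo rather than queried from the JVM itself) of what that default would come to on a given machine:

```shell
# Approximate the JVM's default max heap (~1/4 of physical RAM) from
# /proc/meminfo. This illustrates how much memory an unconfigured dwhd
# JVM could claim; it is an estimate, not a query of the actual JVM.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
default_max_heap_mb=$(( mem_kb / 4 / 1024 ))
echo "Approximate default JVM max heap: ${default_max_heap_mb} MiB"
```

On a 16 GiB machine this prints roughly 4096 MiB, which is why pinning the heap with -Xms/-Xmx matters at scale.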
If you plan to backport to 4.0.z, please change the milestone as well. (Originally by Yaniv Dary)
4.0.6 was the last oVirt 4.0 release; please re-target this bug. (Originally by Sandro Bonazzola)
Sanity done, moving to scale team as decided in the other thread. DWH is limited to 1GB memory:

[root@ ] # ps aux -ww | grep 4_0.HistoryETL
ovirt 17643 3.2 5.8 3093244 106848 ? Sl 15:18 0:08 ovirt-engine-dwhd -Dorg.ovirt.engine.dwh.settings=/tmp/tmpXTeGxM/settings.properties -Xms1g -Xmx1g -classpath /usr/share/ovirt-engine-dwh/lib/*::/usr/share/java/dom4j.jar:/usr/share/java/apache-commons-collections.jar:/usr/share/java/postgresql-jdbc.jar ovirt_engine_dwh.historyetl_4_0.HistoryETL --context=Default
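The 1GB limit shows up as the -Xms1g -Xmx1g flags in the command line above. A small helper sketch (the function name and sample string are illustrative, not part of oVirt) for pulling those flags out of a command line like the ps output:

```shell
# Illustrative helper: extract the -Xms/-Xmx heap flags from a process
# command line, e.g. to confirm the configured heap limits.
extract_heap_flags() {
    printf '%s\n' "$1" | grep -oE -e '-Xm[sx][0-9]+[gGmMkK]'
}

# Sample invocation against a shortened copy of the ps output above:
extract_heap_flags "ovirt-engine-dwhd -Xms1g -Xmx1g -classpath /usr/share/ovirt-engine-dwh/lib/*"
# prints:
#   -Xms1g
#   -Xmx1g
```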
(In reply to Lukas Svaty from comment #4)
> sanity done, moving to scale team as decided in other thread, dwh is limited
> to 1GB memory
>
> [root@ ] # ps aux -ww | grep 4_0.HistoryETL
> ovirt 17643 3.2 5.8 3093244 106848 ? Sl 15:18 0:08 ovirt-engine-dwhd
> -Dorg.ovirt.engine.dwh.settings=/tmp/tmpXTeGxM/settings.properties -Xms1g
> -Xmx1g -classpath /usr/share/ovirt-engine-dwh/lib/*::/usr/share/java/dom4j.jar:/usr/share/java/apache-commons-collections.jar:/usr/share/java/postgresql-jdbc.jar
> ovirt_engine_dwh.historyetl_4_0.HistoryETL --context=Default

Just to better understand what is expected in terms of scale: is the new
configuration now a default and verified correct? Do we need to check the
configuration's impact on scale?
(In reply to eberman from comment #5)
> (In reply to Lukas Svaty from comment #4)
> > sanity done, moving to scale team as decided in other thread, dwh is
> > limited to 1GB memory
> > [...]
>
> just to better understand what is expected to be found in terms of scale?
> the new configuration is now a default and verified correct ?

Yes

> do we need to check the configuration impact on scale?

Yes
(In reply to Yaniv Dary from comment #6)
> (In reply to eberman from comment #5)
> > (In reply to Lukas Svaty from comment #4)
> > > [...]
> >
> > just to better understand what is expected to be found in terms of scale?
> > the new configuration is now a default and verified correct ?
>
> Yes
>
> > do we need to check the configuration impact on scale?
>
> Yes

Please add the exact scenario to test this configuration (topology, expected
behavior, etc.) so that we can prepare for it.
(In reply to eberman from comment #7)
> (In reply to Yaniv Dary from comment #6)
> > [...]
>
> please add exact scenario to test this configuration and topology , expected
> behavior etc. in order for us to prepare for this

This was moved to an email thread. Please provide the info once it is decided.
For now, removing needinfo.
The system team ran the performance tests in the end. This is the environment:

1. 1 DC, 1 CL, 1 SD (NFS), 5 hosts (4 RHEL, 1 RHEV-H), 150 VMs
2. Multiple CRUD actions were executed on the setup, as these are stored in DWH
3. No performance issues were found during this period nor afterwards (1 day)

Keep in mind this does not increase the performance of DWH; it just limits the
heap to 1GB, which was verified in comment #4. As all the components in focus
(DWH, dashboard) were working correctly during this period, moving to VERIFIED.
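Checks like the one above boil down to sampling the daemon's resident memory over time and confirming it stays near the expected cap. A minimal sketch of that sampling (the PID here is this shell itself, purely for illustration; the thread sampled the dwhd process, and a real run would sleep between samples):

```shell
# Sample a process's resident set size (RSS) in kB via ps.
# A real check would target the dwhd PID and space samples out over time.
sample_rss_kb() {
    ps -o rss= -p "$1" | tr -d ' '
}

pid=$$   # placeholder: sample this shell itself
for i in 1 2 3; do
    echo "sample $i: $(sample_rss_kb "$pid") kB RSS"
done
```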
In continuation of comment #9, the RHV scale team asked to verify that there
are no memory leaks in heap usage over more than 48 hours, with the following
topology:

DataSet comparison (scale env):
  vms_disks  2069
  vms         970
  hosts       250

No leaks were detected for dwhd.
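A leak check over a long window reduces to comparing early and late memory samples. A sketch of that comparison (file name, sample values, and the percentage form are illustrative choices, not taken from the thread):

```shell
# Given a file of periodic RSS samples (kB, one per line), report the
# percentage growth between the first and last reading. Sustained growth
# over a long window would suggest a leak.
rss_growth_pct() {
    first=$(head -n1 "$1")
    last=$(tail -n1 "$1")
    echo $(( (last - first) * 100 / first ))
}

# Synthetic 48h-style sample file for illustration:
printf '100000\n101000\n102000\n' > /tmp/rss_samples.txt
rss_growth_pct /tmp/rss_samples.txt   # prints 2 (2% growth over the window)
```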
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2017-0540.html