Description of problem:

The start of the Engine service in JBoss gets interrupted after 5 minutes by the com.arjuna.ats.arjuna Transaction Reaper. The aborted transaction was processing the OVFs from the stateless snapshots:

(...)
2018-05-16 17:48:25,427+02 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 43) WFLYUT0021: Registered web context: '/ovirt-engine/api' for server 'default-server'
2018-05-16 17:48:26,196+02 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 56) WFLYUT0021: Registered web context: '/ovirt-engine/sso' for server 'default-server'
2018-05-16 17:53:25,338+02 WARN  [com.arjuna.ats.arjuna] (Transaction Reaper) ARJUNA012117: TransactionReaper::check timeout for TX 0:ffff939c027c:-60225362:5afc52c0:1b4 in state  RUN
2018-05-16 17:53:25,338+02 WARN  [com.arjuna.ats.arjuna] (Transaction Reaper) ARJUNA012117: TransactionReaper::check timeout for TX 0:ffff939c027c:-60225362:5afc52c0:1b7 in state  RUN
2018-05-16 17:53:25,340+02 WARN  [com.arjuna.ats.arjuna] (Transaction Reaper Worker 0) ARJUNA012121: TransactionReaper::doCancellations worker Thread[Transaction Reaper Worker 0,5,main] successfully canceled TX 0:ffff939c027c:-60225362:5afc52c0:1b4
2018-05-16 17:53:25,340+02 WARN  [com.arjuna.ats.arjuna] (Transaction Reaper Worker 0) ARJUNA012095: Abort of action id 0:ffff939c027c:-60225362:5afc52c0:1b7 invoked while multiple threads active within it.
2018-05-16 17:53:25,342+02 WARN  [com.arjuna.ats.arjuna] (Transaction Reaper Worker 0) ARJUNA012381: Action id 0:ffff939c027c:-60225362:5afc52c0:1b7 completed with multiple threads - thread ServerService Thread Pool -- 45 was in progress with
com.sun.org.apache.xml.internal.dtm.ref.dom2dtm.DOM2DTM.nextNode(DOM2DTM.java:524)
com.sun.org.apache.xml.internal.dtm.ref.dom2dtm.DOM2DTM.getHandleFromNode(DOM2DTM.java:680)
com.sun.org.apache.xml.internal.dtm.ref.dom2dtm.DOM2DTM.getHandleOfNode(DOM2DTM.java:732)
com.sun.org.apache.xml.internal.dtm.ref.DTMManagerDefault.getDTMHandleFromNode(DTMManagerDefault.java:575)
com.sun.org.apache.xpath.internal.XPathContext.getDTMHandleFromNode(XPathContext.java:183)
com.sun.org.apache.xpath.internal.XPath.execute(XPath.java:306)
com.sun.org.apache.xpath.internal.jaxp.XPathImpl.eval(XPathImpl.java:206)
com.sun.org.apache.xpath.internal.jaxp.XPathImpl.evaluate(XPathImpl.java:271)
org.ovirt.engine.core.utils.ovf.xml.XmlNode.selectSingleNode(XmlNode.java:29)
org.ovirt.engine.core.utils.ovf.OvfReader.selectSingleNode(OvfReader.java:1128)
org.ovirt.engine.core.utils.ovf.OvfReader.readVmDevice(OvfReader.java:269)
org.ovirt.engine.core.utils.ovf.OvfReader.readManagedVmDevice(OvfReader.java:209)
org.ovirt.engine.core.utils.ovf.OvfReader.readCdItem(OvfReader.java:419)
org.ovirt.engine.core.utils.ovf.OvfReader.readHardwareSection(OvfReader.java:367)
org.ovirt.engine.core.utils.ovf.OvfReader.readGeneralData(OvfReader.java:559)
org.ovirt.engine.core.utils.ovf.OvfReader.buildVirtualSystem(OvfReader.java:205)
org.ovirt.engine.core.utils.ovf.IOvfBuilder.build(IOvfBuilder.java:30)
org.ovirt.engine.core.utils.ovf.OvfManager.importVm(OvfManager.java:58)
org.ovirt.engine.core.bll.storage.ovfstore.OvfHelper.readVmFromOvf(OvfHelper.java:71)
org.ovirt.engine.core.bll.snapshots.SnapshotsManager.getVmConfigurationInStatelessSnapshotOfVm(SnapshotsManager.java:680)
org.ovirt.engine.core.bll.network.macpool.MacsUsedAcrossWholeSystem$$Lambda$756/1208388850.apply(Unknown Source)
(...)
There are 1969 stateless snapshots in the DB:

engine=# select count(vm_id) from snapshots where snapshot_type='STATELESS';
 count
-------
  1969

How reproducible:
Always

Steps to Reproduce:
1. Create a large number of stateless snapshots
2. Restart ovirt-engine
With this amount of snapshots (~2000), if each conversion in the code below [1] takes 150 ms, we are already reaching the default TX timeout. The code fetches and processes each OVF sequentially:

[1]
Stream<VM> statelessSnapshotsOfRunningVMs =
        idsOfRunningStatelessVMs.map(snapshotsManager::getVmConfigurationInStatelessSnapshotOfVm)
                .filter(Optional::isPresent)
                .map(Optional::get);

Generally it's a perfect candidate for parallel stream handling (see the sketch after this comment).

Another question: how come they reached this amount of stale snapshots?

I guess as an easy fix you can just increase the TX timeout and (hopefully you can) remove all those unneeded snapshots for now. To increase the TX timeout, add:

<coordinator-environment default-timeout="600"/>

under the element:

<subsystem xmlns="urn:jboss:domain:transactions:3.0">

in /etc/ovirt-engine/services/ovirt-engine/ovirt-engine.xml.in
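As a sketch of the parallel-stream direction (not the actual engine patch; the class and the parseOvf helper below are hypothetical stand-ins for the engine types, and the ~150 ms per-OVF conversion is simulated with a sleep):

import java.util.Optional;
import java.util.stream.LongStream;
import java.util.stream.Stream;

// Hypothetical, self-contained analog of the suggested fix: the ~150 ms
// per-VM OVF conversion is simulated with a sleep, so the sequential vs.
// parallel difference is visible.
public class ParallelOvfDemo {

    // Stand-in for snapshotsManager.getVmConfigurationInStatelessSnapshotOfVm
    static Optional<String> parseOvf(long vmId) {
        try {
            Thread.sleep(150);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return vmId % 10 == 0 ? Optional.empty() : Optional.of("vm-" + vmId);
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        // .parallel() spreads the per-VM parsing over the common ForkJoinPool
        // instead of converting each OVF one after another on a single thread.
        Stream<String> vms = LongStream.rangeClosed(1, 200).boxed()
                .parallel()
                .map(ParallelOvfDemo::parseOvf)
                .filter(Optional::isPresent)
                .map(Optional::get);
        System.out.printf("parsed %d VMs in %d ms%n",
                vms.count(), (System.nanoTime() - start) / 1_000_000);
    }
}

With 200 simulated VMs at 150 ms each, the sequential version takes about 30 s while the parallel one finishes in roughly 30 s divided by the core count.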
(In reply to Roy Golan from comment #1)
> With this amount of snapshots (~2000), if each conversion in the code
> below [1] takes 150 ms, we are already reaching the default TX timeout.
>
> The code fetches and processes each OVF sequentially:
>
> Stream<VM> statelessSnapshotsOfRunningVMs =
>         idsOfRunningStatelessVMs.map(snapshotsManager::getVmConfigurationInStatelessSnapshotOfVm)
>                 .filter(Optional::isPresent)
>                 .map(Optional::get);
>
> Generally it's a perfect candidate for parallel stream handling.

I basically agree, but there's one problem with this approach: currently all of the multithreaded handling in the Engine is done via the thread pool. Parallel streaming will use a number of threads at its own discretion and might exhaust the number of threads. What do you suggest then, Roy?
> I basically agree, but there's one problem with this approach: currently
> all of the multithreaded handling in the Engine is done via the thread
> pool. Parallel streaming will use a number of threads at its own
> discretion and might exhaust the number of threads. What do you suggest
> then, Roy?

There are two reasons this is not a big concern:

1. A parallel stream uses the common ForkJoinPool, whose size defaults to the number of cores the JVM sees (Runtime.getRuntime().availableProcessors()).
2. We are dealing with engine startup here - WildFly is waiting anyway for the MacPoolPerCluster bean, which is the root of this invocation, to finish its initialization.

As I see it, this is the cheapest and safest solution at the moment.
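For reference, a small runnable check of the bound described above (the class name is a hypothetical example; note the common pool actually defaults to availableProcessors() - 1 worker threads, with the submitting thread also contributing work):

import java.util.concurrent.ForkJoinPool;

// Prints the sizing that bounds parallel streams: they run on the common
// ForkJoinPool, which cannot grow unboundedly and exhaust engine threads.
public class CommonPoolInfo {
    public static void main(String[] args) {
        System.out.println("cores visible to the JVM: "
                + Runtime.getRuntime().availableProcessors());
        System.out.println("common pool parallelism:  "
                + ForkJoinPool.commonPool().getParallelism());
        // The default can be overridden with a JVM flag if a different
        // bound is wanted:
        // -Djava.util.concurrent.ForkJoinPool.common.parallelism=N
    }
}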
If it's not a big concern, I'm for it. The patch is straightforward and simple; targeting 4.2.4 so 4.1.z customers can have this fix on upgrade to 4.2.z.
Steps to reproduce:

1. Create a stateless VM (the checkbox in the main dialog)
2. Run the VM (a snapshot will be created)
3. Repeat steps 1-2 until you have 3000 VMs
4. Restart the engine and measure the engine startup time - until you see 'Running ovirt-engine' in the engine.log

Verify that the engine boots in less than 2 minutes. If it is still around 2 minutes on a 4-core machine, then let's stop and analyze further.
Created attachment 1451348 [details]
playbook to create and start stateless tiny VMs
(In reply to Roy Golan from comment #16)
> Steps to reproduce:
>
> 1. Create a stateless VM (the checkbox in the main dialog)
> 2. Run the VM (a snapshot will be created)
> 3. Repeat steps 1-2 until you have 3000 VMs
> 4. Restart the engine and measure the engine startup time - until you see
>    'Running ovirt-engine' in the engine.log
>
> Verify that the engine boots in less than 2 minutes. If it is still around
> 2 minutes on a 4-core machine, then let's stop and analyze further.

Currently we have 1827 stateless and up VMs (creating more had some issues). Time measurements of 3 runs of systemctl restart ovirt-engine (stop criterion: tail -f /var/log/ovirt-engine/engine.log | grep -i "running ovirt-engine") below.

AVG of 3 restarts: 1 min 32 sec

restart invoked: Sun Jun 17 14:07:22 IDT 2018
finish: 2018-06-17 14:08:54
Total: 1 min 32 sec

restart invoked: Sun Jun 17 14:10:09 IDT 2018
finish: 2018-06-17 14:11:40
Total: 1 min 31 sec

restart invoked: Sun Jun 17 14:19:29 IDT 2018
finish: 2018-06-17 14:20:52
Total: 1 min 33 sec
Ilan, thanks a lot for those results. Can you add the server.log? If we can see there how much time MacPoolPerCluster takes to initialize, we will know more or less how much time the XML processing takes.

Let's see if the MacPoolPerCluster init time is less than a minute. If not, let's continue, since this is a bit close to what we defined (2 minutes). For that I'd like to see 500 more VMs up and to retest, to see how much it adds.
Created attachment 1452385 [details]
macpool.btm

Created attachment 1452386 [details]
engine conf drop-in for byteman
Ilan, to detect how much time it takes the method to process the XMLs, I attached a byteman script and a drop-in conf. To use it, do this:

1. yum install -y byteman
2. Copy attachment 1452385 [details] to /root/macpool.btm
3. Copy attachment 1452386 [details] to /etc/ovirt-engine/engine.conf.d/99-byteman.conf
4. Restart the engine

Inspect the server.log, check for the "BYTEMAN" string, and share the output (hopefully it ends up there and not in the engine.log). A rough illustration of what the instrumentation does is sketched below.
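For readers without the attachments, a self-contained Java sketch of the measurement such a rule performs. This is purely illustrative: the real macpool.btm injects the timestamps around the engine's getMacsForMacPool without any code change, and its exact rule text is only in the attachment; the class and helper names below are hypothetical.

import java.util.function.Supplier;

// Illustrative only: the effect of the byteman rule is to wrap a method with
// entry/exit wall-clock timestamps, roughly like this hypothetical helper.
public class TimedCall {

    static <T> T timed(String name, Supplier<T> body) {
        System.out.println("*** BYTEMAN - started " + name + " " + System.currentTimeMillis());
        try {
            return body.get();
        } finally {
            System.out.println("*** BYTEMAN - ended " + name + " " + System.currentTimeMillis());
        }
    }

    public static void main(String[] args) {
        // Any expensive call can be wrapped the same way; here a sleep
        // stands in for the OVF processing done by getMacsForMacPool.
        String result = timed("getMacsForMacPool", () -> {
            try {
                Thread.sleep(500);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return "done";
        });
        System.out.println(result);
    }
}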
After adding an additional 500 VMs, the total number of up VMs is 2314.
The host holding the hosted engine, leopard03, became "Non Operational" and I had to restart it. After the system became stable, I did the measurements again.

AVG restart time: 54 sec

restart ovirt-engine: Mon Jun 18 10:15:22 IDT 2018
finish: 2018-06-18 10:16:15
total: 53 sec

restart ovirt-engine: Mon Jun 18 10:18:15 IDT 2018
finish: 2018-06-18 10:19:10
total: 55 sec

restart ovirt-engine: Mon Jun 18 10:21:45 IDT 2018
finish: 2018-06-18 10:22:39
total: 54 sec

Restart with byteman, total: 38 sec (1529310194706 - 1529310156329 = 38377 ms, i.e. the first getMacsForMacPool call dominates the startup time):

2018-06-18 11:22:36,330+03 INFO  [stdout] (ServerService Thread Pool -- 59) *** BYTEMAN - started getMacsForMacPool 1529310156329
2018-06-18 11:23:14,707+03 INFO  [stdout] (ServerService Thread Pool -- 59) *** BYTEMAN - ended getMacsForMacPool 1529310194706
2018-06-18 11:23:14,710+03 INFO  [stdout] (ServerService Thread Pool -- 59) *** BYTEMAN - started getMacsForMacPool 1529310194710
2018-06-18 11:23:14,711+03 INFO  [stdout] (ServerService Thread Pool -- 59) *** BYTEMAN - ended getMacsForMacPool 1529310194711
(In reply to Ilan Zuckerman from comment #24)
> After adding an additional 500 VMs, the total number of up VMs is 2314.
> The host holding the hosted engine, leopard03, became "Non Operational"
> and I had to restart it.

Why? Anything in its journal? Next time, we'd be happy to take a look!
Yes, I did see the following errors in /var/log/messages. I didn't research this because I had to focus on this BZ. Any input would be appreciated.

Jun 18 00:00:19 leopard03 journal: ovirt-ha-broker ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker ERROR Failed to read metadata from /var/run/vdsm/storage/9dc9e846-a6c0-4b4a-84be-9c72d6d63403/167cc842-401a-45f5-9dda-d4b77505e6f4/84c88110-2a5c-4c65-8916-7e3c986a95a8
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 152, in get_raw_stats
OSError: [Errno 24] Too many open files: '/var/run/vdsm/storage/9dc9e846-a6c0-4b4a-84be-9c72d6d63403/167cc842-401a-45f5-9dda-d4b77505e6f4/84c88110-2a5c-4c65-8916-7e3c986a95a8'

Jun 18 00:00:19 leopard03 journal: ovirt-ha-broker ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update ERROR Failed to read state.
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py", line 94, in run
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 163, in get_raw_stats
RequestError: failed to read metadata: [Errno 24] Too many open files: '/var/run/vdsm/storage/9dc9e846-a6c0-4b4a-84be-9c72d6d63403/167cc842-401a-45f5-9dda-d4b77505e6f4/84c88110-2a5c-4c65-8916-7e3c986a95a8'

Jun 18 00:00:19 leopard03 journal: ovirt-ha-broker ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update ERROR Trying to restart the broker

Jun 18 00:00:22 leopard03 journal: ovirt-ha-broker ovirt_hosted_engine_ha.broker.submonitor_base.SubmonitorBase ERROR Error executing submonitor cpu-load-no-engine, args {'use_ssl': 'true', 'vm_uuid': '24c049e6-bd9d-4c5b-b3f0-fb71e35a0700', 'address': '0'}
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/submonitor_base.py", line 115, in _worker
    self.action(self._options)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/submonitors/cpu_load_no_engine.py", line 66, in action
    self.refresh_ticks()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/submonitors/cpu_load_no_engine.py", line 73, in refresh_ticks
    self.system['cur'] = self.get_system_ticks()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/submonitors/cpu_load_no_engine.py", line 79, in get_system_ticks
    with open('/proc/stat', 'r') as f:
IOError: [Errno 24] Too many open files: '/proc/stat'

Jun 18 00:00:24 leopard03 journal: ovirt-ha-broker ovirt_hosted_engine_ha.broker.submonitor_base.SubmonitorBase ERROR Error executing submonitor ping, args {'addr': '10.35.71.254'}
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/submonitor_base.py", line 115, in _worker
    self.action(self._options)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/submonitors/ping.py", line 51, in action
    if self._ping():
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/submonitors/ping.py", line 68, in _ping
    with open(os.devnull, "w") as devnull:
IOError: [Errno 24] Too many open files: '/dev/null'
Please open a separate issue with the details - include both the journal and the ovirt HE agent logs if possible. Certainly that 'too many open files' is an interesting one.
WARN: Bug status (VERIFIED) wasn't changed but the following should be fixed:
[Found non-acked flags: '{'rhevm-4.3-ga': '?'}']
For more info please contact: rhv-devops
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2019:1085