Bug 896205 - Out of memory (OOM) with native threads
Status: CLOSED UPSTREAM
Product: Fedora
Classification: Fedora
Component: freemind
Version: 18
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Assigned To: hannes
QA Contact: Fedora Extras Quality Assurance
Reported: 2013-01-16 15:23 EST by Ali Akcaagac
Modified: 2013-04-17 00:44 EDT
CC: 2 users

Doc Type: Bug Fix
Last Closed: 2013-03-29 04:01:21 EDT
Type: Bug

Attachments
alternative freemind.sh file (1.04 KB, application/x-shellscript), 2013-01-16 15:23 EST, Ali Akcaagac
Freemind log.0 file run via DEBUG=1 java .... (3.10 KB, application/x-bzip), 2013-01-16 15:26 EST, Ali Akcaagac
Screenshot 1 (101.93 KB, image/png), 2013-01-18 18:24 EST, Ali Akcaagac
Screenshot 2 (118.38 KB, image/png), 2013-01-18 18:26 EST, Ali Akcaagac
Screenshot 3 (110.40 KB, image/png), 2013-01-18 18:30 EST, Ali Akcaagac

Description Ali Akcaagac 2013-01-16 15:23:34 EST
Created attachment 679812
alternative freemind.sh file

I'm using Fedora 18 and ran into the following problem with Freemind, which turned an entire day into a PITA.

Freemind easily runs into OOMs, and I have absolutely no idea how to solve this on my own.

Problem description:

- I have a bigger mind map (1.4 MB, ~3000 nodes).
- This map is usually shared between Freemind and iThoughts (an iOS application).
- While this map performs perfectly on a small device like an iPhone, it causes
  a bunch of problems with Freemind.
- I noticed that Freemind seems to spawn a process of its own for each node.
- This easily bursts the max user processes limit of 1024, which is set by
  /etc/security/limits.d/90-nproc.conf; see the corresponding bug entry
  #432903, which I also commented on.
- I tried increasing ulimit -u to 4096 (the maximum number allowed for users)
  in the hope that it might be enough, but sadly Freemind exceeds this number
  by spawning even more processes. You can verify this by entering:

  while true ; do sleep 2 ; ps uH p $(pidof java) | wc -l ; done

  in a separate bash session while running only Freemind and working on a
  bigger mind map. This shows the number of started processes; with my mind
  map it easily reaches around 15000. Sadly, this eventually exhausts the
  limit, bash prints errors that it can't start any new processes, and even
  running other applications fails. (A slightly more robust variant is
  sketched right after this list.)
- I tried setting ulimit -u to 50000 and higher as root and then ran Freemind
  as root, but after a while it threw these errors again.
- Next, I reduced the stack size of the JVM and increased the heap size.
  Reducing the per-thread stack size allows even more threads to be spawned
  via Java Runnables. Unfortunately, I ran into the same errors again. But the
  point is: there is still memory left for Java, and the ulimit is set to
  50000 processes, so technically this shouldn't happen.
- Next, I tried the upstream version of Freemind downloaded from the original
  site, but the problem still exists. Even using the upstream jar files (for
  forms etc.) didn't solve the issue.
- I then spent some time writing an alternative freemind.sh file that suits
  the needs of the original Fedora 18 package and experimented with it.
- I was able to trim even more, e.g. some performance options (AWT) and
  turning off antialiasing in Freemind.
- But this only gave some performance boosts; the crappy OOM still remains.
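
For reference, a slightly more robust variant of the one-liner above (a sketch, assuming exactly one running java process; nlwp is ps' per-process thread count, so there is no header line to miscount):

  # Print the JVM's thread count every 2 seconds; "nlwp=" suppresses
  # the column header (assumes a single java process is running):
  while true ; do sleep 2 ; ps -o nlwp= -p "$(pidof java)" ; done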

Now to my setup here:
- Lenovo Edge E450 notebook (APU)
- 4 GB RAM

4 GB of RAM should be more than enough. I use this system as a Java and C development platform, and to tell the truth, I have rarely seen an app like Freemind that spawns thousands of processes just to keep a little tree with 3000 nodes in memory. If this really nukes the virtual machine, then I really don't know what to do next.
Comment 1 Ali Akcaagac 2013-01-16 15:26:54 EST
Created attachment 679814
Freemind log.0 file run via DEBUG=1 java ....
Comment 2 hannes 2013-01-17 00:46:31 EST
I don't know if that's something I can solve. It's probably a matter for upstream, since it's not really caused by the packaging. I will try to report it upstream.
Which Java version are you using, and does it happen with native mindmaps (ones not opened by other programs) as well?

Johannes
Comment 3 Ali Akcaagac 2013-01-17 02:34:24 EST
Hi,

(In reply to comment #2)
> I don't know if that's something I could solve. It's probably a thing for
> upstream, since it's not really caused by the packaging.

Well, I agree here. I am no longer sure whether this should be considered a packaging bug or a Fedora bug in general. But here are a few general points:

- I believe that an overall "max user processes" default of just 1024 is too low. The way Java spawns threads can easily go beyond this value, as I tried to explain in Bugzilla #432903.

- I figured out that the packaging replaced forms.jar with jgoodies-forms.jar. I am not sure whether this is related to anything, but my thoughts went in the direction of memory consumption (maybe one of the replacement jars causes memory leaks or triggers events differently; however, this also happens with the original forms.jar provided by upstream Freemind).

- I also went the other way and actually REDUCED the memory for the virtual machine. One would think that an OOM is a sign of a lack of memory. But I have 4 GB of physical RAM (Linux seems to give 3 GB of it to userland software), which means that after everything is loaded (desktop, tools running in the background, etc.) at least 2 GB remains free for anything else (where the other 1 GB went is not to be addressed here).

- So, having figured out that I have 2 GB remaining, and guessing that Java by default gives 1 MB of stack to each thread started, this yields a theoretical 2000 MB (2 GB) / 1 MB = 2000 possible threads (maybe a bit fewer due to heap size and other overhead).

- So: 2000 threads at 1 MB of stack each. Reducing the stack size per thread to 128 KB (~0.1 MB) gives a theoretical value of 20000 threads: 2000 MB / 0.1 MB = 20000. Again, overall heap usage is not taken into consideration. But -Xss128k is generally a reasonable value for Java applications.

- The fun part starts when REDUCING the heap size: the less max heap you give the virtual machine, the more RAM remains available for spawning new threads.

- E.g. -Xms16m and -Xmx128m may be an option for Freemind. People tend to give -Xmx1024m or more because they believe a lack of heap is the cause of the OOM, but it could be exactly the opposite: if we stay with 1024 MB of heap, only 1 GB remains for system usage and thread stacks. That means instead of the previous 2000 max threads (or 20000 with 128 KB stacks) we only get 1000 max threads (or 10000 with 128 KB). (See the sketch right after this list.)

- I am playing around with these values now to figure out a proper solution.
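
For reference, an invocation along the lines discussed above could look roughly like this (a sketch only; the jar path is an assumption and depends on where the package actually installs Freemind):

  # Small per-thread stack, deliberately small heap, leaving more room
  # for thread stacks (path and exact values are assumptions):
  java -Xss128k -Xms16m -Xmx128m -jar /usr/share/freemind/lib/freemind.jar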

> I will try to report it upstream.

I would kindly ask you to do this. It would be nice to point the developer to this bug report so he can read my comments here.

Also, just for the record:
- The same problem happens with the Freemind 0.9.0 upstream package downloaded from SourceForge.
- The same problem happens with the Freemind 1.0.0-pre9 upstream package downloaded from SourceForge (although this version performs a bit better and is generally faster).

> Which Java Version are you using and does it happen with nativ mindmaps (not
> opened by other programs) as well?

The OpenJDK version provided by Fedora 18. There was an update this night:

-bash-4.2$ java -version
java version "1.7.0_09-icedtea"
OpenJDK Runtime Environment (fedora-2.3.4.fc18-i386)
OpenJDK Server VM (build 23.2-b09, mixed mode)
-bash-4.2$ 

This mindmap is a native mindmap, so the structural data isn't the issue. It's more an event issue, with all these processes being forked and spawned.

On the Freemind site it was claimed that someone got a 15 MB file with more than 22000 nodes running properly. My file doesn't even have 1/10 of that.

I am trying XMind in the meantime (downloaded it overnight) to see whether that app causes similar problems.
Comment 5 Ali Akcaagac 2013-01-18 18:22:54 EST
I believe that I've found the problem:

As I initially wrote, the ulimit -u 1024 set by Fedora is definitely too low. Furthermore, the default stack size per thread within the JVM is 1 MB.

This technically allows at most 1024 threads to be spawned (in total, not only JVM-related). In Java's case, those 1024 threads will each eat 1 MB of stack, which alone eats up 1024 MB of memory.

Regardless of the available memory, Fedora's cap of 1024 max processes per user means no more threads can be spawned.

In Freemind's case this only allows SMALL mind maps with just a few nodes (if that). Medium or larger mind maps will definitely NOT work at all with the default values. A larger map that spawns 5000 threads will, with the default values, eat 5 GB of thread stack memory.

There is NO heap or memory leak issue. I did some tests using VisualVM on Fedora:

yum -y install visualvm

Running VisualVM clearly shows that the heap size never exceeds 200 MB. So basically the entire mind map, including all Java-related processes and all threads, easily fits into a 200 MB heap.
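
As a quick cross-check from a terminal, a thread dump can approximate the live thread count as well (a sketch; assumes OpenJDK's jstack tool is installed and a single java process is running):

  # Count the thread entries in a JVM thread dump:
  jstack "$(pidof java)" | grep -c 'java.lang.Thread.State:'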

There are three problems I see, and two of them are definitely Fedora-related:

1) Increase ulimit -u to at least 8192 for normal users. This allows mind maps of medium size (5000-7000 threads spawned).

AND

2) Decrease the stack size for the Freemind app to -Xss128k (going down from the 1 MB default stack to a 128 KB stack). This allows launching roughly 10000 threads in 1 GB of RAM rather than 1000 threads where each thread eats 1 MB. This should be set as a default in /bin/freemind.sh. It would generally be a good idea to reduce the stack for Java applications to 128-256 KB (but not 1 MB) and see the response from other Fedora participants and users. (A sketch of both changes follows below.)
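
For illustration, the two changes could look roughly like this (a sketch; the drop-in filename and the freemind.sh variable name are assumptions, not the actual packaged files):

  # Assumed drop-in, e.g. /etc/security/limits.d/91-nproc.conf, raising
  # the per-user process/thread cap:
  #   *   soft   nproc   8192

  # And in /bin/freemind.sh, add the smaller stack to an assumed JVM
  # options variable:
  JAVA_OPTS="$JAVA_OPTS -Xss128k"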

I had around 7100 threads launched here with my mind map, and all in all around 200 MB of heap eaten. With a stack size of 128 KB per thread, this eats around 910 MB of physical RAM; including the 200 MB heap, it comes to 1110 MB of physical memory in total.

Another alternative would be to ask the Freemind developer to find a way to reduce the number of threads being spawned, or to find a different algorithm for storing the tree.

I will attach some screenshots showing Freemind with 3700 nodes. Normally around 2000-3000 threads are running, but as soon as you do something like sorting, you end up at around 7100 threads (and this IS the issue that causes the OOM: it's not a lack of physical memory, it's the end of max user processes). Of course this would also cause OOMs once physical memory runs out, but in this case the max processes per user is the showstopper, together with the too-high default stack size for all Java programs.

Basically, whoever runs a Java program will cause huge memory consumption for the entire Fedora system, until the Java program is closed and the garbage collector has run.
Comment 6 Ali Akcaagac 2013-01-18 18:24:56 EST
Created attachment 682802
Screenshot 1
Comment 7 Ali Akcaagac 2013-01-18 18:26:36 EST
Created attachment 682803
Screenshot 2
Comment 8 Ali Akcaagac 2013-01-18 18:30:35 EST
Created attachment 682804
Screenshot 3

Those three screenshots show the overall heap being used: it stays below 200 MB. The threads launched are the killer:

a) They go beyond the 1024 max user processes (ulimit -u) set by Fedora 18.
b) The default stack size used by the JVM (1 MB) is too high, at least for Freemind. Reducing it to 128 KB allows Freemind to spawn more threads while using less physical RAM.
c) The 7100 threads shown here were possible because I set ulimit -u 20000 as root and ran Freemind and VisualVM as root.
Comment 9 hannes 2013-02-08 12:17:32 EST
OK, it's fixed upstream, but I would not be too optimistic that it will be fixed in Fedora shortly. The new Freemind release 1.0.0 has a lot of additional deps, and since I only work on Fedora in my spare time, I can't promise anything regarding an update to 1.0.0.
Comment 10 Ali Akcaagac 2013-02-11 10:24:37 EST
Yes, I heard about that. I was in direct contact with the developer and current maintainer of Freemind. After some investigation he concluded that this happens because of the way Freemind launches timer processes for schedules that may have been put into a mindmap.

My mindmap has plenty of schedules, since I keep my calendar stuff inside it. That was the reason it spawned so many processes. He has now rewritten that part of the code, changing approximately 20 lines or so. I wonder whether they can be backported to Freemind 0.9.0 somehow; I'd even bet the code fits 1:1 here.

I need to look into that a bit, but right now I'm truly out of time and have clearly spent too much time reporting here on Bugzilla. I will come back to you once I find some time. Meanwhile, thank you for your support.
Comment 11 hannes 2013-03-25 15:33:14 EDT
OK, so I finally packaged everything up, and the latest Freemind version is in this repo:
http://repos.fedorapeople.org/repos/hannes/freemind-1.0.0/
Just copy the .repo file to /etc/yum.repos.d/ and you're good to go...
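
Something like this should do it (a sketch; the exact .repo filename is an assumption, so check the directory listing for the real name):

  # Download the .repo file into yum's config dir, then install/update:
  sudo wget -P /etc/yum.repos.d/ http://repos.fedorapeople.org/repos/hannes/freemind-1.0.0/FILENAME.repo
  sudo yum update freemind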

Hope this helps,

Johannes
Comment 12 Ali Akcaagac 2013-04-16 14:03:01 EDT
Perfect:

1) The bug has been fixed in 1.0.0-RCx and everything works now.
2) Thanks for providing the 1.0.0-RCx packages, I really appreciate them. It would be nice if you could provide future 1.0.0-RCx packages for F18 as well. Right now I am in contact with the maintainer of Freemind to get it improved in a few areas.

RC3 is out. If you find time, please package it.
Comment 13 hannes 2013-04-17 00:44:23 EDT
I know that RC3 is already out, but there were a lot of changes, and it's not just a matter of swapping the sources. So please be patient and stay tuned...
