Created attachment 827993 [details]
Number of accesses to the webserver between 17 and 22 November

Description of problem:
I have discovered that my server keeps going inactive. After a lot of wrangling with it, I caught a brief moment when the load average on the server was over 90 and there were over 200 httpd processes. After digging through the httpd logs (see the attached output of

  cut -d' ' -f 3,6- /var/log/httpd/luther.ceplovi.cz-access_log | sort | uniq -c | sort -n

) I discovered that it is actually Evolution pounding on the CalDAV server there. I installed Evolution in the afternoon of November 20, so this:

  [root@luther ~]# grep 'Request from User : matej' /var/log/zarafa/ical.log | cut -d' ' -f 1-3 | sort | uniq -c | sort -n -k 4
       48 Sun Nov 17
      147 Mon Nov 18
       85 Tue Nov 19
   157866 Wed Nov 20
    58004 Thu Nov 21
      642 Fri Nov 22
  [root@luther ~]#

looks like pretty convincing proof of guilt. (I removed the calendar from Evolution this morning, after the whole server melted down with the OOM killer running riot.) I am not sure how to analyze the issue on the client side (and I also don't want to torture my poor server too much).

Version-Release number of selected component (if applicable):
evolution-3.8.5-10.el7.x86_64
zarafa-server-7.0.13-1.el6.i686
zarafa-7.0.13-1.el6.i686
zarafa-ical-7.0.13-1.el6.i686

How reproducible:
Happened twice in a row (which is two times more than I would like).

Steps to Reproduce:
1. Just add a Zarafa calendar to Evolution as a CalDAV calendar.

Actual results:
The server melts down under the sustained DoS from Evolution.

Expected results:
A smoothly working shared calendar.

Additional info:
Maybe there is some relationship to bug 1030579, but in the end this is evidently something different (e.g., DNS vs. CalDAV).
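For reference, the per-day counting done above with cut | sort | uniq -c can be sketched in miniature. This is only an illustration of the technique, not the actual server data: sample.log is a made-up file in Apache-style log format, and the exact field positions depend on the real log format.

```shell
#!/bin/sh
# Build a tiny sample access log (fabricated lines, for illustration only).
cat > sample.log <<'EOF'
1.2.3.4 - - [20/Nov/2013:10:00:01 +0100] "PROPFIND /caldav/ HTTP/1.1" 207 512
1.2.3.4 - - [20/Nov/2013:10:00:02 +0100] "PROPFIND /caldav/ HTTP/1.1" 207 512
1.2.3.4 - - [21/Nov/2013:09:00:00 +0100] "GET /caldav/ HTTP/1.1" 200 1024
EOF

# Count hits per day: extract the date part of the bracketed timestamp,
# then sort | uniq -c | sort -n, as in the diagnosis above.
# Prints one line per day, busiest day last.
cut -d'[' -f2 sample.log | cut -d: -f1 | sort | uniq -c | sort -n
```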
Too vague. Need a backtrace of evolution-calendar-factory or better reproducer steps.
(In reply to Matthew Barnes from comment #2)
> Too vague. Need a backtrace of evolution-calendar-factory or better
> reproducer steps.

Sorry, I won't try a third time to kill my server (it has already happened to me twice). I am willing to give you or Milan ssh access to the server, but it is my only home server and my family and I need it for something other than doing work for developers who want me to do their work for them. The reproducer for me is simple: enable the CalDAV account in Evolution. Within some hours (perhaps a day or two) the server is hit by a couple of CalDAV requests per second, eventually reaches a system load of around 100, and the OOM killer gradually kills the whole system. No, sorry, I don't see anything visibly wrong in Evolution itself, and yet it seems generally happy to kill my server. Is it really the reporter's job to resolve the issues for you? Close this as INSUFFICIENT_DATA if you want, but I won't touch Evolution again unless I see at least some interest in fixing this other than throwing the issue back at me.
I tried to reproduce this, with Matej's help, but with no luck. I'll try some corner cases and see. One observation: if a wrong password is stored for the CalDAV calendar, the user is for some reason not asked to enter a correct one.
This request was not resolved in time for the current release. Red Hat invites you to ask your support representative to propose this request, if still desired, for consideration in the next release of Red Hat Enterprise Linux.
Maybe a similar issue as here [1], except that you do not use OAuth2 (aka Bearer) authentication against your Zarafa server. I still think this might be related to failed authentication and an attempt of the CalDAV backend to recover after a failed connection. Could you run evolution-calendar-factory with CalDAV debugging on, try to reproduce the issue, and then check what is in the log, please? The command is:

  $ CALDAV_DEBUG=all /usr/libexec/evolution-calendar-factory -w &>log.txt

then kill evolution-alarm-notify and restart evolution.

[1] https://git.gnome.org/browse/evolution-data-server/commit/?id=94f01174562
I don't have the Zarafa server anymore.