Created attachment 827993 [details]
number of accesses to the webserver between November 17 and 22
Description of problem:
I have discovered that my server keeps becoming unresponsive. After a lot of wrangling with it, I caught a moment when the load average on the server was just [...] over 90, and there were over 200 httpd processes running.
After digging through the httpd logs I discovered (see the attached output of cut -d' ' -f 3,6- /var/log/httpd/luther.ceplovi.cz-access_log|sort|uniq -c|sort -n) that it is actually Evolution pounding on the CalDAV server there.
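For anyone repeating the analysis, here is that pipeline broken out with comments (a sketch; the field positions assume httpd's default combined log format):

# Field 3 of the combined log format is the authenticated user;
# fields 6 onward are the request line and the rest of the entry.
# Counting identical combinations puts the noisiest client last.
cut -d' ' -f 3,6- /var/log/httpd/luther.ceplovi.cz-access_log \
    | sort | uniq -c | sort -n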
I installed Evolution on the afternoon of November 20, so this
[root@luther ~]# grep 'Request from User : matej' /var/log/zarafa/ical.log|cut -d' ' -f 1-3|sort|uniq -c|sort -n -k 4
48 Sun Nov 17
147 Mon Nov 18
85 Tue Nov 19
157866 Wed Nov 20
58004 Thu Nov 21
642 Fri Nov 22
[root@luther ~]#
looks like pretty convincing proof of guilt (I removed the calendar from Evolution this morning, after the whole server melted down with the OOM killer on a rampage).
I am not sure how to analyze the issue on the client side (and I also don't want to torture my poor server too much).
Version-Release number of selected component (if applicable):
evolution-3.8.5-10.el7.x86_64
zarafa-server-7.0.13-1.el6.i686
zarafa-7.0.13-1.el6.i686
zarafa-ical-7.0.13-1.el6.i686
How reproducible:
happened twice in a row (which is two times more than I would like)
Steps to Reproduce:
1. Just add a Zarafa calendar to Evolution as a CalDAV calendar.
Actual results:
server melts down under the sustained DoS from Evolution
Expected results:
smoothly working shared calendar
Additional info:
Maybe there is some relationship to bug 1030579, but this is ultimately something different (e.g., DNS vs. CalDAV).
(In reply to Matthew Barnes from comment #2)
> Too vague. Need a backtrace of evolution-calendar-factory or better
> reproducer steps.
Sorry, I won't try a third time to kill my server (it has already happened to me twice). I am willing to give you or Milan ssh access to the server, but it is my only home server, and I (and my family) need it for something other than doing work for developers who want me to do their work.
The reproducer for me is simple: enable a CalDAV account in Evo. For some hours (perhaps a day or two) the server is hit by a couple of CalDAV requests per second, the system load eventually climbs to around 100, and the OOM killer gradually kills the whole system. No, sorry, I don't see anything wrong in Evolution itself, and yet it seems to be generally happy to kill my server. Is it really a reporter's job to resolve the issue for you?
Close this as INSUFFICIENT_DATA if you want, but I won't touch Evo unless I see at least some interest in fixing this other than throwing the issue back at me.
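For reference, a minimal way to watch the symptom building up on the server (a sketch, using the same log path as above):

$ watch -n 5 'cat /proc/loadavg; tail -n 2 /var/log/zarafa/ical.log'

This prints the load average and the most recent CalDAV requests every five seconds, so the request flood and the climbing load can be observed together.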
I tried to reproduce this with Matej's help, but no luck. I'll try some corner cases and see. One observation: if a wrong password is stored for the CalDAV calendar, the user is for some reason not asked to enter a correct one.
Comment 5: RHEL Program Management, 2014-03-22 06:28:56 UTC
This request was not resolved in time for the current release. Red Hat invites you to ask your support representative to propose this request, if still desired, for consideration in the next release of Red Hat Enterprise Linux.
Maybe a similar issue to the one addressed here [1], except that you do not use OAuth2 (aka Bearer) authentication against your Zarafa server. I still think this might be related to failed authentication and an attempt by the CalDAV backend to recover after a failed connection.
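If failed authentication is indeed involved, the flood should show up as 401 responses on the server side. A quick check (a sketch, assuming httpd's default combined log format, where the status code is the ninth field):

$ awk '$9 == 401 {n++} END {print n+0}' /var/log/httpd/luther.ceplovi.cz-access_log

A large count would support the failed-authentication theory; a count near zero would rule it out.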
Could you run evolution-calendar-factory with CalDAV debugging enabled, try to reproduce the issue, and then check what ends up in the log, please? The command is:
$ CALDAV_DEBUG=all /usr/libexec/evolution-calendar-factory -w &>log.txt
then kill evolution-alarm-notify and restart evolution.
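Putting those steps together, one possible sequence (a sketch; it assumes pkill matches the process names in this release and that the factory can simply be killed and relaunched):

$ pkill -f evolution-calendar-factory    # stop the running factory first
$ CALDAV_DEBUG=all /usr/libexec/evolution-calendar-factory -w &>log.txt &
$ pkill -f evolution-alarm-notify
$ evolution &

Once the request flood starts, log.txt should show which CalDAV requests the backend keeps issuing.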
[1] https://git.gnome.org/browse/evolution-data-server/commit/?id=94f01174562