Created attachment 605246 [details] backtrace evolution

Description of problem: Displaying messages always waits for the message-loading operations to complete, even when this is not necessary.

Demonstration video: https://docs.google.com/open?id=0B0nwzlfiB4aQbnhpTjFwbUU4YkE
Created attachment 605247 [details] backtrace evolution
Created attachment 605248 [details] backtrace evolution
Thanks for the bug report. I see in the video that you have quite a few background operations pending (the activities in the status bar at the bottom of the window), which seem to be, based on the backtraces, operations downloading your remote mails locally. Is this correct, i.e. have you set your EWS account/folders to synchronize remote mails locally? One strange thing is that evolution tries to synchronize mails locally in multiple threads for one folder, which is obviously wrong. Also, if I understand your request properly, is this bug report about message load into the preview panel, i.e. showing messages already downloaded locally immediately, regardless of operations currently pending on the folder? It is doable, evolution-mapi does it in 3.4.x, thus I suppose it can be added to evolution-ews too.
> I understand your request properly, is this bug report about message load into
> preview panel, to show messages already downloaded locally immediately,
> regardless current operations being pending on the folder?

Yes. Also, at 8 seconds you can see that I switched to "Deleted Items", and at 16 seconds I returned to "Inbox"; in both cases nothing happened with the message list.
Hmm, the message list is stuck due to other pending operations. I committed a change for 3.5.91 in evolution-ews which adds behaviour to show messages quickly if they are available locally. It doesn't influence changes between folders, only between messages within the currently selected folder. I'm wondering, does evolution use a lot of CPU when you get to the state where the message list is not updated on folder change in the UI? I have a feeling it does, which makes the folder change break.
New video: https://docs.google.com/open?id=0B0nwzlfiB4aQc0JueDFmaUxKcVE

Again, once some critical number of pending operations is reached, the interface stops responding. The preview is not updated when moving between messages, and the message list is not updated even when changing folders. So it is impossible to cancel downloading new emails or to close evolution. As you can see from the video, evolution uses CPU, but the Xorg process uses even more CPU than evolution at that time.
Created attachment 607408 [details] htop screenshot
Created attachment 607409 [details] Evolution screenshot
Created attachment 607412 [details] backtrace evolution
Created attachment 607414 [details] backtrace evolution
Created attachment 607415 [details] backtrace evolution
Thanks for the update. I consider a couple of things in the backtraces interesting, namely:
- the first backtrace shows quite a few threads waiting for a chance to talk to the exchange server, which is indicated by "stuck" activities in evolution's status bar; it doesn't show anything in the main thread
- the second backtrace is similar to the first, only the main thread shows a repaint of the window - I suppose the repaint causes the Xorg CPU usage, when it tries to draw all the spinners at the status bar
- the third backtrace has most of the pending EWS operations done, thus it finally got a chance to talk to the server (or it was cancelled), and the main thread is still redrawing the window.

I think the folder change being stuck is caused by the high CPU usage in the main thread, especially if the folder change is delivered to the message list on idle (the CPU usage in the main thread prevents the application from reaching the idle state), but I will investigate further to verify that. It would be good to know the reason the operations get stuck for you, though I think it's due to your slow connection to the server, as you mentioned in another of your bugs. Do you see the same with a reliable/quick connection?
This happens on a laptop where I rarely run evolution, but when I do run it, evolution loads all the messages, and there are very many of them. And yes, my connection is LTE in this case. Why can't I cancel loading the messages and close evolution?
OK, I can reproduce this if I cheat evolution-ews a bit: I put a sleep() call into e_ews_connection_get_items(), thus it behaves like in your backtrace. Whenever I select a message to be shown in the preview panel, the sleep() simulates your slow network. Once I have "enough" pending requests in the status bar, evolution starts to eat CPU with the same backtrace as yours, in the Xorg calls, and changing folders in the folder tree on the left does nothing. It seems to me that Xorg has trouble drawing widgets which are out of view, for some reason. You are right that the message load should be cancelled; I do not know why it isn't (yet), as code for cancelling a message load on change in the message list did exist in evolution. Fixing that may fix the problem, or at least half of it. By the way, on the other machine with the better connection, is evolution-ews finally reliable for you, after all the fixes which landed in 3.4.4?
I'm not sure whether this will help at all, but could you try the test build I just finished [1], please? It doesn't do anything special, it only changes EWS_CONNECTION_MAX_REQUESTS to 1, instead of the previously used 10, thus libsoup's queue is not full of waiting requests. I guess it should do it, but as I'm not able to reproduce this here, I'd prefer testing on your side. Thanks in advance.

[1] http://koji.fedoraproject.org/koji/taskinfo?taskID=4431094
I asked Dan Winship for his opinion and he came up with a change for libsoup. I made a test build of it for F17 here [1], so please give it a try too. I'm wondering what would be the best way to test both patches: probably install the EWS one, check whether anything changed, and then (regardless of whether it did) also try the libsoup package, to see whether it's better with it. Thanks in advance.

[1] http://koji.fedoraproject.org/koji/taskinfo?taskID=4431173
I updated both evolution-ews and libsoup, but this did not solve the problem. New video: https://docs.google.com/open?id=0B0nwzlfiB4aQQnkxN3VhckszX1E
Created attachment 607979 [details] backtrace evolution
Thanks for the update and for testing the packages. I see the packages made it better: there are no pending operations left in the status bar, but the check for new messages got stuck. The backtrace shows that the debuginfo doesn't match evolution-ews for some reason; maybe its update failed? Nonetheless, the backtrace shows that no cancellable is passed to the inner call on the connection, thus the cancel you did in the Send/Receive dialog wasn't propagated down to libsoup, which means it will wait until the timeout is reached or the request is satisfied. I'll add the cancellable to this call too.
I built a test package [1] with the patch which I committed into EWS for 3.5.91. Please try it.

[1] http://koji.fedoraproject.org/koji/taskinfo?taskID=4437062
Nice work. With the last update I tried to fetch the messages accumulated over two days; evolution did not hang. I tried to interrupt receiving messages and close it; evolution also closed... Good. I'll have to try a slower connection, such as GPRS.
Thanks for the update. I committed both changes for evolution-ews-3.5.91, which is reaching Fedora this week, +/-. I'm closing this bug report as such.
Will these patches not be in the Fedora 17 repository?
> changes EWS_CONNECTION_MAX_REQUESTS to 1, instead of previously used 10

Doesn't this change reduce the speed of receiving messages?
(In reply to comment #23)
> These patches will not be in the repository Fedora 17?

Nope, it'll only be part of the test package I built for you. Upstream focuses on 3.6.0 now, which is just around the corner, thus these final changes will officially be part of Fedora 18+.

(In reply to comment #24)
> > changes EWS_CONNECTION_MAX_REQUESTS to 1, instead of previously used 10
> This change do not reduce the speed of receiving messages?

No, the SoupSession has an internal limit of 1 live connection anyway, thus the code change doesn't change anything from the connection point of view (as far as I can tell).