RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you are a Red Hat customer, please continue to file support cases via the Red Hat customer portal; otherwise, please head to the "RHEL project" in Red Hat Jira and file new tickets there. Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September as per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED". If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry; the e-mail creates a ServiceNow ticket with Red Hat. Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and set with "MigratedToJIRA" in "Keywords". The link to the successor Jira issue will be found under "Links", have a little "two-footprint" icon next to it, and direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link will be available in a blue banner at the top of the page informing you that the bug has been migrated.
Bug 910351 - Connectivity failure at wrong moment freezes Evolution UI: it waits for reply from e-d-s that in turn waits indefinitely for reply from caldav
Summary: Connectivity failure at wrong moment freezes Evolution UI: it waits for reply from e-d-s that in turn waits indefinitely for reply from caldav
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: evolution
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: beta
Target Release: ---
Assignee: Matthew Barnes
QA Contact: Desktop QE
URL:
Whiteboard:
Duplicates: 918935 (view as bug list)
Depends On:
Blocks:
 
Reported: 2013-02-12 12:43 UTC by David Jaša
Modified: 2013-07-24 11:14 UTC
CC List: 3 users

Fixed In Version: evolution-2.32.3-14.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-07-24 11:14:33 UTC
Target Upstream Version:
Embargoed:


Attachments
* evolution backtrace (38.34 KB, text/plain), 2013-02-12 12:43 UTC, David Jaša
* e-d-s backtrace (28.61 KB, text/plain), 2013-02-12 12:44 UTC, David Jaša

Description David Jaša 2013-02-12 12:43:56 UTC
Created attachment 696477 [details]
evolution backtrace

Description of problem:
Connectivity failure at wrong moment freezes Evolution UI: it waits for reply from e-d-s that in turn waits indefinitely for reply from caldav

Version-Release number of selected component (if applicable):
evolution-2.28.3-30.el6.x86_64
evolution-data-server-2.28.3-16.el6.x86_64

How reproducible:
Hard to reproduce (timing-dependent), but possible.

Steps to Reproduce:
1. make Evolution request some CalDAV action from e-d-s
2. break connectivity
3.
  
Actual results:
Evolution freezes and stays frozen:
  * after HTTP timeout expires
  * after TCP timeout expires
  * after e-d-s is killed

Expected results:
Ideally, Evolution should perform all unreliable actions asynchronously.
Failing that, it would be enough to set a short timeout on these actions and cancel them if there is no progress.

Additional info:

Comment 1 David Jaša 2013-02-12 12:44:37 UTC
Created attachment 696478 [details]
e-d-s backtrace

Comment 2 David Jaša 2013-02-12 12:47:13 UTC
interesting parts of backtraces (IMO):

evo:
Thread 1 (Thread 0x7f5bd288c980 (LWP 30154)):
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:162
#1 0x0000003b250109cd in e_flag_wait (flag=0x3420220) at e-flag.c:120
#2 0x0000003b24c2116a in e_cal_get_objects_for_uid (ecal=0xdabd170 [ECal], uid=0xdc3e6b0 "b582389779", objects=0x7fff6059fa98, error=0x7fff6059fa88) at e-cal.c:2945
#3 0x0000003b24c213ef in generate_instances (ecal=0xdabd170 [ECal], start=1360537200, end=1360969200, uid=<value optimized out>, cb=0x3b24c1aec0 <add_instance>, cb_data=0x767be70) at e-cal.c:3541
#4 0x0000003b24c24dd7 in e_cal_generate_instances_for_object (ecal=0xdabd170 [ECal], icalcomp=<value optimized out>, start=1360537200, end=1360969200, cb=0x7f5bcaadc960 <add_instance_cb>,
cb_data=0x7fff6059fc60) at e-cal.c:3811
#5 0x00007f5bcaadc95a in process_added (query=0x2761850 [ECalView], objects=<value optimized out>, model=0x8d345c0 [ECalModelCalendar]) at e-cal-model.c:1607
#6 0x00007f5bcaadbe98 in process_event (query=0x2761850 [ECalView], objects=0x80abec0 = {...}, model=0x8d345c0 [ECalModelCalendar], process_fn=0x7f5bcaadc550 <process_added>, in=0xf6a4bf0,
save_list=0xf6a4c00, copy_fn=0x36c562d2b0 <icalcomponent_new_clone>, free_fn=0x36c562cab0 <icalcomponent_free>) at e-cal-model.c:1795
#7 0x00007f5bcaadc076 in e_cal_view_objects_added_cb (query=<value optimized out>, objects=<value optimized out>, model=<value optimized out>) at e-cal-model.c:1823
#8 0x00007f5bcaadc510 in process_modified (query=0x2761850 [ECalView], objects=<value optimized out>, model=0x8d345c0 [ECalModelCalendar]) at e-cal-model.c:1703
#9 0x00007f5bcaadbe98 in process_event (query=0x2761850 [ECalView], objects=0x10365120 = {...}, model=0x8d345c0 [ECalModelCalendar], process_fn=0x7f5bcaadc320 <process_modified>, in=0xf6a4bf4,
save_list=0xf6a4c08, copy_fn=0x36c562d2b0 <icalcomponent_new_clone>, free_fn=0x36c562cab0 <icalcomponent_free>) at e-cal-model.c:1795
#10 0x00007f5bcaadc036 in e_cal_view_objects_modified_cb (query=<value optimized out>, objects=<value optimized out>, model=<value optimized out>) at e-cal-model.c:1831


e-d-s:
Thread 4 (Thread 0x7f5bbe840700 (LWP 30226)):
#0 0x00000036aaa0e94c in __libc_recv (fd=<value optimized out>, buf=<value optimized out>, n=<value optimized out>, flags=<value optimized out>) at ../sysdeps/unix/sysv/linux/x86_64/recv.c:34
#1 0x0000003bc122651b in recv (transport_data=<value optimized out>, buf=<value optimized out>, buflen=<value optimized out>) at /usr/include/bits/socket2.h:45
#2 soup_gnutls_pull_func (transport_data=<value optimized out>, buf=<value optimized out>, buflen=<value optimized out>) at soup-gnutls.c:393
#3 0x00000036be21a3e2 in _gnutls_read (session=0x20ea0f0, iptr=0x2f23ef0, sizeOfPtr=5, flags=0) at gnutls_buffers.c:300
#4 0x00000036be21a843 in _gnutls_io_read_buffered (session=0x20ea0f0, iptr=<value optimized out>, sizeOfPtr=5, recv_type=<value optimized out>) at gnutls_buffers.c:532
#5 0x00000036be2161e1 in _gnutls_recv_int (session=0x20ea0f0, type=GNUTLS_APPLICATION_DATA, htype=4294967295, data=
0x1ec9cf0 "\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305", <incomplete sequence \305>..., sizeofdata=8192) at gnutls_record.c:904
#6 0x0000003bc1226430 in soup_gnutls_read (channel=0x16e8e00, buf=
0x1ec9cf0 "\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305", <incomplete sequence \305>..., count=8192, bytes_read=0x7f5bbe83d948, err=0x7f5bbe83d988) at soup-gnutls.c:197
#7 0x00000036aba2f528 in IA__g_io_channel_read_chars (channel=0x16e8e00, buf=
0x1ec9cf0 "\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305\305", <incomplete sequence \305>..., count=8192, bytes_read=0x7f5bbe83da70, error=0x7f5bbe83d988) at giochannel.c:1860
#8 0x0000003bc123860a in read_from_network (sock=0x2d8feb0 [SoupSocket], buffer=<value optimized out>, len=8192, nread=0x7f5bbe83da70, error=0x7f5bbe83da68) at soup-socket.c:1251
#9 0x0000003bc1238a33 in soup_socket_read_until (sock=0x2d8feb0 [SoupSocket], buffer=0x7f5bbe83da80, len=8192, boundary=0x3bc12449d4, boundary_len=1, nread=0x7f5bbe83da70, got_boundary=0x7f5bbe83da7c, cancellable=0x0, error=
0x7f5bbe83da68) at soup-socket.c:1440
#10 0x0000003bc122dbe1 in read_metadata (msg=0x20274b0 [SoupMessage], to_blank=1) at soup-message-io.c:313
#11 0x0000003bc122df25 in io_read (sock=0x2d8feb0 [SoupSocket], msg=0x20274b0 [SoupMessage]) at soup-message-io.c:810
#12 0x0000003bc1237aff in process_queue_item (item=0x174f890) at soup-session-sync.c:262
#13 0x0000003bc1237d63 in send_message (session=<value optimized out>, msg=0x20274b0 [SoupMessage]) at soup-session-sync.c:322
#14 0x00007f5bc240138a in send_and_handle_redirection (soup_session=0x1539b70 [SoupSessionSync], msg=0x20274b0 [SoupMessage], new_location=0x0) at e-cal-backend-caldav.c:897
#15 0x00007f5bc2404fac in caldav_server_list_objects (cbdav=<value optimized out>, objs=0x7f5bbe83fd10, len=0x7f5bbe83fd1c, only_hrefs=<value optimized out>, start_time=1357636901, end_time=1363684901) at e-cal-backend-caldav.c:1184
#16 0x00007f5bc24066f4 in synchronize_cache (cbdav=0x1539920 [ECalBackendCalDAV], start_time=1357636901, end_time=1363684901) at e-cal-backend-caldav.c:1670
#17 0x00007f5bc24076ce in caldav_synch_slave_loop (data=<value optimized out>) at e-cal-backend-caldav.c:1976
#18 0x00000036aba62004 in g_thread_create_proxy (data=0x1926910) at gthread.c:635
#19 0x00000036aaa07851 in start_thread (arg=0x7f5bbe840700) at pthread_create.c:301
#20 0x00000036aa2e890d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Comment 3 Milan Crha 2013-02-18 11:10:15 UTC
The strange thing is that the e-d-s backtrace doesn't show the pending operation, the one Evolution is waiting for. I checked the e-d-s CalDAV code and no direct locking is involved there, so it might be something else.

Comment 4 Milan Crha 2013-02-18 11:28:50 UTC
I checked the upstream bugzilla and similar bug reports have been filed, but none has a concrete solution; the issue simply stopped occurring for the upstream reporter(s). David, do you have any exact steps for this, please? "Break connectivity" is slightly vague. Also, what calendar types do you have configured in Evolution? I see CalDAV in the backtrace and in the comments here, plus surely On This Computer/Personal and Birthdays & Anniversaries calendars, but are any other calendars configured too? I suppose you have 3 CalDAV calendars, right?

Comment 5 David Jaša 2013-02-18 12:10:47 UTC
(In reply to comment #4)
> I checked the upstream bugzilla and similar bug reports have been filed,
> but none has a concrete solution; the issue simply stopped occurring for
> the upstream reporter(s). David, do you have any exact steps for this, please?

Unfortunately, no.

> The "break connectivity" is slightly vague.

This is what happens on suspend, moving to a different location, and resuming after 20+ minutes of sleep.

> Also, what calendar types do you have configured in Evolution? I see CalDAV
> in the backtrace and in the comments here, plus surely On This
> Computer/Personal and Birthdays & Anniversaries calendars, but are any
> other calendars configured too? I suppose you have 3 CalDAV calendars,
> right?

1x  On This Computer/Personal (with events)
1x  Birthdays & Anniversaries (empty)
2x  CalDAV (both at redhat.com server)
4x  iCal (Doodle, 2x Facebook, http://www.lvb.cz/event/ical )

Comment 6 Milan Crha 2013-03-08 10:17:40 UTC
*** Bug 918935 has been marked as a duplicate of this bug. ***

Comment 13 Milan Crha 2013-06-10 15:20:34 UTC
David, could you retest, please?

Comment 14 Milan Crha 2013-07-24 11:14:06 UTC
I spoke with David on IRC and he told me that he hasn't seen this since the rebase, so I'm closing this bug.

