Bug 504329

Summary: gvfs crash
Product: Fedora
Reporter: Alexey Kuznetsov <axet>
Component: gvfs
Assignee: Tomáš Bžatek <tbzatek>
Status: CLOSED WONTFIX
QA Contact: Fedora Extras Quality Assurance <extras-qa>
Severity: medium
Priority: low
Version: 12
CC: alexl, tbzatek, tsmetana
Hardware: All
OS: Linux
Last Closed: 2010-12-05 06:52:33 UTC
Attachments:
  full log

Description Alexey Kuznetsov 2009-06-05 15:44:53 UTC
Happened once, while checking a huge torrent file (70 GB):

gvfsd-smb[3151]: segfault at c ip 08061322 sp bf9dc240 error 4 in gvfsd-smb[8048000+22000]


[axet@axet-laptop ~]$ locate gvfsd-smb
/usr/libexec/gvfsd-smb
/usr/libexec/gvfsd-smb-browse
[axet@axet-laptop ~]$ rpm -qf /usr/libexec/gvfsd-smb
gvfs-smb-1.2.3-2.fc11.i586

Comment 1 Tomáš Bžatek 2009-06-05 15:57:48 UTC
Can you please provide a backtrace for this crash? The information you provided is not very useful: https://fedoraproject.org/wiki/StackTraces

You can execute the backend directly:
  GVFS_DEBUG=1 /usr/libexec/gvfsd-smb server=localhost share=public user=tbzatek
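
For example, one way to run it under gdb and capture the trace once it crashes (a sketch; adjust server/share/user to your own setup):

  GVFS_DEBUG=1 gdb --args /usr/libexec/gvfsd-smb server=localhost share=public user=tbzatek
  (gdb) run
  ... reproduce the crash ...
  (gdb) thread apply all backtrace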

Comment 2 Alexey Kuznetsov 2009-06-05 18:42:33 UTC
Unfortunately I'm unable to install the debuginfo packages; all the F11 mirrors are busy syncing the new Fedora release.

...

Trying other mirror.
http://ftp.jaist.ac.jp/pub/Linux/Fedora/releases/11/Everything/i386/debug/repodata/repomd.xml: [Errno 14] HTTP Error 403: Forbidden
Trying other mirror.
http://ftp.kddilabs.jp/Linux/packages/fedora/releases/11/Everything/i386/debug/repodata/repomd.xml: [Errno 14] HTTP Error 403: Forbidden

...

Here is the tail of the log file you requested:

Queued new job 0x942b158 (GVfsJobSeekWrite)
job_read send reply, 32768 bytes
Queued new job 0x9442ee0 (GVfsJobRead)
job_seek_write send reply, pos 1022902272
Queued new job 0x9439c80 (GVfsJobWrite)
job_read send reply, 65535 bytes
Queued new job 0x9442d30 (GVfsJobRead)
job_write send reply
Queued new job 0x943a288 (GVfsJobWrite)
job_read send reply, 65535 bytes
Queued new job 0x942af80 (GVfsJobSeekRead)
job_write send reply
Queued new job 0x9439f08 (GVfsJobWrite)
job_seek_read send reply, pos 451293184
Queued new job 0x9439e30 (GVfsJobRead)
job_write send reply
Queued new job 0x9462aa8 (GVfsJobWrite)
job_read send reply, 65535 bytes
Queued new job 0x943a1f8 (GVfsJobRead)
job_write send reply
Queued new job 0x9444ad8 (GVfsJobCloseWrite)
job_read send reply, 32768 bytes
Queued new job 0x9462d48 (GVfsJobRead)
job_close_write send reply

Comment 3 Alexey Kuznetsov 2009-06-05 18:43:58 UTC
Created attachment 346700 [details]
full log

Comment 4 Tomáš Bžatek 2009-06-08 09:18:12 UTC
Yeah, release engineering is switching rawhide to F12 and making some changes to the repository information.

You can download packages directly from the build system (check your version and arch first): http://koji.fedoraproject.org/koji/buildinfo?buildID=102520

The log provided doesn't show any useful information; everything looks correct.

Comment 5 Alexey Kuznetsov 2009-06-08 10:15:09 UTC
Too many dependencies; let's do it tomorrow, after the mirrors open up again.

Comment 6 Alexey Kuznetsov 2009-06-09 15:44:38 UTC
(gdb) t a a bt

Thread 28 (Thread 0xb69ffb70 (LWP 3809)):
#0  _int_malloc (av=<value optimized out>, bytes=<value optimized out>)
    at malloc.c:4682
#1  0x0098aafe in *__GI___libc_malloc (bytes=486) at malloc.c:3638
#2  0x00a9b97e in talloc_strdup () from /usr/lib/libtalloc.so.1
#3  0x002663a0 in SMBC_parse_path (ctx=0xb55ecd48, context=0xb7200c68, 
    fname=0xb52034c8 "smb://mini.local/www/%D0%9F%D0%BE%D0%BB%D0%BD%D0%BE%D0%BC%D0%B5%D1%82%D1%80%D0%B0%D0%B6%D0%BD%D1%8B%D0%B5/%D0%97%D0%BE%D0%BB%D0%BE%D1%82%D0%BE%D0%B9%20%D0%B3%D0%BB%D0%BE%D0%B1%D1%83%D1%81%201%20(2008%"..., 
    pp_workgroup=0x0, pp_server=0xb69ff18c, pp_share=0xb69ff188, 
    pp_path=0xb69ff17c, pp_user=0xb69ff184, pp_password=0xb69ff180, 
    pp_options=0x0) at libsmb/libsmb_path.c:263
#4  0x00265593 in SMBC_read_ctx (context=0xb7200c68, file=0xb57e31c8, 
    buf=0x8eaa960, count=65535) at libsmb/libsmb_file.c:282
#5  0x0804eb39 in do_read (backend=0x8e84868, job=0x8e89a68, 
    handle=0xb57e31c8, buffer=0x8eaa960 "\200W\250", bytes_requested=0)
    at gvfsbackendsmb.c:733
#6  0x0805875b in run (job=0x8e89a68) at gvfsjobread.c:124
#7  0x080558ad in g_vfs_job_run (job=0x8e89a68) at gvfsjob.c:198
#8  0x08052e2e in job_handler_callback (data=0x8e89a68, user_data=0x8e75b20)
    at gvfsdaemon.c:142
#9  0x00b592cf in g_thread_pool_thread_proxy (data=0x8e7d708)
    at gthreadpool.c:265
#10 0x00b57c7f in g_thread_create_proxy (data=0x8e8ccb0) at gthread.c:635
#11 0x00ac0935 in start_thread (arg=0xb69ffb70) at pthread_create.c:297
#12 0x009f482e in clone () at ../sysdeps/unix/sysv/linux/i386/clone.S:130

Thread 1 (Thread 0xb7f0e9c0 (LWP 2742)):
#0  0x08061322 in command_read_cb (source_object=0xb7254ca0, res=0x8e76090, 
    user_data=0xb58e9628) at gvfschannel.c:477
#1  0x0014375f in async_ready_callback_wrapper (source_object=0xb7254ca0, 
    res=0x8e76090, user_data=0xb58e9628) at ginputstream.c:480
#2  0x0014d852 in IA__g_simple_async_result_complete (simple=0x8e76090)
    at gsimpleasyncresult.c:567
#3  0x0015bf37 in read_async_cb (data=0x8e7e268, condition=17, fd=33)
    at gunixinputstream.c:476
#4  0x0012462b in fd_source_dispatch (source=0x8e92ef0, 
    callback=0x15be20 <read_async_cb>, user_data=0x8e7e268)
    at gasynchelper.c:117
#5  0x00b2d1e8 in g_main_dispatch (context=<value optimized out>)
    at gmain.c:1814
#6  IA__g_main_context_dispatch (context=<value optimized out>) at gmain.c:2367
#7  0x00b307f8 in g_main_context_iterate (context=0x8e7cea0, 
    block=<value optimized out>, dispatch=1, self=0x8e74040) at gmain.c:2448
#8  0x00b30caf in IA__g_main_loop_run (loop=0x8e7d5d0) at gmain.c:2656
#9  0x0805190a in daemon_main (argc=4, argv=0xbfd30ee4, max_job_threads=1, 
    default_type=0x80639b8 "smb-share", mountable_name=0x0, 
    first_type_name=0x80639b8 "smb-share") at daemon-main.c:294
#10 0x08051c16 in main (argc=4, argv=0xbfd30ee4) at daemon-main-generic.c:39
Current language:  auto; currently minimal
Current language:  auto; currently c
(gdb)

Comment 7 Alexey Kuznetsov 2009-06-09 15:45:19 UTC
Program received signal SIGSEGV, Segmentation fault.
0x08061322 in command_read_cb (source_object=0xb7254ca0, res=0x8e76090, 
    user_data=0xb58e9628) at gvfschannel.c:477
477	      reader->channel->priv->request_reader = NULL;
(gdb) t a a bt
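
For reference, the kernel's "segfault at c ... error 4" message from the original report decodes as a user-mode read of address 0xc, i.e. a NULL pointer plus a small member offset. Evaluating reader->channel->priv->request_reader has to read channel and then priv first, so a NULL pointer in that chain, with the next member at offset 12, would produce exactly this fault. A minimal illustration of the pattern (hypothetical struct layout, chosen only so the member lands at offset 0xc on 32-bit):

#include <stdio.h>

/* Hypothetical layout: on 32-bit, three pointer members before priv
 * put it at byte offset 12 (0xc). */
struct channel {
    void *a, *b, *c;
    void *priv;
};

int main(void)
{
    struct channel *channel = NULL;   /* stand-in for reader->channel */
    /* Reading channel->priv dereferences address 0 + 0xc; the kernel
     * logs this as "segfault at c ... error 4" (user-mode read). */
    printf("%p\n", channel->priv);
    return 0;
}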

Comment 8 Alexey Kuznetsov 2009-06-09 15:45:43 UTC
I'm still under the debugger, awaiting your command, sir!

Comment 9 Tomáš Bžatek 2009-06-10 14:43:10 UTC
Even though bytes_requested=0 looks suspicious, SMBC_read_ctx() is called with count=65535 anyway (which is odd, since we never modify that number; gdb is probably showing a bogus value for an optimized-out variable). Perhaps "t a a bt full" would show more.

Please also post your samba, libsmbclient and libtalloc package versions.

Comment 10 Alexey Kuznetsov 2009-06-11 06:03:56 UTC
Program received signal SIGSEGV, Segmentation fault.
0x08061322 in command_read_cb (source_object=0x83a0078, res=0x8396f80, 
    user_data=0xb38e2d88) at gvfschannel.c:477
477	      reader->channel->priv->request_reader = NULL;
(gdb) t a a bt full

Thread 279 (Thread 0xb7d67b70 (LWP 31107)):
#0  0x00878424 in __kernel_vsyscall ()
No symbol table info available.
#1  0x002512d2 in pthread_cond_timedwait@@GLIBC_2.3.2 ()
    at ../nptl/sysdeps/unix/sysv/linux/i386/i486/pthread_cond_timedwait.S:179
No locals.
#2  0x00116f2e in g_cond_timed_wait_posix_impl (cond=0xfffffdfc, 
    entered_mutex=0x176ac9, abs_time=0xb7d672b8) at gthread-posix.c:242
        result = <value optimized out>
        end_time = {tv_sec = 1244664976, tv_nsec = 785478000}
        timed_out = <value optimized out>
        __PRETTY_FUNCTION__ = "g_cond_timed_wait_posix_impl"
#3  0x00178d3c in g_async_queue_pop_intern_unlocked (queue=0x839e740, 
    try=<value optimized out>, end_time=0xb7d672b8) at gasyncqueue.c:365
        retval = <value optimized out>
        __PRETTY_FUNCTION__ = "g_async_queue_pop_intern_unlocked"
#4  0x001c9167 in g_thread_pool_wait_for_new_task (pool=<value optimized out>)
    at gthreadpool.c:220
        end_time = {tv_sec = 1244664976, tv_usec = 785478}
#5  g_thread_pool_thread_proxy (pool=<value optimized out>)
    at gthreadpool.c:254
        task = <value optimized out>
        pool = 0x839e708
#6  0x001c7c7f in g_thread_create_proxy (data=0x83b0bb8) at gthread.c:635
        __PRETTY_FUNCTION__ = "g_thread_create_proxy"
#7  0x0024c935 in start_thread (arg=0xb7d67b70) at pthread_create.c:297
        __res = <value optimized out>
        __ignore1 = <value optimized out>
        __ignore2 = <value optimized out>
        pd = 0xb7d67b70
        now = <value optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {2486260, 0, 4001536, 
                -1210682328, 1160747777, -1604644754}, mask_was_saved = 0}}, 
          priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, 
              cleanup = 0x0, canceltype = 0}}}
        not_first_call = <value optimized out>
#8  0x003ce82e in clone () at ../sysdeps/unix/sysv/linux/i386/clone.S:130
No locals.

Thread 1 (Thread 0xb7f689c0 (LWP 24071)):
#0  0x08061322 in command_read_cb (source_object=0x83a0078, res=0x8396f80, 
    user_data=0xb38e2d88) at gvfschannel.c:477
        count_read = <value optimized out>
#1  0x002a575f in async_ready_callback_wrapper (source_object=0x83a0078, 
    res=0x8396f80, user_data=0xb38e2d88) at ginputstream.c:480
No locals.
#2  0x002af852 in IA__g_simple_async_result_complete (simple=0x8396f80)
    at gsimpleasyncresult.c:567
        __PRETTY_FUNCTION__ = "IA__g_simple_async_result_complete"
#3  0x002bdf37 in read_async_cb (data=0x83ae150, condition=17, fd=22)
    at gunixinputstream.c:476
        simple = 0x8396f80
        error = 0x0
        count_read = 0
#4  0x0028662b in fd_source_dispatch (source=0x83b3950, 
    callback=0x2bde20 <read_async_cb>, user_data=0x83ae150)
    at gasynchelper.c:117
        __PRETTY_FUNCTION__ = "fd_source_dispatch"
#5  0x0019d1e8 in g_main_dispatch (context=<value optimized out>)
    at gmain.c:1814
        dispatch = 0x2865f0 <fd_source_dispatch>
        user_data = 0x83ae150
        callback = 0x2bde20 <read_async_cb>
        cb_funcs = 0x2464bc
        cb_data = 0x83c5ee0
        current_source_link = {data = 0x83b3950, next = 0x0}
        source = 0x83b3950
        current = 0x8396718
        i = 1
#6  IA__g_main_context_dispatch (context=<value optimized out>) at gmain.c:2367
No locals.
#7  0x001a07f8 in g_main_context_iterate (context=0x839dea0, 
    block=<value optimized out>, dispatch=1, self=0x8395040) at gmain.c:2448
        max_priority = 2147483647
        timeout = -1
        some_ready = 1
        nfds = <value optimized out>
        allocated_nfds = <value optimized out>
        fds = <value optimized out>
        __PRETTY_FUNCTION__ = "g_main_context_iterate"
#8  0x001a0caf in IA__g_main_loop_run (loop=0x839e5d0) at gmain.c:2656
        self = 0x8395040
        __PRETTY_FUNCTION__ = "IA__g_main_loop_run"
#9  0x0805190a in daemon_main (argc=4, argv=0xbff8da74, max_job_threads=1, 
    default_type=0x80639b8 "smb-share", mountable_name=0x0, 
    first_type_name=0x80639b8 "smb-share") at daemon-main.c:294
        var_args = <value optimized out>
        connection = 0x839b458
        loop = 0x0
        daemon = <value optimized out>
        derror = {name = 0x0, message = 0x0, dummy1 = 1, dummy2 = 0, 
          dummy3 = 0, dummy4 = 1, dummy5 = 0, padding1 = 0x15c}
        mount_spec = 0x0
        mount_source = 0x839b458
        error = 0x0
        res = <value optimized out>
        type = <value optimized out>
#10 0x08051c16 in main (argc=4, argv=0xbff8da74) at daemon-main-generic.c:39
No locals.
(gdb)

Comment 11 Tomáš Bžatek 2009-06-16 14:21:22 UTC
OK, so this is outside the scope of any particular backend. It is going to be hard to debug remotely.

I'm unable to reproduce the issue on my machines, so I've prepared a testing build which I would like to ask you to try. Please download directly from Koji: http://koji.fedoraproject.org/koji/taskinfo?taskID=1418083

The `GVFS_DEBUG=1 /usr/libexec/gvfsd-smb server=...` invocation should now give you more verbose debug messages. I made some changes to the code, so it may crash in a different place or not crash at all, and you might see inconsistent behaviour (that's why this is only a testing build, not intended for production).

Comment 12 Alexey Kuznetsov 2009-06-17 06:15:52 UTC
I have switched back to F10; too many problems on both of my notebooks (-intel, -ati).

I will continue my testing in a virtual machine.

My current environment is: Fedora 10 - VirtualBox - Fedora 11 - ktorrent - remote Samba share on a Fedora 11 server.

The crash is still reproducible; it can take about 1-3 hours to occur.


Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0xb7d1cb70 (LWP 9873)]
handle_incoming_pdu (cli=<value optimized out>) at libsmb/async_smb.c:278
278		mid = SVAL(pdu, smb_mid);
(gdb) t a a bt full

Thread 3 (Thread 0xb7d1cb70 (LWP 9873)):
#0  handle_incoming_pdu (cli=<value optimized out>) at libsmb/async_smb.c:278
        buf_len = <value optimized out>
        rest_len = <value optimized out>
        pdu = 0xb72f2ff0 "\205"
        req = <value optimized out>
        mid = <value optimized out>
#1  cli_state_handler (cli=<value optimized out>) at libsmb/async_smb.c:386
        res = <value optimized out>
        available = 4
        old_size = <value optimized out>
        new_size = <value optimized out>
        req = <value optimized out>
        __FUNCTION__ = "cli_state_handler"
#2  0x0049d9c6 in run_events (event_ctx=0xb7212d40, selrtn=2, 
    read_fds=0xb7d1bf8c, write_fds=0xb7d1bf0c) at lib/events.c:255
        flags = <value optimized out>
        fde = 0xb72f2ff0
        now = {tv_sec = 1245218850, tv_usec = 731381}
        __FUNCTION__ = "run_events"
#3  0x0049dc40 in event_loop_once (ev=0xb7212d40) at lib/events.c:312
        now = {tv_sec = 1245218850, tv_usec = 731333}
        to = {tv_sec = 9998, tv_usec = 999975}
        r_fds = {fds_bits = {8192, 0 <repeats 31 times>}}
        w_fds = {fds_bits = {8192, 0 <repeats 31 times>}}
        maxfd = 13
        ret = <value optimized out>
#4  0x004c3c2e in cli_pull (cli=0xb7257830, fnum=<value optimized out>, 
    start_offset=886833152, size=65535, window_size=65535, 
    sink=0x4c2b30 <cli_read_sink>, priv=0xb7d1c128, received=0xb7d1c0f8)
    at libsmb/clireadwrite.c:441
        frame = <value optimized out>
        req = 0xb72a5e40
#5  0x004c3ced in cli_read (cli=0xb7257830, fnum=-1221644304, 
    buf=0x8f55800 "\222\266\235\325\177u2k;\b\23\355n\21\333\65Z\275\255>r\367\255\365\65\367\5߳\312n\210\262a\2n;\233`\323\vg*9O[$\234\235\3t\236\64\ar\315\345\213h\230c\251\353\353y\274\234c\37~\203\271\303Wr\217+\253\200mN\331¥\254*\6\24\244#g\261\261\313s\252vz\214\310K*T[\24\375\271\215K\24\333F\320\334\n\24¡\356\373\253n\225\22\301\22\211\320US홺һ\372~'\251\261\71\274\274\314D\302\324eV\203\24\376\267Q\330aq\231jɮ\277\66\375#.\21\266;\360\275\255\344\236L\356M\30C\360\322\"<\333I\16\325ސA,S\233x3\274\232ΜH"..., offset=886833152, 
    size=65535) at libsmb/clireadwrite.c:464
        status = <value optimized out>
        ret = -5249072672825933824
#6  0x0042e603 in SMBC_read_ctx (context=0xb7200c68, file=0xb729e8f8, 
    buf=0x8f55800, count=65535) at libsmb/libsmb_file.c:306
        ret = <value optimized out>
        server = 0xb72b3f50 "192.168.54.3"
        share = 0xb72c7408 "www"
        user = 0xb7254630 ""
        password = 0xb72c1f48 ""
        path = 0xb7278e28 "\\Полнометражные\\Золотой глобус 2 (2009)\\Золотой глобус. 31. ОАЭ.avi"
        targetpath = 0xb727a6d0 "\\Полнометражные\\Золотой глобус 2 (2009)\\Золотой глобус. 31. ОАЭ.avi"
        targetcli = 0xb7257830
        frame = 0xb7254e30
        offset = 886833152
        __FUNCTION__ = "SMBC_read_ctx"
#7  0x0804eb39 in do_read (backend=0x8f21868, job=0x8f34838, 
    handle=0xb729e8f8, 
    buffer=0x8f55800 "\222\266\235\325\177u2k;\b\23\355n\21\333\65Z\275\255>r\367\255\365\65\367\5߳\312n\210\262a\2n;\233`\323\vg*9O[$\234\235\3t\236\64\ar\315\345\213h\230c\251\353\353y\274\234c\37~\203\271\303Wr\217+\253\200mN\331¥\254*\6\24\244#g\261\261\313s\252vz\214\310K*T[\24\375\271\215K\24\333F\320\334\n\24¡\356\373\253n\225\22\301\22\211\320US홺һ\372~'\251\261\71\274\274\314D\302\324eV\203\24\376\267Q\330aq\231jɮ\277\66\375#.\21\266;\360\275\255\344\236L\356M\30C\360\322\"<\333I\16\325ސA,S\233x3\274\232ΜH"..., bytes_requested=8703680)
    at gvfsbackendsmb.c:733
        op_backend = 0x8f21868
        res = <value optimized out>
        smbc_read = 0xb72f2ff0
#8  0x0805875b in run (job=0x8f34838) at gvfsjobread.c:124
No locals.
#9  0x080558ad in g_vfs_job_run (job=0x8f34838) at gvfsjob.c:198
        class = 0x8f31f20
#10 0x08052e2e in job_handler_callback (data=0x8f34838, user_data=0x8f12b20)
    at gvfsdaemon.c:142
No locals.
#11 0x00bf12cf in g_thread_pool_thread_proxy (data=0x8f1a7e8)
    at gthreadpool.c:265
        task = 0x8f34838
        pool = 0x8f1a7e8
#12 0x00befc7f in g_thread_create_proxy (data=0x8f2e970) at gthread.c:635
        __PRETTY_FUNCTION__ = "g_thread_create_proxy"
#13 0x00b38935 in start_thread (arg=0xb7d1cb70) at pthread_create.c:297
        __res = <value optimized out>
        __ignore1 = <value optimized out>
        __ignore2 = <value optimized out>
        pd = 0xb7d1cb70
        now = <value optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {11841524, 0, 4001536, 
                -1210989528, -176028657, 823562081}, mask_was_saved = 0}}, 
          priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, 
              cleanup = 0x0, canceltype = 0}}}
        not_first_call = <value optimized out>
#14 0x00a6c82e in clone () at ../sysdeps/unix/sysv/linux/i386/clone.S:130
No locals.

Thread 1 (Thread 0xb7f1d9c0 (LWP 9848)):
#0  0x00b62422 in __kernel_vsyscall ()
No symbol table info available.
#1  0x00a62276 in *__GI___poll (fds=0xafbff4, nfds=18, timeout=-1)
    at ../sysdeps/unix/sysv/linux/poll.c:87
        resultvar = <value optimized out>
        oldtype = 0
        result = <value optimized out>
#2  0x00bd5aeb in IA__g_poll (fds=0x8f1c870, nfds=18, timeout=-1)
    at gpoll.c:127
No locals.
#3  0x00bc8783 in g_main_context_poll (n_fds=<value optimized out>, 
    fds=<value optimized out>, priority=<value optimized out>, 
    timeout=<value optimized out>, context=<value optimized out>)
    at gmain.c:2761
        poll_func = 0x12
#4  g_main_context_iterate (n_fds=<value optimized out>, 
    fds=<value optimized out>, priority=<value optimized out>, 
    timeout=<value optimized out>, context=<value optimized out>)
    at gmain.c:2443
        max_priority = 2147483647
        timeout = -1
        some_ready = <value optimized out>
        nfds = 18
        allocated_nfds = <value optimized out>
        fds = <value optimized out>
        __PRETTY_FUNCTION__ = "g_main_context_iterate"
#5  0x00bc8caf in IA__g_main_loop_run (loop=0x8f212c0) at gmain.c:2656
        self = 0x8f11040
        __PRETTY_FUNCTION__ = "IA__g_main_loop_run"
#6  0x0805190a in daemon_main (argc=4, argv=0xbfc3bdf4, max_job_threads=1, 
    default_type=0x8063ad8 "smb-share", mountable_name=0x0, 
    first_type_name=0x8063ad8 "smb-share") at daemon-main.c:294
        var_args = <value optimized out>
        connection = 0x8f12030
        loop = 0xfffffdfc
        daemon = <value optimized out>
        derror = {name = 0x0, message = 0x0, dummy1 = 1, dummy2 = 0, 
          dummy3 = 0, dummy4 = 1, dummy5 = 0, padding1 = 0x15c}
        mount_spec = 0x8f19e10
        mount_source = 0x8f12030
        error = 0x0
        res = <value optimized out>
        type = <value optimized out>
#7  0x08051c16 in main (argc=4, argv=0xbfc3bdf4) at daemon-main-generic.c:39
No locals.
(gdb)

Comment 13 Alexey Kuznetsov 2009-06-17 06:16:21 UTC
server output:

....

job_seek_read send reply, pos 886833152
command_read_cb: reader = 0x0xb72546f0, reader->channel = 0x0x8f2f5e0
command_read_cb: reader->channel->priv = 0x0x8f2f5f8
command_read_cb: count_read = 20, reader->data_pos = 0
command_read_cb: queueing data_read request... reader->channel = 0x0x8f2f5e0, reader->channel->priv = 0x0x8f2f5f8
Queued new job 0x8f34838 (GVfsJobRead)
finish_request: reader->channel = 0x0x8f2f5e0, reader->channel->priv = 0x0x8f2f5f8
job_read send reply, 65535 bytes
Queued new job 0x8f55318 (GVfsJobRead)
command_read_cb: reader = 0x0xb72b6c08, reader->channel = 0x0x8f30f60
command_read_cb: reader->channel->priv = 0x0x8f30f78
command_read_cb: count_read = 20, reader->data_pos = 0
command_read_cb: queueing data_read request... reader->channel = 0x0x8f30f60, reader->channel->priv = 0x0x8f30f78
finish_request: reader->channel = 0x0x8f30f60, reader->channel->priv = 0x0x8f30f78

Comment 14 Tomáš Bžatek 2009-06-17 14:21:05 UTC
This is really weird; the crash seems to happen at random places. The only idea left is to run the backend inside valgrind instead of gdb, and see whether it detects heap corruption or similar problems.

(In reply to comment #12)
> The crash is still reproducible; it can take about 1-3 hours to occur.
So, using ktorrent, I guess the I/O goes through gvfs-fuse-daemon, randomly reading random-size blocks all over the file, am I right? I will try to write a small torture test for the file (a rough sketch follows).
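
A rough sketch of such a torture test (untested; assumes a large file reachable through the gvfs-fuse mount, e.g. under ~/.gvfs, passed as the first argument):

/* Compile with: gcc -D_FILE_OFFSET_BITS=64 -o torture torture.c */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "testfile"; /* example path */
    static char buf[65536];

    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    srand((unsigned)time(NULL));
    for (;;) {
        /* Random offset and random block size up to 64 KiB, mimicking
         * a torrent client's hash check over the file. */
        off_t off = (off_t)(((double)rand() / RAND_MAX) * st.st_size);
        size_t len = (size_t)(rand() % (int)sizeof(buf)) + 1;
        if (pread(fd, buf, len, off) < 0) { perror("pread"); return 1; }
    }
}

Left running against a big file on the smb mount, this should hammer gvfsd-smb with the same pattern of random seeks and reads.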

Comment 15 Alexey Kuznetsov 2009-06-17 14:49:18 UTC
My torrents are about 40 GB :) And I probably (I'm not sure) can't reproduce the problem with the Transmission BT client. ktorrent may of course have a different engine, which can use a different read/write technique.

PS: virtualization is awesome stuff; one click and I have my OS back in its previous debug state, with my application still running :)

Comment 16 Alexey Kuznetsov 2009-06-17 14:53:00 UTC
Wait a moment: valgrind can't diagnose some state (for example, memory leaks) when the process crashes.

Comment 17 Tomáš Bžatek 2009-06-17 15:33:44 UTC
Valgrind reports issues continuously, as they happen; I don't need a full memory-leak report, just the messages about invalid reads and writes, or any other corruption warnings.

Strange that it doesn't happen with Transmission. It probably comes down to the way each application accesses the data.

Comment 18 Alexey Kuznetsov 2009-06-18 14:53:45 UTC
It won't crash under valgrind.

Comment 19 Alexey Kuznetsov 2009-06-19 08:47:45 UTC
It still crashes without valgrind, after about 3 hours of running.

Comment 20 Alexey Kuznetsov 2009-06-20 15:46:46 UTC
Same on F10.

gvfsd-smb[14931]: segfault at c ip 08060832 sp bf8190c0 error 4 in gvfsd-smb[8048000+21000]
[axet@axet-laptop ~]$ rpm -q gvfsd-smb
package gvfsd-smb is not installed
[axet@axet-laptop ~]$ rpm -qf /usr/libexec/gvfsd-smb
gvfs-smb-1.0.3-10.fc10.i386
[axet@axet-laptop ~]$ 


I don't remember this happening 3-4 months ago; it must be some new behaviour that appeared after the F10 release.

Comment 21 Alexey Kuznetsov 2009-09-21 10:43:41 UTC
[axet@axet-laptop ~]$ valgrind /usr/libexec/gvfsd-smb server=mini.local share=www
==8518== Memcheck, a memory error detector.
==8518== Copyright (C) 2002-2008, and GNU GPL'd, by Julian Seward et al.
==8518== Using LibVEX rev 1884, a library for dynamic binary translation.
==8518== Copyright (C) 2004-2008, and GNU GPL'd, by OpenWorks LLP.
==8518== Using valgrind-3.4.1, a dynamic binary instrumentation framework.
==8518== Copyright (C) 2000-2008, and GNU GPL'd, by Julian Seward et al.
==8518== For more details, rerun with: -v
==8518== 
==8518== Thread 2:
==8518== Invalid read of size 2
==8518==    at 0x40F182E: cli_state_handler (async_smb.c:278)
==8518==    by 0x40B59C5: run_events (events.c:255)
==8518==    by 0x40B5C3F: event_loop_once (events.c:312)
==8518==    by 0x40DBC2D: cli_pull (clireadwrite.c:441)
==8518==    by 0x40DBCEC: cli_read (clireadwrite.c:464)
==8518==    by 0x4046602: SMBC_read_ctx (libsmb_file.c:306)
==8518==    by 0x804EB38: do_read (gvfsbackendsmb.c:733)
==8518==    by 0x805875A: run (gvfsjobread.c:124)
==8518==    by 0x80558AC: g_vfs_job_run (gvfsjob.c:198)
==8518==    by 0x8052E2D: job_handler_callback (gvfsdaemon.c:142)
==8518==    by 0x61F80BE: (within /lib/libglib-2.0.so.0.2000.5)
==8518==    by 0x61F6A4E: (within /lib/libglib-2.0.so.0.2000.5)
==8518==  Address 0x4a3d2da is not stack'd, malloc'd or (recently) free'd

Comment 22 Alexey Kuznetsov 2009-09-21 15:57:29 UTC
valgrind log for:

gvfs-smb-1.2.3-12.fc11.i586
libsmbclient-3.3.2-0.33.fc11.i586

The same function as in comment 12.

In my source tree, the crashing line is:

mid = SVAL(pdu, smb_mid);
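
For context, SVAL(buf, pos) in Samba reads an unsigned 16-bit little-endian value at byte offset pos, roughly like this (a sketch from memory, not the verbatim macro):

#include <stddef.h>

/* Approximately what Samba's SVAL(buf, pos) expands to. */
static unsigned sval(const unsigned char *buf, size_t pos)
{
    return buf[pos] | ((unsigned)buf[pos + 1] << 8);
}

So the crashing line touches two bytes at pdu + smb_mid. If pdu points into freed memory, or into a buffer shorter than smb_mid + 2 bytes, that is exactly the "Invalid read of size 2" valgrind reports in comment 21, and a SIGSEGV once the page is unmapped.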

Comment 23 Alexey Kuznetsov 2009-11-09 00:23:21 UTC
*** Bug 526701 has been marked as a duplicate of this bug. ***

Comment 24 Alexey Kuznetsov 2009-11-09 00:23:39 UTC
*** Bug 533757 has been marked as a duplicate of this bug. ***

Comment 25 Bug Zapper 2010-11-04 11:10:22 UTC
This message is a reminder that Fedora 12 is nearing its end of life.
Approximately 30 (thirty) days from now Fedora will stop maintaining
and issuing updates for Fedora 12.  It is Fedora's policy to close all
bug reports from releases that are no longer maintained.  At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '12'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 12's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that 
we may not be able to fix it before Fedora 12 is end of life.  If you 
would still like to see this bug fixed and are able to reproduce it 
against a later version of Fedora please change the 'version' of this 
bug to the applicable version.  If you are unable to change the version, 
please add a comment here and someone will do it for you.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events.  Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

The process we are following is described here: 
http://fedoraproject.org/wiki/BugZappers/HouseKeeping

Comment 26 Bug Zapper 2010-12-05 06:52:33 UTC
Fedora 12 changed to end-of-life (EOL) status on 2010-12-02. Fedora 12 is 
no longer maintained, which means that it will not receive any further 
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of 
Fedora please feel free to reopen this bug against that version.

Thank you for reporting this bug and we are sorry it could not be fixed.