$ ostree init --repo=/tmp/test/repo
$ ostree --repo=/tmp/test/repo remote add --no-gpg-verify mojefedora https://firefox-flatpak.mojefedora.cz/repo/
$ ostree --repo=/tmp/test/repo pull --depth=-1 --mirror mojefedora
> Receiving metadata objects: 999/(estimating) 15.2 kB/s 319.6 kB
> Segmentation fault (core dumped)

Backtrace from the core dump:

#0  0x00007f7d5dcfeabe in multi_socket (multi=multi@entry=0x565254bf6000, checkall=checkall@entry=false, s=<optimized out>, ev_bitmask=1, running_handles=running_handles@entry=0x565254bf9160) at ../../lib/multi.c:2574
#1  0x00007f7d5dcfedab in curl_multi_socket_action (multi=0x565254bf6000, s=<optimized out>, ev_bitmask=<optimized out>, running_handles=running_handles@entry=0x565254bf9160) at ../../lib/multi.c:2760
#2  0x00007f7d5e84232a in event_cb (fd=<optimized out>, condition=<optimized out>, data=0x565254bf90e0) at src/libostree/ostree-fetcher-curl.c:451
#3  0x00007f7d5e4a9f40 in g_main_context_dispatch () at /lib64/libglib-2.0.so.0
#4  0x00007f7d5e4aa2d8 in g_main_context_iterate.isra () at /lib64/libglib-2.0.so.0
#5  0x00007f7d5e4aa383 in g_main_context_iteration () at /lib64/libglib-2.0.so.0
#6  0x00007f7d5e8097d5 in ostree_repo_pull_with_options (self=<optimized out>, remote_name_or_baseurl=<optimized out>, options=options@entry=0x565254bf5b30, progress=progress@entry=0x565254be5240, cancellable=cancellable@entry=0x0, error=error@entry=0x7ffea2cd61d8) at src/libostree/ostree-repo-pull.c:4491
#7  0x0000565253e22637 in ostree_builtin_pull (argc=<optimized out>, argv=<optimized out>, invocation=<optimized out>, cancellable=0x0, error=0x7ffea2cd61d8) at src/ostree/ot-builtin-pull.c:370
#8  0x0000565253e1a6b0 in ostree_run (argc=<optimized out>, argv=<optimized out>, commands=0x565253e40020 <commands>, res_error=0x7ffea2cd6230) at src/ostree/ot-main.c:202
#9  0x0000565253e0b84d in main (argc=6, argv=0x7ffea2cd6348) at src/ostree/main.c:138

Versions:

$ rpm -q ostree ostree-libs libcurl
> ostree-2019.2-1.fc31.x86_64
> ostree-libs-2019.2-1.fc31.x86_64
> libcurl-7.64.1-2.fc31.x86_64
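For context on frames #1 and #2: ostree's fetcher drives libcurl through the multi-socket API. libcurl tells the application which sockets and timeouts to watch via callbacks, and whenever the event loop reports activity the application calls curl_multi_socket_action() back into libcurl -- that re-entry is what event_cb() in ostree-fetcher-curl.c does. A minimal standalone sketch of the call shape (no GLib, no real transfer attached; stub names are mine, not ostree's actual code):

#include <curl/curl.h>

/* libcurl reports which sockets it wants polled, and how
   (what = CURL_POLL_IN/OUT/INOUT/REMOVE); ostree arms a GSource here. */
static int sock_cb(CURL *easy, curl_socket_t s, int what, void *userp, void *sockp)
{
  (void)easy; (void)s; (void)what; (void)userp; (void)sockp;
  return 0;
}

/* libcurl asks to be called back after timeout_ms. */
static int timer_cb(CURLM *multi, long timeout_ms, void *userp)
{
  (void)multi; (void)timeout_ms; (void)userp;
  return 0;
}

int main(void)
{
  int running = 0;

  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURLM *multi = curl_multi_init();

  curl_multi_setopt(multi, CURLMOPT_SOCKETFUNCTION, sock_cb);
  curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION, timer_cb);

  /* On socket or timer activity the application re-enters libcurl;
     this is the curl_multi_socket_action() call in frame #1. The
     segfault happens inside it while walking the transfers attached
     to the reported socket. */
  curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, &running);

  curl_multi_cleanup(multi);
  curl_global_cleanup();
  return 0;
}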
The crash site is multi_socket() in lib/multi.c; the faulting line 2574 is marked:

2526│ static CURLMcode multi_socket(struct Curl_multi *multi,
2527│                               bool checkall,
2528│                               curl_socket_t s,
2529│                               int ev_bitmask,
2530│                               int *running_handles)
2531│ {
2532│   CURLMcode result = CURLM_OK;
2533│   struct Curl_easy *data = NULL;
2534│   struct Curl_tree *t;
2535│   struct curltime now = Curl_now();
2536│
2537│   if(checkall) {
2538│     /* *perform() deals with running_handles on its own */
2539│     result = curl_multi_perform(multi, running_handles);
2540│
2541│     /* walk through each easy handle and do the socket state change magic
2542│        and callbacks */
2543│     if(result != CURLM_BAD_HANDLE) {
2544│       data = multi->easyp;
2545│       while(data && !result) {
2546│         result = singlesocket(multi, data);
2547│         data = data->next;
2548│       }
2549│     }
2550│
2551│     /* or should we fall-through and do the timer-based stuff? */
2552│     return result;
2553│   }
2554│   if(s != CURL_SOCKET_TIMEOUT) {
2555│
2556│     struct Curl_sh_entry *entry = sh_getentry(&multi->sockhash, s);
2557│
2558│     if(!entry)
2559│       /* Unmatched socket, we can't act on it but we ignore this fact. In
2560│          real-world tests it has been proved that libevent can in fact give
2561│          the application actions even though the socket was just previously
2562│          asked to get removed, so thus we better survive stray socket actions
2563│          and just move on. */
2564│       ;
2565│     else {
2566│       struct curl_llist *list = &entry->list;
2567│       struct curl_llist_element *e;
2568│       SIGPIPE_VARIABLE(pipe_st);
2569│
2570│       /* the socket can be shared by many transfers, iterate */
2571│       for(e = list->head; e; e = e->next) {
2572│         data = (struct Curl_easy *)e->ptr;
2573│
2574├>      if(data->magic != CURLEASY_MAGIC_NUMBER)
2575│          /* bad bad bad bad bad bad bad */
2576│          return CURLM_INTERNAL_ERROR;

(gdb) p data
> $1 = (struct Curl_easy *) 0x0
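So data is NULL when the check on line 2574 runs: one entry in the socket's transfer list has e->ptr == NULL, and data->magic dereferences the null pointer. Purely to illustrate the failing access (this is not the upstream fix -- the real bug is presumably whatever leaves a stale entry in the socket hash), a guard at that spot would look like:

      /* the socket can be shared by many transfers, iterate */
      for(e = list->head; e; e = e->next) {
        data = (struct Curl_easy *)e->ptr;

        /* hypothetical guard, illustration only -- not the upstream fix:
           skip a stale entry rather than dereferencing NULL below */
        if(!data)
          continue;

        if(data->magic != CURLEASY_MAGIC_NUMBER)
          /* bad bad bad bad bad bad bad */
          return CURLM_INTERNAL_ERROR;

        /* ... per-transfer handling continues as before ... */
      }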
I'm fairly confident this is a dupe of https://bugzilla.redhat.com/show_bug.cgi?id=1697566. Just going to mark it as such since there's more info there. See specifically https://bugzilla.redhat.com/show_bug.cgi?id=1697566#c21 for a potential workaround (and please post on that issue if it fixes it for you as well). Thanks!

*** This bug has been marked as a duplicate of bug 1697566 ***