+++ This bug was initially created as a clone of Bug #1399072 +++
+++ This bug was initially created as a clone of Bug #1396010 +++

Description of problem:
=======================
On a 2 x (4 + 2) distributed-disperse volume, killed 4 bricks (2 per subvolume, matching the redundancy count) and observed that healing started, which is not expected because all the data bricks are up and running.

Version-Release number of selected component (if applicable):
3.8.4-5.el7rhgs.x86_64

How reproducible:
=================
Always

Steps to Reproduce:
===================
1) Create a distributed-disperse volume and start it.
2) FUSE mount the volume on a client.
3) Kill bricks up to the redundancy count.
4) From the mount point, untar the Linux kernel package and wait till it completes.
5) Check 'gluster volume heal <volname> info'; we can see that heal is getting triggered.

I am also seeing high CPU utilization on the nodes, and we suspect it is growing because of this issue.

Volume notation: d x (k + n), where d is the distribute count, k is the data brick count and n is the redundancy count; here 2 x (4+2).

Actual results:
===============
Even though all the data bricks are up, healing is getting started.

Expected results:
=================
Healing should not happen, as all the data bricks are up and running.

--- Additional comment from Red Hat Bugzilla Rules Engine on 2016-11-17 04:04:35 EST ---

This bug is automatically being proposed for the current release of Red Hat Gluster Storage 3 under active development, by setting the release flag 'rhgs-3.2.0' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.
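The steps above can be sketched as a shell session (server names, brick paths, PIDs, and the volume name "disp" are placeholders, not taken from this report):

```sh
# 2 x (4+2) distributed-disperse volume: 12 bricks, 2 disperse subvolumes,
# each with 4 data + 2 redundancy bricks (all names are placeholders).
gluster volume create disp disperse-data 4 redundancy 2 \
    server{1..6}:/bricks/disp1 server{1..6}:/bricks/disp2 force
gluster volume start disp
mount -t glusterfs server1:/disp /mnt/disp

# Kill bricks up to the redundancy count (2 per subvolume, 4 in total);
# brick PIDs can be found in 'gluster volume status disp'.
kill -9 <brick-pid-1> <brick-pid-2> <brick-pid-3> <brick-pid-4>

# Generate load from the mount point, then check heal info.
tar -xf linux-kernel.tar.xz -C /mnt/disp
gluster volume heal disp info
```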
--- Additional comment from Prasad Desala on 2016-11-17 04:11:06 EST ---

  PID USER  PR NI    VIRT    RES  SHR S  %CPU %MEM     TIME+ COMMAND
 4470 root  20  0 2234880 274460 3056 S 100.3  3.4 236:28.74 glusterfs

[root@dhcp37-190 ~]# ./profile.sh 4470
  1155 pthread_cond_timedwait@@GLIBC_2.3.2,syncenv_task,syncenv_processor,start_thread,clone
   435 pthread_cond_wait@@GLIBC_2.3.2,syncop_getxattr,ec_shd_selfheal,ec_shd_index_heal,syncop_mt_dir_scan,ec_shd_index_sweep,ec_shd_index_healer,start_thread,clone
   181 epoll_wait,event_dispatch_epoll_worker,start_thread,clone
   155 pthread_cond_wait@@GLIBC_2.3.2,syncop_mt_dir_scan,ec_shd_index_sweep,ec_shd_index_healer,start_thread,clone
   137 __memset_sse2,calloc,__gf_calloc,synctask_create,synctask_new1,synctask_new,ec_launch_heal,ec_gf_getxattr,syncop_getxattr,ec_shd_selfheal,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
   134 pthread_cond_timedwait@@GLIBC_2.3.2,__ec_shd_healer_wait,ec_shd_healer_wait,ec_shd_index_healer,start_thread,clone
   100 sigwait,glusterfs_sigwaiter,start_thread,clone
   100 pthread_join,event_dispatch_epoll,main
    99 nanosleep,gf_timer_proc,start_thread,clone
    50 pthread_cond_wait@@GLIBC_2.3.2,syncop_lookup,syncop_inode_find,ec_shd_index_heal,syncop_mt_dir_scan,ec_shd_index_sweep,ec_shd_index_healer,start_thread,clone
    20 pthread_cond_wait@@GLIBC_2.3.2,syncop_getxattr,syncop_gfid_to_path,ec_shd_index_heal,syncop_mt_dir_scan,ec_shd_index_sweep,ec_shd_index_healer,start_thread,clone
    17 __lll_lock_wait,_L_lock_840,pthread_mutex_lock,_dl_addr,backtrace_symbols_fd,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_uninodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
    15 __lll_lock_wait,_L_lock_840,pthread_mutex_lock,_dl_addr,backtrace_symbols_fd,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
    14 __lll_lock_wait,_L_lock_840,pthread_mutex_lock,_dl_addr,backtrace_symbols_fd,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_open,ec_heal_data,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
    11 unlink,sys_unlink,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_open,ec_heal_data,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
    10 unlink,sys_unlink,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_uninodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
    10 __lll_lock_wait,_L_lock_791,pthread_mutex_lock,socket_event_handler,event_dispatch_epoll_handler,at,start_thread,clone
    10 __lll_lock_wait,_L_lock_791,pthread_mutex_lock,rpc_clnt_submit,client_submit_request,client3_3_inodelk,client_inodelk,cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     9 read,__GI__IO_file_underflow,__GI__IO_default_uflow,__GI__IO_vfscanf,fscanf,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     9 munmap,__GI__IO_setb,__GI__IO_file_close_it,fclose@@GLIBC_2.2.5,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_open,ec_heal_data,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     9 munmap,__GI__IO_setb,__GI__IO_file_close_it,fclose@@GLIBC_2.2.5,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     9 __lll_lock_wait,_L_lock_840,pthread_mutex_lock,_dl_addr,backtrace_symbols_fd,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_getxattr,ec_shd_selfheal,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     8 munmap,__GI__IO_setb,__GI__IO_file_close_it,fclose@@GLIBC_2.2.5,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_uninodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     6 read,__GI__IO_file_underflow,__GI__IO_default_uflow,__GI__IO_vfscanf,fscanf,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_uninodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     6 mmap64,__GI__IO_file_doallocate,__GI__IO_doallocbuf,__GI__IO_file_seekoff,fseek,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     6 __lll_lock_wait,_L_lock_791,pthread_mutex_lock,rpc_clnt_submit,client_submit_request,client3_3_inodelk,client_inodelk,cluster_uninodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     5 sysmalloc,_int_malloc,calloc,__gf_calloc,synctask_create,synctask_new1,synctask_new,ec_launch_heal,ec_gf_getxattr,syncop_getxattr,ec_shd_selfheal,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     5 munmap,__GI__IO_setb,__GI__IO_file_close_it,fclose@@GLIBC_2.2.5,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_getxattr,ec_shd_selfheal,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     5 __lll_lock_wait,_L_lock_791,pthread_mutex_lock,rpc_clnt_submit,client_submit_request,client3_3_lookup,client_lookup,ec_wind_lookup,ec_dispatch_mask,ec_dispatch_all,ec_manager_lookup,__ec_manager,ec_manager,ec_lookup,ec_gf_lookup,syncop_lookup,syncop_inode_find,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     4 unlink,sys_unlink,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     4 read,__GI__IO_file_underflow,__GI__IO_default_uflow,__GI__IO_vfscanf,fscanf,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_lookup,syncop_inode_find,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     4 pthread_spin_lock,gf_mem_set_acct_info,__gf_calloc,iobref_new,client_submit_request,client_fdctx_destroy,client3_3_release,client_release,fd_destroy,at,args_cbk_wipe,cluster_replies_wipe,ec_heal_data,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     4 mmap64,__GI__IO_file_doallocate,__GI__IO_doallocbuf,__GI__IO_file_seekoff,fseek,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_uninodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     4 __lll_lock_wait,_L_lock_791,pthread_mutex_lock,rpc_clnt_submit,client_submit_request,client3_3_open,client_open,cluster_open,ec_heal_data,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     4 _dl_addr,backtrace_symbols_fd,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_getxattr,ec_shd_selfheal,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     3 read,__GI__IO_file_underflow,__GI__IO_default_uflow,__GI__IO_vfscanf,fscanf,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_getxattr,ec_shd_selfheal,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     3 read,__GI__IO_file_underflow,__GI__IO_default_uflow,__GI__IO_vfscanf,fscanf,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_open,ec_heal_data,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     3 munmap,__GI__IO_setb,__GI__IO_file_close_it,fclose@@GLIBC_2.2.5,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_getxattr,syncop_gfid_to_path,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     3 mmap64,__GI__IO_file_doallocate,__GI__IO_doallocbuf,__GI__IO_file_seekoff,fseek,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_lookup,syncop_inode_find,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     3 __lll_lock_wait_private,_L_lock_4780,_int_free,fd_destroy,at,args_cbk_wipe,cluster_replies_wipe,ec_heal_data,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     3 _dl_addr,backtrace_symbols_fd,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     2 writev,backtrace_symbols_fd,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     2 unlink,sys_unlink,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_getxattr,syncop_gfid_to_path,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     2 readv,sys_readv,__socket_ssl_readv,__socket_cached_read,vector=<optimized,__socket_readv,pointer>,,pointer>,,at,socket_event_handler,event_dispatch_epoll_handler,at,start_thread,clone
     2 read,__GI__IO_file_underflow,__GI__IO_default_uflow,__GI__IO_vfscanf,fscanf,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_inodelk,ec_heal_entry,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     2 pthread_cond_wait@@GLIBC_2.3.2,syncop_readdir,syncop_mt_dir_scan,ec_shd_index_sweep,ec_shd_index_healer,start_thread,clone
     2 munmap,__GI__IO_setb,__GI__IO_file_close_it,fclose@@GLIBC_2.2.5,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=1),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     2 __lll_lock_wait,pthread_cond_timedwait@@GLIBC_2.3.2,syncenv_task,syncenv_processor,start_thread,clone
     2 __lll_lock_wait_private,_L_lock_4780,_int_free,__inode_ctx_free,__inode_destroy,at,inode_unref,loc_wipe,syncop_gfid_to_path,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     2 __lll_lock_wait,_L_lock_791,pthread_mutex_lock,rpc_clnt_submit,client_submit_request,client3_3_lookup,client_lookup,ec_wind_lookup,ec_dispatch_mask,ec_dispatch_all,ec_manager_lookup,__ec_manager,ec_manager,ec_lookup,ec_gf_lookup,syncop_lookup,syncop_inode_find,ec_shd_index_heal,syncop_mt_dir_scan,ec_shd_index_sweep,ec_shd_index_healer,start_thread,clone
     2 __lll_lock_wait,_L_cond_lock_792,__pthread_mutex_cond_lock,pthread_cond_timedwait@@GLIBC_2.3.2,syncenv_task,syncenv_processor,start_thread,clone
     2 _dl_addr,backtrace_symbols_fd,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_uninodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     2 __close_nocancel,__GI__IO_file_close_it,fclose@@GLIBC_2.2.5,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_open,ec_heal_data,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 xdr_void,xdr_union,xdr_to_rpc_reply,rpc_clnt_reply_init,rpc_clnt_handle_reply,rpc_clnt_notify,rpc_transport_notify,socket_event_poll_in,socket_event_handler,event_dispatch_epoll_handler,at,start_thread,clone
     1 xdrmem_create,xdr_to_generic,client3_3_lookup_cbk,rpc_clnt_handle_reply,rpc_clnt_notify,rpc_transport_notify,socket_event_poll_in,socket_event_handler,event_dispatch_epoll_handler,at,start_thread,clone
     1 writev,sys_writev,__socket_rwv,__socket_writev,entry=entry@entry=0x7f281c403500,,socket_submit_request,rpc_clnt_submit,client_submit_request,client_fdctx_destroy,client3_3_release,client_release,fd_destroy,at,args_cbk_wipe,cluster_replies_wipe,ec_heal_data,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 writev,sys_writev,__socket_rwv,__socket_writev,entry=entry@entry=0x7f281c0029a0,,socket_submit_request,rpc_clnt_submit,client_submit_request,client3_3_inodelk,client_inodelk,cluster_inodelk,ec_heal_entry,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 writev,sys_writev,__socket_rwv,__socket_writev,entry=entry@entry=0x7f280c407630,,socket_submit_request,rpc_clnt_submit,client_submit_request,client3_3_lookup,client_lookup,ec_wind_lookup,ec_dispatch_mask,ec_dispatch_all,ec_manager_lookup,__ec_manager,ec_manager,ec_lookup,ec_gf_lookup,syncop_lookup,syncop_inode_find,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     1 writev,sys_writev,__socket_rwv,__socket_writev,entry=entry@entry=0x7f280c403500,,socket_submit_request,rpc_clnt_submit,client_submit_request,client3_3_inodelk,client_inodelk,cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 writev,sys_writev,__socket_rwv,__socket_writev,entry=entry@entry=0x7f280c000920,,socket_submit_request,rpc_clnt_submit,client_submit_request,client3_3_inodelk,client_inodelk,cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 writev,sys_writev,__socket_rwv,__socket_writev,entry=entry@entry=0x7f28046061b0,,socket_submit_request,rpc_clnt_submit,client_submit_request,client3_3_inodelk,client_inodelk,cluster_uninodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 writev,sys_writev,__socket_rwv,__socket_writev,entry=entry@entry=0x7f27f4203e40,,socket_submit_request,rpc_clnt_submit,client_submit_request,client3_3_inodelk,client_inodelk,cluster_uninodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 writev,sys_writev,__socket_rwv,__socket_writev,entry=entry@entry=0x7f27ec001d20,,socket_submit_request,rpc_clnt_submit,client_submit_request,client3_3_getxattr,client_getxattr,syncop_getxattr,syncop_gfid_to_path,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     1 writev,sys_writev,__socket_rwv,__socket_writev,entry=entry@entry=0x7f27ec000c20,,socket_submit_request,rpc_clnt_submit,client_submit_request,client3_3_inodelk,client_inodelk,cluster_uninodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 writev,sys_writev,__socket_rwv,__socket_writev,entry=entry@entry=0x7f27e02045d0,,socket_submit_request,rpc_clnt_submit,client_submit_request,client3_3_inodelk,client_inodelk,cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 writev,sys_writev,__socket_rwv,__socket_writev,entry=entry@entry=0x7f27e0201a30,,socket_submit_request,rpc_clnt_submit,client_submit_request,client3_3_inodelk,client_inodelk,cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 writev,backtrace_symbols_fd,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_open,ec_heal_data,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 unlink,sys_unlink,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_lookup,syncop_inode_find,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     1 unlink,sys_unlink,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_getxattr,ec_shd_selfheal,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     1 unlink,sys_unlink,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_inodelk,ec_heal_entry,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 unlink,sys_unlink,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=1),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 sysmalloc,_int_malloc,calloc,__gf_calloc,__socket_ioq_new,socket_submit_request,rpc_clnt_submit,client_submit_request,client3_3_getxattr,client_getxattr,syncop_getxattr,syncop_gfid_to_path,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     1 swapcontext,synctask_switchto,syncenv_processor,start_thread,clone
     1 readv,sys_readv,__socket_ssl_readv,__socket_ssl_read,opvector=0x7f28540f6518,,vector=<optimized,__socket_readv,at,pointer>,,pointer>,,at,socket_event_handler,event_dispatch_epoll_handler,at,start_thread,clone
     1 __read_nocancel,__GI__IO_file_read,__GI__IO_file_underflow,__GI__IO_default_uflow,__GI__IO_getline_info,fgets_unlocked,internal_getent,_nss_files_getservbyport_r,getservbyport_r@@GLIBC_2.2.5,getnameinfo,gf_resolve_ip6,af_inet_client_get_remote_sockaddr,sockaddr=sockaddr@entry=0x7f285e609d60,,socket_connect,rpc_clnt_reconnect,gf_timer_proc,start_thread,clone
     1 read,__GI__IO_file_underflow,__GI__IO_default_uflow,__GI__IO_vfscanf,fscanf,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_getxattr,syncop_gfid_to_path,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     1 read,__GI__IO_file_underflow,__GI__IO_default_uflow,__GI__IO_vfscanf,fscanf,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=1),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 pthread_mutex_unlock,dl_iterate_phdr,_Unwind_Find_FDE,??,_Unwind_Backtrace,backtrace,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 pthread_getspecific,__glusterfs_this_location,_gf_msg,ec_getxattr,ec_gf_getxattr,syncop_getxattr,ec_shd_selfheal,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     1 open64,open,"/tmp/btZCmISZ",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_uninodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btysQ6OV",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btYMXVdg",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_open,ec_heal_data,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btYlIuW5",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_getxattr,ec_shd_selfheal,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     1 open64,open,"/tmp/btXYqEHP",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_open,ec_heal_data,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btxp2HX7",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_uninodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btXiyq7H",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_uninodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btx21mLw",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_getxattr,ec_shd_selfheal,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     1 open64,open,"/tmp/btvSMFGX",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_open,ec_heal_data,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btV1lNzC",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_lookup,syncop_inode_find,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     1 open64,open,"/tmp/btupo2ob",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_lookup,syncop_inode_find,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     1 open64,open,"/tmp/btuFWviq",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btuBsrxr",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_getxattr,ec_shd_selfheal,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     1 open64,open,"/tmp/btslIIzp",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btRZHOBe",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=1),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btRaJB40",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btqmx6iE",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btQJ77Hu",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_open,ec_heal_data,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btPXO0LL",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_uninodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btPtUF7F",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_lookup,syncop_inode_find,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     1 open64,open,"/tmp/btnQJSRn",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_open,ec_heal_data,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btMX98hk",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_uninodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btlhHBuK",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_getxattr,ec_shd_selfheal,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     1 open64,open,"/tmp/btlGguOS",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btL9Rz5V",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btL7PQnS",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_getxattr,syncop_gfid_to_path,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     1 open64,open,"/tmp/btkwSSHJ",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_getxattr,ec_shd_selfheal,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     1 open64,open,"/tmp/btjH3TWi",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btjGiPVK",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_open,ec_heal_data,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btjcveIs",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_open,ec_heal_data,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btj4WryT",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btIvU3dW",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btI2Gc7G",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_getxattr,ec_shd_selfheal,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     1 open64,open,"/tmp/bte3XYKw",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btdS4Qmo",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btDgluFm",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_getxattr,ec_shd_selfheal,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     1 open64,open,"/tmp/btcc8niD",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_open,ec_heal_data,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btBzZrho",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_uninodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/btb5HdQI",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_getxattr,ec_shd_selfheal,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     1 open64,open,"/tmp/bta7S3nQ",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_uninodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/bt7vtKnG",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_open,ec_heal_data,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/bt7mIRcg",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_uninodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/bt6L9Zto",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_uninodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/bt4xd0Dn",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 open64,open,"/tmp/bt4pc4vI",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_lookup,syncop_inode_find,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     1 open64,open,"/tmp/bt4KU3jY",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_getxattr,syncop_gfid_to_path,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     1 open64,open,"/tmp/bt0boxit",,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_uninodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 munmap,__GI__IO_setb,__GI__IO_file_close_it,fclose@@GLIBC_2.2.5,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_lookup,syncop_inode_find,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     1 mmap64,__GI__IO_file_doallocate,__GI__IO_doallocbuf,__GI__IO_file_seekoff,fseek,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_getxattr,syncop_gfid_to_path,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     1 mmap64,__GI__IO_file_doallocate,__GI__IO_doallocbuf,__GI__IO_file_seekoff,fseek,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_open,ec_heal_data,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 __memset_sse2,calloc,__gf_calloc,synctask_create,synctask_new1,synctask_new,_run_dir_scan_task,subvol=subvol@entry=0x7f28540142c0,,ec_shd_index_sweep,ec_shd_index_healer,start_thread,clone
     1 __memset_sse2,calloc,__gf_calloc,synctask_create,synctask_new1,synctask_new,_run_dir_scan_task,subvol=subvol@entry=0x7f2854013030,,ec_shd_index_sweep,ec_shd_index_healer,start_thread,clone
     1 madvise,_int_free,synctask_destroy,syncenv_processor,start_thread,clone
     1 madvise,_int_free,syncenv_processor,start_thread,clone
     1 __lll_unlock_wake,_L_unlock_697,pthread_mutex_unlock,syncenv_task,syncenv_processor,start_thread,clone
     1 __lll_unlock_wake,_L_unlock_697,pthread_mutex_unlock,_dl_addr,backtrace_symbols_fd,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 __lll_unlock_wake,_L_unlock_569,__pthread_mutex_unlock_usercnt,pthread_cond_timedwait@@GLIBC_2.3.2,syncenv_task,syncenv_processor,start_thread,clone
     1 __lll_lock_wait_private,_L_lock_4780,_int_free,synctask_destroy,syncenv_processor,start_thread,clone
     1 __lll_lock_wait_private,_L_lock_4780,_int_free,loc_wipe,ec_fop_data_release,ec_heal_done,synctask_wrap,??,??
     1 __lll_lock_wait_private,_L_lock_4780,_int_free,iobref_destroy,iobref_unref,client_local_wipe,client3_3_getxattr_cbk,rpc_clnt_handle_reply,rpc_clnt_notify,rpc_transport_notify,socket_event_poll_in,socket_event_handler,event_dispatch_epoll_handler,at,start_thread,clone
     1 __lll_lock_wait,_L_lock_840,pthread_mutex_lock,dl_iterate_phdr,_Unwind_Find_FDE,??,??,_Unwind_Backtrace,backtrace,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_getxattr,syncop_gfid_to_path,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     1 __lll_lock_wait,_L_lock_840,pthread_mutex_lock,dl_iterate_phdr,_Unwind_Find_FDE,??,??,_Unwind_Backtrace,backtrace,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_open,ec_heal_data,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 __lll_lock_wait,_L_lock_840,pthread_mutex_lock,dl_iterate_phdr,_Unwind_Find_FDE,??,??,_Unwind_Backtrace,backtrace,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_inodelk,ec_heal_entry,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
     1 __lll_lock_wait,_L_lock_840,pthread_mutex_lock,_dl_addr,backtrace_symbols_fd,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_lookup,syncop_inode_find,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     1 __lll_lock_wait,_L_lock_840,pthread_mutex_lock,_dl_addr,backtrace_symbols_fd,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_getxattr,syncop_gfid_to_path,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
     1 __lll_lock_wait,_L_lock_791,pthread_mutex_lock,synctask_wake,synctask_create,synctask_new1,synctask_new,ec_launch_heal,ec_gf_getxattr,syncop_getxattr,ec_shd_selfheal,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
1 __lll_lock_wait,_L_lock_791,pthread_mutex_lock,synctask_switchto,syncenv_processor,start_thread,clone
1 __lll_lock_wait,_L_lock_791,pthread_mutex_lock,socket_submit_request,rpc_clnt_submit,client_submit_request,client3_3_inodelk,client_inodelk,cluster_uninodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
1 __lll_lock_wait,_L_lock_791,pthread_mutex_lock,rpc_clnt_submit,client_submit_request,client_fdctx_destroy,client3_3_release,client_release,fd_destroy,at,args_cbk_wipe,cluster_replies_wipe,ec_heal_data,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
1 __lll_lock_wait,_L_lock_791,pthread_mutex_lock,rpc_clnt_submit,client_submit_request,client3_3_inodelk,client_inodelk,cluster_inodelk,ec_heal_entry,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
1 __lll_lock_wait,_L_lock_791,pthread_mutex_lock,rpc_clnt_fill_request_info,rpc_clnt_notify,rpc_transport_notify,__socket_read_reply,at,pointer>,,pointer>,,at,socket_event_handler,event_dispatch_epoll_handler,at,start_thread,clone
1 fd_ref,client3_3_open,client_open,cluster_open,ec_heal_data,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
1 epoll_ctl,event_dispatch_epoll_handler,at,start_thread,clone
1 ec_value_ignore,key_value_cmp,dict_foreach_match,are_dicts_equal,ec_dict_compare,ec_combine_check,ec_combine,ec_lookup_cbk,client3_3_lookup_cbk,rpc_clnt_handle_reply,rpc_clnt_notify,rpc_transport_notify,socket_event_poll_in,socket_event_handler,event_dispatch_epoll_handler,at,start_thread,clone
1 ??,dl_iterate_phdr,_Unwind_Find_FDE,??,_Unwind_Backtrace,backtrace,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_open,ec_heal_data,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
1 _dl_addr,backtrace_symbols_fd,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_open,ec_heal_data,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??
1 __close_nocancel,__GI__IO_file_close_it,fclose@@GLIBC_2.2.5,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,syncop_lookup,syncop_inode_find,ec_shd_index_heal,_dir_scan_job_fn,synctask_wrap,??,??
1 __close_nocancel,__GI__IO_file_close_it,fclose@@GLIBC_2.2.5,gf_backtrace_fillframes,gf_backtrace_save,synctask_yield,__syncbarrier_wait,waitfor=waitfor@entry=4),cluster_inodelk,ec_heal_metadata,ec_heal_do,ec_synctask_heal_wrap,synctask_wrap,??,??

--- Additional comment from Prasad Desala on 2016-11-17 04:35:11 EST ---

sosreports @ http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/Prasad/1396010/

--- Additional comment from Worker Ant on 2016-11-28 03:49:34 EST ---

REVIEW: http://review.gluster.org/15937 (cluster/ec: Healing should not start if only "data" bricks are UP) posted (#2) for review on master by Ashish Pandey (aspandey)

--- Additional comment from Worker Ant on 2016-11-28 06:08:44 EST ---

REVIEW: http://review.gluster.org/15937 (cluster/ec: Healing should not start if only "data" bricks are UP) posted (#3) for review on master by Ashish Pandey (aspandey)

--- Additional comment from Worker Ant on 2016-11-28 16:32:09 EST ---

COMMIT: http://review.gluster.org/15937 committed in master by Xavier Hernandez (xhernandez)

------

commit a3e5c0566a7d867d16d80ca28657238ff1008a22
Author: Ashish Pandey <aspandey>
Date: Mon Nov 28 13:42:33 2016 +0530

cluster/ec: Healing should not start if only "data" bricks are UP

Problem:
In a disperse volume with a "K+R" configuration, where "K" is the number of data bricks and "R" is the number of redundancy bricks (total number of bricks N = K+R), the heal process should NOT be started if only K bricks are UP, because the bricks that are supposed to be healed are not UP. Starting it anyway unnecessarily eats up resources.

Solution:
Check xl_up_count and start the heal process only if it is greater than ec->fragments (the number of data bricks).
Change-Id: I8579f39cfb47b65ff0f76e623b048bd67b15473b
BUG: 1399072
Signed-off-by: Ashish Pandey <aspandey>
Reviewed-on: http://review.gluster.org/15937
Reviewed-by: Xavier Hernandez <xhernandez>
Smoke: Gluster Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.org>
NetBSD-regression: NetBSD Build System <jenkins.org>
REVIEW: http://review.gluster.org/15974 (cluster/ec: Healing should not start if only "data" bricks are UP) posted (#1) for review on release-3.9 by Ashish Pandey (aspandey)
COMMIT: http://review.gluster.org/15974 committed in release-3.9 by Pranith Kumar Karampuri (pkarampu)

------

commit 63c6eafac95fbeb126fbc168933f4f204ea0a0ec
Author: Ashish Pandey <aspandey>
Date: Mon Nov 28 13:42:33 2016 +0530

cluster/ec: Healing should not start if only "data" bricks are UP

Problem:
In a disperse volume with a "K+R" configuration, where "K" is the number of data bricks and "R" is the number of redundancy bricks (total number of bricks N = K+R), the heal process should NOT be started if only K bricks are UP, because the bricks that are supposed to be healed are not UP. Starting it anyway unnecessarily eats up resources.

Solution:
Check xl_up_count and start the heal process only if it is greater than ec->fragments (the number of data bricks).

>Change-Id: I8579f39cfb47b65ff0f76e623b048bd67b15473b
>BUG: 1399072
>Signed-off-by: Ashish Pandey <aspandey>
>Reviewed-on: http://review.gluster.org/15937
>Reviewed-by: Xavier Hernandez <xhernandez>
>Smoke: Gluster Build System <jenkins.org>
>CentOS-regression: Gluster Build System <jenkins.org>
>NetBSD-regression: NetBSD Build System <jenkins.org>
>Signed-off-by: Ashish Pandey <aspandey>

Change-Id: I8579f39cfb47b65ff0f76e623b048bd67b15473b
BUG: 1399989
Signed-off-by: Ashish Pandey <aspandey>
Reviewed-on: http://review.gluster.org/15974
Smoke: Gluster Build System <jenkins.org>
Reviewed-by: Xavier Hernandez <xhernandez>
NetBSD-regression: NetBSD Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.org>
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.9.1, please open a new bug report.

glusterfs-3.9.1 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-January/029725.html
[2] https://www.gluster.org/pipermail/gluster-users/