Description of problem:
=======================
With the quorum package, "gluster volume heal <vol-name> info" does not list the entries that need to be healed. It shows 0 entries.

Version-Release number of selected component (if applicable):
============================================================
[11/12/11 - 20:28:54 root@king arequal]# gluster --version
glusterfs 3.3.0.3rhs built on Sep 27 2012 07:13:27
(glusterfs-3.3.0.3rhs-31.el6rhs.x86_64)

Steps to Reproduce:
===================
1. Create a 2x2 distributed-replicate volume using 4 servers (server1, server2, server3 and server4).
2. Start the volume.
3. Mount the volume on a client.
4. Bring down server1 (power off server1).
5. Create files and directories from the mount point.
6. Execute "gluster volume heal <vol-name> info" on server2.

Actual results:
===============
[11/12/11 - 20:22:43 root@king arequal]# gluster volume heal vol-abcd info
Heal operation on volume vol-abcd has been successful

Brick 10.70.34.115:/home/abcd
Number of entries: 0

Brick 10.70.34.119:/home/abcd
Number of entries: 0

Brick 10.70.34.118:/home/abcd
Number of entries: 0

Brick 10.70.34.102:/home/abcd
Number of entries: 0

Expected results:
================
It should list the entries that need to be healed.

Additional info:
================
wc of the xattrop index, which confirms that there are files waiting to be healed:

[11/12/11 - 20:33:02 root@king arequal]# ls /home/abcd/.glusterfs/indices/xattrop/ | wc
12313 12313 455589
[11/12/11 - 20:33:08 root@king arequal]#
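The xattrop check in the additional info above counts the files in a brick's self-heal index; each entry there is one file or directory still pending heal. A minimal sketch of that check, using a throwaway mock brick directory (the path and gfid-style names below are made up for illustration; on a real brick the index lives under <brick>/.glusterfs/indices/xattrop):

```shell
#!/bin/sh
# Mock brick layout for illustration only; a real brick would be
# e.g. /home/abcd as in the report above.
BRICK=$(mktemp -d)
mkdir -p "$BRICK/.glusterfs/indices/xattrop"

# Simulate three files pending heal: AFR tracks each one as a
# gfid-named entry in the xattrop index.
for gfid in gfid-1111 gfid-2222 gfid-3333; do
    touch "$BRICK/.glusterfs/indices/xattrop/$gfid"
done

# Same count that "ls ... | wc" produced in the report
COUNT=$(ls "$BRICK/.glusterfs/indices/xattrop" | wc -l)
echo "pending heal entries: $COUNT"
```

A non-zero count here while "heal info" reports 0 entries is exactly the mismatch this bug describes.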
Hi Rahul,

Can you please confirm the behavior on version 3.3.0.5rhs-36 (mentioned in "Fixed In Version")? I tried to see if the issue exists, and it works for me in that version. Moving the bug to ON_QA; please re-open if it happens with a later version too.
Verified this bug with the latest update 3. This bug is not seen any more. Moving it to the verified state.

Verified with version:
======================
[root@dhcp159-94 ~]# gluster --version
glusterfs 3.3.0.5rhs built on Nov 8 2012 22:30:35
(glusterfs-3.3.0.5rhs-37.el6rhs.x86_64)

Log Snippet:
============
[root@dhcp159-94 ~]# ls /home/dr/.glusterfs/indices/xattrop/ | wc
732 732 27092

[root@dhcp159-94 ~]# gluster volume info vol-dr

Volume Name: vol-dr
Type: Distributed-Replicate
Volume ID: 62c2fa5a-6253-4d7f-adc2-c1514d90b7db
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.16.159.57:/home/dr
Brick2: 10.16.159.94:/home/dr
Brick3: 10.16.159.108:/home/dr
Brick4: 10.16.159.211:/home/dr
Options Reconfigured:
cluster.server-quorum-type: server

[root@dhcp159-94 ~]# gluster volume heal vol-dr info
Heal operation on volume vol-dr has been successful

Brick 10.16.159.57:/home/dr
Number of entries: 0

Brick 10.16.159.94:/home/dr
Number of entries: 731
/etc.1/sysconfig/cbq
/etc.1/selinux/targeted/modules/active/modules/tftp.pp
/etc.1/sysconfig/rhn/allowed-actions/configfiles
/etc.1/sudoers.d
/etc.1/selinux/targeted/modules/active/modules/roundup.pp
/etc.1/fonts/conf.avail/70-yes-bitmaps.conf
/etc.1/selinux/targeted/modules/active/modules/vmware.pp
/etc.1/pam.d/login
/etc.1/libvirt/qemu/networks/autostart
/etc.1/inputrc
/etc.1/selinux/targeted/modules/active/modules/sosreport.pp
/etc.1/tune-profiles/latency-performance
/etc.1/mail/mailertable.db
/etc.1/cron.d
/etc.1/inittab
/etc.1/polkit-1/localauthority/30-site.d
/etc.1/ssh/ssh_host_rsa_key
/etc.1/sysconfig/network-scripts/ifcfg-eth0
/etc.1/rc.d/init.d/gluster-swift-object
/etc.1/rsyslog.conf
/etc.1/default/nss
/etc.1/rwtab.d
/etc.1/selinux/targeted/modules/active/modules/ktalk.pp
/etc.1/libvirt/nwfilter/clean-traffic.xml
/etc.1/fonts/conf.avail/60-latin.conf
/etc.1/init/libvirtd.conf
/etc.1/vdsm-reg/logger.conf
/etc.1/selinux/targeted/modules/active/modules/aisexec.pp
/etc.1/libreport/plugins
/etc.1/rc.d/init.d/lvm2-lvmetad
/etc.1/selinux/targeted/modules/active/modules/plymouthd.pp
/etc.1/tune-profiles/server-powersave
/etc.1/selinux/targeted/modules/active/modules/rhcs.pp
/etc.1/selinux/targeted/modules/active/modules/dirsrv.pp
/etc.1/iproute2/rt_dsfield
/etc.1/rwtab.d/vdsm
/etc.1/ld.so.conf
/etc.1/selinux/targeted/contexts/virtual_image_context
/etc.1/fonts/conf.avail/10-sub-pixel-rgb.conf
/etc.1/selinux/targeted/modules/active/modules/milter.pp
/etc.1/selinux/targeted/modules/active/modules/munin.pp
/etc.1/selinux/targeted/contexts/users/unconfined_u
/etc.1/ssl
/etc.1/tune-profiles/latency-performance/sysctl.ktune
/etc.1/fonts/conf.avail/51-local.conf
/etc.1/securetty
/etc.1/security/console.apps/config-util
/etc.1/yum/pluginconf.d/aliases.conf
/etc.1/pki/CA/certs
/etc.1/rc.d/init.d/dnsmasq
/etc.1/selinux/targeted/modules/active/modules/bind.pp
/etc.1/yum/pluginconf.d/merge-conf.conf
/etc.1/rc.d/init.d/rpcidmapd
/etc.1/init/serial.conf
/etc.1/ld.so.cache
/etc.1/ConsoleKit/run-seat.d
/etc.1/security/console.apps/rhn_register
/etc.1/libreport/events/report_Kerneloops.xml
/etc.1/selinux/targeted/modules/active/modules/mono.pp
/etc.1/sysconfig/sandbox
/etc.1/pki/CA/private
/etc.1/selinux/targeted/modules/active/modules/fail2ban.pp
/etc.1/sysconfig/modules/kvm.modules
/etc.1/sysconfig/network-scripts/ifdown-ipv6
/etc.1/ctdb/events.d/61.nfstickle
/etc.1/abrt/plugins/CCpp.conf
/etc.1/java/security
/etc.1/selinux/targeted/modules/active/modules/aiccu.pp
/etc.1/logrotate.conf
/etc.1/sysconfig/authconfig
/etc.1/sysconfig/network-scripts/ifdown-sit
/etc.1/selinux/targeted/modules/active/modules/remotelogin.pp
/etc.1/multipath
/etc.1/anacrontab
/etc.1/modprobe.d/dist-alsa.conf
/etc.1/vdsm
/etc.1/rc.d/init.d/smb
/etc.1/yum/pluginconf.d/rhnplugin.conf
/etc.1/openldap/cacerts
/etc.1/pki/vdsm/certs
/etc.1/shells
/etc.1/prelink.conf
/etc.1/yum/pluginconf.d
/etc.1/sysconfig/networking/devices
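As a side note, the per-brick backlog in heal-info output like the above can be totalled with a small awk filter. The sample text below is a trimmed, hand-copied fragment of the output format in this comment, embedded here so the sketch runs without a live gluster cluster:

```shell
#!/bin/sh
# Trimmed sample of "gluster volume heal <vol> info" output (from
# this comment); a real run would pipe the command's output instead.
OUTPUT='Brick 10.16.159.57:/home/dr
Number of entries: 0

Brick 10.16.159.94:/home/dr
Number of entries: 731'

# Sum the "Number of entries:" counts across all bricks;
# $4 is the numeric field on each matching line.
TOTAL=$(printf '%s\n' "$OUTPUT" | awk '/Number of entries:/ { s += $4 } END { print s }')
echo "total entries needing heal: $TOTAL"
```

A total that matches the xattrop index count is a quick cross-check that heal info is reporting the full backlog, which is what the verification above confirms.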
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2013-1262.html