Description of problem:

I saw a case where a gfid file ends up with a link count of 1. Specifically, gluster volume heal ${volname} info reported a number of files under heal (which didn't heal by themselves). Upon inspection I found that the listed gfids referenced files with a link count of 1. I hope that's a clear enough explanation, but just to be sure:

bagheera ~ # gluster volume heal mail info
Brick uriel.interexcel.co.za:/mnt/gluster/mail
<gfid:547eaa89-0f80-4507-8d11-cb370bc36eae>
<gfid:bf518ba9-c2e4-4d9e-bac2-9cf3c32770d1>
<gfid:266a624a-5788-4cd2-998c-7f727d6b5144>
<gfid:4a620549-fee1-432b-a865-a6e7bcee1e28>
<gfid:833618dd-0e21-46fb-812f-6c211c60dcfe>
<gfid:6f9c0dbd-18f3-4bcd-a018-12afc8676d9f>
Status: Connected
Number of entries: 6

Brick bagheera.interexcel.co.za:/mnt/gluster/mail
<gfid:2daab67e-56be-455d-a18e-f75e01a5fc75>
<gfid:94c2261d-df33-4260-9c72-7e61b7c63e36>
<gfid:2aec1ef1-d95a-41c4-a17d-591be32731f1>
<gfid:0289814e-91b3-42b8-b3b9-99ab60e81e99>
<gfid:a9a7e634-760f-4cab-be84-d063dedebf46>
<gfid:cf585252-e536-4d51-9a27-ef399da1692c>
<gfid:90ba5183-e187-4a35-874d-aa2c92d47160>
Status: Connected
Number of entries: 7

mail # gluster volume heal mail info | sed -e '1,/^Number/ d' | sed -nre 's/^<gfid:([-0-9a-z]+)>/\1/p' | while read gfid; do stat -c%h\ %n .glusterfs/${gfid:0:2}/${gfid:2:2}/${gfid}; done
1 .glusterfs/2d/aa/2daab67e-56be-455d-a18e-f75e01a5fc75
1 .glusterfs/94/c2/94c2261d-df33-4260-9c72-7e61b7c63e36
1 .glusterfs/2a/ec/2aec1ef1-d95a-41c4-a17d-591be32731f1
1 .glusterfs/02/89/0289814e-91b3-42b8-b3b9-99ab60e81e99
1 .glusterfs/a9/a7/a9a7e634-760f-4cab-be84-d063dedebf46
1 .glusterfs/cf/58/cf585252-e536-4d51-9a27-ef399da1692c
1 .glusterfs/90/ba/90ba5183-e187-4a35-874d-aa2c92d47160

(%h is the number of hard links.)

At this point there is basically not much that can be done: one can look at the content of the file, copy it to where it belongs (via the client, most likely), and then ultimately rm the dangling file (which also resolves the heal entry).

As per a suggestion on IRC, this situation probably deserves some special reporting in volume heal ... info (similar to split-brain), along with a way of fixing it. Two possible resolutions:

1. volume heal ${volname} dangling link <gfid> <path>; or
2. volume heal ${volname} dangling drop <gfid>
This bug is getting closed because GlusterFS-3.7 has reached its end-of-life. Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS. If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.
This is still an issue. Jaco, can you please reassign this to a newer version and set it to NEW? I can't do that since it's your ticket.
Re-opened as per request from Joe Julian.
This bug is getting closed because the 3.8 version is marked End-Of-Life. There will be no further updates to this version. Please open a new bug against a version that still receives bugfixes if you are still facing this issue in a more current release.