Description of problem:
=======================
As part of verification of bug 248998 - [AFR]: Files not available in the mount point after converting Distributed volume type to Replicated one - I ran the following case:

TC#1: automatic heal should be triggered and all files must be available on both bricks (data, metadata and entry heals must pass)
1. Create a single-brick volume.
2. Start the volume, then add some files and directories from the mount point and note them.
3. Add a brick such that the volume becomes a 1x2 replica, using the command below:
   gluster v add-brick <vname> replica 2 <newbrick>
4. From the mount point, check that all files and dirs created in step 2 are visible and accessible.
5. Check the heal info command output to confirm all heals are complete.
6. Check the backend bricks to make sure all files are replicated.
7. Create new files and dirs and make sure they are replicated to both bricks.
8. Make sure data, metadata and entry heals pass.

However, only the automatic entry heal is happening; the metadata and data heals do not happen at all unless a named lookup is issued. Note also that the AFR pending bits for data and metadata are, as expected, still set.
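The reproduction steps above can be sketched as a shell script. This is a hedged dry-run sketch, not the exact commands used in this report: the volume name, hostname, and brick paths are assumptions, and RUN=echo makes the script only print the gluster commands instead of executing them.

```shell
# Dry-run sketch of the reproduction steps; volume/host/brick names are
# assumptions. Set RUN= (empty) on a real gluster node to actually run.
RUN=echo
VOL=testvol
HOST=server1                      # assumed hostname
BRICK1=/rhs/brick1/$VOL           # original single brick
BRICK2=/rhs/brick2/$VOL           # brick added to form the replica

# Steps 1-2: create a single-brick (pure distribute) volume and start it;
# files and dirs would then be created from a client mount.
$RUN gluster volume create $VOL $HOST:$BRICK1
$RUN gluster volume start $VOL

# Step 3: add a brick with "replica 2" so the volume becomes a 1x2 replica.
$RUN gluster volume add-brick $VOL replica 2 $HOST:$BRICK2

# Step 5: heals for the pre-existing files should show up here and complete.
$RUN gluster volume heal $VOL info
```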
Also, what is surprising is that there is no file under .glusterfs/indices/xattrop on the good (old) brick, but there is an empty file under the new brick. In addition, the directory .glusterfs/indices/dirty is present only on the new brick and not on the good brick, and heal info shows no files to heal:

[root@dhcp35-191 feb]# gluster v heal oct info
Brick 10.70.35.191:/rhs/brick1/oct
Number of entries: 0

Brick 10.70.35.191:/rhs/brick2/oct
Number of entries: 0

Version-Release number of selected component (if applicable):
=============================================================
3.7.9-2

[root@dhcp35-191 feb]# rpm -qa | grep gluster
glusterfs-client-xlators-3.7.9-2.el7rhgs.x86_64
glusterfs-server-3.7.9-2.el7rhgs.x86_64
python-gluster-3.7.5-19.el7rhgs.noarch
gluster-nagios-addons-0.2.5-1.el7rhgs.x86_64
vdsm-gluster-4.16.30-1.3.el7rhgs.noarch
glusterfs-3.7.9-2.el7rhgs.x86_64
glusterfs-api-3.7.9-2.el7rhgs.x86_64
glusterfs-cli-3.7.9-2.el7rhgs.x86_64
glusterfs-geo-replication-3.7.9-2.el7rhgs.x86_64
gluster-nagios-common-0.2.3-1.el7rhgs.noarch
glusterfs-libs-3.7.9-2.el7rhgs.x86_64
glusterfs-fuse-3.7.9-2.el7rhgs.x86_64
glusterfs-rdma-3.7.9-2.el7rhgs.x86_64
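The .glusterfs/indices/xattrop check above can be done with a small helper. This is a hedged sketch under my own assumptions: AFR records files needing heal as GFID-named links under .glusterfs/indices/xattrop, and the base "xattrop-<uuid>" link itself is not a pending entry; the function name and filtering are mine, not part of gluster.

```shell
# Hedged helper (assumed index layout): count pending entries in a
# brick's xattrop index, skipping the base xattrop-* link. An empty
# result on a brick that still has unhealed files is the symptom
# described in this report.
pending_index_entries() {
    brick=$1
    dir=$brick/.glusterfs/indices/xattrop
    # A missing index directory simply means no recorded entries.
    [ -d "$dir" ] || { echo 0; return; }
    find "$dir" -mindepth 1 ! -name 'xattrop-*' | wc -l
}

# Usage with the brick paths from this report:
# pending_index_entries /rhs/brick1/oct
# pending_index_entries /rhs/brick2/oct
```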
The patch is merged upstream: http://review.gluster.org/#/c/15118/

Regards,
Mohit Agrawal
Update:
=======
Build: glusterfs-3.12.2-7.el7rhgs.x86_64

Scenario:
1) Create a single-brick volume.
2) Create files (including empty files) and dirs from the mount point.
3) Change permissions for some files and the owner for some files.
4) Convert distribute to replicate (1 x 2) by adding a brick.
5) Check the heal info.
6) Calculate arequal for all the bricks.
7) From the backend, check that all files are present on both bricks.

> The arequal checksum is the same for all bricks and the same as from the client.

Client arequal checksum:
# arequal-checksum -p /mnt/dist/

Entry counts
Regular files   : 200
Directories     : 101
Symbolic links  : 0
Other           : 0
Total           : 301

Metadata checksums
Regular files   : 13198
Directories     : 24d74c
Symbolic links  : 3e9
Other           : 3e9

Checksums
Regular files   : 00
Directories     : 3001013100002e00
Symbolic links  : 0
Other           : 0
Total           : 3001013100002e00

> From brick0:
# arequal-checksum -p /bricks/brick5/b0 -i .glusterfs -i .landfill

Entry counts
Regular files   : 200
Directories     : 101
Symbolic links  : 0
Other           : 0
Total           : 301

Metadata checksums
Regular files   : 13198
Directories     : 24d74c
Symbolic links  : 3e9
Other           : 3e9

Checksums
Regular files   : 00
Directories     : 3001013100002e00
Symbolic links  : 0
Other           : 0
Total           : 3001013100002e00

> From brick1:
# arequal-checksum -p /bricks/brick5/b1 -i .glusterfs -i .landfill

Entry counts
Regular files   : 200
Directories     : 101
Symbolic links  : 0
Other           : 0
Total           : 301

Metadata checksums
Regular files   : 13198
Directories     : 24d74c
Symbolic links  : 3e9
Other           : 3e9

Checksums
Regular files   : 00
Directories     : 3001013100002e00
Symbolic links  : 0
Other           : 0
Total           : 3001013100002e00

> The same scenario was also executed for converting a distribute volume to a 1 x 3 replicate volume.
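Where the arequal-checksum tool is not installed, a rough equivalent of the brick comparison above can be done with standard tools. This is only a hedged stand-in: the function name is mine, and it covers file names and data but not the metadata checksums arequal reports.

```shell
# Rough stand-in for arequal: a combined checksum over file names and
# contents under a directory, skipping the .glusterfs and .landfill
# internals (as the -i options do above). Equal output for the mount
# and every brick means names and data match.
dir_checksum() {
    root=$1
    ( cd "$root" &&
      find . \( -name .glusterfs -o -name .landfill \) -prune \
             -o -type f -print0 |
      LC_ALL=C sort -z |            # stable order so hashes compare
      xargs -0 -r sha256sum |       # per-file "hash  ./name" lines
      sha256sum | cut -d' ' -f1 )
}

# Usage with the paths from this report:
# dir_checksum /mnt/dist
# dir_checksum /bricks/brick5/b0
# dir_checksum /bricks/brick5/b1
```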
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607