Description of problem:
gluster 4.1.6 geo-replication is not replicating renames.

Our application is basically a mail server. We have mail stored on several (configurable) mount points, each having its own gluster volume. The volumes are geo-replicated to a mirror server that is set up identically on the other site, the domain name basically being the only difference.

The mail application writes to a spool file, which is renamed once spooling is complete. On the geo-slave the spool file still exists under its original name; the rename never gets replicated. On the slave site the volumes are set to read-only until we want to fail over to them.

To simplify the issue, I create a txt file on the root of the volume and immediately rename it. Only the original name gets replicated; the rename is never applied on the slave.

Version-Release number of selected component (if applicable):
4.1.6

How reproducible:
Easy.

Steps to Reproduce:
1. Create a file on the root mount of the volume.
2. Rename (mv) the file.
3. Verify on the slave that the file has been renamed.

Actual results:

[vfeuk][xbesite1][fs-5][root@xbevmio01el-opco2-fs-25]: /glusterVol/.test/mfs_opco1_int_17 # echo "test 123" >>test4.txt;mv test4.txt test4_renamed.txt;ls -l test4.txt test4_renamed.txt
ls: cannot access 'test4.txt': No such file or directory
-rw-r----- 1 root root 9 Dec 17 22:15 test4_renamed.txt

[vfeuk][xbesite1][fs-5][root@xbevmio01el-opco2-fs-25]: /glusterVol/.test/mfs_opco1_int_17 # findmnt .
TARGET                             SOURCE                 FSTYPE         OPTIONS
/glusterVol/.test/mfs_opco1_int_17 fs-5:/mfs_opco1_int_17 fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072

Slave server:

[vfeuk][xmssite2][fs-5][root@xmsvmio02el-opco2-fs-25]: /glusterVol/.test/mfs_opco1_int_17 # ls -l test4.txt test4_renamed.txt
ls: cannot access 'test4_renamed.txt': No such file or directory
-rw-r----- 1 root root 9 Dec 17 22:15 test4.txt

Expected results:
The file is renamed on the slave as well.
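When verifying step 3, it may help to first confirm that the geo-replication session is Active and that the changelog crawl has advanced past the time of the rename before inspecting the slave. A minimal sketch using this session's names (the volume and slave host are the same ones shown in the status output further below):

# On the master, check session state and LAST_SYNCED before inspecting the slave
gluster volume geo-replication mfs_opco1_int_17 \
    fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se::mfs_opco1_int_17 status

# Then on the slave FUSE mount
ls -l /glusterVol/.test/mfs_opco1_int_17/test4*
# The rename has not been applied: only the original test4.txt is present.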
Additional info:

gluster v geo mfs_opco1_int_17 fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se::mfs_opco1_int_17 status

MASTER NODE                                  MASTER VOL          MASTER BRICK                   SLAVE USER    SLAVE                                                           SLAVE NODE                                    STATUS     CRAWL STATUS       LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
fs-5-b.vfeuk.xbesite1.sero.gic.ericsson.se   mfs_opco1_int_17    /exportg/mfs_opco1_int_17_b    root          fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se::mfs_opco1_int_17    fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se    Active     Changelog Crawl    2018-12-17 22:18:32
fs-6-b.vfeuk.xbesite1.sero.gic.ericsson.se   mfs_opco1_int_17    /exportg/mfs_opco1_int_17_b    root          fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se::mfs_opco1_int_17    fs-6-b.vfeuk.xmssite2.sero.gic.ericsson.se    Passive    N/A                N/A

/exportg/mfs_opco1_int_17_b/internal # gluster v geo mfs_opco1_int_17 fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se::mfs_opco1_int_17 config
access_mount:false
allow_network:
change_detector:changelog
change_interval:5
changelog_archive_format:%Y%m
changelog_batch_size:727040
changelog_log_file:/var/log/glusterfs/geo-replication/mfs_opco1_int_17_fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se_mfs_opco1_int_17/changes-${local_id}.log
changelog_log_level:WARNING
checkpoint:1545081336
chnagelog_archive_format:%Y%m
cli_log_file:/var/log/glusterfs/geo-replication/cli.log
cli_log_level:INFO
connection_timeout:60
georep_session_working_dir:/var/lib/glusterd/geo-replication/mfs_opco1_int_17_fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se_mfs_opco1_int_17/
gluster_cli_options:
gluster_command:gluster
gluster_command_dir:/usr/sbin
gluster_log_file:/var/log/glusterfs/geo-replication/mfs_opco1_int_17_fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se_mfs_opco1_int_17/mnt-${local_id}.log
gluster_log_level:WARNING
gluster_logdir:/var/log/glusterfs
gluster_params:aux-gfid-mount acl
gluster_rundir:/var/run/gluster
glusterd_workdir:/var/lib/glusterd
gsyncd_miscdir:/var/lib/misc/gluster/gsyncd
ignore_deletes:false
isolated_slaves:
log_file:/var/log/glusterfs/geo-replication/mfs_opco1_int_17_fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se_mfs_opco1_int_17/gsyncd.log
log_level:WARNING
log_rsync_performance:true
master_disperse_count:1
master_replica_count:1
max_rsync_retries:10
meta_volume_mnt:/var/run/gluster/shared_storage
pid_file:/var/run/gluster/gsyncd-mfs_opco1_int_17-fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se-mfs_opco1_int_17.pid
remote_gsyncd:
replica_failover_interval:1
rsync_command:rsync
rsync_opt_existing:true
rsync_opt_ignore_missing_args:true
rsync_options:
rsync_ssh_options:
slave_access_mount:false
slave_gluster_command_dir:/usr/sbin
slave_gluster_log_file:/var/log/glusterfs/geo-replication-slaves/mfs_opco1_int_17_fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se_mfs_opco1_int_17/mnt-${master_node}-${master_brick_id}.log
slave_gluster_log_file_mbr:/var/log/glusterfs/geo-replication-slaves/mfs_opco1_int_17_fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se_mfs_opco1_int_17/mnt-mbr-${master_node}-${master_brick_id}.log
slave_gluster_log_level:INFO
slave_gluster_params:aux-gfid-mount acl
slave_log_file:/var/log/glusterfs/geo-replication-slaves/mfs_opco1_int_17_fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se_mfs_opco1_int_17/gsyncd.log
slave_log_level:INFO
slave_timeout:120
special_sync_mode:
ssh_command:ssh
ssh_options:-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
ssh_options_tar:-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/tar_ssh.pem
ssh_port:22
state_file:/var/lib/glusterd/geo-replication/mfs_opco1_int_17_fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se_mfs_opco1_int_17/monitor.status
state_socket_unencoded:
stime_xattr_prefix:trusted.glusterfs.9762dbff-67fc-41fa-b326-a327476869be.cf526ed9-a7a2-4f56-a94a-ba9e0d70d166
sync_acls:true
sync_jobs:1
sync_xattrs:true
tar_command:tar
use_meta_volume:true
use_rsync_xattrs:false
use_tarssh:false
working_dir:/var/lib/misc/gluster/gsyncd/mfs_opco1_int_17_fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se_mfs_opco1_int_17/

[vfeuk][xbesite1][fs-5][root@xbevmio01el-opco2-fs-25]: /exportg/mfs_opco1_int_17_b/internal # gluster v info mfs_opco1_int_17

Volume Name: mfs_opco1_int_17
Type: Replicate
Volume ID: 9762dbff-67fc-41fa-b326-a327476869be
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: fs-5-b.vfeuk.xbesite1.sero.gic.ericsson.se:/exportg/mfs_opco1_int_17_b
Brick2: fs-6-b.vfeuk.xbesite1.sero.gic.ericsson.se:/exportg/mfs_opco1_int_17_b
Options Reconfigured:
features.read-only: off
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
diagnostics.client-log-level: WARNING
diagnostics.brick-log-level: WARNING
network.ping-timeout: 5
performance.write-behind-window-size: 64MB
server.allow-insecure: on
server.event-threads: 3
client.event-threads: 8
network.inode-lru-limit: 200000
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
performance.parallel-readdir: on
cluster.lookup-optimize: on
cluster.favorite-child-policy: mtime
performance.io-thread-count: 64
performance.readdir-ahead: on
performance.cache-size: 512MB
nfs.disable: on
cluster.enable-shared-storage: enable

[vfeuk][xbesite1][fs-5][root@xbevmio01el-opco2-fs-25]: /exportg/mfs_opco1_int_17_b/internal # gluster v status mfs_opco1_int_17

Status of volume: mfs_opco1_int_17
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick fs-5-b.vfeuk.xbesite1.sero.gic.ericss
on.se:/exportg/mfs_opco1_int_17_b           49159     0          Y       28908
Brick fs-6-b.vfeuk.xbesite1.sero.gic.ericss
on.se:/exportg/mfs_opco1_int_17_b           49159     0          Y       3611
Self-heal Daemon on localhost               N/A       N/A        Y       22663
Self-heal Daemon on fs-6-b.vfeuk.xbesite1.s
ero.gic.ericsson.se                         N/A       N/A        Y       14227

Task Status of Volume mfs_opco1_int_17
------------------------------------------------------------------------------
There are no active volume task

[vfeuk][xbesite1][fs-5][root@xbevmio01el-opco2-fs-25]: /exportg/mfs_opco1_int_17_b/internal # gluster peer status
Number of Peers: 1

Hostname: fs-6-b.vfeuk.xbesite1.sero.gic.ericsson.se
Uuid: 5c8b7596-840a-48e9-a3fe-e9ced3a48df0
State: Peer in Cluster (Connected)

gluster volume statedump mfs_opco1_int_17
Segmentation fault (core dumped)

xfs_info /exportg/mfs_opco1_int_17_b
meta-data=/dev/mapper/glustervg-lvm_mfs_opco1_int_17_b isize=1024   agcount=16, agsize=163808 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=2620928, imaxpct=33
         =                       sunit=32     swidth=32 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=32 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
getfattr -d -m. -ehex /exportg/mfs_opco1_int_17_b
getfattr: Removing leading '/' from absolute path names
# file: exportg/mfs_opco1_int_17_b
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.9762dbff-67fc-41fa-b326-a327476869be.cf526ed9-a7a2-4f56-a94a-ba9e0d70d166.entry_stime=0x5c18140100000000
trusted.glusterfs.9762dbff-67fc-41fa-b326-a327476869be.cf526ed9-a7a2-4f56-a94a-ba9e0d70d166.stime=0x5c18140100000000
trusted.glusterfs.9762dbff-67fc-41fa-b326-a327476869be.xtime=0x5c18140e000c9211
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.volume-id=0x9762dbff67fc41fab326a327476869be

[vfeuk][xbesite1][fs-5][root@xbevmio01el-opco2-fs-25]: /exportg/mfs_opco1_int_17_b/internal # uname -r; cat /etc/issue
4.4.103-6.38-default
Welcome to SUSE Linux Enterprise Server 12 SP3 (x86_64) - Kernel \r (\l).

[vfeuk][xmssite2][fs-5][root@xmsvmio02el-opco2-fs-25]: /glusterVol/.test/mfs_opco1_int_17 # df -Th
Filesystem                                              Type            Size  Used  Avail  Use%  Mounted on
devtmpfs                                                devtmpfs        9.8G   12K   9.8G    1%  /dev
tmpfs                                                   tmpfs           9.9G  4.0K   9.9G    1%  /dev/shm
tmpfs                                                   tmpfs           9.9G  483M   9.4G    5%  /run
tmpfs                                                   tmpfs           9.9G     0   9.9G    0%  /sys/fs/cgroup
/dev/sda2                                               btrfs            32G  1.6G    29G    6%  /
/dev/sda1                                               ext3            259M   46M   200M   19%  /boot
tmpfs                                                   tmpfs           2.0G     0   2.0G    0%  /run/user/0
/dev/mapper/glustervg-lvm_glusterVarLog                 xfs              50G  232M    50G    1%  /exportg/gluster_var_log
/dev/mapper/glustervg-lvm_mfs_opco1_int_10_b            xfs              10G  4.7G   5.4G   47%  /exportg/mfs_opco1_int_10_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_11_b            xfs              10G  446M   9.6G    5%  /exportg/mfs_opco1_int_11_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_12_b            xfs              10G  4.6G   5.5G   46%  /exportg/mfs_opco1_int_12_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_13_b            xfs              10G  1.1G   9.0G   11%  /exportg/mfs_opco1_int_13_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_14_b            xfs              10G  518M   9.5G    6%  /exportg/mfs_opco1_int_14_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_15_b            xfs              10G  1.2G   8.8G   12%  /exportg/mfs_opco1_int_15_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_16_b            xfs              10G  637M   9.4G    7%  /exportg/mfs_opco1_int_16_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_17_b            xfs              10G  2.6G   7.5G   26%  /exportg/mfs_opco1_int_17_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_18_b            xfs              10G   38M    10G    1%  /exportg/mfs_opco1_int_18_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_19_b            xfs              10G   38M    10G    1%  /exportg/mfs_opco1_int_19_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_1a_b            xfs              10G   38M    10G    1%  /exportg/mfs_opco1_int_1a_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_1b_b            xfs              10G   38M    10G    1%  /exportg/mfs_opco1_int_1b_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_1c_b            xfs              10G   38M    10G    1%  /exportg/mfs_opco1_int_1c_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_1d_b            xfs              10G   42M    10G    1%  /exportg/mfs_opco1_int_1d_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_1e_b            xfs              10G   42M    10G    1%  /exportg/mfs_opco1_int_1e_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_1f_b            xfs              10G   38M    10G    1%  /exportg/mfs_opco1_int_1f_b
10.221.81.224:/gluster_shared_storage                   fuse.glusterfs   32G  2.1G    29G    7%  /run/gluster/shared_storage
om-1-b.vfeuk.xmssite2.sero.gic.ericsson.se:/gcluster    fuse.glusterfs   25G  957M    25G    4%  /cluster
fs-5:/gluster_shared_storage                            fuse.glusterfs   32G  2.1G    29G    7%  /glusterVol/.test/gluster_shared_storage
fs-5:/mfs_opco1_int_10                                  fuse.glusterfs   10G  4.8G   5.3G   48%  /glusterVol/.test/mfs_opco1_int_10
fs-5:/mfs_opco1_int_11                                  fuse.glusterfs   10G  548M   9.5G    6%  /glusterVol/.test/mfs_opco1_int_11
fs-5:/mfs_opco1_int_12                                  fuse.glusterfs   10G  4.7G   5.4G   47%  /glusterVol/.test/mfs_opco1_int_12
fs-5:/mfs_opco1_int_13                                  fuse.glusterfs   10G  1.2G   8.9G   12%  /glusterVol/.test/mfs_opco1_int_13
fs-5:/mfs_opco1_int_14                                  fuse.glusterfs   10G  620M   9.4G    7%  /glusterVol/.test/mfs_opco1_int_14
fs-5:/mfs_opco1_int_15                                  fuse.glusterfs   10G  1.3G   8.7G   13%  /glusterVol/.test/mfs_opco1_int_15
fs-5:/mfs_opco1_int_16                                  fuse.glusterfs   10G  740M   9.3G    8%  /glusterVol/.test/mfs_opco1_int_16
fs-5:/mfs_opco1_int_17                                  fuse.glusterfs   10G  2.7G   7.4G   27%  /glusterVol/.test/mfs_opco1_int_17
fs-5:/mfs_opco1_int_18                                  fuse.glusterfs   10G  140M   9.9G    2%  /glusterVol/.test/mfs_opco1_int_18
fs-5:/mfs_opco1_int_19                                  fuse.glusterfs   10G  144M   9.9G    2%  /glusterVol/.test/mfs_opco1_int_19
fs-5:/mfs_opco1_int_1a                                  fuse.glusterfs   10G  144M   9.9G    2%  /glusterVol/.test/mfs_opco1_int_1a
fs-5:/mfs_opco1_int_1b                                  fuse.glusterfs   10G  144M   9.9G    2%  /glusterVol/.test/mfs_opco1_int_1b
fs-5:/mfs_opco1_int_1c                                  fuse.glusterfs   10G  144M   9.9G    2%  /glusterVol/.test/mfs_opco1_int_1c
fs-5:/mfs_opco1_int_1d                                  fuse.glusterfs   10G  144M   9.9G    2%  /glusterVol/.test/mfs_opco1_int_1d
fs-5:/mfs_opco1_int_1e                                  fuse.glusterfs   10G  144M   9.9G    2%  /glusterVol/.test/mfs_opco1_int_1e
fs-5:/mfs_opco1_int_1f                                  fuse.glusterfs   10G  144M   9.9G    2%  /glusterVol/.test/mfs_opco1_int_1f
fs-1-b.vfeuk.xmssite2.sero.gic.ericsson.se:/perf_opco1  fuse.glusterfs   16G  625M    16G    4%  /glusterVol/perf_opco1
fs-1-b.vfeuk.xmssite2.sero.gic.ericsson.se:/cps_opco1   fuse.glusterfs  2.0G   64M   2.0G    4%  /opt/global/cps
fs-1-b.vfeuk.xmssite2.sero.gic.ericsson.se:/logs_opco1  fuse.glusterfs   50G  2.1G    48G    5%  /glusterVol/logs_opco1

==================
If any logs are required, please ask.
Hi,

Please share:
1. the gluster log,
2. the geo-replication log from the master, and
3. the mount log from the slave.

- Sunny
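For reference, based on the geo-replication config pasted in the description, these are the files most likely being asked for. Paths assume the default log locations; the session directory name is specific to this volume/slave pair:

# Master: geo-replication worker and aux-mount logs for the session
/var/log/glusterfs/geo-replication/mfs_opco1_int_17_fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se_mfs_opco1_int_17/gsyncd.log
/var/log/glusterfs/geo-replication/mfs_opco1_int_17_fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se_mfs_opco1_int_17/mnt-*.log
# Master: glusterd and brick logs (default locations)
/var/log/glusterfs/glusterd.log
/var/log/glusterfs/bricks/*.log
# Slave: geo-replication slave and mount logs for the session
/var/log/glusterfs/geo-replication-slaves/mfs_opco1_int_17_fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se_mfs_opco1_int_17/gsyncd.log
/var/log/glusterfs/geo-replication-slaves/mfs_opco1_int_17_fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se_mfs_opco1_int_17/mnt-*.log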
(In reply to Sunny Kumar from comment #1)
> Hi,
>
> Please share- 1. gluster log 2. geo-replication log from master and 3. mount
> log from slave.
>
> - Sunny

How do I attach logs, or would you like me to email them to you?
Created attachment 1516584 [details]
logs-master

Logs from the master: geo-replication master logs, brick logs, glusterd logs, mount logs, and changelogs from the brick.
Created attachment 1516585 [details]
logs-slave

Geo-replication slave logs, brick logs, mount logs, and glusterd logs.
Created attachment 1516586 [details]
additional info for test volume and geo session

New test volume and replication session created to produce the requested logs. Same servers as in the description, same brick type (LVM on XFS).
Never mind, I found how to attach logs.

I created a new volume and replication session to produce the logs you requested, since the original sessions are currently stopped for other tests. The result is the same and the setup is very similar, except that debug logging is turned on and the bricks are smaller.
(In reply to Sunny Kumar from comment #1)
> Hi,
>
> Please share- 1. gluster log 2. geo-replication log from master and 3. mount
> log from slave.
>
> - Sunny

Is there any news on this bug? It has been over a month since I filed it.
Hi,

This looks like https://review.gluster.org/#/c/glusterfs/+/20093/.

I am still working on a reproducer so I can analyse it further. If anything is missing from the steps to reproduce, please add it.

- Sunny
(In reply to Sunny Kumar from comment #8)
> Hi,
>
> Looks like https://review.gluster.org/#/c/glusterfs/+/20093/.
>
> But I am trying for reproducer to analyse more.
>
> If something is missing in step to reproduce please add.
>
> - Sunny

Sounds like it could be. Which version of gluster is that fix released in?
It is quite easy to reproduce, so I don't think I missed anything: default config, replica 2 volumes on both sites, create the geo-replication session, start it, then create and rename a file, as sketched below.
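A minimal sketch of that sequence, using placeholder names (mastervol, slavevol, slavehost and the mount points are illustrative, not the actual names from this setup) and assuming the geo-replication SSH keys and the shared-storage meta-volume are already in place:

# Create and start the geo-replication session on the master cluster
gluster volume geo-replication mastervol slavehost::slavevol create push-pem
gluster volume geo-replication mastervol slavehost::slavevol start

# On a FUSE mount of the master volume, create and rename a file
cd /mnt/mastervol
echo "test" > file_a.txt
mv file_a.txt file_b.txt

# On a mount of the slave volume, only file_a.txt shows up; the rename is never applied
ls -l /mnt/slavevol/file_a.txt /mnt/slavevol/file_b.txt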
You can try this simple test to reproduce the problem.

On the master:
[svc_sp_st_script@hplispnfs30079 conf]$ touch test.txt
[svc_sp_st_script@hplispnfs30079 conf]$ vi test.txt
a
b
c
d
[svc_sp_st_script@hplispnfs30079 conf]$ ll test.txt
-rw-r----- 1 svc_sp_st_script domain users 8 Apr  2 14:59 test.txt

On the slave:
[root@hplispnfs40079 conf]# ll test.txt
-rw-r----- 1 svc_sp_st_script domain users 8 Apr  2 14:59 test.txt
[root@hplispnfs40079 conf]# cat test.txt
a
b
c
d

On the master:
[svc_sp_st_script@hplispnfs30079 conf]$ mv test.txt test-moved.txt
[svc_sp_st_script@hplispnfs30079 conf]$ ll test-moved.txt
-rw-r----- 1 svc_sp_st_script domain users 8 Apr  2 14:59 test-moved.txt

On the slave, the old file is not deleted and test-moved.txt does not exist; the rename is not replicated:
[root@hplispnfs40079 conf]# ll testfile
-rw-r----- 1 svc_sp_st_script domain users 6 Apr  2 14:52 testfile
I also tried setting use_tarssh:true, but this did not change the behavior.

[root@hplispnfs30079 conf]# gluster volume geo-replication common hplispnfs40079::common config
access_mount:false
allow_network:
change_detector:changelog
change_interval:5
changelog_archive_format:%Y%m
changelog_batch_size:727040
changelog_log_file:/var/log/glusterfs/geo-replication/common_hplispnfs40079_common/changes-${local_id}.log
changelog_log_level:INFO
checkpoint:0
chnagelog_archive_format:%Y%m
cli_log_file:/var/log/glusterfs/geo-replication/cli.log
cli_log_level:INFO
connection_timeout:60
georep_session_working_dir:/var/lib/glusterd/geo-replication/common_hplispnfs40079_common/
gluster_cli_options:
gluster_command:gluster
gluster_command_dir:/usr/sbin
gluster_log_file:/var/log/glusterfs/geo-replication/common_hplispnfs40079_common/mnt-${local_id}.log
gluster_log_level:INFO
gluster_logdir:/var/log/glusterfs
gluster_params:aux-gfid-mount acl
gluster_rundir:/var/run/gluster
glusterd_workdir:/var/lib/glusterd
gsyncd_miscdir:/var/lib/misc/gluster/gsyncd
ignore_deletes:false
isolated_slaves:
log_file:/var/log/glusterfs/geo-replication/common_hplispnfs40079_common/gsyncd.log
log_level:INFO
log_rsync_performance:false
master_disperse_count:1
master_replica_count:1
max_rsync_retries:10
meta_volume_mnt:/var/run/gluster/shared_storage
pid_file:/var/run/gluster/gsyncd-common-hplispnfs40079-common.pid
remote_gsyncd:/usr/libexec/glusterfs/gsyncd
replica_failover_interval:1
rsync_command:rsync
rsync_opt_existing:true
rsync_opt_ignore_missing_args:true
rsync_options:
rsync_ssh_options:
slave_access_mount:false
slave_gluster_command_dir:/usr/sbin
slave_gluster_log_file:/var/log/glusterfs/geo-replication-slaves/common_hplispnfs40079_common/mnt-${master_node}-${master_brick_id}.log
slave_gluster_log_file_mbr:/var/log/glusterfs/geo-replication-slaves/common_hplispnfs40079_common/mnt-mbr-${master_node}-${master_brick_id}.log
slave_gluster_log_level:INFO
slave_gluster_params:aux-gfid-mount acl
slave_log_file:/var/log/glusterfs/geo-replication-slaves/common_hplispnfs40079_common/gsyncd.log
slave_log_level:INFO
slave_timeout:120
special_sync_mode:
ssh_command:ssh
ssh_options:-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
ssh_options_tar:-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/tar_ssh.pem
ssh_port:22
state_file:/var/lib/glusterd/geo-replication/common_hplispnfs40079_common/monitor.status
state_socket_unencoded:
stime_xattr_prefix:trusted.glusterfs.bb691a2e-801c-435b-a905-11ad249d43a7.ab3b208f-8cd1-4a2d-bf56-4a98434605c5
sync_acls:true
sync_jobs:3
sync_xattrs:true
tar_command:tar
use_meta_volume:true
use_rsync_xattrs:false
use_tarssh:true
working_dir:/var/lib/misc/gluster/gsyncd/common_hplispnfs40079_common/
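For anyone else trying the same experiment, a sketch of how such a geo-replication option can be toggled through the same config interface, using this comment's volume and slave names (common, hplispnfs40079); stopping the session around the change is an assumption on my part, made to be on the safe side rather than a documented requirement:

# Stop the session, change the option, start it again
gluster volume geo-replication common hplispnfs40079::common stop
gluster volume geo-replication common hplispnfs40079::common config use_tarssh true    # or false to revert
gluster volume geo-replication common hplispnfs40079::common start

# Confirm the value actually changed
gluster volume geo-replication common hplispnfs40079::common config | grep use_tarssh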
This issue is fixed upstream in the 5.x and 6.x series.

Patch: https://review.gluster.org/#/c/glusterfs/+/20093/
Workaround:
The issue affects only single-distribute volumes, i.e. 1*2 and 1*3 volumes. It doesn't affect n*2 or n*3 volumes where n > 1. So one way to work around it is to convert the single-distribute volume into a two-distribute volume (see the sketch below), or to upgrade to a later version if waiting for the next 4.1.x release is not an option.
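A sketch of what such a conversion could look like for the 1 x 2 replica volume from the description: adding a second replica pair turns it into a 2 x 2 distribute-replicate volume. The new brick paths here are only examples, and a rebalance is normally run afterwards to spread existing files:

# Add a second replica pair (example brick paths, not real ones from this setup)
gluster volume add-brick mfs_opco1_int_17 replica 2 \
    fs-5-b.vfeuk.xbesite1.sero.gic.ericsson.se:/exportg/mfs_opco1_int_17_b2 \
    fs-6-b.vfeuk.xbesite1.sero.gic.ericsson.se:/exportg/mfs_opco1_int_17_b2

# Redistribute existing files across both replica pairs
gluster volume rebalance mfs_opco1_int_17 start
gluster volume rebalance mfs_opco1_int_17 status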
REVIEW: https://review.gluster.org/22476 (cluster/dht: Fix rename journal in changelog) posted (#1) for review on release-4.1 by Kotresh HR
(In reply to Kotresh HR from comment #13)
> This issue is fixed in upstream and 5.x and 6.x series
>
> Patch: https://review.gluster.org/#/c/glusterfs/+/20093/

We are having the issue in replicate mode (using replica 2).

Adrian Sender
(In reply to Kotresh HR from comment #14)
> Workaround:
> The issue affects only single distribute volumes i.e 1*2 and 1*3 volumes.
> It doesn't affect n*2 or n*3 volumes where n>1. So one way to fix is to
> convert
> single distribute to two distribute volume or upgrade to later versions
> if it can't be waited until next 4.1.x release.

Great, thanks. Is it planned to be backported to 4.x? My OS (SLES 12.2) does not currently support 5.x gluster; I would have to upgrade the OS to SLES 12.3.
I have backported the patch https://review.gluster.org/#/c/glusterfs/+/22476/. It's not merged yet.
REVIEW: https://review.gluster.org/22476 (cluster/dht: Fix rename journal in changelog) merged (#1) on release-4.1 by Kotresh HR
I was curious whether anyone ever got this resolved? I was running 4.1.7 with a geo-replica set up, and it had the above issue with renames of both files and directories. I have tried upgrading to 4.1.8 and have now moved to 5.6; the best I get now is that directory renames are replicated. File renames still don't get replicated to the geo-replica volume.
Some custom-compiled RPMs are versioned 4.1.8-0.1.git... and contain the fixes. What a mess this project has become; broken in all versions. Officially the fix will be in 4.1.9, but you can use the RPMs below. We are running them in production again now and they appear to be OK. Make sure you update the clients as well.

[1] RPMs for 4.1 including the fix, for el7:
https://build.gluster.org/job/rpm-el7/3599/artifact/

-Adrian Sender
Thank you Adrian, I appreciate the feedback. Unfortunately that URL returns a 404 error, so I cannot install from it. I may just wait for the 4.1.9 release to go GA; based on the release cycle it might well be out next week. Unless you happen to have a copy of the RPMs that you can share?

Thanks, Ben.
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-4.1.9, please open a new bug report.

glusterfs-4.1.9 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/gluster-users/2019-June/036679.html
[2] https://www.gluster.org/pipermail/gluster-users/
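After upgrading, the original reproduction can be re-run to confirm the fix. A minimal sketch, assuming a FUSE mount of the master volume at /mnt/mastervol and of the slave volume at /mnt/slavevol (placeholder paths, not taken from this report):

# Check the installed version on all master and slave nodes and on the clients
gluster --version

# On the master mount, repeat the rename test
cd /mnt/mastervol
touch rename_check.txt
mv rename_check.txt rename_check_renamed.txt

# After the next changelog sync, only the renamed file should exist on the slave mount
ls -l /mnt/slavevol/rename_check*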
Thanks, this seems to be fixed in 6.6.