Description of problem:
=======================
When you try to create a geo-rep session whose master volume is of type Tier, it fails with the error "Index initialization failed".

log snippet:
============
[2015-04-27 11:02:04.162311] E [glusterd-volgen.c:2709:volgen_graph_build_clients] 0-: volume inconsistency: total number of bricks (5) is not divisible with number of bricks per cluster (2) in a multi-cluster setup
[2015-04-27 11:02:04.162344] E [glusterd-volgen.c:4972:glusterd_create_volfiles] 0-management: Could not generate trusted client volfiles
[2015-04-27 11:02:04.162355] E [glusterd-geo-rep.c:3899:glusterd_marker_changelog_create_volfile] 0-: Unable to create volfile for setting of marker while 'geo-replication start'
[2015-04-27 11:02:04.162364] W [glusterd-geo-rep.c:5433:glusterd_op_gsync_create] 0-management: marker/changelog start failed
[2015-04-27 11:02:04.162376] E [glusterd-syncop.c:1330:gd_commit_op_phase] 0-management: Commit of operation 'Volume Geo-replication Create' failed on localhost : Index initialization failed

This error occurs when "gluster volume set" returns non-zero for "changelog.changelog" and "geo-replication.indexing". On a tiered volume, any volume set command fails, which is likely why geo-rep session creation fails.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-server-3.7dev-0.1015.gita3578de.el6.x86_64

How reproducible:
=================
always

Steps to Reproduce:
===================
1. Create the master cluster
2. Create the slave cluster
3. Create and start the master volume (2x2=4)
4. Create and start the slave volume (2x2=4)
5. attach-tier to the master volume (2x2=5)
6. attach-tier to the slave volume (2x2=5)
7. Create a geo-rep session between the master and slave volumes

Actual results:
===============
Session creation fails with the error "Index initialization failed".

Expected results:
=================
It should successfully create the geo-rep session.
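The reproduction steps above can be sketched as gluster CLI commands. This is only an illustrative outline: hostnames and brick paths are made up, and the geo-rep create step assumes passwordless SSH between master and slave has already been set up.

```shell
# Master cluster: create and start a 2x2 distributed-replicate volume (4 bricks)
gluster volume create mastervol replica 2 \
    master1:/bricks/m1 master2:/bricks/m2 \
    master1:/bricks/m3 master2:/bricks/m4
gluster volume start mastervol

# Slave cluster: same layout
gluster volume create slavevol replica 2 \
    slave1:/bricks/s1 slave2:/bricks/s2 \
    slave1:/bricks/s3 slave2:/bricks/s4
gluster volume start slavevol

# Attach a hot tier to each volume, converting its type to Tier
# (one extra brick each, giving the 5-brick count seen in the log)
gluster volume attach-tier mastervol master1:/bricks/hot1
gluster volume attach-tier slavevol slave1:/bricks/hot1

# Create the geo-rep session; with a tiered master this is where
# "Index initialization failed" is reported
gluster volume geo-replication mastervol slave1::slavevol create push-pem
```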
We tested this on:
1) master branch: works
2) 3.7 branch: works

[root@rhs-srv-09 test]# gluster volume info

Volume Name: test
Type: Tier
Volume ID: 28e37d68-8699-40dd-8e55-c04cf1ceb5a2
Status: Started
Number of Bricks: 4
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: rhs-srv-09:/home/ssd/s2
Brick2: rhs-srv-09:/home/ssd/s1
Cold Bricks:
Cold Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick3: rhs-srv-09:/home/disk/d1
Brick4: rhs-srv-09:/home/disk/d2
Options Reconfigured:
performance.readdir-ahead: on
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
[root@rhs-srv-09 test]#

[root@rhs-srv-09 test]# gluster volume geo-rep test rhs-srv-08::test status

MASTER NODE                                MASTER VOL    MASTER BRICK     SLAVE USER    SLAVE               SLAVE NODE    STATUS    CRAWL STATUS       LAST_SYNCED
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
rhs-srv-09.gdev.lab.eng.rdu2.redhat.com    test          /home/ssd/s2     root          rhs-srv-08::test    rhs-srv-08    Active    Changelog Crawl    2015-05-19 02:53:19
rhs-srv-09.gdev.lab.eng.rdu2.redhat.com    test          /home/ssd/s1     root          rhs-srv-08::test    rhs-srv-08    Active    Changelog Crawl    2015-05-19 02:53:19
rhs-srv-09.gdev.lab.eng.rdu2.redhat.com    test          /home/disk/d1    root          rhs-srv-08::test    rhs-srv-08    Active    Changelog Crawl    N/A
rhs-srv-09.gdev.lab.eng.rdu2.redhat.com    test          /home/disk/d2    root          rhs-srv-08::test    rhs-srv-08    Active    Changelog Crawl    2015-05-19 02:52:18