Before using GlusterFS in production I ran some tests.

Scenario: 2 replicated nodes

1. Create a 50 GB sparse file in a folder of a GlusterFS mountpoint.
2. Attach it to a loop device (losetup /dev/loop0 ...) -> both nodes are in sync; "ls" shows 50 GB, "du -sh" shows the real (small) allocated size.
3. Create a PV on /dev/loop0 (pvcreate /dev/loop0).
4. Create a VG ...
5. Copy the LVs of a local VG into this VG (one dd per LV) -> both sparse files grow in sync.

Problem: if one node disappears for a while during the copy and rejoins before it finishes (test: kill glusterfsd on one node and restart it), the sparse files end up with different sizes and do not get back in sync.
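The observation in step 2 (ls shows 50 GB while du -sh shows almost nothing) follows from how sparse files work, independent of Gluster. A minimal local sketch of that behavior; the /tmp path and file name are placeholders, not part of the original setup:

```shell
# Create a 50 GiB sparse file: the apparent size is 50 GiB,
# but no data blocks are allocated until something is written.
truncate -s 50G /tmp/backing.img

# Apparent size, as reported by ls -l / stat:
stat -c '%s bytes apparent' /tmp/backing.img

# Allocated size, as reported by du (near zero for now):
du -h /tmp/backing.img

# Writing into the file allocates blocks only where data lands,
# which is why the backing file grows as dd fills the LVs on top.
dd if=/dev/zero of=/tmp/backing.img bs=1M count=1 seek=1024 conv=notrunc 2>/dev/null
du -h /tmp/backing.img

rm /tmp/backing.img
```

The losetup/pvcreate/vgcreate steps then stack LVM on that file, so every dd into an LV punches real data into the backing file at scattered offsets.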
Hi Christian,

I ran the tests that you specified. On the backend, ls and md5sum match after self-heal completes. The du -sh sizes differ. This is valid behavior of Gluster's handling of sparse files.

With regards,
Shishir
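The distinction in this answer (matching ls/md5sum but differing du -sh) can be reproduced locally: two files can hold identical bytes while allocating different numbers of blocks, so du alone does not indicate divergent replicas. A minimal sketch, with hypothetical /tmp paths standing in for the two bricks:

```shell
# Make a sparse file with a little real data in the middle.
truncate -s 100M /tmp/replica_a.img
printf 'payload' | dd of=/tmp/replica_a.img bs=1M seek=50 conv=notrunc 2>/dev/null

# Copy it without preserving holes: same bytes, fully allocated.
cp --sparse=never /tmp/replica_a.img /tmp/replica_b.img

# Content is identical ...
md5sum /tmp/replica_a.img /tmp/replica_b.img

# ... the apparent size is identical ...
ls -l /tmp/replica_a.img /tmp/replica_b.img

# ... but the allocated size differs, just like du -sh on the bricks.
du -h /tmp/replica_a.img /tmp/replica_b.img

rm /tmp/replica_a.img /tmp/replica_b.img
```

So the check that matters after self-heal is content (md5sum) and apparent size (ls -l), not the block count du reports.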