Description of problem:
When a Gluster storage domain that is the master domain is placed into maintenance mode, the sequence fails with "TarCopyFailed". As a result, another storage domain cannot be assigned as master. This occurs while the 'master' subtree of the master SD is being "copied" to the new master SD. The error is raised against the source side of the "copy": some files associated with active tasks for OVF Store updates report "file changed as we read it".

Version-Release number of selected component (if applicable):
RHEV-M 3.6.5
RHEV-H 7.2 20160413.0.el7
vdsm-4.17.26-0
glusterfs-*-3.7.1-16.el7

How reproducible:
Not (yet).

Steps to Reproduce:
1.
2.
3.

Actual results:
The following is reported in the vdsm logs on the SPM:

jsonrpc.Executor/0::ERROR::2016-07-21 05:35:39,989::sp::864::Storage.StoragePool::(masterMigrate) migration to new master failed
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sp.py", line 853, in masterMigrate
    exclude=('./lost+found',))
  File "/usr/share/vdsm/storage/fileUtils.py", line 68, in tarCopy
    raise TarCopyFailed(tsrc.returncode, tdst.returncode, out, err)
TarCopyFailed: (1, 0, '', '')

Expected results:
The "copy" succeeds and another storage domain is assigned as master.

Additional info:
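For context, here is a minimal sketch of the tar-pipe copy performed during master migration, modeled on the shape of vdsm's fileUtils.tarCopy (simplified, not the actual implementation). GNU tar exits with status 1 and prints "file changed as we read it" when a file's metadata changes between being stat'ed and being fully read, which matches the observed failure signature where only the source tar failed:

import subprocess


class TarCopyFailed(Exception):
    pass


def tar_copy(src, dst, exclude=()):
    # Sketch of a tar-pipe directory copy, not vdsm's real code.
    exclude_args = ["--exclude=%s" % path for path in exclude]
    # Source tar archives the tree to stdout. GNU tar exits 1 with
    # "file changed as we read it" if a file changes mid-read, as the
    # active OVF Store update tasks cause here.
    tsrc = subprocess.Popen(
        ["tar", "cf", "-"] + exclude_args + ["-C", src, "."],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    # Destination tar extracts the stream into the new master domain.
    tdst = subprocess.Popen(
        ["tar", "xf", "-", "-C", dst],
        stdin=tsrc.stdout, stderr=subprocess.PIPE)
    tsrc.stdout.close()  # let the source tar see SIGPIPE if dst dies
    dst_err = tdst.communicate()[1]
    src_err = tsrc.stderr.read()
    tsrc.wait()
    if tsrc.returncode != 0 or tdst.returncode != 0:
        # A failing source tar with a clean destination tar yields the
        # observed TarCopyFailed: (1, 0, '', '') signature.
        raise TarCopyFailed(tsrc.returncode, tdst.returncode,
                            src_err, dst_err)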
Hi,

To fix this bug completely in Gluster, we need to develop a feature where we store ctimes in an extended attribute of the file. Until then, we have provided options in 3.1.3 which slow down I/O but give consistency. Do let me know if you are okay with the workaround until the feature is complete, or whether I should go ahead and mark this bug as a duplicate of the bug I pointed out earlier.

Pranith
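To illustrate the idea behind the proposed feature (a hand-rolled sketch, not Gluster's implementation; the xattr key below is hypothetical): recording a file's ctime in an extended attribute lets every replica serve one consistent value instead of each brick's local kernel ctime, whose divergence is what makes tar conclude the file "changed as we read it".

import os

# Hypothetical key; the real feature would use Gluster's own
# internal xattr namespace.
CTIME_XATTR = b"user.glusterfs.ctime"


def save_ctime(path):
    # Store the file's current ctime (in nanoseconds) in an extended
    # attribute so every replica can report the same value.
    st = os.stat(path)
    os.setxattr(path, CTIME_XATTR, str(st.st_ctime_ns).encode())


def load_ctime(path):
    # Return the stored ctime in nanoseconds, or None if unset.
    try:
        return int(os.getxattr(path, CTIME_XATTR))
    except OSError:
        return None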
Closing as per Pranith's suggestion. This should be followed up on Gluster's side.

*** This bug has been marked as a duplicate of bug 1298724 ***