Bug 1029337 - Deleted files reappearing
Status: CLOSED EOL
Product: GlusterFS
Classification: Community
Component: replicate
Version: 3.4.1
Hardware: x86_64 Linux
Priority: unspecified
Severity: medium
Assigned To: bugs@gluster.org
Whiteboard: Triaged
Depends On:
Blocks:
Reported: 2013-11-12 03:08 EST by Øystein Viggen
Modified: 2015-10-07 09:13 EDT (History)
4 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-10-07 09:13:58 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Øystein Viggen 2013-11-12 03:08:18 EST
Description of problem:
When I delete a large number of files on a replicated volume while one of the GlusterFS servers is rebooting, some of the deleted files reappear when that server comes back up.


Version-Release number of selected component (if applicable):
Glusterfs packages 3.4.1-ubuntu1~precise1 from the semi-official Ubuntu PPA.


How reproducible:
Every time for me.


Steps to Reproduce:
1. Set up a two-server, two-brick cluster with a replica 2 volume.
2. Mount the volume on a (GlusterFS FUSE) client and unpack a large source archive onto it.  I use the full Linux 3.12 source.
3. Run "shutdown -h now" on one of the gluster servers.  The volume continues to work as expected.
4. On the client, start "rm -Rf linux-3.12".
5. While this rm is running, boot up the node you shut down earlier.
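The steps above can be sketched as a shell session. The hostnames, brick paths, and the volume name (testvol) are placeholders I am assuming for illustration; they do not come from the report:

```shell
#!/bin/sh
# Reproduction sketch; run each marked section on the indicated host.
# Hostnames (server1, server2), brick paths, and the volume name
# (testvol) are assumed placeholders.

# -- on server1: create and start a replica 2 volume (step 1) --
gluster volume create testvol replica 2 \
    server1:/bricks/brick1 server2:/bricks/brick1
gluster volume start testvol

# -- on the client: FUSE-mount the volume and unpack a large tree (step 2) --
mount -t glusterfs server1:/testvol /mnt/gluster
cd /mnt/gluster
tar xf /tmp/linux-3.12.tar.xz

# -- on server2: power off while the volume stays up (step 3) --
# shutdown -h now

# -- on the client: start deleting the tree (step 4), then boot
# server2 again while this rm is still running (step 5) --
rm -Rf linux-3.12
```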

Actual results:
rm will give an error message about a "Directory not empty", like so:
rm: cannot remove `linux-3.12/arch/x86/include/asm': Directory not empty

Some files in this directory will show up either in the "heal queue" shown with "gluster v heal volname info" or as actual healed files with "gluster v heal volname info healed".
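The heal state mentioned above can be inspected with the heal subcommands from the report (volume name "volname" as used there; the mount path is an assumption):

```shell
# Entries still queued for self-heal on each brick ("heal queue")
gluster volume heal volname info

# Entries that have already been healed, i.e. the files that came back
gluster volume heal volname info healed

# Compare with what the client sees: the directory rm could not remove
# should now contain resurrected files (path from the rm error message)
ls -la /mnt/gluster/linux-3.12/arch/x86/include/asm
```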


Expected results:
Deleted files should stay deleted.

Additional info:
The two servers and the client are all virtual machines on VMware, and run Ubuntu 12.04.  Each brick is 16 GB, formatted with "mkfs.xfs -i size=512 /dev/sdb1".

I've also tried a 4-server cluster with one brick on each, still replica 2, with cluster.server-quorum-type=server and cluster.server-quorum-ratio=51%.  The results are similar.  As could be expected, the reappearing files seem to be files that were stored on the node that rebooted.  Only the rebooted server and its replica partner have files on their bricks; the other two have a matching empty directory.
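For reference, the quorum settings used in the four-server test correspond to the following commands (volume name assumed; note that the quorum ratio is a cluster-wide option set on "all"):

```shell
# Per-volume: stop the volume's bricks on a server that loses quorum
gluster volume set volname cluster.server-quorum-type server

# Cluster-wide: quorum requires 51% of servers in the trusted pool
gluster volume set all cluster.server-quorum-ratio 51%
```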

See also http://comments.gmane.org/gmane.comp.file-systems.gluster.user/13553 for the relevant mailinglist discussion.
Comment 1 Niels de Vos 2015-05-17 17:59:01 EDT
GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained, at the moment these are 3.6 and 3.5.

This bug has been filed against the 3.4 release and will not get fixed in a 3.4 version any more. Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. If updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information from" field below the comment box to "bugs@gluster.org".

If there is no response by the end of the month, this bug will get automatically closed.
Comment 2 Kaleb KEITHLEY 2015-10-07 09:13:58 EDT
GlusterFS 3.4.x has reached end-of-life.

If this bug still exists in a later release please reopen this and change the version or open a new bug.
