Bug 1140818 - symlink changes to a directory that reappears on removal
Summary: symlink changes to a directory that reappears on removal
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: 3.5.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1348915
 
Reported: 2014-09-11 18:33 UTC by Andrew
Modified: 2016-06-22 10:02 UTC
CC List: 3 users

Fixed In Version:
Clone Of:
: 1348915 (view as bug list)
Environment:
Last Closed: 2016-06-17 15:57:32 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Andrew 2014-09-11 18:33:24 UTC
Description of problem:

Two replicated bricks provide the volume 'www', which is mounted on the client node as /var/www via fuse.glusterfs (mount options: defaults,_netdev). rsync deploys about 9300 PHP files to the directory /var/www/backend/releases/<new-time-stamp> on the client node, and the symlink /var/www/backend/current is then repointed to this uploaded <new-time-stamp> directory like this:

cd /var/www/backend && ln -s releases/20140911151733 /var/www/backend/current_tmp && mv -Tf /var/www/backend/current_tmp /var/www/backend/current

Please note that the symlink 'current' already existed and pointed to another directory in /var/www/backend/releases/ when the command above was run.

What happens is that instead of getting a new symlink 'current' pointing to the newly uploaded source, we get a directory 'current' that cannot be deleted: 'rm -fr current' succeeds, but the directory reappears a second later. The only way to remove 'current' is to shut down all nodes, stop glusterfs[d] and remove the directory on the bricks themselves. This is critical because we have to take down the whole cluster.
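For reference, a rough sketch of the brick-side cleanup described above, assuming the bricks live under /export/www (a hypothetical path) and the Ubuntu glusterfs-server service name; to be run on each brick server while the volume is taken out of service:

  service glusterfs-server stop          # stop the Gluster management daemon (Ubuntu service name; assumption)
  pkill glusterfsd                       # make sure no brick process is still exporting the volume
  rm -rf /export/www/backend/current     # remove the stale directory directly on the brick
  # the matching gfid entry under /export/www/.glusterfs may also need cleanup (not shown)
  service glusterfs-server start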

Version-Release number of selected component (if applicable):
3.5.2-ubuntu1~trusty1, amd64

Actual results:
'current' appears as a directory and cannot be deleted

Expected results:
'current' should appear as a symlink pointing to the new directory

Additional info:
All nodes are Ubuntu 14.04 LTS amd64 VMs with at least 1 GB of free memory. The two mirrored bricks run GlusterFS v3.5.2. Four client nodes mount the volume via NFS; the deployment node (where rsync and the commands below run) mounts the volume as glusterfs.
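For clarity, the FUSE mount on the deployment node corresponds to an /etc/fstab entry roughly like the sketch below; the server hostname 'gluster1' is an assumption, while the volume name 'www', the mount point and the options come from the report:

  # deployment node: FUSE mount of the replicated volume 'www'
  gluster1:/www   /var/www   glusterfs   defaults,_netdev   0 0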

Full output of the deployment steps that cause the problem:
  * executing `deploy:copy_remote_cache'
  * executing "rsync -az --delete /var/www/backend/shared/cached-copy/ /var/www/backend/releases/20140911151733/"
  * executing "chmod -R g+w /var/www/backend/releases/20140911151733"
  * executing "sh -c 'if [ -d /var/www/backend/releases/20140911151733/var/cache ] ; then rm -rf /var/www/backend/releases/20140911151733/var/cache; fi'"
  * executing "sh -c 'mkdir -p /var/www/backend/releases/20140911151733/var/cache && chmod -R 0777 /var/www/backend/releases/20140911151733/var/cache'"
  * executing "chmod -R g+w /var/www/backend/releases/20140911151733/var/cache"
  * executing "find /var/www/backend/releases/20140911151733/web/css /var/www/backend/releases/20140911151733/web/images /var/www/backend/releases/20140911151733/web/js -exec touch -t 201409111521.41 {} ';' &> /dev/null || true"
  * executing "cd /var/www/backend && ln -s releases/20140911151733 /var/www/backend/current_tmp && mv -Tf /var/www/backend/current_tmp /var/www/backend/current"

Comment 1 Niels de Vos 2014-10-14 12:39:16 UTC
Lala, what information would you like from the Gluster Bug Triage 'group'?

Comment 2 Lalatendu Mohanty 2014-12-12 12:22:59 UTC
Nothing specific. While I was triaging the bug, I thought it would be a good candidate for group triage.

Comment 3 Cody Ashe-McNalley 2016-03-23 17:14:01 UTC
This still occurs in 3.7.6 (release 1.el7). Renaming a symlink results in two symlinks (the original and the renamed one). Deleting the parent directory of the symlink results in the recreation of the parent directory and the symlink. However, the renamed symlink does not reappear in the recreated parent directory; only the original symlink does (see the sketch after the volume info below).

Type: Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Options Reconfigured:
cluster.metadata-self-heal: on
cluster.entry-self-heal: on
cluster.data-self-heal: on
cluster.self-heal-daemon: on
cluster.self-heal-window-size: 32
performance.flush-behind: off
cluster.lookup-unhashed: off
cluster.readdir-optimize: on
performance.io-thread-count: 32
performance.write-behind-window-size: 1GB
performance.cache-size: 8GB
performance.readdir-ahead: on
nfs.disable: on
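
The rename behaviour described above can be condensed into the following sketch; /mnt/vol is a hypothetical mount point of the Distributed-Replicate volume:

  mkdir /mnt/vol/dir                  # parent directory on the client mount
  ln -s /tmp/target /mnt/vol/dir/a    # original symlink
  mv /mnt/vol/dir/a /mnt/vol/dir/b    # rename it
  ls -l /mnt/vol/dir                  # observed: both 'a' and 'b' are present
  rm -rf /mnt/vol/dir                 # delete the parent directory
  ls -ld /mnt/vol/dir /mnt/vol/dir/a  # observed: the parent and the original symlink 'a' reappear; 'b' does not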

Comment 4 Niels de Vos 2016-06-17 15:57:32 UTC
This bug is being closed because the 3.5 release is marked End-Of-Life. There will be no further updates to this version. If you are still facing this issue in a more current release, please open a new bug against a version that still receives bugfixes.

