Bug 1375555 - SMB[md-cache Private build]: ll lists the file on cifs mount even after unlink is successful from another mount [NEEDINFO]
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: md-cache
Version: 3.8
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Poornima G
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-09-13 12:02 UTC by surabhi
Modified: 2017-11-07 10:37 UTC
CC: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-11-07 10:37:57 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
ndevos: needinfo? (pgurusid)



Description surabhi 2016-09-13 12:02:07 UTC
Description of problem:

ll from a second mount point still lists a file that was removed/unlinked from the first mount point.

From cifs mount 1: create a symlink with ln -s file1 true1
From cifs mount 1: unlink true1
From cifs mount 2: ll true1 (a consolidated sketch of these steps follows below)
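
A consolidated reproduction sketch (the mount points /mnt/cifs1 and /mnt/cifs2 are hypothetical; both are assumed to be CIFS mounts of the same Samba share backed by the gluster volume):

  # cd /mnt/cifs1        <-- first CIFS mount
  # ln -s file1 true1    <-- create the symlink
  # unlink true1         <-- remove it again from the same mount

  # cd /mnt/cifs2        <-- second CIFS mount of the same share
  # ls -l true1          <-- expected: "No such file or directory"; actual: the stale symlink is still listed

The output captured from the actual runs follows below.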


[root@dhcp42-41 cifs3]# rm -rf true1


[root@dhcp42-41 cifs1]# ll true1
lrwxrwxrwx. 1 root root 5 Sep 13 16:45 true1 -> file1

[root@dhcp42-41 cifs4]# ll
total 1253384
-rw-r--r--. 1 root root 1073741824 Sep 13 16:05 file1
-rw-r-----. 1 root root  209715200 Aug 31 17:43 g.txt
-rw-------. 1 root root          0 Sep  1 16:33 lockfile
drwxr-xr-x+ 4 root root       4096 Aug 30 19:15 run19963
drwxr-xr-x+ 2 root root       4096 Aug 31 15:15 run20278
lrwxrwxrwx. 1 root root          5 Sep 13 16:45 true1 -> file1
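
As a sanity check (not part of the original report), one could verify directly on the brick backends that the unlink really reached the servers, which would confirm that the listing above comes from a stale md-cache entry rather than a failed unlink. The brick paths below are taken from the volume info in comment 2:

  # ssh 10.70.47.64 'ls -l /mnt/brick1/b1/true1 /mnt/brick2/b3/true1'
  # ssh 10.70.47.66 'ls -l /mnt/brick1/b2/true1 /mnt/brick2/b4/true1'

If the unlink succeeded on the backend, all of these paths should report "No such file or directory" while the CIFS mounts keep listing true1 from the cache.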


Version-Release number of selected component (if applicable):
glusterfs-3.8.2-0.24.gitf524648.el7.x86_64 (Private build)

How reproducible:
2/2

Steps to Reproduce:
1. As described in the Description above.

Actual results:

The file still gets listed by ll from another mount point.


Expected results:
The file should not be listed once it has been removed and the unlink has succeeded.


Additional info:

Comment 1 Niels de Vos 2016-09-27 12:11:15 UTC
Please list the patches that are included in the 'private build'. In order for cache-invalidation in md-cache to work, some volume options need to be set. The output of 'gluster volume info' should show those; can you provide that as well?

Comment 2 surabhi 2016-09-29 15:47:26 UTC
The following options were set on the volume while testing with the private build:


  # gluster volume set <volname> features.cache-invalidation on
  # gluster volume set <volname> features.cache-invalidation-timeout 600
  # gluster volume set <volname> performance.stat-prefetch on
  # gluster volume set <volname> performance.cache-samba-metadata on
  # gluster volume set <volname> performance.cache-invalidation on
  # gluster volume set <volname> performance.md-cache-timeout 600
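
As a cross-check (a sketch, not part of the original comment), the effective values of the relevant options can also be queried per volume with gluster volume get:

  # gluster volume get vol2 performance.md-cache-timeout
  # gluster volume get vol2 performance.cache-invalidation
  # gluster volume get vol2 features.cache-invalidation
  # gluster volume get vol2 features.cache-invalidation-timeout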


Volume Name: vol2
Type: Distributed-Replicate
Volume ID: deb836b9-a977-4d4d-bbd7-33f0b60634bb
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.47.64:/mnt/brick1/b1
Brick2: 10.70.47.66:/mnt/brick1/b2
Brick3: 10.70.47.64:/mnt/brick2/b3
Brick4: 10.70.47.66:/mnt/brick2/b4
Options Reconfigured:
storage.batch-fsync-delay-usec: 0
server.allow-insecure: on
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.cache-samba-metadata: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
performance.stat-prefetch: on
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet

For the patches, I will let Poornima add the details.

Comment 3 Niels de Vos 2017-11-07 10:37:57 UTC
This bug is getting closed because the 3.8 version is marked End-Of-Life. There will be no further updates to this version. Please open a new bug against a version that still receives bugfixes if you are still facing this issue in a more current release.

