Bug 1368185 - Directory on volume seems to be empty, but brick contains data
Summary: Directory on volume seems to be empty, but brick contains data
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: distribute
Version: 3.8
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Nithya Balachandran
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-08-18 15:18 UTC by Sebbo
Modified: 2017-11-07 10:41 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-11-07 10:41:28 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
Latest GlusterFS log files after IRC chat (686.80 KB, application/zip)
2016-08-18 15:18 UTC, Sebbo
Latest GlusterFS log files of host p-v00283 (318.48 KB, application/zip)
2016-08-22 09:05 UTC, Sebbo
Latest GlusterFS log files of host p-v00367 (675.59 KB, application/zip)
2016-08-22 09:05 UTC, Sebbo
Latest GlusterFS log files of host p-v00368 (1.03 MB, application/zip)
2016-08-22 09:06 UTC, Sebbo
tcpdump -i any -s 0 -w /tmp/20160826-gluster.pcap tcp (888.97 KB, application/octet-stream)
2016-08-26 09:39 UTC, Sebbo

Description Sebbo 2016-08-18 15:18:05 UTC
Created attachment 1191906 [details]
Latest GlusterFS log files after IRC chat

Description of problem:
I had a single server with three single 2 TB disks (bricks). Due to the high CPU load at night, I had to detach two of them and attach them to separate servers.

Since then, some - not all - directories on the volume look empty, although the bricks still contain the files. If I list/access these files over the volume mount point by typing in the file path manually, I can access them.

Version-Release number of selected component (if applicable):
3.7.14-ubuntu1~xenial1

How reproducible:


Steps to Reproduce:
1. Add multiple disks to the same server
2. Create a volume with the attached disks
3. Mount the volume
4. Upload data to this mounted volume
5. Remove one or more bricks from the volume
   gluster volume remove-brick gfsvbackup server01:/export/disk1/brick
   gluster volume remove-brick gfsvbackup server01:/export/disk2/brick

   setfattr -x trusted.glusterfs.volume-id /export/disk1/brick
   setfattr -x trusted.gfid /export/disk1/brick
   rm -rf /export/disk1/brick/.glusterfs/

   setfattr -x trusted.glusterfs.volume-id /export/disk2/brick
   setfattr -x trusted.gfid /export/disk2/brick
   rm -rf /export/disk2/brick/.glusterfs/

6. Attach the removed bricks (disks) to another server
7. Add the bricks back to the volume
   gluster volume add-brick gfsvbackup server02:/export/disk1/brick
   gluster volume add-brick gfsvbackup server03:/export/disk2/brick
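
A quick sanity check after step 7 (a hedged sketch; <DIR> stands for any affected directory and the brick paths are the illustrative ones from the steps above) is to compare the layout xattrs each brick holds:
   getfattr -e hex -m . -d /export/disk1/brick/<DIR>
   getfattr -e hex -m . -d /export/disk2/brick/<DIR>
   # trusted.glusterfs.dht holds the hash range each brick serves for <DIR>;
   # overlapping or missing ranges across the bricks after such a move would be
   # one explanation for entries disappearing from directory listings.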

Actual results:
- Unable to delete directories
- Unable to see some directories/files

Expected results:
- Able to delete directories
- Able to see all directories/files

Additional info:
Here are some pastebin logs, after talking to nbalacha on IRC:
http://pastebin.com/E2JazkhG
http://pastebin.com/iwnV75Hy
http://pastebin.com/xskmzZPb
http://paste.fedoraproject.org/410279/71531775/

IRC history:
(16:25:25) nbalacha: which is the directory you are trying to delete?
(16:25:46) Sebbo: I think so, yes: p-v00283.itonicsit.de:/gfsvbackup on /mnt/backup type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,allow_other,max_read=131072)

(16:26:33) Sebbo: This one (on the brick): /export/6e221bd6-5079-4038-b607-ad2f95ba800f/brick/p-v00284/backup/; Means this on the volume: /mnt/backup/p-v00284/backup/

(16:27:03) nbalacha: Can you get the xattrs for this dir as well
(16:27:27) Sebbo: root@p-v00367:~# getfattr -e hex -m . -d /export/6e221bd6-5079-4038-b607-ad2f95ba800f/brick/p-v00284/backup/
(16:27:27) Sebbo: getfattr: Removing leading '/' from absolute path names
(16:27:27) Sebbo: # file: export/6e221bd6-5079-4038-b607-ad2f95ba800f/brick/p-v00284/backup/
(16:27:27) Sebbo: trusted.gfid=0xcceec3f836b043a3a1a3fe8262e20a5c
(16:27:27) Sebbo: trusted.glusterfs.dht=0x000000010000000055555555aaaaaaa9
(16:29:15) Sebbo: root@p-v00283:~# getfattr -e hex -m . -d /export/43e26be5-31b6-4f40-b787-c3f71422f48f/brick/p-v00284/backup/
(16:29:15) Sebbo: getfattr: Removing leading '/' from absolute path names
(16:29:15) Sebbo: # file: export/43e26be5-31b6-4f40-b787-c3f71422f48f/brick/p-v00284/backup/
(16:29:15) Sebbo: trusted.gfid=0xcceec3f836b043a3a1a3fe8262e20a5c
(16:29:15) Sebbo: trusted.glusterfs.dht=0x00000001000000000000000055555554
(16:29:29) Sebbo: root@p-v00368:~# getfattr -e hex -m . -d /export/c718f376-e117-4a68-be26-096a907ae04a/brick/p-v00284/backup/
(16:29:29) Sebbo: getfattr: Removing leading '/' from absolute path names
(16:29:29) Sebbo: # file: export/c718f376-e117-4a68-be26-096a907ae04a/brick/p-v00284/backup/
(16:29:29) Sebbo: trusted.gfid=0xcceec3f836b043a3a1a3fe8262e20a5c
(16:29:29) Sebbo: trusted.glusterfs.dht=0x0000000100000000aaaaaaaaffffffff

(16:31:07) nbalacha: ok, and which brick holds the files?
(16:32:21) Sebbo: The one on p-v00367. Should be Brick1: p-v00367.itonicsit.de:/export/6e221bd6-5079-4038-b607-ad2f95ba800f/brick

(16:49:27) nbalacha: is the file visible from the mount point?
(16:50:09) Sebbo: Yeah, seems like :D

(16:50:28) nbalacha: what happens if you try to list the dir contents now
(16:50:38) nbalacha: does this file show up?
(16:50:52) Sebbo: Nope, still empty

Comment 1 Nithya Balachandran 2016-08-18 15:53:41 UTC
Hash values of the files in question.

duplicity-full.20160806T010525Z.vol1.difftar.gz
Name = duplicity-full.20160806T010525Z.vol1.difftar.gz, hash = 2251337762, (hex = 0x8630b022)

duplicity-full.20160812T010039Z.manifest
Name = duplicity-full.20160812T010039Z.manifest, hash = 2271582285, (hex = 0x8765984d)

duplicity-full.20160813T010042Z.vol1.difftar.gz
Name = duplicity-full.20160813T010042Z.vol1.difftar.gz, hash = 2224035941, (hex = 0x84901865)

duplicity-full.20160814T010043Z.manifest
Name = duplicity-full.20160814T010043Z.manifest, hash = 2861278409, (hex = 0xaa8ba4c9)

duplicity-full-signatures.20160812T010039Z.sigtar.gz
Name = duplicity-full-signatures.20160812T010039Z.sigtar.gz, hash = 1832983180, (hex = 0x6d411a8c)

duplicity-inc.20160807T000510Z.to.20160809T000044Z.vol1.difftar.gz
Name = duplicity-inc.20160807T000510Z.to.20160809T000044Z.vol1.difftar.gz, hash = 1775115633, (hex = 0x69ce1d71)

duplicity-inc.20160809T010039Z.to.20160810T000039Z.vol1.difftar.gz
Name = duplicity-inc.20160809T010039Z.to.20160810T000039Z.vol1.difftar.gz, hash = 1536677123, (hex = 0x5b97d503)

duplicity-inc.20160810T010042Z.to.20160811T000042Z.vol1.difftar.gz
Name = duplicity-inc.20160810T010042Z.to.20160811T000042Z.vol1.difftar.gz, hash = 2544928928, (hex = 0x97b088a0)

duplicity-inc.20160816T000039Z.to.20160817T000040Z.vol1.difftar.gz
Name = duplicity-inc.20160816T000039Z.to.20160817T000040Z.vol1.difftar.gz, hash = 2435459191, (hex = 0x912a2877)

duplicity-inc.20160817T000040Z.to.20160818T000041Z.manifest
Name = duplicity-inc.20160817T000040Z.to.20160818T000041Z.manifest, hash = 2848153898, (hex = 0xa9c3612a)

duplicity-new-signatures.20160807T000510Z.to.20160809T000044Z.sigtar.gz
Name = duplicity-new-signatures.20160807T000510Z.to.20160809T000044Z.sigtar.gz, hash = 2828140661, (hex = 0xa8920075)

duplicity-new-signatures.20160817T000040Z.to.20160818T000041Z.sigtar.gz
Name = duplicity-new-signatures.20160817T000040Z.to.20160818T000041Z.sigtar.gz, hash = 2436599408, (hex = 0x913b8e70)
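
A hedged note on reading these hashes: the last two 32-bit words of the trusted.glusterfs.dht xattr quoted in the description appear to be the start and end of the hash range a brick serves for that directory. Checking one of the hashes above against the backup/ layouts from the description:

   hash=0x8630b022   # duplicity-full.20160806T010525Z.vol1.difftar.gz
   [ $((hash)) -ge $((0x55555555)) ] && [ $((hash)) -le $((0xaaaaaaa9)) ] && \
       echo "in 0x55555555-0xaaaaaaa9, the range held by the p-v00367 brick"
   # That is the brick the reporter says actually holds the files, which is
   # consistent with the observation in comment 2 below.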

Comment 2 Nithya Balachandran 2016-08-19 14:41:16 UTC
All these files should have been listed in dht_readdirp as they exist in the hashed subvol.

Can you please provide the following info:
- The brick logs for this volume - they should be in /var/log/glusterfs/bricks on the 3 nodes that make up this volume.
- For any directory which still has this problem: a network capture on the client while listing that directory, and the brick logs after this operation.
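
A minimal sketch for gathering these (assuming the default log location named above; run on each of the three nodes):

   tar czf /tmp/$(hostname)-brick-logs.tar.gz /var/log/glusterfs/bricks/*.log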

Comment 3 Sebbo 2016-08-22 09:05:15 UTC
Created attachment 1192840 [details]
Latest GlusterFS log files of host p-v00283

Comment 4 Sebbo 2016-08-22 09:05:46 UTC
Created attachment 1192841 [details]
Latest GlusterFS log files of host p-v00367

Comment 5 Sebbo 2016-08-22 09:06:18 UTC
Created attachment 1192844 [details]
Latest GlusterFS log files of host p-v00368

Comment 6 Sebbo 2016-08-22 09:08:53 UTC
Sorry for the delay.

I'm not familiar with network captures. Can you tell me how I can do this? Thanks!

Comment 7 Nithya Balachandran 2016-08-24 09:31:28 UTC
Sorry for the delay in answering.

Please do the following for a directory where ls shows no entries but the rm -rf fails with "dir not empty":

1. Fuse mount the volume
2. Run the following to start capturing packets from another terminal:
tcpdump -i any -s 0 -w /tmp/gluster.pcap tcp
3. From the Fuse mount:
ls -l <dir>
rm -rf <dir>
4. Stop the packet capture (Ctrl C)

Please send me the /tmp/gluster.pcap after this is done along with:
1. the name of the dir on which you tried this 
2. listing of the files in the dir on each brick.
3. xattrs on the dir on the brick
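
A hedged aside in case the capture file grows large: glusterd normally listens on TCP 24007 and the brick processes on ports from 49152 upward, so a filter along these lines should still catch the relevant GlusterFS traffic:

   tcpdump -i any -s 0 -w /tmp/gluster.pcap 'tcp port 24007 or tcp portrange 49152-49251'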

Comment 8 Sebbo 2016-08-26 09:38:16 UTC
Here you go. :)

I've attached the tcpdump called 20160826-gluster.pcap.

1. Name of dir: WindowsImageBackup
2. List of files on each brick:

root@p-v00283:/export/43e26be5-31b6-4f40-b787-c3f71422f48f/brick/WindowsImageBackup# ls -lh
total 0
drwxr-xr-x 10 transfer transfer 223 Aug 18 03:01 dns1
root@p-v00283:/export/43e26be5-31b6-4f40-b787-c3f71422f48f/brick/WindowsImageBackup#

root@p-v00367:/export/6e221bd6-5079-4038-b607-ad2f95ba800f/brick/WindowsImageBackup# ls -lh
total 0
drwxr-xr-x 10 michael-hagl ansible 209 Aug 18 03:01 dns1
root@p-v00367:/export/6e221bd6-5079-4038-b607-ad2f95ba800f/brick/WindowsImageBackup#

root@p-v00368:/export/c718f376-e117-4a68-be26-096a907ae04a/brick/WindowsImageBackup# ls -lh
total 0
drwxr-xr-x 10 michael-hagl ansible 209 Aug 18 03:01 dns1
root@p-v00368:/export/c718f376-e117-4a68-be26-096a907ae04a/brick/WindowsImageBackup#

3. xattrs on the dir on the brick

# getfattr -e hex -m . -d /export/43e26be5-31b6-4f40-b787-c3f71422f48f/brick/WindowsImageBackup/
getfattr: Removing leading '/' from absolute path names
# file: export/43e26be5-31b6-4f40-b787-c3f71422f48f/brick/WindowsImageBackup/
trusted.gfid=0x9bbc205975d94373a343b9473140f70a
trusted.glusterfs.dht=0x000000010000000055555555aaaaaaa9

# getfattr -e hex -m . -d /export/6e221bd6-5079-4038-b607-ad2f95ba800f/brick/WindowsImageBackup/
getfattr: Removing leading '/' from absolute path names
# file: export/6e221bd6-5079-4038-b607-ad2f95ba800f/brick/WindowsImageBackup/
trusted.gfid=0x9bbc205975d94373a343b9473140f70a
trusted.glusterfs.dht=0x0000000100000000aaaaaaaaffffffff

# getfattr -e hex -m . -d /export/c718f376-e117-4a68-be26-096a907ae04a/brick/WindowsImageBackup/
getfattr: Removing leading '/' from absolute path names
# file: export/c718f376-e117-4a68-be26-096a907ae04a/brick/WindowsImageBackup/
trusted.gfid=0x9bbc205975d94373a343b9473140f70a
trusted.glusterfs.dht=0x00000001000000000000000055555554

Comment 9 Sebbo 2016-08-26 09:39:16 UTC
Created attachment 1194269 [details]
tcpdump -i any -s 0 -w /tmp/20160826-gluster.pcap tcp

Comment 10 Nithya Balachandran 2016-08-29 08:15:29 UTC
Info from the pastebin links:

Volume Name: gfsvbackup
Type: Distribute
Volume ID: 0b7b6027-0b35-497d-bddb-4d64b34828b0
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: p-v00367.itonicsit.de:/export/6e221bd6-5079-4038-b607-ad2f95ba800f/brick
Brick2: p-v00368.itonicsit.de:/export/c718f376-e117-4a68-be26-096a907ae04a/brick
Brick3: p-v00283.itonicsit.de:/export/43e26be5-31b6-4f40-b787-c3f71422f48f/brick
Options Reconfigured:
server.allow-insecure: on


All three servers are identical.
 
###
### lsb_release -a
###
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.1 LTS
Release:        16.04
Codename:       xenial
###
### uname -a
###
Linux p-v00367 4.4.0-34-generic #53-Ubuntu SMP Wed Jul 27 16:06:39 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
 
###
### cat /etc/apt/sources.list.d/gluster-ubuntu-glusterfs-3_8-vivid.list
###
deb http://ppa.launchpad.net/gluster/glusterfs-3.7/ubuntu xenial main
deb-src http://ppa.launchpad.net/gluster/glusterfs-3.7/ubuntu xenial main
 
###
### Log files from the server, where some files are on the brick mount point
###
Please download here: https://files.itonicsit.de/201608181605-Sebbo-glusterfs-logs.zip
 
###
### Volume details
###
 
Count of accessing clients: ~35-40 Linux Ubuntu servers

Comment 11 Nithya Balachandran 2016-08-29 08:25:28 UTC
Question on the remove-brick steps:
1. Did you run the following commands:
 gluster volume remove-brick gfsvbackup server01:/export/disk1/brick start 
 gluster volume remove-brick gfsvbackup server01:/export/disk1/brick commit

per brick before the add-brick?


 gluster volume remove-brick gfsvbackup server01:/export/disk1/brick
is not a valid command.

Did you delete the data on the removed bricks before adding it back again?
Did you run a rebalance after the add-brick?
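
For clarity, a sketch of the sequence the CLI expects when data has to be migrated off a brick before removal (brick path taken from the illustrative steps above):

   gluster volume remove-brick gfsvbackup server01:/export/disk1/brick start
   gluster volume remove-brick gfsvbackup server01:/export/disk1/brick status   # repeat until the migration shows completed
   gluster volume remove-brick gfsvbackup server01:/export/disk1/brick commit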

Comment 12 Nithya Balachandran 2016-08-29 08:51:29 UTC
I don't see any calls related to rm -rf in the packet capture. Was this also done during the capture?

Comment 13 Sebbo 2016-08-29 08:54:01 UTC
No, I just did it like below:

1. Remove brick from volume
   gluster volume remove-brick gfsvbackup p-v00283.itonicsit.de:/export/disk1/brick

2. Remove the GlusterFS metadata from the brick (so it can be added back again later, if required)
   setfattr -x trusted.glusterfs.volume-id /export/disk1/brick
   setfattr -x trusted.gfid /export/disk1/brick
   rm -rf /export/disk1/brick/.glusterfs/

I didn't know that there are parameters like "start" and "commit" for this task. Maybe you should make these parameters mandatory, if they are really necessary to prevent such issues.

You said that the command is not valid without the parameter "start" or "commit". I can't remember whether I used "start", but in any case I only executed one command. After that, I looked at the volume status and could see that the brick had been removed. Based on that, I detached the brick (hard disk) and attached it to the other server, where it was added back to the volume.

No, I did NOT delete the data. I've only removed the brick from the volume, attached the storage to another server and added the brick back to the volume.

Yes, I did run a rebalance after add-brick:

   gluster volume add-brick <VOLUME-NAME> <server03:/path/to/brick>
   gluster volume rebalance <VOLUME-NAME> fix-layout start
   gluster volume rebalance <VOLUME-NAME> start

Those are the commands I used for that.

Yes, I executed all the commands you mentioned - including 'rm -rf'.

Comment 14 Niels de Vos 2016-09-12 05:39:50 UTC
All 3.8.x bugs are now reported against version 3.8 (without .x). For more information, see http://www.gluster.org/pipermail/gluster-devel/2016-September/050859.html

Comment 15 Sebbo 2016-10-13 11:26:52 UTC
Any news, @Nithya?

Comment 16 Nithya Balachandran 2016-10-24 09:32:10 UTC
Sorry Sebbo. I had to look into something else, so I have not had a chance to continue on this.

I shall try to take it up sometime next week.
Do you still see these errors?

Comment 17 Sebbo 2016-10-24 15:41:49 UTC
Ah, ok. No problem. I just want to help and figure out whether it's a bug or whether I just did something wrong. :)

No, I haven't seen this error again so far. But I also haven't migrated anything again. ;)

Comment 18 Sebbo 2016-11-08 15:22:35 UTC
Does this have something to do with this issue?

root@p-v00310:~# mktemp /mnt/backup/p-v00310/database/tmp.XXXXXX
/mnt/backup/p-v00310/database/tmp.oTwPo7
root@p-v00310:~# mktemp /mnt/backup/p-v00310/database/tmp.XXXXXX
/mnt/backup/p-v00310/database/tmp.sQa14E
root@p-v00310:~# mktemp /mnt/backup/p-v00310/database/tmp.XXXXXX
mktemp: failed to create file via template ‘/mnt/backup/p-v00310/database/tmp.XXXXXX’: No such file or directory
root@p-v00310:~# mktemp /mnt/backup/p-v00310/database/tmp.XXXXXX
mktemp: failed to create file via template ‘/mnt/backup/p-v00310/database/tmp.XXXXXX’: No such file or directory
root@p-v00310:~# mktemp /mnt/backup/p-v00310/database/tmp.XXXXXX
mktemp: failed to create file via template ‘/mnt/backup/p-v00310/database/tmp.XXXXXX’: No such file or directory
root@p-v00310:~# mktemp /mnt/backup/p-v00310/database/tmp.XXXXXX
mktemp: failed to create file via template ‘/mnt/backup/p-v00310/database/tmp.XXXXXX’: No such file or directory
root@p-v00310:~# mktemp /mnt/backup/p-v00310/database/tmp.XXXXXX
/mnt/backup/p-v00310/database/tmp.cHtTtO
root@p-v00310:~# mktemp /mnt/backup/p-v00310/database/tmp.XXXXXX
mktemp: failed to create file via template ‘/mnt/backup/p-v00310/database/tmp.XXXXXX’: No such file or directory
root@p-v00310:~# mktemp /mnt/backup/p-v00310/database/tmp.XXXXXX
mktemp: failed to create file via template ‘/mnt/backup/p-v00310/database/tmp.XXXXXX’: No such file or directory
root@p-v00310:~# mktemp /mnt/backup/p-v00310/database/tmp.XXXXXX
mktemp: failed to create file via template ‘/mnt/backup/p-v00310/database/tmp.XXXXXX’: No such file or directory
root@p-v00310:~# mktemp /mnt/backup/p-v00310/database/tmp.XXXXXX
mktemp: failed to create file via template ‘/mnt/backup/p-v00310/database/tmp.XXXXXX’: No such file or directory
root@p-v00310:~#

In some cases it's possible to create a temp file, but in most cases it is not.

Below are the log entries from the volume:

[2016-11-08 15:18:08.764542] W [defaults.c:1381:default_release] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.2/xlator/cluster/distribute.so(dht_create+0x33d) [0x7fd6c159933d] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.2/xlator/cluster/distribute.so(dht_local_wipe+0xa7) [0x7fd6c1579cc7] (-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(fd_unref+0xc9) [0x7fd6c66a9009]))) 0-fuse: xlator does not implement release_cbk
[2016-11-08 15:18:09.621746] W [dht-layout.c:179:dht_layout_search] 0-gfsvbackup-dht: no subvolume for hash (value) = 4049232635
[2016-11-08 15:18:09.623040] W [dht-diskusage.c:45:dht_du_info_cbk] 0-gfsvbackup-dht: failed to get disk info from gfsvbackup-client-3
[2016-11-08 15:18:09.623076] W [dht-diskusage.c:45:dht_du_info_cbk] 0-gfsvbackup-dht: failed to get disk info from gfsvbackup-client-4
[2016-11-08 15:18:09.623140] W [dht-diskusage.c:45:dht_du_info_cbk] 0-gfsvbackup-dht: failed to get disk info from gfsvbackup-client-6
[2016-11-08 15:18:09.623169] W [dht-layout.c:179:dht_layout_search] 0-gfsvbackup-dht: no subvolume for hash (value) = 4049232635
[2016-11-08 15:18:09.623205] W [fuse-bridge.c:1911:fuse_create_cbk] 0-glusterfs-fuse: 2125: /p-v00310/database/tmp.i5HPwT => -1 (No such file or directory)
[2016-11-08 15:18:09.623355] W [defaults.c:1381:default_release] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.2/xlator/cluster/distribute.so(dht_create+0x33d) [0x7fd6c159933d] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.2/xlator/cluster/distribute.so(dht_local_wipe+0xa7) [0x7fd6c1579cc7] (-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(fd_unref+0xc9) [0x7fd6c66a9009]))) 0-fuse: xlator does not implement release_cbk
[2016-11-08 15:18:10.563819] W [dht-layout.c:179:dht_layout_search] 0-gfsvbackup-dht: no subvolume for hash (value) = 3522779309
[2016-11-08 15:18:10.564683] W [dht-layout.c:179:dht_layout_search] 0-gfsvbackup-dht: no subvolume for hash (value) = 3903786731
[2016-11-08 15:18:10.565346] W [dht-diskusage.c:45:dht_du_info_cbk] 0-gfsvbackup-dht: failed to get disk info from gfsvbackup-client-3
[2016-11-08 15:18:10.565374] W [dht-diskusage.c:45:dht_du_info_cbk] 0-gfsvbackup-dht: failed to get disk info from gfsvbackup-client-4
[2016-11-08 15:18:10.565437] W [dht-diskusage.c:45:dht_du_info_cbk] 0-gfsvbackup-dht: failed to get disk info from gfsvbackup-client-6
[2016-11-08 15:18:10.565465] W [dht-layout.c:179:dht_layout_search] 0-gfsvbackup-dht: no subvolume for hash (value) = 3903786731
[2016-11-08 15:18:10.565501] W [fuse-bridge.c:1911:fuse_create_cbk] 0-glusterfs-fuse: 2130: /p-v00310/database/tmp.MfR90s => -1 (No such file or directory)
[2016-11-08 15:18:10.565635] W [defaults.c:1381:default_release] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.2/xlator/cluster/distribute.so(dht_create+0x33d) [0x7fd6c159933d] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.2/xlator/cluster/distribute.so(dht_local_wipe+0xa7) [0x7fd6c1579cc7] (-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(fd_unref+0xc9) [0x7fd6c66a9009]))) 0-fuse: xlator does not implement release_cbk
[2016-11-08 15:18:11.533114] W [dht-layout.c:179:dht_layout_search] 0-gfsvbackup-dht: no subvolume for hash (value) = 3572146200
[2016-11-08 15:18:11.534115] W [dht-diskusage.c:45:dht_du_info_cbk] 0-gfsvbackup-dht: failed to get disk info from gfsvbackup-client-3
[2016-11-08 15:18:11.534146] W [dht-diskusage.c:45:dht_du_info_cbk] 0-gfsvbackup-dht: failed to get disk info from gfsvbackup-client-4
[2016-11-08 15:18:11.534208] W [dht-diskusage.c:45:dht_du_info_cbk] 0-gfsvbackup-dht: failed to get disk info from gfsvbackup-client-6
[2016-11-08 15:18:11.534235] W [dht-layout.c:179:dht_layout_search] 0-gfsvbackup-dht: no subvolume for hash (value) = 3572146200
[2016-11-08 15:18:11.534272] W [fuse-bridge.c:1911:fuse_create_cbk] 0-glusterfs-fuse: 2132: /p-v00310/database/tmp.rzffeG => -1 (No such file or directory)
[2016-11-08 15:18:11.534407] W [defaults.c:1381:default_release] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.2/xlator/cluster/distribute.so(dht_create+0x33d) [0x7fd6c159933d] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.2/xlator/cluster/distribute.so(dht_local_wipe+0xa7) [0x7fd6c1579cc7] (-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(fd_unref+0xc9) [0x7fd6c66a9009]))) 0-fuse: xlator does not implement release_cbk
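
A hedged reading of the "no subvolume for hash" warnings above, converting one of the hash values to hex:

   printf '0x%x\n' 4049232635    # -> 0xf15a62fb

That value falls in the 0xaaaaaaaa-0xffffffff band of the layouts quoted earlier in this report, so the warnings suggest the client's in-memory layout for this directory had no reachable subvolume covering that range at the time, which fits the DNS/unreachable-server explanation in the next comment.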

Comment 19 Sebbo 2016-11-08 15:27:07 UTC
Ah, sorry, forget this question. The resolv.conf had been changed to something wrong, which caused very slow DNS resolution and is why GlusterFS was sometimes not reachable. After setting the correct DNS servers, everything worked fine again. :)

Comment 20 Niels de Vos 2017-11-07 10:41:28 UTC
This bug is getting closed because the 3.8 version is marked End-Of-Life. There will be no further updates to this version. Please open a new bug against a version that still receives bugfixes if you are still facing this issue in a more current release.

