Bug 1011002

Summary: for directory, path is '(null)' in log while accessing Directory using gfid on aux mount (dist-rep volume)
Product: Red Hat Gluster Storage
Component: glusterfs
Version: 2.1
Reporter: Rachana Patel <racpatel>
Assignee: Bug Updates Notification Mailing List <rhs-bugs>
QA Contact: storage-qa-internal <storage-qa-internal>
CC: mzywusko, sasundar, vbellur
Status: CLOSED EOL
Severity: medium
Priority: unspecified
Hardware: x86_64
OS: Linux
Type: Bug
Doc Type: Bug Fix
Last Closed: 2015-12-03 17:22:23 UTC

Description Rachana Patel 2013-09-23 13:18:49 UTC
Description of problem:
For a directory, the path is logged as '(null)' while accessing the directory by gfid on the aux mount (dist-rep volume).

Version-Release number of selected component (if applicable):
3.4.0.32rhs-1.el6rhs.x86_64

How reproducible:
Haven't tried

Steps to Reproduce:
1. While testing distributed geo-rep, a remove-brick was run on the master; the following was found in the aux mount log.

Log snippet:
[root@old5 remove_change]# grep null /var/log/glusterfs/geo-replication/remove_change/ssh%3A%2F%2Froot%4010.70.37.195%3Agluster%3A%2F%2F127.0.0.1%3Aremove_chnage.%2Frhs%2Fbrick3%2Fc3.gluster.log 
[2013-09-11 14:02:49.714731] I [dht-layout.c:633:dht_layout_normalize] 0-remove_change-dht: found anomalies in (null). holes=1 overlaps=0 missing=0 down=0 misc=0
[2013-09-11 14:46:14.860200] I [dht-layout.c:633:dht_layout_normalize] 0-remove_change-dht: found anomalies in (null). holes=1 overlaps=0 missing=0 down=0 misc=0
[2013-09-11 16:01:06.955303] I [dht-layout.c:633:dht_layout_normalize] 0-remove_change-dht: found anomalies in (null). holes=1 overlaps=0 missing=0 down=0 misc=0
[2013-09-11 17:03:46.002971] I [dht-layout.c:633:dht_layout_normalize] 0-remove_change-dht: found anomalies in (null). holes=1 overlaps=0 missing=0 down=0 misc=0
[2013-09-11 17:24:58.632782] I [dht-layout.c:633:dht_layout_normalize] 0-remove_change-dht: found anomalies in (null). holes=1 overlaps=0 missing=0 down=0 misc=0
[2013-09-11 18:00:17.132789] I [dht-layout.c:633:dht_layout_normalize] 0-remove_change-dht: found anomalies in (null). holes=1 overlaps=0 missing=0 down=0 misc=0
[2013-09-11 18:18:28.052553] I [dht-layout.c:633:dht_layout_normalize] 0-remove_change-dht: found anomalies in (null). holes=1 overlaps=0 missing=0 down=0 misc=0
[2013-09-11 18:31:36.008665] I [dht-layout.c:633:dht_layout_normalize] 0-remove_change-dht: found anomalies in (null). holes=1 overlaps=0 missing=0 down=0 misc=0
[2013-09-11 19:03:52.816896] I [dht-layout.c:633:dht_layout_normalize] 0-remove_change-dht: found anomalies in (null). holes=1 overlaps=0 missing=0 down=0 misc=0
[2013-09-11 19:52:19.084951] I [dht-layout.c:633:dht_layout_normalize] 0-remove_change-dht: found anomalies in (null). holes=1 overlaps=0 missing=0 down=0 misc=0
[2013-09-11 20:06:26.802149] I [dht-layout.c:633:dht_layout_normalize] 0-remove_change-dht: found anomalies in (null). holes=1 overlaps=0 missing=0 down=0 misc=0
[2013-09-11 20:53:53.077079] I [dht-layout.c:633:dht_layout_normalize] 0-remove_change-dht: found anomalies in (null). holes=1 overlaps=0 missing=0 down=0 misc=0
[2013-09-11 21:50:36.473021] I [dht-layout.c:633:dht_layout_normalize] 0-remove_change-dht: found anomalies in (null). holes=1 overlaps=0 missing=0 down=0 misc=0
[2013-09-11 22:24:55.037157] I [dht-layout.c:633:dht_layout_normalize] 0-remove_change-dht: found anomalies in (null). holes=1 overlaps=0 missing=0 down=0 misc=0
[2013-09-12 00:00:40.955991] I [dht-layout.c:633:dht_layout_normalize] 0-remove_change-dht: found anomalies in (null). holes=1 overlaps=0 missing=0 down=0 misc=0
[2013-09-12 01:07:24.049102] I [dht-layout.c:633:dht_layout_normalize] 0-remove_change-dht: found anomalies in (null). holes=1 overlaps=0 missing=0 down=0 misc=0
[2013-09-12 01:26:36.669833] I [dht-layout.c:633:dht_layout_normalize] 0-remove_change-dht: found anomalies in (null). holes=1 overlaps=0 missing=0 down=0 misc=0
[2013-09-12 01:48:48.108069] I [dht-layout.c:633:dht_layout_normalize] 0-remove_change-dht: found anomalies in (null). holes=1 overlaps=0 missing=0 down=0 misc=0
[2013-09-12 01:53:50.070511] I [dht-layout.c:633:dht_layout_normalize] 0-remove_change-dht: found anomalies in (null). holes=1 overlaps=0 missing=0 down=0 misc=0


Steps performed for geo-rep testing:
1. Create and start a geo-rep session between the master and slave volumes.
[root@old5 ~]# gluster volume geo remove_change status
NODE                           MASTER           SLAVE                                HEALTH    UPTIME                
-----------------------------------------------------------------------------------------------------------------
old5.lab.eng.blr.redhat.com    remove_change    ssh://10.70.37.195::remove_chnage    Stable    4 days 07:12:33       
old6.lab.eng.blr.redhat.com    remove_change    ssh://10.70.37.195::remove_chnage    Stable    4 days 23:52:43 

2. Start creating data on the master volume from the mount point.
[root@rhs-client22 ~]# mount | grep remove_change
10.70.35.179:/remove_change on /mnt/remove_change type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
10.70.35.179:/remove_change on /mnt/remove_change_nfs type nfs (rw,addr=10.70.35.179)

3. Remove brick(s) from the master volume.

--> gluster volume remove-brick remove_change 10.70.35.179:/rhs/brick3/c3 10.70.35.235:/rhs/brick3/c3 start

4. Once remove-brick completes, perform the commit operation:
 gluster volume remove-brick remove_change 10.70.35.179:/rhs/brick3/c3 10.70.35.235:/rhs/brick3/c3 status
 gluster volume remove-brick remove_change 10.70.35.179:/rhs/brick3/c3 10.70.35.235:/rhs/brick3/c3 commit

[root@old5 ~]# gluster v info remove_change
 
Volume Name: remove_change
Type: Distributed-Replicate
Volume ID: eb500199-37d4-4cb9-96ed-ae5bc1bf2498
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.35.179:/rhs/brick3/c1
Brick2: 10.70.35.235:/rhs/brick3/c1
Brick3: 10.70.35.179:/rhs/brick3/c2
Brick4: 10.70.35.235:/rhs/brick3/c2
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on

5. The entry above was found in all aux mount logs.

Actual results:
The log message prints '(null)' instead of the directory's path.

Expected results:
The log message should show the path of that directory.
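The '(null)' most likely comes from logging a path field that is NULL: a gfid-based (nameless) lookup carries no path string, and passing that NULL straight to a "%s" format yields "(null)" on glibc. A common fix pattern is to fall back to printing the gfid when the path is unavailable. A minimal sketch of that pattern, assuming a hypothetical helper loc_describe (not the actual GlusterFS code):

```c
#include <stdio.h>
#include <string.h>

/* Sketch of the usual fix for "(null)" paths in logs: when a lookup is
 * nameless (gfid-based), the path is NULL, so fall back to a gfid string
 * instead of passing NULL to a "%s" format. loc_describe is a hypothetical
 * helper, not a GlusterFS function. */
static const char *loc_describe(const char *path, const char *gfid_str,
                                char *buf, size_t len)
{
    if (path != NULL)
        snprintf(buf, len, "%s", path);            /* named lookup: real path */
    else
        snprintf(buf, len, "<gfid:%s>", gfid_str); /* nameless lookup: gfid */
    return buf;
}
```

With this pattern, the anomaly message would read "found anomalies in <gfid:...>" rather than "found anomalies in (null)" when the directory is accessed by gfid.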

Additional info:

Comment 3 SATHEESARAN 2013-10-11 11:02:40 UTC
I was testing quota and performed the following steps:

1. Created a volume, e.g. testvol - a 2x2 distributed-replicate volume
2. Created a volume, e.g. quotavol - a single-brick volume
3. No data was fed into either volume
4. Enabled quota on the single-brick volume
5. Rebooted NODE1 twice (it happened accidentally)
6. Checked the volume status of the single-brick volume [it was normal]
7. Set the quota on the single-brick volume to 1GB and rebooted it again
8. Checked the status of the volume again
9. Enabled quota on the other volume (2x2, dist-rep)
10. Mounted it and created a directory named 'master'
11. Set the quota on this directory to 1GB
12. Ran 'dd' in that directory named 'master'
13. Executed 'gluster volume quota <vol-name> list' frequently

After doing all this, I saw a few errors in /var/log/glusterfs/quotad.log as follows:
[2013-10-11 07:09:46.305406] E [afr-common.c:3832:afr_notify] 0-testvol-replicate-1: All subvolumes are down. Going offline until atleast one of them comes back up.
[2013-10-11 07:09:46.338332] I [dht-layout.c:641:dht_layout_normalize] 0-testvol: found anomalies in (null). holes=1 overlaps=0 missing=0 down=2 misc=0
[2013-10-11 07:09:46.338489] W [socket.c:522:__socket_rwv] 0-testvol-client-2: readv on 10.70.37.153:24007 failed (No data available)
[2013-10-11 07:09:46.338553] E [rpc-clnt.c:368:saved_frames_unwind] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x164) [0x7f27e10bf0f4] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_connectio
n_cleanup+0xc3) [0x7f27e10bec33] (-->/usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe) [0x7f27e10beb4e]))) 0-testvol-client-2: forced unwinding frame type(GF-DUMP) op(DUMP(1)) called 
at 2013-10-11 07:09:46.338474 (xid=0x7579x)
[2013-10-11 07:09:46.338565] W [client-handshake.c:1863:client_dump_version_cbk] 0-testvol-client-2: received RPC status error
[2013-10-11 07:09:46.338585] I [client.c:2103:client_rpc_notify] 0-testvol-client-2: disconnected from 10.70.37.153:24007. Client process will keep trying to connect to glusterd until 
brick's port is available. 
[2013-10-11 07:09:46.338642] W [socket.c:522:__socket_rwv] 0-testvol-client-0: readv on 10.70.37.153:24007 failed (No data available)
[2013-10-11 07:09:46.338681] E [rpc-clnt.c:368:saved_frames_unwind] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x164) [0x7f27e10bf0f4] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_connectio
n_cleanup+0xc3) [0x7f27e10bec33] (-->/usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe) [0x7f27e10beb4e]))) 0-testvol-client-0: forced unwinding frame type(GF-DUMP) op(DUMP(1)) called 
at 2013-10-11 07:09:46.338634 (xid=0x7579x)
[2013-10-11 07:09:46.338690] W [client-handshake.c:1863:client_dump_version_cbk] 0-testvol-client-0: received RPC status error
[2013-10-11 07:09:46.338700] I [client.c:2103:client_rpc_notify] 0-testvol-client-0: disconnected from 10.70.37.153:24007. Client process will keep trying to connect to glusterd until 
brick's port is available. 
[2013-10-11 07:09:46.338716] E [socket.c:2158:socket_connect_finish] 0-testvol-client-1: connection to 10.70.37.184:24007 failed (No route to host)
[2013-10-11 07:09:46.360014] W [socket.c:522:__socket_rwv] 0-glusterfs: readv on 127.0.0.1:24007 failed (No data available)
[2013-10-11 07:09:46.386576] I [dht-layout.c:641:dht_layout_normalize] 0-testvol: found anomalies in (null). holes=1 overlaps=0 missing=0 down=2 misc=0
[2013-10-11 07:09:46.386708] I [dht-layout.c:641:dht_layout_normalize] 0-testvol: found anomalies in (null). holes=1 overlaps=0 missing=0 down=2 misc=0
[2013-10-11 07:09:46.386771] I [dht-layout.c:641:dht_layout_normalize] 0-testvol: found anomalies in (null). holes=1 overlaps=0 missing=0 down=2 misc=0
[2013-10-11 07:09:46.386827] I [dht-layout.c:641:dht_layout_normalize] 0-testvol: found anomalies in (null). holes=1 overlaps=0 missing=0 down=2 misc=0
[2013-10-11 07:09:46.386881] I [dht-layout.c:641:dht_layout_normalize] 0-testvol: found anomalies in (null). holes=1 overlaps=0 missing=0 down=2 misc=0
[2013-10-11 07:09:46.386935] I [dht-layout.c:641:dht_layout_normalize] 0-testvol: found anomalies in (null). holes=1 overlaps=0 missing=0 down=2 misc=0
[2013-10-11 07:09:46.386988] I [dht-layout.c:641:dht_layout_normalize] 0-testvol: found anomalies in (null). holes=1 overlaps=0 missing=0 down=2 misc=0
[2013-10-11 07:09:46.387062] I [dht-layout.c:641:dht_layout_normalize] 0-testvol: found anomalies in (null). holes=1 overlaps=0 missing=0 down=2 misc=0
[2013-10-11 07:09:46.387122] I [dht-layout.c:641:dht_layout_normalize] 0-testvol: found anomalies in (null). holes=1 overlaps=0 missing=0 down=2 misc=0
[2013-10-11 07:09:46.387177] I [dht-layout.c:641:dht_layout_normalize] 0-testvol: found anomalies in (null). holes=1 overlaps=0 missing=0 down=2 misc=0
[2013-10-11 07:09:46.387230] I [dht-layout.c:641:dht_layout_normalize] 0-testvol: found anomalies in (null). holes=1 overlaps=0 missing=0 down=2 misc=0
[2013-10-11 07:09:46.387284] I [dht-layout.c:641:dht_layout_normalize] 0-testvol: found anomalies in (null). holes=1 overlaps=0 missing=0 down=2 misc=0
[2013-10-11 07:09:46.387337] I [dht-layout.c:641:dht_layout_normalize] 0-testvol: found anomalies in (null). holes=1 overlaps=0 missing=0 down=2 misc=0
[2013-10-11 07:09:46.387422] I [dht-layout.c:641:dht_layout_normalize] 0-testvol: found anomalies in (null). holes=1 overlaps=0 missing=0 down=2 misc=0
[2013-10-11 07:09:46.387480] I [dht-layout.c:641:dht_layout_normalize] 0-testvol: found anomalies in (null). holes=1 overlaps=0 missing=0 down=2 misc=0
[2013-10-11 07:09:46.387534] I [dht-layout.c:641:dht_layout_normalize] 0-testvol: found anomalies in (null). holes=1 overlaps=0 missing=0 down=2 misc=0
[2013-10-11 07:09:46.387599] I [dht-layout.c:641:dht_layout_normalize] 0-testvol: found anomalies in (null). holes=1 overlaps=0 missing=0 down=2 misc=0
[2013-10-11 07:09:46.387653] I [dht-layout.c:641:dht_layout_normalize] 0-testvol: found anomalies in (null). holes=1 overlaps=0 missing=0 down=2 misc=0
[2013-10-11 07:09:46.387707] I [dht-layout.c:641:dht_layout_normalize] 0-testvol: found anomalies in (null). holes=1 overlaps=0 missing=0 down=2 misc=0
[2013-10-11 07:09:46.387761] I [dht-layout.c:641:dht_layout_normalize] 0-testvol: found anomalies in (null). holes=1 overlaps=0 missing=0 down=2 misc=0
[2013-10-11 07:09:46.387815] I [dht-layout.c:641:dht_layout_normalize] 0-testvol: found anomalies in (null). holes=1 overlaps=0 missing=0 down=2 misc=0
[2013-10-11 07:09:46.387869] I [dht-layout.c:641:dht_layout_normalize] 0-testvol: found anomalies in (null). holes=1 overlaps=0 missing=0 down=2 misc=0
[2013-10-11 07:09:46.387923] I [dht-layout.c:641:dht_layout_normalize] 0-testvol: found anomalies in (null). holes=1 overlaps=0 missing=0 down=2 misc=0
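The quota setup described in the steps above maps to the standard gluster CLI commands. A sketch, with the volume and directory names taken from this comment for illustration:

```shell
# Enable quota on the single-brick volume (step 4)
gluster volume quota quotavol enable

# Enable quota on the 2x2 dist-rep volume and limit the 'master'
# directory to 1GB (steps 9 and 11)
gluster volume quota testvol enable
gluster volume quota testvol limit-usage /master 1GB

# List configured limits and usage (step 13)
gluster volume quota testvol list
```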

Comment 5 Vivek Agarwal 2015-12-03 17:22:23 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.