Description of problem:

During concurrent NFS writes, etc-glusterfs-glusterd.vol.log shows the following entries:

[2014-07-01 20:40:06.564499] I [glusterd-handler.c:1169:__glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req
[2014-07-01 20:40:06.972202] I [glusterd-handler.c:1114:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req
[2014-07-01 20:41:01.757756] E [glusterd-utils.c:153:glusterd_lock] 0-management: Unable to get lock for uuid: 7c6ac836-9ee1-4d90-9093-364b31db9a3a, lock held by: 4a21155b-d64f-4f88-a05f-11fc8346f83f
[2014-07-01 20:41:01.757799] E [glusterd-op-sm.c:5730:glusterd_op_sm] 0-management: handler returned: -1
[2014-07-01 20:41:01.760343] E [glusterd-utils.c:186:glusterd_unlock] 0-management: Cluster lock not held!
[2014-07-01 20:41:01.760386] E [glusterd-op-sm.c:5730:glusterd_op_sm] 0-management: handler returned: -1
[2014-07-01 20:41:33.801686] E [glusterd-utils.c:153:glusterd_lock] 0-management: Unable to get lock for uuid: 7c6ac836-9ee1-4d90-9093-364b31db9a3a, lock held by: 4a21155b-d64f-4f88-a05f-11fc8346f83f
[2014-07-01 20:41:33.801718] E [glusterd-op-sm.c:5730:glusterd_op_sm] 0-management: handler returned: -1
[2014-07-01 20:41:33.803919] E [glusterd-utils.c:186:glusterd_unlock] 0-management: Cluster lock not held!
[2014-07-01 20:41:33.803958] E [glusterd-op-sm.c:5730:glusterd_op_sm] 0-management: handler returned: -1
[2014-07-01 20:45:07.973494] I [glusterd-handler.c:1169:__glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req
[2014-07-01 20:45:07.974679] I [glusterd-handler.c:1169:__glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req

md5sum of the written files looks correct. "gluster volume heal <volname> info" reports no heal-failed or split-brain entries, and all gluster peers are connected. Not sure whether this is cosmetic or an actual locking issue.

Version-Release number of selected component (if applicable):
3.5.1

How reproducible:

Steps to Reproduce:
1. Create a volume
2. Enable quota
3. Mount the volume on an NFS client
4. Start 100 rsync processes copying from another filesystem

Actual results:
Log entries as shown above

Expected results:
No locking errors

Additional info:
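The steps above can be sketched as a shell session. The volume name, brick paths, server hostnames, and source directories below are placeholders (not taken from this report); GlusterFS 3.5's built-in NFS server speaks NFSv3, hence the vers=3 mount option:

```shell
#!/bin/sh
# Hypothetical names; adjust for your environment. Requires a running
# gluster cluster and an NFS client.

# 1. Create and start the volume (on a gluster node):
gluster volume create testvol server1:/bricks/b1 server2:/bricks/b1
gluster volume start testvol

# 2. Enable quota on the volume:
gluster volume quota testvol enable

# 3. Mount the volume over NFS on the client:
mount -t nfs -o vers=3 server1:/testvol /mnt/testvol

# 4. Start 100 concurrent rsync processes from another filesystem:
for i in $(seq 1 100); do
    rsync -a "/data/src/$i/" "/mnt/testvol/$i/" &
done
wait
```

While the copies run, watch /var/log/glusterfs/etc-glusterfs-glusterd.vol.log on the servers for the lock messages.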
[2014-07-01 20:41:01.757756] E [glusterd-utils.c:153:glusterd_lock] 0-management: Unable to get lock for uuid: 7c6ac836-9ee1-4d90-9093-364b31db9a3a, lock held by: 4a21155b-d64f-4f88-a05f-11fc8346f83f

This log indicates that another peer (uuid: 4a21155b-d64f-4f88-a05f-11fc8346f83f) had already taken the cluster lock for one of the ongoing transactions, and the framework does not allow multiple transactions to run concurrently. While a transaction is in progress, this log is expected whenever another transaction is attempted.

Closing this bug; please re-open if the explanation is not satisfactory.
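To identify which peer holds the contended lock, the holder UUID can be extracted from these messages and matched against the Uuid fields printed by "gluster peer status" (or UUID= in /var/lib/glusterd/glusterd.info on each node). A minimal sketch, with one sample log line from this report embedded for illustration; in practice, read the lines from /var/log/glusterfs/etc-glusterfs-glusterd.vol.log:

```shell
# Sample glusterd log line (from this report):
log_line='[2014-07-01 20:41:01.757756] E [glusterd-utils.c:153:glusterd_lock] 0-management: Unable to get lock for uuid: 7c6ac836-9ee1-4d90-9093-364b31db9a3a, lock held by: 4a21155b-d64f-4f88-a05f-11fc8346f83f'

# Pull out the UUID of the peer holding the cluster lock:
holder=$(printf '%s\n' "$log_line" | sed -n 's/.*lock held by: \([0-9a-f-]*\).*/\1/p')
echo "$holder"

# Compare "$holder" against the Uuid values shown by: gluster peer status
```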
I want to add that glusterd's memory usage reached 20 GB when this problem happened. Is that expected?
This bug has been CLOSED, and there has been no response to the requested NEEDINFO in more than 4 weeks. The NEEDINFO flag is now being cleared to keep our Bugzilla housekeeping in order.